Optimisation process
====================
Let us now tackle the interesting part: the search for optimal solutions in our search space.
Find local and global optima
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Overall, in an optimisation process, we seek the best solution, or set of best solutions, that minimises or maximises our objective function (the fitness score obtained), in order to solve our problem.
.. image:: ../_static/documentation/search_space.png
   :width: 800 px
   :align: center
Sometimes, the search space is very simple: a local search is enough to reach the global optimum, as shown in figure (a) above.
In other cases, the search space is more complex. It may be necessary to explore more, rather than exploit, in order to escape a convex zone; otherwise we find not the global optimum but only a local optimum.
This problem is illustrated in figure (b).
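To make the distinction concrete, here is a minimal, self-contained sketch (independent of Macop; all names are illustrative): a greedy hill climb on a one-dimensional landscape stops at the first local maximum it reaches, so its result depends entirely on where it starts.

```python
def hill_climb(values, start):
    """Greedy ascent on a 1-D landscape: move to the better neighbour
    until neither neighbour improves the current position."""
    i = start
    while True:
        neighbours = [j for j in (i - 1, i + 1) if 0 <= j < len(values)]
        best = max(neighbours, key=lambda j: values[j])
        if values[best] <= values[i]:
            return i  # local optimum reached
        i = best

# landscape with a local optimum (value 5 at index 1) and the global one (9 at index 5)
landscape = [1, 5, 2, 3, 7, 9, 4]
print(hill_climb(landscape, start=0))  # 1: stuck on the local optimum
print(hill_climb(landscape, start=3))  # 5: reaches the global optimum
```

Starting from index 0, the climb stops at the nearby peak of value 5; starting from index 3, it reaches the global peak of value 9. This is exactly the figure (a) versus figure (b) situation.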
Abstract algorithm class
~~~~~~~~~~~~~~~~~~~~~~~~
An abstract class is proposed within Macop to generalise the management of an algorithm, and therefore of a heuristic.
It is located in the ``macop.algorithms.base`` module.
We will pay attention to the different methods it is composed of. This class manages some common needs of operational research algorithms:
- an initialisation function for solutions
- a validator function to check whether a solution is valid or not (based on some criteria)
- an evaluation function to give a fitness score to a solution
- operators used to update a solution during the search process
- a policy process applied when choosing the next operator to apply
- callback functions to perform some tasks every given number of evaluations, or to reload the algorithm state
- a parent algorithm associated with this new algorithm instance (hierarchy management)
It is composed of a few default attributes:
- initializer: {function} -- basic function strategy to initialise a solution
- evaluator: {Evaluator} -- evaluator instance used to obtain fitness (mono- or multi-objective)
- operators: {[Operator]} -- list of operators to use when launching the algorithm
- policy: {Policy} -- Policy instance strategy to select operators
- validator: {function} -- basic function to check whether a solution is valid or not under some constraints
- maximise: {bool} -- specifies the kind of optimisation problem
- verbose: {bool} -- whether or not to log information about the algorithm
- currentSolution: {Solution} -- current solution managed for the current evaluation comparison
- bestSolution: {Solution} -- best solution found so far while running the algorithm
- callbacks: {[Callback]} -- list of Callback implementations to run some instructions every given number of evaluations, and to ``load`` state when initialising the algorithm
- parent: {Algorithm} -- parent algorithm reference in case of an inner Algorithm instance (optional)
.. code-block:: python

    class Algorithm():

        def __init__(self,
                     initializer,
                     evaluator,
                     operators,
                     policy,
                     validator,
                     maximise=True,
                     parent=None,
                     verbose=True):
            ...

        def addCallback(self, callback):
            """
            Add new callback to algorithm specifying useful parameters
            """
            ...

        def resume(self):
            """
            Resume algorithm using Callback instances
            """
            ...

        def getParent(self):
            """
            Recursively find the main parent algorithm attached to the current algorithm
            """
            ...

        def setParent(self, parent):
            """
            Set parent algorithm of current algorithm
            """
            ...

        def initRun(self):
            """
            Initialize the current solution and best solution using the `initializer` function
            """
            ...

        def increaseEvaluation(self):
            """
            Increase number of evaluations once a solution is evaluated, for each dependent algorithm (parents hierarchy)
            """
            ...

        def getGlobalEvaluation(self):
            """
            Get the global number of evaluations (if inner algorithm)
            """
            ...

        def getGlobalMaxEvaluation(self):
            """
            Get the global max number of evaluations (if inner algorithm)
            """
            ...

        def stop(self):
            """
            Global stopping criterion (checks the parent algorithm hierarchy too)
            """
            ...

        def evaluate(self, solution):
            """
            Evaluate a solution using the evaluator passed when initialising the algorithm
            """
            ...

        def update(self, solution):
            """
            Apply an update to the solution using the specific `policy`,
            check that the solution is valid after modification, and return it
            """
            ...

        def isBetter(self, solution):
            """
            Check if the solution is better than the best found so far
            """
            ...

        def run(self, evaluations):
            """
            Run the specific algorithm for a number of evaluations to find optima
            """
            ...

        def progress(self):
            """
            Log progress and apply callbacks if necessary
            """
            ...
The notion of hierarchy between algorithms is introduced here: we can indeed have dependencies between algorithms.
The methods ``increaseEvaluation``, ``getGlobalEvaluation`` and ``getGlobalMaxEvaluation`` ensure that the expected global number of evaluations is correctly managed, as does the ``stop`` method for the search stopping criterion.
The ``evaluate``, ``update`` and ``isBetter`` methods will be used heavily when searching for a solution in the search space.
In particular the ``update`` method, which calls the ``policy`` instance to generate a new valid solution.
The ``isBetter`` method can also be overloaded, especially if the algorithm keeps track of more than a single solution to be checked (verification over a population, for example).
The ``initRun`` method specifies how to initialise your algorithm (``bestSolution`` and ``currentSolution``, for example) if the algorithm is not already initialised.
.. note::

   The ``initRun`` method can also be used to initialise a population of solutions instead of a single best solution, if you want to implement a genetic algorithm.
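As an illustration of the ``update`` pattern described above (a sketch only, not Macop's actual implementation, which delegates to the ``policy`` and ``validator`` attributes), the following hypothetical helper selects an operator, applies it, and retries until the result passes the validator:

```python
import random

def flip_one_bit(bits, rng):
    """Hypothetical mutation operator: flip a single random bit."""
    i = rng.randrange(len(bits))
    return bits[:i] + [1 - bits[i]] + bits[i + 1:]

def update(solution, operators, validator, rng):
    """Sketch of the update pattern: choose an operator, apply it,
    and only return candidates accepted by the validator."""
    while True:
        operator = rng.choice(operators)
        candidate = operator(solution, rng)
        if validator(candidate):
            return candidate

# constraint: at most two bits may be set
validator = lambda bits: sum(bits) <= 2
rng = random.Random(42)
new = update([1, 1, 0, 0], [flip_one_bit], validator, rng)
print(new)  # one bit flipped, and the result is guaranteed valid
```

Flipping either unset bit would give three set bits and be rejected, so the loop keeps drawing until it flips a set bit: ``update`` never hands an invalid solution back to the search.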
The most important part is the ``run`` method. In the abstract class, the ``run`` method only initialises the current number of evaluations of the algorithm, based on the parent algorithm if we are in an inner algorithm.
It is always **mandatory** to call the parent class ``run`` method using ``super().run(evaluations)``. Then, using the ``evaluations`` parameter, which is the evaluation budget of the run, we can start or continue to search for solutions in the search space.
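The shared-budget idea behind this hierarchy can be sketched with minimal illustrative classes (hypothetical names, not Macop's implementation): every evaluation is propagated up the parent chain, so an inner algorithm stops as soon as any parent budget runs out.

```python
class Budget:
    """Illustrative sketch of parent/inner evaluation accounting."""
    def __init__(self, max_evaluations, parent=None):
        self.max_evaluations = max_evaluations
        self.evaluations = 0
        self.parent = parent

    def increase_evaluation(self):
        # every evaluation is also counted in the whole parent hierarchy
        self.evaluations += 1
        if self.parent is not None:
            self.parent.increase_evaluation()

    def stop(self):
        # stop when the local budget or any parent budget is exhausted
        if self.parent is not None and self.parent.stop():
            return True
        return self.evaluations >= self.max_evaluations

outer = Budget(max_evaluations=5)
inner = Budget(max_evaluations=100, parent=outer)  # generous local budget

while not inner.stop():
    inner.increase_evaluation()

print(inner.evaluations, outer.evaluations)  # 5 5: the parent cut the run short
```

Even though the inner budget allows 100 evaluations, the run stops after 5 because the parent's global budget is exhausted first, which is exactly the role of ``increaseEvaluation`` and ``stop`` in the hierarchy.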
.. warning::

   The other methods, such as ``addCallback``, ``resume`` and ``progress``, will be detailed in the next part, which focuses on the notion of callback.
Local search algorithm
~~~~~~~~~~~~~~~~~~~~~~
We are going to carry out our first local search algorithm within our search space. A `local search` consists of starting from a solution, then applying a mutation or crossover operation to it in order to obtain a new one.
This new solution is evaluated and retained if it is better. We speak here of the notion of **neighbourhood exploration**. The process is then repeated in the same way.
The local search ends after a certain number of evaluations, and the best evaluated solution obtained is returned.
Let's implement the algorithm well known under the name of hill climber with best improvement, inheriting from the mother algorithm class, and name it ``HillClimberBestImprovment``.
.. code-block:: python

    """
    module imports
    """
    from macop.algorithms.base import Algorithm

    class HillClimberBestImprovment(Algorithm):

        def run(self, evaluations):
            """
            Run a local search algorithm
            """

            # by default use of mother method to initialize variables
            super().run(evaluations)

            # initialize current solution and best solution
            self.initRun()

            solutionSize = self._currentSolution._size

            # local search algorithm implementation
            while not self.stop():

                for _ in range(solutionSize):

                    # update current solution using policy
                    newSolution = self.update(self._currentSolution)

                    # if better solution than currently, replace it
                    if self.isBetter(newSolution):
                        self._bestSolution = newSolution

                    # increase number of evaluations
                    self.increaseEvaluation()

                    # stop algorithm if necessary
                    if self.stop():
                        break

                # set new current solution using best solution found in this neighbourhood search
                self._currentSolution = self._bestSolution

            return self._bestSolution
Our algorithm is now ready to work. As previously, let us define two operators as well as a random choice strategy.
We will also need to define a **solution initialisation function** so that the algorithm can generate new solutions.
.. code-block:: python

    """
    Problem instance definition
    """

    elements_score = [ 4, 2, 10, 1, 2 ] # worth of each object
    elements_weight = [ 12, 1, 4, 1, 2 ] # weight of each object

    # evaluator instance
    evaluator = KnapsackEvaluator(data={'worths': elements_score})

    # validator instance using lambda
    validator = lambda solution: sum([ elements_weight[i] * solution._data[i] for i in range(len(solution._data))]) <= 15

    # initialiser instance using lambda with default param value
    initialiser = lambda x=5: BinarySolution.random(x, validator)

    # operators list with crossover and mutation
    operators = [SimpleCrossover(), SimpleMutation()]

    # random policy instance
    policy = RandomPolicy(operators)

    # maximising algorithm (relative to knapsack problem)
    algo = HillClimberBestImprovment(initialiser, evaluator, operators, policy, validator, maximise=True, verbose=False)

    # run the algorithm and get the solution found
    solution = algo.run(100)
    print(solution.fitness())
.. note::

   If the ``verbose`` algorithm parameter is set to ``True`` (the default value), the algorithm logs its progress to the console.
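To see what the ``validator`` lambda and the fitness score represent for this instance, we can trace the knapsack computation by hand in plain Python (no Macop dependency; a candidate is just a list of 0/1 decisions):

```python
elements_score = [4, 2, 10, 1, 2]   # worth of each object
elements_weight = [12, 1, 4, 1, 2]  # weight of each object

def total_weight(bits):
    """Weight carried by a 0/1 decision vector (validator's left-hand side)."""
    return sum(w * b for w, b in zip(elements_weight, bits))

def total_worth(bits):
    """Fitness of a 0/1 decision vector: sum of the selected worths."""
    return sum(s * b for s, b in zip(elements_score, bits))

candidate = [0, 1, 1, 1, 1]  # take every object except the heaviest one
print(total_weight(candidate))  # 8 <= 15, so the validator accepts it
print(total_worth(candidate))   # fitness score of 15
```

Taking all five objects would weigh 20 and exceed the capacity of 15, which is exactly the kind of candidate the validator rejects during the search.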
Exploratory algorithm
~~~~~~~~~~~~~~~~~~~~~
As explained in **figure (b)** of **section 8.1**, sometimes the search space is more complicated due to convex parts, and needs a heuristic with a strategy other than a simple local search.
The way to counter this problem is to allow the algorithm to exit the exploitation phase offered by the local search, and instead seek to explore other parts of the search space. This is possible by simply carrying out several local searches within our budget (number of evaluations).
The idea is to make a leap in the search space in order to find a new local optimum, which may be the global optimum. The process is illustrated below:
.. image:: ../_static/documentation/search_space_simple.png
   :width: 400 px
   :align: center
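The leap idea can be sketched with the toy hill climber from earlier (again independent of Macop): running the same greedy ascent from several starting points and keeping the best local optimum recovers the global optimum of this small landscape. For determinism, the sketch restarts from every position; Iterated Local Search instead perturbs the current solution between runs.

```python
def hill_climb(values, start):
    """Greedy ascent: move to the better neighbour until stuck."""
    i = start
    while True:
        neighbours = [j for j in (i - 1, i + 1) if 0 <= j < len(values)]
        best = max(neighbours, key=lambda j: values[j])
        if values[best] <= values[i]:
            return i
        i = best

def multi_start(values):
    """Run a local search from every start position (the "leaps")
    and keep the best local optimum found."""
    optima = [hill_climb(values, s) for s in range(len(values))]
    return max(optima, key=lambda i: values[i])

landscape = [1, 5, 2, 3, 7, 9, 4]
print(landscape[hill_climb(landscape, 0)])  # 5: a single search gets stuck
print(landscape[multi_start(landscape)])    # 9: restarts reach the global optimum
```

Each restart costs evaluations, which is why the overall budget is shared between the outer loop and the inner local searches, as the ``IteratedLocalSearch`` implementation below makes explicit.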
We are going to implement a more specific algorithm taking a new parameter as input: a local search, the one previously developed. For that, we will have to modify the constructor a little.
Let's call this new algorithm ``IteratedLocalSearch``:
.. code-block:: python

    """
    module imports
    """
    from macop.algorithms.base import Algorithm

    class IteratedLocalSearch(Algorithm):

        def __init__(self,
                     initializer,
                     evaluator,
                     operators,
                     policy,
                     validator,
                     localSearch,
                     maximise=True,
                     parent=None,
                     verbose=True):

            super().__init__(initializer, evaluator, operators, policy, validator, maximise, parent, verbose)

            # specific local search associated with current algorithm
            self._localSearch = localSearch

            # need to attach current algorithm as parent
            self._localSearch.setParent(self)

        def run(self, evaluations, ls_evaluations=100):
            """
            Run the iterated local search algorithm using local search
            """

            # by default use of mother method to initialize variables
            super().run(evaluations)

            # initialize current solution
            self.initRun()

            # local search algorithm implementation
            while not self.stop():

                # create and search solution from local search (stop method can be called inside local search)
                newSolution = self._localSearch.run(ls_evaluations)

                # if better solution than currently, replace it
                if self.isBetter(newSolution):
                    self._bestSolution = newSolution

                self.information()

            return self._bestSolution
In the initialisation phase, we attach the local search passed as a parameter to the current algorithm as its parent.
The goal is to keep track of the overall number of search evaluations (relative to the parent algorithm).
Then, we use this local search in our ``run`` method to allow a better search for solutions.
.. code-block:: python

    """
    Problem instance definition
    """

    elements_score = [ 4, 2, 10, 1, 2 ] # worth of each object
    elements_weight = [ 12, 1, 4, 1, 2 ] # weight of each object

    # evaluator instance
    evaluator = KnapsackEvaluator(data={'worths': elements_score})

    # validator instance using lambda
    validator = lambda solution: sum([ elements_weight[i] * solution._data[i] for i in range(len(solution._data))]) <= 15

    # initialiser instance using lambda with default param value
    initialiser = lambda x=5: BinarySolution.random(x, validator)

    # operators list with crossover and mutation
    operators = [SimpleCrossover(), SimpleMutation()]

    # random policy instance
    policy = RandomPolicy(operators)

    # maximising algorithms (relative to knapsack problem)
    localSearch = HillClimberBestImprovment(initialiser, evaluator, operators, policy, validator, maximise=True, verbose=False)
    algo = IteratedLocalSearch(initialiser, evaluator, operators, policy, validator, localSearch=localSearch, maximise=True, verbose=False)

    # run the algorithm using the local search and get the solution found
    solution = algo.run(evaluations=100, ls_evaluations=10)
    print(solution.fitness())
.. note::

   These last two algorithms are available in the library, within the module ``macop.algorithms.mono``.

We have one final feature to explore in the next part: the notion of ``callback``.