
Some examples
=============

1. Mono-objective
-----------------
This tutorial introduces how to quickly set up and run your algorithm. First of all, we need to define the kind of solution that best represents the problem. In this tutorial, we use the well-known knapsack problem with 30 objects.

1.1 Problem definition
~~~~~~~~~~~~~~~~~~~~~~
Hence, we define our problem with:

- the value of each component (object) of the knapsack
- the weight associated with each of these components
.. code:: python

    """
    Imports part
    """
    import random

    """
    Problem definition
    """
    random.seed(42)

    elements_score = [ random.randint(1, 20) for _ in range(30) ]
    elements_weight = [ random.randint(5, 25) for _ in range(30) ]
We can now define the solution representation. In the knapsack problem we want to fill our knapsack in an optimal way, by selecting each component (object) or not. The best way to represent this problem is to use the `BinarySolution` from `macop`, which stores a solution as a binary array.

Using this solution representation, we need to define several things to fit our algorithm:

1. a function which validates a solution or not (based on constraints)
2. a function which evaluates the solution (in order to obtain its fitness)
3. a solution initialization function
.. code:: python

    """
    Imports part
    """
    import random
    from macop.solutions.BinarySolution import BinarySolution

    """
    Problem definition
    """
    random.seed(42)

    elements_score = [ random.randint(1, 20) for _ in range(30) ]
    elements_weight = [ random.randint(2, 5) for _ in range(30) ]

    # 1. validator function (we only accept bags with a maximum weight of 80kg)
    def validator(_solution):
        weight_sum = 0
        for index, elem in enumerate(_solution.data):
            weight_sum += elements_weight[index] * elem

        if weight_sum <= 80:
            return True
        else:
            return False

    # 2. function which computes the fitness of a solution
    def evaluator(_solution):
        fitness = 0
        for index, elem in enumerate(_solution.data):
            fitness += (elements_score[index] * elem)

        return fitness

    # 3. function which initializes a solution randomly and checks its validity
    def init():
        return BinarySolution([], 30).random(validator)
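
To check that the constraint and fitness logic behave as expected independently of `macop`, here is a small standalone sketch of the same functions, operating on a plain binary list instead of a `BinarySolution` (an assumption made only for this illustration):

.. code:: python

    import random

    random.seed(42)
    elements_score = [random.randint(1, 20) for _ in range(30)]
    elements_weight = [random.randint(2, 5) for _ in range(30)]

    def validator(data):
        # the bag is valid when the total selected weight stays under 80kg
        return sum(w * x for w, x in zip(elements_weight, data)) <= 80

    def evaluator(data):
        # the fitness is the total score of the selected objects
        return sum(s * x for s, x in zip(elements_score, data))

    empty = [0] * 30
    print(validator(empty), evaluator(empty))  # True 0: the empty bag is always valid

Selecting an object both adds its score to the fitness and its weight to the constraint, which is exactly the trade-off the algorithm will have to optimize.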

1.2 Operators and Policy
~~~~~~~~~~~~~~~~~~~~~~~~
In our algorithm we need to use some operators in order to improve the current best solution found after `n` evaluations. `macop` provides several operators; in this example, we use 4 of them.
.. code:: python

    """
    Imports part
    """
    ...
    from macop.operators.mutators.SimpleMutation import SimpleMutation
    from macop.operators.mutators.SimpleBinaryMutation import SimpleBinaryMutation
    from macop.operators.crossovers.SimpleCrossover import SimpleCrossover
    from macop.operators.crossovers.RandomSplitCrossover import RandomSplitCrossover

    """
    Problem definition
    """
    ...

    """
    Algorithm parameters
    """
    # list of operator instances to use
    operators = [SimpleBinaryMutation(), SimpleMutation(), SimpleCrossover(), RandomSplitCrossover()]
As we have defined multiple operators, we have to specify how to select them within the algorithm. This is why **Policy** classes have been implemented. A `Policy` implementation selects the next operator to use and, once a new solution has been generated, computes its score (in the `apply` method). This class requires all the operators used to be instantiated.

Why compute the score inside the **Policy** `apply` method? Because it is a way to gather useful statistics about solution improvement when using a specific operator. **UCBPolicy**, for example, is based on the Upper Confidence Bound (UCB_) and computes a reward each time a new solution is generated by an operator, in order to better select the next operator later. In this example we use the `UCBPolicy` implementation.

.. _UCB: https://banditalgs.com/2016/09/18/the-upper-confidence-bound-algorithm/
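
The selection rule behind such a policy can be sketched as follows. This is a minimal illustration of the generic UCB1 formula, not `macop`'s actual implementation; the names `rewards`, `plays` and the exploration constant `C` are assumptions of this sketch:

.. code:: python

    import math

    def ucb_select(rewards, plays, C=100.):
        """Return the index of the operator maximizing the UCB1 criterion."""
        # play every operator at least once before applying the formula
        for index, n in enumerate(plays):
            if n == 0:
                return index

        total = sum(plays)
        scores = [rewards[i] / plays[i] + C * math.sqrt(math.log(total) / plays[i])
                  for i in range(len(plays))]
        return scores.index(max(scores))

Operators that brought large rewards are exploited more often, while the square-root term keeps exploring rarely used ones; `C` balances the two behaviours.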
.. code:: python

    """
    Imports part
    """
    ...
    from macop.operators.mutators.SimpleMutation import SimpleMutation
    from macop.operators.mutators.SimpleBinaryMutation import SimpleBinaryMutation
    from macop.operators.crossovers.SimpleCrossover import SimpleCrossover
    from macop.operators.crossovers.RandomSplitCrossover import RandomSplitCrossover
    from macop.operators.policies.UCBPolicy import UCBPolicy

    """
    Problem definition
    """
    ...

    """
    Algorithm parameters
    """
    # list of operator instances to use
    operators = [SimpleBinaryMutation(), SimpleMutation(), SimpleCrossover(), RandomSplitCrossover()]

    # `policy` instance is created using a specific value for the Upper Confidence Bound
    policy = UCBPolicy(operators, C=100.)

1.3 Before running the algorithm
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Before running the algorithm, we can define a logger to keep track of the whole algorithm run.
.. code:: python

    """
    Imports part
    """
    ...
    import os
    import logging

    """
    Problem definition
    """
    ...

    """
    Algorithm parameters
    """
    ...

    if not os.path.exists('data'):
        os.makedirs('data')

    # logging configuration
    logging.basicConfig(format='%(asctime)s %(message)s', filename='data/example.log', level=logging.DEBUG)
We can now instantiate our algorithm. We use Iterated Local Search in this example; it is mainly used to avoid local optima by running multiple local searches.
.. code:: python

    """
    Imports part
    """
    ...
    import os
    import logging
    from macop.algorithms.IteratedLocalSearch import IteratedLocalSearch as ILS

    """
    Problem definition
    """
    ...

    """
    Algorithm parameters
    """
    ...

    if not os.path.exists('data'):
        os.makedirs('data')

    # logging configuration
    logging.basicConfig(format='%(asctime)s %(message)s', filename='data/example.log', level=logging.DEBUG)

    algo = ILS(init, evaluator, operators, policy, validator, _maximise=True)
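
The overall principle of Iterated Local Search can be sketched in a few lines of plain Python. This is a simplified, maximisation-only illustration, not `macop`'s actual implementation, which also manages operators, the policy and logging for us:

.. code:: python

    def local_search(solution, evaluate, mutate, steps=300):
        # simple hill climbing: keep a mutated candidate only if it improves fitness
        best = solution
        for _ in range(steps):
            candidate = mutate(best)
            if evaluate(candidate) > evaluate(best):
                best = candidate
        return best

    def iterated_local_search(initial, evaluate, mutate, restarts=10):
        # escape local optima by perturbing the best solution and searching again
        best = local_search(initial, evaluate, mutate)
        for _ in range(restarts):
            candidate = local_search(mutate(best), evaluate, mutate)
            if evaluate(candidate) > evaluate(best):
                best = candidate
        return best

Each restart perturbs the current best solution and launches a fresh local search from it, so the search can leave a basin of attraction instead of stalling in a local optimum.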
The algorithm is now well defined and ready to run! One more thing can be done: it is very useful to avoid restarting an algorithm run from scratch. Checkpoints are available in `macop`: the `BasicCheckpoint` class lets the algorithm save the best solution found every `_every` evaluations. We need to specify the use of a checkpoint if we want to be able to restart from it.
.. code:: python

    """
    Imports part
    """
    ...
    import os
    import logging
    from macop.algorithms.IteratedLocalSearch import IteratedLocalSearch as ILS
    from macop.checkpoints.BasicCheckpoint import BasicCheckpoint

    """
    Problem definition
    """
    ...

    """
    Algorithm parameters
    """
    ...

    if not os.path.exists('data'):
        os.makedirs('data')

    # logging configuration
    logging.basicConfig(format='%(asctime)s %(message)s', filename='data/example.log', level=logging.DEBUG)

    algo = ILS(init, evaluator, operators, policy, validator, _maximise=True)

    # we specify the checkpoint class directly, the frequency and the path where to save the algorithm evolution
    algo.addCheckpoint(_class=BasicCheckpoint, _every=5, _filepath='data/checkpoint.csv')
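
The idea behind such a checkpoint can be illustrated with a minimal standalone sketch. The helper names below are hypothetical, not `macop`'s API; `BasicCheckpoint` handles this bookkeeping for us inside the algorithm:

.. code:: python

    import csv
    import os

    def save_checkpoint(filepath, evaluation, data, fitness):
        # append the current best state so that the last row is the most recent one
        with open(filepath, 'a', newline='') as f:
            csv.writer(f).writerow([evaluation, ''.join(map(str, data)), fitness])

    def load_checkpoint(filepath):
        # restore the last saved state, or None when no checkpoint exists yet
        if not os.path.exists(filepath):
            return None
        with open(filepath) as f:
            rows = list(csv.reader(f))
        evaluation, data, fitness = rows[-1]
        return int(evaluation), [int(c) for c in data], float(fitness)

On restart, reading the last saved row gives back the evaluation counter and the best solution, so the search can resume instead of starting over.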
In this way, we can now run the algorithm and obtain the best solution found in `n` evaluations:
.. code:: python

    bestSol = algo.run(10000)
    print('Solution score is {}'.format(evaluator(bestSol)))

2. Multi-objective example
--------------------------

Available soon...