===================
A tour of Macop
===================

.. image:: _static/logo_macop.png
   :width: 300 px
   :align: center

This documentation will help a user who wishes to use the **Macop** optimisation package to understand how it works, and offers examples of how to implement specific needs.
It gradually takes up the major ideas developed within **Macop** to allow for quick development. You can navigate directly via the menu below to access a specific part of the documentation.
Introduction
================

`Macop` is a Python package for solving discrete optimisation problems. Continuous optimisation is also applicable but not yet developed. The objective is to let a user exploit the basic structure proposed by this package to solve a problem specific to them. The benefit is that the user can quickly abstract away the complications related to evaluating, comparing, and saving the progress of the search for good solutions, and instead concentrate, if necessary, on their own algorithm. Indeed, `Macop` offers the following main and basic features:

- **solutions:** representation of the solution;
- **validator:** as in constraint programming, a `validator` is a function used to check whether a solution's data state is valid or not;
- **evaluator:** stores problem instance data and implements a `compute` method in order to evaluate a solution;
- **operators:** mutation and crossover operators to update a solution and obtain a new one;
- **policies:** the way you choose among the available operators (might be using reinforcement learning);
- **algorithms:** generic and implemented optimisation research algorithms;
- **callbacks:** callbacks to automatically keep track of the search space advancement and restart from a previous state if needed.

.. image:: _static/documentation/macop_behaviour.png
   :width: 50 %
   :align: center

Based on all of these generic and/or implemented functionalities, the user will be able to quickly develop a solution to their problem while retaining the possibility of remaining in control of their development by overloading existing functionalities if necessary.
Problem instance
===================

In this tutorial, we introduce the way of using **Macop** and running your algorithm quickly, using the well-known `knapsack` problem.

Problem definition
~~~~~~~~~~~~~~~~~~~~~~

The **knapsack problem** is a problem in combinatorial optimisation: given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible.

The image below provides an illustration of the problem:

.. image:: _static/documentation/knapsack_problem.png
   :width: 40 %
   :align: center

In this problem, we try to optimise the value associated with the objects we wish to put in our backpack while respecting the capacity of the bag (weight constraint).
.. warning::
   It is a combinatorial and therefore discrete problem. **Macop** decomposes its package into two parts, related to discrete optimisation on the one hand and continuous optimisation on the other hand. This will be detailed later.

Problem implementation
~~~~~~~~~~~~~~~~~~~~~~~~~~~

During the whole tutorial, the example used is based on the previous illustration with:

.. image:: _static/documentation/project_knapsack_problem.png
   :width: 85 %
   :align: center

Hence, we now define our problem in Python:

- the worth value of each object
- the weight associated with each of these objects
.. code-block:: python

    """
    Problem instance definition
    """
    elements_score = [ 4, 2, 10, 1, 2 ] # worth of each object
    elements_weight = [ 12, 1, 4, 1, 2 ] # weight of each object
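Since this instance has only five objects, we can enumerate all :math:`2^5` possible selections to find the optimum exactly. This is a quick standalone sketch (the ``brute_force_knapsack`` helper is illustrative and not part of **Macop**), useful later as a reference value when checking what an algorithm finds.

```python
from itertools import product

elements_score = [ 4, 2, 10, 1, 2 ]   # worth of each object
elements_weight = [ 12, 1, 4, 1, 2 ]  # weight of each object

def brute_force_knapsack(scores, weights, capacity):
    # enumerate every binary selection and keep the best feasible one
    best_score, best_selection = 0, None
    for selection in product([0, 1], repeat=len(scores)):
        weight = sum(w * s for w, s in zip(weights, selection))
        score = sum(v * s for v, s in zip(scores, selection))
        if weight <= capacity and score > best_score:
            best_score, best_selection = score, selection
    return best_score, best_selection

# the optimum keeps every object except the heaviest one
print(brute_force_knapsack(elements_score, elements_weight, 15))  # (15, (0, 1, 1, 1, 1))
```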
Once we have defined the instance of our problem, we will need to define the representation of a solution to that problem.
Solutions
=============

Representing a solution to a specific problem is very important in an optimisation process. In this example, we will always use the **knapsack problem** as a basis.

First, the management of solutions by the Macop package is presented. Then a specific implementation for the current problem is detailed.

Generic Solution
~~~~~~~~~~~~~~~~~~~~~~~~~

Inside the macop.solutions.base_ module of `Macop`, the ``Solution`` class is available. It is an abstract solution class structure which:

- stores the solution data representation in its ``data`` attribute;
- stores the ``size`` (shape) of the specific data representation;
- stores the ``score`` of the solution once it has been evaluated.

Some specific methods are available:
.. code-block:: python

    class Solution():

        def __init__(self, data, size):
            """
            Abstract solution class constructor
            """
            ...

        def isValid(self, validator):
            """
            Use of custom function which checks if a solution is valid or not
            """
            ...

        def evaluate(self, evaluator):
            """
            Evaluate solution using specific `evaluator`
            """
            ...

        @property
        def fitness(self):
            """
            Returns fitness score (by default `score` private attribute)
            """
            ...

        @fitness.setter
        def fitness(self, score):
            """
            Set solution score as wished (by default `score` private attribute)
            """
            ...

        @property
        def data(self):
            """
            Returns solution data (by default `data` private attribute)
            """
            ...

        @data.setter
        def data(self, data):
            """
            Set solution data (by default `data` private attribute)
            """
            ...

        @property
        def size(self):
            """
            Returns solution size (by default `size` private attribute)
            """
            ...

        @size.setter
        def size(self, size):
            """
            Set solution size (by default `size` private attribute)
            """
            ...

        @staticmethod
        def random(size, validator=None):
            """
            Initialise solution using random data, with or without a validator
            """
            ...

        def clone(self):
            """
            Clone the current solution and its data, but without keeping the evaluated `_score`
            """
            ...
.. caution::
   An important point here is that the ``fitness``, ``size`` and ``data`` functions are exposed as editable attributes by the ``@property`` and ``@XXXXX.setter`` decorators. The idea is to allow the user to modify these functions in order to change the expected result of the algorithm regardless of the data to be returned/modified.

From these basic methods, it is possible to manage a representation of a solution to our problem: to initialise it, randomly or not (using the constructor or the ``random`` method), to evaluate it (``evaluate`` method) and to check some validation constraints on the solution (``isValid`` method).

.. note::
   Only one of these methods needs to be specified if we create our own type of solution: the ``random`` method, which depends on the needs of the problem.

We will now see how to define a type of solution specific to our problem.
Solution representation for knapsack
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

We will now use the abstract ``Solution`` type available in the macop.solutions.base_ module in order to define our own solution.
First of all, let's look at the representation of our knapsack problem. **How do we represent the solution?**

Knapsack solution
************************

A valid solution is shown below, where the sum of the object weights is 15 and the sum of the selected objects' values is 8 (its fitness):

.. image:: _static/documentation/project_knapsack_solution.png
   :width: 85 %
   :align: center

Its representation can be translated as a **binary array** with values:

.. code-block::

    [1, 1, 0, 0, 1]

where selected objects have **1** as value, otherwise **0**.
Binary Solution
**********************

We will now define our own type of solution by inheriting from macop.solutions.base.Solution_, which we will call ``BinarySolution``.

First we will define our new class as inheriting functionality from ``Solution`` (as a child class).
We will also have to implement the ``random`` method to create a new random solution.

.. code-block:: python

    """
    modules imports
    """
    from macop.solutions.base import Solution
    import numpy as np

    class BinarySolution(Solution):

        @staticmethod
        def random(size, validator=None):

            # create binary array of specific size using numpy random module
            data = np.random.randint(2, size=size)
            # initialise new solution using constructor
            solution = BinarySolution(data, size)

            # return solution directly if no validator is set
            if not validator:
                return solution

            # otherwise, regenerate until the solution is valid
            while not validator(solution):
                data = np.random.randint(2, size=size)
                solution = BinarySolution(data, size)

            return solution
.. note::
   The developed ``BinarySolution`` is available as macop.solutions.discrete.BinarySolution_ in **Macop**.

Using this new solution representation, we can now generate solutions randomly:

.. code-block:: python

    solution = BinarySolution.random(5)

In the next part, we will see how to verify that a solution meets certain modelling constraints of the problem.
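The behaviour of the ``random`` method can also be sketched without **Macop** installed. The standalone snippet below mimics the rejection loop above with a minimal stand-in class (``TinyBinarySolution`` is illustrative, not Macop's API) and the standard ``random`` module instead of numpy.

```python
import random

class TinyBinarySolution:
    """Minimal stand-in for BinarySolution, for illustration only."""
    def __init__(self, data, size):
        self.data = data
        self.size = size

    @staticmethod
    def random(size, validator=None):
        solution = TinyBinarySolution([random.randint(0, 1) for _ in range(size)], size)
        # regenerate until the validator (if any) accepts the solution
        while validator is not None and not validator(solution):
            solution = TinyBinarySolution([random.randint(0, 1) for _ in range(size)], size)
        return solution

solution = TinyBinarySolution.random(5)
print(solution.data)  # e.g. [1, 0, 1, 1, 0]
```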
Validate a solution
======================

When an optimisation problem requires respecting certain constraints, Macop allows you to quickly verify that a solution is valid.
It is based on a user-defined function taking a solution as input and returning the validity criterion (true or false).

Validator definition
~~~~~~~~~~~~~~~~~~~~~~~~~

An invalid solution is shown below, where the sum of the object weights is greater than 15:

.. image:: _static/documentation/project_knapsack_invalid.png
   :width: 85 %
   :align: center

In fact, **[1, 0, 1, 0, 0]** is an invalid solution: it has a weight of **16**, which violates the knapsack capacity constraint.
To avoid taking invalid solutions into account, we can define a function which will validate (or not) a solution based on our problem instance:

.. code-block:: python

    """
    Problem instance definition
    """
    elements_score = [ 4, 2, 10, 1, 2 ] # worth of each object
    elements_weight = [ 12, 1, 4, 1, 2 ] # weight of each object

    """
    Validator function definition
    """
    def validator(solution):

        weight_sum = 0

        for i, w in enumerate(elements_weight):
            # add weight if current object is set to 1
            weight_sum += w * solution.data[i]

        # validation condition
        return weight_sum <= 15
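We can check this validator against the two representations seen so far. This is a quick standalone check: the lightweight ``FakeSolution`` class below is illustrative and exists only to carry the ``data`` attribute the validator reads.

```python
elements_weight = [ 12, 1, 4, 1, 2 ]  # weight of each object

class FakeSolution:
    """Illustrative stand-in exposing only the `data` attribute the validator reads."""
    def __init__(self, data):
        self.data = data

def validator(solution):
    # sum the weights of selected objects and compare against the capacity
    weight_sum = sum(w * solution.data[i] for i, w in enumerate(elements_weight))
    return weight_sum <= 15

print(validator(FakeSolution([1, 1, 0, 0, 1])))  # True  (weight 15)
print(validator(FakeSolution([1, 0, 1, 0, 0])))  # False (weight 16)
```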
Use of validator
~~~~~~~~~~~~~~~~~~~~~

We can now generate solutions randomly by passing our validation function as a parameter:

.. code-block:: python

    """
    Problem instance definition
    """
    ...

    """
    Validator function definition
    """
    ...

    # ensure valid solution
    solution = BinarySolution.random(5, validator)

.. caution::
   If the search space of valid solutions is very small compared to the overall search space, validating a solution, and therefore obtaining one, can take considerable time.

The validation of a solution is therefore now possible. In the next part we will focus on the evaluation of a solution.
Use of evaluators
====================

Now that it is possible to generate a solution, randomly or not, it is important to know the value associated with this solution. We then speak of evaluation of the solution, with the score associated with it: the `fitness`.

Generic evaluator
~~~~~~~~~~~~~~~~~~~~~~

As for the management of solutions, a generic evaluator class, macop.evaluators.base.Evaluator_, is developed within **Macop**.

The abstract ``Evaluator`` class is used for computing the fitness score associated with a solution. To evaluate all the solutions, this class:

- stores in its ``_data`` dictionary attribute the measures required when computing a solution;
- has an abstract ``compute`` method that computes and associates a score with a given solution;
- stores in its ``_algo`` attribute the current algorithm to use (we will talk about algorithms later).
.. code-block:: python

    class Evaluator():
        """
        Abstract Evaluator class which enables to compute solution using specific `_data`
        """
        def __init__(self, data):
            self._data = data

        @abstractmethod
        def compute(self, solution):
            """
            Apply the computation of fitness from solution
            """
            pass

        def setAlgo(self, algo):
            """
            Keep into evaluator reference of the whole algorithm
            """
            self._algo = algo
We must therefore now create our own evaluator based on the proposed structure.

Custom evaluator
~~~~~~~~~~~~~~~~~~~~~

To create our own evaluator, we need both:

- data useful for evaluating a solution;
- a way to calculate the score (fitness) associated with the state of the solution from these data; hence, we implement a specific ``compute`` method.

We will define the ``KnapsackEvaluator`` class, which will therefore allow us to evaluate solutions to our current problem.

.. code-block:: python

    """
    modules imports
    """
    from macop.evaluators.base import Evaluator

    class KnapsackEvaluator(Evaluator):

        def compute(self, solution):

            # `_data` contains the worth values of objects
            fitness = 0
            for index, elem in enumerate(solution.data):
                fitness += self._data['worths'][index] * elem

            return fitness
It is now possible to initialise our new evaluator with the specific data of our problem instance:

.. code-block:: python

    """
    Problem instance definition
    """
    elements_score = [ 4, 2, 10, 1, 2 ] # worth of each object
    elements_weight = [ 12, 1, 4, 1, 2 ] # weight of each object

    """
    Evaluator problem instance
    """
    evaluator = KnapsackEvaluator(data={'worths': elements_score})

    # using defined BinarySolution
    solution = BinarySolution.random(5)

    # obtaining current solution score
    solution_fitness = solution.evaluate(evaluator)

    # score is also stored into solution
    solution_fitness = solution.fitness
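As a quick sanity check of the ``compute`` logic, the valid solution ``[1, 1, 0, 0, 1]`` seen earlier should obtain a fitness of 4 + 2 + 2 = 8. The standalone snippet below mirrors the evaluator structure with illustrative stand-in classes (``TinyEvaluator`` and ``TinyKnapsackEvaluator`` are not Macop's API) so it runs without the package installed.

```python
class TinyEvaluator:
    """Illustrative stand-in mirroring Evaluator's `_data` storage."""
    def __init__(self, data):
        self._data = data

class TinyKnapsackEvaluator(TinyEvaluator):
    def compute(self, solution):
        # weighted sum of the selected objects' worths
        return sum(self._data['worths'][i] * elem for i, elem in enumerate(solution))

evaluator = TinyKnapsackEvaluator(data={'worths': [4, 2, 10, 1, 2]})
print(evaluator.compute([1, 1, 0, 0, 1]))  # 8
```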
.. note::
   The developed ``KnapsackEvaluator`` is available as macop.evaluators.discrete.mono.KnapsackEvaluator_ in **Macop**.

In the next part we will see how to modify our current solution using modification operators.

Apply operators to solution
==============================

Applying an operator to a solution consists of modifying the current state of the solution in order to obtain a new one. The goal is to find a better solution in the search space.

Operators definition
~~~~~~~~~~~~~~~~~~~~~~~~~

In the discrete optimisation literature, we can categorise operators into two sections:

- **mutators**: modification of one or more elements of a solution from its current state;
- **crossovers**: inspired by Darwin's theory of evolution, we start here from two solutions to generate a so-called offspring solution composed of the fusion of the data of the parent solutions.

Inside **Macop**, operators are also decomposed into these two categories. Inside macop.operators.base_, the generic class ``Operator`` enables the management of any kind of operator.
.. code-block:: python

    class Operator():
        """
        Abstract Operator class which enables to update solution applying operator (computation)
        """
        @abstractmethod
        def __init__(self):
            pass

        @abstractmethod
        def apply(self, solution):
            """
            Apply the current operator transformation
            """
            pass

        def setAlgo(self, algo):
            """
            Keep into operator reference of the whole algorithm
            """
            self._algo = algo
Like the evaluator, the operator keeps **track of the algorithm** (using the ``setAlgo`` method) to which it will be linked. This allows better management of the way the operator must take into account the state of the current data relative to the evolution of the search.

The ``Mutation`` and ``Crossover`` classes inherit from ``Operator``. An ``apply`` function is required for any new operator.
.. code-block:: python

    class Mutation(Operator):
        """Abstract Mutation extend from Operator

        Attributes:
            kind: {:class:`~macop.operators.base.KindOperator`} -- specify the kind of operator
        """
        def __init__(self):
            self._kind = KindOperator.MUTATOR

        def apply(self, solution):
            raise NotImplementedError


    class Crossover(Operator):
        """Abstract crossover extend from Operator

        Attributes:
            kind: {:class:`~macop.operators.base.KindOperator`} -- specify the kind of operator
        """
        def __init__(self):
            self._kind = KindOperator.CROSSOVER

        def apply(self, solution1, solution2):
            raise NotImplementedError
We will now detail these categories of operators and suggest some relative to our problem.

Mutator operator
~~~~~~~~~~~~~~~~~~~~~

As detailed, the mutation operator consists in having a minimal impact on the current state of our solution. Here is an example of a modification that could be done for our problem.

.. image:: _static/documentation/project_knapsack_mutator.png
   :width: 90 %
   :align: center

In this example we randomly flip one bit value and obtain a new solution from our search space.

.. warning::
   Applying an operator can lead to a new but invalid solution in the search space.

The modification applied here is just a bit flip. Let's define the ``SimpleBinaryMutation`` operator, which randomly changes one binary value of our current solution.
.. code-block:: python

    """
    modules imports
    """
    import random
    from macop.operators.discrete.base import Mutation

    class SimpleBinaryMutation(Mutation):

        def apply(self, solution):
            # obtain targeted cell using solution size
            size = solution.size
            cell = random.randint(0, size - 1)

            # copy of solution
            copy_solution = solution.clone()

            # switch value of the targeted cell
            if copy_solution.data[cell]:
                copy_solution.data[cell] = 0
            else:
                copy_solution.data[cell] = 1

            # return the new obtained solution
            return copy_solution
We can now instantiate our new operator in order to obtain a new solution:

.. code-block:: python

    """
    BinaryMutator instance
    """
    mutator = SimpleBinaryMutation()

    # using defined BinarySolution
    solution = BinarySolution.random(5)

    # obtaining new solution using operator
    new_solution = mutator.apply(solution)

.. note::
   The developed ``SimpleBinaryMutation`` is available as macop.operators.discrete.mutators.SimpleBinaryMutation_ in **Macop**.
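One property worth keeping in mind: this mutation always produces a solution at Hamming distance exactly 1 from its input. A standalone sketch (``TinySolution`` and ``simple_binary_mutation`` are illustrative stand-ins using the standard ``random`` module, not Macop's API) makes this easy to verify.

```python
import random

class TinySolution:
    """Illustrative stand-in with just the attributes the operator needs."""
    def __init__(self, data):
        self.data = data
        self.size = len(data)

    def clone(self):
        return TinySolution(list(self.data))

def simple_binary_mutation(solution):
    # flip one randomly chosen bit of a copy of the solution
    cell = random.randint(0, solution.size - 1)
    copy_solution = solution.clone()
    copy_solution.data[cell] = 0 if copy_solution.data[cell] else 1
    return copy_solution

solution = TinySolution([1, 1, 0, 0, 1])
mutated = simple_binary_mutation(solution)

# exactly one bit differs between input and output
changed = sum(a != b for a, b in zip(solution.data, mutated.data))
print(changed)  # 1
```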
Crossover operator
~~~~~~~~~~~~~~~~~~~~~~~

Inspired by Darwin's theory of evolution, crossover starts from two solutions to generate a so-called offspring solution composed of the fusion of the data of the parent solutions.

.. image:: _static/documentation/project_knapsack_crossover.png
   :width: 95 %
   :align: center

In this example we merge two solutions with a specific splitting criterion in order to obtain an offspring.

We will now implement the ``SimpleCrossover`` operator, which will merge data from two solutions:
the second half of solution 1 replaces the second half of a copy of solution 2 in order to generate the new solution (offspring).
.. code-block:: python

    """
    modules imports
    """
    from macop.operators.discrete.base import Crossover

    class SimpleCrossover(Crossover):

        def apply(self, solution1, solution2):

            size = solution1.size

            # default split index used
            splitIndex = int(size / 2)

            # copy the data of solution 1
            firstData = solution1.data.copy()

            # copy of solution 2
            copy_solution = solution2.clone()

            copy_solution.data[splitIndex:] = firstData[splitIndex:]

            return copy_solution
We can now use the crossover operator created to generate new solutions. Here is an example of use:

.. code-block:: python

    """
    SimpleCrossover instance
    """
    crossover = SimpleCrossover()

    # using defined BinarySolution
    solution1 = BinarySolution.random(5)
    solution2 = BinarySolution.random(5)

    # obtaining new solution using crossover
    offspring = crossover.apply(solution1, solution2)

.. tip::
   The developed ``SimpleCrossover`` is available as macop.operators.discrete.crossovers.SimpleCrossover_ in **Macop**. However, in that version, the choice of which halves of the data are merged is made randomly.
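The split behaviour is easiest to see on two constant parents. The standalone sketch below (``simple_crossover`` is an illustrative stand-in working on plain lists rather than Macop solutions) shows the offspring keeping the first half of the second parent and the second half of the first.

```python
def simple_crossover(data1, data2):
    # keep the first half of data2, replace its second half with data1's
    split_index = len(data1) // 2
    offspring = list(data2)
    offspring[split_index:] = data1[split_index:]
    return offspring

print(simple_crossover([1, 1, 1, 1, 1], [0, 0, 0, 0, 0]))  # [0, 0, 1, 1, 1]
```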
The next part introduces the ``policy`` feature of **Macop**, which enables choosing the next operator to apply during the search process based on a specific criterion.
Operator choices
===================

The ``policy`` feature of **Macop** enables choosing the next operator to apply during the search process of the algorithm, based on a specific criterion.

Why use a policy?
~~~~~~~~~~~~~~~~~~~~~~~

Sometimes the nature of the problem and its instance can strongly influence the search results when using mutation operators or crossovers.
Automated operator choice strategies have also been developed in the literature, notably based on reinforcement learning.

The operator choice problem can be seen as the desire to find, for the next evaluation, the solution-generation operator most conducive to improving the solution.

.. image:: _static/documentation/operators_choice.png
   :width: 45 %
   :align: center

.. note::
   An implementation using reinforcement learning has been developed as an example in the macop.policies.reinforcement_ module.
   However, it will not be detailed here. You can refer to the API documentation for more details.

Custom policy
~~~~~~~~~~~~~~~~~~

In our case, we are not going to exploit a particularly complex implementation of a ``policy``. We will simply use a random choice of operator.

First, let's take a look at the ``Policy`` abstract class available in macop.policies.base_:
.. code-block:: python

    class Policy():

        def __init__(self, operators):
            self._operators = operators

        @abstractmethod
        def select(self):
            """
            Select specific operator
            """
            pass

        def apply(self, solution1, solution2=None):
            """
            Apply specific operator to create new solution, compute its fitness and return it
            """
            ...

        def setAlgo(self, algo):
            """
            Keep into policy reference of the whole algorithm
            """
            ...
A ``Policy`` instance has an ``_operators`` attribute in order to keep track of the possible operators when selecting one.
Here, in our implementation, we only need to specify the ``select`` abstract method. The ``apply`` method will select the next operator and return the new solution.

.. code-block:: python

    """
    module imports
    """
    import random
    from macop.policies.base import Policy

    class RandomPolicy(Policy):

        def select(self):
            """
            Select specific operator
            """
            # choose operator randomly
            index = random.randint(0, len(self._operators) - 1)
            return self._operators[index]
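The selection logic can be sketched on its own, without **Macop**: in the standalone snippet below, the base class is reduced to its ``_operators`` attribute and the operator objects are replaced by plain strings (``TinyPolicy`` and ``TinyRandomPolicy`` are illustrative names, not Macop's API).

```python
import random

class TinyPolicy:
    """Illustrative stand-in keeping only the `_operators` attribute."""
    def __init__(self, operators):
        self._operators = operators

class TinyRandomPolicy(TinyPolicy):
    def select(self):
        # choose one of the available operators uniformly at random
        return self._operators[random.randint(0, len(self._operators) - 1)]

policy = TinyRandomPolicy(['mutator', 'crossover'])
print(policy.select())  # either 'mutator' or 'crossover'
```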
We can now use this operator choice policy to update our current solution:

.. code-block:: python

    """
    Operators instances
    """
    mutator = SimpleBinaryMutation()
    crossover = SimpleCrossover()

    """
    RandomPolicy instance
    """
    policy = RandomPolicy([mutator, crossover])

    """
    Current solutions instance
    """
    solution1 = BinarySolution.random(5)
    solution2 = BinarySolution.random(5)

    # pass two solutions as parameters in case the crossover operator is selected
    new_solution = policy.apply(solution1, solution2)

.. caution::
   By default, if the ``solution2`` parameter is not provided to the ``policy.apply`` method and a crossover is selected, the best known solution from the algorithm linked to the ``policy`` is used.

Updating solutions is therefore now possible with our policy. It is high time to dive into the process of optimising solutions and digging into our search space.
Optimisation process
=======================

Let us now tackle the interesting part concerning the search for optimal solutions in our search space.

Find local and global optima
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Overall, in an optimisation process, we seek to find the best solution (or solutions) that minimises or maximises our objective function (the fitness score obtained), in order to respond to our problem.

.. image:: _static/documentation/search_space.png
   :width: 95 %
   :align: center

Sometimes the search space can be very simple: a local search can provide access to the global optimum, as shown in figure (a) above.
In other cases, the search space is more complex. It may be necessary to explore more, rather than exploit, in order to get out of a convex zone; otherwise we do not find the global optimum but only a local optimum.
This problem is illustrated in figure (b).
Abstract algorithm class
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

An abstract class is proposed within Macop to generalise the management of an algorithm, and therefore of a heuristic.
It is located in the macop.algorithms.base_ module.

We will pay attention to the different methods it is composed of. This class enables the management of some common usages of operational research algorithms:

- an initialisation function for the solution;
- a validator function to check if a solution is valid or not (based on some criteria);
- an evaluation function to give a fitness score to a solution;
- operators used in order to update the solution during the search process;
- the policy process applied when choosing the next operator to apply;
- callback functions in order to do some related work every given number of evaluations, or to reload the algorithm state;
- a parent algorithm associated with this new algorithm instance (hierarchy management).

It is composed of a few default attributes:

- initialiser: {function} -- basic function strategy to initialise a solution
- evaluator: {:class:`~macop.evaluators.base.Evaluator`} -- evaluator instance in order to obtain fitness (mono or multiple objectives)
- operators: {[:class:`~macop.operators.base.Operator`]} -- list of operators to use when launching the algorithm
- policy: {:class:`~macop.policies.base.Policy`} -- Policy instance strategy to select operators
- validator: {function} -- basic function to check if a solution is valid or not under some constraints
- maximise: {bool} -- specify the kind of optimisation problem
- verbose: {bool} -- verbose or not information about the algorithm
- currentSolution: {:class:`~macop.solutions.base.Solution`} -- current solution managed for the current evaluation comparison
- bestSolution: {:class:`~macop.solutions.base.Solution`} -- best solution found so far while running the algorithm
- callbacks: {[:class:`~macop.callbacks.base.Callback`]} -- list of Callback class implementations to do some instructions every given number of evaluations, and to `load` state when initialising the algorithm
- parent: {:class:`~macop.algorithms.base.Algorithm`} -- parent algorithm reference in case of an inner Algorithm instance (optional)
.. code-block:: python

    class Algorithm():

        def __init__(self,
                     initialiser,
                     evaluator,
                     operators,
                     policy,
                     validator,
                     maximise=True,
                     parent=None,
                     verbose=True):
            ...

        def addCallback(self, callback):
            """
            Add new callback to algorithm specifying useful parameters
            """
            ...

        def resume(self):
            """
            Resume algorithm using Callback instances
            """
            ...

        @property
        def result(self):
            """Get the expected result of the current algorithm

            By default the best solution (but can be anything you want)
            """
            ...

        @result.setter
        def result(self, result):
            """Set current default result of the algorithm
            """
            ...

        def getParent(self):
            """
            Recursively find the main parent algorithm attached to the current algorithm
            """
            ...

        def setParent(self, parent):
            """
            Set parent algorithm to current algorithm
            """
            ...

        def initRun(self):
            """
            Initialise the current solution and best solution using the `initialiser` function
            """
            ...

        def increaseEvaluation(self):
            """
            Increase the number of evaluations once a solution is evaluated, for each dependent algorithm (parents hierarchy)
            """
            ...

        def getGlobalEvaluation(self):
            """
            Get the global number of evaluations (if inner algorithm)
            """
            ...

        def getGlobalMaxEvaluation(self):
            """
            Get the global max number of evaluations (if inner algorithm)
            """
            ...

        def stop(self):
            """
            Global stopping criterion (checks the parent algorithm hierarchy too)
            """
            ...

        def evaluate(self, solution):
            """
            Evaluate a solution using the evaluator passed when initialising the algorithm
            """
            ...

        def update(self, solution):
            """
            Apply update function to solution using the specific `policy`,
            check if the solution is valid after modification and return it
            """
            ...

        def isBetter(self, solution):
            """
            Check if solution is better than the best found
            """
            ...

        def run(self, evaluations):
            """
            Run the specific algorithm following the number of evaluations to find optima
            """
            ...

        def progress(self):
            """
            Log progress and apply callbacks if necessary
            """
            ...
.. caution::
    An important thing here is the ``result`` functions brought as an editable attribute by the ``@property`` and ``@result.setter`` decorators. The idea is to allow the user to modify these functions in order to change the expected result of the algorithm, regardless of the data to be returned/modified.

The notion of hierarchy between algorithms is introduced here: we can indeed have certain dependencies between algorithms.
The methods ``increaseEvaluation``, ``getGlobalEvaluation`` and ``getGlobalMaxEvaluation`` ensure that the expected global number of evaluations is correctly managed, just like the ``stop`` method for the search stopping criterion.

The ``evaluate``, ``update`` and ``isBetter`` methods will be used a lot when looking for a solution in the search space.
In particular the ``update`` function, which calls the ``policy`` instance in order to generate a new valid solution.
The ``isBetter`` method is also overloadable, especially if the algorithm no longer takes only a single solution into account (verification against a population, for example).

The ``initRun`` method specifies the way you initialise your algorithm (``bestSolution`` and ``currentSolution``, for example) if the algorithm is not already initialised.

.. note::
    The ``initRun`` method can also be used to initialise a population of solutions instead of only one best solution, if you want to manage a genetic algorithm.

The most important part is the ``run`` method. In the abstract class, the ``run`` method only initialises the current number of evaluations for the algorithm, based on the parent algorithm if we are in an inner algorithm.
It is always **mandatory** to call the parent class ``run`` method using ``super().run(evaluations)``. Then, using the ``evaluations`` parameter, which is the budget of evaluations to run, we can start or continue to find solutions in the search space.

.. warning::
    The other methods such as ``addCallback``, ``resume`` and ``progress`` will be detailed in the next part, which focuses on the notion of callback.
Local search algorithm
~~~~~~~~~~~~~~~~~~~~~~

We are going to carry out our first local search algorithm within our search space. A `local search` consists of starting from a solution, then applying a mutation or crossover operation to it, in order to obtain a new one.

This new solution is evaluated and retained if it is better. We speak here of the notion of **neighborhood exploration**. The process is then repeated in the same way.
The local search ends after a certain number of evaluations, and the best evaluated solution obtained is returned.

Let's implement an algorithm well known under the name of hill climber best improvement, inheriting from the mother algorithm class, and name it ``HillClimberBestImprovment``.
.. code-block:: python

    """
    module imports
    """
    from macop.algorithms.base import Algorithm

    class HillClimberBestImprovment(Algorithm):

        def run(self, evaluations):
            """
            Run a local search algorithm
            """

            # by default use of mother method to initialise variables
            super().run(evaluations)

            # initialise current solution and best solution
            self.initRun()

            solutionSize = self._currentSolution.size

            # local search algorithm implementation
            while not self.stop():

                for _ in range(solutionSize):

                    # update current solution using policy
                    newSolution = self.update(self._currentSolution)

                    # if better solution than currently, replace it
                    if self.isBetter(newSolution):
                        self._bestSolution = newSolution

                    # increase number of evaluations
                    self.increaseEvaluation()

                    # stop algorithm if necessary
                    if self.stop():
                        break

                # set new current solution using best solution found in this neighbor search
                self._currentSolution = self._bestSolution

            return self._bestSolution
Our algorithm is now ready to work. As previously, let us define two operators as well as a random choice strategy.
We will also need to define a **solution initialisation function** so that the algorithm can generate new solutions.
.. code-block:: python

    """
    Problem instance definition
    """
    elements_score = [ 4, 2, 10, 1, 2 ] # worth of each object
    elements_weight = [ 12, 1, 4, 1, 2 ] # weight of each object

    # evaluator instance
    evaluator = KnapsackEvaluator(data={'worths': elements_score})

    # valid instance using lambda
    validator = lambda solution: sum([ elements_weight[i] * solution.data[i] for i in range(len(solution.data))]) <= 15

    # initialiser instance using lambda with default param value
    initialiser = lambda x=5: BinarySolution.random(x, validator)

    # operators list with crossover and mutation
    operators = [SimpleCrossover(), SimpleMutation()]

    # policy random instance
    policy = RandomPolicy(operators)

    # maximising algorithm (relative to knapsack problem)
    algo = HillClimberBestImprovment(initialiser, evaluator, operators, policy, validator, maximise=True, verbose=False)

    # run the algorithm and get solution found
    solution = algo.run(100)
    print(solution.fitness)
.. note::
    The ``verbose`` algorithm parameter will log the advancement process of the algorithm into the console if it is set to ``True`` (the default value).
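Since this knapsack instance has only 5 objects, we can verify by hand what the best reachable fitness is. The following standalone sketch (plain Python, independent of Macop) enumerates every possible binary solution under the weight constraint:

```python
from itertools import product

elements_score = [4, 2, 10, 1, 2]    # worth of each object
elements_weight = [12, 1, 4, 1, 2]   # weight of each object

best = 0
for bits in product([0, 1], repeat=5):
    weight = sum(w * b for w, b in zip(elements_weight, bits))
    worth = sum(s * b for s, b in zip(elements_score, bits))
    # keep the best worth among solutions respecting the capacity of 15
    if weight <= 15 and worth > best:
        best = worth

print(best)  # 15: objects 1, 2, 3 and 4 fit (total weight 8) for a worth of 15
```

A hill climber run with a sufficient evaluation budget should therefore converge towards a fitness of 15 on this instance.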
Exploratory algorithm
~~~~~~~~~~~~~~~~~~~~~

As explained in **figure (b)** of **section 8.1**, sometimes the search space is more complicated due to convex parts, and needs a heuristic with another strategy rather than a simple local search.

The way to counter this problem is to allow the algorithm to exit the exploitation phase offered by the local search, and rather to explore other parts of the search space. This is possible by simply carrying out several local searches within our budget (number of evaluations).

The idea is to make a leap in the search space in order to find a new local optimum, which may be the global optimum. The process described is illustrated below:

.. image:: _static/documentation/search_space_simple.png
   :width: 45 %
   :align: center

We are going to implement a more specific algorithm allowing us to take a new parameter as input. This is a local search, the one previously developed. For that, we will have to modify the constructor a little.

Let's call this new algorithm ``IteratedLocalSearch``:
.. code-block:: python

    """
    module imports
    """
    from macop.algorithms.base import Algorithm

    class IteratedLocalSearch(Algorithm):

        def __init__(self,
                     initialiser,
                     evaluator,
                     operators,
                     policy,
                     validator,
                     localSearch,
                     maximise=True,
                     parent=None,
                     verbose=True):

            super().__init__(initialiser, evaluator, operators, policy, validator, maximise, parent, verbose)

            # specific local search associated with current algorithm
            self._localSearch = localSearch

            # need to attach current algorithm as parent
            self._localSearch.setParent(self)

        def run(self, evaluations, ls_evaluations=100):
            """
            Run the iterated local search algorithm using local search
            """

            # by default use of mother method to initialise variables
            super().run(evaluations)

            # initialise current solution
            self.initRun()

            # local search algorithm implementation
            while not self.stop():

                # create and search solution from local search (stop method can be called inside local search)
                newSolution = self._localSearch.run(ls_evaluations)

                # if better solution than currently, replace it
                if self.isBetter(newSolution):
                    self._bestSolution = newSolution

                self.information()

            return self._bestSolution
In the initialisation phase we attach the local search passed as a parameter to the current algorithm, as its parent.
The goal is to keep track of the overall number of search evaluations (relative to the parent algorithm).

Then, we use this local search in our ``run`` method to allow a better search for solutions.
.. code-block:: python

    """
    Problem instance definition
    """
    elements_score = [ 4, 2, 10, 1, 2 ] # worth of each object
    elements_weight = [ 12, 1, 4, 1, 2 ] # weight of each object

    # evaluator instance
    evaluator = KnapsackEvaluator(data={'worths': elements_score})

    # valid instance using lambda
    validator = lambda solution: sum([ elements_weight[i] * solution.data[i] for i in range(len(solution.data))]) <= 15

    # initialiser instance using lambda with default param value
    initialiser = lambda x=5: BinarySolution.random(x, validator)

    # operators list with crossover and mutation
    operators = [SimpleCrossover(), SimpleMutation()]

    # policy random instance
    policy = RandomPolicy(operators)

    # maximising algorithm (relative to knapsack problem)
    localSearch = HillClimberBestImprovment(initialiser, evaluator, operators, policy, validator, maximise=True, verbose=False)
    algo = IteratedLocalSearch(initialiser, evaluator, operators, policy, validator, localSearch=localSearch, maximise=True, verbose=False)

    # run the algorithm using local search and get solution found
    solution = algo.run(evaluations=100, ls_evaluations=10)
    print(solution.fitness)
.. note::
    These two algorithms are available in the library, within the macop.algorithms.mono_ module.

We have one final feature to explore in the next part: the notion of ``callback``.
Keep track
==========

Keeping track of the running algorithm can be useful on two levels. First of all, to understand how it unfolded at the end of a classic run. But also in the case of an unwanted shutdown of the algorithm.
This section introduces the recovery of the algorithm thanks to a continuous backup functionality.

Logging into algorithm
~~~~~~~~~~~~~~~~~~~~~~

Some logs can be retrieved after running an algorithm. **Macop** uses the ``logging`` Python package in order to log the algorithm's advancement.

Here is an example of use when running an algorithm:
.. code-block:: python

    """
    basic imports
    """
    import logging

    # logging configuration
    logging.basicConfig(format='%(asctime)s %(message)s', filename='data/example.log', level=logging.DEBUG)

    ...

    # maximising algorithm (relative to knapsack problem)
    algo = HillClimberBestImprovment(initialiser, evaluator, operators, policy, validator, maximise=True, verbose=False)

    # run the algorithm and get solution found
    solution = algo.run(evaluations=100)
    print(solution.fitness)

Hence, log data are saved into ``data/example.log`` in our example.
Callbacks introduction
~~~~~~~~~~~~~~~~~~~~~~

Having output logs can help to understand an error that has occurred; however, all the progress of the search carried out may be lost.
For this, the functionality relating to callbacks has been developed.

Within **Macop**, a callback is a specific instance of macop.callbacks.base.Callback_ that allows you to perform an action of tracing/saving information **every** ``n`` **evaluations**, but also to reload information if necessary when restarting an algorithm.
.. code-block:: python

    class Callback():

        def __init__(self, every, filepath):
            ...

        @abstractmethod
        def run(self):
            """
            Check if necessary to do backup based on `every` variable
            """
            pass

        @abstractmethod
        def load(self):
            """
            Load last backup line of solution and set algorithm state at this backup
            """
            pass

        def setAlgo(self, algo):
            """
            Specify the main algorithm instance reference
            """
            ...
- The ``run`` method will be called during the run process of the algorithm, and does a backup every specific number of evaluations.
- The ``load`` method will be used to reload the state of the algorithm from the last information saved. All saved data is kept in a file whose name is specified by the user.
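To illustrate the kind of state such a pair of methods can persist, here is a standalone sketch (plain Python, independent of Macop) of a backup file using a hypothetical semicolon-separated line format: one checkpoint per line holding the number of evaluations, the solution data and the fitness.

```python
import os
import tempfile

# hypothetical backup file: "<evaluations>;<solution data>;<fitness>;" per line
checkpoint = os.path.join(tempfile.mkdtemp(), 'hillClimberBackup.csv')

def save(evaluations, data, fitness):
    # append so that the history of checkpoints is preserved
    line = f"{evaluations};{' '.join(map(str, data))};{fitness};\n"
    with open(checkpoint, 'a') as f:
        f.write(line)

def load():
    # only the last line is relevant when restoring the state
    with open(checkpoint) as f:
        evaluations, data, fitness, _ = f.readlines()[-1].split(';')
    return int(evaluations), list(map(int, data.split(' '))), float(fitness)

save(5, [1, 0, 1, 1, 0], 13.0)
save(10, [0, 1, 1, 1, 1], 15.0)
print(load())  # (10, [0, 1, 1, 1, 1], 15.0)
```

The checkpoint implemented in the next part follows the same write-then-reload idea, wired to the algorithm's internal state.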
Towards the use of Callbacks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

We are going to create our own Callback instance, called ``BasicCheckpoint``, which will save the best solution found and the number of evaluations done, in order to reload them for the next run of our algorithm.
.. code-block:: python

    """
    module imports
    """
    import os

    import numpy as np

    from macop.callbacks.base import Callback

    class BasicCheckpoint(Callback):

        def run(self):
            """
            Check if necessary to do backup based on `every` variable
            """
            # get current best solution
            solution = self._algo._bestSolution

            currentEvaluation = self._algo.getGlobalEvaluation()

            # backup if necessary every number of evaluations
            if currentEvaluation % self._every == 0:

                # create specific line with solution data
                solutionData = ""
                solutionSize = len(solution.data)

                for index, val in enumerate(solution.data):
                    solutionData += str(val)

                    if index < solutionSize - 1:
                        solutionData += ' '

                # number of evaluations done, solution data and fitness score
                line = str(currentEvaluation) + ';' + solutionData + ';' + str(
                    solution.fitness) + ';\n'

                # check if file exists
                if not os.path.exists(self._filepath):
                    with open(self._filepath, 'w') as f:
                        f.write(line)
                else:
                    with open(self._filepath, 'a') as f:
                        f.write(line)

        def load(self):
            """
            Load last backup line and set algorithm state (best solution and evaluations)
            """
            if os.path.exists(self._filepath):

                with open(self._filepath) as f:

                    # get last line and read data
                    lastline = f.readlines()[-1]
                    data = lastline.split(';')

                    # get evaluation information
                    globalEvaluation = int(data[0])

                    # restore number of evaluations
                    if self._algo.getParent() is not None:
                        self._algo.getParent()._numberOfEvaluations = globalEvaluation
                    else:
                        self._algo._numberOfEvaluations = globalEvaluation

                    # get best solution data information
                    solutionData = list(map(int, data[1].split(' ')))

                    # avoid uninitialised solution
                    if self._algo._bestSolution is None:
                        self._algo._bestSolution = self._algo.initialiser()

                    # set to algorithm the latest obtained best solution
                    self._algo._bestSolution.data = np.array(solutionData)
                    self._algo._bestSolution._score = float(data[2])
In this way, it is possible to specify the use of a callback to our algorithm instance:

.. code-block:: python

    ...

    # maximising algorithm (relative to knapsack problem)
    algo = HillClimberBestImprovment(initialiser, evaluator, operators, policy, validator, maximise=True, verbose=False)

    callback = BasicCheckpoint(every=5, filepath='data/hillClimberBackup.csv')

    # add callback into callback list
    algo.addCallback(callback)

    # run the algorithm and get solution found
    solution = algo.run(evaluations=100)
    print(solution.fitness)

.. note::
    It is possible to add as many callbacks as desired into the algorithm in question.
Previously, some methods of the abstract ``Algorithm`` class were not presented. These methods are linked to the use of callbacks,
in particular the ``addCallback`` method, which allows the addition of a callback to an algorithm instance, as seen above.

- The ``resume`` method will reload the whole callbacks list using the ``load`` method.
- The ``progress`` method will ``run`` each callback during the algorithm search.

If we want to exploit this functionality, we will need to use these methods within our algorithm. Let's make the necessary modifications to our ``IteratedLocalSearch`` algorithm:
.. code-block:: python

    """
    module imports
    """
    from macop.algorithms.base import Algorithm

    class IteratedLocalSearch(Algorithm):

        ...

        def run(self, evaluations, ls_evaluations=100):
            """
            Run the iterated local search algorithm using local search
            """

            # by default use of mother method to initialise variables
            super().run(evaluations)

            # initialise current solution
            self.initRun()

            # restart using callbacks backup list
            self.resume()

            # local search algorithm implementation
            while not self.stop():

                # create and search solution from local search
                newSolution = self._localSearch.run(ls_evaluations)

                # if better solution than currently, replace it
                if self.isBetter(newSolution):
                    self._bestSolution = newSolution

                # check if necessary to call each callback
                self.progress()

                self.information()

            return self._bestSolution
All the features of **Macop** have now been presented. The next section aims to quickly present the few implementations proposed within **Macop**, to highlight the modularity of the package.
Implementation examples
=======================

Within the API of **Macop**, you can find an implementation of the multi-objective evolutionary algorithm based on decomposition (MOEA/D). It is a general-purpose algorithm for approximating the Pareto set of multi-objective optimisation problems.
It decomposes the original multi-objective problem into a number of single-objective optimisation sub-problems, and then uses an evolutionary process to optimise these sub-problems simultaneously and cooperatively.

MOEA/D is a state-of-the-art algorithm among aggregation-based approaches for multi-objective optimisation.

.. image:: _static/documentation/search_space_moead.png
   :width: 45 %
   :align: center

As illustrated above, the two main objectives are subdivided into 5 single-objective optimisation sub-problems in order to find the Pareto front.

- macop.algorithms.multi.MOSubProblem_ class defines each sub-problem of MOEA/D.
- macop.algorithms.multi.MOEAD_ class exploits ``MOSubProblem`` and implements MOEA/D using the weighted-sum of objectives method.
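The weighted-sum scalarisation behind this decomposition can be sketched in a few lines of plain Python (the weight vectors below are hypothetical, for illustration only): each sub-problem aggregates the objective scores with its own weight vector, turning the multi-objective problem into several single-objective ones.

```python
# one hypothetical weight vector per sub-problem (5 sub-problems, 2 objectives)
weights = [(0.0, 1.0), (0.25, 0.75), (0.5, 0.5), (0.75, 0.25), (1.0, 0.0)]

def aggregate(scores, weight):
    """Weighted sum of the objective scores: a single fitness per sub-problem."""
    return sum(w * s for w, s in zip(weight, scores))

# a solution scoring (8, 4) on the two objectives gets a different
# fitness on each sub-problem, steering the search along the Pareto front
fitnesses = [aggregate((8, 4), w) for w in weights]
print(fitnesses)  # [4.0, 5.0, 6.0, 7.0, 8.0]
```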
An example with MOEA/D for the knapsack problem is available in knapsackMultiExample.py_.

.. _knapsackMultiExample.py: https://github.com/jbuisine/macop/blob/master/examples/knapsackMultiExample.py
.. _macop.algorithms.base: macop/macop.algorithms.base.html#module-macop.algorithms.base
.. _macop.algorithms.mono: macop/macop.algorithms.mono.html#module-macop.algorithms.mono
.. _macop.solutions.base: macop/macop.solutions.base.html#module-macop.solutions.base
.. _macop.solutions.base.Solution: macop/macop.solutions.base.html#macop.solutions.base.Solution
.. _macop.solutions.discrete.BinarySolution: macop/macop.solutions.discrete.html#macop.solutions.discrete.BinarySolution
.. _macop.evaluators.base.Evaluator: macop/macop.evaluators.base.html#macop.evaluators.base.Evaluator
.. _macop.evaluators.discrete.mono.KnapsackEvaluator: macop/macop.evaluators.discrete.mono.html#macop.evaluators.discrete.mono.KnapsackEvaluator
.. _macop.operators.base: macop/macop.operators.base.html#module-macop.operators.base
.. _macop.operators.discrete.mutators.SimpleBinaryMutation: macop/macop.operators.discrete.mutators.html#macop.operators.discrete.mutators.SimpleBinaryMutation
.. _macop.operators.discrete.crossovers.SimpleCrossover: macop/macop.operators.discrete.crossovers.html#macop.operators.discrete.crossovers.SimpleCrossover
.. _macop.policies.reinforcement: macop/macop.policies.reinforcement.html#module-macop.policies.reinforcement
.. _macop.policies.base: macop/macop.policies.base.html#module-macop.policies.base
.. _macop.callbacks.base.Callback: macop/macop.callbacks.base.html#macop.callbacks.base.Callback
.. _macop.algorithms.multi.MOSubProblem: macop/macop.algorithms.multi.html#macop.algorithms.multi.MOSubProblem
.. _macop.algorithms.multi.MOEAD: macop/macop.algorithms.multi.html#macop.algorithms.multi.MOEAD