
Merge branch 'release/v1.0.7'

Jérôme BUISINE, 3 years ago
Parent commit: 6ce14b10e2
49 files changed with 957 additions and 148 deletions
  1. CONTRIBUTING.md (+39 -31)
  2. README.md (+2 -2)
  3. docs/figures/macop_behaviour.kra (BIN)
  4. docs/figures/operators_choice.kra (BIN)
  5. docs/figures/project_knapsack_problem.kra (BIN)
  6. docs/figures/project_knapsack_problem_op.kra (BIN)
  7. docs/figures/search_space.kra (BIN)
  8. docs/figures/search_space_moead.kra (BIN)
  9. docs/figures/search_space_simple.kra (BIN)
  10. docs/source/_static/documentation/macop_behaviour_reduced.png (BIN)
  11. docs/source/_static/documentation/operators_choice.png (BIN)
  12. docs/source/_static/documentation/search_space.png (BIN)
  13. docs/source/_static/documentation/search_space_moead.png (BIN)
  14. docs/source/_static/documentation/search_space_simple.png (BIN)
  15. docs/source/_static/logo_macop.png (BIN)
  16. docs/source/_static/logo_macop.png~ (+0 -0)
  17. docs/source/conf.py (+2 -2)
  18. docs/source/documentations/algorithms.rst (+374 -2)
  19. docs/source/documentations/callbacks.rst (+233 -2)
  20. docs/source/documentations/evaluators.rst (+3 -3)
  21. docs/source/documentations/index.rst (+1 -0)
  22. docs/source/documentations/introduction.rst (+1 -1)
  23. docs/source/documentations/operators.rst (+5 -6)
  24. docs/source/documentations/others.rst (+16 -2)
  25. docs/source/documentations/policies.rst (+95 -6)
  26. docs/source/documentations/problem.rst (+3 -3)
  27. docs/source/documentations/solutions.rst (+5 -5)
  28. docs/source/documentations/validator.rst (+3 -3)
  29. docs/source/index.rst (+4 -2)
  30. macop/__init__.py (+27 -0)
  31. macop/algorithms/base.py (+3 -18)
  32. macop/algorithms/mono.py (+3 -3)
  33. macop/algorithms/multi.py (+3 -3)
  34. macop/callbacks/classicals.py (+2 -2)
  35. macop/callbacks/multi.py (+2 -2)
  36. macop/callbacks/policies.py (+2 -2)
  37. macop/evaluators/discrete/mono.py (+1 -1)
  38. macop/evaluators/discrete/multi.py (+1 -1)
  39. macop/operators/continuous/crossovers.py (+1 -1)
  40. macop/operators/continuous/mutators.py (+1 -1)
  41. macop/operators/discrete/crossovers.py (+5 -11)
  42. macop/operators/discrete/mutators.py (+1 -1)
  43. macop/policies/base.py (+23 -6)
  44. macop/policies/classicals.py (+1 -1)
  45. macop/policies/reinforcement.py (+24 -9)
  46. macop/solutions/discrete.py (+1 -1)
  47. paper.bib (+48 -0)
  48. paper.md (+21 -14)
  49. setup.py (+1 -1)

+ 39 - 31
CONTRIBUTING.md

@@ -8,7 +8,7 @@ Contribution guidelines
 
 # Welcome !
 
-Thank you for taking the time to read this guide for the package's contribution. I'm glad to know that you may bring a lot to the IPFML package. This document will show you the good development practices used in the project and how you can easily participate in its evolution!
+Thank you for taking the time to read this guide for the package's contribution. I'm glad to know that you may bring a lot to the **Macop** package. This document will show you the good development practices used in the project and how you can easily participate in its evolution!
 
 # Table of contents
 
@@ -71,18 +71,16 @@ Note that the **yapf** package is used during build process of **macop** package
 In order to allow quick access to the code, the project follows the documentation conventions (docstring) proposed by Google. Here is an example:
 
 ```python
-'''Divide image into equal size blocks
+''' Binary integer solution class
 
-  Args:
-      image: PIL Image or Numpy array
-      block: tuple (width, height) representing the size of each dimension of the block
-      pil: block type returned (default True)
+    - store solution as a binary array (example: [0, 1, 0, 1, 1])
+    - associated size is the size of the array
+    - mainly use for selecting or not an element in a list of valuable objects
 
-  Returns:
-      list containing all 2D Numpy blocks (in RGB or not)
-
-  Raises:
-      ValueError: If `image_width` or `image_heigt` are not compatible to produce correct block sizes
+    Attributes:
+       data: {ndarray} --  array of binary values
+       size: {int} -- size of binary array values
+       score: {float} -- fitness score value
 '''
 ```
 
@@ -101,26 +99,24 @@ This project use the [doctest](https://docs.python.org/3/library/doctest.html) p
 
 # TODO : add custom example
 ```python
-"""Cauchy noise filter to apply on image
-
-  Args:
-      image: image used as input (2D or 3D image representation)
-      n: used to set importance of noise [1, 999]
-      identical: keep or not identical noise distribution for each canal if RGB Image (default False)
-      distribution_interval: set the distribution interval of normal law distribution (default (0, 1))
-      k: variable that specifies the amount of noise to be taken into account in the output image (default 0.0002)
-
-  Returns:
-      2D Numpy array with Cauchy noise applied
-
-  Example:
-
-  >>> from ipfml.filters.noise import cauchy_noise
-  >>> import numpy as np
-  >>> image = np.random.uniform(0, 255, 10000).reshape((100, 100))
-  >>> noisy_image = cauchy_noise(image, 10)
-  >>> noisy_image.shape
-  (100, 100)
+""" Initialise binary solution using specific data
+
+    Args:
+        data: {ndarray} --  array of binary values
+        size: {int} -- size of binary array values
+
+    Example:
+
+    >>> from macop.solutions.discrete import BinarySolution
+    >>> # build of a solution using specific data and size
+    >>> data = [0, 1, 0, 1, 1]
+    >>> solution = BinarySolution(data, len(data))
+    >>> # check data content
+    >>> sum(solution._data) == 3
+    True
+    >>> # clone solution
+    >>> solution_copy = solution.clone()
+    >>> all(solution_copy._data == solution._data)
+    True
 """
 ```
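Doctests like the one above can be executed with Python's standard ``doctest`` module. Here is a generic, self-contained illustration (not tied to the **macop** codebase):

```python
import doctest

def add(a, b):
    """Return the sum of two numbers.

    >>> add(2, 3)
    5
    """
    return a + b

# collect and run every doctest found in the current module
results = doctest.testmod()
print(results.failed)  # 0 when every example passes
```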
 
@@ -165,4 +161,16 @@ You can also add your own labels too or add priority label:
 - prio:**normal**
 - prio:**high**
 
+Main common required issue header:
+
+```
+Package version: X.X.X
+Issue label: XXXXX
+Priority: prio:**level**
+Targeted modules: `macop.algorithms`, `macop.policies`
+Operating System: Manjaro Linux
+
+Description: XXXXX
+```
+
 Whatever the problem reported, I will thank you for your contribution to this project. So do not hesitate.

+ 2 - 2
README.md

@@ -3,7 +3,7 @@
 ![](https://img.shields.io/github/workflow/status/jbuisine/macop/build?style=flat-square) ![](https://img.shields.io/pypi/v/macop?style=flat-square) ![](https://img.shields.io/pypi/dm/macop?style=flat-square)
 
 <p align="center">
-    <img src="https://github.com/jbuisine/macop/blob/master/logo_macop.png" alt="" width="50%">
+    <img src="https://github.com/jbuisine/macop/blob/master/docs/source/_static/logo_macop.png" alt="" width="50%">
 </p>
 
 
@@ -25,7 +25,7 @@
 
 Based on all of these generic and/or implemented functionalities, the user will be able to quickly develop a solution to his problem while retaining the possibility of remaining in control of his development by overloading existing functionalities if necessary.
 
-Main idea about this Python package is that it does not implement the whole available algorithms in the literature but let the possibility to the user to quickly develop and test its own algorithms and strategies. The main objective of this package is to be the most flexible as possible and hence, to offer a maximum of implementation possibilities.
+The main idea behind this Python package is that it does not implement every algorithm in the literature, but lets the user quickly develop and test their own algorithms and strategies. The main objective of this package is to provide maximum flexibility, which allows for easy experimentation in implementation.
 
 ## Documentation
 

logo_macop.png → docs/source/_static/logo_macop.png~


+ 2 - 2
docs/source/conf.py

@@ -25,9 +25,9 @@ copyright = '2020, Jérôme BUISINE'
 author = 'Jérôme BUISINE'
 
 # The short X.Y version
-version = '1.0.6'
+version = '1.0.7'
 # The full version, including alpha/beta/rc tags
-release = 'v1.0.6'
+release = 'v1.0.7'
 
 
 # -- General configuration ---------------------------------------------------

+ 374 - 2
docs/source/documentations/algorithms.rst

@@ -1,2 +1,374 @@
-8. Optimisation process
-=======================
+Optimisation process
+=======================
+
+Let us now tackle the interesting part: the search for optimal solutions in our search space.
+
+Find local and global optima
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Overall, in an optimisation process, we seek the solution (or solutions) that minimises or maximises our objective function (the fitness score obtained) in order to solve our problem.
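To make the notion of objective function concrete, here is a tiny stand-alone sketch (illustrative only, not the Macop API) of the knapsack fitness used in the examples of this documentation:

```python
# worth of each object, as in the knapsack examples of this documentation
elements_score = [4, 2, 10, 1, 2]

def fitness(data):
    """Objective function to maximise: total worth of the selected objects (bit = 1)."""
    return sum(worth * bit for worth, bit in zip(elements_score, data))

# the optimisation process looks for the binary vector maximising this score
print(fitness([0, 1, 0, 1, 1]))  # 2 + 1 + 2 = 5
```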
+
+.. image:: ../_static/documentation/search_space.png
+   :width:  800 px
+   :align: center
+
+Sometimes the search space is very simple and a local search can reach the global optimum, as shown in figure (a) above.
+In other cases, the search space is more complex: it may be necessary to explore more, rather than exploit, in order to leave a convex zone and avoid ending up with only a local optimum instead of the global one.
+This problem is illustrated in figure (b).
+
+Abstract algorithm class
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+An abstract class is proposed within Macop to generalize the management of an algorithm and therefore of a heuristic. 
+It is located in the ``macop.algorithms.base`` module. 
+
+We will pay attention to the different methods it is composed of. This class handles some common needs of operations research algorithms:
+
+- a solution initialisation function
+- a validator function to check whether a solution is valid or not (based on some criteria)
+- an evaluation function to give a fitness score to a solution
+- operators used to update a solution during the search process
+- a policy process applied when choosing the next operator to apply
+- callback functions to run periodic tasks every given number of evaluations, or to reload the algorithm state
+- the parent algorithm associated to this new algorithm instance (hierarchy management)
+
+It is composed of a few default attributes:
+
+- initializer: {function} -- basic function strategy to initialize solution
+- evaluator: {Evaluator} -- evaluator instance used to obtain fitness (mono or multiple objectives)
+- operators: {[Operator]} -- list of operators to use when launching algorithm
+- policy: {Policy} -- Policy instance strategy to select operators
+- validator: {function} -- basic function to check if solution is valid or not under some constraints
+- maximise: {bool} -- specify kind of optimisation problem 
+- verbose: {bool} -- verbose or not information about the algorithm
+- currentSolution: {Solution} -- current solution managed for current evaluation comparison
+- bestSolution: {Solution} -- best solution found so far during running algorithm
+- callbacks: {[Callback]} -- list of Callback class implementation to do some instructions every number of evaluations and `load` when initializing algorithm
+- parent: {Algorithm} -- parent algorithm reference in case of inner Algorithm instance (optional)
+
+.. code-block:: python
+
+    class Algorithm():
+
+        def __init__(self,
+                    initializer,
+                    evaluator,
+                    operators,
+                    policy,
+                    validator,
+                    maximise=True,
+                    parent=None,
+                    verbose=True):
+            ...
+
+        def addCallback(self, callback):
+            """
+            Add new callback to algorithm specifying useful parameters
+            """
+            ...
+
+        def resume(self):
+            """
+            Resume algorithm using Callback instances
+            """
+            ...
+
+        def getParent(self):
+            """
+            Recursively find the main parent algorithm attached to the current algorithm
+            """
+            ...
+
+        def setParent(self, parent):
+            """
+            Set parent algorithm to current algorithm
+            """
+            ...
+
+
+        def initRun(self):
+            """
+            Initialize the current solution and best solution using the `initialiser` function
+            """
+            ...
+
+        def increaseEvaluation(self):
+            """
+            Increase number of evaluations once a solution is evaluated, for each dependent algorithm (parents hierarchy)
+            """
+            ...
+                
+        def getGlobalEvaluation(self):
+            """
+            Get the global number of evaluation (if inner algorithm)
+            """
+            ...
+
+        def getGlobalMaxEvaluation(self):
+            """
+            Get the global max number of evaluation (if inner algorithm)
+            """
+            ...
+
+        def stop(self):
+            """
+            Global stopping criteria (check for parents algorithm hierarchy too)
+            """
+            ...
+
+        def evaluate(self, solution):
+            """
+            Evaluate a solution using the evaluator passed when initializing the algorithm
+            """
+            ...
+
+        def update(self, solution):
+            """
+            Apply update function to solution using specific `policy`
+            Check if solution is valid after modification and returns it
+            """
+            ...
+
+        def isBetter(self, solution):
+            """
+            Check if solution is better than best found
+            """
+            ...
+
+        def run(self, evaluations):
+            """
+            Run the specific algorithm following number of evaluations to find optima
+            """
+            ...
+
+        def progress(self):
+            """
+            Log progress and apply callbacks if necessary
+            """
+            ...
+
+
+The notion of hierarchy between algorithms is introduced here. We can indeed have certain dependencies between algorithms. 
+The methods ``increaseEvaluation``, ``getGlobalEvaluation`` and ``getGlobalMaxEvaluation`` ensure that the expected global number of evaluations is correctly managed, just like the ``stop`` method for the search stop criterion.
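This bookkeeping idea can be illustrated with a stripped-down, hypothetical sketch (not the actual Macop implementation):

```python
class EvaluationCounter:
    """Minimal stand-in for the parent/inner evaluation bookkeeping described above."""

    def __init__(self, parent=None):
        self._parent = parent
        self._numberOfEvaluations = 0

    def increaseEvaluation(self):
        # each evaluation is propagated up the parents hierarchy
        self._numberOfEvaluations += 1
        if self._parent is not None:
            self._parent.increaseEvaluation()

main_algo = EvaluationCounter()
inner_algo = EvaluationCounter(parent=main_algo)

for _ in range(10):
    inner_algo.increaseEvaluation()

print(main_algo._numberOfEvaluations)  # 10: inner evaluations are counted globally
```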
+
+The ``evaluate``, ``update`` and ``isBetter`` methods will be used a lot when looking for a solution in the search space.
+In particular the ``update`` method, which calls the ``policy`` instance to generate a new valid solution.
+The ``isBetter`` method can also be overloaded, especially if the algorithm handles more than a single solution (verification over a population, for example).
+
+The ``initRun`` method specifies the way you initialise your algorithm (``bestSolution`` and ``currentSolution`` for example) if the algorithm is not already initialised.
+
+.. note:: 
+    The ``initRun`` method can also be used to initialise a population of solutions instead of only one best solution, if you want to manage a genetic algorithm.
+
+The most important part is the ``run`` method. In the abstract class, the ``run`` method only initialises the current number of evaluations of the algorithm, based on the parent algorithm if we are in an inner algorithm.
+It is always **mandatory** to call the parent class ``run`` method using ``super().run(evaluations)``. Then, using the ``evaluations`` parameter, which is the budget of evaluations to run, we can start or continue to search for solutions in the search space.
+
+.. warning::
+    The other methods such as ``addCallback``, ``resume`` and ``progress`` will be detailed in the next part focusing on the notion of callback.
+
+Local search algorithm
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+We are going to carry out our first local search algorithm within our search space. A `local search` consists of starting from a solution, then applying a mutation or crossover operation to it, in order to obtain a new one. 
+This new solution is evaluated and retained if it is better. We speak here of the notion of **neighborhood exploration**. The process then continues in the same way.
+The local search ends after a certain number of evaluations and the best evaluated solution obtained is returned.
+
+Let's implement an algorithm well known under the name of hill climber with best improvement, inheriting from the mother algorithm class, and name it ``HillClimberBestImprovment``.
+
+
+.. code-block:: python
+
+    """
+    module imports
+    """
+    from macop.algorithms.base import Algorithm
+
+    class HillClimberBestImprovment(Algorithm):
+
+        def run(self, evaluations):
+            """
+            Run a local search algorithm
+            """
+
+            # by default use of mother method to initialize variables
+            super().run(evaluations)
+
+            # initialize current solution and best solution
+            self.initRun()
+
+            solutionSize = self._currentSolution._size
+
+            # local search algorithm implementation
+            while not self.stop():
+
+                for _ in range(solutionSize):
+
+                    # update current solution using policy
+                    newSolution = self.update(self._currentSolution)
+
+                    # if better solution than currently, replace it
+                    if self.isBetter(newSolution):
+                        self._bestSolution = newSolution
+
+                    # increase number of evaluations
+                    self.increaseEvaluation()
+
+                    # stop algorithm if necessary
+                    if self.stop():
+                        break
+
+                # set new current solution using best solution found in this neighbor search
+                self._currentSolution = self._bestSolution
+            
+            return self._bestSolution
+
+Our algorithm is now ready to work. As previously, let us define two operators as well as a random choice strategy. 
+We will also need to define a **solution initialisation function** so that the algorithm can generate new solutions.
+
+
+.. code-block:: python
+
+    """
+    Problem instance definition
+    """
+    elements_score = [ 4, 2, 10, 1, 2 ] # worth of each object
+    elements_weight = [ 12, 1, 4, 1, 2 ] # weight of each object
+
+    # evaluator instance
+    evaluator = KnapsackEvaluator(data={'worths': elements_score})
+
+    # valid instance using lambda
+    validator = lambda solution: sum([ elements_weight[i] * solution._data[i] for i in range(len(solution._data))]) <= 15
+    
+    # initialiser instance using lambda with default param value
+    initialiser = lambda x=5: BinarySolution.random(x, validator)
+    
+    # operators list with crossover and mutation
+    operators = [SimpleCrossover(), SimpleMutation()]
+    
+    # policy random instance
+    policy = RandomPolicy(operators)
+    
+    # maximizing algorithm (relative to knapsack problem)
+    algo = HillClimberBestImprovment(initialiser, evaluator, operators, policy, validator, maximise=True, verbose=False)
+
+    # run the algorithm and get solution found
+    solution = algo.run(100)
+    print(solution.fitness())
+
+
+.. note::
+    The ``verbose`` algorithm parameter will log the advancement process of the algorithm into the console if set to ``True`` (the default value).
+
+Exploratory algorithm
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+As explained with **figure (b)** earlier in this section, the search space is sometimes more complicated due to convex parts and requires a heuristic with a different strategy than a simple local search.
+
+The way to counter this problem is to allow the algorithm to exit the exploitation phase offered by the local search and instead explore other parts of the search space. This is possible by simply carrying out several local searches within our budget (number of evaluations).
+
+The idea is to make a leap in the search space in order to find a new local optimum which can be the global optimum. The explained process is illustrated below:
+
+.. image:: ../_static/documentation/search_space_simple.png
+   :width:  400 px
+   :align: center
+
+
+We are going to implement a more specific algorithm which takes a new parameter as input: a local search, the one previously developed. For that, we will have to modify the constructor a little.
+Let's call this new algorithm ``IteratedLocalSearch``:
+
+.. code-block:: python
+
+    """
+    module imports
+    """
+    from macop.algorithms.base import Algorithm
+
+    class IteratedLocalSearch(Algorithm):
+        
+        def __init__(self,
+                    initializer,
+                    evaluator,
+                    operators,
+                    policy,
+                    validator,
+                    localSearch,
+                    maximise=True,
+                    parent=None,
+                    verbose=True):
+            
+            super().__init__(initializer, evaluator, operators, policy, validator, maximise, parent, verbose)
+
+            # specific local search associated with current algorithm
+            self._localSearch = localSearch
+
+            # need to attach current algorithm as parent
+            self._localSearch.setParent(self)
+
+
+        def run(self, evaluations, ls_evaluations=100):
+            """
+            Run the iterated local search algorithm using local search
+            """
+
+            # by default use of mother method to initialize variables
+            super().run(evaluations)
+
+            # initialize current solution
+            self.initRun()
+
+            # local search algorithm implementation
+            while not self.stop():
+
+                # create and search solution from local search (stop method can be called inside local search)
+                newSolution = self._localSearch.run(ls_evaluations)
+
+                # if better solution than currently, replace it
+                if self.isBetter(newSolution):
+                    self._bestSolution = newSolution
+
+                self.information()
+
+            return self._bestSolution
+
+In the initialisation phase we have attached the local search passed as a parameter to the current algorithm as its parent.
+The goal is to keep track of the overall number of search evaluations (relative to the parent algorithm).
+
+Then, we use this local search in our ``run`` method to allow a better search for solutions.
+
+.. code-block:: python
+
+    """
+    Problem instance definition
+    """
+    elements_score = [ 4, 2, 10, 1, 2 ] # worth of each object
+    elements_weight = [ 12, 1, 4, 1, 2 ] # weight of each object
+
+    # evaluator instance
+    evaluator = KnapsackEvaluator(data={'worths': elements_score})
+
+    # valid instance using lambda
+    validator = lambda solution: sum([ elements_weight[i] * solution._data[i] for i in range(len(solution._data))]) <= 15
+    
+    # initialiser instance using lambda with default param value
+    initialiser = lambda x=5: BinarySolution.random(x, validator)
+    
+    # operators list with crossover and mutation
+    operators = [SimpleCrossover(), SimpleMutation()]
+    
+    # policy random instance
+    policy = RandomPolicy(operators)
+    
+    # maximizing algorithm (relative to knapsack problem)
+    localSearch = HillClimberBestImprovment(initialiser, evaluator, operators, policy, validator, maximise=True, verbose=False)
+    algo = IteratedLocalSearch(initialiser, evaluator, operators, policy, validator, localSearch=localSearch, maximise=True, verbose=False)
+
+    # run the algorithm using local search and get solution found 
+    solution = algo.run(evaluations=100, ls_evaluations=10)
+    print(solution.fitness())
+
+
+.. note:: 
+    These two last algorithms are available in the library within the module ``macop.algorithms.mono``.
+
+We have one final feature to explore in the next part. This is the notion of ``callback``.

+ 233 - 2
docs/source/documentations/callbacks.rst

@@ -1,2 +1,233 @@
-9. Keep track
-==============
+Keep track
+==============
+
+Keeping track of a running algorithm can be useful on two levels: first, to understand how it unfolded after a classic run; but also in the case of an unwanted shutdown of the algorithm.
+This section introduces the recovery of the algorithm thanks to a continuous backup functionality.
+
+Logging into algorithm
+~~~~~~~~~~~~~~~~~~~~~~
+
+Some logs can be retrieved after running an algorithm. **Macop** uses the ``logging`` Python package to log the algorithm's advancement.
+
+Here is an example of use when running an algorithm:
+
+.. code-block:: python
+
+    """
+    basic imports
+    """
+    import logging
+
+    # logging configuration
+    logging.basicConfig(format='%(asctime)s %(message)s', filename='data/example.log', level=logging.DEBUG)
+
+    ...
+    
+    # maximizing algorithm (relative to knapsack problem)
+    algo = HillClimberBestImprovment(initialiser, evaluator, operators, policy, validator, maximise=True, verbose=False)
+
+    # run the algorithm using local search and get solution found 
+    solution = algo.run(evaluations=100)
+    print(solution.fitness())
+
+Hence, log data are saved into ``data/example.log`` in our example.
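As a quick self-contained illustration of this logging pattern (using a temporary file instead of ``data/example.log``):

```python
import logging
import os
import tempfile

logfile = os.path.join(tempfile.gettempdir(), 'macop_example.log')

# same configuration pattern as above; `force=True` (Python >= 3.8) resets any
# previously installed handlers so the file handler is always attached
logging.basicConfig(format='%(asctime)s %(message)s',
                    filename=logfile,
                    level=logging.DEBUG,
                    force=True)

logging.info('algorithm started')

# the message is now written to the log file
with open(logfile) as f:
    print('algorithm started' in f.read())  # True
```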
+
+Callbacks introduction
+~~~~~~~~~~~~~~~~~~~~~~~
+
+Having output logs can help to understand an error that has occurred; however, all the progress of the search carried out may be lost.
+For this, the functionality relating to callbacks has been developed.
+
+Within **Macop**, a callback is a specific instance of ``macop.callbacks.Callback`` that allows you to perform an action of tracing / saving information **every** ``n`` **evaluations** but also reloading information if necessary when restarting an algorithm.
+
+
+.. code-block:: python
+
+    class Callback():
+
+        def __init__(self, every, filepath):
+            ...
+
+        @abstractmethod
+        def run(self):
+            """
+            Check if necessary to do backup based on `every` variable
+            """
+            pass
+
+        @abstractmethod
+        def load(self):
+            """
+            Load last backup line of solution and set algorithm state at this backup
+            """
+            pass
+
+        def setAlgo(self, algo):
+            """
+            Specify the main algorithm instance reference
+            """
+            ...
+
+
+- The ``run`` method will be called during the run process of the algorithm and do a backup every specific number of evaluations.
+- The ``load`` method will be used to reload the state of the algorithm from the last information saved. All saved data is stored in a file whose name is specified by the user.
+
+Towards the use of Callbacks
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+We are going to create our own Callback instance called ``BasicCheckpoint`` which will save the best solution found and the number of evaluations done, in order to reload them for the next run of our algorithm.
+
+.. code-block:: python
+
+    """
+    module imports
+    """
+    import os
+
+    import numpy as np
+
+    from macop.callbacks.base import Callback
+
+
+    class BasicCheckpoint(Callback):
+        
+        def run(self):
+            """
+            Check if necessary to do backup based on `every` variable
+            """
+            # get current best solution
+            solution = self._algo._bestSolution
+
+            currentEvaluation = self._algo.getGlobalEvaluation()
+
+            # backup if necessary every number of evaluations
+            if currentEvaluation % self._every == 0:
+
+                # create specific line with solution data
+                solutionData = ""
+                solutionSize = len(solution._data)
+
+                for index, val in enumerate(solution._data):
+                    solutionData += str(val)
+
+                    if index < solutionSize - 1:
+                        solutionData += ' '
+
+                # number of evaluations done, solution data and fitness score
+                line = str(currentEvaluation) + ';' + solutionData + ';' + str(
+                    solution.fitness()) + ';\n'
+
+                # check if file exists
+                if not os.path.exists(self._filepath):
+                    with open(self._filepath, 'w') as f:
+                        f.write(line)
+                else:
+                    with open(self._filepath, 'a') as f:
+                        f.write(line)
+
+        def load(self):
+            """
+            Load last backup line and set algorithm state (best solution and evaluations)
+            """
+            if os.path.exists(self._filepath):
+
+                with open(self._filepath) as f:
+
+                    # get last line and read data
+                    lastline = f.readlines()[-1]
+                    data = lastline.split(';')
+
+                    # get evaluation information
+                    globalEvaluation = int(data[0])
+
+                    # restore number of evaluations
+                    if self._algo.getParent() is not None:
+                        self._algo.getParent()._numberOfEvaluations = globalEvaluation
+                    else:
+                        self._algo._numberOfEvaluations = globalEvaluation
+
+                    # get best solution data information
+                    solutionData = list(map(int, data[1].split(' ')))
+
+                    # avoid uninitialized solution
+                    if self._algo._bestSolution is None:
+                        self._algo._bestSolution = self._algo._initializer()
+
+                    # set the latest obtained best solution to the algorithm
+                    self._algo._bestSolution._data = np.array(solutionData)
+                    self._algo._bestSolution._score = float(data[2])
+
+
+In this way, it is possible to specify the use of a callback to our algorithm instance:
+
+
+.. code-block:: python
+
+    ...
+    
+    # maximizing algorithm (relative to knapsack problem)
+    algo = HillClimberBestImprovment(initialiser, evaluator, operators, policy, validator, maximise=True, verbose=False)
+
+    callback = BasicCheckpoint(every=5, filepath='data/hillClimberBackup.csv')
+
+    # add callback into callback list
+    algo.addCallback(callback)
+
+    # run the algorithm using local search and get solution found 
+    solution = algo.run(evaluations=100)
+    print(solution.fitness())
+
+
+.. note::
+    It is possible to add as many callbacks as desired in the algorithm in question.
+
+
+Previously, some methods of the abstract ``Algorithm`` class were not presented. These methods are linked to the use of callbacks,
+in particular the ``addCallback`` method, which allows the addition of a callback to an algorithm instance as seen above.
+
+- The ``resume`` method will reload the state from every callback in the list using their ``load`` method.
+- The ``progress`` method will ``run`` each callback during the algorithm search.
+
+If we want to benefit from this functionality, we will need to use these methods within our algorithm. Let's make the necessary modifications to our ``IteratedLocalSearch`` algorithm:
+
+
+.. code-block:: python
+
+    """
+    module imports
+    """
+    from macop.algorithms.base import Algorithm
+
+    class IteratedLocalSearch(Algorithm):
+        
+        ...
+
+        def run(self, evaluations, ls_evaluations=100):
+            """
+            Run the iterated local search algorithm using local search
+            """
+
+            # by default, use the parent method to initialize variables
+            super().run(evaluations)
+
+            # initialize current solution
+            self.initRun()
+
+            # restart using callbacks backup list
+            self.resume()
+
+            # local search algorithm implementation
+            while not self.stop():
+
+                # create and search solution from local search
+                newSolution = self._localSearch.run(ls_evaluations)
+
+                # if better solution than currently, replace it
+                if self.isBetter(newSolution):
+                    self._bestSolution = newSolution
+
+                # check if necessary to call each callbacks
+                self.progress()
+
+                self.information()
+
+            return self._bestSolution
+
+
+All the features of **Macop** have now been presented. The next section quickly presents the few implementations proposed within **Macop** to highlight the modularity of the package.

+ 3 - 3
docs/source/documentations/evaluators.rst

@@ -1,9 +1,9 @@
-5. Use of evaluators
+Use of evaluators
 ====================
 
 Now that it is possible to generate a solution (randomly or not), it is important to know the value associated with this solution. We then speak of the evaluation of the solution, with the score associated with it: the `fitness`.
 
-5.1. Generic evaluator
+Generic evaluator
 ~~~~~~~~~~~~~~~~~~~~~~
 
 As for the management of solutions, a generic evaluator class ``macop.evaluators.base.Evaluator`` is developed within **Macop**:
@@ -38,7 +38,7 @@ Abstract Evaluator class is used for computing fitness score associated to a sol
 
 We must therefore now create our own evaluator based on the proposed structure.
 
-5.2. Custom evaluator
+Custom evaluator
 ~~~~~~~~~~~~~~~~~~~~~
 
 To create our own evaluator, we need both:

+ 1 - 0
docs/source/documentations/index.rst

@@ -11,6 +11,7 @@ It will gradually take up the major ideas developed within **Macop** to allow fo
 
 .. toctree::
    :maxdepth: 1
+   :numbered:
    :caption: Contents:
 
    introduction

Diff file not shown because it is too large
+ 1 - 1
docs/source/documentations/introduction.rst


+ 5 - 6
docs/source/documentations/operators.rst

@@ -1,9 +1,9 @@
-6. Apply operators to solution
+Apply operators to solution
 ==============================
 
 Applying an operator to a solution consists of modifying the current state of the solution in order to obtain a new one. The goal is to find a better solution in the search space.
 
-6.1. Operators definition
+Operators definition
 ~~~~~~~~~~~~~~~~~~~~~~~~~
 
 In the discrete optimisation literature, we can categorise operators into two sections:
@@ -69,7 +69,7 @@ Like the evaluator, the operator keeps **track of the algorithm** (using ``setAl
 
 We will now detail these categories of operators and suggest some relative to our problem.
 
-6.2. Mutator operator
+Mutator operator
 ~~~~~~~~~~~~~~~~~~~~~
 
 As detailed, the mutation operator consists in having a minimum impact on the current state of our solution. Here is an example of a modification that could be done for our problem.
@@ -134,7 +134,7 @@ We can now instanciate our new operator in order to obtain a new solution:
     The developed ``SimpleBinaryMutation`` is available into ``macop.operators.discrete.mutators.SimpleBinaryMutation`` in **Macop**.
 
 
-6.3. Crossover operator
+Crossover operator
 ~~~~~~~~~~~~~~~~~~~~~~~
 
 
@@ -195,7 +195,6 @@ We can now use the crossover operator created to generate new solutions. Here is
 
 .. warning::
     The developed ``SimpleCrossover`` is available into ``macop.operators.discrete.crossovers.SimpleCrossover`` in **Macop**.
-    However, the choice of halves of the merged data is made randomly. In addition, the second solution can be omitted, 
-    by default the operator will crossover between ``solution1`` and the best current solution of the algorithm.
+    However, the choice of halves of the merged data is made randomly.
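As a self-contained illustration of the half-split idea (a simplified sketch with a fixed split point, whereas Macop's ``SimpleCrossover`` chooses the merged halves randomly):

```python
def half_split_crossover(parent1, parent2):
    """Build a child from the first half of parent1 and the second half of parent2.

    Simplified sketch: the real operator picks the merged halves randomly.
    """
    split = len(parent1) // 2
    return parent1[:split] + parent2[split:]
```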
 
 The next part introduces the ``policy`` feature of **Macop**, which enables choosing the next operator to apply during the search process based on a specific criterion.

+ 16 - 2
docs/source/documentations/others.rst

@@ -1,2 +1,16 @@
-10. Others features
-===================
+Implementation examples
+=======================
+
+Within the API of **Macop**, you can find an implementation of the multi-objective evolutionary algorithm based on decomposition (MOEA/D), a general-purpose algorithm for approximating the Pareto set of multi-objective optimization problems. 
+It decomposes the original multi-objective problem into a number of single-objective optimization sub-problems and then uses an evolutionary process to optimize these sub-problems simultaneously and cooperatively. 
+MOEA/D is a state-of-the-art algorithm among aggregation-based approaches for multi-objective optimization.
+
+.. image:: ../_static/documentation/search_space_moead.png
+   :width:  400 px
+   :align: center
+
+
+As illustrated above, the two main objectives are subdivided into 5 single-objective optimization sub-problems in order to find the Pareto front.
+
+- ``macop.algorithms.multi.MOSubProblem`` class defines each sub-problem of MOEA/D.
+- ``macop.algorithms.multi.MOEAD`` class exploits ``MOSubProblem`` and implements MOEA/D using weighted-sum of objectives method.
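The weighted-sum decomposition idea can be sketched independently of the Macop API (illustrative helper names, a bi-objective problem, and evenly spaced weight vectors):

```python
def weighted_sum(objectives, weights):
    # scalarise a multi-objective score into a single value
    return sum(o * w for o, w in zip(objectives, weights))


def uniform_weight_vectors(n_subproblems):
    # evenly spaced weight vectors for a bi-objective problem:
    # each vector defines one single-objective sub-problem
    step = 1 / (n_subproblems - 1)
    return [(i * step, 1 - i * step) for i in range(n_subproblems)]
```

Each weight vector turns the original bi-objective problem into one scalar sub-problem, which is the decomposition step MOEA/D relies on.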

+ 95 - 6
docs/source/documentations/policies.rst

@@ -1,17 +1,106 @@
-7. Operator choices
+Operator choices
 ===================
 
 The ``policy`` feature of **Macop** enables choosing the next operator to apply during the search process of the algorithm based on a specific criterion.
 
-7.1. Why using policy ?
+Why use a policy?
 ~~~~~~~~~~~~~~~~~~~~~~~
 
 Sometimes the nature of the problem and its instance can strongly influence the search results when using mutation operators or crossovers. 
-Automated operator choice strategies have been developed in the literature, notably based on reinforcement learning.
+Automated operator choice strategies have also been developed in the literature, notably based on reinforcement learning.
+
+The operator choice problem can be seen as the search, at each evaluation, for the solution generation operator that is most likely to improve the current solution.
+
+.. image:: ../_static/documentation/operators_choice.png
+   :width:  400 px
+   :align: center
 
 .. note::
-    An implementation by reinforcement has been developed as an example in the ``macop.policies.reinforcement`` module. 
+    An implementation using reinforcement learning has been developed as an example in the ``macop.policies.reinforcement`` module. 
     However, it will not be detailed here. You can refer to the API documentation for more details.
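To give the intuition behind such reinforcement-based strategies, here is a hedged, standalone sketch of UCB-style operator selection (illustrative only, not the actual ``UCBPolicy`` implementation):

```python
import math

def ucb_select(rewards, counts, c=math.sqrt(2)):
    """Pick the operator index maximising the UCB score.

    rewards: cumulated reward per operator; counts: times each was used.
    """
    # try every operator at least once before exploiting
    for index, used in enumerate(counts):
        if used == 0:
            return index
    total = sum(counts)
    # exploitation term (average reward) plus exploration bonus
    scores = [r / n + c * math.sqrt(math.log(total) / n)
              for r, n in zip(rewards, counts)]
    return max(range(len(scores)), key=scores.__getitem__)
```

Operators that have performed well are favoured, while rarely tried operators keep a chance of being selected thanks to the exploration term.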
 
-7.2. Custom policy
-~~~~~~~~~~~~~~~~~~
+
+Custom policy
+~~~~~~~~~~~~~~~~~~
+
+In our case, we are not going to use a particularly complex ``policy`` implementation: we will simply choose an operator at random.
+
+First, let's take a look at the ``Policy`` abstract class available in ``macop.policies.base``:
+
+.. code-block:: python
+
+    class Policy():
+
+        def __init__(self, operators):
+            self._operators = operators
+
+        @abstractmethod
+        def select(self):
+            """
+            Select specific operator
+            """
+            pass
+
+        def apply(self, solution):
+            """
+            Apply specific operator to create new solution, compute its fitness and return it
+            """
+            ...
+
+        def setAlgo(self, algo):
+            """
+            Keep into policy reference of the whole algorithm
+            """
+            ...
+
+
+A ``Policy`` instance keeps an ``_operators`` attribute in order to keep track of the possible operators when selecting one. 
+In our implementation we only need to define the abstract ``select`` method; the ``apply`` method will select the next operator and return the new solution.
+
+.. code-block:: python
+
+    """
+    module imports
+    """
+    import random
+
+    from macop.policies.base import Policy
+
+    class RandomPolicy(Policy):
+
+        def select(self):
+            """
+            Select specific operator
+            """
+            # choose operator randomly
+            index = random.randint(0, len(self._operators) - 1)
+            return self._operators[index]
+
+
+We can now use this operator choice policy to update our current solution:
+
+
+.. code-block:: python
+
+    """
+    Operators instances
+    """
+    mutator = SimpleMutation()
+    crossover = SimpleCrossover()
+
+    """
+    RandomPolicy instance
+    """
+    policy = RandomPolicy([mutator, crossover])
+
+    """
+    Current solutions instance
+    """
+    solution1 = BinarySolution.random(5)
+    solution2 = BinarySolution.random(5)
+
+    # pass two solutions as parameters in case the selected operator is a crossover
+    new_solution = policy.apply(solution1, solution2)
+
+.. warning::
+    By default, if the ``solution2`` parameter is not provided to the ``policy.apply`` method for a crossover, the best known solution from the algorithm linked to the ``policy`` is used.
+
+Updating solutions is therefore now possible with our policy. It is high time to dive into the process of optimising solutions and digging into our search space.
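To see the random-choice idea in action outside of Macop, a minimal self-contained variant (toy operator functions instead of ``Operator`` instances) could look like:

```python
import random

class TinyRandomPolicy:
    """Toy policy: picks one operator function at random and applies it."""
    def __init__(self, operators):
        self._operators = operators

    def select(self):
        # uniform random choice among available operators
        return random.choice(self._operators)

    def apply(self, solution):
        return self.select()(solution)


def flip_first_bit(bits):
    # toy mutation: invert the first element
    return [1 - bits[0]] + bits[1:]


def reverse_bits(bits):
    # toy operator: reverse the representation
    return list(reversed(bits))
```

Each call to ``apply`` produces a new candidate solution from one randomly chosen operator, which is exactly the role the policy plays inside the search loop.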

+ 3 - 3
docs/source/documentations/problem.rst

@@ -1,9 +1,9 @@
-2. Problem instance
+Problem instance
 ===================
 
 In this tutorial, we introduce how to use **Macop** and quickly run your algorithm on the well-known `knapsack` problem.
 
-2.1 Problem definition
+Problem definition
 ~~~~~~~~~~~~~~~~~~~~~~
 
 The **knapsack problem** is a problem in combinatorial optimisation: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible.
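The evaluation of a candidate selection can be sketched in a few lines (an illustrative helper, not the actual ``KnapsackEvaluator`` API):

```python
def knapsack_fitness(selection, values, weights, capacity):
    """Total value of the selected items, or 0 if the weight limit is exceeded.

    selection is a binary list: 1 means the item is taken.
    """
    total_weight = sum(w for taken, w in zip(selection, weights) if taken)
    if total_weight > capacity:
        return 0  # invalid selection: no value
    return sum(v for taken, v in zip(selection, values) if taken)
```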
@@ -22,7 +22,7 @@ In this problem, we try to optimise the value associated with the objects we wis
     It is a combinatorial and therefore discrete problem. **Macop** decomposes its package into two parts, which is related to discrete optimisation on the one hand, and continuous optimisation on the other hand. This will be detailed later.
 
 
-2.2 Problem implementation
+Problem implementation
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 During the whole tutorial, the example used is based on the previous illustration with:

+ 5 - 5
docs/source/documentations/solutions.rst

@@ -1,11 +1,11 @@
-3. Solutions
+Solutions
 =============
 
 Representing a solution to a specific problem is very important in an optimisation process. In this example, we will always use the **knapsack problem** as a basis.
 
 As a first step, the management of solutions by the **Macop** package will be presented. Then a specific implementation for the current problem will be detailed.
 
-3.1. Generic Solution
+Generic Solution
 ~~~~~~~~~~~~~~~~~~~~~~~~~
 
 Inside ``macop.solutions.base`` module of `Macop`, the ``Solution`` class is available. It's an abstract solution class structure which:
@@ -67,13 +67,13 @@ Allowing to initialize it randomly or not (using constructor or ``random`` metho
 
 We will now see how to define a type of solution specific to our problem.
 
-3.2. Solution representation for knapsack
+Solution representation for knapsack
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 We will now use the abstract ``Solution`` type available in the ``macop.solutions.base`` module in order to define our own solution.
 First of all, let's look at the representation of our knapsack problem. **How to represent the solution?**
 
-3.2.1. Knapsack solution
+Knapsack solution
 ************************
 
 A valid solution can be shown below where the sum of the object weights is 15 and the sum of the selected objects values is 8 (its fitness):
@@ -90,7 +90,7 @@ Its representation can be translate as a **binary array** with value:
 
 where selected objects have **1** as value otherwise **0**.
 
-3.2.2. Binary Solution
+Binary Solution
 **********************
 
 We will now define our own type of solution by inheriting from ``macop.solutions.base.Solution``, which we will call ``BinarySolution``.

+ 3 - 3
docs/source/documentations/validator.rst

@@ -1,10 +1,10 @@
-4. Validate a solution
+Validate a solution
 ======================
 
 When an optimisation problem requires respecting certain constraints, Macop allows you to quickly verify that a solution is valid. 
 It is based on a defined function taking a solution as input and returning the validity criterion (true or false).
 
-4.1. Validator definition
+Validator definition
 ~~~~~~~~~~~~~~~~~~~~~~~~~
 
 An invalid solution can be shown below where the sum of the object weights is greater than 15:
@@ -41,7 +41,7 @@ To avoid taking into account invalid solutions, we can define our function which
         return weight_sum <= 15
 
 
-4.2. Use of validator
+Use of validator
 ~~~~~~~~~~~~~~~~~~~~~
 
 We can now generate solutions randomly by passing our validation function as a parameter:
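The diff excerpt stops here; conceptually, validated random generation amounts to retrying until the validator accepts a candidate (a standalone sketch, not the actual ``BinarySolution.random`` signature):

```python
import random

def random_valid_solution(size, validator, max_tries=1000):
    """Draw random binary solutions until one satisfies the validator."""
    for _ in range(max_tries):
        candidate = [random.randint(0, 1) for _ in range(size)]
        if validator(candidate):
            return candidate
    raise RuntimeError("no valid solution found within max_tries")
```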

+ 4 - 2
docs/source/index.rst

@@ -8,14 +8,16 @@ Minimalist And Customisable Optimisation Package
 What's **Macop** ?
 ------------------
 
-**Macop** is a discrete optimisation Python package which not implement the whole available algorithms in the literature but let you the possibility to quickly develop and test your own algorithm and strategies. The main objective of this package is to be the most flexible as possible and hence, to offer a maximum of implementation possibilities.
+**Macop** is a discrete optimisation Python package which doesn't implement every algorithm in the literature but provides the ability to quickly develop and test your own algorithms and strategies. The main objective of this package is to provide maximum flexibility, allowing for easy experimentation in implementation.
 
 .. toctree::
-   :maxdepth: 2
+   :maxdepth: 1
    :caption: Contents:
 
    description
+
    documentations/index
+
    api
    contributing
 

+ 27 - 0
macop/__init__.py

@@ -0,0 +1,27 @@
+from .algorithms import base
+from .algorithms import mono
+from .algorithms import multi
+
+from .callbacks import base
+from .callbacks import classicals
+from .callbacks import multi
+
+from .evaluators import base
+from .evaluators.discrete import mono
+from .evaluators.discrete import multi
+
+from .operators import base
+from .operators.discrete import mutators
+from .operators.discrete import crossovers
+from .operators.continuous import mutators
+from .operators.continuous import crossovers
+
+from .policies import base
+from .policies import classicals
+from .policies import reinforcement
+
+from .solutions import base
+from .solutions import continuous
+from .solutions import discrete
+
+from .utils import progress

+ 3 - 18
macop/algorithms/base.py

@@ -4,7 +4,7 @@
 # main imports
 import logging
 import sys, os
-from ..utils.progress import macop_text, macop_line, macop_progress
+from macop.utils.progress import macop_text, macop_line, macop_progress
 
 
 # Generic algorithm class
@@ -23,9 +23,9 @@ class Algorithm():
 
     Attributes:
         initializer: {function} -- basic function strategy to initialize solution
-        evaluator: {function} -- basic function in order to obtained fitness (mono or multiple objectives)
+        evaluator: {Evaluator} -- evaluator instance in order to obtain the fitness (mono or multiple objectives)
         operators: {[Operator]} -- list of operator to use when launching algorithm
-        policy: {Policy} -- Policy class implementation strategy to select operators
+        policy: {Policy} -- Policy implementation strategy to select operators
         validator: {function} -- basic function to check if solution is valid or not under some constraints
         maximise: {bool} -- specify kind of optimisation problem 
         verbose: {bool} -- verbose or not information about the algorithm
@@ -307,20 +307,5 @@ class Algorithm():
     def information(self):
         logging.info(f"-- Best {self._bestSolution} - SCORE {self._bestSolution.fitness()}")
 
-    # def __blockPrint(self):
-    #     """Private method which disables console prints when running algorithm if specified when instancing algorithm
-    #     """
-    #     sys.stdout = open(os.devnull, 'w')
-
-    # def __enablePrint(self):
-    #     """Private which enables console prints when running algorithm
-    #     """
-    #     sys.stdout = sys.__stdout__
-
-    # def __del__(self):
-    #     # enable prints when object is deleted
-    #     if not self._verbose:
-    #         self.__enablePrint()
-
     def __str__(self):
         return f"{type(self).__name__} using {type(self._bestSolution).__name__}"

+ 3 - 3
macop/algorithms/mono.py

@@ -5,7 +5,7 @@
 import logging
 
 # module imports
-from .base import Algorithm
+from macop.algorithms.base import Algorithm
 
 
 class HillClimberFirstImprovment(Algorithm):
@@ -17,7 +17,7 @@ class HillClimberFirstImprovment(Algorithm):
 
     Attributes:
         initalizer: {function} -- basic function strategy to initialize solution
-        evaluator: {function} -- basic function in order to obtained fitness (mono or multiple objectives)
+        evaluator: {Evaluator} -- evaluator instance in order to obtain the fitness (mono or multiple objectives)
         operators: {[Operator]} -- list of operator to use when launching algorithm
         policy: {Policy} -- Policy class implementation strategy to select operators
         validator: {function} -- basic function to check if solution is valid or not under some constraints
@@ -116,7 +116,7 @@ class HillClimberBestImprovment(Algorithm):
 
     Attributes:
         initalizer: {function} -- basic function strategy to initialize solution
-        evaluator: {function} -- basic function in order to obtained fitness (mono or multiple objectives)
+        evaluator: {Evaluator} -- evaluator instance in order to obtained fitness (mono or multiple objectives)
         operators: {[Operator]} -- list of operator to use when launching algorithm
         policy: {Policy} -- Policy class implementation strategy to select operators
         validator: {function} -- basic function to check if solution is valid or not under some constraints

+ 3 - 3
macop/algorithms/multi.py

@@ -6,11 +6,11 @@ import logging
 import math
 import numpy as np
 import sys
-from ..utils.progress import macop_text, macop_line, macop_progress
+from macop.utils.progress import macop_text, macop_line, macop_progress
 
 # module imports
-from .base import Algorithm
-from ..evaluators.discrete.multi import WeightedSum
+from macop.algorithms.base import Algorithm
+from macop.evaluators.discrete.multi import WeightedSum
 
 
 class MOEAD(Algorithm):

+ 2 - 2
macop/callbacks/classicals.py

@@ -7,8 +7,8 @@ import logging
 import numpy as np
 
 # module imports
-from .base import Callback
-from ..utils.progress import macop_text, macop_line
+from macop.callbacks.base import Callback
+from macop.utils.progress import macop_text, macop_line
 
 
 class BasicCheckpoint(Callback):

+ 2 - 2
macop/callbacks/multi.py

@@ -7,8 +7,8 @@ import logging
 import numpy as np
 
 # module imports
-from .base import Callback
-from ..utils.progress import macop_text, macop_line
+from macop.callbacks.base import Callback
+from macop.utils.progress import macop_text, macop_line
 
 
 class MultiCheckpoint(Callback):

+ 2 - 2
macop/callbacks/policies.py

@@ -7,8 +7,8 @@ import logging
 import numpy as np
 
 # module imports
-from .base import Callback
-from ..utils.progress import macop_text, macop_line
+from macop.callbacks.base import Callback
+from macop.utils.progress import macop_text, macop_line
 
 
 class UCBCheckpoint(Callback):

+ 1 - 1
macop/evaluators/discrete/mono.py

@@ -1,7 +1,7 @@
 """Knapsack evaluators classes
 """
 # main imports
-from ..base import Evaluator
+from macop.evaluators.base import Evaluator
 
 
 class KnapsackEvaluator(Evaluator):

+ 1 - 1
macop/evaluators/discrete/multi.py

@@ -1,7 +1,7 @@
 """Multi-objective evaluators classes 
 """
 # main imports
-from ..base import Evaluator
+from macop.evaluators.base import Evaluator
 
 
 class WeightedSum(Evaluator):

+ 1 - 1
macop/operators/continuous/crossovers.py

@@ -5,5 +5,5 @@ import random
 import sys
 
 # module imports
-from ..base import Crossover
+from macop.operators.base import Crossover
 

+ 1 - 1
macop/operators/continuous/mutators.py

@@ -5,4 +5,4 @@ import random
 import sys
 
 # module imports
-from ..base import Mutation
+from macop.operators.base import Mutation

+ 5 - 11
macop/operators/discrete/crossovers.py

@@ -5,7 +5,7 @@ import random
 import sys
 
 # module imports
-from ..base import Crossover
+from macop.operators.base import Crossover
 
 
 class SimpleCrossover(Crossover):
@@ -68,11 +68,8 @@ class SimpleCrossover(Crossover):
         # copy data of solution
         firstData = solution1._data.copy()
 
-        # get best solution from current algorithm
-        if solution2 is None:
-            copy_solution = self._algo._bestSolution.clone()
-        else:
-            copy_solution = solution2.clone()
+        # copy of solution2 as output solution
+        copy_solution = solution2.clone()
 
         splitIndex = int(size / 2)
 
@@ -143,11 +140,8 @@ class RandomSplitCrossover(Crossover):
         # copy data of solution
         firstData = solution1._data.copy()
 
-        # get best solution from current algorithm
-        if solution2 is None:
-            copy_solution = self._algo._bestSolution.clone()
-        else:
-            copy_solution = solution2.clone()
+        # copy of solution2 as output solution
+        copy_solution = solution2.clone()
 
         splitIndex = random.randint(0, size)
 

+ 1 - 1
macop/operators/discrete/mutators.py

@@ -5,7 +5,7 @@ import random
 import sys
 
 # module imports
-from ..base import Mutation
+from macop.operators.base import Mutation
 
 
 class SimpleMutation(Mutation):

+ 23 - 6
macop/policies/base.py

@@ -4,6 +4,9 @@ import logging
 from abc import abstractmethod
 
 
+from macop.operators.base import KindOperator
+
+
 # define policy to choose `operator` function at current iteration
 class Policy():
     """Abstract class which is used for applying strategy when selecting and applying operator 
@@ -24,12 +27,13 @@ class Policy():
         """
         pass
 
-    def apply(self, solution):
+    def apply(self, solution1, solution2=None):
         """
-        Apply specific operator chosen to create new solution, computes its fitness and returns solution
+        Apply specific operator chosen to create new solution, compute its fitness and return solution
         
         Args:
-            _solution: {Solution} -- the solution to use for generating new solution
+            solution1: {Solution} -- the first solution to use for generating new solution
+            solution2: {Solution} -- the second solution to use for generating new solution (in case of specific crossover, default is best solution from algorithm)
 
         Returns:
             {Solution} -- new generated solution
@@ -38,12 +42,25 @@ class Policy():
         operator = self.select()
 
         logging.info("---- Applying %s on %s" %
-                     (type(operator).__name__, solution))
+                     (type(operator).__name__, solution1))
+
+        # default value of solution2 is current best solution
+        if solution2 is None and self._algo is not None:
+            solution2 = self._algo._bestSolution
+
+        # avoid use of crossover if only one solution is passed
+        if solution2 is None and operator._kind == KindOperator.CROSSOVER:
+
+            while operator._kind == KindOperator.CROSSOVER:            
+                operator = self.select()
 
         # apply operator on solution
-        newSolution = operator.apply(solution)
+        if operator._kind == KindOperator.CROSSOVER:
+            newSolution = operator.apply(solution1, solution2)
+        else:
+            newSolution = operator.apply(solution1)
 
-        logging.info("---- Obtaining %s" % (solution))
+        logging.info("---- Obtaining %s" % (newSolution))
 
         return newSolution
 

+ 1 - 1
macop/policies/classicals.py

@@ -4,7 +4,7 @@
 import random
 
 # module imports
-from .base import Policy
+from macop.policies.base import Policy
 
 
 class RandomPolicy(Policy):

+ 24 - 9
macop/policies/reinforcement.py

@@ -7,7 +7,8 @@ import math
 import numpy as np
 
 # module imports
-from .base import Policy
+from macop.policies.base import Policy
+from macop.operators.base import KindOperator
 
 
 class UCBPolicy(Policy):
@@ -101,7 +102,7 @@ class UCBPolicy(Policy):
         else:
             return self._operators[random.choice(indices)]
 
-    def apply(self, solution):
+    def apply(self, solution1, solution2=None):
         """
         Apply specific operator chosen to create new solution, computes its fitness and returns solution
 
@@ -109,7 +110,8 @@ class UCBPolicy(Policy):
         - selected operator occurence is also increased
 
         Args:
-            solution: {Solution} -- the solution to use for generating new solution
+            solution1: {Solution} -- the first solution to use for generating new solution
+            solution2: {Solution} -- the second solution to use for generating new solution (in case of specific crossover, default is best solution from algorithm)
 
         Returns:
             {Solution} -- new generated solution
@@ -118,10 +120,23 @@ class UCBPolicy(Policy):
         operator = self.select()
 
         logging.info("---- Applying %s on %s" %
-                     (type(operator).__name__, solution))
+                     (type(operator).__name__, solution1))
+
+        # default value of solution2 is current best solution
+        if solution2 is None and self._algo is not None:
+            solution2 = self._algo._bestSolution
+
+        # avoid use of crossover if only one solution is passed
+        if solution2 is None and operator._kind == KindOperator.CROSSOVER:
+
+            while operator._kind == KindOperator.CROSSOVER:            
+                operator = self.select()
 
         # apply operator on solution
-        newSolution = operator.apply(solution)
+        if operator._kind == KindOperator.CROSSOVER:
+            newSolution = operator.apply(solution1, solution2)
+        else:
+            newSolution = operator.apply(solution1)
 
         # compute fitness of new solution
         newSolution.evaluate(self._algo._evaluator)
@@ -129,10 +144,10 @@ class UCBPolicy(Policy):
         # compute fitness improvment rate
         if self._algo._maximise:
             fir = (newSolution.fitness() -
-                   solution.fitness()) / solution.fitness()
+                   solution1.fitness()) / solution1.fitness()
         else:
-            fir = (solution.fitness() -
-                   newSolution.fitness()) / solution.fitness()
+            fir = (solution1.fitness() -
+                   newSolution.fitness()) / solution1.fitness()
 
         operator_index = self._operators.index(operator)
 
@@ -141,6 +156,6 @@ class UCBPolicy(Policy):
 
         self._occurences[operator_index] += 1
 
-        logging.info("---- Obtaining %s" % (solution))
+        logging.info("---- Obtaining %s" % (newSolution))
 
         return newSolution

+ 1 - 1
macop/solutions/discrete.py

@@ -3,7 +3,7 @@
 import numpy as np
 
 # modules imports
-from .base import Solution
+from macop.solutions.base import Solution
 
 
 class BinarySolution(Solution):

+ 48 - 0
paper.bib

@@ -131,4 +131,52 @@
   timestamp = {Mon, 15 Jun 2020 16:51:53 +0200},
   biburl    = {https://dblp.org/rec/journals/remotesensing/PullanagariKY18.bib},
   bibsource = {dblp computer science bibliography, https://dblp.org}
+}
+
+@misc{ceres-solver,
+  author = "Sameer Agarwal and Keir Mierle and Others",
+  title = "Ceres Solver",
+  howpublished = "\url{http://ceres-solver.org}",
+}
+
+@book{hart2017pyomo,
+  title={Pyomo--optimization modeling in python},
+  author={Hart, William E. and Laird, Carl D. and Watson, Jean-Paul and Woodruff, David L. and Hackebeil, Gabriel A. and Nicholson, Bethany L. and Siirola, John D.},
+  edition={Second},
+  volume={67},
+  year={2017},
+  publisher={Springer Science \& Business Media}
+}
+
+@article{pyopt-paper,
+  author = {Ruben E. Perez and Peter W. Jansen and Joaquim R. R. A. Martins},
+  title = {py{O}pt: A {P}ython-Based Object-Oriented Framework for Nonlinear Constrained Optimization},
+  journal = {Structures and Multidisciplinary Optimization},
+  year = {2012},
+  volume = {45},
+  number = {1},
+  pages = {101--118},
+  doi = {10.1007/s00158-011-0666-3}
+}
+
+@incollection{MaherMiltenbergerPedrosoRehfeldtSchwarzSerrano2016,
+  author = {Stephen Maher and Matthias Miltenberger and Jo{\~{a}}o Pedro Pedroso and Daniel Rehfeldt and Robert Schwarz and Felipe Serrano},
+  title = {{PySCIPOpt}: Mathematical Programming in Python with the {SCIP} Optimization Suite},
+  booktitle = {Mathematical Software {\textendash} {ICMS} 2016},
+  publisher = {Springer International Publishing},
+  pages = {301--307},
+  year = {2016},
+  doi = {10.1007/978-3-319-42432-3_37},
+}
+
+@misc{simanneal-solver,
+  author = "Matthew Perry",
+  title = "simanneal",
+  howpublished = "\url{https://github.com/perrygeo/simanneal}",
+}
+
+@misc{solid-solver,
+  author = "Devin Soni",
+  title = "Solid",
+  howpublished = "\url{https://github.com/100/Solid}",
 }

Diff file not shown because it is too large
+ 21 - 14
paper.md


+ 1 - 1
setup.py

@@ -73,7 +73,7 @@ class TestCommand(distutils.command.check.check):
 
 setup(
     name='macop',
-    version='1.0.6',
+    version='1.0.7',
     description='Minimalist And Customisable Optimisation Package',
     long_description=open('README.md').read(),
     long_description_content_type='text/markdown',