end of documentation updates

Jérôme BUISINE, 3 years ago
Parent commit 801611969a

BIN
docs/figures/operators_choice.kra~


BIN
docs/figures/search_space.kra~


BIN
docs/figures/search_space.png~


BIN
docs/figures/search_space_moead.kra


BIN
docs/figures/search_space_simple.kra~


BIN
docs/source/_static/documentation/search_space_moead.png


+ 10 - 0
docs/source/documentations/algorithms.rst

@@ -178,6 +178,11 @@ Let's implement an algorithm well known under the name of hill climber best impr
 
 .. code-block:: python
 
+    """
+    module imports
+    """
+    from macop.algorithms.base import Algorithm
+
     class HillClimberBestImprovment(Algorithm):
 
         def run(self, evaluations):
@@ -274,6 +279,11 @@ Let's called this new algorithm ``IteratedLocalSearch``:
 
 .. code-block:: python
 
+    """
+    module imports
+    """
+    from macop.algorithms.base import Algorithm
+
     class IteratedLocalSearch(Algorithm):
         
         def __init__(self,

+ 232 - 1
docs/source/documentations/callbacks.rst

@@ -1,2 +1,233 @@
 Keep track
-==============
+==============
+
+Keeping track of a running algorithm can be useful on two levels: first, to understand how it unfolded at the end of a classic run, but also in the case of an unwanted shutdown of the algorithm.
+This section introduces the recovery of an algorithm thanks to a continuous backup functionality.
+
+Logging into algorithm
+~~~~~~~~~~~~~~~~~~~~~~
+
+Some logs can be retrieved after running an algorithm. **Macop** uses the ``logging`` Python package in order to log the algorithm's progress.
+
+Here is an example of use when running an algorithm:
+
+.. code-block:: python
+
+    """
+    basic imports
+    """
+    import logging
+
+    # logging configuration
+    logging.basicConfig(format='%(asctime)s %(message)s', filename='data/example.log', level=logging.DEBUG)
+
+    ...
+    
+    # maximizing algorithm (relative to knapsack problem)
+    algo = HillClimberBestImprovment(initialiser, evaluator, operators, policy, validator, maximise=True, verbose=False)
+
+    # run the algorithm using local search and get solution found 
+    solution = algo.run(evaluations=100)
+    print(solution.fitness())
+
+Hence, log data are saved into ``data/example.log`` in our example.
+
+Callbacks introduction
+~~~~~~~~~~~~~~~~~~~~~~~
+
+Output logs can help to understand an error that has occurred; however, all the progress of the search carried out may be lost.
+For this reason, the callback functionality has been developed.
+
+Within **Macop**, a callback is a specific instance of ``macop.callbacks.Callback`` that allows you to trace / save information **every** ``n`` **evaluations**, but also to reload information if necessary when restarting an algorithm.
+
+
+.. code-block:: python
+
+    class Callback():
+
+        def __init__(self, every, filepath):
+            ...
+
+        @abstractmethod
+        def run(self):
+            """
+            Check if necessary to do backup based on `every` variable
+            """
+            pass
+
+        @abstractmethod
+        def load(self):
+            """
+            Load last backup line of solution and set algorithm state at this backup
+            """
+            pass
+
+        def setAlgo(self, algo):
+            """
+            Specify the main algorithm instance reference
+            """
+            ...
+
+
+- The ``run`` method is called during the algorithm's run process and performs a backup every specified number of evaluations.
+- The ``load`` method is used to reload the algorithm state from the last saved information. All saved data is written to a file whose name is specified by the user.
+
+Towards the use of Callbacks
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+We are going to create our own Callback instance called ``BasicCheckpoint``, which will save the best solution found and the number of evaluations performed, in order to reload them for the next run of our algorithm.
+
+.. code-block:: python
+
+    """
+    module imports
+    """
+    import os
+
+    import numpy as np
+
+    from macop.callbacks.base import Callback
+
+
+    class BasicCheckpoint(Callback):
+        
+        def run(self):
+            """
+            Check if necessary to do backup based on `every` variable
+            """
+            # get current best solution
+            solution = self._algo._bestSolution
+
+            currentEvaluation = self._algo.getGlobalEvaluation()
+
+            # backup if necessary every number of evaluations
+            if currentEvaluation % self._every == 0:
+
+                # create specific line with solution data
+                solutionData = ""
+                solutionSize = len(solution._data)
+
+                for index, val in enumerate(solution._data):
+                    solutionData += str(val)
+
+                    if index < solutionSize - 1:
+                        solutionData += ' '
+
+                # number of evaluations done, solution data and fitness score
+                line = str(currentEvaluation) + ';' + solutionData + ';' + str(
+                    solution.fitness()) + ';\n'
+
+                # check if file exists
+                if not os.path.exists(self._filepath):
+                    with open(self._filepath, 'w') as f:
+                        f.write(line)
+                else:
+                    with open(self._filepath, 'a') as f:
+                        f.write(line)
+
+        def load(self):
+            """
+            Load last backup line and set algorithm state (best solution and evaluations)
+            """
+            if os.path.exists(self._filepath):
+
+                with open(self._filepath) as f:
+
+                    # get last line and read data
+                    lastline = f.readlines()[-1]
+                    data = lastline.split(';')
+
+                    # get evaluation information
+                    globalEvaluation = int(data[0])
+
+                    # restore number of evaluations
+                    if self._algo.getParent() is not None:
+                        self._algo.getParent()._numberOfEvaluations = globalEvaluation
+                    else:
+                        self._algo._numberOfEvaluations = globalEvaluation
+
+                    # get best solution data information
+                    solutionData = list(map(int, data[1].split(' ')))
+
+                    # avoid uninitialized solution
+                    if self._algo._bestSolution is None:
+                        self._algo._bestSolution = self._algo._initializer()
+
+                    # set the latest best solution obtained into the algorithm
+                    self._algo._bestSolution._data = np.array(solutionData)
+                    self._algo._bestSolution._score = float(data[2])
+
+
+In this way, it is possible to specify the use of a callback to our algorithm instance:
+
+
+.. code-block:: python
+
+    ...
+    
+    # maximizing algorithm (relative to knapsack problem)
+    algo = HillClimberBestImprovment(initialiser, evaluator, operators, policy, validator, maximise=True, verbose=False)
+
+    callback = BasicCheckpoint(every=5, filepath='data/hillClimberBackup.csv')
+
+    # add callback into callback list
+    algo.addCallback(callback)
+
+    # run the algorithm using local search and get solution found 
+    solution = algo.run(evaluations=100)
+    print(solution.fitness())
+
+
+.. note::
+    It is possible to add as many callbacks as desired in the algorithm in question.
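The note above can be illustrated with a self-contained sketch (using simplified stand-in classes rather than the real ``macop`` API, so the names below are illustrative only): two independent callbacks attached to the same algorithm, each firing on its own interval.

```python
class Callback:
    """Stand-in for macop.callbacks.base.Callback (sketch only)."""

    def __init__(self, every):
        self._every = every
        self.calls = []  # record evaluations at which the callback fired

    def run(self, evaluation):
        # fire only every `self._every` evaluations
        if evaluation % self._every == 0:
            self.calls.append(evaluation)


class Algorithm:
    """Minimal stand-in: stores callbacks and notifies them via progress()."""

    def __init__(self):
        self._callbacks = []

    def addCallback(self, callback):
        self._callbacks.append(callback)

    def progress(self, evaluation):
        # call every registered callback, as the real algorithm loop would
        for callback in self._callbacks:
            callback.run(evaluation)


algo = Algorithm()
checkpoint = Callback(every=5)  # e.g. a solution backup callback
logger = Callback(every=2)      # e.g. a fitness logging callback
algo.addCallback(checkpoint)
algo.addCallback(logger)

# simulate 10 evaluations of the search loop
for evaluation in range(1, 11):
    algo.progress(evaluation)
```

Each callback keeps its own ``every`` interval, so the checkpoint fires at evaluations 5 and 10 while the logger fires at every even evaluation.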
+
+
+Some methods of the abstract ``Algorithm`` class have not been presented yet. These methods relate to the use of callbacks,
+in particular the ``addCallback`` method, which adds a callback to an algorithm instance as seen above.
+
+- The ``resume`` method reloads the algorithm state from the callbacks list, using each callback's ``load`` method.
+- The ``progress`` method calls ``run`` on each callback during the algorithm search.
+
+To benefit from this functionality, these methods need to be called within our algorithm. Let's make the necessary modifications to our ``IteratedLocalSearch`` algorithm:
+
+
+.. code-block:: python
+
+    """
+    module imports
+    """
+    from macop.algorithms.base import Algorithm
+
+    class IteratedLocalSearch(Algorithm):
+        
+        ...
+
+        def run(self, evaluations, ls_evaluations=100):
+            """
+            Run the iterated local search algorithm using local search
+            """
+
+            # by default use of mother method to initialize variables
+            super().run(evaluations)
+
+            # initialize current solution
+            self.initRun()
+
+            # restart using callbacks backup list
+            self.resume()
+
+            # local search algorithm implementation
+            while not self.stop():
+
+                # create and search solution from local search
+                newSolution = self._localSearch.run(ls_evaluations)
+
+                # if better solution than currently, replace it
+                if self.isBetter(newSolution):
+                    self._bestSolution = newSolution
+
+                # check if necessary to call each callbacks
+                self.progress()
+
+                self.information()
+
+            return self._bestSolution
+
+
+All the features of **Macop** have now been presented. The next section briefly presents the implementations provided within **Macop** to highlight the modularity of the package.

+ 16 - 2
docs/source/documentations/others.rst

@@ -1,2 +1,16 @@
-Others features
-===================
+Implementation examples
+=======================
+
+Within the API of **Macop**, you can find an implementation of the multi-objective evolutionary algorithm based on decomposition (MOEA/D), a general-purpose algorithm for approximating the Pareto set of multi-objective optimization problems.
+It decomposes the original multi-objective problem into a number of single-objective optimization sub-problems and then uses an evolutionary process to optimize these sub-problems simultaneously and cooperatively.
+MOEA/D is a state-of-the-art algorithm among aggregation-based approaches for multi-objective optimization.
+
+.. image:: ../_static/documentation/search_space_moead.png
+   :width:  400 px
+   :align: center
+
+
+As illustrated above, the two main objectives are subdivided into 5 single-objective optimization sub-problems in order to find the Pareto front.
+
+- ``macop.algorithms.multi.MOSubProblem`` class defines each sub-problem of MOEA/D.
+- ``macop.algorithms.multi.MOEAD`` class exploits ``MOSubProblem`` and implements MOEA/D using weighted-sum of objectives method.
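The weighted-sum decomposition mentioned above can be sketched independently of the **Macop** API (the function and variable names below are illustrative, not part of the package): each sub-problem aggregates the objective values with its own weight vector, turning one bi-objective problem into several scalar ones.

```python
def weighted_sum(weights, objectives):
    """Aggregate objective values into a single scalar score."""
    return sum(w * o for w, o in zip(weights, objectives))


# 5 evenly spread weight vectors over the two objectives,
# one per single-objective sub-problem
n_sub_problems = 5
weight_vectors = [
    (i / (n_sub_problems - 1), 1 - i / (n_sub_problems - 1))
    for i in range(n_sub_problems)
]

# evaluate one candidate solution against each sub-problem
objectives = (3.0, 7.0)  # example objective values (f1, f2)
scores = [weighted_sum(w, objectives) for w in weight_vectors]
```

Each weight vector favours a different trade-off between the two objectives, which is how the sub-problems cooperatively cover the Pareto front.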

+ 1 - 1
docs/source/index.rst

@@ -11,7 +11,7 @@ What's **Macop** ?
 **Macop** is a discrete optimisation Python package which does not implement all the algorithms available in the literature, but gives you the possibility to quickly develop and test your own algorithms and strategies. The main objective of this package is to be as flexible as possible and hence to offer a maximum of implementation possibilities.
 
 .. toctree::
-   :maxdepth: 2
+   :maxdepth: 1
    :caption: Contents:
 
    description