
Update of A tour of Macop

Jérôme BUISINE 4 years ago
commit 476b4fed26

+ 2 - 2
docs/source/api.rst

@@ -134,8 +134,8 @@ macop.evaluators
 .. autosummary::
    :toctree: macop
    
-   macop.evaluators.discrete.mono
-   macop.evaluators.discrete.multi
+   macop.evaluators.continuous.mono
+   macop.evaluators.continuous.multi
 
 macop.operators
 -------------------

+ 32 - 2
docs/source/documentations.rst

@@ -1363,9 +1363,12 @@ If we want to exploit this functionality, then we will need to exploit them with
 All the features of **Macop** were presented. The next section will aim to quickly present the few implementations proposed within **Macop** to highlight the modularity of the package.
 
 
-Implementation examples
+Advanced usages
 =======================
 
+Multi-objective discrete optimisation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
 Within the API of **Macop**, you can find an implementation of the Multi-objective Evolutionary Algorithm based on Decomposition (MOEA/D), a general-purpose algorithm for approximating the Pareto set of multi-objective optimization problems.
 It decomposes the original multi-objective problem into a number of single-objective optimization sub-problems and then uses an evolutionary process to optimize these sub-problems simultaneously and cooperatively.
 MOEA/D is a state-of-the-art algorithm in aggregation-based approaches for multi-objective optimization.
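Editor's note (not part of the committed file): the decomposition idea above can be illustrated with a small, self-contained Python sketch; the two toy objectives, the weight vectors and the brute-force grid are illustrative assumptions, not macop API.

    # MOEA/D-style decomposition illustrated with weighted-sum aggregation:
    # each weight vector turns the multi-objective problem into one scalar sub-problem.
    def weighted_sum(objectives, weights):
        return sum(o * w for o, w in zip(objectives, weights))

    # two toy objectives over a single decision variable x
    f1 = lambda x: x ** 2
    f2 = lambda x: (x - 2) ** 2

    # a few evenly spread weight vectors -> a few single-objective sub-problems,
    # each of which could then be optimised cooperatively by an evolutionary process
    for w in [(1.0, 0.0), (0.5, 0.5), (0.0, 1.0)]:
        best_x = min((x * 0.01 for x in range(401)),
                     key=lambda x: weighted_sum((f1(x), f2(x)), w))
        print(w, round(best_x, 2))   # -> 0.0, 1.0 and 2.0: three points along the Pareto front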
@@ -1384,7 +1387,22 @@ An example with MOEAD for knapsack problem is available in knapsackMultiExample.
 
 .. _knapsackMultiExample.py: https://github.com/jbuisine/macop/blob/master/examples/knapsackMultiExample.py
 
+Continuous Zdt problems
+~~~~~~~~~~~~~~~~~~~~~~~
+
+Even if the package is not primarily intended for continuous optimisation, it can be adapted to this kind of problem.
+
+Based on the Zdt_ benchmark functions, it offers implementations of Solution, Operator and Evaluator classes to enable the optimisation of continuous problems.
+
+.. _Zdt: https://en.wikipedia.org/wiki/Test_functions_for_optimization
+
+- macop.solutions.continuous.ContinuousSolution_: manages a float array in order to represent a continuous solution;
+- macop.operators.continuous.mutators.PolynomialMutation_: updates a solution by applying polynomial mutation over the solution's data;
+- macop.operators.continuous.crossovers.BasicDifferentialEvolutionCrossover_: combines newly generated solutions in order to obtain a new offspring solution;
+- macop.evaluators.continuous.mono.ZdtEvaluator_: continuous evaluator for a Zdt problem instance; it takes the Zdt ``f`` function in its ``data`` dictionary;
+- macop.callbacks.classicals.ContinuousCheckpoint_: manages checkpoints and backups of continuous solutions.
 
+A complete implementation example with the Rosenbrock_ function is available.
 
 .. _macop.algorithms.base: macop/macop.algorithms.base.html#module-macop.algorithms.base
 .. _macop.algorithms.mono: macop/macop.algorithms.mono.html#module-macop.algorithms.mono
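Editor's note (not part of the committed file): a condensed sketch of the continuous workflow described in this hunk, assembled from the calls that appear in the examples/ZdtExample.py diff further down this page; the sphere function and the small evaluation budget are illustrative assumptions rather than part of the commit.

    from macop.solutions.continuous import ContinuousSolution
    from macop.evaluators.continuous.mono import ZdtEvaluator
    from macop.operators.continuous.mutators import PolynomialMutation
    from macop.operators.continuous.crossovers import BasicDifferentialEvolutionCrossover
    from macop.policies.classicals import RandomPolicy
    from macop.algorithms.mono import IteratedLocalSearch as ILS
    from macop.algorithms.mono import HillClimberFirstImprovment

    interval = -10, 10  # bounded search interval, as in ZdtExample.py

    # solutions are float arrays; the validator keeps every value inside the interval
    validator = lambda s: all(interval[0] <= x <= interval[1] for x in s.data)
    init = lambda: ContinuousSolution.random(10, interval, validator)

    # any continuous function of the solution can be plugged in through the `f` key
    sphere = lambda s: sum(x ** 2 for x in s.data)
    evaluator = ZdtEvaluator(data={'f': sphere})

    operators = [PolynomialMutation(interval=interval),
                 BasicDifferentialEvolutionCrossover(interval=interval)]
    policy = RandomPolicy(operators)

    hcfi = HillClimberFirstImprovment(init, evaluator, operators, policy, validator, maximise=False, verbose=False)
    algo = ILS(init, evaluator, operators, policy, validator, localSearch=hcfi, maximise=False, verbose=False)
    best = algo.run(1000, ls_evaluations=100)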
@@ -1406,4 +1424,16 @@ An example with MOEAD for knapsack problem is available in knapsackMultiExample.
 .. _macop.callbacks.base.Callback: macop/macop.callbacks.base.html#macop.callbacks.base.Callback
 
 .. _macop.algorithms.multi.MOSubProblem: macop/macop.algorithms.multi.html#macop.algorithms.multi.MOSubProblem
-.. _macop.algorithms.multi.MOEAD: macop/macop.algorithms.multi.html#macop.algorithms.multi.MOEAD
+.. _macop.algorithms.multi.MOEAD: macop/macop.algorithms.multi.html#macop.algorithms.multi.MOEAD
+
+.. _macop.solutions.continuous.ContinuousSolution: macop/macop.solutions.continuous.html#macop.solutions.continuous.ContinuousSolution
+
+.. _macop.operators.continuous.mutators.PolynomialMutation: macop/macop.operators.continuous.mutators.html#macop.operators.continuous.mutators.PolynomialMutation
+.. _macop.operators.continuous.crossovers.BasicDifferentialEvolutionCrossover: macop/macop.operators.continuous.crossovers.html#macop.operators.continuous.crossovers.BasicDifferentialEvolutionCrossover
+
+.. _macop.evaluators.continuous.mono.ZdtEvaluator: macop/macop.evaluators.continuous.mono.html#macop.evaluators.continuous.mono.ZdtEvaluator
+
+.. _macop.callbacks.classicals.ContinuousCheckpoint: macop/macop.callbacks.classicals.html#macop.callbacks.classicals.ContinuousCheckpoint
+
+
+.. _Rosenbrock: https://github.com/jbuisine/macop/blob/master/examples/ZdtExample.py

+ 22 - 0
docs/source/macop/macop.evaluators.continuous.mono.rst

@@ -0,0 +1,22 @@
+macop.evaluators.continuous.mono
+================================
+
+.. automodule:: macop.evaluators.continuous.mono
+
+   
+   
+   
+
+   
+   
+   .. rubric:: Classes
+
+   .. autosummary::
+   
+      ZdtEvaluator
+   
+   
+
+   
+   
+   

+ 16 - 0
docs/source/macop/macop.evaluators.continuous.multi.rst

@@ -0,0 +1,16 @@
+macop.evaluators.continuous.multi
+=================================
+
+.. automodule:: macop.evaluators.continuous.multi
+
+   
+   
+   
+
+   
+   
+   
+
+   
+   
+   

+ 28 - 32
examples/ZdtExample.py

@@ -3,19 +3,20 @@ import logging
 import os
 import random
 import numpy as np
+import math
 
 # module imports
-from macop.solutions.discrete import BinarySolution
-from macop.evaluators.discrete.mono import UBQPEvaluator
+from macop.solutions.continuous import ContinuousSolution
+from macop.evaluators.continuous.mono import ZdtEvaluator
 
-from macop.operators.continuous.mutators import SimpleMutation
-from macop.operators.discrete.mutators import SimpleBinaryMutation
+from macop.operators.continuous.mutators import PolynomialMutation
+from macop.operators.continuous.crossovers import BasicDifferentialEvolutionCrossover
 
 from macop.policies.classicals import RandomPolicy
 
 from macop.algorithms.mono import IteratedLocalSearch as ILS
 from macop.algorithms.mono import HillClimberFirstImprovment
-from macop.callbacks.classicals import BasicCheckpoint
+from macop.callbacks.classicals import ContinuousCheckpoint
 
 if not os.path.exists('data'):
     os.makedirs('data')
@@ -26,52 +27,47 @@ logging.basicConfig(format='%(asctime)s %(message)s', filename='data/example.log
 random.seed(42)
 
 # useful instance data
-n = 100
-ubqp_instance_file = 'instances/ubqp/ubqp_instance.txt'
-filepath = "data/checkpoints_ubqp.csv"
+n = 10
+filepath = "data/checkpoints_zdt_Rosenbrock.csv"
+problem_interval = -10, 10 # fixed value interval (avoid an unbounded search space)
 
 
-# default validator
+# check each value in order to validate the solution
 def validator(solution):
+
+    mini, maxi = problem_interval
+
+    for x in solution.data:
+        if x < mini or x > maxi:
+            return False
+
     return True
 
 # define init random solution
 def init():
-    return BinarySolution.random(n, validator)
-
-
-filepath = "data/checkpoints.csv"
+    return ContinuousSolution.random(n, problem_interval, validator)
 
 def main():
 
-    # load UBQP instance
-    with open(ubqp_instance_file, 'r') as f:
-
-        lines = f.readlines()
-
-        # get all string floating point values of matrix
-        Q_data = ''.join([ line.replace('\n', '') for line in lines[8:] ])
-
-        # load the concatenate obtained string
-        Q_matrix = np.fromstring(Q_data, dtype=float, sep=' ').reshape(n, n)
-
-    print(f'Q_matrix shape: {Q_matrix.shape}')
+    # Rosenbrock function with a=1 and b=100 (see https://en.wikipedia.org/wiki/Rosenbrock_function)
+    Rosenbrock_function = lambda s: sum([ 100 * math.pow(s.data[i + 1] - (math.pow(s.data[i], 2)), 2) + math.pow((1 - s.data[i]), 2) for i in range(len(s.data) - 1) ])
 
-    operators = [SimpleBinaryMutation(), SimpleMutation()]
+    operators = [PolynomialMutation(interval=problem_interval), BasicDifferentialEvolutionCrossover(interval=problem_interval)]
     policy = RandomPolicy(operators)
-    callback = BasicCheckpoint(every=5, filepath=filepath)
-    evaluator = UBQPEvaluator(data={'Q': Q_matrix})
+    callback = ContinuousCheckpoint(every=5, filepath=filepath)
+    evaluator = ZdtEvaluator(data={'f': Rosenbrock_function})
 
     # passing global evaluation param from ILS
-    hcfi = HillClimberFirstImprovment(init, evaluator, operators, policy, validator, maximise=True, verbose=True)
-    algo = ILS(init, evaluator, operators, policy, validator, localSearch=hcfi, maximise=True, verbose=True)
+    hcfi = HillClimberFirstImprovment(init, evaluator, operators, policy, validator, maximise=False, verbose=True)
+    algo = ILS(init, evaluator, operators, policy, validator, localSearch=hcfi, maximise=False, verbose=True)
 
     # add callback into callback list
     algo.addCallback(callback)
 
-    bestSol = algo.run(10000, ls_evaluations=100)
+    bestSol = algo.run(100000, ls_evaluations=100)
+    print(bestSol.data)
 
-    print('Solution for UBQP instance score is {}'.format(evaluator.compute(bestSol)))
+    print('Solution for Rosenbrock Zdt instance score is {}'.format(evaluator.compute(bestSol)))
 
 if __name__ == "__main__":
     main()
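Editor's note: the Rosenbrock function used above has its global minimum f(1, ..., 1) = 0, which gives a quick sanity check for the evaluation lambda; the SimpleNamespace stand-in below only mimics the .data attribute the lambda expects and is not macop API.

    import math
    from types import SimpleNamespace

    # same expression as in the example above
    Rosenbrock_function = lambda s: sum([ 100 * math.pow(s.data[i + 1] - (math.pow(s.data[i], 2)), 2) + math.pow((1 - s.data[i]), 2) for i in range(len(s.data) - 1) ])

    # stand-in exposing only `.data`; at the known optimum the score must be exactly 0
    optimum = SimpleNamespace(data=[1.0] * 10)
    assert Rosenbrock_function(optimum) == 0.0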

+ 2 - 0
macop/__init__.py

@@ -9,6 +9,8 @@ from macop.callbacks import multi
 from macop.evaluators import base
 from macop.evaluators.discrete import mono
 from macop.evaluators.discrete import multi
+from macop.evaluators.continuous import mono
+from macop.evaluators.continuous import multi
 
 from macop.operators import base
 from macop.operators.discrete import mutators

+ 4 - 4
macop/algorithms/mono.py

@@ -343,16 +343,16 @@ class IteratedLocalSearch(Algorithm):
         # by default use of mother method to initialise variables
         super().run(evaluations)
 
+        # add same callbacks
+        for callback in self._callbacks:
+            self._localSearch.addCallback(callback)
+
         # enable resuming for ILS
         self.resume()
 
         # initialise current solution
         self.initRun()
 
-        # add same callbacks
-        for callback in self._callbacks:
-            self._localSearch.addCallback(callback)
-
         # local search algorithm implementation
         while not self.stop():
 

+ 99 - 7
macop/callbacks/classicals.py

@@ -34,16 +34,16 @@ class BasicCheckpoint(Callback):
 
             logging.info("Checkpoint is done into " + self._filepath)
 
-            solution.data = ""
+            solution_data = ""
             solutionSize = len(solution.data)
 
             for index, val in enumerate(solution.data):
-                solution.data += str(val)
+                solution_data += str(val)
 
                 if index < solutionSize - 1:
-                    solution.data += ' '
+                    solution_data += ' '
 
-            line = str(currentEvaluation) + ';' + solution.data + ';' + str(
+            line = str(currentEvaluation) + ';' + solution_data + ';' + str(
                 solution.fitness) + ';\n'
 
             # check if file exists
@@ -76,12 +76,104 @@ class BasicCheckpoint(Callback):
                     self._algo.setEvaluation(globalEvaluation)
 
                 # get best solution data information
-                solution.data = list(map(int, data[1].split(' ')))
+                solution_data = list(map(int, data[1].split(' ')))
 
                 if self._algo.result is None:
-                    self._algo.result(self._algo.initialiser())
+                    self._algo.result = self._algo.initialiser()
 
-                self._algo.result.data = np.array(solution.data)
+                self._algo.result.data = np.array(solution_data)
+                self._algo.result.fitness = float(data[2])
+
+            macop_line(self._algo)
+            macop_text(self._algo,
+                       f'Checkpoint found from `{self._filepath}` file.')
+            macop_text(
+                self._algo,
+                f'Restart algorithm from evaluation {self._algo.getEvaluation()}.'
+            )
+        else:
+            macop_text(
+                self._algo,
+                'No backup found... Start running algorithm from evaluation 0.'
+            )
+            logging.info(
+                "Can't load backup... Backup filepath not valid in Checkpoint")
+
+        macop_line(self._algo)
+
+
+class ContinuousCheckpoint(Callback):
+    """
+    ContinuousCheckpoint is used for loading previous computations and starting again after loading a checkpoint (continuous solutions only)
+
+    Attributes:
+        algo: {:class:`~macop.algorithms.base.Algorithm`} -- main algorithm instance reference
+        every: {int} -- checkpoint frequency used (based on number of evaluations)
+        filepath: {str} -- file path where checkpoints will be saved
+    """
+
+    def run(self):
+        """
+        Check if necessary to do backup based on `every` variable
+        """
+        # get current best solution
+        solution = self._algo.result
+
+        currentEvaluation = self._algo.getGlobalEvaluation()
+
+        # backup if necessary
+        if currentEvaluation % self._every == 0:
+
+            logging.info("Checkpoint is done into " + self._filepath)
+
+            solution_data = ""
+            solutionSize = len(solution.data)
+
+            for index, val in enumerate(solution.data):
+                solution_data += str(val)
+
+                if index < solutionSize - 1:
+                    solution_data += ' '
+
+            line = str(currentEvaluation) + ';' + solution_data + ';' + str(
+                solution.fitness) + ';\n'
+
+            # check if file exists
+            if not os.path.exists(self._filepath):
+                with open(self._filepath, 'w') as f:
+                    f.write(line)
+            else:
+                with open(self._filepath, 'a') as f:
+                    f.write(line)
+
+    def load(self):
+        """
+        Load last backup line of solution and set algorithm state (best solution and evaluations) at this backup
+        """
+        if os.path.exists(self._filepath):
+
+            logging.info('Load best solution from last checkpoint')
+            with open(self._filepath) as f:
+
+                # get last line and read data
+                lastline = f.readlines()[-1]
+                data = lastline.split(';')
+
+                # get evaluation information
+                globalEvaluation = int(data[0])
+
+                if self._algo.getParent() is not None:
+                    self._algo.getParent().setEvaluation(globalEvaluation)
+                else:
+                    self._algo.setEvaluation(globalEvaluation)
+
+                # get best solution data information
+                solution_data = list(map(float, data[1].split(' ')))
+
+                if self._algo.result is None:
+                    self._algo.result = self._algo.initialiser()
+
+                self._algo.result.data = np.array(solution_data)
                 self._algo.result.fitness = float(data[2])
 
             macop_line(self._algo)

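Editor's note: as the ZdtExample.py diff above shows, the new callback is wired exactly like BasicCheckpoint (ContinuousCheckpoint(every=5, filepath=...) followed by algo.addCallback(callback)); the behavioural difference is that load() parses the stored values back with float instead of int. A minimal, self-contained illustration of the backup line format written by run() and read by load(), with made-up numbers:

    # one backup line per checkpoint: "<evaluation>;<space separated solution values>;<fitness>;"
    line = "250;0.98 1.02 -0.37;3.1415;\n"

    evaluation, raw_solution, fitness = line.split(';')[:3]
    data = list(map(float, raw_solution.split(' ')))   # floats, hence a continuous-specific callback

    assert int(evaluation) == 250 and data == [0.98, 1.02, -0.37] and float(fitness) == 3.1415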
+ 7 - 6
macop/operators/continuous/crossovers.py

@@ -43,23 +43,24 @@ class BasicDifferentialEvolutionCrossover(Crossover):
         self.F = F
 
 
-    def apply(self, solution):
+    def apply(self, solution1, solution2=None):
         """Create new solution based on solution passed as parameter
         """Create new solution based on solution passed as parameter
 
 
         Args:
         Args:
-            solution: {:class:`~macop.solutions.base.Solution`} -- the solution to use for generating new solution
+            solution1: {:class:`~macop.solutions.base.Solution`} -- the first solution to use for generating new solution
+            solution2: {:class:`~macop.solutions.base.Solution`} -- the second solution to use for generating new solution
 
 
         Returns:
         Returns:
             {:class:`~macop.solutions.base.Solution`}: new continuous generated solution
             {:class:`~macop.solutions.base.Solution`}: new continuous generated solution
         """
         """
 
 
-        size = solution.size
+        size = solution1.size
 
-        solution1 = solution.clone()
+        solution1 = solution1.clone()
 
         # create two new random solutions using instance and its static method
-        solution2 = solution.random(size, interval=(self.mini, self.maxi))
-        solution3 = solution.random(size, interval=(self.mini, self.maxi))
+        solution2 = solution1.random(size, interval=(self.mini, self.maxi))
+        solution3 = solution1.random(size, interval=(self.mini, self.maxi))
 
         # apply crossover on the new computed solution
         for i in range(len(solution1.data)):
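Editor's note: a sketch of how the widened signature can be called, assuming the operator can be applied outside of a running algorithm (parts of the class are not shown in this diff, so treat this as illustrative rather than definitive). The second argument is optional and, in the body above, the operator still draws fresh random solutions internally rather than using it directly.

    from macop.solutions.continuous import ContinuousSolution
    from macop.operators.continuous.crossovers import BasicDifferentialEvolutionCrossover

    interval = (-10, 10)
    always_valid = lambda s: True

    parent1 = ContinuousSolution.random(10, interval, always_valid)
    parent2 = ContinuousSolution.random(10, interval, always_valid)

    crossover = BasicDifferentialEvolutionCrossover(interval=interval)

    child = crossover.apply(parent1)            # previous single-solution call still accepted
    child = crossover.apply(parent1, parent2)   # two-parent form matching other macop crossovers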