Browse source

Update of Solution and Algorithm attributes using @property decorator

Jérôme BUISINE, 3 years ago
Parent commit 9e997acbf8

+ 25 - 4
CONTRIBUTING.md

@@ -95,7 +95,8 @@ In order to facilitate the integration of new modules, do not hesitate to let me
 In order to allow quick access to the code, the project follows the documentation conventions (docstring) proposed by Google. Here an example:
 
 ```python
-''' Binary integer solution class
+class BinarySolution():
+"""Binary integer solution class
 
     - store solution as a binary array (example: [0, 1, 0, 1, 1])
     - associated size is the size of the array
@@ -105,7 +106,27 @@ In order to allow quick access to the code, the project follows the documentatio
        data: {ndarray} --  array of binary values
        size: {int} -- size of binary array values
        score: {float} -- fitness score value
-'''
+"""
+```
+
+For method:
+```python
+class BinarySolution():
+
+...
+
+def random(self, validator):
+    """
+    Initialize binary array, using the validator to generate a valid random solution
+
+    Args:
+        size: {int} -- expected solution size to generate
+        validator: {function} -- specific function which validates or not a solution (if None, no validation is applied)
+
+    Returns:
+        {:class:`~macop.solutions.discrete.BinarySolution`}: new generated binary solution
+    """
+    ...
 ```
 
 You can generate documentation and display updates using these following commands:
@@ -133,11 +154,11 @@ This project uses the [doctest](https://docs.python.org/3/library/doctest.html)
     >>> data = [0, 1, 0, 1, 1]
     >>> solution = BinarySolution(data, len(data))
     >>> # check data content
-    >>> sum(solution.getData()) == 3
+    >>> sum(solution.data) == 3
     True
     >>> # clone solution
     >>> solution_copy = solution.clone()
-    >>> all(solution_copy._data == solution.getData())
+    >>> all(solution_copy._data == solution.data)
 """
 ```
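
To tie together the docstring convention, the doctest usage, and the new ``@property`` accessors introduced by this commit, here is a minimal self-contained sketch. It is illustrative only (``ToySolution`` is not a macop class); it simply shows a Google-style docstring, a doctest, and ``data``/``fitness`` exposed as properties:

```python
class ToySolution:
    """Toy binary solution used only to illustrate the documentation conventions.

    Attributes:
        data: {list} -- array of binary values
        score: {float} -- fitness score value

    Example:

    >>> s = ToySolution([0, 1, 0, 1, 1])
    >>> sum(s.data)
    3
    >>> s.fitness = 12.0
    >>> s.fitness
    12.0
    """
    def __init__(self, data):
        self._data = data
        self._score = None

    @property
    def data(self):
        """Returns solution data (the `_data` private attribute)."""
        return self._data

    @property
    def fitness(self):
        """Returns fitness score (the `_score` private attribute)."""
        return self._score

    @fitness.setter
    def fitness(self, score):
        """Sets the fitness score."""
        self._score = score


if __name__ == '__main__':
    import doctest
    doctest.testmod()
```

Running the module (or ``python -m doctest``) executes the docstring example, which is the same mechanism the project relies on for the examples shown above.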
 

+ 1 - 1
README.md

@@ -41,7 +41,7 @@ Flexible discrete optimisation package allowing a quick implementation of your p
 
 The primary advantage of using Python is that it allows you to dynamically add new members within the new implemented solution or algorithm classes. This of course does not close the possibilities of extension and storage of information within solutions and algorithms. It all depends on the need in question.
 
-### In `macop.algorihtms` module:
+### In `macop.algorithms` module:
 
 Both single and multi-objective algorithms have been implemented for demonstration purposes. 
 

+ 6 - 2
docs/source/conf.py

@@ -25,9 +25,9 @@ copyright = '2021, Jérôme BUISINE'
 author = 'Jérôme BUISINE'
 
 # The short X.Y version
-version = '1.0.11'
+version = '1.0.12'
 # The full version, including alpha/beta/rc tags
-release = 'v1.0.11'
+release = 'v1.0.12'
 
 
 # -- General configuration ---------------------------------------------------
@@ -47,10 +47,14 @@ extensions = [
     'sphinx.ext.autosummary',
     'sphinx.ext.viewcode',
     'sphinx.ext.coverage',
+    'sphinx.ext.intersphinx',
     #'sphinx.ext.pngmath',
     #'autoapi.extension' 
 ]
 
+# Enable numref
+numfig = True
+
 # These folders are copied to the documentation's HTML output
 html_static_path = ['_static']
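
Enabling ``sphinx.ext.intersphinx`` usually goes together with an ``intersphinx_mapping`` entry so that references to external projects resolve. The mapping is not shown in this hunk, so the following is only a sketch of what such a configuration typically looks like (the URLs are illustrative, not taken from this commit):

```python
# docs/source/conf.py -- illustrative addition, not part of this commit
intersphinx_mapping = {
    'python': ('https://docs.python.org/3', None),
    'numpy': ('https://numpy.org/doc/stable/', None),
}
```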
 

+ 1 - 1
docs/source/contributing.rst

@@ -10,4 +10,4 @@ Using GitHub
 
 Please refer to the guidelines_ file if you want more information about process!
 
-.. _guidelines: https://github.com/prise-3d/macop/blob/master/CONTRIBUTING
+.. _guidelines: https://github.com/prise-3d/macop/blob/master/CONTRIBUTING.md

+ 74 - 33
docs/source/documentations.rst

@@ -122,9 +122,31 @@ Some specific methods are available:
             """
             ...
 
+        @property
         def fitness(self):
             """
-            Returns fitness score
+            Returns fitness score (by default `score` private attribute)
+            """
+            ...
+
+        @fitness.setter
+        def fitness(self, score):
+            """
+            Set solution score as wished (by default `score` private attribute)
+            """
+            ...
+
+        @property
+        def data(self):
+            """
+            Returns solution data (by default `data` private attribute)
+            """
+            ...
+
+        @data.setter
+        def data(self, data):
+            """
+            Set solution data (by default `data` private attribute)
             """
             ...
 
@@ -141,6 +163,8 @@ Some specific methods are available:
             """
             ...
 
+.. caution::
+    An important point here is that ``fitness`` and ``data`` are exposed as editable attributes through the ``@property``, ``@fitness.setter`` and ``@data.setter`` decorators. The idea is to let the user redefine these properties in order to change what the algorithm reads and updates, independently of how the underlying data is stored.
 
 From these basic methods, it is possible to manage a representation of a solution to our problem. 
 
@@ -259,7 +283,7 @@ To avoid taking into account invalid solutions, we can define our function which
 
         for i, w in enumerate(elements_weight):
             # add weight if current object is set to 1
-            weight_sum += w * solution.getData()[i]
+            weight_sum += w * solution.data[i]
         
         # validation condition
         return weight_sum <= 15
@@ -354,7 +378,7 @@ We will define the ``KnapsackEvaluator`` class, which will therefore allow us to
 
             # `_data` contains worths array values of objects
             fitness = 0
-            for index, elem in enumerate(solution.getData()):
+            for index, elem in enumerate(solution.data):
                 fitness += self._data['worths'][index] * elem
 
             return fitness
@@ -382,7 +406,7 @@ It is now possible to initialise our new evaluator with specific data of our pro
     solution_fitness = solution.evaluate(evaluator)
 
     # score is also stored into solution
-    solution_fitness = solution.fitness()
+    solution_fitness = solution.fitness
 
 .. note::
     The current developed ``KnapsackEvaluator`` is available into macop.evaluators.discrete.mono.KnapsackEvaluator_ in **Macop**.
@@ -437,7 +461,7 @@ Like the evaluator, the operator keeps **track of the algorithm** (using ``setAl
         """Abstract Mutation extend from Operator
 
         Attributes:
-            kind: {KindOperator} -- specify the kind of operator
+            kind: {:class:`~macop.operators.base.KindOperator`} -- specify the kind of operator
         """
         def __init__(self):
             self._kind = KindOperator.MUTATOR
@@ -450,7 +474,7 @@ Like the evaluator, the operator keeps **track of the algorithm** (using ``setAl
         """Abstract crossover extend from Operator
 
         Attributes:
-            kind: {KindOperator} -- specify the kind of operator
+            kind: {:class:`~macop.operators.base.KindOperator`} -- specify the kind of operator
         """
         def __init__(self):
             self._kind = KindOperator.CROSSOVER
@@ -496,10 +520,10 @@ The modification applied here is just a bit swapped. Let's define the ``SimpleBi
             copy_solution = solution.clone()
 
             # swicth values
-            if copy_solution.getData()[cell]:
-                copy_solution.getData()[cell] = 0
+            if copy_solution.data[cell]:
+                copy_solution.data[cell] = 0
             else:
-                copy_solution.getData()[cell] = 1
+                copy_solution.data[cell] = 1
 
             # return the new obtained solution
             return copy_solution
@@ -563,7 +587,7 @@ The first half of solution 1 will be saved and added to the second half of solut
             # copy of solution 2
             copy_solution = solution2.clone()
 
-            copy_solution.getData()[splitIndex:] = firstData[splitIndex:]
+            copy_solution.data[splitIndex:] = firstData[splitIndex:]
 
             return copy_solution
 
@@ -734,16 +758,16 @@ We will pay attention to the different methods of which she is composed. This cl
 She is composed of few default attributes:
 
 - initialiser: {function} -- basic function strategy to initialise solution
-- evaluator: {Evaluator} -- evaluator instance in order to obtained fitness (mono or multiple objectives)
-- operators: {[Operator]} -- list of operator to use when launching algorithm
-- policy: {Policy} -- Policy instance strategy to select operators
+- evaluator: {:class:`~macop.evaluators.base.Evaluator`} -- evaluator instance in order to obtained fitness (mono or multiple objectives)
+- operators: {[:class:`~macop.operators.base.Operator`]} -- list of operator to use when launching algorithm
+- policy: {:class:`~macop.policies.base.Policy`} -- Policy instance strategy to select operators
 - validator: {function} -- basic function to check if solution is valid or not under some constraints
 - maximise: {bool} -- specify kind of optimisation problem 
 - verbose: {bool} -- verbose or not information about the algorithm
-- currentSolution: {Solution} -- current solution managed for current evaluation comparison
-- bestSolution: {Solution} -- best solution found so far during running algorithm
-- callbacks: {[Callback]} -- list of Callback class implementation to do some instructions every number of evaluations and `load` when initialising algorithm
-- parent: {Algorithm} -- parent algorithm reference in case of inner Algorithm instance (optional)
+- currentSolution: {:class:`~macop.solutions.base.Solution`} -- current solution managed for current evaluation comparison
+- bestSolution: {:class:`~macop.solutions.base.Solution`} -- best solution found so far during running algorithm
+- callbacks: {[:class:`~macop.callbacks.base.Callback`]} -- list of Callback class implementation to do some instructions every number of evaluations and `load` when initialising algorithm
+- parent: {:class:`~macop.algorithms.base.Algorithm`} -- parent algorithm reference in case of inner Algorithm instance (optional)
 
 .. code-block:: python
 
@@ -772,6 +796,20 @@ She is composed of few default attributes:
             """
             ...
 
+        @property
+        def result(self):
+            """Get the expected result of the current algorithm
+
+            By default the best solution (but can be anything you want)
+            """
+            ...
+
+        @result.setter
+        def result(self, result):
+            """Set current default result of the algorithm
+            """
+            ...
+
         def getParent(self):
             """
             Recursively find the main parent algorithm attached of the current algorithm
@@ -784,7 +822,6 @@ She is composed of few default attributes:
             """
             ...
 
-
         def initRun(self):
             """
             initialise the current solution and best solution using the `initialiser` function
@@ -847,6 +884,9 @@ She is composed of few default attributes:
             ...
 
 
+.. caution::
+    An important point here is that ``result`` is exposed as an editable attribute through the ``@property`` and ``@result.setter`` decorators. The idea is to let the user redefine this property in order to change the expected result of the algorithm, independently of the data actually stored.
+
 The notion of hierarchy between algorithms is introduced here. We can indeed have certain dependencies between algorithms. 
 The methods ``increaseEvaluation``, ``getGlobalEvaluation`` and ``getGlobalMaxEvaluation`` ensure that the expected global number of evaluations is correctly managed, just like the ``stop`` method for the search stop criterion.
 
@@ -865,6 +905,7 @@ It is always **mandatory** to call the parent class ``run`` method using ``super
 .. warning::
     The other methods such as ``addCallback``, ``resume`` and ``progress`` will be detailed in the next part focusing on the notion of callback.
 
+
 Local search algorithm
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -937,7 +978,7 @@ We will also need to define a **solution initialisation function** so that the a
     evaluator = KnapsackEvaluator(data={'worths': elements_score})
 
     # valid instance using lambda
-    validator = lambda solution: sum([ elements_weight[i] * solution.getData()[i] for i in range(len(solution.getData()))]) <= 15
+    validator = lambda solution: sum([ elements_weight[i] * solution.data[i] for i in range(len(solution.data))]) <= 15
     
     # initialiser instance using lambda with default param value
     initialiser = lambda x=5: BinarySolution.random(x, validator)
@@ -953,7 +994,7 @@ We will also need to define a **solution initialisation function** so that the a
 
     # run the algorithm and get solution found
     solution = algo.run(100)
-    print(solution.fitness())
+    print(solution.fitness)
 
 
 .. note::
@@ -1047,7 +1088,7 @@ Then, we use this local search in our ``run`` method to allow a better search fo
     evaluator = KnapsackEvaluator(data={'worths': elements_score})
 
     # valid instance using lambda
-    validator = lambda solution: sum([ elements_weight[i] * solution.getData()[i] for i in range(len(solution.getData()))]) <= 15
+    validator = lambda solution: sum([ elements_weight[i] * solution.data[i] for i in range(len(solution.data))]) <= 15
     
     # initialiser instance using lambda with default param value
     initialiser = lambda x=5: BinarySolution.random(x, validator)
@@ -1064,7 +1105,7 @@ Then, we use this local search in our ``run`` method to allow a better search fo
 
     # run the algorithm using local search and get solution found 
     solution = algo.run(evaluations=100, ls_evaluations=10)
-    print(solution.fitness())
+    print(solution.fitness)
 
 
 .. note:: 
@@ -1102,7 +1143,7 @@ Here is an example of use when running an algorithm:
 
     # run the algorithm using local search and get solution found 
     solution = algo.run(evaluations=100)
-    print(solution.fitness())
+    print(solution.fitness)
 
 Hence, log data are saved into ``data/example.log`` in our example.
 
@@ -1174,18 +1215,18 @@ We are going to create our own Callback instance called ``BasicCheckpoint`` whic
             if currentEvaluation % self._every == 0:
 
                 # create specific line with solution data
                 solutionData = ""
-                solutionSize = len(solution.getData())
+                solutionSize = len(solution.data)
 
-                for index, val in enumerate(solution.getData()):
+                for index, val in enumerate(solution.data):
                     solutionData += str(val)
 
                     if index < solutionSize - 1:
                         solutionData += ' '
 
                 # number of evaluations done, solution data and fitness score
                 line = str(currentEvaluation) + ';' + solutionData + ';' + str(
-                    solution.fitness()) + ';\n'
+                    solution.fitness) + ';\n'
 
                 # check if file exists
                 if not os.path.exists(self._filepath):
@@ -1217,14 +1258,14 @@ We are going to create our own Callback instance called ``BasicCheckpoint`` whic
                         self._algo._numberOfEvaluations = globalEvaluation
 
                     # get best solution data information
                     solutionData = list(map(int, data[1].split(' ')))
 
                     # avoid uninitialised solution
                     if self._algo._bestSolution is None:
                         self._algo._bestSolution = self._algo.initialiser()
 
                     # set to algorithm the lastest obtained best solution
-                    self._algo._bestsolution.getData() = np.array(solutionData)
+                    self._algo._bestSolution.data = np.array(solutionData)
                     self._algo._bestSolution._score = float(data[2])
 
 
@@ -1245,7 +1286,7 @@ In this way, it is possible to specify the use of a callback to our algorithm in
 
     # run the algorithm using local search and get solution found 
     solution = algo.run(evaluations=100)
-    print(solution.fitness())
+    print(solution.fitness)
 
 
 .. note::

+ 4 - 4
docs/source/documentations/algorithms.rst

@@ -238,7 +238,7 @@ We will also need to define a **solution initialisation function** so that the a
     evaluator = KnapsackEvaluator(data={'worths': elements_score})
 
     # valid instance using lambda
-    validator = lambda solution: sum([ elements_weight[i] * solution.getData()[i] for i in range(len(solution.getData()))]) <= 15
+    validator = lambda solution: sum([ elements_weight[i] * solution.data[i] for i in range(len(solution.data))]) <= 15
     
     # initialiser instance using lambda with default param value
     initialiser = lambda x=5: BinarySolution.random(x, validator)
@@ -254,7 +254,7 @@ We will also need to define a **solution initialisation function** so that the a
 
     # run the algorithm and get solution found
     solution = algo.run(100)
-    print(solution.fitness())
+    print(solution.fitness)
 
 
 .. note::
@@ -348,7 +348,7 @@ Then, we use this local search in our ``run`` method to allow a better search fo
     evaluator = KnapsackEvaluator(data={'worths': elements_score})
 
     # valid instance using lambda
-    validator = lambda solution: sum([ elements_weight[i] * solution.getData()[i] for i in range(len(solution.getData()))]) <= 15
+    validator = lambda solution: sum([ elements_weight[i] * solution.data[i] for i in range(len(solution.data))]) <= 15
     
     # initialiser instance using lambda with default param value
     initialiser = lambda x=5: BinarySolution.random(x, validator)
@@ -365,7 +365,7 @@ Then, we use this local search in our ``run`` method to allow a better search fo
 
     # run the algorithm using local search and get solution found 
     solution = algo.run(evaluations=100, ls_evaluations=10)
-    print(solution.fitness())
+    print(solution.fitness)
 
 
 .. note:: 

+ 11 - 11
docs/source/documentations/callbacks.rst

@@ -28,7 +28,7 @@ Here is an example of use when running an algorithm:
 
     # run the algorithm using local search and get solution found 
     solution = algo.run(evaluations=100)
-    print(solution.fitness())
+    print(solution.fitness)
 
 Hence, log data are saved into ``data/example.log`` in our example.
 
@@ -100,18 +100,18 @@ We are going to create our own Callback instance called ``BasicCheckpoint`` whic
             if currentEvaluation % self._every == 0:
 
                 # create specific line with solution data
                 solutionData = ""
-                solutionSize = len(solution.getData())
+                solutionSize = len(solution.data)
 
-                for index, val in enumerate(solution.getData()):
+                for index, val in enumerate(solution.data):
                     solutionData += str(val)
 
                     if index < solutionSize - 1:
                         solutionData += ' '
 
                 # number of evaluations done, solution data and fitness score
                 line = str(currentEvaluation) + ';' + solutionData + ';' + str(
-                    solution.fitness()) + ';\n'
+                    solution.fitness) + ';\n'
 
                 # check if file exists
                 if not os.path.exists(self._filepath):
@@ -143,14 +143,14 @@ We are going to create our own Callback instance called ``BasicCheckpoint`` whic
                         self._algo._numberOfEvaluations = globalEvaluation
 
                     # get best solution data information
                     solutionData = list(map(int, data[1].split(' ')))
 
                     # avoid uninitialised solution
                     if self._algo._bestSolution is None:
                         self._algo._bestSolution = self._algo.initialiser()
 
                     # set to algorithm the lastest obtained best solution
-                    self._algo._bestsolution.getData() = np.array(solutionData)
+                    self._algo._bestSolution.data = np.array(solutionData)
                     self._algo._bestSolution._score = float(data[2])
 
 
@@ -171,7 +171,7 @@ In this way, it is possible to specify the use of a callback to our algorithm in
 
     # run the algorithm using local search and get solution found 
     solution = algo.run(evaluations=100)
-    print(solution.fitness())
+    print(solution.fitness)
 
 
 .. note::

+ 2 - 2
docs/source/documentations/evaluators.rst

@@ -61,7 +61,7 @@ We will define the ``KnapsackEvaluator`` class, which will therefore allow us to
 
             # `_data` contains worths array values of objects
             fitness = 0
-            for index, elem in enumerate(solution.getData()):
+            for index, elem in enumerate(solution.data):
                 fitness += self._data['worths'][index] * elem
 
             return fitness
@@ -89,7 +89,7 @@ It is now possible to initialise our new evaluator with specific data of our pro
     solution_fitness = solution.evaluate(evaluator)
 
     # score is also stored into solution
-    solution_fitness = solution.fitness()
+    solution_fitness = solution.fitness
 
 .. note::
     The current developed ``KnapsackEvaluator`` is available into ``macop.evaluators.mono.KnapsackEvaluator`` in **Macop**.

+ 4 - 4
docs/source/documentations/operators.rst

@@ -105,10 +105,10 @@ The modification applied here is just a bit swapped. Let's define the ``SimpleBi
             copy_solution = solution.clone()
 
             # swicth values
-            if copy_solution.getData()[cell]:
-                copy_solution.getData()[cell] = 0
+            if copy_solution.data[cell]:
+                copy_solution.data[cell] = 0
             else:
-                copy_solution.getData()[cell] = 1
+                copy_solution.data[cell] = 1
 
             # return the new obtained solution
             return copy_solution
@@ -172,7 +172,7 @@ The first half of solution 1 will be saved and added to the second half of solut
             # copy of solution 2
             copy_solution = solution2.clone()
 
-            copy_solution.getData()[splitIndex:] = firstData[splitIndex:]
+            copy_solution.data[splitIndex:] = firstData[splitIndex:]
 
             return copy_solution
 

+ 1 - 1
docs/source/documentations/validator.rst

@@ -35,7 +35,7 @@ To avoid taking into account invalid solutions, we can define our function which
 
         for i, w in enumerate(elements_weight):
             # add weight if current object is set to 1
-            weight_sum += w * solution.getData()[i]
+            weight_sum += w * solution.data[i]
         
         # validation condition
         return weight_sum <= 15

+ 3 - 3
docs/source/examples/qap/implementation.rst

@@ -63,8 +63,8 @@ So we are going to create a class that will inherit from the abstract class ``ma
             {float} -- fitness score of solution
         """
         fitness = 0
-        for index_i, val_i in enumerate(solution.getData()):
-            for index_j, val_j in enumerate(solution.getData()):
+        for index_i, val_i in enumerate(solution.data):
+            for index_j, val_j in enumerate(solution.data):
                 fitness += self._data['F'][index_i, index_j] * self._data['D'][val_i, val_j]
 
         return fitness
@@ -105,7 +105,7 @@ If you are uncomfortable with some of the elements in the code that will follow,
 
     # default validator (check the consistency of our data, i.e. only unique element)
     def validator(solution):
-        if len(list(solution.getData())) > len(set(list(solution.getData()))):
+        if len(list(solution.data)) > len(set(list(solution.data))):
             print("not valid")
             return False
         return True

+ 2 - 2
docs/source/examples/ubqp/implementation.rst

@@ -60,8 +60,8 @@ So we are going to create a class that will inherit from the abstract class ``ma
             {float} -- fitness score of solution
         """
         fitness = 0
-        for index_i, val_i in enumerate(solution.getData()):
-            for index_j, val_j in enumerate(solution.getData()):
+        for index_i, val_i in enumerate(solution.data):
+            for index_j, val_j in enumerate(solution.data):
                 fitness += self._data['Q'][index_i, index_j] * val_i * val_j
 
         return fitness

+ 3 - 3
docs/source/qap_example.rst

@@ -211,8 +211,8 @@ So we are going to create a class that will inherit from the abstract class maco
             {float} -- fitness score of solution
         """
         fitness = 0
-        for index_i, val_i in enumerate(solution.getData()):
-            for index_j, val_j in enumerate(solution.getData()):
+        for index_i, val_i in enumerate(solution.data):
+            for index_j, val_j in enumerate(solution.data):
                 fitness += self._data['F'][index_i, index_j] * self._data['D'][val_i, val_j]
 
         return fitness
@@ -253,7 +253,7 @@ If you are uncomfortable with some of the elements in the code that will follow,
 
     # default validator (check the consistency of our data, i.e. only unique element)
     def validator(solution):
-        if len(list(solution.getData())) > len(set(list(solution.getData()))):
+        if len(list(solution.data)) > len(set(list(solution.data))):
             print("not valid")
             return False
         return True

+ 2 - 2
docs/source/ubqp_example.rst

@@ -139,8 +139,8 @@ So we are going to create a class that will inherit from the abstract class maco
             {float} -- fitness score of solution
         """
         fitness = 0
-        for index_i, val_i in enumerate(solution.getData()):
-            for index_j, val_j in enumerate(solution.getData()):
+        for index_i, val_i in enumerate(solution.data):
+            for index_j, val_j in enumerate(solution.data):
                 fitness += self._data['Q'][index_i, index_j] * val_i * val_j
 
         return fitness

+ 1 - 1
examples/knapsackExample.py

@@ -33,7 +33,7 @@ elements_weight = [ random.randint(2, 5) for _ in range(30) ]
 def knapsackWeight(solution):
 
     weight_sum = 0
-    for index, elem in enumerate(solution.getData()):
+    for index, elem in enumerate(solution.data):
         weight_sum += elements_weight[index] * elem
 
     return weight_sum

+ 1 - 1
examples/knapsackMultiExample.py

@@ -35,7 +35,7 @@ elements_weight = [ random.randint(90, 100) for _ in range(500) ]
 def knapsackWeight(solution):
 
     weight_sum = 0
-    for index, elem in enumerate(solution.getData()):
+    for index, elem in enumerate(solution.data):
         weight_sum += elements_weight[index] * elem
 
     return weight_sum

+ 1 - 1
examples/qapExample.py

@@ -33,7 +33,7 @@ filepath = "data/checkpoints_qap.csv"
 
 # default validator
 def validator(solution):
-    if len(list(solution.getData())) > len(set(list(solution.getData()))):
+    if len(list(solution.data)) > len(set(list(solution.data))):
         print("not valid")
         return False
     return True
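
Since ``solution.data`` is now a plain sequence exposed through a property, the uniqueness check above can also be written more directly. A small equivalent sketch (same decision as the validator above, just without the intermediate lists and the debug print):

```python
# equivalent validator: a permutation-like solution is valid
# when no element appears twice in its data
def validator(solution):
    return len(set(solution.data)) == len(solution.data)
```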

+ 31 - 29
macop/algorithms/base.py

@@ -23,16 +23,16 @@ class Algorithm():
 
     Attributes:
         initialiser: {function} -- basic function strategy to initialise solution
-        evaluator: {Evaluator} -- evaluator instance in order to obtained fitness (mono or multiple objectives)
-        operators: {[Operator]} -- list of operator to use when launching algorithm
-        policy: {Policy} -- Policy implementation strategy to select operators
+        evaluator: {:class:`~macop.evaluators.base.Evaluator`} -- evaluator instance in order to obtained fitness (mono or multiple objectives)
+        operators: {[:class:`~macop.operators.base.Operator`]} -- list of operator to use when launching algorithm
+        policy: {:class:`~macop.policies.base.Policy`} -- Policy implementation strategy to select operators
         validator: {function} -- basic function to check if solution is valid or not under some constraints
         maximise: {bool} -- specify kind of optimisation problem 
         verbose: {bool} -- verbose or not information about the algorithm
-        currentSolution: {Solution} -- current solution managed for current evaluation comparison
-        bestSolution: {Solution} -- best solution found so far during running algorithm
-        callbacks: {[Callback]} -- list of Callback class implementation to do some instructions every number of evaluations and `load` when initialising algorithm
-        parent: {Algorithm} -- parent algorithm reference in case of inner Algorithm instance (optional)
+        currentSolution: {:class:`~macop.solutions.base.Solution`} -- current solution managed for current evaluation comparison
+        bestSolution: {:class:`~macop.solutions.base.Solution`} -- best solution found so far during running algorithm
+        callbacks: {[:class:`~macop.callbacks.base.Callback`]} -- list of Callback class implementation to do some instructions every number of evaluations and `load` when initialising algorithm
+        parent: {:class:`~macop.algorithms.base.Algorithm`} -- parent algorithm reference in case of inner Algorithm instance (optional)
     """
     def __init__(self,
                  initialiser,
@@ -47,12 +47,12 @@ class Algorithm():
 
         Args:
             initialiser: {function} -- basic function strategy to initialise solution
-            evaluator: {Evaluator} -- evaluator instance in order to obtained fitness (mono or multiple objectives)
-            operators: {[Operator]} -- list of operator to use when launching algorithm
-            policy: {Policy} -- Policy implementation strategy to select operators
+            evaluator: {:class:`~macop.evaluators.base.Evaluator`} -- evaluator instance in order to obtained fitness (mono or multiple objectives)
+            operators: {[:class:`~macop.operators.base.Operator`]} -- list of operator to use when launching algorithm
+            policy: {:class:`~macop.policies.base.Policy`} -- Policy implementation strategy to select operators
             validator: {function} -- basic function to check if solution is valid or not under some constraints
             maximise: {bool} -- specify kind of optimisation problem 
-            parent: {Algorithm} -- parent algorithm reference in case of inner Algorithm instance (optional)
+            parent: {:class:`~macop.algorithms.base.Algorithm`} -- parent algorithm reference in case of inner Algorithm instance (optional)
             verbose: {bool} -- verbose or not information about the algorithm
         """
 
@@ -97,7 +97,7 @@ class Algorithm():
         """Add new callback to algorithm specifying usefull parameters
 
         Args:
-            callback: {Callback} -- specific Callback instance
+            callback: {:class:`~macop.callbacks.base.Callback`} -- specific Callback instance
         """
         # specify current main algorithm reference for callback
         if self._parent is not None:
@@ -120,7 +120,7 @@ class Algorithm():
         """Recursively find the main parent algorithm attached of the current algorithm
 
         Returns:
-            {Algorithm} -- main algorithm set for this algorithm
+            {:class:`~macop.algorithms.base.Algorithm`}: main algorithm set for this algorithm
         """
 
         current_algorithm = self
@@ -137,21 +137,23 @@ class Algorithm():
         """Set parent algorithm to current algorithm
 
         Args:
-            parent: {Algorithm} -- main algorithm set for this algorithm
+            parent: {:class:`~macop.algorithms.base.Algorithm`} -- main algorithm set for this algorithm
         """
         self._parent = parent
 
-    def getResult(self):
+    @property
+    def result(self):
         """Get the expected result of the current algorithm
 
         By default the best solution (but can be anything you want)
 
         Returns:
-            {object} -- expected result data of the current algorithm
+            {object}: expected result data of the current algorithm
         """
         return self._bestSolution
 
-    def setDefaultResult(self, result):
+    @result.setter
+    def result(self, result):
         """Set current default result of the algorithm
 
         Args:
@@ -190,7 +192,7 @@ class Algorithm():
         """Get the global number of evaluation (if inner algorithm)
 
         Returns:
-            {int} -- current global number of evaluation
+            {int}: current global number of evaluation
         """
         parent_algorithm = self.getParent()
 
@@ -203,7 +205,7 @@ class Algorithm():
         """Get the current number of evaluation
 
         Returns:
-            {int} -- current number of evaluation
+            {int}: current number of evaluation
         """
         return self._numberOfEvaluations
 
@@ -219,7 +221,7 @@ class Algorithm():
         """Get the global max number of evaluation (if inner algorithm)
 
         Returns:
-            {int} -- current global max number of evaluation
+            {int}: current global max number of evaluation
         """
 
         parent_algorithm = self.getParent()
@@ -246,10 +248,10 @@ class Algorithm():
         Evaluate a solution using evaluator passed when intialize algorithm
 
         Args:
-            solution: {Solution} -- solution to evaluate
+            solution: {:class:`~macop.solutions.base.Solution`} -- solution to evaluate
 
         Returns: 
-            {float} -- fitness score of solution which is not already evaluated or changed
+            {float}: fitness score of solution which is not already evaluated or changed
 
         Note: 
             if multi-objective problem this method can be updated using array of `evaluator`
@@ -262,10 +264,10 @@ class Algorithm():
         Check if solution is valid after modification and returns it
         
         Args:
-            solution: {Solution} -- solution to update using current policy
+            solution: {:class:`~macop.solutions.base.Solution`} -- solution to update using current policy
 
         Returns:
-            {Solution} -- updated solution obtained by the selected operator
+            {:class:`~macop.solutions.base.Solution`}: updated solution obtained by the selected operator
         """
 
         # two parameters are sent if specific crossover solution are wished
@@ -289,20 +291,20 @@ class Algorithm():
         - fitness comparison is done using problem nature (maximising or minimising)
 
         Args:
-            solution: {Solution} -- solution to compare with best one
+            solution: {:class:`~macop.solutions.base.Solution`} -- solution to compare with best one
 
         Returns:
-            {bool} -- `True` if better
+            {bool}: `True` if better
         """
         if not solution.isValid(self.validator):
             return False
 
         # depending of problem to solve (maximizing or minimizing)
         if self._maximise:
-            if solution.fitness() > self._bestSolution.fitness():
+            if solution.fitness > self._bestSolution.fitness:
                 return True
         else:
-            if solution.fitness() < self._bestSolution.fitness():
+            if solution.fitness < self._bestSolution.fitness:
                 return True
 
         # by default
@@ -359,7 +361,7 @@ class Algorithm():
 
     def information(self):
         logging.info(
-            f"-- Best {self._bestSolution} - SCORE {self._bestSolution.fitness()}"
+            f"-- Best {self._bestSolution} - SCORE {self._bestSolution.fitness}"
         )
 
     def __str__(self):
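
The same renaming recurs in every file touched by this commit, so as a compact summary, here is how caller code changes once these properties are in place (assuming ``solution`` and ``algo`` are existing macop ``Solution``/``Algorithm`` instances):

```python
# before this commit: getter/setter methods
values = solution.getData()
score = solution.fitness()
best = algo.getResult()
algo.setDefaultResult(best)

# after this commit: plain attribute access through @property
values = solution.data
score = solution.fitness
best = algo.result
algo.result = best
```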

+ 35 - 35
macop/algorithms/mono.py

@@ -17,15 +17,15 @@ class HillClimberFirstImprovment(Algorithm):
 
     Attributes:
         initalizer: {function} -- basic function strategy to initialise solution
-        evaluator: {Evaluator} -- evaluator instance in order to obtained fitness (mono or multiple objectives)
-        operators: {[Operator]} -- list of operator to use when launching algorithm
-        policy: {Policy} -- Policy class implementation strategy to select operators
+        evaluator: {:class:`~macop.evaluators.base.Evaluator`} -- evaluator instance in order to obtained fitness (mono or multiple objectives)
+        operators: {[:class:`~macop.operators.base.Operator`]} -- list of operator to use when launching algorithm
+        policy: {:class:`~macop.policies.base.Policy`} -- Policy class implementation strategy to select operators
         validator: {function} -- basic function to check if solution is valid or not under some constraints
         maximise: {bool} -- specify kind of optimisation problem 
-        currentSolution: {Solution} -- current solution managed for current evaluation
-        bestSolution: {Solution} -- best solution found so far during running algorithm
-        callbacks: {[Callback]} -- list of Callback class implementation to do some instructions every number of evaluations and `load` when initialising algorithm
-        parent: {Algorithm} -- parent algorithm reference in case of inner Algorithm instance (optional)
+        currentSolution: {:class:`~macop.solutions.base.Solution`} -- current solution managed for current evaluation
+        bestSolution: {:class:`~macop.solutions.base.Solution`} -- best solution found so far during running algorithm
+        callbacks: {[:class:`~macop.callbacks.base.Callback`]} -- list of Callback class implementation to do some instructions every number of evaluations and `load` when initialising algorithm
+        parent: {:class:`~macop.algorithms.base.Algorithm`} -- parent algorithm reference in case of inner Algorithm instance (optional)
     
     Example:
 
@@ -52,7 +52,7 @@ class HillClimberFirstImprovment(Algorithm):
     >>>
     >>> # validator specification (based on weights of each objects)
     >>> weights = [ random.randint(5, 30) for i in range(problem_size) ]
-    >>> validator = lambda solution: True if sum([weights[i] for i, value in enumerate(solution.getData()) if value == 1]) < 200 else False
+    >>> validator = lambda solution: True if sum([weights[i] for i, value in enumerate(solution.data) if value == 1]) < 200 else False
     >>>
     >>> # initialiser function for binary solution using specific solution size
     >>> initialiser = lambda x=20: BinarySolution.random(x, validator)
@@ -75,7 +75,7 @@ class HillClimberFirstImprovment(Algorithm):
             evaluations: {int} -- number of Local search evaluations
             
         Returns:
-            {Solution} -- best solution found
+            {:class:`~macop.solutions.base.Solution`}: best solution found
         """
 
         # by default use of mother method to initialise variables
@@ -104,7 +104,7 @@ class HillClimberFirstImprovment(Algorithm):
 
                 self.progress()
                 logging.info(
-                    f"---- Current {newSolution} - SCORE {newSolution.fitness()}"
+                    f"---- Current {newSolution} - SCORE {newSolution.fitness}"
                 )
 
                 # stop algorithm if necessary
@@ -130,15 +130,15 @@ class HillClimberBestImprovment(Algorithm):
 
     Attributes:
         initalizer: {function} -- basic function strategy to initialise solution
-        evaluator: {Evaluator} -- evaluator instance in order to obtained fitness (mono or multiple objectives)
-        operators: {[Operator]} -- list of operator to use when launching algorithm
-        policy: {Policy} -- Policy class implementation strategy to select operators
+        evaluator: {:class:`~macop.evaluators.base.Evaluator`} -- evaluator instance in order to obtained fitness (mono or multiple objectives)
+        operators: {[:class:`~macop.operators.base.Operator`]} -- list of operator to use when launching algorithm
+        policy: {:class:`~macop.policies.base.Policy`} -- Policy class implementation strategy to select operators
         validator: {function} -- basic function to check if solution is valid or not under some constraints
         maximise: {bool} -- specify kind of optimisation problem 
-        currentSolution: {Solution} -- current solution managed for current evaluation
-        bestSolution: {Solution} -- best solution found so far during running algorithm
-        callbacks: {[Callback]} -- list of Callback class implementation to do some instructions every number of evaluations and `load` when initialising algorithm
-        parent: {Algorithm} -- parent algorithm reference in case of inner Algorithm instance (optional)
+        currentSolution: {:class:`~macop.solutions.base.Solution`} -- current solution managed for current evaluation
+        bestSolution: {:class:`~macop.solutions.base.Solution`} -- best solution found so far during running algorithm
+        callbacks: {[:class:`~macop.callbacks.base.Callback`]} -- list of Callback class implementation to do some instructions every number of evaluations and `load` when initialising algorithm
+        parent: {:class:`~macop.algorithms.base.Algorithm`} -- parent algorithm reference in case of inner Algorithm instance (optional)
     
     Example:
 
@@ -165,7 +165,7 @@ class HillClimberBestImprovment(Algorithm):
     >>>
     >>> # validator specification (based on weights of each objects)
     >>> weights = [ random.randint(5, 30) for i in range(problem_size) ]
-    >>> validator = lambda solution: True if sum([weights[i] for i, value in enumerate(solution.getData()) if value == 1]) < 200 else False
+    >>> validator = lambda solution: True if sum([weights[i] for i, value in enumerate(solution.data) if value == 1]) < 200 else False
     >>>
     >>> # initialiser function for binary solution using specific solution size
     >>> initialiser = lambda x=20: BinarySolution.random(x, validator)
@@ -188,7 +188,7 @@ class HillClimberBestImprovment(Algorithm):
             evaluations: {int} -- number of Local search evaluations
             
         Returns:
-            {Solution} -- best solution found
+            {:class:`~macop.solutions.base.Solution`}: best solution found
         """
 
         # by default use of mother method to initialise variables
@@ -216,7 +216,7 @@ class HillClimberBestImprovment(Algorithm):
 
                 self.progress()
                 logging.info(
-                    f"---- Current {newSolution} - SCORE {newSolution.fitness()}"
+                    f"---- Current {newSolution} - SCORE {newSolution.fitness}"
                 )
 
                 # stop algorithm if necessary
@@ -244,15 +244,15 @@ class IteratedLocalSearch(Algorithm):
     Attributes:
         initalizer: {function} -- basic function strategy to initialise solution
         evaluator: {function} -- basic function in order to obtained fitness (mono or multiple objectives)
-        operators: {[Operator]} -- list of operator to use when launching algorithm
-        policy: {Policy} -- Policy class implementation strategy to select operators
+        operators: {[:class:`~macop.operators.base.Operator`]} -- list of operator to use when launching algorithm
+        policy: {:class:`~macop.policies.base.Policy`} -- Policy class implementation strategy to select operators
         validator: {function} -- basic function to check if solution is valid or not under some constraints
         maximise: {bool} -- specify kind of optimisation problem 
-        currentSolution: {Solution} -- current solution managed for current evaluation
-        bestSolution: {Solution} -- best solution found so far during running algorithm
-        localSearch: {Algorithm} -- current local search into ILS
-        callbacks: {[Callback]} -- list of Callback class implementation to do some instructions every number of evaluations and `load` when initialising algorithm
-        parent: {Algorithm} -- parent algorithm reference in case of inner Algorithm instance (optional)
+        currentSolution: {:class:`~macop.solutions.base.Solution`} -- current solution managed for current evaluation
+        bestSolution: {:class:`~macop.solutions.base.Solution`} -- best solution found so far during running algorithm
+        localSearch: {:class:`~macop.algorithms.base.Algorithm`} -- current local search into ILS
+        callbacks: {[:class:`~macop.callbacks.base.Callback`]} -- list of Callback class implementation to do some instructions every number of evaluations and `load` when initialising algorithm
+        parent: {:class:`~macop.algorithms.base.Algorithm`} -- parent algorithm reference in case of inner Algorithm instance (optional)
     
     Example:
 
@@ -280,7 +280,7 @@ class IteratedLocalSearch(Algorithm):
     >>>
     >>> # validator specification (based on weights of each objects)
     >>> weights = [ random.randint(5, 30) for i in range(problem_size) ]
-    >>> validator = lambda solution: True if sum([weights[i] for i, value in enumerate(solution.getData()) if value == 1]) < 200 else False
+    >>> validator = lambda solution: True if sum([weights[i] for i, value in enumerate(solution.data) if value == 1]) < 200 else False
     >>>
     >>> # initialiser function with lambda function
     >>> initialiser = lambda x=20: BinarySolution.random(x, validator)
@@ -306,17 +306,17 @@ class IteratedLocalSearch(Algorithm):
                  maximise=True,
                  parent=None,
                  verbose=True):
-        """Iterated Local Search Algorithm initialisation with use of specific LocalSearch {Algorithm} instance
+        """Iterated Local Search Algorithm initialisation with use of specific LocalSearch {:class:`~macop.algorithms.base.Algorithm`} instance
 
         Args:
             initialiser: {function} -- basic function strategy to initialise solution
-            evaluator: {Evaluator} -- evaluator instance in order to obtained fitness (mono or multiple objectives)
-            operators: {[Operator]} -- list of operator to use when launching algorithm
-            policy: {Policy} -- Policy implementation strategy to select operators
+            evaluator: {:class:`~macop.evaluators.base.Evaluator`} -- evaluator instance in order to obtained fitness (mono or multiple objectives)
+            operators: {[:class:`~macop.operators.base.Operator`]} -- list of operator to use when launching algorithm
+            policy: {:class:`~macop.policies.base.Policy`} -- Policy implementation strategy to select operators
             validator: {function} -- basic function to check if solution is valid or not under some constraints
-            localSearch: {Algorithm} -- current local search into ILS
+            localSearch: {:class:`~macop.algorithms.base.Algorithm`} -- current local search into ILS
             maximise: {bool} -- specify kind of optimisation problem 
-            parent: {Algorithm} -- parent algorithm reference in case of inner Algorithm instance (optional)
+            parent: {:class:`~macop.algorithms.base.Algorithm`} -- parent algorithm reference in case of inner Algorithm instance (optional)
             verbose: {bool} -- verbose or not information about the algorithm
         """
 
@@ -337,7 +337,7 @@ class IteratedLocalSearch(Algorithm):
             ls_evaluations: {int} -- number of Local search evaluations (default: 100)
 
         Returns:
-            {Solution} -- best solution found
+            {:class:`~macop.solutions.base.Solution`}: best solution found
         """
 
         # by default use of mother method to initialise variables

+ 27 - 28
macop/algorithms/multi.py

@@ -22,16 +22,16 @@ class MOEAD(Algorithm):
         nObjectives: {int} -- number of objectives (based of number evaluator)
         initialiser: {function} -- basic function strategy to initialise solution
         evaluator: {[function]} -- list of basic function in order to obtained fitness (multiple objectives)
-        operators: {[Operator]} -- list of operator to use when launching algorithm
-        policy: {Policy} -- Policy class implementation strategy to select operators
+        operators: {[:class:`~macop.operators.base.Operator`]} -- list of operator to use when launching algorithm
+        policy: {:class:`~macop.policies.base.Policy`} -- Policy class implementation strategy to select operators
         validator: {function} -- basic function to check if solution is valid or not under some constraints
         maximise: {bool} -- specify kind of optimisation problem
         verbose: {bool} -- verbose or not information about the algorithm
-        population: [{Solution}] -- population of solution, one for each sub problem
-        pfPop: [{Solution}] -- pareto front population
+        population: [{:class:`~macop.solutions.base.Solution`}] -- population of solution, one for each sub problem
+        pfPop: [{:class:`~macop.solutions.base.Solution`}] -- pareto front population
         weights: [[{float}]] -- random weights used for custom mu sub problems
-        callbacks: {[Callback]} -- list of Callback class implementation to do some instructions every number of evaluations and `load` when initialising algorithm
-        parent: {Algorithm} -- parent algorithm reference in case of inner Algorithm instance (optional)
+        callbacks: {[:class:`~macop.callbacks.base.Callback`]} -- list of Callback class implementation to do some instructions every number of evaluations and `load` when initialising algorithm
+        parent: {:class:`~macop.algorithms.base.Algorithm`} -- parent algorithm reference in case of inner Algorithm instance (optional)
 
     >>> import random
     >>>
@@ -58,7 +58,7 @@ class MOEAD(Algorithm):
     >>>
     >>> # validator specification (based on weights of each objects)
     >>> weights = [ random.randint(5, 30) for i in range(problem_size) ]
-    >>> validator = lambda solution: True if sum([weights[i] for i, value in enumerate(solution.getData()) if value == 1]) < 200 else False
+    >>> validator = lambda solution: True if sum([weights[i] for i, value in enumerate(solution.data) if value == 1]) < 200 else False
     >>>
     >>> # initialiser function for binary solution using specific solution size
     >>> initialiser = lambda x=20: BinarySolution.random(x, validator)
@@ -95,11 +95,11 @@ class MOEAD(Algorithm):
             T: {[float]} -- number of neightbors for each sub problem
             initialiser: {function} -- basic function strategy to initialise solution
             evaluator: {[function]} -- list of basic function in order to obtained fitness (multiple objectives)
-            operators: {[Operator]} -- list of operator to use when launching algorithm
-            policy: {Policy} -- Policy class implementation strategy to select operators
+            operators: {[:class:`~macop.operators.base.Operator`]} -- list of operator to use when launching algorithm
+            policy: {:class:`~macop.policies.base.Policy`} -- Policy class implementation strategy to select operators
             validator: {function} -- basic function to check if solution is valid or not under some constraints
             maximise: {bool} -- specify kind of optimisation problem
-            parent: {Algorithm} -- parent algorithm reference in case of inner Algorithm instance (optional)
+            parent: {:class:`~macop.algorithms.base.Algorithm`} -- parent algorithm reference in case of inner Algorithm instance (optional)
             verbose: {bool} -- verbose or not information about the algorithm
         """
 
@@ -215,7 +215,7 @@ class MOEAD(Algorithm):
             evaluations: {int} -- number of Local search evaluations
             
         Returns:
-            {Solution} -- best solution found
+            {:class:`~macop.solutions.base.Solution`}: best solution found
         """
 
         # by default use of mother method to initialise variables
@@ -250,8 +250,7 @@ class MOEAD(Algorithm):
                 # for each neighbor of current sub problem update solution if better
                 improvment = False
                 for j in self._neighbors[i]:
-                    if spBestSolution.fitness(
-                    ) > self._subProblems[j]._bestSolution.fitness():
+                    if spBestSolution.fitness > self._subProblems[j]._bestSolution.fitness:
 
                         # create new solution based on current new if better, computes fitness associated to new solution for sub problem
                         newSolution = spBestSolution.clone()
@@ -380,15 +379,15 @@ class MOEAD(Algorithm):
 
         return paFront
 
-    def getResult(self):
+    @property
+    def result(self):
         """Get the expected result of the current algorithm
 
         Returns:
-            [{Solution}] -- pareto front population
+            [{:class:`~macop.solutions.base.Solution`}]: pareto front population
         """
         return self._pfPop
 
-    def setDefaultResult(self, result):
+    @result.setter
+    def result(self, result):
         """Set current default result of the algorithm
 
         Args:
@@ -429,15 +428,15 @@ class MOSubProblem(Algorithm):
         weights: {[float]} -- sub problems objectives weights
         initalizer: {function} -- basic function strategy to initialise solution
         evaluator: {function} -- basic function in order to obtained fitness (mono or multiple objectives)
-        operators: {[Operator]} -- list of operator to use when launching algorithm
-        policy: {Policy} -- Policy class implementation strategy to select operators
+        operators: {[:class:`~macop.operators.base.Operator`]} -- list of operator to use when launching algorithm
+        policy: {:class:`~macop.policies.base.Policy`} -- Policy class implementation strategy to select operators
         validator: {function} -- basic function to check if solution is valid or not under some constraints
         maximise: {bool} -- specify kind of optimisation problem 
         verbose: {bool} -- verbose or not information about the algorithm
-        currentSolution: {Solution} -- current solution managed for current evaluation
-        bestSolution: {Solution} -- best solution found so far during running algorithm
-        callbacks: {[Callback]} -- list of Callback class implementation to do some instructions every number of evaluations and `load` when initialising algorithm
-        parent: {Algorithm} -- parent algorithm reference in case of inner Algorithm instance (optional)
+        currentSolution: {:class:`~macop.solutions.base.Solution`} -- current solution managed for current evaluation
+        bestSolution: {:class:`~macop.solutions.base.Solution`} -- best solution found so far during running algorithm
+        callbacks: {[:class:`~macop.callbacks.base.Callback`]} -- list of Callback class implementation to do some instructions every number of evaluations and `load` when initialising algorithm
+        parent: {:class:`~macop.algorithms.base.Algorithm`} -- parent algorithm reference in case of inner Algorithm instance (optional)
     
     Example:
 
@@ -466,7 +465,7 @@ class MOSubProblem(Algorithm):
     >>>
     >>> # validator specification (based on weights of each objects)
     >>> weights = [ random.randint(5, 30) for i in range(problem_size) ]
-    >>> validator = lambda solution: True if sum([weights[i] for i, value in enumerate(solution.getData()) if value == 1]) < 200 else False
+    >>> validator = lambda solution: True if sum([weights[i] for i, value in enumerate(solution.data) if value == 1]) < 200 else False
     >>>
     >>> # initialiser function for binary solution using specific solution size
     >>> initialiser = lambda x=20: BinarySolution.random(x, validator)
@@ -506,11 +505,11 @@ class MOSubProblem(Algorithm):
             weights: {[float]} -- sub problems objectives weights
             initalizer: {function} -- basic function strategy to initialise solution
             evaluator: {function} -- basic function in order to obtained fitness (mono or multiple objectives)
-            operators: {[Operator]} -- list of operator to use when launching algorithm
-            policy: {Policy} -- Policy class implementation strategy to select operators
+            operators: {[:class:`~macop.operators.base.Operator`]} -- list of operator to use when launching algorithm
+            policy: {:class:`~macop.policies.base.Policy`} -- Policy class implementation strategy to select operators
             validator: {function} -- basic function to check if solution is valid or not under some constraints
             maximise: {bool} -- specify kind of optimisation problem 
-            parent: {Algorithm} -- parent algorithm reference in case of inner Algorithm instance (optional)
+            parent: {:class:`~macop.algorithms.base.Algorithm`} -- parent algorithm reference in case of inner Algorithm instance (optional)
             verbose: {bool} -- verbose or not information about the algorithm
         """
 
@@ -530,7 +529,7 @@ class MOSubProblem(Algorithm):
             evaluations: {int} -- number of evaluations
             
         Returns:
-            {Solution} -- best solution found
+            {:class:`~macop.solutions.base.Solution`}: best solution found
         """
 
         # by default use of mother method to initialise variables
@@ -566,7 +565,7 @@ class MOSubProblem(Algorithm):
                 break
 
             logging.info(
-                f"---- Current {newSolution} - SCORE {newSolution.fitness()}")
+                f"---- Current {newSolution} - SCORE {newSolution.fitness}")
 
             logging.info(
                 f"End of {type(self).__name__}, best solution found {self._bestSolution}"

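The doctest above builds its knapsack validator as a lambda over `solution.data`. The same constraint can be written as a small standalone helper; a minimal sketch, assuming the doctest's illustrative weights and the 200 weight limit (the seed is only there to make the run reproducible):

```python
import random
from macop.solutions.discrete import BinarySolution

random.seed(42)

problem_size = 20
weights = [random.randint(5, 30) for _ in range(problem_size)]

# reject any solution whose selected items weigh 200 or more
def validator(solution):
    return sum(weights[i] for i, value in enumerate(solution.data) if value == 1) < 200

# BinarySolution.random keeps drawing until the validator accepts the solution
solution = BinarySolution.random(problem_size, validator)
print(validator(solution))  # True
```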
+ 2 - 2
macop/callbacks/base.py

@@ -11,7 +11,7 @@ class Callback():
     Callback abstract class in order to compute some instruction every evaluation
 
     Attributes:
-        algo: {Algorithm} -- main algorithm instance reference
+        algo: {:class:`~macop.algorithms.base.Algorithm`} -- main algorithm instance reference
         every: {int} -- checkpoint frequency used (based on number of evaluations)
         filepath: {str} -- file path where checkpoints will be saved
     """
@@ -37,7 +37,7 @@ class Callback():
         """Specify the main algorithm instance reference
 
         Args:
-            algo: {Algorithm} -- main algorithm instance reference
+            algo: {:class:`~macop.algorithms.base.Algorithm`} -- main algorithm instance reference
         """
         self._algo = algo
 

+ 14 - 14
macop/callbacks/classicals.py

@@ -16,7 +16,7 @@ class BasicCheckpoint(Callback):
     BasicCheckpoint is used for loading previous computations and start again after loading checkpoint
 
     Attributes:
-        algo: {Algorithm} -- main algorithm instance reference
+        algo: {:class:`~macop.algorithms.base.Algorithm`} -- main algorithm instance reference
         every: {int} -- checkpoint frequency used (based on number of evaluations)
         filepath: {str} -- file path where checkpoints will be saved
     """
@@ -25,7 +25,7 @@ class BasicCheckpoint(Callback):
         Check if necessary to do backup based on `every` variable
         """
         # get current best solution
-        solution = self._algo.getResult()
+        solution = self._algo.result
 
         currentEvaluation = self._algo.getGlobalEvaluation()
 
@@ -34,17 +34,17 @@ class BasicCheckpoint(Callback):
 
             logging.info("Checkpoint is done into " + self._filepath)
 
-            solutionData = ""
-            solutionSize = len(solution.getData())
+            solution_data = ""
+            solutionSize = len(solution.data)
 
-            for index, val in enumerate(solution.getData()):
-                solutionData += str(val)
+            for index, val in enumerate(solution.data):
+                solution_data += str(val)
 
                 if index < solutionSize - 1:
-                    solutionData += ' '
+                    solution_data += ' '
 
-            line = str(currentEvaluation) + ';' + solutionData + ';' + str(
-                solution.fitness()) + ';\n'
+            line = str(currentEvaluation) + ';' + solution_data + ';' + str(
+                solution.fitness) + ';\n'
 
             # check if file exists
             if not os.path.exists(self._filepath):
@@ -76,13 +76,13 @@ class BasicCheckpoint(Callback):
                     self._algo.setEvaluation(globalEvaluation)
 
                 # get best solution data information
-                solutionData = list(map(int, data[1].split(' ')))
+                solution_data = list(map(int, data[1].split(' ')))
 
-                if self._algo.getResult() is None:
-                    self._algo.setDefaultResult(self._algo.initialiser())
+                if self._algo.result is None:
+                    self._algo.result = self._algo.initialiser()
 
-                self._algo.getResult().setData(np.array(solutionData))
-                self._algo.getResult().setScore(float(data[2]))
+                self._algo.result.data = np.array(solution_data)
+                self._algo.result.fitness = float(data[2])
 
             macop_line(self._algo)
             macop_text(self._algo,

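For reference, the checkpoint line handled above is simply `evaluation;space-separated data;fitness;`. A standalone sketch of writing and parsing that format (the helper names and values are illustrative, not part of Macop):

```python
import numpy as np

def write_checkpoint_line(evaluation, data, fitness):
    # "<evaluation>;<space separated data>;<fitness>;" as produced by BasicCheckpoint.run
    solution_data = ' '.join(str(value) for value in data)
    return f"{evaluation};{solution_data};{fitness};\n"

def parse_checkpoint_line(line):
    # inverse operation, as performed by BasicCheckpoint.load
    fields = line.strip().split(';')
    return int(fields[0]), np.array(list(map(int, fields[1].split(' ')))), float(fields[2])

line = write_checkpoint_line(42, [0, 1, 0, 1, 1], 28.0)
evaluation, data, fitness = parse_checkpoint_line(line)
print(evaluation, list(data), fitness)  # 42 [0, 1, 0, 1, 1] 28.0
```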
+ 20 - 20
macop/callbacks/multi.py

@@ -16,7 +16,7 @@ class MultiCheckpoint(Callback):
     MultiCheckpoint is used for loading previous computations and start again after loading checkpoint
 
     Attributes:
-        algo: {Algorithm} -- main algorithm instance reference
+        algo: {:class:`~macop.algorithms.base.Algorithm`} -- main algorithm instance reference
         every: {int} -- checkpoint frequency used (based on number of evaluations)
         filepath: {str} -- file path where checkpoints will be saved
     """
@@ -37,21 +37,21 @@ class MultiCheckpoint(Callback):
             with open(self._filepath, 'w') as f:
 
                 for solution in population:
-                    solutionData = ""
-                    solutionSize = len(solution.getData())
+                    solution_data = ""
+                    solutionSize = len(solution.data)
 
-                    for index, val in enumerate(solution.getData()):
-                        solutionData += str(val)
+                    for index, val in enumerate(solution.data):
+                        solution_data += str(val)
 
                         if index < solutionSize - 1:
-                            solutionData += ' '
+                            solution_data += ' '
 
                     line = str(currentEvaluation) + ';'
 
                     for i in range(len(self._algo.evaluator)):
                         line += str(solution.scores[i]) + ';'
 
-                    line += solutionData + ';\n'
+                    line += solution_data + ';\n'
 
                     f.write(line)
 
@@ -84,11 +84,11 @@ class MultiCheckpoint(Callback):
                     scores = [float(s) for s in data[1:nObjectives + 1]]
 
                     # get best solution data information
-                    solutionData = list(map(int, data[-1].split(' ')))
+                    solution_data = list(map(int, data[-1].split(' ')))
 
                     # initialise and fill with data
                     self._algo.population[i] = self._algo.initialiser()
-                    self._algo.population[i].setData(np.array(solutionData))
+                    self._algo.population[i].data = np.array(solution_data)
                     self._algo.population[i].scores = scores
 
                     self._algo._pfPop.append(self._algo.population[i])
@@ -117,7 +117,7 @@ class ParetoCheckpoint(Callback):
     Pareto checkpoint is used for loading previous computations and start again after loading checkpoint
 
     Attributes:
-        algo: {Algorithm} -- main algorithm instance reference
+        algo: {:class:`~macop.algorithms.base.Algorithm`} -- main algorithm instance reference
         every: {int} -- checkpoint frequency used (based on number of evaluations)
         filepath: {str} -- file path where checkpoints will be saved
     """
@@ -126,7 +126,7 @@ class ParetoCheckpoint(Callback):
         Check if necessary to do backup based on `every` variable
         """
         # get current population
-        pfPop = self._algo._pfPop
+        pfPop = self._algo.result
 
         currentEvaluation = self._algo.getGlobalEvaluation()
 
@@ -138,21 +138,21 @@ class ParetoCheckpoint(Callback):
             with open(self._filepath, 'w') as f:
 
                 for solution in pfPop:
-                    solutionData = ""
-                    solutionSize = len(solution.getData())
+                    solution_data = ""
+                    solutionSize = len(solution.data)
 
-                    for index, val in enumerate(solution.getData()):
-                        solutionData += str(val)
+                    for index, val in enumerate(solution.data):
+                        solution_data += str(val)
 
                         if index < solutionSize - 1:
-                            solutionData += ' '
+                            solution_data += ' '
 
                     line = ''
 
                     for i in range(len(self._algo.evaluator)):
                         line += str(solution.scores[i]) + ';'
 
-                    line += solutionData + ';\n'
+                    line += solution_data + ';\n'
 
                     f.write(line)
 
@@ -174,10 +174,10 @@ class ParetoCheckpoint(Callback):
                     scores = [float(s) for s in data[0:nObjectives]]
 
                     # get best solution data information
-                    solutionData = list(map(int, data[-1].split(' ')))
+                    solution_data = list(map(int, data[-1].split(' ')))
 
-                    self._algo._pfPop[i]._data = solutionData
-                    self._algo._pfPop[i].scores = scores
+                    self._algo.result[i].data = solution_data
+                    self._algo.result[i].scores = scores
 
             macop_text(
                 self._algo,

+ 2 - 2
macop/callbacks/policies.py

@@ -14,10 +14,10 @@ from macop.utils.progress import macop_text, macop_line
 class UCBCheckpoint(Callback):
     """
     UCB checkpoint is used for loading previous Upper Confidence Bound data and start again after loading checkpoint
-    Need to be the same operators used during previous run (see `macop.policies.reinforcement.UCBPolicy` for more details)
+    Requires the same operators as those used during the previous run (see :class:`~macop.policies.reinforcement.UCBPolicy` for more details)
 
     Attributes:
-        algo: {Algorithm} -- main algorithm instance reference
+        algo: {:class:`~macop.algorithms.base.Algorithm`} -- main algorithm instance reference
         every: {int} -- checkpoint frequency used (based on number of evaluations)
         filepath: {str} -- file path where checkpoints will be saved
     """

+ 2 - 2
macop/evaluators/base.py

@@ -24,7 +24,7 @@ class Evaluator():
         Fitness is a float value for mono-objective or set of float values if multi-objective evaluation
 
         Args:
-            solution: {Solution} -- Solution instance
+            solution: {:class:`~macop.solutions.base.Solution`} -- Solution instance
 
         Return:
             {float} -- computed solution score (float or set of float if multi-objective evaluation)
@@ -36,6 +36,6 @@ class Evaluator():
            The reason is to better manage evaluator instance if necessary
 
         Args:
-            algo: {Algorithm} -- the algorithm reference runned
+            algo: {:class:`~macop.algorithms.base.Algorithm`} -- the algorithm reference being run
         """
         self._algo = algo

+ 11 - 11
macop/evaluators/discrete/mono.py

@@ -36,13 +36,13 @@ class KnapsackEvaluator(Evaluator):
         """Apply the computation of fitness from solution
 
         Args:
-            solution: {Solution} -- Solution instance
+            solution: {:class:`~macop.solutions.base.Solution`} -- Solution instance
     
         Returns:
-            {float} -- fitness score of solution
+            {float}: fitness score of solution
         """
         fitness = 0
-        for index, elem in enumerate(solution.getData()):
+        for index, elem in enumerate(solution.data):
             fitness += self._data['worths'][index] * elem
 
         return fitness
@@ -98,14 +98,14 @@ class QAPEvaluator(Evaluator):
         """Apply the computation of fitness from solution
 
         Args:
-            solution: {Solution} -- QAP solution instance
+            solution: {:class:`~macop.solutions.base.Solution`} -- QAP solution instance
     
         Returns:
-            {float} -- fitness score of solution
+            {float}: fitness score of solution
         """
         fitness = 0
-        for index_i, val_i in enumerate(solution.getData()):
-            for index_j, val_j in enumerate(solution.getData()):
+        for index_i, val_i in enumerate(solution.data):
+            for index_j, val_j in enumerate(solution.data):
                 fitness += self._data['F'][index_i,
                                            index_j] * self._data['D'][val_i,
                                                                       val_j]
@@ -159,14 +159,14 @@ class UBQPEvaluator(Evaluator):
         """Apply the computation of fitness from solution
 
         Args:
-            solution: {Solution} -- UBQP solution instance
+            solution: {:class:`~macop.solutions.base.Solution`} -- UBQP solution instance
     
         Returns:
-            {float} -- fitness score of solution
+            {float}: fitness score of solution
         """
         fitness = 0
-        for index_i, val_i in enumerate(solution.getData()):
-            for index_j, val_j in enumerate(solution.getData()):
+        for index_i, val_i in enumerate(solution.data):
+            for index_j, val_j in enumerate(solution.data):
                 fitness += self._data['Q'][index_i, index_j] * val_i * val_j
 
         return fitness

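The knapsack fitness computed above is just the sum of the worths of the items selected in `solution.data`. The same computation on a raw array, with made-up worths:

```python
import numpy as np

# illustrative worths and binary selection, mirroring KnapsackEvaluator.compute
worths = np.array([10, 5, 8, 3, 12])
data = np.array([1, 0, 1, 0, 1])

fitness = sum(worths[index] * elem for index, elem in enumerate(data))
print(fitness)  # 30
```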
+ 2 - 2
macop/evaluators/discrete/multi.py

@@ -48,10 +48,10 @@ class WeightedSum(Evaluator):
         - Compute weighted-sum for these objectives
 
         Args:
-            solution: {Solution} -- Solution instance
+            solution: {:class:`~macop.solutions.base.Solution`} -- Solution instance
     
         Returns:
-            {float} -- weighted-sum of the fitness scores
+            {float}: weighted-sum of the fitness scores
         """
         scores = [
             evaluator.compute(solution)

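The `WeightedSum` evaluator shown above computes each objective separately and then aggregates the scores with the sub-problem weights. A minimal sketch of that aggregation step, with illustrative scores and weights:

```python
# per-objective scores as returned by the inner evaluators (illustrative values)
scores = [14.0, 9.0]
weights = [0.7, 0.3]

# weighted-sum aggregation, as done by WeightedSum
weighted_sum = sum(weight * score for weight, score in zip(weights, scores))
print(weighted_sum)  # 12.5
```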
+ 9 - 9
macop/operators/base.py

@@ -26,7 +26,7 @@ class Operator():
         """Apply the current operator transformation
 
         Args:
-            solution: {Solution} -- Solution instance
+            solution: {:class:`~macop.solutions.base.Solution`} -- Solution instance
         """
         pass
 
@@ -35,7 +35,7 @@ class Operator():
            The reason is to better manage operator instance
 
         Args:
-            algo: {Algorithm} -- the algorithm reference runned
+            algo: {:class:`~macop.algorithms.base.Algorithm`} -- the algorithm reference being run
         """
         self._algo = algo
 
@@ -44,7 +44,7 @@ class Mutation(Operator):
     """Abstract Mutation extend from Operator
 
     Attributes:
-        kind: {KindOperator} -- specify the kind of operator
+        kind: {:class:`~macop.operators.base.KindOperator`} -- specify the kind of operator
     """
     def __init__(self):
         """Mutation initialiser in order to specify kind of Operator
@@ -55,10 +55,10 @@ class Mutation(Operator):
         """Apply mutation over solution in order to obtained new solution
 
         Args:
-            solution: {Solution} -- solution to use in order to create new solution
+            solution: {:class:`~macop.solutions.base.Solution`} -- solution to use in order to create new solution
 
         Return:
-            {Solution} -- new generated solution
+            {:class:`~macop.solutions.base.Solution`} -- new generated solution
         """
         raise NotImplementedError
 
@@ -67,7 +67,7 @@ class Crossover(Operator):
     """Abstract crossover extend from Operator
 
     Attributes:
-        kind: {KindOperator} -- specify the kind of operator
+        kind: {:class:`~macop.operators.base.KindOperator`} -- specify the kind of operator
     """
     def __init__(self):
         """Crossover initialiser in order to specify kind of Operator
@@ -78,10 +78,10 @@ class Crossover(Operator):
         """Apply crossover using two solutions in order to obtained new solution
 
         Args:
-            solution1: {Solution} -- the first solution to use for generating new solution
-            solution2: {Solution} -- the second solution to use for generating new solution
+            solution1: {:class:`~macop.solutions.base.Solution`} -- the first solution to use for generating new solution
+            solution2: {:class:`~macop.solutions.base.Solution`} -- the second solution to use for generating new solution
 
         Return:
-            {Solution} -- new generated solution
+            {:class:`~macop.solutions.base.Solution`} -- new generated solution
         """
         raise NotImplementedError

+ 24 - 24
macop/operators/discrete/crossovers.py

@@ -12,7 +12,7 @@ class SimpleCrossover(Crossover):
     """Crossover implementation which generated new solution by splitting at mean size best solution and current solution
 
     Attributes:
-        kind: {Algorithm} -- specify the kind of operator
+        kind: {:class:`~macop.operators.base.KindOperator`} -- specify the kind of operator
 
     Example:
 
@@ -37,7 +37,7 @@ class SimpleCrossover(Crossover):
     >>>
     >>> # validator specification (based on weights of each objects)
     >>> weights = [ random.randint(20, 30) for i in range(10) ]
-    >>> validator = lambda solution: True if sum([weights[i] for i, value in enumerate(solution.getData()) if value == 1]) < 200 else False
+    >>> validator = lambda solution: True if sum([weights[i] for i, value in enumerate(solution.data) if value == 1]) < 200 else False
     >>>
     >>> # initialiser function for binary solution using specific solution size
     >>> initialiser = lambda x=10: BinarySolution.random(x, validator)
@@ -52,29 +52,29 @@ class SimpleCrossover(Crossover):
     >>>
     >>> # using best solution, simple crossover is applied
     >>> best_solution = algo.run(100)
-    >>> list(best_solution.getData())
-    [1, 1, 0, 1, 0, 1, 1, 1, 0, 1]
+    >>> list(best_solution.data)
+    [1, 1, 0, 1, 1, 1, 1, 1, 0, 0]
     >>> new_solution_1 = initialiser()
     >>> new_solution_2 = initialiser()
     >>> offspring_solution = simple_crossover.apply(new_solution_1, new_solution_2)
-    >>> list(offspring_solution.getData())
-    [0, 1, 1, 0, 1, 0, 1, 1, 0, 1]
+    >>> list(offspring_solution.data)
+    [0, 1, 0, 0, 0, 1, 1, 0, 1, 1]
     """
     def apply(self, solution1, solution2=None):
         """Create new solution based on best solution found and solution passed as parameter
 
         Args:
-            solution1: {Solution} -- the first solution to use for generating new solution
-            solution2: {Solution} -- the second solution to use for generating new solution
+            solution1: {:class:`~macop.solutions.base.Solution`} -- the first solution to use for generating new solution
+            solution2: {:class:`~macop.solutions.base.Solution`} -- the second solution to use for generating new solution
 
         Returns:
-            {Solution} -- new generated solution
+            {:class:`~macop.solutions.base.Solution`}: new generated solution
         """
 
         size = solution1._size
 
         # copy data of solution
-        firstData = solution1._data.copy()
+        firstData = solution1.data.copy()
 
         # copy of solution2 as output solution
         copy_solution = solution2.clone()
@@ -82,9 +82,9 @@ class SimpleCrossover(Crossover):
         splitIndex = int(size / 2)
 
         if random.uniform(0, 1) > 0.5:
-            copy_solution.getData()[splitIndex:] = firstData[splitIndex:]
+            copy_solution.data[splitIndex:] = firstData[splitIndex:]
         else:
-            copy_solution.getData()[:splitIndex] = firstData[:splitIndex]
+            copy_solution.data[:splitIndex] = firstData[:splitIndex]
 
         return copy_solution
 
@@ -93,7 +93,7 @@ class RandomSplitCrossover(Crossover):
     """Crossover implementation which generated new solution by randomly splitting best solution and current solution
 
     Attributes:
-        kind: {KindOperator} -- specify the kind of operator
+        kind: {:class:`~macop.operators.base.KindOperator`} -- specify the kind of operator
 
     Example:
 
@@ -118,7 +118,7 @@ class RandomSplitCrossover(Crossover):
     >>>
     >>> # validator specification (based on weights of each objects)
     >>> weights = [ random.randint(20, 30) for i in range(10) ]
-    >>> validator = lambda solution: True if sum([weights[i] for i, value in enumerate(solution.getData()) if value == 1]) < 200 else False
+    >>> validator = lambda solution: True if sum([weights[i] for i, value in enumerate(solution.data) if value == 1]) < 200 else False
     >>>
     >>> # initialiser function for binary solution using specific solution size
     >>> initialiser = lambda x=10: BinarySolution.random(x, validator)
@@ -133,28 +133,28 @@ class RandomSplitCrossover(Crossover):
     >>>
     >>> # using best solution, simple crossover is applied
     >>> best_solution = algo.run(100)
-    >>> list(best_solution.getData())
-    [1, 1, 1, 0, 1, 0, 1, 1, 1, 0]
+    >>> list(best_solution.data)
+    [1, 1, 1, 1, 1, 0, 1, 1, 0, 0]
     >>> new_solution_1 = initialiser()
     >>> new_solution_2 = initialiser()
     >>> offspring_solution = random_split_crossover.apply(new_solution_1, new_solution_2)
-    >>> list(offspring_solution.getData())
-    [0, 0, 0, 1, 1, 0, 0, 1, 0, 0]
+    >>> list(offspring_solution.data)
+    [1, 0, 0, 1, 1, 1, 0, 0, 1, 1]
     """
     def apply(self, solution1, solution2=None):
         """Create new solution based on best solution found and solution passed as parameter
 
         Args:
-            solution1: {Solution} -- the first solution to use for generating new solution
-            solution2: {Solution} -- the second solution to use for generating new solution
+            solution1: {:class:`~macop.solutions.base.Solution`} -- the first solution to use for generating new solution
+            solution2: {:class:`~macop.solutions.base.Solution`} -- the second solution to use for generating new solution
 
         Returns:
-            {Solution} -- new generated solution
+            {:class:`~macop.solutions.base.Solution`}: new generated solution
         """
         size = solution1._size
 
         # copy data of solution
-        firstData = solution1._data.copy()
+        firstData = solution1.data.copy()
 
         # copy of solution2 as output solution
         copy_solution = solution2.clone()
@@ -162,8 +162,8 @@ class RandomSplitCrossover(Crossover):
         splitIndex = random.randint(0, size)
 
         if random.uniform(0, 1) > 0.5:
-            copy_solution.getData()[splitIndex:] = firstData[splitIndex:]
+            copy_solution.data[splitIndex:] = firstData[splitIndex:]
         else:
-            copy_solution.getData()[:splitIndex] = firstData[:splitIndex]
+            copy_solution.data[:splitIndex] = firstData[:splitIndex]
 
         return copy_solution

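Both crossovers above clone the second solution and overwrite one half of its `data` with the first solution's values; the only difference is whether the split index is fixed at the middle (`SimpleCrossover`) or drawn at random (`RandomSplitCrossover`). The half-swap, sketched on plain arrays rather than `Solution` instances:

```python
import random
import numpy as np

# illustrative parents; the real operator clones solution2 and works on its data
first = np.array([1, 1, 1, 1, 1, 1])
second = np.array([0, 0, 0, 0, 0, 0])

offspring = second.copy()
split_index = len(first) // 2  # RandomSplitCrossover uses random.randint(0, size) instead

# keep one half of the second parent and take the other half from the first parent
if random.uniform(0, 1) > 0.5:
    offspring[split_index:] = first[split_index:]
else:
    offspring[:split_index] = first[:split_index]

print(list(offspring))  # [0, 0, 0, 1, 1, 1] or [1, 1, 1, 0, 0, 0]
```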
+ 16 - 17
macop/operators/discrete/mutators.py

@@ -12,7 +12,7 @@ class SimpleMutation(Mutation):
     """Mutation implementation for binary solution, swap two bits randomly from solution
 
     Attributes:
-        kind: {KindOperator} -- specify the kind of operator
+        kind: {:class:`~macop.operators.base.KindOperator`} -- specify the kind of operator
 
     Example:
 
@@ -20,21 +20,21 @@ class SimpleMutation(Mutation):
     >>> from macop.solutions.discrete import BinarySolution
     >>> from macop.operators.discrete.mutators import SimpleMutation
     >>> solution = BinarySolution.random(5)
-    >>> list(solution.getData())
+    >>> list(solution.data)
     [1, 0, 0, 0, 1]
     >>> mutator = SimpleMutation()
     >>> mutation_solution = mutator.apply(solution)
-    >>> list(mutation_solution.getData())
+    >>> list(mutation_solution.data)
     [0, 0, 1, 0, 1]
     """
     def apply(self, solution):
         """Create new solution based on solution passed as parameter
 
         Args:
-            solution: {Solution} -- the solution to use for generating new solution
+            solution: {:class:`~macop.solutions.base.Solution`} -- the solution to use for generating new solution
 
         Returns:
-            {Solution} -- new generated solution
+            {:class:`~macop.solutions.base.Solution`}: new generated solution
         """
 
         size = solution._size
@@ -49,12 +49,11 @@ class SimpleMutation(Mutation):
             firstCell = random.randint(0, size - 1)
             secondCell = random.randint(0, size - 1)
 
-        temp = copy_solution.getData()[firstCell]
+        temp = copy_solution.data[firstCell]
 
         # switch values
-        copy_solution.getData()[firstCell] = copy_solution.getData(
-        )[secondCell]
-        copy_solution.getData()[secondCell] = temp
+        copy_solution.data[firstCell] = copy_solution.data[secondCell]
+        copy_solution.data[secondCell] = temp
 
         return copy_solution
 
@@ -63,7 +62,7 @@ class SimpleBinaryMutation(Mutation):
     """Mutation implementation for binary solution, swap bit randomly from solution
 
     Attributes:
-        kind: {KindOperator} -- specify the kind of operator
+        kind: {:class:`~macop.operators.base.KindOperator`} -- specify the kind of operator
 
     Example:
 
@@ -71,21 +70,21 @@ class SimpleBinaryMutation(Mutation):
     >>> from macop.solutions.discrete import BinarySolution
     >>> from macop.operators.discrete.mutators import SimpleBinaryMutation
     >>> solution = BinarySolution.random(5)
-    >>> list(solution.getData())
+    >>> list(solution.data)
     [0, 1, 0, 0, 0]
     >>> mutator = SimpleBinaryMutation()
     >>> mutation_solution = mutator.apply(solution)
-    >>> list(mutation_solution.getData())
+    >>> list(mutation_solution.data)
     [1, 1, 0, 0, 0]
     """
     def apply(self, solution):
         """Create new solution based on solution passed as parameter
 
         Args:
-            solution: {Solution} -- the solution to use for generating new solution
+            solution: {:class:`~macop.solutions.base.Solution`} -- the solution to use for generating new solution
 
         Returns:
-            {Solution} -- new generated solution
+            {:class:`~macop.solutions.base.Solution`}: new generated solution
         """
 
         size = solution._size
@@ -95,9 +94,9 @@ class SimpleBinaryMutation(Mutation):
         copy_solution = solution.clone()
 
         # switch values
-        if copy_solution.getData()[cell]:
-            copy_solution.getData()[cell] = 0
+        if copy_solution.data[cell]:
+            copy_solution.data[cell] = 0
         else:
-            copy_solution.getData()[cell] = 1
+            copy_solution.data[cell] = 1
 
         return copy_solution

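The two mutations above only differ in what they modify on the cloned solution: `SimpleMutation` swaps the values of two distinct random cells, while `SimpleBinaryMutation` flips a single random bit. The swap itself, sketched on a plain array with illustrative data:

```python
import random
import numpy as np

data = np.array([1, 0, 0, 0, 1])
size = len(data)

# pick two distinct cells, as SimpleMutation does
first_cell = random.randint(0, size - 1)
second_cell = random.randint(0, size - 1)
while first_cell == second_cell:
    second_cell = random.randint(0, size - 1)

# swap the two values (the operator uses a temporary variable; the effect is the same)
data[first_cell], data[second_cell] = data[second_cell], data[first_cell]
print(list(data))
```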
+ 7 - 7
macop/policies/base.py

@@ -11,7 +11,7 @@ class Policy():
     """Abstract class which is used for applying strategy when selecting and applying operator 
 
     Attributes:
-        operators: {[Operator]} -- list of selected operators for the algorithm
+        operators: {[:class:`~macop.operators.base.Operator`]} -- list of selected operators for the algorithm
     """
     def __init__(self, operators):
         """Initialise new Policy instance using specific list of operators
@@ -27,7 +27,7 @@ class Policy():
         Select specific operator
 
         Returns:
-            {Operator} -- selected operator
+            {:class:`~macop.operators.base.Operator`}: selected operator
         """
         pass
 
@@ -36,11 +36,11 @@ class Policy():
         Apply specific operator chosen to create new solution, compute its fitness and return solution
         
         Args:
-            solution1: {Solution} -- the first solution to use for generating new solution
-            solution2: {Solution} -- the second solution to use for generating new solution (in case of specific crossover, default is best solution from algorithm)
+            solution1: {:class:`~macop.solutions.base.Solution`} -- the first solution to use for generating new solution
+            solution2: {:class:`~macop.solutions.base.Solution`} -- the second solution to use for generating new solution (in case of specific crossover, default is best solution from algorithm)
 
         Returns:
-            {Solution} -- new generated solution
+            {:class:`~macop.solutions.base.Solution`}: new generated solution
         """
 
         operator = self.select()
@@ -50,7 +50,7 @@ class Policy():
 
         # default value of solution2 is current best solution
         if solution2 is None and self._algo is not None:
-            solution2 = self._algo.getResult()
+            solution2 = self._algo.result
 
         # avoid use of crossover if only one solution is passed
         if solution2 is None and operator._kind == KindOperator.CROSSOVER:
@@ -73,6 +73,6 @@ class Policy():
            The reason is to better manage the operator choices (use of rewards as example)
 
         Args:
-            algo: {Algorithm} -- the algorithm reference runned
+            algo: {:class:`~macop.algorithms.base.Algorithm`} -- the algorithm reference being run
         """
         self._algo = algo

+ 2 - 2
macop/policies/classicals.py

@@ -11,7 +11,7 @@ class RandomPolicy(Policy):
     """Policy class implementation which is used for select operator randomly from the `operators` list
 
     Attributes:
-        operators: {[Operator]} -- list of selected operators for the algorithm
+        operators: {[:class:`~macop.operators.base.Operator`]} -- list of selected operators for the algorithm
 
     Example:
 
@@ -31,7 +31,7 @@ class RandomPolicy(Policy):
         """Select randomly the next operator to use
 
         Returns:
-            {Operator}: the selected operator
+            {:class:`~macop.operators.base.Operator`}: the selected operator
 
         """
         # choose operator randomly

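`RandomPolicy.select` simply draws the next operator uniformly from the `operators` list passed at construction time. A short usage sketch with the mutation operators from the modules above:

```python
from macop.operators.discrete.mutators import SimpleMutation, SimpleBinaryMutation
from macop.policies.classicals import RandomPolicy

# the policy only needs the list of available operators
policy = RandomPolicy([SimpleMutation(), SimpleBinaryMutation()])

operator = policy.select()
print(type(operator).__name__)  # SimpleMutation or SimpleBinaryMutation
```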
+ 13 - 13
macop/policies/reinforcement.py

@@ -22,7 +22,7 @@ class UCBPolicy(Policy):
     - Resource link: https://banditalgs.com/2016/09/18/the-upper-confidence-bound-algorithm/
 
     Attributes:
-        operators: {[Operator]} -- list of selected operators for the algorithm
+        operators: {[:class:`~macop.operators.base.Operator`]} -- list of selected operators for the algorithm
         C: {float} -- The second half of the UCB equation adds exploration, with the degree of exploration being controlled by the hyper-parameter ``C``.
         exp_rate: {float} -- exploration rate (probability to choose randomly next operator)
         rewards: {[float]} -- list of summed rewards obtained for each operator
@@ -56,7 +56,7 @@ class UCBPolicy(Policy):
     >>>
     >>> # validator specification (based on weights of each objects)
     >>> weights = [ random.randint(5, 30) for i in range(20) ]
-    >>> validator = lambda solution: True if sum([weights[i] for i, value in enumerate(solution.getData()) if value == 1]) < 200 else False
+    >>> validator = lambda solution: True if sum([weights[i] for i, value in enumerate(solution.data) if value == 1]) < 200 else False
     >>>
     >>> # initialiser function with lambda function
     >>> initialiser = lambda x=20: BinarySolution.random(x, validator)
@@ -72,13 +72,13 @@ class UCBPolicy(Policy):
     >>> type(solution).__name__
     'BinarySolution'
     >>> policy.occurences # one more due to first evaluation
-    [51, 53]
+    [53, 50]
     """
     def __init__(self, operators, C=100., exp_rate=0.9):
         """UCB Policy initialiser
 
         Args:
-            operators: {[Operator]} -- list of selected operators for the algorithm
+            operators: {[:class:`~macop.operators.base.Operator`]} -- list of selected operators for the algorithm
             C: {float} -- The second half of the UCB equation adds exploration, with the degree of exploration being controlled by the hyper-parameter `C`.
             exp_rate: {float} -- exploration rate (probability to choose randomly next operator)
         """
@@ -96,7 +96,7 @@ class UCBPolicy(Policy):
         """Select using Upper Confidence Bound the next operator to use (using acquired rewards)
 
         Returns:
-            {Operator}: the selected operator
+            {:class:`~macop.operators.base.Operator`}: the selected operator
         """
 
         indices = [i for i, o in enumerate(self.occurences) if o == 0]
@@ -132,11 +132,11 @@ class UCBPolicy(Policy):
         - selected operator occurrence is also increased
 
         Args:
-            solution1: {Solution} -- the first solution to use for generating new solution
-            solution2: {Solution} -- the second solution to use for generating new solution (in case of specific crossover, default is best solution from algorithm)
+            solution1: {:class:`~macop.solutions.base.Solution`} -- the first solution to use for generating new solution
+            solution2: {:class:`~macop.solutions.base.Solution`} -- the second solution to use for generating new solution (in case of specific crossover, default is best solution from algorithm)
 
         Returns:
-            {Solution} -- new generated solution
+            {:class:`~macop.solutions.base.Solution`}: new generated solution
         """
 
         operator = self.select()
@@ -146,7 +146,7 @@ class UCBPolicy(Policy):
 
         # default value of solution2 is current best solution
         if solution2 is None and self._algo is not None:
-            solution2 = self._algo.getResult()
+            solution2 = self._algo.result
 
         # avoid use of crossover if only one solution is passed
         if solution2 is None and operator._kind == KindOperator.CROSSOVER:
@@ -165,11 +165,11 @@ class UCBPolicy(Policy):
 
         # compute fitness improvement rate
         if self._algo._maximise:
-            fir = (newSolution.fitness() -
-                   solution1.fitness()) / solution1.fitness()
+            fir = (newSolution.fitness -
+                   solution1.fitness) / solution1.fitness
         else:
-            fir = (solution1.fitness() -
-                   newSolution.fitness()) / solution1.fitness()
+            fir = (solution1.fitness -
+                   newSolution.fitness) / solution1.fitness
 
         operator_index = self._operators.index(operator)
 

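The reward fed to the UCB update above is the fitness improvement rate (`fir`), which now reads as plain arithmetic thanks to the `fitness` property. A small sketch with made-up fitness values for a maximisation problem:

```python
# fitness improvement rate as computed in UCBPolicy.apply (illustrative values)
maximise = True
previous_fitness = 120.0   # solution1.fitness
new_fitness = 150.0        # newSolution.fitness

if maximise:
    fir = (new_fitness - previous_fitness) / previous_fitness
else:
    fir = (previous_fitness - new_fitness) / previous_fitness

print(fir)  # 0.25
```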
+ 21 - 17
macop/solutions/base.py

@@ -33,7 +33,7 @@ class Solution():
             validator: {function} -- specific function which validates or not a solution
 
         Returns:
-            {bool} -- `True` is solution is valid
+            {bool}: `True` if the solution is valid
         """
         return validator(self)
 
@@ -45,41 +45,45 @@ class Solution():
             _evaluator: {function} -- specific function which computes fitness of solution
 
         Returns:
-            {float} -- fitness score value
+            {float}: fitness score value
         """
         self._score = evaluator.compute(self)
         return self._score
 
+    @property
     def fitness(self):
         """
-        Returns fitness score
+        Returns the fitness score (stored by default in the `_score` private attribute)
 
         Returns:
-            {float} -- fitness score value
+            {float}: fitness score value
         """
         return self._score
 
-    def getData(self):
+    @fitness.setter
+    def fitness(self, score):
         """
-        Returns solution data
+        Set the solution score (stored by default in the `_score` private attribute)
+        """
+        self._score = score
+
+    @property
+    def data(self):
+        """
+        Returns the solution data (stored by default in the `_data` private attribute)
 
         Returns:
-            {ndarray} -- data values
+            {object}: data values
         """
         return self._data
 
-    def setData(self, data):
+    @data.setter
+    def data(self, data):
         """
-        Set solution data
+        Set the solution data (stored by default in the `_data` private attribute)
         """
         self._data = data
 
-    def setScore(self, score):
-        """
-        Set solution score as wished
-        """
-        self._score = score
-
     @staticmethod
     def random(size, validator=None):
         """
@@ -90,7 +94,7 @@ class Solution():
             validator: {function} -- specific function which validates or not a solution (if None, no validation is applied)
 
         Returns:
-            {Solution} -- generated solution
+            {:class:`~macop.solutions.base.Solution`}: generated solution
         """
         return None
 
@@ -98,7 +102,7 @@ class Solution():
         """Clone the current solution and its data, but without keeping evaluated `_score`
 
         Returns:
-            {Solution} -- clone of current solution
+            {:class:`~macop.solutions.base.Solution`}: clone of current solution
         """
         copy_solution = deepcopy(self)
         copy_solution._score = None

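With `data` and `fitness` exposed as properties, the old `getData`/`setData`/`setScore` calls become plain attribute access. A quick sketch using `BinarySolution` (the values are illustrative):

```python
import numpy as np
from macop.solutions.discrete import BinarySolution

solution = BinarySolution([0, 1, 0, 1, 1], 5)

# read access through the new properties
print(list(solution.data))   # [0, 1, 0, 1, 1]

# write access through the property setters
solution.data = np.array([1, 1, 1, 0, 0])
solution.fitness = 42.0
print(solution.fitness)      # 42.0
```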
+ 15 - 15
macop/solutions/discrete.py

@@ -36,11 +36,11 @@ class BinarySolution(Solution):
         >>> solution = BinarySolution(data, len(data))
         >>>
         >>> # check data content
-        >>> sum(solution.getData()) == 3
+        >>> sum(solution.data) == 3
         True
         >>> # clone solution
         >>> solution_copy = solution.clone()
-        >>> all(solution_copy._data == solution.getData())
+        >>> all(solution_copy.data == solution.data)
         True
         """
         super().__init__(np.array(data), size)
@@ -55,16 +55,16 @@ class BinarySolution(Solution):
             validator: {function} -- specific function which validates or not a solution (if None, no validation is applied)
 
         Returns:
-            {BinarySolution} -- new generated binary solution
+            {:class:`~macop.solutions.discrete.BinarySolution`}: new generated binary solution
 
         Example:
 
         >>> from macop.solutions.discrete import BinarySolution
         >>>
         >>> # generate random solution using specific validator
-        >>> validator = lambda solution: True if sum(solution.getData()) > 5 else False
+        >>> validator = lambda solution: True if sum(solution.data) > 5 else False
         >>> solution = BinarySolution.random(10, validator)
-        >>> sum(solution.getData()) > 5
+        >>> sum(solution.data) > 5
         True
         """
 
@@ -109,10 +109,10 @@ class CombinatoryIntegerSolution(Solution):
         >>> import numpy as np
         >>> data = np.arange(5)
         >>> solution = CombinatoryIntegerSolution(data, 5)
-        >>> sum(solution.getData()) == 10
+        >>> sum(solution.data) == 10
         True
         >>> solution_copy = solution.clone()
-        >>> all(solution_copy._data == solution.getData())
+        >>> all(solution_copy.data == solution.data)
         True
         """
         super().__init__(data, size)
@@ -127,16 +127,16 @@ class CombinatoryIntegerSolution(Solution):
             validator: {function} -- specific function which validates or not a solution (if None, no validation is applied)
 
         Returns:
-            {CombinatoryIntegerSolution} -- new generated combinatory integer solution
+            {:class:`~macop.solutions.discrete.CombinatoryIntegerSolution`}: new generated combinatory integer solution
 
         Example:
 
         >>> from macop.solutions.discrete import CombinatoryIntegerSolution
         >>>
         >>> # generate random solution using specific validator
-        >>> validator = lambda solution: True if sum(solution.getData()) > 5 else False
+        >>> validator = lambda solution: True if sum(solution.data) > 5 else False
         >>> solution = CombinatoryIntegerSolution.random(5, validator)
-        >>> sum(solution.getData()) > 5
+        >>> sum(solution.data) > 5
         True
         """
 
@@ -182,10 +182,10 @@ class IntegerSolution(Solution):
         >>> np.random.seed(42)
         >>> data = np.random.randint(5, size=10)
         >>> solution = IntegerSolution(data, 10)
-        >>> sum(solution.getData())
+        >>> sum(solution.data)
         28
         >>> solution_copy = solution.clone()
-        >>> all(solution_copy._data == solution.getData())
+        >>> all(solution_copy.data == solution.data)
         True
         """
         super().__init__(data, size)
@@ -200,7 +200,7 @@ class IntegerSolution(Solution):
             validator: {function} -- specific function which validates or not a solution (if None, no validation is applied)
 
         Returns:
-            {IntegerSolution} -- new generated integer solution
+            {:class:`~macop.solutions.discrete.IntegerSolution`}: new generated integer solution
 
         Example:
 
@@ -209,9 +209,9 @@ class IntegerSolution(Solution):
         >>> np.random.seed(42)
         >>>
         >>> # generate random solution using specific validator
-        >>> validator = lambda solution: True if sum(solution.getData()) > 5 else False
+        >>> validator = lambda solution: True if sum(solution.data) > 5 else False
         >>> solution = IntegerSolution.random(5, validator)
-        >>> sum(solution.getData()) > 10
+        >>> sum(solution.data) > 10
         True
         """
 

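The validators in the doctests above all follow the same pattern: a predicate over `solution.data`. Since any function returning a boolean works, the `True if ... else False` form can be shortened; a minimal sketch:

```python
from macop.solutions.discrete import BinarySolution

# keep only solutions with more than 5 bits set
validator = lambda solution: sum(solution.data) > 5

solution = BinarySolution.random(10, validator)
print(sum(solution.data) > 5)  # True
```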
+ 3 - 3
macop/utils/progress.py

@@ -15,7 +15,7 @@ def macop_text(algo, msg):
     """Display Macop message to user interface
 
     Args:
-        algo: {Algorithm} -- current algorithm instance
+        algo: {:class:`~macop.algorithms.base.Algorithm`} -- current algorithm instance
         msg: {str} -- message to display
     """
     if algo._verbose:
@@ -28,7 +28,7 @@ def macop_line(algo):
     """Macop split line
 
     Args:
-        algo: {Algorithm} -- current algorithm instance
+        algo: {:class:`~macop.algorithms.base.Algorithm`} -- current algorithm instance
     """
 
     if not algo._verbose:
@@ -50,7 +50,7 @@ def macop_progress(algo, evaluations, max):
     """Progress line of macop
 
     Args:
-        algo: {Algorithm} -- current algorithm instance
+        algo: {:class:`~macop.algorithms.base.Algorithm`} -- current algorithm instance
         evaluations: {int} -- current number of evaluations
         max: {int} -- max number of expected evaluations
     """

+ 1 - 1
setup.py

@@ -73,7 +73,7 @@ class TestCommand(distutils.command.check.check):
 
 setup(
     name='macop',
-    version='1.0.11',
+    version='1.0.12',
     description='Minimalist And Customisable Optimisation Package',
     long_description=open('README.md').read(),
     long_description_content_type='text/markdown',