
documentation updates for evaluator / operators

Jérôme BUISINE, 3 years ago
Parent
commit
52e603abe9

+ 1 - 1
.gitignore

@@ -57,6 +57,6 @@ target/
 
 data
 examples/data
-docs/build
+docs/_build
 .vscode
 .python-version

+ 6 - 0
docs/source/_static/css/custom.css

@@ -61,4 +61,10 @@ colgroup :first-child {
 
 .article.pytorch-article .function dt > code, article.pytorch-article .attribute dt > code, article.pytorch-article .class .attribute dt > code, article.pytorch-article .class dt > code {
     color: #009900 !important;
+}
+
+.align-center {
+    display: block;
+    margin-left: auto;
+    margin-right: auto;
 }

BIN
docs/source/_static/documentation/macop_behaviour.png


BIN
docs/source/_static/documentation/project_knapsack_crossover.png


BIN
docs/source/_static/documentation/project_knapsack_mutator.png


+ 3 - 0
docs/source/_static/js/custom.js

@@ -0,0 +1,3 @@
+document.addEventListener("DOMContentLoaded", (event) => {
+    Array.from(document.getElementsByClassName('header-logo')).forEach(element => element.href = "index.html")
+})

+ 4 - 0
docs/source/conf.py

@@ -59,6 +59,10 @@ html_css_files = [
     'css/custom.css',
 ]
 
+html_js_files = [
+    'js/custom.js',
+]
+
 # autoapi_add_toctree_entry = True
 # autoapi_template_dir = '_autoapi_templates'
 # autoapi_dirs = ['../../rawls']

+ 8 - 1
docs/source/description.rst

@@ -9,7 +9,14 @@ Description
 Context
 ------------
 
-`macop` is an optimisation Python package which not implement the whole available algorithms in the literature but let you the possibility to quickly develop and test your own algorithm and strategies. The main objective of this package is to be the most flexible as possible and hence, to offer a maximum of implementation possibilities.
+Based on its generic behaviour, each **Macop** algorithm run can be represented as an interactive loop where you can interact with the algorithm and specify your needs at each step:
+
+.. image:: _static/documentation/macop_behaviour.png
+   :width: 450 px
+   :align: center
+
+The package is strongly oriented towards combinatorial (hence discrete) optimisation, but it remains possible to extend it to continuous optimisation.
+
 
 Installation
 ------------

+ 2 - 0
docs/source/documentations/algorithms.rst

@@ -0,0 +1,2 @@
+8. Optimisation process
+=======================

+ 2 - 0
docs/source/documentations/callbacks.rst

@@ -0,0 +1,2 @@
+9. Keep track
+==============

+ 97 - 0
docs/source/documentations/evaluators.rst

@@ -0,0 +1,97 @@
+5. Use of evaluators
+====================
+
+Now that it is possible to generate a solution, randomly or not, it is important to know the value associated with this solution. We then speak of the evaluation of the solution; the score associated with it is called the `fitness`.
+
+5.1. Generic evaluator
+~~~~~~~~~~~~~~~~~~~~~~
+
+As for the management of solutions, a generic evaluator class ``macop.evaluators.base.Evaluator`` is developed within **Macop**:
+
+The abstract ``Evaluator`` class is used for computing the fitness score associated with a solution. To evaluate all the solutions, this class:
+
+- stores into its ``_data`` dictionary attribute the measures required when computing a solution
+- has an abstract ``compute`` method which computes and associates a score with a given solution
+- stores into its ``_algo`` attribute the current algorithm to use (we will talk about algorithms later)
+
+.. code-block:: python
+
+    from abc import abstractmethod
+
+    class Evaluator():
+        """
+        Abstract Evaluator class which enables to compute solution using specific `_data`
+        """
+        def __init__(self, data):
+            self._data = data
+
+        @abstractmethod
+        def compute(self, solution):
+            """
+            Apply the computation of fitness from solution
+            """
+            pass
+
+        def setAlgo(self, algo):
+            """
+            Keep into evaluator reference of the whole algorithm
+            """
+            self._algo = algo
+
+We must therefore now create our own evaluator based on the proposed structure.
+
+5.2. Custom evaluator
+~~~~~~~~~~~~~~~~~~~~~
+
+To create our own evaluator, we need both:
+
+- the data useful for evaluating a solution
+- a way to calculate the score (fitness) associated with the state of the solution from these data, hence implementing the specific ``compute`` method
+
+We will define the ``KnapsackEvaluator`` class, which will therefore allow us to evaluate solutions to our current problem.
+
+.. code-block:: python
+
+    """
+    modules imports
+    """
+    from macop.evaluators.base import Evaluator
+
+    class KnapsackEvaluator(Evaluator):
+        
+        def compute(self, solution):
+
+            # `_data` contains worths array values of objects
+            fitness = 0
+            for index, elem in enumerate(solution._data):
+                fitness += self._data['worths'][index] * elem
+
+            return fitness
+
+
+It is now possible to initialize our new evaluator with specific data of our problem instance:
+
+.. code-block:: python
+
+    """
+    Problem instance definition
+    """
+    elements_score = [ 4, 2, 10, 1, 2 ] # worth of each object
+    elements_weight = [ 12, 1, 4, 1, 2 ] # weight of each object
+
+    """
+    Evaluator problem instance
+    """
+    evaluator = KnapsackEvaluator(data={'worths': elements_score})
+
+    # using defined BinarySolution
+    solution = BinarySolution.random(5)
+
+    # obtaining current solution score
+    solution_fitness = solution.evaluate(evaluator)
+
+    # score is also stored into solution
+    solution_fitness = solution.fitness()
+
+.. note::
+    The ``KnapsackEvaluator`` developed here is available as ``macop.evaluators.discrete.mono.KnapsackEvaluator`` in **Macop**.
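For intuition, the knapsack fitness computed above is simply a dot product between the binary decision vector and the worths. A standalone sketch, independent of **Macop** (the solution vector here is a hypothetical example):

```python
# worths of each object, taken from the problem instance above
worths = [4, 2, 10, 1, 2]

# hypothetical binary solution: objects 0, 2 and 4 are selected
solution_data = [1, 0, 1, 0, 1]

# fitness = sum of worths of the selected objects (dot product)
fitness = sum(w * x for w, x in zip(worths, solution_data))
print(fitness)  # 4 + 10 + 2 = 16
```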
+
+In the next part we will see how to modify our current solution with modification operators.

+ 9 - 3
docs/source/documentations/index.rst

@@ -5,9 +5,9 @@ Documentation
    :width: 300 px
    :align: center
 
-This documentation will allow a user who wishes to use the Macop optimisation package to understand both how it works and offers examples of how to implement specific needs.
+This documentation will allow a user who wishes to use the **Macop** optimisation package to understand both how it works and, through examples, how to implement specific needs.
 
-It will gradually take up the major ideas developed within `Macop` to allow for rapid development. You can navigate directly via the menu available below to access a specific part of the documentation.
+It will gradually take up the major ideas developed within **Macop** to allow for quick development. You can navigate directly via the menu available below to access a specific part of the documentation.
 
 .. toctree::
    :maxdepth: 1
@@ -16,4 +16,10 @@ It will gradually take up the major ideas developed within `Macop` to allow for
    introduction
    problem
    solutions
-   validator
+   validator
+   evaluators
+   operators
+   policies
+   algorithms
+   callbacks
+   others

File diff suppressed because it is too large
+ 6 - 3
docs/source/documentations/introduction.rst


+ 201 - 0
docs/source/documentations/operators.rst

@@ -0,0 +1,201 @@
+6. Apply operators to a solution
+================================
+
+Applying an operator to a solution consists of modifying the current state of the solution in order to obtain a new one. The goal is to find a better solution in the search space.
+
+6.1. Operators definition
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In the discrete optimisation literature, operators are commonly divided into two categories:
+
+- **mutators**: modify one or more elements of a solution from its current state;
+- **crossovers**: inspired by Darwin's theory of evolution, combine two solutions to generate a so-called offspring solution composed of merged data from the parent solutions.
+
+Inside **Macop**, operators are also decomposed into these two categories. Inside ``macop.operators.discrete.base``, the generic ``Operator`` class enables managing any kind of operator.
+
+.. code-block:: python
+
+    class Operator():
+        """
+        Abstract Operator class which enables to update solution applying operator (computation)
+        """
+        @abstractmethod
+        def __init__(self):
+            pass
+
+        @abstractmethod
+        def apply(self, solution):
+            """
+            Apply the current operator transformation
+            """
+            pass
+
+        def setAlgo(self, algo):
+            """
+            Keep into operator reference of the whole algorithm
+            """
+            self._algo = algo
+
+Like the evaluator, the operator keeps **track of the algorithm** (using the ``setAlgo`` method) to which it will be linked. This allows better management of the way the operator takes into account the current state of the data as the search evolves.
+
+The ``Mutation`` and ``Crossover`` classes inherit from ``Operator``. An ``apply`` method is required for any new operator.
+
+.. code-block:: python
+
+    class Mutation(Operator):
+        """Abstract Mutation extend from Operator
+
+        Attributes:
+            kind: {KindOperator} -- specify the kind of operator
+        """
+        def __init__(self):
+            self._kind = KindOperator.MUTATOR
+
+        def apply(self, solution):
+            raise NotImplementedError
+
+
+    class Crossover(Operator):
+        """Abstract crossover extend from Operator
+
+        Attributes:
+            kind: {KindOperator} -- specify the kind of operator
+        """
+        def __init__(self):
+            self._kind = KindOperator.CROSSOVER
+
+        def apply(self, solution1, solution2):
+            raise NotImplementedError
+
+We will now detail these categories of operators and suggest some relative to our problem.
+
+6.2. Mutator operator
+~~~~~~~~~~~~~~~~~~~~~
+
+As detailed, the mutation operator aims to have a minimal impact on the current state of our solution. Here is an example of a modification that could be done for our problem.
+
+.. image:: ../_static/documentation/project_knapsack_mutator.png
+   :width:  800 px
+   :align: center
+
+In this example we change a bit value randomly and obtain a new solution from our search space.
+
+.. warning::
+    Applying an operator can lead to a new but invalid solution in the search space.
+
+The modification applied here is just a single flipped bit. Let's define the ``SimpleBinaryMutation`` operator, which allows us to randomly change a binary value of our current solution.
+
+
+.. code-block:: python
+
+    """
+    modules imports
+    """
+    import random
+
+    from macop.operators.discrete.base import Mutation
+
+    class SimpleBinaryMutation(Mutation):
+
+        def apply(self, solution):
+            
+            # obtain targeted cell using solution size
+            size = solution._size
+            cell = random.randint(0, size - 1)
+
+            # copy of solution
+            copy_solution = solution.clone()
+
+            # switch values
+            if copy_solution._data[cell]:
+                copy_solution._data[cell] = 0
+            else:
+                copy_solution._data[cell] = 1
+
+            # return the new obtained solution
+            return copy_solution
+
+We can now instantiate our new operator in order to obtain a new solution:
+
+
+.. code-block:: python
+
+    """
+    BinaryMutator instance
+    """
+    mutator = SimpleBinaryMutation()
+
+    # using defined BinarySolution
+    solution = BinarySolution.random(5)
+
+    # obtaining new solution using operator
+    new_solution = mutator.apply(solution)
+
+
+.. note::
+    The ``SimpleBinaryMutation`` developed here is available as ``macop.operators.discrete.mutators.SimpleBinaryMutation`` in **Macop**.
+
+
+6.3. Crossover operator
+~~~~~~~~~~~~~~~~~~~~~~~
+
+
+Inspired by Darwin's theory of evolution, crossover starts from two solutions to generate a so-called offspring solution composed of the fusion of the data of the parent solutions.
+
+.. image:: ../_static/documentation/project_knapsack_crossover.png
+   :width:  800 px
+   :align: center
+
+In this example we merge two solutions with a specific splitting criterion in order to obtain an offspring.
+
+We will now implement the ``SimpleCrossover`` operator, which merges data from two solutions.
+The first half of ``solution2`` is kept, while its second half is replaced by the second half of ``solution1`` to generate the new solution (offspring).
+
+
+.. code-block:: python
+
+    """
+    modules imports
+    """
+    from macop.operators.discrete.base import Crossover
+
+    class SimpleCrossover(Crossover):
+
+        def apply(self, solution1, solution2):
+            
+            size = solution1._size
+
+            # default split index used
+            splitIndex = int(size / 2)
+
+            # copy data of solution 1
+            firstData = solution1._data.copy()
+
+            # copy of solution 2
+            copy_solution = solution2.clone()
+
+            copy_solution._data[splitIndex:] = firstData[splitIndex:]
+
+            return copy_solution
+
+
+We can now use the crossover operator created to generate new solutions. Here is an example of use:
+
+.. code-block:: python
+
+    """
+    SimpleCrossover instance
+    """
+    crossover = SimpleCrossover()
+
+    # using defined BinarySolution
+    solution1 = BinarySolution.random(5)
+    solution2 = BinarySolution.random(5)
+
+    # obtaining new solution using crossover
+    offspring = crossover.apply(solution1, solution2)
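To make the split behaviour concrete, here is a standalone sketch of the same half-split logic on plain lists (independent of **Macop**; the lists stand in for the ``_data`` of two binary solutions): the offspring keeps the first half of the second solution and receives the second half of the first one.

```python
solution1 = [1, 1, 1, 1, 1, 1]
solution2 = [0, 0, 0, 0, 0, 0]

# default split index: half of the solution size
split_index = len(solution1) // 2

# offspring: first half of solution2, second half of solution1
offspring = solution2.copy()
offspring[split_index:] = solution1[split_index:]
print(offspring)  # [0, 0, 0, 1, 1, 1]
```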
+
+.. warning::
+    The ``SimpleCrossover`` developed here is available as ``macop.operators.discrete.crossovers.SimpleCrossover`` in **Macop**.
+    However, in that version the second solution can be omitted; by default the operator will then apply the crossover between
+    ``solution1`` and the best current solution found by the algorithm.
+
+The next part introduces the ``policy`` feature of **Macop**, which enables choosing the next operator to apply during the search process based on a specific criterion.

+ 2 - 0
docs/source/documentations/others.rst

@@ -0,0 +1,2 @@
+10. Other features
+===================

+ 17 - 0
docs/source/documentations/policies.rst

@@ -0,0 +1,17 @@
+7. Operator choices
+===================
+
+The ``policy`` feature of **Macop** enables choosing the next operator to apply during the search process of the algorithm based on a specific criterion.
+
+7.1. Why use a policy?
+~~~~~~~~~~~~~~~~~~~~~~~
+
+Sometimes the nature of the problem and its instance can strongly influence the search results when using mutation operators or crossovers. 
+Automated operator choice strategies have been developed in the literature, notably based on reinforcement learning.
+
+.. note::
+    A reinforcement-learning-based implementation has been developed as an example in the ``macop.policies.reinforcement`` module.
+    However, it will not be detailed here; you can refer to the API documentation for more details.
+
+7.2. Custom policy
+~~~~~~~~~~~~~~~~~~
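As a first intuition of what a custom policy could look like, here is a minimal hypothetical sketch, independent of **Macop** (the real base class lives in ``macop.policies.base``; check the API documentation for its exact interface): a policy essentially keeps the list of available operators and implements a method returning the next one to apply.

```python
import random

class MyRandomPolicy:
    """Hypothetical policy sketch: uniformly picks the next operator to apply."""

    def __init__(self, operators):
        self._operators = operators

    def select(self):
        # the criterion here is simply a uniform random choice;
        # smarter policies could rely on rewards (e.g. UCB) instead
        return random.choice(self._operators)

# usage with placeholder operators (strings stand in for Operator instances)
policy = MyRandomPolicy(['mutation', 'crossover'])
chosen = policy.select()
```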

+ 6 - 4
docs/source/documentations/problem.rst

@@ -6,7 +6,7 @@ In this tutorial, we introduce the way of using **Macop** and running your algor
 2.1 Problem definition
 ~~~~~~~~~~~~~~~~~~~~~~
 
-The **knapsack problem** is a problem in combinatorial optimization: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible.
+The **knapsack problem** is a problem in combinatorial optimisation: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible.
 
 
 The image below provides an illustration of the problem:
@@ -34,7 +34,7 @@ During the whole tutorial, the example used is based on the previous illustratio
 
 Hence, we now define our problem in Python:
 
-- values of each objects 
+- worth value of each object
 - weight associated to each of these objects
 
 .. code-block:: python
@@ -43,7 +43,9 @@ Hence, we now define our problem in Python:
     Problem instance definition
     """
 
-    elements_score = [ 4, 2, 10, 1, 2 ] # value of each object
+    elements_score = [ 4, 2, 10, 1, 2 ] # worth of each object
     elements_weight = [ 12, 1, 4, 1, 2 ] # weight of each object
 
-Once we have defined the instance of our problem, we will need to define the representation of a solution to that problem.
+Once we have defined the instance of our problem, we will need to define the representation of a solution to that problem.
+

+ 3 - 0
docs/source/documentations/solutions.rst

@@ -127,6 +127,9 @@ We will also have to implement the ``random`` method to create a new random solu
 
             return solution
 
+.. note::
+    The ``BinarySolution`` developed here is available as ``macop.solutions.discrete.BinarySolution`` in **Macop**.
+
 Using this new Solution representation, we can now generate solution randomly:
 
 .. code-block:: python

+ 2 - 2
docs/source/documentations/validator.rst

@@ -1,7 +1,7 @@
 4. Validate a solution
 ======================
 
-When an optimization problem requires respecting certain constraints, Macop allows you to quickly verify that a solution is valid. 
+When an optimisation problem requires respecting certain constraints, **Macop** allows you to quickly verify that a solution is valid.
 It is based on a defined function taking a solution as input and returning the validity criterion (true or false).
 
 4.1. Validator definition
@@ -23,7 +23,7 @@ To avoid taking into account invalid solutions, we can define our function which
     Problem instance definition
     """
 
-    elements_score = [ 4, 2, 10, 1, 2 ] # value of each object
+    elements_score = [ 4, 2, 10, 1, 2 ] # worth of each object
     elements_weight = [ 12, 1, 4, 1, 2 ] # weight of each object
 
     """

+ 6 - 7
docs/source/index.rst

@@ -5,23 +5,22 @@ Minimalist And Customisable Optimisation Package
    :width: 300 px
    :align: center
 
-What's `macop` ?
-=================
+What's **Macop**?
+------------------
 
-`macop` is an optimisation Python package which not implement the whole available algorithms in the literature but let you the possibility to quickly develop and test your own algorithm and strategies. The main objective of this package is to be the most flexible as possible and hence, to offer a maximum of implementation possibilities.
+**Macop** is a discrete optimisation Python package which does not implement every algorithm available in the literature, but gives you the possibility to quickly develop and test your own algorithms and strategies. The main objective of this package is to be as flexible as possible and hence to offer a maximum of implementation possibilities.
 
 .. toctree::
-   :maxdepth: 1
+   :maxdepth: 2
    :caption: Contents:
 
    description
-   api
    documentations/index
-   examples
+   api
    contributing
 
 Indices and tables
-==================
+------------------
 
 * :ref:`genindex`
 * :ref:`modindex`

+ 4 - 1
examples/knapsackExample.py

@@ -16,6 +16,7 @@ from macop.policies.classicals import RandomPolicy
 from macop.policies.reinforcement import UCBPolicy
 
 from macop.algorithms.mono import IteratedLocalSearch as ILS
+from macop.algorithms.mono import HillClimberFirstImprovment
 from macop.callbacks.classicals import BasicCheckpoint
 
 if not os.path.exists('data'):
@@ -59,7 +60,9 @@ def main():
     callback = BasicCheckpoint(every=5, filepath=filepath)
     evaluator = KnapsackEvaluator(data={'worths': elements_score})
 
-    algo = ILS(init, evaluator, operators, policy, validator, maximise=True, verbose=False)
+    # local search algorithm passed to the ILS
+    hcfi = HillClimberFirstImprovment(init, evaluator, operators, policy, validator, maximise=True, verbose=False)
+    algo = ILS(init, evaluator, operators, policy, validator, localSearch=hcfi, maximise=True, verbose=False)
     
     # add callback into callback list
     algo.addCallback(callback)

+ 14 - 5
macop/algorithms/base.py

@@ -79,20 +79,20 @@ class Algorithm():
         else:
             self._policy.setAlgo(self)
 
-    def addCallback(self, _callback):
+    def addCallback(self, callback):
         """Add new callback to algorithm specifying usefull parameters
 
         Args:
-            _callback: {Callback} -- specific Callback instance
+            callback: {Callback} -- specific Callback instance
         """
         # specify current main algorithm reference for callback
         if self._parent is not None:
-            _callback.setAlgo(self.getParent())
+            callback.setAlgo(self.getParent())
         else:
-            _callback.setAlgo(self)
+            callback.setAlgo(self)
 
         # set as new
-        self._callbacks.append(_callback)
+        self._callbacks.append(callback)
 
     def resume(self):
         """Resume algorithm using Callback instances
@@ -119,6 +119,15 @@ class Algorithm():
 
         return parent_alrogithm
 
+    def setParent(self, parent):
+        """Set parent algorithm to current algorithm
+
+        Args:
+            parent: {Algorithm} -- main algorithm set for this algorithm
+        """
+        self._parent = parent
+
+
     def initRun(self):
         """
         Initialize the current solution and best solution using the `initialiser` function

+ 34 - 22
macop/algorithms/mono.py

@@ -11,8 +11,8 @@ from .base import Algorithm
 class HillClimberFirstImprovment(Algorithm):
     """Hill Climber First Improvment used as quick exploration optimisation algorithm
 
-    - First, this algorithm do a neighborhood exploration of a new generated solution (by doing operation on the current solution obtained) in order to find a better solution from the neighborhood space.
-    - Then replace the current solution by the first one from the neighbordhood space which is better than the current solution.
+    - First, this algorithm does a neighborhood exploration of a newly generated solution (by applying an operator to the current solution) in order to find a better solution in the neighborhood space;
+    - Then it replaces the current solution with the first one from the neighborhood space which is better than the current solution;
     - And do these steps until a number of evaluation (stopping criterion) is reached.
 
     Attributes:
@@ -110,8 +110,8 @@ class HillClimberFirstImprovment(Algorithm):
 class HillClimberBestImprovment(Algorithm):
     """Hill Climber Best Improvment used as exploitation optimisation algorithm
 
-    - First, this algorithm do a neighborhood exploration of a new generated solution (by doing operation on the current solution obtained) in order to find the best solution from the neighborhood space.
-    - Then replace the best solution found from the neighbordhood space as current solution to use.
+    - First, this algorithm does a neighborhood exploration of a newly generated solution (by applying an operator to the current solution) in order to find the best solution in the neighborhood space;
+    - Then it keeps the best solution found in the neighborhood space as the new current solution;
     - And do these steps until a number of evaluation (stopping criterion) is reached.
 
     Attributes:
@@ -206,12 +206,12 @@ class HillClimberBestImprovment(Algorithm):
 
 
 class IteratedLocalSearch(Algorithm):
-    """Iterated Local Search used to avoid local optima and increave EvE (Exploration vs Exploitation) compromise
+    """Iterated Local Search (ILS) used to avoid local optima and increave EvE (Exploration vs Exploitation) compromise
 
-    - A number of evaluations (`ls_evaluations`) is dedicated to local search process, here `HillClimberFirstImprovment` algorithm
-    - Starting with the new generated solution, the local search algorithm will return a new solution
-    - If the obtained solution is better than the best solution known into `IteratedLocalSearch`, then the solution is replaced
-    - Restart this process until stopping critirion (number of expected evaluations)
+    - A number of evaluations (`ls_evaluations`) is dedicated to the local search process, here the `HillClimberFirstImprovment` algorithm;
+    - Starting from the newly generated solution, the local search algorithm will return a new solution;
+    - If the obtained solution is better than the best solution known by `IteratedLocalSearch`, then the best solution is replaced;
+    - Restart this process until the stopping criterion (number of expected evaluations) is reached.
 
     Attributes:
         initalizer: {function} -- basic function strategy to initialize solution
@@ -222,6 +222,7 @@ class IteratedLocalSearch(Algorithm):
         maximise: {bool} -- specify kind of optimisation problem 
         currentSolution: {Solution} -- current solution managed for current evaluation
         bestSolution: {Solution} -- best solution found so far during running algorithm
+        localSearch: {Algorithm} -- current local search into ILS
         callbacks: {[Callback]} -- list of Callback class implementation to do some instructions every number of evaluations and `load` when initializing algorithm
     
     Example:
@@ -235,6 +236,7 @@ class IteratedLocalSearch(Algorithm):
     >>> # solution and algorithm
     >>> from macop.solutions.discrete import BinarySolution
     >>> from macop.algorithms.mono import IteratedLocalSearch
+    >>> from macop.algorithms.mono import HillClimberFirstImprovment
     >>> # evaluator import
     >>> from macop.evaluators.discrete.mono import KnapsackEvaluator
     >>> # evaluator initialization (worths objects passed into data)
@@ -249,12 +251,32 @@ class IteratedLocalSearch(Algorithm):
     >>> # operators list with crossover and mutation
     >>> operators = [SimpleCrossover(), SimpleMutation()]
     >>> policy = RandomPolicy(operators)
-    >>> algo = IteratedLocalSearch(initializer, evaluator, operators, policy, validator, maximise=True, verbose=False)
+    >>> local_search = HillClimberFirstImprovment(initializer, evaluator, operators, policy, validator, maximise=True, verbose=False)
+    >>> algo = IteratedLocalSearch(initializer, evaluator, operators, policy, validator, localSearch=local_search, maximise=True, verbose=False)
     >>> # run the algorithm
     >>> solution = algo.run(100, ls_evaluations=10)
     >>> solution._score
     137
     """
+    def __init__(self,
+                 initializer,
+                 evaluator,
+                 operators,
+                 policy,
+                 validator,
+                 localSearch,
+                 maximise=True,
+                 parent=None,
+                 verbose=True):
+        
+        super().__init__(initializer, evaluator, operators, policy, validator, maximise, parent, verbose)
+
+        # specific local search associated with current algorithm
+        self._localSearch = localSearch
+        # need to attach current algorithm as parent
+        self._localSearch.setParent(self)
+
     def run(self, evaluations, ls_evaluations=100):
         """
         Run the iterated local search algorithm using local search (EvE compromise)
@@ -276,25 +298,15 @@ class IteratedLocalSearch(Algorithm):
         # initialize current solution
         self.initRun()
 
-        # passing global evaluation param from ILS
-        ls = HillClimberFirstImprovment(self._initializer,
-                         self._evaluator,
-                         self._operators,
-                         self._policy,
-                         self._validator,
-                         self._maximise,
-                         verbose=self._verbose,
-                         parent=self)
-
         # add same callbacks
         for callback in self._callbacks:
-            ls.addCallback(callback)
+            self._localSearch.addCallback(callback)
 
         # local search algorithm implementation
         while not self.stop():
 
             # create and search solution from local search
-            newSolution = ls.run(ls_evaluations)
+            newSolution = self._localSearch.run(ls_evaluations)
 
             # if better solution than currently, replace it
             if self.isBetter(newSolution):

+ 1 - 1
macop/operators/base.py

@@ -61,5 +61,5 @@ class Crossover(Operator):
     def __init__(self):
         self._kind = KindOperator.CROSSOVER
 
-    def apply(self, solution):
+    def apply(self, solution1, solution2=None):
         raise NotImplementedError

+ 34 - 20
macop/operators/discrete/crossovers.py

@@ -24,6 +24,7 @@ class SimpleCrossover(Crossover):
     >>> # solution and algorithm
     >>> from macop.solutions.discrete import BinarySolution
     >>> from macop.algorithms.mono import IteratedLocalSearch
+    >>> from macop.algorithms.mono import HillClimberFirstImprovment
     >>> # evaluator import
     >>> from macop.evaluators.discrete.mono import KnapsackEvaluator
     >>> # evaluator initialization (worths objects passed into data)
@@ -39,33 +40,39 @@ class SimpleCrossover(Crossover):
     >>> simple_mutation = SimpleMutation()
     >>> operators = [simple_crossover, simple_mutation]
     >>> policy = UCBPolicy(operators)
-    >>> algo = IteratedLocalSearch(initializer, evaluator, operators, policy, validator, maximise=True, verbose=False)
+    >>> local_search = HillClimberFirstImprovment(initializer, evaluator, operators, policy, validator, maximise=True, verbose=False)
+    >>> algo = IteratedLocalSearch(initializer, evaluator, operators, policy, validator, localSearch=local_search, maximise=True, verbose=False)
     >>> # using best solution, simple crossover is applied
     >>> best_solution = algo.run(100)
     >>> list(best_solution._data)
     [1, 1, 0, 1, 0, 1, 1, 1, 0, 1]
-    >>> new_solution = initializer()
-    >>> mutate_solution = simple_crossover.apply(new_solution)
-    >>> list(mutate_solution._data)
-    [0, 1, 0, 0, 0, 1, 1, 1, 0, 1]
+    >>> new_solution_1 = initializer()
+    >>> new_solution_2 = initializer()
+    >>> offspring_solution = simple_crossover.apply(new_solution_1, new_solution_2)
+    >>> list(offspring_solution._data)
+    [0, 1, 1, 0, 1, 0, 1, 1, 0, 1]
     """
-    def apply(self, solution):
+    def apply(self, solution1, solution2=None):
         """Create new solution based on best solution found and solution passed as parameter
 
         Args:
-            solution: {Solution} -- the solution to use for generating new solution
+            solution1: {Solution} -- the first solution to use for generating new solution
+            solution2: {Solution} -- the second solution to use for generating new solution
 
         Returns:
             {Solution} -- new generated solution
         """
 
-        size = solution._size
+        size = solution1._size
 
         # copy data of solution
-        firstData = solution._data.copy()
+        firstData = solution1._data.copy()
 
         # get best solution from current algorithm
-        copy_solution = self._algo._bestSolution.clone()
+        if solution2 is None:
+            copy_solution = self._algo._bestSolution.clone()
+        else:
+            copy_solution = solution2.clone()
 
         splitIndex = int(size / 2)
 
@@ -93,6 +100,7 @@ class RandomSplitCrossover(Crossover):
     >>> # solution and algorithm
     >>> from macop.solutions.discrete import BinarySolution
     >>> from macop.algorithms.mono import IteratedLocalSearch
+    >>> from macop.algorithms.mono import HillClimberFirstImprovment
     >>> # evaluator import
     >>> from macop.evaluators.discrete.mono import KnapsackEvaluator
     >>> # evaluator initialization (worths objects passed into data)
@@ -108,32 +116,38 @@ class RandomSplitCrossover(Crossover):
     >>> simple_mutation = SimpleMutation()
     >>> operators = [random_split_crossover, simple_mutation]
     >>> policy = UCBPolicy(operators)
-    >>> algo = IteratedLocalSearch(initializer, evaluator, operators, policy, validator, maximise=True, verbose=False)
+    >>> local_search = HillClimberFirstImprovment(initializer, evaluator, operators, policy, validator, maximise=True, verbose=False)
+    >>> algo = IteratedLocalSearch(initializer, evaluator, operators, policy, validator, localSearch=local_search, maximise=True, verbose=False)
     >>> # using best solution, simple crossover is applied
     >>> best_solution = algo.run(100)
     >>> list(best_solution._data)
     [1, 1, 1, 0, 1, 0, 1, 1, 1, 0]
-    >>> new_solution = initializer()
-    >>> mutate_solution = random_split_crossover.apply(new_solution)
-    >>> list(mutate_solution._data)
-    [1, 0, 0, 1, 1, 0, 0, 1, 0, 0]
+    >>> new_solution_1 = initializer()
+    >>> new_solution_2 = initializer()
+    >>> offspring_solution = random_split_crossover.apply(new_solution_1, new_solution_2)
+    >>> list(offspring_solution._data)
+    [0, 0, 0, 1, 1, 0, 0, 1, 0, 0]
     """
-    def apply(self, solution):
+    def apply(self, solution1, solution2=None):
         """Create new solution based on best solution found and solution passed as parameter
 
         Args:
-            solution: {Solution} -- the solution to use for generating new solution
+            solution1: {Solution} -- the first solution to use for generating new solution
+            solution2: {Solution} -- the second solution to use for generating new solution
 
         Returns:
             {Solution} -- new generated solution
         """
-        size = solution._size
+        size = solution1._size
 
         # copy data of solution
-        firstData = solution._data.copy()
+        firstData = solution1._data.copy()
 
         # get best solution from current algorithm
-        copy_solution = self._algo._bestSolution.clone()
+        if solution2 is None:
+            copy_solution = self._algo._bestSolution.clone()
+        else:
+            copy_solution = solution2.clone()
 
         splitIndex = random.randint(0, size)
 

+ 3 - 1
macop/policies/reinforcement.py

@@ -37,6 +37,7 @@ class UCBPolicy(Policy):
     >>> # solution and algorithm
     >>> from macop.solutions.discrete import BinarySolution
     >>> from macop.algorithms.mono import IteratedLocalSearch
+    >>> from macop.algorithms.mono import HillClimberFirstImprovment
     >>> # evaluator import
     >>> from macop.evaluators.discrete.mono import KnapsackEvaluator
     >>> # evaluator initialization (worths objects passed into data)
@@ -50,7 +51,8 @@ class UCBPolicy(Policy):
     >>> # operators list with crossover and mutation
     >>> operators = [SimpleCrossover(), SimpleMutation()]
     >>> policy = UCBPolicy(operators)
-    >>> algo = IteratedLocalSearch(initializer, evaluator, operators, policy, validator, maximise=True, verbose=False)
+    >>> local_search = HillClimberFirstImprovment(initializer, evaluator, operators, policy, validator, maximise=True, verbose=False)
+    >>> algo = IteratedLocalSearch(initializer, evaluator, operators, policy, validator, localSearch=local_search, maximise=True, verbose=False)
     >>> policy._occurences
     [0, 0]
     >>> solution = algo.run(100)