macop.algorithms.mono¶
Available mono-objective algorithms
Classes
- HillClimberBestImprovment – Hill Climber Best Improvement, used as an exploitation optimisation algorithm
- HillClimberFirstImprovment – Hill Climber First Improvement, used as a quick exploration optimisation algorithm
- IteratedLocalSearch – Iterated Local Search, used to avoid local optima and improve the EvE (Exploration vs Exploitation) compromise
class macop.algorithms.mono.HillClimberBestImprovment(initializer, evaluator, operators, policy, validator, maximise=True, parent=None, verbose=True)[source]¶
Hill Climber Best Improvement, used as an exploitation optimisation algorithm.
First, this algorithm explores the neighborhood of a newly generated solution (obtained by applying an operator to the current solution) in order to find the best solution in that neighborhood.
It then takes the best solution found in the neighborhood as the new current solution.
These steps are repeated until a number of evaluations (the stopping criterion) is reached.
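The best-improvement loop described above can be sketched in a few lines of plain Python (an illustrative sketch only, not the macop implementation; `evaluate`, `neighbors` and `start` are hypothetical stand-ins for the evaluator, the operators and the initializer):

```python
def hill_climber_best(evaluate, neighbors, start, max_evaluations):
    """Best-improvement hill climbing (maximisation): scan the whole
    neighborhood, move to the best neighbor, and stop when the evaluation
    budget is spent or a local optimum is reached."""
    current, current_score = start, evaluate(start)
    evaluations = 1
    while evaluations < max_evaluations:
        best_neighbor, best_score = current, current_score
        for candidate in neighbors(current):
            score = evaluate(candidate)
            evaluations += 1
            if score > best_score:  # remember the best neighbor seen so far
                best_neighbor, best_score = candidate, score
            if evaluations >= max_evaluations:
                break
        if best_score <= current_score:  # no neighbor improves: local optimum
            break
        current, current_score = best_neighbor, best_score
    return current, current_score
```

On a toy maximisation problem such as OneMax (maximise the number of 1-bits), this loop climbs to the all-ones solution.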
initializer¶
{function} – basic function strategy to initialize a solution
evaluator¶
{function} – basic function used to obtain the fitness of a solution (mono- or multi-objective)
operators¶
{[Operator]} – list of operators to use when running the algorithm
policy¶
{Policy} – Policy implementation strategy used to select operators
validator¶
{function} – basic function which checks whether a solution is valid under some constraints
maximise¶
{bool} – specifies the kind of optimisation problem (maximisation if True, minimisation otherwise)
currentSolution¶
{Solution} – current solution managed during the current evaluation
bestSolution¶
{Solution} – best solution found so far while running the algorithm
callbacks¶
{[Callback]} – list of Callback implementations which run some instructions every given number of evaluations and are loaded when initializing the algorithm
Example:
>>> import random
>>> # operators import
>>> from macop.operators.discrete.crossovers import SimpleCrossover
>>> from macop.operators.discrete.mutators import SimpleMutation
>>> # policy import
>>> from macop.policies.classicals import RandomPolicy
>>> # solution and algorithm
>>> from macop.solutions.discrete import BinarySolution
>>> from macop.algorithms.mono import HillClimberBestImprovment
>>> # evaluator import
>>> from macop.evaluators.discrete.mono import KnapsackEvaluator
>>> # evaluator initialisation (worths of objects passed into data)
>>> problem_size = 20
>>> worths = [ random.randint(0, 20) for i in range(problem_size) ]
>>> evaluator = KnapsackEvaluator(data={'worths': worths})
>>> # validator specification (based on the weight of each object)
>>> weights = [ random.randint(5, 30) for i in range(problem_size) ]
>>> validator = lambda solution: sum([weights[i] for i, value in enumerate(solution._data) if value == 1]) < 200
>>> # initializer function using a lambda
>>> initializer = lambda x=20: BinarySolution.random(x, validator)
>>> # operators list with crossover and mutation
>>> operators = [SimpleCrossover(), SimpleMutation()]
>>> policy = RandomPolicy(operators)
>>> algo = HillClimberBestImprovment(initializer, evaluator, operators, policy, validator, maximise=True, verbose=False)
>>> # run the algorithm
>>> solution = algo.run(100)
>>> solution._score
104
class macop.algorithms.mono.HillClimberFirstImprovment(initializer, evaluator, operators, policy, validator, maximise=True, parent=None, verbose=True)[source]¶
Hill Climber First Improvement, used as a quick exploration optimisation algorithm.
First, this algorithm explores the neighborhood of a newly generated solution (obtained by applying an operator to the current solution) in order to find a better solution in that neighborhood.
It then replaces the current solution with the first solution from the neighborhood which is better than the current one.
These steps are repeated until a number of evaluations (the stopping criterion) is reached.
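The first-improvement variant differs from the best-improvement one only in its acceptance rule: it takes the first improving neighbor instead of scanning the whole neighborhood. A minimal sketch in plain Python (illustrative only, not the macop implementation; `evaluate`, `neighbors` and `start` are hypothetical stand-ins):

```python
def hill_climber_first(evaluate, neighbors, start, max_evaluations):
    """First-improvement hill climbing (maximisation): accept the first
    better neighbor found and restart the scan from the new solution."""
    current, current_score = start, evaluate(start)
    evaluations = 1
    improved = True
    while improved and evaluations < max_evaluations:
        improved = False
        for candidate in neighbors(current):
            score = evaluate(candidate)
            evaluations += 1
            if score > current_score:  # take the first improving move
                current, current_score = candidate, score
                improved = True
                break
            if evaluations >= max_evaluations:
                break
    return current, current_score
```

Because it does not evaluate the whole neighborhood before moving, each step is cheaper, which is why this variant suits quick exploration.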
initializer¶
{function} – basic function strategy to initialize a solution
evaluator¶
{function} – basic function used to obtain the fitness of a solution (mono- or multi-objective)
operators¶
{[Operator]} – list of operators to use when running the algorithm
policy¶
{Policy} – Policy implementation strategy used to select operators
validator¶
{function} – basic function which checks whether a solution is valid under some constraints
maximise¶
{bool} – specifies the kind of optimisation problem (maximisation if True, minimisation otherwise)
currentSolution¶
{Solution} – current solution managed during the current evaluation
bestSolution¶
{Solution} – best solution found so far while running the algorithm
callbacks¶
{[Callback]} – list of Callback implementations which run some instructions every given number of evaluations and are loaded when initializing the algorithm
Example:
>>> import random
>>> # operators import
>>> from macop.operators.discrete.crossovers import SimpleCrossover
>>> from macop.operators.discrete.mutators import SimpleMutation
>>> # policy import
>>> from macop.policies.classicals import RandomPolicy
>>> # solution and algorithm
>>> from macop.solutions.discrete import BinarySolution
>>> from macop.algorithms.mono import HillClimberFirstImprovment
>>> # evaluator import
>>> from macop.evaluators.discrete.mono import KnapsackEvaluator
>>> # evaluator initialisation (worths of objects passed into data)
>>> problem_size = 20
>>> worths = [ random.randint(0, 20) for i in range(problem_size) ]
>>> evaluator = KnapsackEvaluator(data={'worths': worths})
>>> # validator specification (based on the weight of each object)
>>> weights = [ random.randint(5, 30) for i in range(problem_size) ]
>>> validator = lambda solution: sum([weights[i] for i, value in enumerate(solution._data) if value == 1]) < 200
>>> # initializer function using a lambda
>>> initializer = lambda x=20: BinarySolution.random(x, validator)
>>> # operators list with crossover and mutation
>>> operators = [SimpleCrossover(), SimpleMutation()]
>>> policy = RandomPolicy(operators)
>>> algo = HillClimberFirstImprovment(initializer, evaluator, operators, policy, validator, maximise=True, verbose=False)
>>> # run the algorithm
>>> solution = algo.run(100)
>>> solution._score
128
class macop.algorithms.mono.IteratedLocalSearch(initializer, evaluator, operators, policy, validator, maximise=True, parent=None, verbose=True)[source]¶
Iterated Local Search, used to avoid local optima and improve the EvE (Exploration vs Exploitation) compromise.
A number of evaluations (ls_evaluations) is dedicated to the local search process, here the HillClimberFirstImprovment algorithm.
Starting from a newly generated solution, the local search algorithm returns a new solution.
If the obtained solution is better than the best solution known to IteratedLocalSearch, the best solution is replaced.
This process is repeated until the stopping criterion (the expected number of evaluations) is reached.
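The restart scheme described above can be sketched in plain Python (an illustrative sketch only, not the macop implementation; the inner first-improvement local search, `evaluate`, `neighbors` and `initializer` are simplified, hypothetical stand-ins):

```python
def local_search(evaluate, neighbors, start, max_evaluations):
    """Budgeted first-improvement local search (maximisation).
    Returns the solution, its score, and the evaluations actually used."""
    current, current_score = start, evaluate(start)
    used = 1
    improved = True
    while improved and used < max_evaluations:
        improved = False
        for candidate in neighbors(current):
            score = evaluate(candidate)
            used += 1
            if score > current_score:  # take the first improving move
                current, current_score = candidate, score
                improved = True
                break
            if used >= max_evaluations:
                break
    return current, current_score, used

def iterated_local_search(initializer, evaluate, neighbors, evaluations, ls_evaluations=100):
    """Repeatedly run a budgeted local search from a fresh solution and
    keep the best result seen across all restarts."""
    best, best_score = None, float('-inf')
    spent = 0
    while spent < evaluations:
        budget = min(ls_evaluations, evaluations - spent)
        candidate, score, used = local_search(evaluate, neighbors, initializer(), budget)
        spent += used
        if score > best_score:  # keep the best solution across restarts
            best, best_score = candidate, score
    return best, best_score
```

The split of the global budget into `ls_evaluations`-sized local-search runs is what tunes the EvE compromise: a larger local-search budget means more exploitation per restart, a smaller one means more restarts and thus more exploration.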
initializer¶
{function} – basic function strategy to initialize a solution
evaluator¶
{function} – basic function used to obtain the fitness of a solution (mono- or multi-objective)
operators¶
{[Operator]} – list of operators to use when running the algorithm
policy¶
{Policy} – Policy implementation strategy used to select operators
validator¶
{function} – basic function which checks whether a solution is valid under some constraints
maximise¶
{bool} – specifies the kind of optimisation problem (maximisation if True, minimisation otherwise)
currentSolution¶
{Solution} – current solution managed during the current evaluation
bestSolution¶
{Solution} – best solution found so far while running the algorithm
callbacks¶
{[Callback]} – list of Callback implementations which run some instructions every given number of evaluations and are loaded when initializing the algorithm
Example:
>>> import random
>>> # operators import
>>> from macop.operators.discrete.crossovers import SimpleCrossover
>>> from macop.operators.discrete.mutators import SimpleMutation
>>> # policy import
>>> from macop.policies.classicals import RandomPolicy
>>> # solution and algorithm
>>> from macop.solutions.discrete import BinarySolution
>>> from macop.algorithms.mono import IteratedLocalSearch
>>> # evaluator import
>>> from macop.evaluators.discrete.mono import KnapsackEvaluator
>>> # evaluator initialisation (worths of objects passed into data)
>>> problem_size = 20
>>> worths = [ random.randint(0, 20) for i in range(problem_size) ]
>>> evaluator = KnapsackEvaluator(data={'worths': worths})
>>> # validator specification (based on the weight of each object)
>>> weights = [ random.randint(5, 30) for i in range(problem_size) ]
>>> validator = lambda solution: sum([weights[i] for i, value in enumerate(solution._data) if value == 1]) < 200
>>> # initializer function using a lambda
>>> initializer = lambda x=20: BinarySolution.random(x, validator)
>>> # operators list with crossover and mutation
>>> operators = [SimpleCrossover(), SimpleMutation()]
>>> policy = RandomPolicy(operators)
>>> algo = IteratedLocalSearch(initializer, evaluator, operators, policy, validator, maximise=True, verbose=False)
>>> # run the algorithm
>>> solution = algo.run(100, ls_evaluations=10)
>>> solution._score
137
run(evaluations, ls_evaluations=100)[source]¶
Run the Iterated Local Search algorithm using a local search (EvE compromise).
Parameters
- evaluations – {int} – number of global evaluations for the ILS
- ls_evaluations – {int} – number of local search evaluations (default: 100)
Returns
{Solution} – best solution found