Browse source

Update documentation and add new features

Jérôme BUISINE 4 years ago
Parent commit: e227c09130

+ 94 - 84
README.md

@@ -1,5 +1,7 @@
 # Noise detection using SVM
 
+Project developed during a thesis, in order to detect (perceptual) noise generated during the rendering process, using SVM models and attributes (features) extracted from an image.
+
 ## Requirements
 
 ```
@@ -15,12 +17,33 @@ python generate/generate_all_data.py --feature all
 For noise detection, many features are available:
 - lab
 - mscn
-- mscn_revisited
 - low_bits_2
+- low_bits_3
 - low_bits_4
 - low_bits_5
 - low_bits_6
 - low_bits_4_shifted_2
+- sub_blocks_stats
+- sub_blocks_area
+- sub_blocks_stats_reduced
+- sub_blocks_area_normed
+- mscn_var_4
+- mscn_var_16
+- mscn_var_64
+- mscn_var_16_max
+- mscn_var_64_max
+- ica_diff
+- svd_trunc_diff
+- ipca_diff
+- svd_reconstruct
+- highest_sv_std_filters
+- lowest_sv_std_filters
+- highest_wave_sv_std_filters
+- lowest_wave_sv_std_filters
+- highest_sv_std_filters_full
+- lowest_sv_std_filters_full
+- highest_sv_entropy_std_filters
+- lowest_sv_entropy_std_filters
 
 You can also specify the feature you want to compute and an image step to skip some images:
 ```bash
@@ -29,44 +52,89 @@ python generate/generate_all_data.py --feature mscn --step 50
 
 - **step**: keep an image only if its image id % 50 == 0 (the assumption is that keeping spaced-out data will help the model fit better); a minimal sketch of this filter is given below.
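+
+As an illustration, this step filter boils down to a modulo test on the image index; a minimal sketch, assuming image ids are parsed from filenames such as *Scene1_00050.png* (the parsing itself is an assumption for illustration):
+
+```python
+# Minimal sketch of the --step filter (illustrative, not the actual script code).
+def keep_image(filename, step=50):
+    # extract the numeric index from a name such as "Scene1_00050.png"
+    image_id = int(filename.split('_')[-1].split('.')[0])
+    # keep only evenly spaced images
+    return image_id % step == 0
+
+assert keep_image("Scene1_00050.png", step=50)
+assert not keep_image("Scene1_00070.png", step=50)
+```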
 
-## How to use
+Or generate all data:
 
-### Multiple directories and scripts are available:
+```bash
+python generate/generate_all_data.py --feature all
+```
 
+## Requirements
+
+```
+pip install -r requirements.txt
+```
+
+## Project structure
+
+### Link to your dataset
+
+You have to create a symbolic link to your own dataset, which must respect this structure:
+
+- dataset/
+  - Scene1/
+    - zone00/
+    - ...
+    - zone15/
+      - seuilExpe (file which contains the human-perceived threshold samples of the zone image)
+    - Scene1_00050.png
+    - Scene1_00070.png
+    - ...
+    - Scene1_01180.png
+    - Scene1_01200.png
+  - Scene2/
+    - ...
+  - ...
+
+Create your symbolic link:
+
+```
+ln -s /path/to/your/data dataset
+```
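+
+As a quick sanity check of the link, a minimal sketch that walks the expected layout above (assuming 16 zones per scene, as described):
+
+```python
+# Minimal sketch: verify the "dataset" link matches the expected layout.
+import os
+
+def check_dataset(root='dataset'):
+    for scene in sorted(os.listdir(root)):
+        scene_path = os.path.join(root, scene)
+        if not os.path.isdir(scene_path):
+            continue
+        for index in range(16):
+            threshold_file = os.path.join(scene_path, 'zone{:02d}'.format(index), 'seuilExpe')
+            if not os.path.isfile(threshold_file):
+                print('Missing threshold file:', threshold_file)
+
+check_dataset()
+```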
+
+### Code architecture description
 
-- **fichiersSVD_light/\***: all scene files information (zones of each scene, SVD descriptor files information and so on...).
-- **train_model.py**: script which is used to run specific model available.
-- **data/\***: folder which will contain all *.train* & *.test* files in order to train model.
-- **saved_models/*.joblib**: all scikit learn models saved.
-- **models_info/***: all markdown files generated to get quick information about model performance and prediction. 
-- **results**: This folder contains **model_comparisons.csv** obtained after running runAll_maxwell_*.sh script.
 - **modules/\***: contains all modules useful for the whole project (such as configuration variables)
+- **analysis/\***: contains all Jupyter notebooks used for analysis during the thesis
+- **generate/\***: contains Python scripts for generating data from scenes (described later)
+- **data_processing/\***: all Python scripts for generating custom datasets for the models
+- **prediction/\***: all Python scripts for predicting new thresholds from computed models
+- **simulation/\***: contains all bash scripts used for running simulations from models
+- **display/\***: contains all Python scripts used for displaying scene information (such as singular values...)
+- **run/\***: bash scripts to run several steps at once:
+  - generate custom dataset
+  - train model
+  - keep model performance
+  - run simulation (if necessary)
+- **others/\***: folder which contains other scripts, such as a script for getting the performance of a model on a specific scene and writing it into a Markdown file.
+- **data_attributes.py**: file which contains the implementation of all features extracted from an image.
+- **custom_config.py**: overrides the main project configuration of `modules/config/global_config.py`
+- **train_model.py**: script which is used to train a specific available model.
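+
+For reference, the scripts in these folders share a common bootstrap (visible in the file diffs further below); a minimal sketch of a new script plugging into this architecture, where the feature name and image path are illustrative assumptions:
+
+```python
+# Minimal sketch of the common script bootstrap (feature and path are illustrative).
+import sys
+from PIL import Image
+
+sys.path.insert(0, '')  # trick to enable import of main folder module
+
+import custom_config as cfg
+from data_attributes import get_image_features
+
+print(cfg.dataset_path)
+
+img = Image.open('dataset/Scene1/Scene1_00050.png')
+data = get_image_features('lab', img)
+print(len(data))
+```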
+
+### Generated data directories:
 
-### Scripts for generating data files
+- **data/\***: folder which will contain all generated *.train* and *.test* files used to train the models.
+- **saved_models/\***: all saved scikit-learn or Keras models.
+- **models_info/\***: all Markdown files generated to give quick information about model performance and predictions, obtained after running the `run/runAll_*.sh` script.
+- **results/\***: folder which contains the `model_comparisons.csv` file used to store model performance.
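+
+For instance, stored performances can be inspected with pandas, assuming a comma-separated file (the column names are not assumed here):
+
+```python
+# Minimal sketch: inspect stored model performances.
+import pandas as pd
+
+comparisons = pd.read_csv('results/model_comparisons.csv')
+print(comparisons.head())
+```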
 
-Two scripts can be used for generating data in order to fit model:
-- **generate_data_model.py**: zones are specified and stayed fixed for each scene
-- **generate_data_model_random.py**: zones are chosen randomly (just a number of zone is specified)
-- **generate_data_model_random_maxwell.py**: zones are chosen randomly (just a number of zone is specified). Only maxwell scene are used.
 
+## How to use?
 
 **Remark**: Note that all Python scripts have a *--help* option.
 
 ```
-python generate/generate_data_model.py --help
-
-python generate/generate_data_model.py --output xxxx --interval 0,20  --kind svdne --scenes "A, B, D" --zones "0, 1, 2" --percent 0.7 --sep: --rowindex 1 --custom custom_min_max_filename
+python generate/generate_data_model.py --help
 ```
 
 Parameters explained:
-- **output**: filename of data (which will be split into two parts, *.train* and *.test* relative to your choices).
+- **feature**: the feature choice you wish to use
+- **output**: filename of the data (which will be split into two parts, *.train* and *.test*, relative to your choices). It needs to be inside the `data` folder.
 - **interval**: the interval of data you want to use from SVD vector.
 - **kind**: kind of data ['svd', 'svdn', 'svdne']: not normalized, normalized per vector only, or normalized together.
 - **scenes**: scenes choice for training dataset.
 - **zones**: zones to take for training dataset.
+- **step**: specify whether all pictures are used or only one out of every *step* images (see the step filter sketch above).
 - **percent**: percentage of each zone's data to take (chosen randomly)
-- **sep**: output csv file seperator used
-- **rowindex**: if 1 then row will be like that 1:xxxxx, 2:xxxxxx, ..., n:xxxxxx
 - **custom**: specify if you want your data normalized using only the interval and not the whole singular values vector. If so, the value of this parameter is the output filename which will store the min and max values found. This file will be useful later to make predictions with the model (optional parameter); a minimal sketch follows below.
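+
+To make the **custom** option concrete, a minimal sketch of interval-based min/max normalization; storing min and max as two plain lines is an assumption for illustration, not necessarily the scripts' actual file format:
+
+```python
+# Minimal sketch of interval-based min/max normalization (file format assumed).
+import numpy as np
+
+def normalize_interval(svd_vector, begin, end, custom_filename=None):
+    sub = np.array(svd_vector[begin:end], dtype='float64')
+    min_value, max_value = sub.min(), sub.max()
+    if custom_filename is not None:
+        # keep min/max so the same scaling can be reapplied at prediction time
+        with open(custom_filename, 'w') as f:
+            f.write('{}\n{}\n'.format(min_value, max_value))
+    return (sub - min_value) / (max_value - min_value)
+```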
 
 ### Train model
@@ -74,7 +142,7 @@ Parameters explained:
 This is an example of how to train a model
 
 ```bash
-python train_model.py --data 'data/xxxxx.train' --output 'model_file_to_save' --choice 'model_choice'
+python train_model.py --data 'data/xxxx' --output 'model_file_to_save' --choice 'model_choice'
 ```
 
 Expected values for the **choice** parameter are ['svm_model', 'ensemble_model', 'ensemble_model_v2'].
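+
+As a rough picture of what an 'svm_model' run involves, a minimal sketch, assuming ';'-separated *.train*/*.test* files with the label in the first column (both assumptions for illustration):
+
+```python
+# Minimal sketch of an 'svm_model' training run (file layout is assumed).
+import joblib
+import numpy as np
+from sklearn.svm import SVC
+
+def load_dataset(path):
+    dataset = np.loadtxt(path, delimiter=';')
+    return dataset[:, 1:], dataset[:, 0]  # features, labels
+
+x_train, y_train = load_dataset('data/xxxx.train')
+x_test, y_test = load_dataset('data/xxxx.test')
+
+model = SVC(kernel='rbf', gamma='scale')
+model.fit(x_train, y_train)
+print('test accuracy:', model.score(x_test, y_test))
+
+joblib.dump(model, 'saved_models/xxxx.joblib')
+```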
@@ -95,15 +163,15 @@ The model will return only 0 or 1:
 - 0 means the image does not seem to be noisy.
 
 Every SVD feature developed needs:
-- Name added into *feature_choices_labels* global array variable of **modules/utils/config.py** file.
-- A specification of how you compute the feature into *get_svd_data* method of **modules/utils/data_type.py** file.
+- Its name added into the *feature_choices_labels* global array variable of the `custom_config.py` file.
+- A specification of how to compute the feature in the *get_image_features* method of the `data_attributes.py` file.
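+
+Concretely, a new feature branch would look something like this minimal sketch, where the name 'my_feature' and its computation are purely illustrative:
+
+```python
+# Minimal sketch of a new feature hook (names and computation are illustrative).
+# 1) In custom_config.py, append 'my_feature' to feature_choices_labels.
+# 2) In data_attributes.py, handle it inside get_image_features:
+import numpy as np
+
+def get_image_features(data_type, block):
+    data = None
+
+    if data_type == 'my_feature':
+        arr = np.array(block, 'float64')
+        data = np.array([arr.mean(), arr.std()])  # any per-image descriptor
+
+    return data
+```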
 
 ### Predict scene using model
 
 Now that we have a trained model, we can use it with an image as input:
 
 ```bash
-python prediction/prediction_scene.py --data path/to/xxxx.csv --model saved_model/xxxx.joblib --output xxxxx --scene xxxx
+python prediction/prediction_scene.py --data path/to/xxxx.csv --model saved_models/xxxx.joblib --output xxxxx --scene xxxx
 ```
 **Remark**: the expected *scene* parameter needs to be the correct name of the scene.
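+
+Under the hood, single-image prediction follows the pattern of `prediction/predict_noisy_image_svd.py` (see its diff below); a minimal sketch, where the model path, feature, and interval are illustrative:
+
+```python
+# Minimal sketch of single-image prediction (paths, feature, interval assumed).
+import sys
+import joblib
+from PIL import Image
+
+sys.path.insert(0, '')  # trick to enable import of main folder module
+from data_attributes import get_image_features
+
+model = joblib.load('saved_models/xxxx.joblib')
+img = Image.open('dataset/Scene1/Scene1_00050.png')
+
+data = get_image_features('lab', img)
+begin, end = 0, 20  # interval of the SVD vector to use
+prediction = model.predict([data[begin:end]])[0]
+
+print('noisy' if prediction == 1 else 'not noisy')
+```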
 
@@ -115,68 +183,10 @@ Just use --help option to get more information.
 
 ### Simulate model on scene
 
-All scripts named **predict_seuil_expe\*.py** are used to simulate model prediction during rendering process. Do not forget the **custom** parameter filename if necessary.
+All scripts named **prediction/predict_seuil_expe\*.py** are used to simulate model predictions during the rendering process. Do not forget the **custom** parameter filename if necessary.
 
 Once the simulation is done, check out your **threshold_map/%MODEL_NAME%/simulation\_curves\_zones\_\*/** folder and use it with the help of the **display_simulation_curves.py** script.
 
-## Others scripts
-
-### Test model on all scene data
-
-In order to see if a model well generalized, a bash script is available:
-
-```bash
-bash others/testModelByScene.sh '100' '110' 'saved_models/xxxx.joblib' 'svdne' 'lab'
-```
-
-Parameters list:
-- 1: Begin of interval of data from SVD to use
-- 2: End of interval of data from SVD to use
-- 3: Model we want to test
-- 4: Kind of data input used by trained model
-- 5: feature used by model
-
-
-### Get treshold map
-
-Main objective of this project is to predict as well as a human the noise perception on a photo realistic image. Human threshold is available from training data. So a script was developed to give the predicted treshold from model and compare predicted treshold from the expected one.
-
-```bash
-python prediction/predict_seuil_expe.py --interval "x,x" --model 'saved_models/xxxx.joblib' --mode ["svd", "svdn", "svdne"] --feature ['lab', 'mscn', ...] --limit_detection xx --custom 'custom_min_max_filename'
-```
-
-Parameters list:
-- **model**: mode file saved to use
-- **interval**: the interval of data you want to use from SVD vector.
-- **mode**: kind of data ['svd', 'svdn', 'svdne']; not normalize, normalize vector only and normalize together.
-- **limit_detection**: number of not noisy images found to stop and return threshold (integer).
-- **custom**: custom filename where min and max values are stored (optional parameter).
-
-### Display model performance information
-
-Another script was developed to display into Mardown format the performance of a model.
-
-The content will be divised into two parts:
-- Predicted performance on all scenes
-- Treshold maps obtained from model on each scenes
-
-The previous script need to already have ran to obtain and display treshold maps on this markdown file.
-
-```bash
-python others/save_model_result_in_md.py --interval "xx,xx" --model saved_models/xxxx.joblib --mode ["svd", "svdn", "svdne"] --feature ['lab', 'mscn']
-```
-
-Parameters list:
-- **model**: mode file saved to use
-- **interval**: the interval of data you want to use from SVD vector.
-- **mode**: kind of data ['svd', 'svdn', 'svdne']; not normalize, normalize vector only and normalize together.
-
-Markdown file with all information is saved using model name into **models_info** folder.
-
-### Others...
-
-All others bash scripts are used to combine and run multiple model combinations...
-
 ## License
 
-[The MIT license](https://github.com/prise-3d/Thesis-NoiseDetection-attributes/blob/master/LICENSE)
+[The MIT license](https://github.com/prise-3d/Thesis-NoiseDetection-attributes/blob/master/LICENSE)

File diff suppressed because it is too large
+ 1 - 1
custom_config.py


+ 48 - 40
data_attributes.py

@@ -23,7 +23,7 @@ import custom_config as cfg
 from modules.utils import data as dt
 
 
-def get_svd_data(data_type, block):
+def get_image_features(data_type, block):
     """
     Method which computes and returns the feature data expected for the given data type
     """
@@ -407,12 +407,12 @@ def get_svd_data(data_type, block):
             
         sv_array = np.array(sv_vector)
         
-        _, len = sv_array.shape
+        _, length = sv_array.shape
         
         sv_std = []
         
         # normalize each SV vector and compute the standard deviation of each sub-vector
-        for i in range(len):
+        for i in range(length):
             sv_array[:, i] = utils.normalize_arr(sv_array[:, i])
             sv_std.append(np.std(sv_array[:, i]))
         
@@ -427,66 +427,74 @@ def get_svd_data(data_type, block):
         # data are arranged following the computed std trend
         data = s_arr[indices]
 
-    if 'filters_statistics' in data_type:
-
-        img_width, img_height = 200, 200
+    if 'sv_entropy_std_filters' in data_type:
 
         lab_img = transform.get_LAB_L(block)
         arr = np.array(lab_img)
 
-        # compute all filters statistics
-        def get_stats(arr, I_filter):
-
-            e1       = np.abs(arr - I_filter)
-            L        = np.array(e1)
-            mu0      = np.mean(L)
-            A        = L - mu0
-            H        = A * A
-            E        = np.sum(H) / (img_width * img_height)
-            P        = np.sqrt(E)
-
-            return mu0, P
-
-        stats = []
+        images = []
 
         kernel = np.ones((3,3),np.float32)/9
-        stats.append(get_stats(arr, cv2.filter2D(arr,-1,kernel)))
+        images.append(cv2.filter2D(arr,-1,kernel))
 
         kernel = np.ones((5,5),np.float32)/25
-        stats.append(get_stats(arr, cv2.filter2D(arr,-1,kernel)))
+        images.append(cv2.filter2D(arr,-1,kernel))
 
-        stats.append(get_stats(arr, cv2.GaussianBlur(arr, (3, 3), 0.5)))
+        images.append(cv2.GaussianBlur(arr, (3, 3), 0.5))
 
-        stats.append(get_stats(arr, cv2.GaussianBlur(arr, (3, 3), 1)))
+        images.append(cv2.GaussianBlur(arr, (3, 3), 1))
 
-        stats.append(get_stats(arr, cv2.GaussianBlur(arr, (3, 3), 1.5)))
+        images.append(cv2.GaussianBlur(arr, (3, 3), 1.5))
 
-        stats.append(get_stats(arr, cv2.GaussianBlur(arr, (5, 5), 0.5)))
+        images.append(cv2.GaussianBlur(arr, (5, 5), 0.5))
 
-        stats.append(get_stats(arr, cv2.GaussianBlur(arr, (5, 5), 1)))
+        images.append(cv2.GaussianBlur(arr, (5, 5), 1))
 
-        stats.append(get_stats(arr, cv2.GaussianBlur(arr, (5, 5), 1.5)))
+        images.append(cv2.GaussianBlur(arr, (5, 5), 1.5))
 
-        stats.append(get_stats(arr, medfilt2d(arr, [3, 3])))
+        images.append(medfilt2d(arr, [3, 3]))
 
-        stats.append(get_stats(arr, medfilt2d(arr, [5, 5])))
+        images.append(medfilt2d(arr, [5, 5]))
 
-        stats.append(get_stats(arr, wiener(arr, [3, 3])))
+        images.append(wiener(arr, [3, 3]))
 
-        stats.append(get_stats(arr, wiener(arr, [5, 5])))
+        images.append(wiener(arr, [5, 5]))
 
         wave = w2d(arr, 'db1', 2)
-        stats.append(get_stats(arr, np.array(wave, 'float64')))
-
-        data = []
+        images.append(np.array(wave, 'float64'))
 
-        for stat in stats:
-            data.append(stat[0])
+        sv_vector = []
+        sv_entropy_list = []
+        
+        # for each filtered image, apply SVD and get its SV vector
+        for img in images:
+            s = compression.get_SVD_s(img)
+            sv_vector.append(s)
 
-        for stat in stats:
-            data.append(stat[1])
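+            # entropy contribution of each singular value of this filtered image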
+            sv_entropy = [utils.get_entropy_contribution_of_i(s, id_sv) for id_sv in range(len(s))]
+            sv_entropy_list.append(sv_entropy)
+        
+        sv_std = []
+        
+        sv_array = np.array(sv_vector)
+        _, length = sv_array.shape
+        
+        # normalize each SV vector and compute the standard deviation of each sub-vector
+        for i in range(length):
+            sv_array[:, i] = utils.normalize_arr(sv_array[:, i])
+            sv_std.append(np.std(sv_array[:, i]))
         
-        data = np.array(data)
+        indices = []
+
+        if 'lowest' in data_type:
+            indices = utils.get_indices_of_lowest_values(sv_std, 200)
+
+        if 'highest' in data_type:
+            indices = utils.get_indices_of_highest_values(sv_std, 200)
+
+        # data are arranged following the computed std trend
+        s_arr = compression.get_SVD_s(arr)
+        data = s_arr[indices]
 
     return data
 

+ 0 - 1
dataset

@@ -1 +0,0 @@
-../data/Scenes/

+ 2 - 2
display/display_scenes_zones.py

@@ -19,7 +19,7 @@ sys.path.insert(0, '') # trick to enable import of main folder module
 
 import custom_config as cfg
 from modules.utils import data as dt
-from data_attributes import get_svd_data
+from data_attributes import get_image_features
 
 
 # variables and parameters
@@ -117,7 +117,7 @@ def display_data_scenes(data_type, p_scene, p_kind):
                     # getting expected block id
                     block = img_blocks[id_zone]
 
-                    data = get_svd_data(data_type, block)
+                    data = get_image_features(data_type, block)
 
                     ##################
                     # Data mode part #

+ 1 - 1
display/display_simulation_curves.py

@@ -10,7 +10,7 @@ import matplotlib.pyplot as plt
 sys.path.insert(0, '') # trick to enable import of main folder module
 
 import custom_config as cfg
-from data_attributes import get_svd_data
+from data_attributes import get_image_features
 
 
 # variables and parameters

+ 2 - 2
display/display_svd_area_data_scene.py

@@ -16,7 +16,7 @@ sys.path.insert(0, '') # trick to enable import of main folder module
 
 import custom_config as cfg
 from modules.utils import data as dt
-from data_attributes import get_svd_data
+from data_attributes import get_image_features
 
 # getting configuration information
 zone_folder         = cfg.zone_folder
@@ -122,7 +122,7 @@ def display_svd_values(p_scene, p_interval, p_indices, p_metric, p_mode, p_step,
 
                 img = Image.open(img_path)
 
-                svd_values = get_svd_data(p_metric, img)
+                svd_values = get_image_features(p_metric, img)
 
                 if p_norm:
                     svd_values = svd_values[begin_data:end_data]

+ 2 - 2
display/display_svd_area_scenes.py

@@ -14,7 +14,7 @@ sys.path.insert(0, '') # trick to enable import of main folder module
 
 import custom_config as cfg
 from modules.utils import data as dt
-from data_attributes import get_svd_data
+from data_attributes import get_image_features
 
 # getting configuration information
 zone_folder         = cfg.zone_folder
@@ -126,7 +126,7 @@ def display_svd_values(p_interval, p_indices, p_metric, p_mode, p_step, p_norm,
 
             img = Image.open(img_path)
 
-            svd_values = get_svd_data(p_metric, img)
+            svd_values = get_image_features(p_metric, img)
 
             if p_norm:
                 svd_values = svd_values[begin_data:end_data]

+ 2 - 2
display/display_svd_data_error_scene.py

@@ -15,7 +15,7 @@ sys.path.insert(0, '') # trick to enable import of main folder module
 
 import custom_config as cfg
 from modules.utils import data as dt
-from data_attributes import get_svd_data
+from data_attributes import get_image_features
 
 # getting configuration information
 zone_folder         = cfg.zone_folder
@@ -124,7 +124,7 @@ def display_svd_values(p_scene, p_interval, p_indices, p_feature, p_mode, p_step
 
                 img = Image.open(img_path)
 
-                svd_values = get_svd_data(p_feature, img)
+                svd_values = get_image_features(p_feature, img)
 
                 if p_norm:
                     svd_values = svd_values[begin_data:end_data]

+ 2 - 2
display/display_svd_data_scene.py

@@ -14,7 +14,7 @@ sys.path.insert(0, '') # trick to enable import of main folder module
 
 import custom_config as cfg
 from modules.utils import data as dt
-from data_attributes import get_svd_data
+from data_attributes import get_image_features
 
 # getting configuration information
 zone_folder         = cfg.zone_folder
@@ -108,7 +108,7 @@ def display_svd_values(p_scene, p_interval, p_indices, p_feature, p_mode, p_step
 
                 img = Image.open(img_path)
 
-                svd_values = get_svd_data(p_feature, img)
+                svd_values = get_image_features(p_feature, img)
 
                 if p_norm:
                     svd_values = svd_values[begin_data:end_data]

+ 2 - 2
display/display_svd_zone_scene.py

@@ -15,7 +15,7 @@ sys.path.insert(0, '') # trick to enable import of main folder module
 
 import custom_config as cfg
 from modules.utils import data as dt
-from data_attributes import get_svd_data
+from data_attributes import get_image_features
 
 # getting configuration information
 zone_folder         = cfg.zone_folder
@@ -175,7 +175,7 @@ def display_svd_values(p_scene, p_interval, p_indices, p_zone, p_feature, p_mode
 
                 # get data from mode
                 # Here you can add the way you compute data
-                data = get_svd_data(p_feature, block)
+                data = get_image_features(p_feature, block)
 
                 # TODO : improve part of this code to get correct min / max values
                 if p_norm:

+ 2 - 2
generate/generate_all_data.py

@@ -16,7 +16,7 @@ sys.path.insert(0, '') # trick to enable import of main folder module
 
 import custom_config as cfg
 from modules.utils import data as dt
-from data_attributes import get_svd_data
+from data_attributes import get_image_features
 
 
 # getting configuration information
@@ -99,7 +99,7 @@ def generate_data_svd(data_type, mode):
                 # feature computation part #
                 ###########################
 
-                data = get_svd_data(data_type, block)
+                data = get_image_features(data_type, block)
 
                 ##################
                 # Data mode part #

+ 1 - 1
generate/generate_data_model.py

@@ -14,7 +14,7 @@ sys.path.insert(0, '') # trick to enable import of main folder module
 
 import custom_config as cfg
 from modules.utils import data as dt
-from data_attributes import get_svd_data
+from data_attributes import get_image_features
 
 
 # getting configuration information

+ 1 - 1
generate/generate_data_model_corr_random.py

@@ -15,7 +15,7 @@ sys.path.insert(0, '') # trick to enable import of main folder module
 
 import custom_config as cfg
 from modules.utils import data as dt
-from data_attributes import get_svd_data
+from data_attributes import get_image_features
 
 
 # getting configuration information

+ 1 - 1
generate/generate_data_model_random.py

@@ -14,7 +14,7 @@ sys.path.insert(0, '') # trick to enable import of main folder module
 
 import custom_config as cfg
 from modules.utils import data as dt
-from data_attributes import get_svd_data
+from data_attributes import get_image_features
 
 
 # getting configuration information

+ 1 - 1
generate/generate_data_model_random_center.py

@@ -14,7 +14,7 @@ sys.path.insert(0, '') # trick to enable import of main folder module
 
 import custom_config as cfg
 from modules.utils import data as dt
-from data_attributes import get_svd_data
+from data_attributes import get_image_features
 
 
 # getting configuration information

+ 1 - 1
generate/generate_data_model_random_split.py

@@ -14,7 +14,7 @@ sys.path.insert(0, '') # trick to enable import of main folder module
 
 import custom_config as cfg
 from modules.utils import data as dt
-from data_attributes import get_svd_data
+from data_attributes import get_image_features
 
 
 # getting configuration information

+ 2 - 2
prediction/predict_noisy_image_svd.py

@@ -14,7 +14,7 @@ from PIL import Image
 sys.path.insert(0, '') # trick to enable import of main folder module
 
 import custom_config as cfg
-from data_attributes import get_svd_data
+from data_attributes import get_image_features
 
 # variables and parameters
 path                  = cfg.dataset_path
@@ -79,7 +79,7 @@ def main():
     # load image
     img = Image.open(p_img_file)
 
-    data = get_svd_data(p_feature, img)
+    data = get_image_features(p_feature, img)
 
     # get interval values
     begin, end = p_interval

+ 9 - 9
simulation/run_maxwell_simulation_keras_custom.sh

@@ -7,30 +7,30 @@ simulate_models="simulate_models_keras.csv"
 scenes="A, D, G, H"
 
 start_index=0
-metrics_size=( ["sub_blocks_stats"]=24 ["sub_blocks_stats_reduced"]=20 ["sub_blocks_area"]=16 ["sub_blocks_area_normed"]=20)
+features_size=( ["sub_blocks_stats"]=24 ["sub_blocks_stats_reduced"]=20 ["sub_blocks_area"]=16 ["sub_blocks_area_normed"]=20)
 
-for metric in {"sub_blocks_stats","sub_blocks_stats_reduced","sub_blocks_area","sub_blocks_area_normed"}; do
+for feature in {"sub_blocks_stats","sub_blocks_stats_reduced","sub_blocks_area","sub_blocks_area_normed"}; do
     for nb_zones in {4,6,8,10,12}; do
 
         for mode in {"svd","svdn","svdne"}; do
 
-            end_index=${metrics_size[${metric}]}
-            FILENAME="data/deep_keras_N${end_index}_B${start_index}_E${end_index}_nb_zones_${nb_zones}_${metric}_${mode}"
-            MODEL_NAME="deep_keras_N${end_index}_B${start_index}_E${end_index}_nb_zones_${nb_zones}_${metric}_${mode}"
+            end_index=${features_size[${feature}]}
+            FILENAME="data/deep_keras_N${end_index}_B${start_index}_E${end_index}_nb_zones_${nb_zones}_${feature}_${mode}"
+            MODEL_NAME="deep_keras_N${end_index}_B${start_index}_E${end_index}_nb_zones_${nb_zones}_${feature}_${mode}"
 
-            CUSTOM_MIN_MAX_FILENAME="N${size}_B${start_index}_E${end_index}_nb_zones_${nb_zones}_${metric}_${mode}_min_max"
+            CUSTOM_MIN_MAX_FILENAME="N${size}_B${start_index}_E${end_index}_nb_zones_${nb_zones}_${feature}_${mode}_min_max"
 
             if grep -xq "${MODEL_NAME}" "${simulate_models}"; then
                 echo "Run simulation for model ${MODEL_NAME}"
 
                 # by default regenerate model
-                python generate/generate_data_model_random.py --output ${FILENAME} --interval "${start_index},${end_index}" --kind ${mode} --metric ${metric} --scenes "${scenes}" --nb_zones "${nb_zones}" --percent 1 --renderer "maxwell" --step 40 --random 1 --custom ${CUSTOM_MIN_MAX_FILENAME}
+                python generate/generate_data_model_random.py --output ${FILENAME} --interval "${start_index},${end_index}" --kind ${mode} --feature ${feature} --scenes "${scenes}" --nb_zones "${nb_zones}" --percent 1 --renderer "maxwell" --step 40 --random 1 --custom ${CUSTOM_MIN_MAX_FILENAME}
 
                 python train_model.py --data ${FILENAME} --output ${MODEL_NAME} --choice ${model}
 
-                python prediction/predict_seuil_expe_maxwell_curve.py --interval "${start_index},${end_index}" --model "saved_models/${MODEL_NAME}.json" --mode "${mode}" --metric ${metric} --limit_detection '2' --custom ${CUSTOM_MIN_MAX_FILENAME}
+                python prediction/predict_seuil_expe_maxwell_curve.py --interval "${start_index},${end_index}" --model "saved_models/${MODEL_NAME}.json" --mode "${mode}" --feature ${feature} --limit_detection '2' --custom ${CUSTOM_MIN_MAX_FILENAME}
 
-                python others/save_model_result_in_md_maxwell.py --interval "${start_index},${end_index}" --model "saved_models/${MODEL_NAME}.json" --mode "${mode}" --metric ${metric}
+                python others/save_model_result_in_md_maxwell.py --interval "${start_index},${end_index}" --model "saved_models/${MODEL_NAME}.json" --mode "${mode}" --feature ${feature}
 
             fi
         done