Add simulation folder

Jérôme BUISINE, 4 years ago
Parent commit a159d882df

+ 72 - 11
README.md

@@ -1,15 +1,67 @@
-# Noise detection project
+# Noise detection with CNN
 
 
 ## Requirements
 
 
 ```bash
-git clone --recursive https://github.com/prise-3d/Thesis-NoiseDetection-CNN.git XXXXX
+git clone --recursive https://github.com/prise-3d/Thesis-NoiseDetection-CNN.git
 ```
 
 
 ```bash
 pip install -r requirements.txt
 ```
 
 
+## Project structure
+
+### Link to your dataset
+
+You have to create a symbolic link to your own dataset, which must respect this structure:
+
+- dataset/
+  - Scene1/
+    - zone00/
+    - ...
+    - zone15/
+      - seuilExpe (file which contains the human-perceived threshold samples of the zone image)
+    - Scene1_00050.png
+    - Scene1_00070.png
+    - ...
+    - Scene1_01180.png
+    - Scene1_01200.png
+  - Scene2/
+    - ...
+  - ...
+
+Create your symbolic link:
+
+```bash
+ln -s /path/to/your/data dataset
+```
+
+### Code architecture description
+
+- **modules/\***: contains all modules useful for the whole project (such as configuration variables)
+- **analysis/\***: contains all Jupyter notebooks used for analysis during the thesis
+- **generate/\***: contains Python scripts to generate data from scenes (described later)
+- **prediction/\***: all Python scripts to predict new thresholds from computed models
+- **simulation/\***: contains all bash scripts used to run simulations from models
+- **display/\***: contains all Python scripts used to display scene information (such as singular values...)
+- **run/\***: bash scripts to run several steps at once:
+  - generate custom dataset
+  - train model
+  - keep model performance
+  - run simulation (if necessary)
+- **others/\***: folder which contains other scripts, such as a script to get the performance of a model on a specific scene and write it into a Markdown file.
+- **data_attributes.py**: file which contains the implementation of all features extracted from an image.
+- **custom_config.py**: overrides the main project configuration defined in `modules/config/global_config.py`
+- **train_model.py**: script used to train one of the available models.
+
+### Generated data directories
+
+- **data/\***: folder which will contain all generated *.train* & *.test* files used to train the models.
+- **saved_models/\***: all saved scikit-learn or Keras models.
+- **models_info/\***: all Markdown files generated to give quick information about model performance and the predictions obtained after running a `run/runAll_*.sh` script.
+- **results/**: folder which contains the `model_comparisons.csv` file used to store model performance.
+
 ## How to use
 
 
 Generate reconstructed data from a specific reconstruction method (run it only once, or clean the data folder beforehand):
@@ -46,19 +98,28 @@ python generate/generate_dataset.py --output data/output_data_filename --feature
 
 
 Then, train a model using your custom dataset:
 ```bash
-python train_model --data data/custom_dataset --output output_model_name
+python train_model.py --data data/custom_dataset --output output_model_name
+```
+
+### Predict image using model
+
+Now that we have a trained model, we can use it with an image as input:
+
+```bash
+python prediction/predict_noisy_image.py --image path/to/image.png --model saved_models/xxxxxx.json --features 'svd_reconstruction' --params '100, 200'
 ```
 
 
-## Modules
+- **features**: the feature choice needs to be one of those listed above.
+
+The model will return only 0 or 1:
+- 1 means a noisy image is detected.
+- 0 means the image does not seem to be noisy.
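+
+A minimal usage sketch (assuming the script prints the predicted 0/1 label to stdout; the image path, model file and parameters are the placeholders from the example above):
+
+```bash
+# hypothetical wrapper: capture the printed 0/1 label and act on it
+label=$(python prediction/predict_noisy_image.py --image path/to/image.png --model saved_models/xxxxxx.json --features 'svd_reconstruction' --params '100, 200')
+if [ "${label}" = "1" ]; then
+    echo "noisy image detected"
+else
+    echo "image does not seem to be noisy"
+fi
+```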
+
+### Simulate model on scene
 
 
-This project contains modules:
-- **modules/utils/config.py**: *Store all configuration information about the project and dataset information*
-- **modules/utils/data.py**: *Usefull methods used for dataset*
-- **modules/models/metrics.py**: *Usefull methods for performance comparisons*
-- **modules/models/models.py**: *Generation of CNN model*
-- **modules/classes/Transformation.py**: *Transformation class for more easily manage computation*
+All scripts named **prediction/predict_seuil_expe\*.py** are used to simulate model prediction during the rendering process.
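+
+A minimal sketch of such a simulation run (model name, feature, interval and min/max file are placeholders; the flags mirror those used in `simulation/run_maxwell_simulation_filters_statistics.sh`):
+
+```bash
+python prediction/predict_seuil_expe_maxwell_curve.py --interval "0,26" \
+    --model "saved_models/your_model_name.joblib" \
+    --mode "svdne" --feature filters_statistics \
+    --custom your_min_max_filename
+```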
 
 
-All these modules will be enhanced during development of the project
+Once the simulation is done, check out your **threshold_map/%MODEL_NAME%/simulation\_curves\_zones\_\*/** folder and use it with the help of the **display_simulation_curves.py** script.
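+
+A minimal sketch (the folder name is a placeholder; `simulation/generate_all_simulate_curves.sh` loops over every folder of `threshold_map/` in the same way):
+
+```bash
+python display/display_simulation_curves.py --folder threshold_map/your_model_name/simulation_curves_zones_xxxx
+```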
 
 
 ## License
 
 

+ 0 - 63
run_maxwell_simulation_custom.sh

@@ -1,63 +0,0 @@
-#! bin/bash
-
-# file which contains model names we want to use for simulation
-simulate_models="simulate_models.csv"
-
-# selection of four scenes (only maxwell)
-scenes="A, D, G, H"
-VECTOR_SIZE=200
-
-for size in {"4","8","16","26","32","40"}; do
-    for metric in {"lab","mscn","mscn_revisited","low_bits_2","low_bits_3","low_bits_4","low_bits_5","low_bits_6","low_bits_4_shifted_2","ica_diff","ipca_diff","svd_trunc_diff","svd_reconstruct"}; do
-
-        half=$(($size/2))
-        start=-$half
-
-        for counter in {0..4}; do
-             end=$(($start+$size))
-
-             if [ "$end" -gt "$VECTOR_SIZE" ]; then
-                 start=$(($VECTOR_SIZE-$size))
-                 end=$(($VECTOR_SIZE))
-             fi
-
-             if [ "$start" -lt "0" ]; then
-                 start=$((0))
-                 end=$(($size))
-             fi
-
-             for nb_zones in {4,6,8,10,12,14}; do
-
-                 for mode in {"svd","svdn","svdne"}; do
-                     for model in {"svm_model","ensemble_model","ensemble_model_v2"}; do
-
-                        FILENAME="data/${model}_N${size}_B${start}_E${end}_nb_zones_${nb_zones}_${metric}_${mode}"
-                        MODEL_NAME="${model}_N${size}_B${start}_E${end}_nb_zones_${nb_zones}_${metric}_${mode}"
-                        CUSTOM_MIN_MAX_FILENAME="N${size}_B${start}_E${end}_nb_zones_${nb_zones}_${metric}_${mode}_min_max"
-
-                        if grep -xq "${MODEL_NAME}" "${simulate_models}"; then
-                            echo "Run simulation for model ${MODEL_NAME}"
-
-                            # by default regenerate model
-                            python generate_data_model_random.py --output ${FILENAME} --interval "${start},${end}" --kind ${mode} --metric ${metric} --scenes "${scenes}" --nb_zones "${nb_zones}" --percent 1 --renderer "maxwell" --step 40 --random 1 --custom ${CUSTOM_MIN_MAX_FILENAME}
-
-                            python train_model.py --data ${FILENAME} --output ${MODEL_NAME} --choice ${model}
-
-                            python predict_seuil_expe_maxwell_curve.py --interval "${start},${end}" --model "saved_models/${MODEL_NAME}.joblib" --mode "${mode}" --metric ${metric} --limit_detection '2' --custom ${CUSTOM_MIN_MAX_FILENAME}
-
-                            python save_model_result_in_md_maxwell.py --interval "${start},${end}" --model "saved_models/${MODEL_NAME}.joblib" --mode "${mode}" --metric ${metric}
-
-                        fi
-                    done
-                done
-            done
-
-            if [ "$counter" -eq "0" ]; then
-                start=$(($start+50-$half))
-            else
-                start=$(($start+50))
-            fi
-
-        done
-    done
-done

+ 6 - 0
simulation/generate_all_simulate_curves.sh

@@ -0,0 +1,6 @@
+# display simulation curves for every model folder generated under threshold_map/
+for file in "threshold_map"/*; do
+
+    echo ${file}
+
+    python display/display_simulation_curves.py --folder ${file}
+done

+ 39 - 0
simulation/run_maxwell_simulation_filters_statistics.sh

@@ -0,0 +1,39 @@
+#!/bin/bash
+
+# file which contains model names we want to use for simulation
+simulate_models="simulate_models.csv"
+
+# selection of four scenes (only maxwell)
+scenes="A, D, G, H"
+
+size="26"
+
+# for feature in {"lab","mscn","low_bits_2","low_bits_3","low_bits_4","low_bits_5","low_bits_6","low_bits_4_shifted_2","ica_diff","svd_trunc_diff","ipca_diff","svd_reconstruct"}; do
+feature="filters_statistics"
+
+for nb_zones in {4,6,8,10,12}; do
+    for mode in {"svd","svdn","svdne"}; do
+        for model in {"svm_model","ensemble_model","ensemble_model_v2"}; do
+
+            FILENAME="data/${model}_N${size}_B0_E${size}_nb_zones_${nb_zones}_${feature}_${mode}"
+            MODEL_NAME="${model}_N${size}_B0_E${size}_nb_zones_${nb_zones}_${feature}_${mode}"
+            CUSTOM_MIN_MAX_FILENAME="N${size}_B0_E${size}_nb_zones_${nb_zones}_${feature}_${mode}_min_max"
+
+            echo $MODEL_NAME
+
+            # only compute if necessary (in case the server goes down)
+            if grep -q "${MODEL_NAME}" "${simulate_models}"; then
+
+                echo "Run simulation for ${MODEL_NAME}..."
+
+                # use the already generated model (data generation and training kept commented out)
+                # python generate/generate_data_model_random.py --output ${FILENAME} --interval "0,${size}" --kind ${mode} --feature ${feature} --scenes "${scenes}" --nb_zones "${nb_zones}" --percent 1 --renderer "maxwell" --step 40 --random 1 --custom ${CUSTOM_MIN_MAX_FILENAME}
+                # python train_model.py --data ${FILENAME} --output ${MODEL_NAME} --choice ${model}
+
+                python prediction/predict_seuil_expe_maxwell_curve.py --interval "0,${size}" --model "saved_models/${MODEL_NAME}.joblib" --mode "${mode}" --feature ${feature} --custom ${CUSTOM_MIN_MAX_FILENAME}
+
+                python others/save_model_result_in_md_maxwell.py --interval "0,${size}" --model "saved_models/${MODEL_NAME}.joblib" --mode "${mode}" --feature ${feature}
+            fi
+        done
+    done
+done