@@ -1,5 +1,7 @@
# Noise detection using SVM

+Project developed during a thesis to detect (perceptual) noise generated during the rendering process, using SVM models and attributes (features) extracted from an image.
+
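As a rough illustration of the idea (this is not code from the repository; `compute_feature` stands in for the features implemented in `data_attributes.py` and listed later in this README), detection boils down to classifying per-image feature vectors with a scikit-learn SVM:

```python
# Illustrative sketch only: train an SVM to classify image zones as noisy (1) or not noisy (0).
import numpy as np
from sklearn.svm import SVC

def compute_feature(image: np.ndarray) -> np.ndarray:
    # Hypothetical placeholder: e.g. the singular values of a grayscale zone.
    # The real features live in `data_attributes.py` (see the feature list below).
    return np.linalg.svd(image, compute_uv=False)

rng = np.random.default_rng(0)
noisy_zones = [rng.normal(0.5, 0.2, (32, 32)) for _ in range(20)]                 # stand-ins for noisy zones
clean_zones = [np.full((32, 32), 0.5) + rng.normal(0, 0.01, (32, 32)) for _ in range(20)]

X = np.array([compute_feature(z) for z in noisy_zones + clean_zones])
y = np.array([1] * 20 + [0] * 20)          # human-provided labels: 1 = noisy, 0 = not noisy

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X[:2]))                  # -> array of 0/1 labels
```

In practice the scripts described below take care of extracting the features, building the *.train*/*.test* files and fitting the models.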
## Requirements

```
@@ -15,12 +17,33 @@ python generate/generate_all_data.py --feature all
For noise detection, many features are available (an illustrative sketch of one of them follows the list):
- lab
- mscn
-- mscn_revisited
- low_bits_2
+- low_bits_3
- low_bits_4
- low_bits_5
- low_bits_6
- low_bits_4_shifted_2
+- sub_blocks_stats
+- sub_blocks_area
+- sub_blocks_stats_reduced
+- sub_blocks_area_normed
+- mscn_var_4
+- mscn_var_16
+- mscn_var_64
+- mscn_var_16_max
+- mscn_var_64_max
+- ica_diff
+- svd_trunc_diff
+- ipca_diff
+- svd_reconstruct
+- highest_sv_std_filters
+- lowest_sv_std_filters
+- highest_wave_sv_std_filters
+- lowest_wave_sv_std_filters
+- highest_sv_std_filters_full
+- lowest_sv_std_filters_full
+- highest_sv_entropy_std_filters
+- lowest_sv_entropy_std_filters
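As an illustration of one of these attributes: the `mscn*` entries refer to Mean Subtracted Contrast Normalized (MSCN) coefficients, a standard transform in no-reference image quality assessment. A minimal sketch (the exact parameters used in `data_attributes.py` may differ):

```python
# Minimal MSCN sketch (assumption: a BRISQUE-like local normalization).
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn_coefficients(gray: np.ndarray, sigma: float = 7 / 6, c: float = 1.0) -> np.ndarray:
    gray = gray.astype(np.float64)
    mu = gaussian_filter(gray, sigma)                      # local mean
    var = gaussian_filter(gray * gray, sigma) - mu * mu    # local variance
    std = np.sqrt(np.abs(var))                             # local contrast
    return (gray - mu) / (std + c)                         # normalized coefficients

# Features such as mscn_var_16 could then be statistics (e.g. sub-block variances)
# computed on these coefficients.
```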

You can also specify the feature you want to compute and an image step to skip some images:
```bash
@@ -29,44 +52,89 @@ python generate/generate_all_data.py --feature mscn --step 50

- **step**: keep an image only if its id % 50 == 0 (the assumption is that keeping spaced-out data will help the model fit better).

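Purely as an illustration of the step filter (the file-name parsing here is an assumption based on the dataset layout shown below):

```python
# Illustrative sketch of the --step selection: keep an image only when its id
# (the numeric suffix of the file name, e.g. 50 in "Scene1_00050.png") is a
# multiple of the chosen step.
step = 50
images = ["Scene1_00050.png", "Scene1_00070.png", "Scene1_00100.png", "Scene1_00150.png"]

def image_id(filename: str) -> int:
    return int(filename.rsplit("_", 1)[1].split(".")[0])

kept = [name for name in images if image_id(name) % step == 0]
print(kept)  # ['Scene1_00050.png', 'Scene1_00100.png', 'Scene1_00150.png']
```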
-## How to use
+Or generate all data:

-### Multiple directories and scripts are available:
+```bash
+python generate/generate_all_data.py --feature all
+```

+## Requirements
+
+```
+pip install -r requirements.txt
+```
+
+## Project structure
+
+### Link to your dataset
+
+You have to create a symbolic link to your own dataset, which has to respect this structure:
+
+- dataset/
+  - Scene1/
+    - zone00/
+    - ...
+    - zone15/
+      - seuilExpe (file which contains the threshold samples of the zone image as perceived by a human)
+    - Scene1_00050.png
+    - Scene1_00070.png
+    - ...
+    - Scene1_01180.png
+    - Scene1_01200.png
+  - Scene2/
+    - ...
+  - ...
+
+Create your symbolic link:
+
+```
+ln -s /path/to/your/data dataset
+```
+
+### Code architecture description

- **fichiersSVD_light/\***: all scene files information (zones of each scene, SVD descriptor files information and so on...).
- **train_model.py**: script which is used to run specific model available.
- **data/\***: folder which will contain all *.train* & *.test* files in order to train model.
- **saved_models/*.joblib**: all scikit learn models saved.
- **models_info/***: all markdown files generated to get quick information about model performance and prediction.
- **results**: This folder contains **model_comparisons.csv** obtained after running runAll_maxwell_*.sh script.
- **modules/\***: contains all modules useful for the whole project (such as configuration variables)
+- **analysis/\***: contains all Jupyter notebooks used for analysis during the thesis
+- **generate/\***: contains Python scripts for generating data from scenes (described later)
+- **data_processing/\***: all Python scripts for generating custom datasets for the models
+- **prediction/\***: all Python scripts for predicting new thresholds from computed models
+- **simulation/\***: contains all bash scripts used to run simulations from models
+- **display/\***: contains all Python scripts used to display scene information (such as singular values...)
+- **run/\***: bash scripts to run a few steps at once:
+  - generate custom dataset
+  - train model
+  - keep model performance
+  - run simulation (if necessary)
+- **others/\***: folder which contains other scripts, such as a script for getting the performance of a model on a specific scene and writing it into a Markdown file.
+- **data_attributes.py**: file which contains the implementation of all features extracted from an image.
+- **custom_config.py**: overrides the main project configuration of `modules/config/global_config.py`
+- **train_model.py**: script used to train a specific available model.
+
+### Generated data directories:

-### Scripts for generating data files
+- **data/\***: folder which will contain all generated *.train* & *.test* files used to train the models.
+- **saved_models/\***: all saved scikit-learn or Keras models.
+- **models_info/\***: all Markdown files generated to give quick information about model performance and predictions, obtained after running a `run/runAll_*.sh` script.
+- **results/**: this folder contains the `model_comparisons.csv` file used to store model performance.

-Two scripts can be used for generating data in order to fit model:
- **generate_data_model.py**: zones are specified and stayed fixed for each scene
- **generate_data_model_random.py**: zones are chosen randomly (just a number of zone is specified)
- **generate_data_model_random_maxwell.py**: zones are chosen randomly (just a number of zone is specified). Only maxwell scene are used.

+## How to use?

**Remark**: Note that all Python scripts have a *--help* option.

```
-python generate/generate_data_model.py --help
-
-python generate/generate_data_model.py --output xxxx --interval 0,20 --kind svdne --scenes "A, B, D" --zones "0, 1, 2" --percent 0.7 --sep: --rowindex 1 --custom custom_min_max_filename
+python generate_data_model.py --help
```

Parameters explained:
-- **output**: filename of data (which will be split into two parts, *.train* and *.test* relative to your choices).
+- **feature**: the feature you wish to use
+- **output**: filename of the data (which will be split into two parts, *.train* and *.test*, according to your choices). It needs to be inside the `data` folder.
- **interval**: the interval of data you want to use from the SVD vector.
- **kind**: kind of data ['svd', 'svdn', 'svdne']: not normalized, normalized per vector, and normalized together, respectively.
- **scenes**: scenes chosen for the training dataset.
- **zones**: zones to use for the training dataset.
+- **step**: specify whether all pictures are used or only a subset selected with a step.
- **percent**: percentage of each zone's data to take (chosen randomly).
-- **sep**: output csv file seperator used
-- **rowindex**: if 1 then row will be like that 1:xxxxx, 2:xxxxxx, ..., n:xxxxxx
- **custom**: specify if you want your data normalized using the interval and not the whole singular values vector. If so, the value of this parameter is the output filename which will store the min and max values found. This file will be useful later to make predictions with the model (optional parameter).

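To make the **custom** min/max mechanism concrete, here is a minimal sketch of the idea (illustrative only; the file format actually written by the scripts may differ):

```python
# Illustrative sketch of the interval-based normalization behind --custom.
# Min and max are computed on the selected interval of the training SVD vectors,
# stored in a file, and reused later to normalize vectors at prediction time.
import numpy as np

train_svd_vectors = np.random.rand(100, 200)    # stand-in for SVD vectors of training images
begin, end = 0, 20                              # --interval "0,20"

interval_data = train_svd_vectors[:, begin:end]
min_val, max_val = interval_data.min(), interval_data.max()

np.savetxt("custom_min_max_filename", [min_val, max_val])   # the --custom output file

def normalize(svd_vector: np.ndarray) -> np.ndarray:
    # the same min/max are reused when predicting on new images
    return (svd_vector[begin:end] - min_val) / (max_val - min_val)
```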
### Train model
@@ -74,7 +142,7 @@ Parameters explained:
This is an example of how to train a model:

```bash
-python train_model.py --data 'data/xxxxx.train' --output 'model_file_to_save' --choice 'model_choice'
+python train_model.py --data 'data/xxxx' --output 'model_file_to_save' --choice 'model_choice'
```

Expected values for the **choice** parameter are ['svm_model', 'ensemble_model', 'ensemble_model_v2'].
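As a purely illustrative sketch of what an SVM-based choice boils down to (the repository's actual model definitions may differ), fitting and saving such a classifier with scikit-learn and joblib looks roughly like:

```python
# Illustrative sketch only: fit a scikit-learn SVM on a .train-style dataset and
# save it with joblib, as the models under saved_models/ are stored.
import numpy as np
from joblib import dump
from sklearn.svm import SVC

X = np.random.rand(40, 20)        # stand-in for the feature vectors of a .train file
y = np.array([1, 0] * 20)         # 1 = noisy, 0 = not noisy

model = SVC(kernel="rbf").fit(X, y)
dump(model, "xxxx.joblib")        # the project keeps these files under saved_models/
```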
@@ -95,15 +163,15 @@ The model will return only 0 or 1:
- 0 means the image does not seem noisy.

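For example, loading one of the saved `.joblib` models and querying it could look like this (illustrative sketch; the feature vector has to be built with the same feature and interval as during training):

```python
# Illustrative sketch: load a saved .joblib model and get the 0/1 decision
# for one feature vector (1 = noisy, 0 = not noisy).
import numpy as np
from joblib import load

model = load("saved_models/xxxx.joblib")     # placeholder path, as in the README examples
feature_vector = np.random.rand(1, 20)       # stand-in: must match the training features/interval
label = int(model.predict(feature_vector)[0])
print("noisy" if label == 1 else "not noisy")
```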
Every newly developed SVD feature needs (an illustrative sketch follows the list):
- Name added into *feature_choices_labels* global array variable of **modules/utils/config.py** file.
- A specification of how you compute the feature into *get_svd_data* method of **modules/utils/data_type.py** file.
+- Its name added to the *feature_choices_labels* global array variable in the `custom_config.py` file.
+- A specification of how to compute the feature in the *get_image_features* method of the `data_attributes.py` file.

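An illustrative sketch of those two steps (the real signature of *get_image_features* and the layout of `custom_config.py` are assumptions here, and `my_new_feature` is a hypothetical name):

```python
# --- custom_config.py (sketch): register the new feature name ---
# feature_choices_labels = [..., 'my_new_feature']

# --- data_attributes.py (sketch): describe how the feature is computed ---
import numpy as np

def get_image_features(data_type, block):
    # Assumed signature: 'data_type' is the feature name, 'block' a grayscale image zone.
    gray = np.asarray(block, dtype=np.float64)
    if data_type == 'my_new_feature':
        return np.linalg.svd(gray, compute_uv=False)   # e.g. singular values of the zone
    # ... existing feature branches go here ...
    raise ValueError(f"Unknown feature: {data_type}")
```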
### Predict scene using model

Now that we have a trained model, we can use it with an image as input:

```bash
-python prediction/prediction_scene.py --data path/to/xxxx.csv --model saved_model/xxxx.joblib --output xxxxx --scene xxxx
+python prediction_scene.py --data path/to/xxxx.csv --model saved_model/xxxx.joblib --output xxxxx --scene xxxx
```
**Remark**: the *scene* parameter needs to be the exact name of the scene.

@@ -115,68 +183,10 @@ Just use --help option to get more information.

### Simulate model on scene

-All scripts named **predict_seuil_expe\*.py** are used to simulate model prediction during rendering process. Do not forget the **custom** parameter filename if necessary.
+All scripts named **prediction/predict_seuil_expe\*.py** are used to simulate model predictions during the rendering process. Do not forget the **custom** parameter filename if necessary.

Once the simulation is done, check your **threshold_map/%MODEL_NAME%/simulation\_curves\_zones\_\*/** folder and use it with the help of the **display_simulation_curves.py** script.

-## Others scripts
-
-### Test model on all scene data
-
-In order to see if a model well generalized, a bash script is available:
-
-```bash
-bash others/testModelByScene.sh '100' '110' 'saved_models/xxxx.joblib' 'svdne' 'lab'
-```
-
-Parameters list:
-- 1: Begin of interval of data from SVD to use
-- 2: End of interval of data from SVD to use
-- 3: Model we want to test
-- 4: Kind of data input used by trained model
-- 5: feature used by model
-
-
-### Get treshold map
-
-Main objective of this project is to predict as well as a human the noise perception on a photo realistic image. Human threshold is available from training data. So a script was developed to give the predicted treshold from model and compare predicted treshold from the expected one.
-
-```bash
-python prediction/predict_seuil_expe.py --interval "x,x" --model 'saved_models/xxxx.joblib' --mode ["svd", "svdn", "svdne"] --feature ['lab', 'mscn', ...] --limit_detection xx --custom 'custom_min_max_filename'
-```
-
-Parameters list:
-- **model**: mode file saved to use
-- **interval**: the interval of data you want to use from SVD vector.
-- **mode**: kind of data ['svd', 'svdn', 'svdne']; not normalize, normalize vector only and normalize together.
-- **limit_detection**: number of not noisy images found to stop and return threshold (integer).
-- **custom**: custom filename where min and max values are stored (optional parameter).
-
-### Display model performance information
-
-Another script was developed to display into Mardown format the performance of a model.
-
-The content will be divised into two parts:
-- Predicted performance on all scenes
-- Treshold maps obtained from model on each scenes
-
-The previous script need to already have ran to obtain and display treshold maps on this markdown file.
-
-```bash
-python others/save_model_result_in_md.py --interval "xx,xx" --model saved_models/xxxx.joblib --mode ["svd", "svdn", "svdne"] --feature ['lab', 'mscn']
-```
-
-Parameters list:
-- **model**: mode file saved to use
-- **interval**: the interval of data you want to use from SVD vector.
-- **mode**: kind of data ['svd', 'svdn', 'svdne']; not normalize, normalize vector only and normalize together.
-
-Markdown file with all information is saved using model name into **models_info** folder.
-
-### Others...
-
-All others bash scripts are used to combine and run multiple model combinations...
-
## License

-[The MIT license](https://github.com/prise-3d/Thesis-NoiseDetection-attributes/blob/master/LICENSE)
+[The MIT license](https://github.com/prise-3d/Thesis-NoiseDetection-attributes/blob/master/LICENSE)