Noise detection metrics for synthesized images, including approaches based on SVD and other compression methods


Noise detection using SVM

Project developed during a thesis to detect perceptual noise generated during the rendering process, using SVM models and attributes (features) extracted from an image.

Requirements

pip install -r requirements.txt

For noise detection, many features are available:

  • lab
  • mscn
  • low_bits_2
  • low_bits_3
  • low_bits_4
  • low_bits_5
  • low_bits_6
  • low_bits_4_shifted_2
  • sub_blocks_stats
  • sub_blocks_area
  • sub_blocks_stats_reduced
  • sub_blocks_area_normed
  • mscn_var_4
  • mscn_var_16
  • mscn_var_64
  • mscn_var_16_max
  • mscn_var_64_max
  • ica_diff
  • svd_trunc_diff
  • ipca_diff
  • svd_reconstruct
  • highest_sv_std_filters
  • lowest_sv_std_filters
  • highest_wave_sv_std_filters
  • lowest_wave_sv_std_filters
  • highest_sv_std_filters_full
  • lowest_sv_std_filters_full
  • highest_sv_entropy_std_filters
  • lowest_sv_entropy_std_filters

Generate all needed data for each feature (this requires the whole dataset; to obtain it, you need to contact us).

python generate/generate_all_data.py --feature all

You can also specify the feature you want to compute and an image step to skip some images:

python generate/generate_all_data.py --feature mscn --step 50
  • step: keep an image only if image id % 50 == 0 (the assumption is that evenly spaced data helps the model fit better).
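As an illustration, the step filter amounts to a simple modulo test (`keep_image` is a hypothetical helper, not a function of the project):

```python
def keep_image(image_id: int, step: int) -> bool:
    """Keep only images whose id is a multiple of the step value."""
    return image_id % step == 0

# With --step 50, only ids 0, 50, 100, ... pass the filter.
kept = [i for i in range(0, 201, 10) if keep_image(i, 50)]
```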


Project structure

Link to your dataset

You have to create a symbolic link to your own dataset, which must respect this structure:

  • dataset/
    • Scene1/
      • zone00/
      • ...
      • zone15/
        • seuilExpe (file which contains the threshold samples of the zone image as perceived by humans)
      • Scene1_00050.png
      • Scene1_00070.png
      • ...
      • Scene1_01180.png
      • Scene1_01200.png
    • Scene2/
      • ...
    • ...

Create your symbolic link:

ln -s /path/to/your/data dataset

Code architecture description

  • modules/*: contains all modules useful for the whole project (such as configuration variables)
  • analysis/*: contains all Jupyter notebooks used for analysis during the thesis
  • generate/*: Python scripts for generating data from scenes (described later)
  • data_processing/*: Python scripts for generating custom datasets for the models
  • prediction/*: Python scripts for predicting new thresholds from computed models
  • simulation/*: Bash scripts used to run simulations from models
  • display/*: Python scripts used to display scene information (such as singular values...)
  • run/*: Bash scripts to run a few steps at once:
    • generate custom dataset
    • train model
    • keep model performance
    • run simulation (if necessary)
  • others/*: folder containing other scripts, such as a script for getting the performance of a model on a specific scene and writing it into a Markdown file.
  • data_attributes.py: file which contains the implementation of all features extracted from an image.
  • custom_config.py: overrides the main project configuration in modules/config/global_config.py.
  • train_model.py: script used to run a specific available model.

Generated data directories:

  • data/*: folder which will contain all generated .train & .test files used to train the models.
  • saved_models/*: all saved scikit-learn or Keras models.
  • models_info/*: all Markdown files generated to get quick information about model performance and the predictions obtained after running a run/runAll_*.sh script.
  • results/: folder containing the model_comparisons.csv file used to store model performance.

How to use?

Remark: all Python scripts have a --help option.

python generate_data_model.py --help

Parameters explained:

  • feature: the chosen feature.
  • output: filename of the data (which will be split into two parts, .train and .test, according to your choices). It needs to be inside the data folder.
  • interval: the interval of data you want to use from the SVD vector.
  • kind: kind of data among ['svd', 'svdn', 'svdne']: not normalized, each vector normalized on its own, and all vectors normalized together.
  • scenes: scenes chosen for the training dataset.
  • zones: zones to take for the training dataset.
  • step: specify whether all pictures are used or only a subset, using the step process.
  • percent: percentage of each zone's data to take (chosen randomly).
  • custom: specify if you want your data normalized using the interval rather than the whole singular values vector. If so, the value of this parameter is the output filename which will store the min and max values found. This file will be useful later to make predictions with the model (optional parameter).
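The three kind options can be sketched as follows (a minimal illustration of the assumed semantics; for 'svdne' the bounds would come from the whole dataset rather than being passed by hand):

```python
def normalize_svd(vector, kind, global_min=None, global_max=None):
    """Normalize an SVD vector according to the chosen kind.

    'svd'   -> raw singular values, no normalization
    'svdn'  -> min-max normalization using the vector's own bounds
    'svdne' -> min-max normalization using dataset-wide bounds
    """
    if kind == 'svd':
        return list(vector)
    if kind == 'svdn':
        lo, hi = min(vector), max(vector)
    elif kind == 'svdne':
        lo, hi = global_min, global_max
    else:
        raise ValueError(f"unknown kind: {kind}")
    return [(x - lo) / (hi - lo) for x in vector]
```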

Train model

This is an example of how to train a model:

python train_model.py --data 'data/xxxx' --output 'model_file_to_save' --choice 'model_choice'

Expected values for the choice parameter are ['svm_model', 'ensemble_model', 'ensemble_model_v2'].
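As a sketch of what such a saved model is, here is a minimal scikit-learn example with hypothetical toy data (the project's real training pipeline lives in train_model.py; the feature values and filename below are illustrative only):

```python
from sklearn.svm import SVC
from joblib import dump, load

# Hypothetical toy feature vectors (e.g. two statistics per image),
# with labels: 1 = noisy, 0 = not noisy.
X = [[0.10, 0.02], [0.90, 0.30], [0.20, 0.05], [0.80, 0.25]]
y = [0, 1, 0, 1]

model = SVC(kernel='rbf', gamma='scale')
model.fit(X, y)

# Trained models end up as .joblib files under saved_models/.
dump(model, 'example_model.joblib')
restored = load('example_model.joblib')
```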

Predict image using model

Now that we have a trained model, we can use it with an image as input:

python prediction/predict_noisy_image_svd.py --image path/to/image.png --interval "x,x" --model saved_models/xxxxxx.joblib --feature 'lab' --mode 'svdn' --custom 'min_max_filename'

  • feature: the chosen feature; it needs to be one of those listed above.
  • custom: filename with the custom min and max values of your data interval. This file was generated using the custom parameter of one of the generate_data_model*.py scripts (optional parameter).
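At prediction time, the stored bounds are reused so the input is normalized exactly as the training data was. A minimal sketch, assuming the file simply stores one value per line (the actual file format may differ):

```python
def load_min_max(path):
    """Read the stored min and max values (assumed format: one value per line)."""
    with open(path) as f:
        values = [float(line) for line in f if line.strip()]
    return values[0], values[1]

def normalize_with_bounds(vector, lo, hi):
    """Normalize a feature vector with the bounds found at training time."""
    return [(x - lo) / (hi - lo) for x in vector]
```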

The model will return only 0 or 1:

  • 1 means a noisy image is detected.
  • 0 means the image does not seem to be noisy.

Every SVD feature developed needs:

  • its name added to the feature_choices_labels global array variable in the custom_config.py file.
  • a specification of how to compute the feature in the get_image_features method of the data_attributes.py file.
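These two steps can be sketched as follows ('my_new_feature' and the row-mean computation are purely hypothetical; the real hooks are feature_choices_labels in custom_config.py and get_image_features in data_attributes.py):

```python
# 1) In custom_config.py (sketch): register the feature name.
feature_choices_labels = ['lab', 'mscn', 'my_new_feature']

# 2) In data_attributes.py (sketch): implement its computation.
def get_image_features(feature, image):
    if feature == 'my_new_feature':
        # Example computation: mean value of each row of a grayscale image.
        return [sum(row) / len(row) for row in image]
    raise NotImplementedError(f"feature not implemented: {feature}")
```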

Predict scene using model

Once a model is trained, we can also use it on a whole scene:

python prediction_scene.py --data path/to/xxxx.csv --model saved_model/xxxx.joblib --output xxxxx --scene xxxx

Remark: the scene parameter needs to be the exact name of the scene.

Visualize data

All scripts named display/display_*.py are used to display data information or results.

Just use --help option to get more information.

Simulate model on scene

All scripts named prediction/predict_seuil_expe*.py are used to simulate model prediction during the rendering process. Do not forget the custom parameter filename if needed.

Once the simulation is done, check your threshold_map/%MODEL_NAME%/simulation_curves_zones_*/ folder and use it with the help of the display_simulation_curves.py script.

License

The MIT license