Synthesis images noise detection using a CNN approach
Clone the project with its submodules and install the requirements:

```bash
git clone --recursive https://github.com/prise-3d/Thesis-NoiseDetection-CNN.git
pip install -r requirements.txt
```
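If the project was cloned without `--recursive`, the submodules referenced in `.gitmodules` can still be fetched afterwards:

```bash
# Fetch the git submodules after a non-recursive clone
git submodule update --init --recursive
```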
Project configuration is centralized in `modules/config/global_config.py`.

The `run/runAll_*.sh` scripts run all the model combinations; `model_comparisons.csv` is the
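As a sketch, launching one of these batch scripts could look like the following (the script name `runAll_maxwell.sh` is hypothetical; use whichever `runAll_*.sh` scripts exist in `run/`):

```bash
# Hypothetical example: the actual runAll_*.sh names depend on the repository contents
bash run/runAll_maxwell.sh
```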
file used to store model performance.

Generate reconstructed data from a specific reconstruction method (run this only once, or clean the `data` folder before re-running):
```bash
python generate/generate_reconstructed_data.py -h
```
Generate a custom dataset from one reconstruction method or several (more methods may be implemented later):
```bash
python generate/generate_dataset.py -h
```
Each reconstruction method expects its own list of parameters; within `--params`, the parameter groups for the different features are separated by `::` (see the example below).
Example:
```bash
python generate/generate_dataset.py --output data/output_data_filename --features "svd_reconstruction, ipca_reconstruction, fast_ica_reconstruction" --renderer "maxwell" --scenes "A, D, G, H" --params "100, 200 :: 50, 10 :: 50" --nb_zones 10 --random 1
```
Then, train a model using your custom dataset:
```bash
python train_model.py --data data/custom_dataset --output output_model_name
```
Now that we have a trained model, we can use it with an image as input:
```bash
python prediction/predict_noisy_image.py --image path/to/image.png --model saved_models/xxxxxx.json --features 'svd_reconstruction' --params '100, 200'
```
The model returns only 0 or 1, a binary noisy / not-noisy decision for the input image.
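A minimal batch-prediction sketch, assuming the script prints its 0/1 label on stdout (the image folder and model file below are placeholders):

```bash
# Minimal sketch: classify every PNG in a folder using the interface shown above.
# Assumes predict_noisy_image.py prints the 0/1 label on stdout; paths are placeholders.
for img in images/*.png; do
    label=$(python prediction/predict_noisy_image.py --image "$img" \
        --model saved_models/xxxxxx.json \
        --features 'svd_reconstruction' --params '100, 200')
    echo "$img -> $label"
done
```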
All scripts named `prediction/predict_seuil_expe*.py` are used to simulate model predictions during the rendering process (*seuil* is French for threshold).
Once the simulation is done, check your `threshold_map/%MODEL_NAME%/simulation_curves_zones_*/` folder and explore it with the help of the `display_simulation_curves.py` script.
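As a sketch, assuming the script lives in the `display/` folder (per the repository layout) and follows the same `-h` convention as the generate scripts:

```bash
# List the available options of the display script (assumes an argparse-style -h flag)
python display/display_simulation_curves.py -h
```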