
Creation of dataset file script added

Jérôme BUISINE 5 years ago
parent
commit
f80a4942e7
100 changed files with 385 additions and 1093 deletions
  1. +6 -1  .gitignore
  2. +8 -0  LICENSE
  3. +25 -0  TODO.md
  4. +0 -244  classification_cnn_keras.py
  5. +0 -235  classification_cnn_keras_cross_validation.py
  6. +0 -254  classification_cnn_keras_svd.py
  7. +0 -275  classification_cnn_keras_svd_img.py
  8. +0 -29  config.json
  9. +150 -55  generate_dataset.py
  10. +196 -0  generate_reconstructed_data.py
  11. Binary  img_train/final/appartAopt_00850.png
  12. Binary  img_train/final/appartAopt_00860.png
  13. Binary  img_train/final/appartAopt_00870.png
  14. Binary  img_train/final/appartAopt_00880.png
  15. Binary  img_train/final/appartAopt_00890.png
  16. Binary  img_train/final/appartAopt_00900.png
  17. Binary  img_train/final/bureau1_9700.png
  18. Binary  img_train/final/bureau1_9750.png
  19. Binary  img_train/final/bureau1_9800.png
  20. Binary  img_train/final/bureau1_9850.png
  21. Binary  img_train/final/bureau1_9900.png
  22. Binary  img_train/final/bureau1_9950.png
  23. Binary  img_train/final/cendrierIUT2_01180.png
  24. Binary  img_train/final/cendrierIUT2_01240.png
  25. Binary  img_train/final/cendrierIUT2_01300.png
  26. Binary  img_train/final/cendrierIUT2_01360.png
  27. Binary  img_train/final/cendrierIUT2_01420.png
  28. Binary  img_train/final/cendrierIUT2_01480.png
  29. Binary  img_train/final/cuisine01_01150.png
  30. Binary  img_train/final/cuisine01_01160.png
  31. Binary  img_train/final/cuisine01_01170.png
  32. Binary  img_train/final/cuisine01_01180.png
  33. Binary  img_train/final/cuisine01_01190.png
  34. Binary  img_train/final/cuisine01_01200.png
  35. Binary  img_train/final/echecs09750.png
  36. Binary  img_train/final/echecs09800.png
  37. Binary  img_train/final/echecs09850.png
  38. Binary  img_train/final/echecs09900.png
  39. Binary  img_train/final/echecs09950.png
  40. Binary  img_train/final/echecs10000.png
  41. Binary  img_train/final/pnd_39750.png
  42. Binary  img_train/final/pnd_39800.png
  43. Binary  img_train/final/pnd_39850.png
  44. Binary  img_train/final/pnd_39900.png
  45. Binary  img_train/final/pnd_39950.png
  46. Binary  img_train/final/pnd_40000.png
  47. Binary  img_train/noisy/appartAopt_00070.png
  48. Binary  img_train/noisy/appartAopt_00080.png
  49. Binary  img_train/noisy/appartAopt_00090.png
  50. Binary  img_train/noisy/appartAopt_00100.png
  51. Binary  img_train/noisy/appartAopt_00110.png
  52. Binary  img_train/noisy/appartAopt_00120.png
  53. Binary  img_train/noisy/bureau1_100.png
  54. Binary  img_train/noisy/bureau1_1000.png
  55. Binary  img_train/noisy/bureau1_1050.png
  56. Binary  img_train/noisy/bureau1_1100.png
  57. Binary  img_train/noisy/bureau1_1150.png
  58. Binary  img_train/noisy/bureau1_1250.png
  59. Binary  img_train/noisy/cendrierIUT2_00040.png
  60. Binary  img_train/noisy/cendrierIUT2_00100.png
  61. Binary  img_train/noisy/cendrierIUT2_00160.png
  62. Binary  img_train/noisy/cendrierIUT2_00220.png
  63. Binary  img_train/noisy/cendrierIUT2_00280.png
  64. Binary  img_train/noisy/cendrierIUT2_00340.png
  65. Binary  img_train/noisy/cuisine01_00050.png
  66. Binary  img_train/noisy/cuisine01_00060.png
  67. Binary  img_train/noisy/cuisine01_00070.png
  68. Binary  img_train/noisy/cuisine01_00080.png
  69. Binary  img_train/noisy/cuisine01_00090.png
  70. Binary  img_train/noisy/cuisine01_00100.png
  71. Binary  img_train/noisy/echecs00050.png
  72. Binary  img_train/noisy/echecs00100.png
  73. Binary  img_train/noisy/echecs00150.png
  74. Binary  img_train/noisy/echecs00200.png
  75. Binary  img_train/noisy/echecs00250.png
  76. Binary  img_train/noisy/echecs00300.png
  77. Binary  img_train/noisy/pnd_100.png
  78. Binary  img_train/noisy/pnd_1000.png
  79. Binary  img_train/noisy/pnd_1050.png
  80. Binary  img_train/noisy/pnd_1150.png
  81. Binary  img_train/noisy/pnd_1200.png
  82. Binary  img_train/noisy/pnd_1300.png
  83. Binary  img_validation/final/SdB2_00900.png
  84. Binary  img_validation/final/SdB2_00910.png
  85. Binary  img_validation/final/SdB2_00920.png
  86. Binary  img_validation/final/SdB2_00930.png
  87. Binary  img_validation/final/SdB2_00940.png
  88. Binary  img_validation/final/SdB2_00950.png
  89. Binary  img_validation/final/SdB2_D_00900.png
  90. Binary  img_validation/final/SdB2_D_00910.png
  91. Binary  img_validation/final/SdB2_D_00920.png
  92. Binary  img_validation/final/SdB2_D_00930.png
  93. Binary  img_validation/final/SdB2_D_00940.png
  94. Binary  img_validation/final/SdB2_D_00950.png
  95. Binary  img_validation/final/selles_envir02850.png
  96. Binary  img_validation/final/selles_envir02900.png
  97. Binary  img_validation/final/selles_envir02950.png
  98. Binary  img_validation/final/selles_envir03000.png
  99. Binary  img_validation/final/selles_envir03050.png
  100. +0 -0  img_validation/final/selles_envir03100.png

+ 6 - 1
.gitignore

@@ -2,9 +2,14 @@
 data
 .python-version
 __pycache__
+.vscode
 
 # by default avoid model files and png files
 *.h5
 *.png
 !saved_models/*.h5
-!saved_models/*.png
+!saved_models/*.png
+
+# data
+learned_zones
+dataset

+ 8 - 0
LICENSE

@@ -0,0 +1,8 @@
+MIT License
+Copyright (c) 2019 prise-3d
+
+Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

+ 25 - 0
TODO.md

@@ -0,0 +1,25 @@
+# TODO:
+
+## Data preparation
+
+- Split the images into 2 folders (noisy, not noisy)
+  - By scene
+  - By zone
+  - By metric [scene, zone]
+
+- Transform each image as desired (here, SVD reconstruction using the 110 weakest components)
+- For each image, add its 4 rotated forms (data augmentation)
+
+## Data loading
+- Load all images (association: "path", "label")
+- Set up class balancing
+- Shuffle the whole dataset
+- Split the data (train, validation, test)
+
+## Model design
+- Build a CNN model
+- Use BatchNormalization / Dropout
+
+
+## If not functional
+- Try a transfer learning approach
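
The preparation and loading steps planned in the TODO above could be sketched as follows. This is a minimal illustration, not the repository's own code: the function names are hypothetical, and only the component count (110, from the TODO) and the split idea (balance, shuffle, then train/validation/test) come from this commit.

```python
import random
import numpy as np

def svd_reconstruction(channel, nb_components=110):
    """Rebuild a 2D channel keeping only its `nb_components` weakest singular values."""
    U, s, Vt = np.linalg.svd(channel, full_matrices=False)
    s_low = s.copy()
    s_low[:-nb_components] = 0.0  # zero out the strongest components
    return U @ np.diag(s_low) @ Vt

def four_rotations(img):
    """x4 augmentation: the image under its four right-angle rotations."""
    return [np.rot90(img, k) for k in range(4)]

def balanced_split(samples, train=0.6, validation=0.2, seed=13):
    """Balance the two classes, shuffle, then split into (train, validation, test).

    `samples` is a list of (path, label) pairs with labels 0 (final) / 1 (noisy).
    """
    by_label = {0: [], 1: []}
    for path, label in samples:
        by_label[label].append((path, label))
    n = min(len(by_label[0]), len(by_label[1]))  # class balance: truncate the majority class
    pool = by_label[0][:n] + by_label[1][:n]
    random.Random(seed).shuffle(pool)            # shuffle the whole dataset
    i = int(train * len(pool))
    j = int((train + validation) * len(pool))
    return pool[:i], pool[i:j], pool[j:]         # train, validation, test
```

Note that `s_low[:-nb_components] = 0.0` keeps the *smallest* singular values, matching the TODO's "110 weakest components"; keeping the largest ones instead would be the usual low-rank approximation.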

+ 0 - 244
classification_cnn_keras.py

@@ -1,244 +0,0 @@
-'''This script goes along the blog post
-"Building powerful image classification models using very little data"
-from blog.keras.io.
-```
-data/
-    train/
-        final/
-            final001.png
-            final002.png
-            ...
-        noisy/
-            noisy001.png
-            noisy002.png
-            ...
-    validation/
-        final/
-            final001.png
-            final002.png
-            ...
-        noisy/
-            noisy001.png
-            noisy002.png
-            ...
-```
-'''
-import sys, os, getopt
-import json
-
-from keras.preprocessing.image import ImageDataGenerator
-from keras.models import Sequential
-from keras.layers import Conv2D, MaxPooling2D, AveragePooling2D
-from keras.layers import Activation, Dropout, Flatten, Dense, BatchNormalization
-from keras import backend as K
-from keras.utils import plot_model
-
-from ipfml import tf_model_helper
-
-# local functions import (metrics preprocessing)
-import preprocessing_functions
-
-##########################################
-# Global parameters (with default value) #
-##########################################
-img_width, img_height = 100, 100
-
-train_data_dir = 'data/train'
-validation_data_dir = 'data/validation'
-nb_train_samples = 7200
-nb_validation_samples = 3600
-epochs = 50
-batch_size = 16
-
-input_shape = (3, img_width, img_height)
-
-###########################################
-
-'''
-Method which returns model to train
-@return : DirectoryIterator
-'''
-def generate_model():
-
-    model = Sequential()
-
-    model.add(Conv2D(60, (2, 2), input_shape=input_shape))
-    model.add(Activation('relu'))
-    model.add(MaxPooling2D(pool_size=(2, 2)))
-
-    model.add(Conv2D(40, (2, 2)))
-    model.add(Activation('relu'))
-    model.add(MaxPooling2D(pool_size=(2, 2)))
-
-    model.add(Conv2D(20, (2, 2)))
-    model.add(Activation('relu'))
-    model.add(MaxPooling2D(pool_size=(2, 2)))
-
-    model.add(Flatten())
-
-    model.add(Dense(140))
-    model.add(Activation('relu'))
-    model.add(BatchNormalization())
-    model.add(Dropout(0.3))
-
-    model.add(Dense(120))
-    model.add(Activation('relu'))
-    model.add(BatchNormalization())
-    model.add(Dropout(0.3))
-
-    model.add(Dense(80))
-    model.add(Activation('relu'))
-    model.add(BatchNormalization())
-    model.add(Dropout(0.2))
-
-    model.add(Dense(40))
-    model.add(Activation('relu'))
-    model.add(BatchNormalization())
-    model.add(Dropout(0.2))
-
-    model.add(Dense(20))
-    model.add(Activation('relu'))
-    model.add(BatchNormalization())
-    model.add(Dropout(0.2))
-
-    model.add(Dense(1))
-    model.add(Activation('sigmoid'))
-
-    model.compile(loss='binary_crossentropy',
-                  optimizer='rmsprop',
-                  metrics=['accuracy'])
-
-    return model
-
-'''
-Method which loads train data
-@return : DirectoryIterator
-'''
-def load_train_data():
-    # this is the augmentation configuration we will use for training
-    train_datagen = ImageDataGenerator(
-        rescale=1. / 255,
-        shear_range=0.2,
-        zoom_range=0.2,
-        horizontal_flip=True)
-
-    train_generator = train_datagen.flow_from_directory(
-        train_data_dir,
-        target_size=(img_width, img_height),
-        batch_size=batch_size,
-        class_mode='binary')
-
-    return train_generator
-
-'''
-Method which loads validation data
-@return : DirectoryIterator
-'''
-def load_validation_data():
-
-    # this is the augmentation configuration we will use for testing:
-    # only rescaling
-    test_datagen = ImageDataGenerator(rescale=1. / 255)
-
-    validation_generator = test_datagen.flow_from_directory(
-        validation_data_dir,
-        target_size=(img_width, img_height),
-        batch_size=batch_size,
-        class_mode='binary')
-
-    return validation_generator
-
-def main():
-
-    # update global variable and not local
-    global batch_size
-    global epochs   
-    global img_width
-    global img_height
-    global input_shape
-    global train_data_dir
-    global validation_data_dir
-    global nb_train_samples
-    global nb_validation_samples 
-
-    if len(sys.argv) <= 1:
-        print('Run with default parameters...')
-        print('classification_cnn_keras_svd.py --directory xxxx --output xxxxx --batch_size xx --epochs xx --img xx')
-        sys.exit(2)
-    try:
-        opts, args = getopt.getopt(sys.argv[1:], "ho:d:b:e:i", ["help", "output=", "directory=", "batch_size=", "epochs=", "img="])
-    except getopt.GetoptError:
-        # print help information and exit:
-        print('classification_cnn_keras_svd.py --directory xxxx --output xxxxx --batch_size xx --epochs xx --img xx')
-        sys.exit(2)
-    for o, a in opts:
-        if o == "-h":
-            print('classification_cnn_keras_svd.py --directory xxxx --output xxxxx --batch_size xx --epochs xx --img xx')
-            sys.exit()
-        elif o in ("-o", "--output"):
-            filename = a
-        elif o in ("-b", "--batch_size"):
-            batch_size = int(a)
-            print(batch_size)
-        elif o in ("-e", "--epochs"):
-            epochs = int(a)
-        elif o in ("-d", "--directory"):
-            directory = a
-        elif o in ("-i", "--img"):
-            img_height = int(a)
-            img_width = int(a)
-        else:
-            assert False, "unhandled option"
-
-    # 3 because we have 3 color canals
-    if K.image_data_format() == 'channels_first':
-        input_shape = (3, img_width, img_height)
-    else:
-        input_shape = (img_width, img_height, 3)
-
-    # configuration
-    with open('config.json') as json_data:
-        d = json.load(json_data)
-        train_data_dir = d['train_data_dir']
-        validation_data_dir = d['train_validation_dir']
-
-        try:
-            nb_train_samples = d[str(img_width)]['nb_train_samples']
-            nb_validation_samples = d[str(img_width)]['nb_validation_samples']
-        except:
-             print("--img parameter missing of invalid (--image_width xx --img_height xx)")
-             sys.exit(2)
-    
-    # load of model
-    model = generate_model()
-    model.summary()
-
-    if 'directory' in locals():
-        print('Your model information will be saved into %s...' % directory)
-
-    history = model.fit_generator(
-        load_train_data(),
-        steps_per_epoch=nb_train_samples // batch_size,
-        epochs=epochs,
-        validation_data=load_validation_data(),
-        validation_steps=nb_validation_samples // batch_size)
-
-    # if user needs output files
-    if(filename):
-
-        # update filename by folder
-        if(directory):
-            # create folder if necessary
-            if not os.path.exists(directory):
-                os.makedirs(directory)
-            filename = directory + "/" + filename
-
-        # save plot file history
-        tf_model_helper.save(history, filename)
-
-        plot_model(model, to_file=str(('%s.png' % filename)))
-        model.save_weights(str('%s.h5' % filename))
-
-
-if __name__ == "__main__":
-    main()

+ 0 - 235
classification_cnn_keras_cross_validation.py

@@ -1,235 +0,0 @@
-'''This script goes along the blog post
-"Building powerful image classification models using very little data"
-from blog.keras.io.
-```
-data/
-    train/
-        final/
-            final001.png
-            final002.png
-            ...
-        noisy/
-            noisy001.png
-            noisy002.png
-            ...
-    validation/
-        final/
-            final001.png
-            final002.png
-            ...
-        noisy/
-            noisy001.png
-            noisy002.png
-            ...
-```
-'''
-import sys, os, getopt
-import json
-
-from keras.preprocessing.image import ImageDataGenerator
-from keras.models import Sequential
-from keras.layers import Conv2D, MaxPooling2D, AveragePooling2D
-from keras.layers import Activation, Dropout, Flatten, Dense, BatchNormalization
-from keras import backend as K
-from keras.utils import plot_model
-
-
-# local functions import (metrics preprocessing)
-import preprocessing_functions
-
-##########################################
-# Global parameters (with default value) #
-##########################################
-img_width, img_height = 100, 100
-
-train_data_dir = 'data/train'
-validation_data_dir = 'data/validation'
-nb_train_samples = 7200
-nb_validation_samples = 3600
-epochs = 50
-batch_size = 16
-
-input_shape = (3, img_width, img_height)
-
-###########################################
-
-'''
-Method which returns model to train
-@return : DirectoryIterator
-'''
-def generate_model():
-    # create your model using this function
-    model = Sequential()
-    model.add(Conv2D(60, (2, 2), input_shape=input_shape, dilation_rate=1))
-    model.add(Activation('relu'))
-    model.add(MaxPooling2D(pool_size=(2, 2)))
-
-    model.add(Conv2D(40, (2, 2), dilation_rate=1))
-    model.add(Activation('relu'))
-    model.add(MaxPooling2D(pool_size=(2, 2)))
-
-    model.add(Conv2D(20, (2, 2), dilation_rate=1))
-    model.add(Activation('relu'))
-    model.add(MaxPooling2D(pool_size=(2, 2)))
-
-    model.add(Flatten())
-
-    model.add(Dense(140))
-    model.add(Activation('relu'))
-    model.add(BatchNormalization())
-    model.add(Dropout(0.3))
-
-    model.add(Dense(120))
-    model.add(Activation('relu'))
-    model.add(BatchNormalization())
-    model.add(Dropout(0.3))
-
-    model.add(Dense(80))
-    model.add(Activation('relu'))
-    model.add(BatchNormalization())
-    model.add(Dropout(0.2))
-
-    model.add(Dense(40))
-    model.add(Activation('relu'))
-    model.add(BatchNormalization())
-    model.add(Dropout(0.2))
-
-    model.add(Dense(20))
-    model.add(Activation('relu'))
-    model.add(BatchNormalization())
-    model.add(Dropout(0.2))
-
-    model.add(Dense(1))
-    model.add(Activation('sigmoid'))
-
-    model.compile(loss='binary_crossentropy',
-                  optimizer='adam',
-                  metrics=['accuracy'])
-
-    return model
-
-def load_data():
-    # load your data using this function
-    # this is the augmentation configuration we will use for training
-    train_datagen = ImageDataGenerator(
-        rescale=1. / 255,
-        shear_range=0.2,
-        zoom_range=0.2,
-        horizontal_flip=True)
-
-    train_generator = train_datagen.flow_from_directory(
-        train_data_dir,
-        target_size=(img_width, img_height),
-        batch_size=batch_size,
-        class_mode='binary')
-
-    return train_generator
-
-def train_and_evaluate_model(model, data_train, data_test):
-
-    return model.fit_generator(
-        data_train,
-        steps_per_epoch=nb_train_samples // batch_size,
-        epochs=epochs,
-        shuffle=True,
-        validation_data=data_test,
-        validation_steps=nb_validation_samples // batch_size)
-
-def main():
-
-    # update global variable and not local
-    global batch_size
-    global epochs
-    global img_width
-    global img_height
-    global input_shape
-    global train_data_dir
-    global validation_data_dir
-    global nb_train_samples
-    global nb_validation_samples
-
-    if len(sys.argv) <= 1:
-        print('Run with default parameters...')
-        print('classification_cnn_keras_svd.py --directory xxxx --output xxxxx --batch_size xx --epochs xx --img xx')
-        sys.exit(2)
-    try:
-        opts, args = getopt.getopt(sys.argv[1:], "ho:d:b:e:i", ["help", "output=", "directory=", "batch_size=", "epochs=", "img="])
-    except getopt.GetoptError:
-        # print help information and exit:
-        print('classification_cnn_keras_svd.py --directory xxxx --output xxxxx --batch_size xx --epochs xx --img xx')
-        sys.exit(2)
-    for o, a in opts:
-        if o == "-h":
-            print('classification_cnn_keras_svd.py --directory xxxx --output xxxxx --batch_size xx --epochs xx --img xx')
-            sys.exit()
-        elif o in ("-o", "--output"):
-            filename = a
-        elif o in ("-b", "--batch_size"):
-            batch_size = int(a)
-        elif o in ("-e", "--epochs"):
-            epochs = int(a)
-        elif o in ("-d", "--directory"):
-            directory = a
-        elif o in ("-i", "--img"):
-            img_height = int(a)
-            img_width = int(a)
-        else:
-            assert False, "unhandled option"
-
-    # 3 because we have 3 color canals
-    if K.image_data_format() == 'channels_first':
-        input_shape = (3, img_width, img_height)
-    else:
-        input_shape = (img_width, img_height, 3)
-
-    # configuration
-    with open('config.json') as json_data:
-        d = json.load(json_data)
-        train_data_dir = d['train_data_dir']
-        validation_data_dir = d['train_validation_dir']
-
-        try:
-            nb_train_samples = d[str(img_width)]['nb_train_samples']
-            nb_validation_samples = d[str(img_width)]['nb_validation_samples']
-        except:
-             print("--img parameter missing of invalid (--image_width xx --img_height xx)")
-             sys.exit(2)
-
-
-    # load of model
-    model = generate_model()
-    model.summary()
-
-    data_generator = ImageDataGenerator(rescale=1./255, validation_split=0.33)
-
-    # check if possible to not do this thing each time
-    train_generator = data_generator.flow_from_directory(train_data_dir, target_size=(img_width, img_height), shuffle=True, seed=13,
-                                                         class_mode='binary', batch_size=batch_size, subset="training")
-
-    validation_generator = data_generator.flow_from_directory(train_data_dir, target_size=(img_width, img_height), shuffle=True, seed=13,
-                                                         class_mode='binary', batch_size=batch_size, subset="validation")
-
-    # now run model
-    history = train_and_evaluate_model(model, train_generator, validation_generator)
-
-    print("directory %s " % directory)
-    if(directory):
-        print('Your model information will be saved into %s...' % directory)
-    # if user needs output files
-    if(filename):
-
-        # update filename by folder
-        if(directory):
-            # create folder if necessary
-            if not os.path.exists(directory):
-                os.makedirs(directory)
-            filename = directory + "/" + filename
-
-        # save plot file history
-        # tf_model_helper.save(history, filename)
-
-        plot_model(model, to_file=str(('%s.png' % filename)))
-        model.save_weights(str('%s.h5' % filename))
-
-if __name__ == "__main__":
-    main()

+ 0 - 254
classification_cnn_keras_svd.py

@@ -1,254 +0,0 @@
-'''This script goes along the blog post
-"Building powerful image classification models using very little data"
-from blog.keras.io.
-```
-data/
-    train/
-        final/
-            final001.png
-            final002.png
-            ...
-        noisy/
-            noisy001.png
-            noisy002.png
-            ...
-    validation/
-        final/
-            final001.png
-            final002.png
-            ...
-        noisy/
-            noisy001.png
-            noisy002.png
-            ...
-```
-'''
-import sys, os, getopt
-import json
-
-from keras.preprocessing.image import ImageDataGenerator
-from keras.models import Sequential
-from keras.layers import Conv2D, MaxPooling2D, AveragePooling2D
-from keras.layers import Activation, Dropout, Flatten, Dense, BatchNormalization
-from keras.optimizers import Adam
-from keras.regularizers import l2
-from keras import backend as K
-from keras.utils import plot_model
-
-import tensorflow as tf
-import numpy as np
-
-from ipfml import tf_model_helper
-from ipfml import metrics
-
-# local functions import
-import preprocessing_functions
-
-##########################################
-# Global parameters (with default value) #
-##########################################
-img_width, img_height = 100, 100
-
-train_data_dir = 'data/train'
-validation_data_dir = 'data/validation'
-nb_train_samples = 7200
-nb_validation_samples = 3600
-epochs = 50
-batch_size = 16
-
-input_shape = (3, img_width, img_height)
-
-###########################################
-
-'''
-Method which returns model to train
-@return : DirectoryIterator
-'''
-def generate_model():
-
-    model = Sequential()
-
-    model.add(Conv2D(60, (2, 1), input_shape=input_shape))
-    model.add(Activation('relu'))
-    model.add(BatchNormalization())
-    model.add(MaxPooling2D(pool_size=(2, 1)))
-
-    model.add(Conv2D(40, (2, 1)))
-    model.add(Activation('relu'))
-    model.add(MaxPooling2D(pool_size=(2, 1)))
-
-    model.add(Conv2D(30, (2, 1)))
-    model.add(Activation('relu'))
-    model.add(MaxPooling2D(pool_size=(2, 1)))
-
-    model.add(Flatten())
-    model.add(Dense(150, kernel_regularizer=l2(0.01)))
-    model.add(BatchNormalization())
-    model.add(Activation('relu'))
-    model.add(Dropout(0.2))
-
-    model.add(Dense(120, kernel_regularizer=l2(0.01)))
-    model.add(BatchNormalization())
-    model.add(Activation('relu'))
-    model.add(Dropout(0.2))
-
-    model.add(Dense(80, kernel_regularizer=l2(0.01)))
-    model.add(BatchNormalization())
-    model.add(Activation('relu'))
-    model.add(Dropout(0.2))
-
-    model.add(Dense(40, kernel_regularizer=l2(0.01)))
-    model.add(BatchNormalization())
-    model.add(Activation('relu'))
-    model.add(Dropout(0.2))
-
-    model.add(Dense(20, kernel_regularizer=l2(0.01)))
-    model.add(BatchNormalization())
-    model.add(Activation('relu'))
-    model.add(Dropout(0.1))
-
-    model.add(Dense(1))
-    model.add(Activation('sigmoid'))
-
-    model.compile(loss='binary_crossentropy',
-                  optimizer='rmsprop',
-                  metrics=['accuracy'])
-
-    return model
-
-'''
-Method which loads train data
-@return : DirectoryIterator
-'''
-def load_train_data():
-
-    # this is the augmentation configuration we will use for training
-    train_datagen = ImageDataGenerator(
-        #rescale=1. / 255,
-        #shear_range=0.2,
-        #zoom_range=0.2,
-        #horizontal_flip=True,
-        preprocessing_function=preprocessing_functions.get_s_model_data)
-
-    train_generator = train_datagen.flow_from_directory(
-        train_data_dir,
-        target_size=(img_width, img_height),
-        batch_size=batch_size,
-        class_mode='binary')
-
-    return train_generator
-
-'''
-Method which loads validation data
-@return : DirectoryIterator
-'''
-def load_validation_data():
-
-    # this is the augmentation configuration we will use for testing:
-    # only rescaling
-    test_datagen = ImageDataGenerator(
-        #rescale=1. / 255,
-        preprocessing_function=preprocessing_functions.get_s_model_data)
-
-    validation_generator = test_datagen.flow_from_directory(
-        validation_data_dir,
-        target_size=(img_width, img_height),
-        batch_size=batch_size,
-        class_mode='binary')
-
-    return validation_generator
-
-def main():
-
-    # update global variable and not local
-    global batch_size
-    global epochs   
-    global img_width
-    global img_height
-    global input_shape
-    global train_data_dir
-    global validation_data_dir
-    global nb_train_samples
-    global nb_validation_samples 
-
-    if len(sys.argv) <= 1:
-        print('Run with default parameters...')
-        print('classification_cnn_keras_svd.py --directory xxxx --output xxxxx --batch_size xx --epochs xx --img xx')
-        sys.exit(2)
-    try:
-        opts, args = getopt.getopt(sys.argv[1:], "ho:d:b:e:i", ["help", "output=", "directory=", "batch_size=", "epochs=", "img="])
-    except getopt.GetoptError:
-        # print help information and exit:
-        print('classification_cnn_keras_svd.py --directory xxxx --output xxxxx --batch_size xx --epochs xx --img xx')
-        sys.exit(2)
-    for o, a in opts:
-        if o == "-h":
-            print('classification_cnn_keras_svd.py --directory xxxx --output xxxxx --batch_size xx --epochs xx --img xx')
-            sys.exit()
-        elif o in ("-o", "--output"):
-            filename = a
-        elif o in ("-b", "--batch_size"):
-            batch_size = int(a)
-        elif o in ("-e", "--epochs"):
-            epochs = int(a)
-        elif o in ("-d", "--directory"):
-            directory = a
-        elif o in ("-i", "--img"):
-            img_height = int(a)
-            img_width = int(a)
-        else:
-            assert False, "unhandled option"
-
-    # 3 because we have 3 color canals
-    if K.image_data_format() == 'channels_first':
-        input_shape = (3, img_width, img_height)
-    else:
-        input_shape = (img_width, img_height, 3)
-
-    # configuration
-    with open('config.json') as json_data:
-        d = json.load(json_data)
-        train_data_dir = d['train_data_dir']
-        validation_data_dir = d['train_validation_dir']
-
-        try:
-            nb_train_samples = d[str(img_width)]['nb_train_samples']
-            nb_validation_samples = d[str(img_width)]['nb_validation_samples']
-        except:
-             print("--img parameter missing of invalid (--image_width xx --img_height xx)")
-             sys.exit(2)
-
-
-    # load of model
-    model = generate_model()
-    model.summary()
-
-    if(directory):
-        print('Your model information will be saved into %s...' % directory)
-
-    history = model.fit_generator(
-        load_train_data(),
-        steps_per_epoch=nb_train_samples // batch_size,
-        epochs=epochs,
-        validation_data=load_validation_data(),
-        validation_steps=nb_validation_samples // batch_size)
-
-    # if user needs output files
-    if(filename):
-
-        # update filename by folder
-        if(directory):
-            # create folder if necessary
-            if not os.path.exists(directory):
-                os.makedirs(directory)
-            filename = directory + "/" + filename
-
-        # save plot file history
-        tf_model_helper.save(history, filename)
-
-        plot_model(model, to_file=str(('%s.png' % filename)), show_shapes=True)
-        model.save_weights(str('%s.h5' % filename))
-
-
-if __name__ == "__main__":
-    main()

+ 0 - 275
classification_cnn_keras_svd_img.py

@@ -1,275 +0,0 @@
-'''This script goes along the blog post
-"Building powerful image classification models using very little data"
-from blog.keras.io.
-```
-data/
-    train/
-        final/
-            final001.png
-            final002.png
-            ...
-        noisy/
-            noisy001.png
-            noisy002.png
-            ...
-    validation/
-        final/
-            final001.png
-            final002.png
-            ...
-        noisy/
-            noisy001.png
-            noisy002.png
-            ...
-```
-'''
-import sys, os, getopt
-import json
-
-from keras.preprocessing.image import ImageDataGenerator
-from keras.models import Sequential
-from keras.layers import Conv2D, MaxPooling2D, AveragePooling2D, Cropping2D
-from keras.layers import Activation, Dropout, Flatten, Dense, BatchNormalization
-from keras.optimizers import Adam
-from keras.regularizers import l2
-from keras import backend as K
-from keras.utils import plot_model
-
-import tensorflow as tf
-import numpy as np
-
-import matplotlib.pyplot as plt
-
-# preprocessing of images
-from path import Path
-from PIL import Image
-import shutil
-import time
-
-# local functions import (metrics preprocessing)
-import preprocessing_functions
-
-##########################################
-# Global parameters (with default value) #
-#### ######################################
-img_width, img_height = 100, 100
-
-train_data_dir = 'data_svd_**img_size**/train'
-validation_data_dir = 'data_svd_**img_size**/validation'
-nb_train_samples = 7200
-nb_validation_samples = 3600
-epochs = 50
-batch_size = 16
-
-input_shape = (3, img_width, img_height)
-
-###########################################
-
-def init_directory(img_size, generate_data):
-
-    img_size_str = str(img_size)
-
-    svd_data_folder = str('data_svd_' + img_size_str)
-
-    if os.path.exists(svd_data_folder) and 'y' in generate_data:
-        print("Removing all previous data...")
-
-        shutil.rmtree(svd_data_folder)
-
-    if not os.path.exists(svd_data_folder):
-        print("Creating new data... Just take coffee... Or two...")
-        os.makedirs(str(train_data_dir.replace('**img_size**', img_size_str) + '/final'))
-        os.makedirs(str(train_data_dir.replace('**img_size**', img_size_str) + '/noisy'))
-
-        os.makedirs(str(validation_data_dir.replace('**img_size**', img_size_str) + '/final'))
-        os.makedirs(str(validation_data_dir.replace('**img_size**', img_size_str) + '/noisy'))
-
-        for f in Path('./data').walkfiles():
-            if 'png' in f:
-                img = Image.open(f)
-                new_img = preprocessing_functions.get_s_model_data_img(img)
-                new_img_path = f.replace('./data', str('./' + svd_data_folder))
-                new_img.save(new_img_path)
-                print(new_img_path)
-
-
-'''
-Method which returns model to train
-@return : DirectoryIterator
-'''
-def generate_model():
-
-    model = Sequential()
-
-    model.add(Cropping2D(cropping=((20, 20), (20, 20)), input_shape=input_shape))
-
-    model.add(Conv2D(50, (2, 2)))
-    model.add(Activation('relu'))
-    model.add(AveragePooling2D(pool_size=(2, 2)))
-
-    model.add(Flatten())
-
-    model.add(Dense(100, kernel_regularizer=l2(0.01)))
-    model.add(Activation('relu'))
-    model.add(BatchNormalization())
-    model.add(Dropout(0.2))
-
-    model.add(Dense(100, kernel_regularizer=l2(0.01)))
-    model.add(Activation('relu'))
-    model.add(BatchNormalization())
-    model.add(Dropout(0.5))
-
-    model.add(Dense(1))
-    model.add(Activation('sigmoid'))
-
-    model.compile(loss='binary_crossentropy',
-                  optimizer='rmsprop',
-                  metrics=['accuracy'])
-
-    return model
-
-'''
-Method which loads train data
-@return : DirectoryIterator
-'''
-def load_train_data():
-
-    # this is the augmentation configuration we will use for training
-    train_datagen = ImageDataGenerator(
-        rescale=1. / 255,
-        #shear_range=0.2,
-        #zoom_range=0.2,
-        #horizontal_flip=True,
-        #preprocessing_function=preprocessing_functions.get_s_model_data_img
-        )
-
-    train_generator = train_datagen.flow_from_directory(
-        train_data_dir,
-        target_size=(img_width, img_height),
-        batch_size=batch_size,
-        class_mode='binary')
-
-    return train_generator
-
-'''
-Method which loads validation data
-@return : DirectoryIterator
-'''
-def load_validation_data():
-
-    # this is the augmentation configuration we will use for testing:
-    # only rescaling
-    test_datagen = ImageDataGenerator(
-        rescale=1. / 255,
-        #preprocessing_function=preprocessing_functions.get_s_model_data_img
-        )
-
-    validation_generator = test_datagen.flow_from_directory(
-        validation_data_dir,
-        target_size=(img_width, img_height),
-        batch_size=batch_size,
-        class_mode='binary')
-
-    return validation_generator
-
-def main():
-
-    # update global variable and not local
-    global batch_size
-    global epochs
-    global input_shape
-    global train_data_dir
-    global validation_data_dir
-    global nb_train_samples
-    global nb_validation_samples
-
-    if len(sys.argv) <= 1:
-        print('Run with default parameters...')
-        print('classification_cnn_keras_svd.py --directory xxxx --output xxxxx --batch_size xx --epochs xx --img xx --generate (y/n)')
-        sys.exit(2)
-    try:
-        opts, args = getopt.getopt(sys.argv[1:], "ho:d:b:e:i:g", ["help", "output=", "directory=", "batch_size=", "epochs=", "img=", "generate="])
-    except getopt.GetoptError:
-        # print help information and exit:
-        print('classification_cnn_keras_svd.py --directory xxxx --output xxxxx --batch_size xx --epochs xx --img xx --generate (y/n)')
-        sys.exit(2)
-    for o, a in opts:
-        if o == "-h":
-            print('classification_cnn_keras_svd.py --directory xxxx --output xxxxx --batch_size xx --epochs xx --img xx --generate (y/n)')
-            sys.exit()
-        elif o in ("-o", "--output"):
-            filename = a
-        elif o in ("-b", "--batch_size"):
-            batch_size = int(a)
-        elif o in ("-e", "--epochs"):
-            epochs = int(a)
-        elif o in ("-d", "--directory"):
-            directory = a
-        elif o in ("-i", "--img"):
-            image_size = int(a)
-        elif o in ("-g", "--generate"):
-            generate_data = a
-        else:
-            assert False, "unhandled option"
-
-    # 3 because we have 3 color canals
-    if K.image_data_format() == 'channels_first':
-        input_shape = (3, img_width, img_height)
-    else:
-        input_shape = (img_width, img_height, 3)
-
-    img_str_size = str(image_size)
-    train_data_dir = str(train_data_dir.replace('**img_size**', img_str_size))
-    validation_data_dir = str(validation_data_dir.replace('**img_size**', img_str_size))
-
-    # configuration
-    with open('config.json') as json_data:
-        d = json.load(json_data)
-
-        try:
-            nb_train_samples = d[str(image_size)]['nb_train_samples']
-            nb_validation_samples = d[str(image_size)]['nb_validation_samples']
-        except:
-             print("--img parameter missing of invalid (--image_width xx --img_height xx)")
-             sys.exit(2)
-
-
-    init_directory(image_size, generate_data)
-    # load of model
-    model = generate_model()
-    model.summary()
-
-    if(directory):
-        print('Your model information will be saved into %s...' % directory)
-
-    history = model.fit_generator(
-        load_train_data(),
-        steps_per_epoch=nb_train_samples // batch_size,
-        epochs=epochs,
-        validation_data=load_validation_data(),
-        validation_steps=nb_validation_samples // batch_size)
-
-    # if user needs output files
-    if(filename):
-
-        # update filename by folder
-        if(directory):
-            # create folder if necessary
-            if not os.path.exists(directory):
-                os.makedirs(directory)
-            filename = directory + "/" + filename
-
-        fig_size = plt.rcParams["figure.figsize"]
-        fig_size[0] = 9
-        fig_size[1] = 9
-        plt.rcParams["figure.figsize"] = fig_size
-
-        # save plot file history
-        plot_info.save(history, filename)
-
-        plot_model(model, to_file=str(('%s.png' % filename)), show_shapes=True)
-        model.save_weights(str('%s.h5' % filename))
-
-
-if __name__ == "__main__":
-    main()

+ 0 - 29
config.json

@@ -1,29 +0,0 @@
-{
-    "train_data_dir": "data/train",
-    "train_validation_dir": "data/validation",
-
-    "20":{    
-        "nb_train_samples": 115200,
-        "nb_validation_samples": 57600
-    },
-
-    "40":{    
-        "nb_train_samples": 57600,
-        "nb_validation_samples": 28800
-    },
-
-    "60":{    
-        "nb_train_samples": 12800,
-        "nb_validation_samples": 6400
-    },
-
-    "80":{    
-        "nb_train_samples": 7200,
-        "nb_validation_samples": 3600
-    },
-
-    "100":{    
-        "nb_train_samples": 4608,
-        "nb_validation_samples": 2304
-    }
-}

+ 150 - 55
generate_dataset.py

@@ -1,81 +1,176 @@
 #!/usr/bin/env python3
 # -*- coding: utf-8 -*-
 """
-Created on Fri Sep 14 21:02:42 2018
+Created on Wed Jun 19 11:47:42 2019
 
 @author: jbuisine
 """
 
-from __future__ import print_function
-import glob, image_slicer
-import sys, os, getopt
+import sys, os, argparse
+import numpy as np
+import random
+import time
+import json
+
 from PIL import Image
-import shutil
+from ipfml import processing, metrics, utils
+from skimage import color
+
+from modules.utils import config as cfg
+from modules.utils import data as dt
+
+from preprocessing_functions import svd_reconstruction
+
+# getting configuration information
+config_filename         = cfg.config_filename
+zone_folder             = cfg.zone_folder
+learned_folder          = cfg.learned_zones_folder
+min_max_filename        = cfg.min_max_filename_extension
+
+# define all scenes values
+scenes_list             = cfg.scenes_names
+scenes_indexes          = cfg.scenes_indices
+choices                 = cfg.normalization_choices
+path                    = cfg.dataset_path
+zones                   = cfg.zones_indices
+seuil_expe_filename     = cfg.seuil_expe_filename
+
+metric_choices          = cfg.metric_choices_labels
+output_data_folder      = cfg.output_data_folder
+
+generic_output_file_svd = '_random.csv'
+
+def generate_data_model(_scenes_list, _filename, _interval,  _metric, _scenes, _nb_zones = 4, _random=0):
+
+    output_train_filename = _filename + ".train"
+    output_test_filename = _filename + ".test"
+
+    if not '/' in output_train_filename:
+        raise Exception("Please select filename with directory path to save data. Example : data/dataset")
+
+    # create path if not exists
+    if not os.path.exists(output_data_folder):
+        os.makedirs(output_data_folder)
+
+    train_file_data = []
+    test_file_data  = []
+
+    scenes = os.listdir(path)
+    # remove min max file from scenes folder
+    scenes = [s for s in scenes if min_max_filename not in s]
+    begin, end = _interval
+
+    # go ahead each scenes
+    for id_scene, folder_scene in enumerate(_scenes_list):
+
+        scene_path = os.path.join(path, folder_scene)
+
+        zones_indices = zones
+
+        # shuffle list of zones (=> randomly choose zones)
+        # only in random mode
+        if _random:
+            random.shuffle(zones_indices)
 
-# show to create own dataset https://gist.github.com/fchollet/0830affa1f7f19fd47b06d4cf89ed44d
+        # store zones learned
+        learned_zones_indices = zones_indices[:_nb_zones]
 
-NUMBER_SUB_IMAGES = 100
+        # write into file
+        folder_learned_path = os.path.join(learned_folder, _filename.split('/')[1])
 
-def init_directory():
+        if not os.path.exists(folder_learned_path):
+            os.makedirs(folder_learned_path)
 
-    if os.path.exists('data'):
-        print("Removing all previous data...")
+        file_learned_path = os.path.join(folder_learned_path, folder_scene + '.csv')
 
-        shutil.rmtree('data')
+        with open(file_learned_path, 'w') as f:
+            for i in learned_zones_indices:
+                f.write(str(i) + ';')
 
-    if not os.path.exists('data'):
-        print("Creating new data...")
-        os.makedirs('data/train/final')
-        os.makedirs('data/train/noisy')
+        for id_zone, index_folder in enumerate(zones_indices):
 
-        os.makedirs('data/validation/final')
-        os.makedirs('data/validation/noisy')
+            index_str = str(index_folder)
+            if len(index_str) < 2:
+                index_str = "0" + index_str
+            
+            current_zone_folder = "zone" + index_str
+            zone_path = os.path.join(scene_path, current_zone_folder)
 
-def create_images(folder, output_folder):
-    images_path = glob.glob(folder + "/*.png")
+            # custom path for metric
+            metric_path = os.path.join(zone_path, _metric)
 
-    for img in images_path:
-        image_name = img.replace(folder, '').replace('/', '')
-        tiles = image_slicer.slice(img, NUMBER_SUB_IMAGES, save = False)
-        image_slicer.save_tiles(tiles, directory=output_folder, prefix='part_'+image_name)
+            # custom path for interval of reconstruction and metric
+            metric_interval_path = os.path.join(metric_path, str(begin) + "_" + str(end))
 
-def generate_dataset():
-    create_images('img_train/final', 'data/train/final')
-    create_images('img_train/noisy', 'data/train/noisy')
-    create_images('img_validation/final', 'data/validation/final')
-    create_images('img_validation/noisy', 'data/validation/noisy')
+            for label in os.listdir(metric_interval_path):
+                label_path = os.path.join(metric_interval_path, label)
+
+                images = os.listdir(label_path)
+
+                for img in images:
+                    img_path = os.path.join(label_path, img)
+
+                    line = label + ';' + img_path + '\n'
+
+                    if id_zone < _nb_zones and folder_scene in _scenes:
+                        train_file_data.append(line)
+                    else:
+                        test_file_data.append(line)
+
+    train_file = open(output_train_filename, 'w')
+    test_file = open(output_test_filename, 'w')
+
+    random.shuffle(train_file_data)
+    random.shuffle(test_file_data)
+
+    for line in train_file_data:
+        train_file.write(line)
+
+    for line in test_file_data:
+        test_file.write(line)
+
+    train_file.close()
+    test_file.close()
 
 def main():
 
-    global NUMBER_SUB_IMAGES
-
-    if len(sys.argv) <= 1:
-        print('Please specify nb sub image per image parameter (use -h if you want to know values)...')
-        print('generate_dataset.py --nb xxxx')
-        sys.exit(2)
-    try:
-        opts, args = getopt.getopt(sys.argv[1:], "h:n", ["help", "nb="])
-    except getopt.GetoptError:
-        # print help information and exit:
-        print('generate_dataset.py --nb xxxx')
-        sys.exit(2)
-    for o, a in opts:
-
-        if o == "--help":
-            print('generate_dataset.py --nb xxxx')
-            print('20x20 : 1600')
-            print('40x40 : 400')
-            print('60x60 : 178 (approximately)')
-            print('80x80 : 100')
-            print('100x100 : 64')
-            sys.exit()
-        elif o == '--nb':
-            NUMBER_SUB_IMAGES = int(a)
-
-    init_directory()
+    parser = argparse.ArgumentParser(description="Compute a specific dataset for a model using a given metric")
+
+    parser.add_argument('--output', type=str, help='output file name desired (.train and .test)')
+    parser.add_argument('--metric', type=str, 
+                                    help="metric choice in order to compute data (use 'all' if all metrics are needed)", 
+                                    choices=metric_choices,
+                                    required=True)
+    parser.add_argument('--interval', type=str, help="interval choice if needed by the compression method", default='100, 200')
+    parser.add_argument('--scenes', type=str, help='List of scenes to use for training data')
+    parser.add_argument('--nb_zones', type=int, help='Number of zones to use for training data set', choices=list(range(1, 17)))
+    parser.add_argument('--renderer', type=str, help='Renderer choice in order to limit scenes used', choices=cfg.renderer_choices, default='all')
+    parser.add_argument('--random', type=int, help='Data will be randomly filled or not', choices=[0, 1])
+
+    args = parser.parse_args()
+
+    p_filename = args.output
+    p_metric   = args.metric
+    p_interval = list(map(int, args.interval.split(',')))
+    p_scenes   = args.scenes.split(',')
+    p_nb_zones = args.nb_zones
+    p_renderer = args.renderer
+    p_random   = args.random
+
+    # list all possible choices of renderer
+    scenes_list = dt.get_renderer_scenes_names(p_renderer)
+    scenes_indices = dt.get_renderer_scenes_indices(p_renderer)
+
+    # getting scenes from indexes user selection
+    scenes_selected = []
+
+    for scene_id in p_scenes:
+        index = scenes_indices.index(scene_id.strip())
+        scenes_selected.append(scenes_list[index])
 
     # create database using img folder (generate first time only)
-    generate_dataset()
+    generate_data_model(scenes_list, p_filename, p_interval,  p_metric, scenes_selected, p_nb_zones, p_random)
+
 
 if __name__== "__main__":
     main()
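The train/test split in `generate_data_model` above rests on one small step: optionally shuffle the 16 zone indices, keep the first `--nb_zones` for training, and hold the rest out for test. A minimal sketch of that selection (the helper name and `seed` parameter are illustrative, not part of this commit):

```python
import random

def select_learned_zones(zones_indices, nb_zones, shuffle=False, seed=None):
    """Return (learned, held_out) zone indices.

    Sketch of the selection done in generate_data_model: optionally
    shuffle the zone indices, then keep the first nb_zones for training.
    """
    indices = list(zones_indices)
    if shuffle:
        # seeded Random instance keeps the split reproducible
        random.Random(seed).shuffle(indices)
    return indices[:nb_zones], indices[nb_zones:]
```

With `--random 0` the same leading zones are always learned; with `--random 1` each scene gets a different subset, which is why the script writes the chosen indices to a per-scene `.csv` under the learned-zones folder.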

+ 196 - 0
generate_reconstructed_data.py

@@ -0,0 +1,196 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""
+Created on Wed Jun 19 11:47:42 2019
+
+@author: jbuisine
+"""
+
+import sys, os, argparse
+import numpy as np
+import random
+import time
+import json
+
+from PIL import Image
+from ipfml import processing, metrics, utils
+from skimage import color
+
+from modules.utils import config as cfg
+from preprocessing_functions import svd_reconstruction
+
+# getting configuration information
+config_filename         = cfg.config_filename
+zone_folder             = cfg.zone_folder
+min_max_filename        = cfg.min_max_filename_extension
+
+# define all scenes values
+scenes_list             = cfg.scenes_names
+scenes_indexes          = cfg.scenes_indices
+choices                 = cfg.normalization_choices
+path                    = cfg.dataset_path
+zones                   = cfg.zones_indices
+seuil_expe_filename     = cfg.seuil_expe_filename
+
+metric_choices          = cfg.metric_choices_labels
+output_data_folder      = cfg.output_data_folder
+
+generic_output_file_svd = '_random.csv'
+
+def generate_data_svd(data_type, interval):
+    """
+    @brief Method which generates all .csv files from scenes
+    @param data_type,  metric choice
+    @param interval, interval choice used by reconstructed method
+    @return nothing
+    """
+
+    scenes = os.listdir(path)
+    # remove min max file from scenes folder
+    scenes = [s for s in scenes if min_max_filename not in s]
+    begin, end = interval
+
+    # go ahead each scenes
+    for id_scene, folder_scene in enumerate(scenes):
+
+        print(folder_scene)
+        scene_path = os.path.join(path, folder_scene)
+
+        config_file_path = os.path.join(scene_path, config_filename)
+
+        with open(config_file_path, "r") as config_file:
+            last_image_name = config_file.readline().strip()
+            prefix_image_name = config_file.readline().strip()
+            start_index_image = config_file.readline().strip()
+            end_index_image = config_file.readline().strip()
+            step_counter = int(config_file.readline().strip())
+
+        # construct each zones folder name
+        zones_folder = []
+        metrics_folder = []
+        zones_threshold = []
+
+        # get zones list info
+        for index in zones:
+            index_str = str(index)
+            if len(index_str) < 2:
+                index_str = "0" + index_str
+
+            current_zone = "zone"+index_str
+            zones_folder.append(current_zone)
+            zone_path = os.path.join(scene_path, current_zone)
+
+            with open(os.path.join(zone_path, cfg.seuil_expe_filename)) as f:
+                zones_threshold.append(int(f.readline()))
+
+            # custom path for metric
+            metric_path = os.path.join(zone_path, data_type)
+
+            if not os.path.exists(metric_path):
+                os.makedirs(metric_path)
+
+            # custom path for interval of reconstruction and metric
+            metric_interval_path = os.path.join(metric_path, str(begin) + "_" + str(end))
+            metrics_folder.append(metric_interval_path)
+
+            if not os.path.exists(metric_interval_path):
+                os.makedirs(metric_interval_path)
+
+            # create for each zone the labels folder
+            labels = [cfg.not_noisy_folder, cfg.noisy_folder]
+
+            for label in labels:
+                label_folder = os.path.join(metric_interval_path, label)
+
+                if not os.path.exists(label_folder):
+                    os.makedirs(label_folder)
+
+        
+
+        current_counter_index = int(start_index_image)
+        end_counter_index = int(end_index_image)
+
+        # for each images
+        while(current_counter_index <= end_counter_index):
+
+            current_counter_index_str = str(current_counter_index)
+
+            while len(start_index_image) > len(current_counter_index_str):
+                current_counter_index_str = "0" + current_counter_index_str
+
+            img_path = os.path.join(scene_path, prefix_image_name + current_counter_index_str + ".png")
+
+            current_img = Image.open(img_path)
+            img_blocks = processing.divide_in_blocks(current_img, (200, 200))
+
+            for id_block, block in enumerate(img_blocks):
+
+                ##########################
+                # Image computation part #
+                ##########################
+                output_block = svd_reconstruction(block, [begin, end])
+                output_block = np.array(output_block, 'uint8')
+                
+                # current output image
+                output_block_img = Image.fromarray(output_block)
+
+                label_path = metrics_folder[id_block]
+
+                # get label folder for block
+                if current_counter_index > zones_threshold[id_block]:
+                    label_path = os.path.join(label_path, cfg.not_noisy_folder)
+                else:
+                    label_path = os.path.join(label_path, cfg.noisy_folder)
+
+                rotations = [0, 90, 180, 270]
+
+                # rotate image to increase dataset size
+                for rotation in rotations:
+                    rotated_output_img = output_block_img.rotate(rotation)
+
+                    output_reconstructed_filename = img_path.split('/')[-1].replace('.png', '') + '_' + zones_folder[id_block]
+                    output_reconstructed_filename = output_reconstructed_filename + '_' + str(rotation) + '.png'
+                    output_reconstructed_path = os.path.join(label_path, output_reconstructed_filename)
+
+                    rotated_output_img.save(output_reconstructed_path)
+
+
+            start_index_image_int = int(start_index_image)
+            print(data_type + "_" + folder_scene + " - " + "{0:.2f}".format((current_counter_index - start_index_image_int) / (end_counter_index - start_index_image_int)* 100.) + "%")
+            sys.stdout.write("\033[F")
+
+            current_counter_index += step_counter
+
+
+        print('\n')
+
+    print("%s_%s : end of data generation\n" % (data_type, interval))
+
+
+def main():
+
+    parser = argparse.ArgumentParser(description="Compute and prepare metric data for all scenes, using a specific interval if necessary")
+
+    parser.add_argument('--metric', type=str, 
+                                    help="metric choice in order to compute data (use 'all' if all metrics are needed)", 
+                                    choices=metric_choices,
+                                    required=True)
+
+    parser.add_argument('--interval', type=str,
+                                    help="interval choice if needed by the compression method",
+                                    default='100, 200')
+
+    args = parser.parse_args()
+
+    p_metric   = args.metric
+    p_interval = list(map(int, args.interval.split(',')))
+
+    # generate all or specific metric data
+    if p_metric == 'all':
+        for m in metric_choices:
+            generate_data_svd(m, p_interval)
+    else:
+        generate_data_svd(p_metric, p_interval)
+
+if __name__== "__main__":
+    main()
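`generate_reconstructed_data.py` leans on `svd_reconstruction` from `preprocessing_functions`, which this commit does not show. A minimal sketch of what such a reconstruction plausibly does, assuming it keeps only the singular values inside the requested `[begin, end)` interval (the function name, dtype handling, and RGB collapse here are guesses, not the project's code):

```python
import numpy as np

def svd_reconstruction_sketch(block, interval):
    """Rebuild an image block from a band [begin, end) of singular values.

    Hypothetical stand-in for preprocessing_functions.svd_reconstruction.
    """
    begin, end = interval
    gray = np.asarray(block, dtype=np.float64)
    if gray.ndim == 3:                      # collapse RGB to a single channel
        gray = gray.mean(axis=2)
    U, s, Vt = np.linalg.svd(gray, full_matrices=False)
    s_band = np.zeros_like(s)
    s_band[begin:end] = s[begin:end]        # zero out components outside the band
    return U @ np.diag(s_band) @ Vt
```

The caller above then clamps the result to `uint8` before `Image.fromarray`, so clipping negative or >255 values would be needed in practice.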

Binary
img_train/final/appartAopt_00850.png


Binary
img_train/final/appartAopt_00860.png


Binary
img_train/final/appartAopt_00870.png


Binary
img_train/final/appartAopt_00880.png


Binary
img_train/final/appartAopt_00890.png


Binary
img_train/final/appartAopt_00900.png


Binary
img_train/final/bureau1_9700.png


Binary
img_train/final/bureau1_9750.png


Binary
img_train/final/bureau1_9800.png


Binary
img_train/final/bureau1_9850.png


Binary
img_train/final/bureau1_9900.png


Binary
img_train/final/bureau1_9950.png


Binary
img_train/final/cendrierIUT2_01180.png


Binary
img_train/final/cendrierIUT2_01240.png


Binary
img_train/final/cendrierIUT2_01300.png


Binary
img_train/final/cendrierIUT2_01360.png


Binary
img_train/final/cendrierIUT2_01420.png


Binary
img_train/final/cendrierIUT2_01480.png


Binary
img_train/final/cuisine01_01150.png


Binary
img_train/final/cuisine01_01160.png


Binary
img_train/final/cuisine01_01170.png


Binary
img_train/final/cuisine01_01180.png


Binary
img_train/final/cuisine01_01190.png


Binary
img_train/final/cuisine01_01200.png


Binary
img_train/final/echecs09750.png


Binary
img_train/final/echecs09800.png


Binary
img_train/final/echecs09850.png


Binary
img_train/final/echecs09900.png


Binary
img_train/final/echecs09950.png


Binary
img_train/final/echecs10000.png


Binary
img_train/final/pnd_39750.png


Binary
img_train/final/pnd_39800.png


Binary
img_train/final/pnd_39850.png


Binary
img_train/final/pnd_39900.png


Binary
img_train/final/pnd_39950.png


Binary
img_train/final/pnd_40000.png


Binary
img_train/noisy/appartAopt_00070.png


Binary
img_train/noisy/appartAopt_00080.png


Binary
img_train/noisy/appartAopt_00090.png


Binary
img_train/noisy/appartAopt_00100.png


Binary
img_train/noisy/appartAopt_00110.png


Binary
img_train/noisy/appartAopt_00120.png


Binary
img_train/noisy/bureau1_100.png


Binary
img_train/noisy/bureau1_1000.png


Binary
img_train/noisy/bureau1_1050.png


Binary
img_train/noisy/bureau1_1100.png


Binary
img_train/noisy/bureau1_1150.png


Binary
img_train/noisy/bureau1_1250.png


Binary
img_train/noisy/cendrierIUT2_00040.png


Binary
img_train/noisy/cendrierIUT2_00100.png


Binary
img_train/noisy/cendrierIUT2_00160.png


Binary
img_train/noisy/cendrierIUT2_00220.png


Binary
img_train/noisy/cendrierIUT2_00280.png


Binary
img_train/noisy/cendrierIUT2_00340.png


Binary
img_train/noisy/cuisine01_00050.png


Binary
img_train/noisy/cuisine01_00060.png


Binary
img_train/noisy/cuisine01_00070.png


Binary
img_train/noisy/cuisine01_00080.png


Binary
img_train/noisy/cuisine01_00090.png


Binary
img_train/noisy/cuisine01_00100.png


Binary
img_train/noisy/echecs00050.png


Binary
img_train/noisy/echecs00100.png


Binary
img_train/noisy/echecs00150.png


Binary
img_train/noisy/echecs00200.png


Binary
img_train/noisy/echecs00250.png


Binary
img_train/noisy/echecs00300.png


Binary
img_train/noisy/pnd_100.png


Binary
img_train/noisy/pnd_1000.png


Binary
img_train/noisy/pnd_1050.png


Binary
img_train/noisy/pnd_1150.png


Binary
img_train/noisy/pnd_1200.png


Binary
img_train/noisy/pnd_1300.png


Binary
img_validation/final/SdB2_00900.png


Binary
img_validation/final/SdB2_00910.png


Binary
img_validation/final/SdB2_00920.png


Binary
img_validation/final/SdB2_00930.png


Binary
img_validation/final/SdB2_00940.png


Binary
img_validation/final/SdB2_00950.png


Binary
img_validation/final/SdB2_D_00900.png


Binary
img_validation/final/SdB2_D_00910.png


Binary
img_validation/final/SdB2_D_00920.png


Binary
img_validation/final/SdB2_D_00930.png


Binary
img_validation/final/SdB2_D_00940.png


Binary
img_validation/final/SdB2_D_00950.png


Binary
img_validation/final/selles_envir02850.png


Binary
img_validation/final/selles_envir02900.png


Binary
img_validation/final/selles_envir02950.png


Binary
img_validation/final/selles_envir03000.png


Binary
img_validation/final/selles_envir03050.png


+ 0 - 0
img_validation/final/selles_envir03100.png


Some files were not shown because too many files changed in this diff