Browse source

Update documentation

Jérôme BUISINE, 4 years ago
Parent commit 0c40621cc5
4 files changed, 26 additions and 20 deletions
  1. LICENSE.md (+8 / -0)
  2. README.md (+11 / -11)
  3. classification_cnn_keras_cross_validation.py (+7 / -8)
  4. generate_dataset.py (+0 / -1)

LICENSE.md (+8 / -0)

@@ -0,0 +1,8 @@
+MIT License
+Copyright (c) 2019 prise-3d
+
+Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

README.md (+11 / -11)

@@ -8,7 +8,7 @@ pip install -r requirements.txt
 
 ## How to use
 
-Generate dataset (run only once time or clean data folder before) :
+Generate dataset (run it only once, or clean the data folder first):
 ```
 python generate_dataset.py
 ```
@@ -20,13 +20,13 @@ You can specify the number of sub images you want in the script by modifying **_
 python generate_dataset.py --nb xxxx
 ```
 
-There are 3 kinds of Neural Networks :
-- **classification_cnn_keras.py** : *based on cropped images and do convolution*
-- **classification_cnn_keras_cross_validation.py** : *based on cropped images and do convolution. Data are randomly split for training*
-- **classification_cnn_keras_svd.py** : *based on svd metrics of image*
+There are 3 kinds of neural networks:
+- **classification_cnn_keras.py**: *based on cropped images, using convolutions*
+- **classification_cnn_keras_cross_validation.py**: *based on cropped images, using convolutions; data are randomly split for training*
+- **classification_cnn_keras_svd.py**: *based on SVD metrics of the image*
 
 
-After your built your neural network in classification_cnn_keras.py, you just have to run it :
+After you have built your neural network in classification_cnn_keras.py, just run it:
 
 ```
 python classification_cnn_keras_svd.py --directory xxxx --output xxxxx --batch_size xx --epochs xx --img xx (or --image_width xx --img_height xx)
@@ -36,12 +36,12 @@ A config file in json is available and keeps in memory all image sizes available
 
 ## Modules
 
-This project contains modules :
-- **modules/image_metrics** : *where all computed metrics function are developed*
-- **modules/model_helper** : *contains helpful function to save or display model information and performance*
+This project contains the following modules:
+- **modules/image_metrics**: *where all the metric computation functions are implemented*
+- **modules/model_helper**: *contains helper functions to save or display model information and performance*
 
 All these modules will be enhanced during development of the project
 
-## How to contribute
+## License
 
-This git project uses [git-flow](https://danielkummer.github.io/git-flow-cheatsheet/) implementation. You are free to contribute to it.
+[MIT](https://github.com/prise-3d/Thesis-NoiseDetection-CNN/blob/master/LICENSE)
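
Note: the README above mentions a JSON config file that keeps the available image sizes. A minimal sketch of reading such a file, assuming an illustrative `config.json` name and `image_sizes` key (neither is confirmed by this commit):

```python
# Hedged sketch: load the JSON config mentioned in the README.
# 'config.json' and the 'image_sizes' key are illustrative assumptions,
# not the project's actual schema.
import json

with open('config.json') as f:
    config = json.load(f)

# list the image sizes the scripts know how to handle
for size in config.get('image_sizes', []):
    print(size)
```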

classification_cnn_keras_cross_validation.py (+7 / -8)

@@ -33,7 +33,6 @@ from keras.layers import Activation, Dropout, Flatten, Dense, BatchNormalization
 from keras import backend as K
 from keras.utils import plot_model
 
-from ipfml import tf_model_helper
 
 # local functions import (metrics preprocessing)
 import preprocessing_functions
@@ -61,15 +60,15 @@ Method which returns model to train
 def generate_model():
     # create your model using this function
     model = Sequential()
-    model.add(Conv2D(60, (2, 2), input_shape=input_shape))
+    model.add(Conv2D(60, (2, 2), input_shape=input_shape, dilation_rate=1))
     model.add(Activation('relu'))
     model.add(MaxPooling2D(pool_size=(2, 2)))
 
-    model.add(Conv2D(40, (2, 2)))
+    model.add(Conv2D(40, (2, 2), dilation_rate=1))
     model.add(Activation('relu'))
     model.add(MaxPooling2D(pool_size=(2, 2)))
 
-    model.add(Conv2D(20, (2, 2)))
+    model.add(Conv2D(20, (2, 2), dilation_rate=1))
     model.add(Activation('relu'))
     model.add(MaxPooling2D(pool_size=(2, 2)))
 
@@ -104,7 +103,7 @@ def generate_model():
     model.add(Activation('sigmoid'))
 
     model.compile(loss='binary_crossentropy',
-                  optimizer='rmsprop',
+                  optimizer='adam',
                   metrics=['accuracy'])
 
     return model
@@ -140,14 +139,14 @@ def main():
 
     # update global variable and not local
     global batch_size
-    global epochs   
+    global epochs
     global img_width
     global img_height
     global input_shape
     global train_data_dir
     global validation_data_dir
     global nb_train_samples
-    global nb_validation_samples 
+    global nb_validation_samples
 
     if len(sys.argv) <= 1:
         print('Run with default parameters...')
@@ -227,7 +226,7 @@ def main():
             filename = directory + "/" + filename
 
         # save plot file history
-        tf_model_helper.save(history, filename)
+        # tf_model_helper.save(history, filename)
 
         plot_model(model, to_file=str(('%s.png' % filename)))
         model.save_weights(str('%s.h5' % filename))
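
For reference, the network touched by this commit can be sketched standalone: the three visible `Conv2D` blocks gain an explicit `dilation_rate=1` and the optimizer switches from `rmsprop` to `adam`. The layers between the two hunks are elided in the diff, so the `Flatten`/`Dense` head and the input shape below are placeholder assumptions, not the project's actual code:

```python
# Runnable sketch of generate_model() as changed by this commit (Keras).
# Only the layers visible in the diff are faithful; the middle of the model
# is elided between the hunks, so the head below is an assumption.
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Activation, Flatten, Dense

def generate_model(input_shape=(100, 100, 3)):  # input_shape is assumed
    model = Sequential()

    model.add(Conv2D(60, (2, 2), input_shape=input_shape, dilation_rate=1))
    model.add(Activation('relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))

    model.add(Conv2D(40, (2, 2), dilation_rate=1))
    model.add(Activation('relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))

    model.add(Conv2D(20, (2, 2), dilation_rate=1))
    model.add(Activation('relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))

    # --- layers elided between the diff hunks; placeholder head below ---
    model.add(Flatten())
    model.add(Dense(1))
    model.add(Activation('sigmoid'))

    model.compile(loss='binary_crossentropy',
                  optimizer='adam',  # switched from 'rmsprop' in this commit
                  metrics=['accuracy'])
    return model
```

Since `dilation_rate=(1, 1)` is already the Keras default for `Conv2D`, spelling it out documents intent without changing the computation.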

generate_dataset.py (+0 / -1)

@@ -30,7 +30,6 @@ def init_directory():
 
         os.makedirs('data/validation/final')
         os.makedirs('data/validation/noisy')
-       
 
 def create_images(folder, output_folder):
     images_path = glob.glob(folder + "/*.png")
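
Only the validation subdirectories appear in this hunk. A minimal sketch of the full layout it implies, assuming matching `data/train/...` directories (suggested by the `train_data_dir` variable in the training script, but not shown here):

```python
# Hedged sketch of the dataset directory layout; the train/ side is inferred,
# only the validation/ side appears in this commit.
import os

def init_directory():
    for split in ('train', 'validation'):
        for label in ('final', 'noisy'):
            # exist_ok avoids errors when the script is re-run
            os.makedirs(os.path.join('data', split, label), exist_ok=True)
```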