Fine-tuning with Keras (TensorFlow): repurposing a trained model for image classification with only a small amount of data
Using the data from the previous blog post, let's experiment with a convolutional neural network (CNN) built on top of a pre-trained model!
1. Introduction: What is "Fine tuning"?
Fine-tuning takes a model whose layer weights have already been trained and re-optimizes those connection weights to produce a new model.
A network trained on a large number of images ends up learning broadly similar filters regardless of the dataset. Therefore, by starting from a model trained on other image data, a new model can be built with much less data and training time.
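As a rough sketch of the idea (not the full script used later in this post, which also leaves VGG16's last convolutional block trainable), the pre-trained layers are frozen and only a small new classifier on top is trained:

# Minimal sketch of the fine-tuning idea: reuse pre-trained convolutional layers,
# train only a new classifier head on top.
from keras.applications.vgg16 import VGG16
from keras.models import Model
from keras.layers import Flatten, Dense

base = VGG16(include_top=False, weights='imagenet', input_shape=(224, 224, 3))
for layer in base.layers:
    layer.trainable = False          # keep the pre-trained filters fixed

x = Flatten()(base.output)
x = Dense(256, activation='relu')(x)
output = Dense(6, activation='softmax')(x)   # 6 flower classes

model = Model(inputs=base.input, outputs=output)
model.compile(optimizer='sgd', loss='categorical_crossentropy', metrics=['accuracy'])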
1-1. About the trained model
The trained model is VGG16, which we fine-tune to classify flowers using image data for 6 flower types (500 images each).
The connection weights shown in the figure below are retrained.
* VGG is a model from the Visual Geometry Group (VGG) at the University of Oxford, which achieved top results in the ImageNet (ILSVRC) 2014 competition.
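In case the figure does not display, the layers that make up the pre-trained VGG16 convolutional base can also be listed directly (a quick check, assuming the ImageNet weights can be downloaded):

from keras.applications.vgg16 import VGG16

# Lists block1_conv1 ... block5_pool, i.e. the layers whose ImageNet weights are reused below.
VGG16(include_top=False, weights='imagenet', input_shape=(224, 224, 3)).summary()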
2. Implementation
The full program is available on GitHub (https://github.com/tsunaki00/fine_tuning).
2-1. Development environment (machine specifications)
| Item | Spec |
|---|---|
| CPU | Intel® Core™ i7-7700 Processor |
| MEMORY | 16GB |
| GPU | GeForce GTX 980 Ti |
| OS | Ubuntu 16.04 |
2-2. Environment preparation (build Jupyter on Docker)
Build the Docker image from the Dockerfile. If you are not in a GPU environment, use a plain Ubuntu base image or similar.
※ The build takes a little while.
$ git clone https://github.com/tsunaki00/fine_tuning.git
$ cd fine_tuning
$ cd docker
$ docker build . -t keras
# Start the container (use nvidia-docker in a GPU environment)
$ docker run -d --name keras-container \
    -v $PWD/notebooks:/notebooks \
    -p 8888:8888 \
    keras
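On a GPU machine, the same container would instead be started through nvidia-docker, as the comment above notes (a sketch assuming nvidia-docker is installed; the flags mirror the command above):

$ nvidia-docker run -d --name keras-container \
    -v $PWD/notebooks:/notebooks \
    -p 8888:8888 \
    keras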
2-3. Preparation of training data
We use the flower images collected last time (see the Git repository).
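Because flow_from_directory is used below, the images are assumed to be arranged with one subdirectory per class under /notebooks, with directory names matching the classes list in the script, roughly like this:

notebooks/
  images/          # training images, one subdirectory per class
    chrysanthemum/  cosmos/  ginkgo/  lotus/  margaret/  rose/
  test_images/     # validation images, same per-class subdirectories
  check_images/    # single images used for prediction (e.g. rose.jpg)
  model/           # output directory for flower-model.hdf5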
2-4. Creating the training program on Jupyter
Access Jupyter at http://[Server]:8888 and run the following program.
Training takes a little while.
import pandas as pd
import random, math
import numpy as np
from keras.preprocessing.image import load_img, img_to_array
from keras.applications.vgg16 import VGG16
from keras.models import Sequential, Model
from keras.layers import Input, Dense, Dropout, Activation, Flatten
from keras.optimizers import SGD
from keras.utils import np_utils
from keras.preprocessing.image import ImageDataGenerator
# classification classes
classes = ['chrysanthemum', 'cosmos', 'ginkgo', 'lotus' , 'margaret', 'rose']
nb_classes = len(classes)
batch_size = 32
nb_epoch = 10
current_dir = "/notebooks"
# image size (pixels)
img_rows, img_cols = 224, 224
def build_model():
    input_tensor = Input(shape=(img_rows, img_cols, 3))
    vgg16 = VGG16(include_top=False, weights='imagenet', input_tensor=input_tensor)
    # vgg16.summary()
    _model = Sequential()
    _model.add(Flatten(input_shape=vgg16.output_shape[1:]))
    _model.add(Dense(256, activation='relu'))
    _model.add(Dropout(0.5))
    _model.add(Dense(nb_classes, activation='softmax'))
    model = Model(inputs=vgg16.input, outputs=_model(vgg16.output))
    # Freeze the weights of the first 15 layers (input through block4_pool of VGG16);
    # only block5 and the new classifier layers are retrained.
    for layer in model.layers[:15]:
        layer.trainable = False
    model.compile(loss='categorical_crossentropy',
                  optimizer=SGD(lr=1e-4, momentum=0.9), metrics=['accuracy'])
    return model
if __name__ == "__main__":
    train_datagen = ImageDataGenerator(
        rescale=1.0 / 255,
        shear_range=0.2,
        zoom_range=0.2,
        horizontal_flip=True)
    train_generator = train_datagen.flow_from_directory(
        directory=current_dir + '/images',
        target_size=(img_rows, img_cols),
        color_mode='rgb',
        classes=classes,
        class_mode='categorical',
        batch_size=batch_size,
        shuffle=True)
    test_datagen = ImageDataGenerator(rescale=1.0 / 255)
    test_generator = test_datagen.flow_from_directory(
        directory=current_dir + '/test_images',
        target_size=(img_rows, img_cols),
        color_mode='rgb',
        classes=classes,
        class_mode='categorical',
        batch_size=batch_size,
        shuffle=True)
    model = build_model()
    model.fit_generator(
        train_generator,
        steps_per_epoch=3000,
        epochs=nb_epoch,
        validation_data=test_generator,
        validation_steps=600
    )
    hdf5_file = current_dir + "/model/flower-model.hdf5"
    model.save_weights(hdf5_file)
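As a quick sanity check (a minimal sketch reusing the train_generator and batch_size defined above), you can confirm that the class-to-index mapping matches the classes list and see how many batches cover the training data once; note that steps_per_epoch in fit_generator counts batches, not images:

# Sanity check for the generator set up above.
print(train_generator.class_indices)           # e.g. {'chrysanthemum': 0, 'cosmos': 1, ...}
print(train_generator.samples)                 # number of training images found on disk
print(train_generator.samples // batch_size)   # batches needed to see each image once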
2-5. Model experiment
Now let's run a prediction with the trained model.
import pandas as pd
import random, math
import numpy as np
from keras.preprocessing.image import load_img, img_to_array
from keras.applications.vgg16 import VGG16
from keras.models import Sequential, Model
from keras.layers import Input, Dense, Dropout, Activation, Flatten
from keras.optimizers import SGD
classes = ['chrysanthemum', 'cosmos', 'ginkgo', 'lotus' , 'margaret', 'rose']
nb_classes = len(classes)
current_dir = "/notebooks"
img_rows, img_cols = 224, 224
def build_model():
    input_tensor = Input(shape=(img_rows, img_cols, 3))
    vgg16 = VGG16(include_top=False, weights='imagenet', input_tensor=input_tensor)
    _model = Sequential()
    _model.add(Flatten(input_shape=vgg16.output_shape[1:]))
    _model.add(Dense(256, activation='relu'))
    _model.add(Dropout(0.5))
    _model.add(Dense(nb_classes, activation='softmax'))
    model = Model(inputs=vgg16.input, outputs=_model(vgg16.output))
    # Freeze the weights of the first 15 layers (input through block4_pool of VGG16);
    # only block5 and the new classifier layers are retrained.
    for layer in model.layers[:15]:
        layer.trainable = False
    model.compile(loss='categorical_crossentropy',
                  optimizer=SGD(lr=1e-4, momentum=0.9), metrics=['accuracy'])
    return model
if __name__ == "__main__":
    model = build_model()
    model.load_weights(current_dir + "/model/flower-model.hdf5")
    filename = current_dir + "/check_images/rose.jpg"
    img = load_img(filename, target_size=(img_rows, img_cols))
    x = img_to_array(img)
    x = np.expand_dims(x, axis=0)
    # Rescale to match the training-time preprocessing (ImageDataGenerator's rescale=1/255)
    x = x / 255.0
    predict = model.predict(x)
    for pre in predict:
        y = pre.argmax()
        print("Flower name : ", classes[y])
Let's test with the following flower image.
The result was correct: the model answered "rose".
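To see how confident the model is, the same prediction can also be printed as a probability for every class, not just the argmax (a small sketch reusing the predict and classes variables from the script above):

# Print the predicted probability for each class.
for pre in predict:
    for idx, prob in enumerate(pre):
        print("%-15s %.3f" % (classes[idx], prob))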
3. Conclusion
Fine-tuning a trained model really broadens what you can do, even with limited data!! We usually develop AI for the horse racing prediction service siva and for fashion.
I also tweet about various deep learning experiments, so please follow me on Twitter.