

[Coursera Course] Week 2 - Practice (2/2)


You can use a callback function to apply a condition while training, such as stopping once a metric reaches a target.
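Subclassing tf.keras.callbacks.Callback gives you several hooks that Keras calls during model.fit(). The skeleton below is only a minimal sketch of the most common hooks (the print messages are placeholders, not part of the course code):

import tensorflow as tf

# Minimal sketch of the hooks Keras invokes during model.fit()
class LoggingCallback(tf.keras.callbacks.Callback):
    def on_train_begin(self, logs=None):
        print("training started")

    def on_epoch_begin(self, epoch, logs=None):
        print("epoch", epoch, "started")

    def on_epoch_end(self, epoch, logs=None):
        # logs is a dict of metrics for the finished epoch, e.g. {'loss': ..., 'accuracy': ...}
        print("epoch", epoch, "ended:", logs)

    def on_train_end(self, logs=None):
        print("training finished")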

 

The example below stops training once the accuracy exceeds 60%.

 

import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np

# Custom callback: stop training once accuracy exceeds 60%
class myCallback(tf.keras.callbacks.Callback):
  def on_epoch_end(self, epoch, logs={}):
    # logs holds the metrics computed for the epoch that just ended
    if logs.get('accuracy') > 0.6:
      print("\nReached 60% accuracy so cancelling training!")
      self.model.stop_training = True

# print(tf.__version__)
mnist = tf.keras.datasets.fashion_mnist
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()

#normalize to 0~1
training_images  = training_images / 255.0
test_images = test_images / 255.0

#modeling
# Flatten: reshapes each 28x28 image into a 1-dimensional vector
# relu: returns X if X > 0, else 0
# softmax: turns the outputs into probabilities that sum to 1;
#          the largest value marks the predicted class, e.g. index 4 in
#          [0.1, 0.1, 0.05, 0.1, 9.5, 0.1, 0.05, 0.05, 0.05]
model = tf.keras.models.Sequential([tf.keras.layers.Flatten(),
                                    tf.keras.layers.Dense(128, activation=tf.nn.relu),
                                    tf.keras.layers.Dense(10, activation=tf.nn.softmax)])

model.compile(optimizer = tf.optimizers.Adam(),
              loss = 'sparse_categorical_crossentropy',
              metrics=['accuracy'])

callbacks = myCallback()
model.fit(training_images, training_labels, epochs=10, callbacks=[callbacks])

#evaluation
model.evaluate(test_images, test_labels)

#classifications
classifications = model.predict(test_images)
print(classifications[0]) # prints the 10 class probabilities; the largest is the last one (class 9)
print(test_labels[0])     # prints 9, the true label for the first test image
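
To turn the 10 probabilities into a single predicted class, you can take the index of the largest value with np.argmax. A minimal sketch that reuses classifications and test_labels from the code above:

import numpy as np

# index of the largest probability = predicted class
print(np.argmax(classifications[0]))            # expected to print 9, matching test_labels[0]
print(np.argmax(classifications, axis=1)[:10])  # predicted classes for the first 10 test images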

 


Example code for the MNIST dataset that stops training once accuracy reaches 99% or higher.

 

import tensorflow as tf

def train_mnist():
    class myCallback(tf.keras.callbacks.Callback):
        def on_epoch_end(self, epoch, logs={}):
            # with metrics=['accuracy'] the log key is 'accuracy' in TF 2.x
            if logs.get('accuracy') > 0.99:
                print('Reached 99% accuracy so cancelling training!')
                self.model.stop_training = True

    callbacks = myCallback()

    mnist = tf.keras.datasets.mnist
    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    
    x_train = x_train / 255.0
    x_test = x_test / 255.0
    
    model = tf.keras.models.Sequential([    
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation=tf.nn.relu),
        tf.keras.layers.Dense(10, activation=tf.nn.softmax)    
    ])

    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    
    history = model.fit(  
        x_train, y_train, epochs=10, callbacks=[callbacks]
    )
    
    return history.epoch, history.history['accuracy'][-1]

train_mnist()
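
For reference, Keras also ships a built-in tf.keras.callbacks.EarlyStopping callback. It stops training when a monitored metric stops improving (a plateau) rather than at a fixed threshold, so it is not an exact replacement for the custom callback above. A minimal sketch of how it could be wired into the same fit() call (the argument values here are illustrative):

import tensorflow as tf

# Built-in alternative: stop when the monitored metric stops improving
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor='accuracy',         # matches metrics=['accuracy'] used above
    patience=2,                 # stop after 2 epochs without improvement
    restore_best_weights=True   # roll back to the weights of the best epoch
)

# model.fit(x_train, y_train, epochs=10, callbacks=[early_stop])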

 

-End-
