Implementing Softmax Classification with TensorFlow

 

 

The softmax function works well when predicting over multiple classes.

In [1]:
import tensorflow as tf
 

1. How do we implement it in TensorFlow?

 
 
 

hypothesis = tf.nn.softmax(tf.matmul(X,W)+b)

tf.matmul(X,W)+b computes the scores XW + b, i.e. the model's raw outputs.
These scores are also called logits.
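
As a minimal sketch of what this means (the numbers below are made up for illustration and are not from the original post), tf.nn.softmax simply turns a row of scores into probabilities that sum to 1:

logits = tf.constant([[2.0, 1.0, 0.1]])   # raw scores, i.e. XW + b
probs = tf.nn.softmax(logits)             # exp(logits) / sum(exp(logits))

with tf.Session() as sess:
    print(sess.run(probs))   # roughly [[0.66 0.24 0.10]]; each row sums to 1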

 

Cost function: cross entropy

We need a loss function suited to softmax: cross entropy. It is very easy to implement in TensorFlow.

 
 
 

cost = tf.reduce_mean(-tf.reduce_sum(Y * tf.log(hypothesis), axis=1))

 
 
 

optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(cost)
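
For reference only (not in the original post): if your TensorFlow 1.x version provides tf.nn.softmax_cross_entropy_with_logits_v2, the same cross-entropy loss can be computed directly from the logits, which avoids taking the log of very small softmax outputs. This sketch assumes the X, Y, W, b defined in section 2 below.

logits = tf.matmul(X, W) + b
cost = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits_v2(labels=Y, logits=logits))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(cost)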

 

2. Implementation

 

1) Training

In [5]:
x_data = [[1, 2, 1, 1], [2, 1, 3, 2], [3, 1, 3, 4], [4, 1, 5, 5], [1, 7, 5, 5],
         [1, 2, 5, 6], [1, 6, 6, 6], [1, 7, 7, 7]]
y_data = [[0, 0, 1], [0, 0, 1], [0, 0, 1], [0, 1, 0], [0, 1, 0], [0, 1, 0], [1, 0, 0], [1, 0, 0]]
# Since y has multiple classes, it is expressed with one-hot encoding:
# the integer labels are 2, 2, 2, 1, 1, 1, 0, 0 (see the tf.one_hot sketch after the training output below)

X = tf.placeholder("float", [None, 4])
Y = tf.placeholder("float", [None, 3])
nb_classes = 3

W = tf.Variable(tf.random_normal([4, nb_classes]), name='weight')
b = tf.Variable(tf.random_normal([nb_classes]), name='bias')

# tf.nn.softmax computes softmax activations
# softmax = exp(logits) / reduce_sum(exp(logits), dim)
hypothesis = tf.nn.softmax(tf.matmul(X,W) + b)

# Cross entropy cost/loss
cost = tf.reduce_mean(-tf.reduce_sum(Y * tf.log(hypothesis), axis=1))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(cost)

# Launch graph
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    
    for step in range(2001):
        sess.run(optimizer, feed_dict={X: x_data, Y: y_data})
        if step % 200 == 0:
            print(step, sess.run(cost, feed_dict={X: x_data, Y: y_data}))
 
WARNING:tensorflow:From C:\Users\whanh\AppData\Local\Continuum\anaconda3\lib\site-packages\tensorflow\python\framework\op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
WARNING:tensorflow:From C:\Users\whanh\AppData\Local\Continuum\anaconda3\lib\site-packages\tensorflow\python\ops\math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
0 5.5452747
200 0.591108
400 0.49374837
600 0.40227863
800 0.31145102
1000 0.23698442
1200 0.21411678
1400 0.19557284
1600 0.17987308
1800 0.16642055
2000 0.15477483
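
As noted in the code comment, y_data is just the one-hot form of the integer labels 2, 2, 2, 1, 1, 1, 0, 0. A minimal sketch (not from the original post) of producing the same matrix with tf.one_hot:

labels = [2, 2, 2, 1, 1, 1, 0, 0]
one_hot = tf.one_hot(labels, depth=nb_classes)   # shape (8, 3)

with tf.Session() as sess:
    print(sess.run(one_hot))   # [[0. 0. 1.], [0. 0. 1.], ..., [1. 0. 0.]]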
 

2) Test & one-hot encoding

 

Let's check what the output looks like.

 

hypothesis = tf.nn.softmax(tf.matmul(X,W)+b)

 
 
In [22]:
# Testing & One-hot encoding
with tf.Session() as sess : 
    sess.run(tf.global_variables_initializer())
    a = sess.run(hypothesis, feed_dict={X: [[1, 11, 7, 9]]})
    print(a, sess.run(tf.arg_max(a, 1)))
    

    print('--------------------')

    b = sess.run(hypothesis, feed_dict={X: [[1, 3, 4, 3]]})
    print(b, sess.run(tf.arg_max(b, 1)))

    print('--------------------')

    c = sess.run(hypothesis, feed_dict={X: [[1, 1, 0, 1]]})
    print(c, sess.run(tf.arg_max(c, 1)))

    print('--------------------')

    all = sess.run(hypothesis, feed_dict={X: [[1, 11, 7, 9], [1, 3, 4, 3], [1, 1, 0, 1]]})
    print(all, sess.run(tf.arg_max(all, 1)))
 
[[1.4244564e-01 1.1248226e-05 8.5754305e-01]] [2]
--------------------
[[0.27005303 0.01810987 0.7118372 ]] [2]
--------------------
[[0.51402605 0.15140067 0.33457324]] [0]
--------------------
[[1.4244564e-01 1.1248226e-05 8.5754305e-01]
 [2.7005303e-01 1.8109867e-02 7.1183717e-01]
 [5.1402605e-01 1.5140067e-01 3.3457324e-01]] [2 2 0]
 

Each row is [probability of a, probability of b, probability of c], followed by [?]: ? = 0 means a, 1 means b, 2 means c.
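
In other words, tf.arg_max (spelled tf.argmax in newer TensorFlow) just returns the index of the largest probability in each row. A tiny sketch with made-up numbers:

probs = tf.constant([[0.14, 0.00, 0.86],
                     [0.51, 0.15, 0.34]])
with tf.Session() as sess:
    print(sess.run(tf.argmax(probs, 1)))   # [2 0]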

In [ ]:
Because of the error 'Attempted to use a closed Session.', I looked it up: you need to add
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

# Only when testing in a new Jupyter cell! If the test code had stayed attached to the training code, this would not be needed.
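
Note that running tf.global_variables_initializer() again in a fresh session re-initializes W and b to new random values, so the test predictions above were made with untrained weights. A hedged sketch (reusing the variables and data from the training cell) of keeping training and prediction in one session so the learned weights are actually used:

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(2001):
        sess.run(optimizer, feed_dict={X: x_data, Y: y_data})

    # Predict with the trained W and b (no re-initialization in between)
    preds = sess.run(hypothesis, feed_dict={X: [[1, 11, 7, 9], [1, 3, 4, 3], [1, 1, 0, 1]]})
    print(preds, sess.run(tf.argmax(preds, 1)))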