Multi-variable linear regression

In [5]:
from PIL import Image
Image.open('그림3.png')
Out[5]:
[slide image: the hypothesis with three input variables]
H(x1,x2,x3) = x1w1 + x2w2 + x3w3
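As a quick worked instance of this hypothesis (the code below also adds a bias term b, even though the slide's formula omits it), here is a minimal plain-Python sketch with made-up, not learned, weights:

w1, w2, w3, b = 0.5, 0.5, 1.0, 2.0   # hypothetical values, not trained ones
x1, x2, x3 = 73., 80., 75.           # first sample from the data below
H = x1 * w1 + x2 * w2 + x3 * w3 + b  # per-sample weighted sum plus bias
print(H)                             # 153.5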

In [ ]:
import tensorflow as tf

x1_data = [73., 93., 89., 96., 73.]
x2_data = [80., 88., 91., 98., 66.]
x3_data = [75., 93., 90., 100., 70.]

y_data = [152., 185., 180., 196., 142.]

# placeholders for tensors that will always be fed.
x1 = tf.placeholder(tf.float32)
x2 = tf.placeholder(tf.float32)
x3 = tf.placeholder(tf.float32)

Y = tf.placeholder(tf.float32)

w1 = tf.Variable(tf.random_normal([1]), name='weight1')
w2 = tf.Variable(tf.random_normal([1]), name='weight2')
w3 = tf.Variable(tf.random_normal([1]), name='weight3')
b = tf.Variable(tf.random_normal([1]), name='bias')

hypothesis = x1 * w1 + x2 * w2 + x3 * w3 + b

# cost/loss function
cost = tf.reduce_mean(tf.square(hypothesis - Y))
# Minimize. Need a very small learning rate for this data set.
optimizer = tf.train.GradientDescentOptimizer(learning_rate=1e-5)
train = optimizer.minimize(cost)

# Launch the graph in a session.
sess = tf.Session()
# Initializes global variables in the graph
sess.run(tf.global_variables_initializer())
for step in range(2001):
    cost_val, hy_val, _ = sess.run([cost, hypothesis, train],
                                  feed_dict={x1: x1_data, x2: x2_data, x3: x3_data, Y: y_data})
    if step % 10 == 0:
        print(step, "Cost: ", cost_val, "\nPrediction:\n", hy_val)
 

The output is long, so only the first five and last five printed results are shown.

0 Cost: 41205.996
Prediction:
[-29.728159 -31.313475 -33.342583 -34.231766 -24.40795 ]
10 Cost: 2.0762541
Prediction:
[149.83916 184.51231 179.31468 197.346 140.21315]
20 Cost: 1.6982925
Prediction:
[150.38287 185.16508 179.9583 198.0462 140.71143]
30 Cost: 1.6975454
Prediction:
[150.38492 185.16682 179.96045 198.04788 140.71313]
40 Cost: 1.696792
Prediction:
[150.38535 185.16661 179.96066 198.04745 140.71335]
50 Cost: 1.6960138
Prediction:
[150.38577 185.1664 179.96086 198.047 140.71356]

......................

1960 Cost: 1.5586101
Prediction:
[150.45787 185.12816 179.99718 197.9628 140.75667]
1970 Cost: 1.5579323
Prediction:
[150.4582 185.12798 179.99734 197.96236 140.7569 ]
1980 Cost: 1.557247
Prediction:
[150.45854 185.12779 179.99753 197.96191 140.75714]
1990 Cost: 1.5565653
Prediction:
[150.4589 185.12762 179.9977 197.96149 140.75739]
2000 Cost: 1.5558803
Prediction:
[150.45923 185.12744 179.99788 197.96104 140.75763]

 

We can see that the predictions converge toward y_data.
However, writing the hypothesis out term by term for every variable turns into spaghetti as the number of features grows, so this approach is no longer used.

 

Matrix

In [9]:
from PIL import Image
Image.open('그림4.png')
Out[9]:
[slide image: the hypothesis in matrix form, H(X) = XW]

The result is the same as before, but the matrix form expresses the hypothesis much more compactly; a quick shape check follows below.
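Just to make the shapes concrete, here is a minimal NumPy sketch with dummy values, mirroring the tf.matmul call in the cell below:

import numpy as np

X = np.array([[73., 80., 75.],
              [93., 88., 93.]])   # shape (2, 3): 2 samples, 3 features
W = np.ones((3, 1))               # shape (3, 1): one weight per feature (dummy)
print(np.matmul(X, W).shape)      # (2, 1): one prediction per sample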

In [ ]:
x_data = [[73., 80., 75.], [93., 88., 93.],
          [89., 91., 90.], [96., 98., 100.], [73., 66., 70.]]
y_data = [[152.], [185.], [180.], [196.], [142.]]

# placeholders for tensors that will always be fed.
X = tf.placeholder(tf.float32, shape=[None, 3])  # [number of samples] x 3
Y = tf.placeholder(tf.float32, shape=[None, 1])  # [number of samples] x 1

W = tf.Variable(tf.random_normal([3, 1]), name='weight')
b = tf.Variable(tf.random_normal([1]), name='bias')
                
# Hypothesis
hypothesis = tf.matmul(X, W) + b
                
                
# simplified cost/loss function
cost = tf.reduce_mean(tf.square(hypothesis - Y))
# Minimize
optimizer = tf.train.GradientDescentOptimizer(learning_rate=1e-5)
train = optimizer.minimize(cost)
                
# Launch the graph in a session.
sess = tf.Session()
# Initializes global variables in the graph.
sess.run(tf.global_variables_initializer())
                
for step in range(2001):
    cost_val, hy_val, _ = sess.run(
        [cost, hypothesis, train], feed_dict={X: x_data, Y: y_data})
    if step % 10 == 0:
        print(step, "Cost: ", cost_val, "\nPrediction:\n", hy_val)
 

The output is long, so only the first five and last five printed results are shown.

0 Cost: 7030.7046
Prediction:
[[ 80.20585 ]
[ 93.21972 ]
[ 93.588715]
[100.07896 ]
[ 71.94469 ]]
10 Cost: 3.77499
Prediction:
[[154.35341]
[182.35323]
[181.40634]
[195.71117]
[139.93365]]
20 Cost: 3.692349
Prediction:
[[154.57053]
[182.62782]
[181.66982]
[195.9991 ]
[140.14565]]
30 Cost: 3.67409
Prediction:
[[154.56398]
[182.63358]
[181.66837]
[195.99857]
[140.15256]]
40 Cost: 3.6559856
Prediction:
[[154.5568 ]
[182.63847]
[181.66615]
[195.99715]
[140.15884]]
50 Cost: 3.6379292
Prediction:
[[154.5496 ]
[182.64339]
[181.66391]
[195.99573]
[140.16508]]

........................

1960 Cost: 1.4921378
Prediction:
[[153.47739]
[183.3746 ]
[181.33044]
[195.79233]
[141.09286]]
1970 Cost: 1.4857347
Prediction:
[[153.4731 ]
[183.37752]
[181.32912]
[195.79156]
[141.09656]]
1980 Cost: 1.4793497
Prediction:
[[153.46881]
[183.38045]
[181.32777]
[195.79077]
[141.1002 ]]
1990 Cost: 1.473003
Prediction:
[[153.46455]
[183.38336]
[181.32645]
[195.79002]
[141.10388]]
2000 Cost: 1.4666864
Prediction:
[[153.46028]
[183.38626]
[181.32512]
[195.78926]
[141.10753]]
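Once training has finished, the same graph can reuse the learned weights for new inputs. A minimal sketch, assuming the session and placeholders from the cell above are still in scope (the score values fed in are invented for illustration):

# Ask for predictions on unseen inputs via the trained hypothesis.
print(sess.run(hypothesis, feed_dict={X: [[100., 70., 101.]]}))
print(sess.run(hypothesis, feed_dict={X: [[60., 70., 110.], [90., 100., 80.]]}))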
