TensorFlow v2.0 Beginner Tutorial 03: Linear Regression

浅浅的花香味﹌ 2023-08-17 17:33

This tutorial assumes a basic understanding of how linear regression works.

Linear Regression
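The code fits a line y = Wx + b by minimizing a halved mean squared error over the n training samples; the factor 1/2 only scales the gradients and matches the mean_square function defined below:

\hat{y} = W x + b, \qquad L(W, b) = \frac{1}{2n} \sum_{i=1}^{n} \left( W x_i + b - y_i \right)^2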

import tensorflow as tf
import numpy as np

# Hyperparameters
learning_rate = 0.01   # learning rate
training_steps = 1000  # total number of training steps
display_step = 50      # print progress every 50 steps

# Training data
X = np.array([3.3, 4.4, 5.5, 6.71, 6.93, 4.168, 9.779, 6.182, 7.59, 2.167,
              7.042, 10.791, 5.313, 7.997, 5.654, 9.27, 3.1])
Y = np.array([1.7, 2.76, 2.09, 3.19, 1.694, 1.573, 3.366, 2.596, 2.53, 1.221,
              2.827, 3.465, 1.65, 2.904, 2.42, 2.94, 1.3])
n_samples = X.shape[0]

# Randomly initialize the weight and bias
W = tf.Variable(np.random.randn(), name="weight")
b = tf.Variable(np.random.randn(), name="bias")

# Linear regression model
def linear_regression(x):
    return W * x + b

# Loss function: halved mean squared error
def mean_square(y_pred, y_true):
    return tf.reduce_sum(tf.pow(y_pred - y_true, 2)) / (2 * n_samples)

# Use stochastic gradient descent (SGD) as the optimizer
optimizer = tf.optimizers.SGD(learning_rate)

# Compute gradients and update the parameters
def run_optimization():
    # tf.GradientTape() records the forward pass so gradients can be derived from it
    with tf.GradientTape() as g:
        pred = linear_regression(X)
        loss = mean_square(pred, Y)
    # Compute gradients with respect to W and b
    gradients = g.gradient(loss, [W, b])
    # Update W and b
    optimizer.apply_gradients(zip(gradients, [W, b]))

# Run the training loop
for step in range(1, training_steps + 1):
    run_optimization()
    if step % display_step == 0:
        pred = linear_regression(X)
        loss = mean_square(pred, Y)
        print("step: %i, loss: %f, W: %f, b: %f" % (step, loss, W.numpy(), b.numpy()))
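tf.GradientTape does nothing mysterious here: it records the forward computation and differentiates it. As a quick check that is not part of the original post, the analytic gradients of this loss are mean((pred - y) * x) for W and mean(pred - y) for b, and they should agree with what the tape returns:

# Sanity check (illustrative, not in the original post): compare the tape's
# gradients with the hand-derived ones for loss = sum((W*x + b - y)^2) / (2n).
with tf.GradientTape() as g:
    pred = linear_regression(X)
    loss = mean_square(pred, Y)
dW, db = g.gradient(loss, [W, b])

err = pred.numpy() - Y
print(dW.numpy(), np.mean(err * X))  # dL/dW == mean((pred - y) * x)
print(db.numpy(), np.mean(err))      # dL/db == mean(pred - y)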

Output

step: 50, loss: 0.122252, W: 0.372333, b: -0.056897
step: 100, loss: 0.117069, W: 0.365222, b: -0.006478
step: 150, loss: 0.112478, W: 0.358529, b: 0.040970
step: 200, loss: 0.108412, W: 0.352231, b: 0.085622
step: 250, loss: 0.104811, W: 0.346304, b: 0.127643
step: 300, loss: 0.101622, W: 0.340726, b: 0.167189
step: 350, loss: 0.098798, W: 0.335476, b: 0.204404
step: 400, loss: 0.096297, W: 0.330536, b: 0.239427
step: 450, loss: 0.094082, W: 0.325887, b: 0.272386
step: 500, loss: 0.092120, W: 0.321512, b: 0.303402
step: 550, loss: 0.090383, W: 0.317395, b: 0.332592
step: 600, loss: 0.088844, W: 0.313520, b: 0.360061
step: 650, loss: 0.087481, W: 0.309874, b: 0.385912
step: 700, loss: 0.086274, W: 0.306442, b: 0.410240
step: 750, loss: 0.085205, W: 0.303213, b: 0.433135
step: 800, loss: 0.084259, W: 0.300174, b: 0.454680
step: 850, loss: 0.083421, W: 0.297314, b: 0.474956
step: 900, loss: 0.082678, W: 0.294623, b: 0.494037
step: 950, loss: 0.082021, W: 0.292090, b: 0.511994
step: 1000, loss: 0.081438, W: 0.289706, b: 0.528893
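The loss is still falling at step 1000, so W and b have not fully converged. As a sanity check that is not part of the original tutorial, numpy's polyfit computes the exact least-squares line for this data; with more training steps the SGD estimates should approach it (the precise trajectory depends on the random initialization):

# Closed-form least-squares fit for comparison (illustrative addition).
W_ls, b_ls = np.polyfit(X, Y, deg=1)  # returns [slope, intercept]
print("least squares -> W: %f, b: %f" % (W_ls, b_ls))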

Visualizing the result

import matplotlib.pyplot as plt

plt.plot(X, Y, 'ro', label='Original data')
plt.plot(X, np.array(W * X + b), label='Fitted line')
plt.legend()
plt.show()

[Figure: training data as red points with the fitted regression line]
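Since linear_regression is an ordinary function of the trained variables, it can be reused directly for prediction. A small illustrative example (the inputs 4.0 and 8.0 are made up):

# Predict on new, made-up inputs using the trained W and b.
x_new = np.array([4.0, 8.0])
print(linear_regression(x_new).numpy())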

