TensorFlow v2.0 Beginner Tutorial 04: Logistic Regression

喜欢ヅ旅行 2023-08-17 17:44

This tutorial assumes you already understand the principles of logistic regression.
It uses the MNIST handwritten digit dataset: 60,000 training images and 10,000 test images, each 28*28 pixels with values ranging from 0 to 255.
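As a quick sanity check, the shapes can be verified right after loading (a minimal standalone sketch, assuming TensorFlow 2.x is installed):

from tensorflow.keras.datasets import mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()
print(x_train.shape, y_train.shape)  # (60000, 28, 28) (60000,)
print(x_test.shape, y_test.shape)    # (10000, 28, 28) (10000,)
print(x_train.min(), x_train.max())  # 0 255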

Logistic Regression

import tensorflow as tf
import numpy as np
from tensorflow.keras.datasets import mnist

# MNIST dataset parameters
num_classes = 10    # digits 0 to 9, 10 classes
num_features = 784  # 28*28

# Training parameters
learning_rate = 0.01
training_steps = 1000
batch_size = 256
display_step = 50
# Prepare the dataset
(x_train, y_train), (x_test, y_test) = mnist.load_data()
# Convert to float32
x_train, x_test = np.array(x_train, np.float32), np.array(x_test, np.float32)
# Flatten each 28*28 image into a 784-dimensional vector
x_train, x_test = x_train.reshape([-1, num_features]), x_test.reshape([-1, num_features])
# Rescale pixel values from [0, 255] to [0, 1]
x_train, x_test = x_train / 255., x_test / 255.
# tf.data.Dataset.from_tensor_slices builds a dataset from x_train and y_train
train_data = tf.data.Dataset.from_tensor_slices((x_train, y_train))
# Repeat indefinitely, shuffle, batch, and prefetch the next batch
train_data = train_data.repeat().shuffle(5000).batch(batch_size).prefetch(1)
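To confirm what the pipeline yields, one batch can be inspected (a sketch that reuses the train_data built above):

for batch_x, batch_y in train_data.take(1):
    print(batch_x.shape, batch_y.shape)  # (256, 784) (256,)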
# Weights, shape [784, 10]: flattened image size by number of classes
W = tf.Variable(tf.ones([num_features, num_classes]), name="weight")
# Bias, shape [10]: one entry per class
b = tf.Variable(tf.zeros([num_classes]), name="bias")

# Logistic regression model: softmax over the linear map
def logistic_regression(x):
    return tf.nn.softmax(tf.matmul(x, W) + b)
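tf.nn.softmax turns the 10 raw scores per image into probabilities that sum to 1. A small standalone sketch of that behavior:

logits = tf.constant([2.0, 1.0, 0.1])
probs = tf.nn.softmax(logits)
print(probs.numpy())                # roughly [0.659 0.242 0.099]
print(float(tf.reduce_sum(probs)))  # 1.0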
# Loss function: cross-entropy
def cross_entropy(y_pred, y_true):
    # tf.one_hot turns a class index into a one-hot probability vector
    y_true = tf.one_hot(y_true, depth=num_classes)
    # tf.clip_by_value keeps y_pred between 1e-9 and 1.0 to avoid log(0)
    y_pred = tf.clip_by_value(y_pred, 1e-9, 1.0)
    # Note: reduce_sum has no axis argument, so it sums over the whole batch,
    # and reduce_mean of that scalar is a no-op; the loss is therefore a batch
    # total, which is why the printed values below are in the hundreds
    return tf.reduce_mean(-tf.reduce_sum(y_true * tf.math.log(y_pred)))
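The two helpers used in the loss behave as follows (a standalone sketch):

print(tf.one_hot(3, depth=10).numpy())
# [0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]
print(tf.clip_by_value(tf.constant([0.0, 0.5, 1.2]), 1e-9, 1.0).numpy())
# roughly [1e-09 0.5 1.0]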
# Accuracy metric
def accuracy(y_pred, y_true):
    # tf.cast converts dtypes; the predicted class is the argmax of y_pred
    correct_prediction = tf.equal(tf.argmax(y_pred, 1), tf.cast(y_true, tf.int64))
    return tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
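On a toy batch of two predictions, one correct and one wrong, the metric comes out to 0.5 (a sketch that reuses the accuracy function above):

y_pred = tf.constant([[0.1, 0.9], [0.8, 0.2]])  # argmax picks class 1, then class 0
y_true = tf.constant([1, 1])
print(accuracy(y_pred, y_true).numpy())  # 0.5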
# Optimizer: stochastic gradient descent
optimizer = tf.optimizers.SGD(learning_rate)
# One step of gradient descent
def run_optimization(x, y):
    with tf.GradientTape() as g:
        pred = logistic_regression(x)
        loss = cross_entropy(pred, y)
    # Compute the gradients of the loss with respect to W and b
    gradients = g.gradient(loss, [W, b])
    # Apply the gradients to update W and b
    optimizer.apply_gradients(zip(gradients, [W, b]))
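tf.GradientTape records the forward computation so gradients can be taken afterwards. In isolation (a standalone sketch), the derivative of x^2 at x = 3 is 6:

x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x * x
print(tape.gradient(y, x).numpy())  # 6.0, since dy/dx = 2x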
# Training loop
for step, (batch_x, batch_y) in enumerate(train_data.take(training_steps), 1):
    run_optimization(batch_x, batch_y)
    if step % display_step == 0:
        pred = logistic_regression(batch_x)
        loss = cross_entropy(pred, batch_y)
        acc = accuracy(pred, batch_y)
        print("step: %i, loss: %f, accuracy: %f" % (step, loss, acc))

# Evaluate the model on the test set
pred = logistic_regression(x_test)
print("Test Accuracy: %f" % accuracy(pred, y_test))

Output

step: 50, loss: 1244.371582, accuracy: 0.656250
step: 100, loss: 926.965942, accuracy: 0.792969
step: 150, loss: 667.272644, accuracy: 0.832031
step: 200, loss: 489.627258, accuracy: 0.871094
step: 250, loss: 416.455811, accuracy: 0.878906
step: 300, loss: 633.148315, accuracy: 0.796875
step: 350, loss: 708.499329, accuracy: 0.816406
step: 400, loss: 567.255005, accuracy: 0.765625
step: 450, loss: 418.291779, accuracy: 0.890625
step: 500, loss: 596.595642, accuracy: 0.824219
step: 550, loss: 718.982849, accuracy: 0.746094
step: 600, loss: 785.499329, accuracy: 0.781250
step: 650, loss: 495.821411, accuracy: 0.847656
step: 700, loss: 544.291626, accuracy: 0.871094
step: 750, loss: 557.276123, accuracy: 0.867188
step: 800, loss: 588.374207, accuracy: 0.843750
step: 850, loss: 826.526855, accuracy: 0.804688
step: 900, loss: 515.780884, accuracy: 0.851562
step: 950, loss: 514.978210, accuracy: 0.855469
step: 1000, loss: 580.234985, accuracy: 0.843750
Test Accuracy: 0.826200
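After training, the model can also be queried for a single image (a sketch that reuses the variables above; the exact prediction depends on the training run):

probs = logistic_regression(x_test[:1])
print("predicted:", int(tf.argmax(probs, 1)[0]), "true:", int(y_test[0]))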
