TensorFlow batch normalization
Since TensorFlow 1.0 (February 2017), the high-level API tf.layers.batch_normalization has been available. It is simple and convenient to use:
# Set this to True for training and False for testing
training = tf.placeholder(tf.bool)

x = tf.layers.dense(input_x, units=100)
x = tf.layers.batch_normalization(x, training=training)
x = tf.nn.relu(x)
The one catch: it adds extra operations to the graph (for updating its moving mean and variance variables) in such a way that they will not be dependencies of your training op. You must run them yourself, either by fetching them explicitly alongside the training op:
extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
sess.run([train_op, extra_update_ops], ...)
Or by manually adding the update ops as dependencies of your training op, after which you can run training as normal:
extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(extra_update_ops):
    train_op = optimizer.minimize(loss)
...
sess.run([train_op], ...)
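To make concrete what those update ops maintain: batch normalization keeps exponential moving averages of the per-batch mean and variance, and at inference time it normalizes with these accumulated statistics instead of the current batch's. Below is a minimal NumPy sketch of that bookkeeping (the function name is illustrative, not a TF API; momentum=0.99 and eps=1e-3 mirror the defaults of tf.layers.batch_normalization):

```python
import numpy as np

def batch_norm(x, gamma, beta, moving_mean, moving_var,
               training, momentum=0.99, eps=1e-3):
    """Normalize x over the batch axis. In training mode, also update
    the moving statistics -- this is the work TensorFlow puts into the
    UPDATE_OPS collection rather than running automatically."""
    if training:
        mean = x.mean(axis=0)
        var = x.var(axis=0)
        # The "update ops": exponential moving averages of batch stats.
        moving_mean = momentum * moving_mean + (1 - momentum) * mean
        moving_var = momentum * moving_var + (1 - momentum) * var
    else:
        # Inference uses the accumulated moving statistics.
        mean, var = moving_mean, moving_var
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta, moving_mean, moving_var
```

If the update ops never run, moving_mean and moving_var stay at their initial values (zeros and ones), so training loss looks fine but inference results are wrong. That is the classic symptom of forgetting the UPDATE_OPS dependency.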