TensorFlow 2.0 Stochastic Gradient Descent: Gradient Descent

6.1 Gradient Descent

Gradient

  1. Derivative

  2. Partial derivative

  3. Gradient

    $\nabla f = \left(\frac{\partial f}{\partial x_1}, \frac{\partial f}{\partial x_2}, \dots, \frac{\partial f}{\partial x_n}\right)$

Meaning

The gradient reveals the direction in which the function value increases or decreases: moving along the gradient increases the value, moving against it decreases the value.
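A quick numeric check of this, assuming the illustrative function f(x1, x2) = x1^2 + x2^2 (not from the original):

import tensorflow as tf

x = tf.Variable([1.0, 2.0])

with tf.GradientTape() as tape:
    f = tf.reduce_sum(x ** 2)        # f(x1, x2) = x1^2 + x2^2 = 5.0
g = tape.gradient(f, x)              # ∇f = (2*x1, 2*x2) = [2., 4.]

eps = 0.01
tf.reduce_sum((x + eps * g) ** 2)    # ≈ 5.367  (stepping along +∇f increases f)
tf.reduce_sum((x - eps * g) ** 2)    # ≈ 4.647  (stepping along -∇f decreases f)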

Gradient Descent

  1. $\nabla f(\theta)$ points in the direction of larger values
  2. Search for minima:
    • learning rate: $lr$, $\alpha$, or $\eta$
      $\theta_{t+1} = \theta_t - \alpha_t \nabla f(\theta_t)$


Example

$\theta_{t+1} = \theta_t - \alpha_t \nabla f(\theta_t)$
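A minimal sketch of this update rule; the objective f(θ) = (θ - 1)^2, the starting point, and the learning rate are illustrative choices, not from the original:

import tensorflow as tf

theta = tf.Variable(5.0)             # initial guess
lr = 0.1                             # learning rate α

for step in range(50):
    with tf.GradientTape() as tape:
        loss = (theta - 1.0) ** 2    # f(θ) = (θ - 1)^2, minimum at θ = 1
    grad = tape.gradient(loss, theta)
    theta.assign_sub(lr * grad)      # θ ← θ - α∇f(θ)

theta   # ≈ 1.0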

Optimization process 1

(figure omitted)

Optimization process 2

(figure omitted)

Automatic differentiation

  • with tf.GradientTape() as tape:
    • build the computation graph
    • loss = $f_\theta(x)$
  • [$w_{grad}$] = tape.gradient(loss, [w])
import tensorflow as tf

w = tf.constant(1.)
x = tf.constant(2.)
y = x * w                      # computed outside any tape: not recorded

with tf.GradientTape() as tape:
    tape.watch([w])            # w is a constant, so it must be watched explicitly
    y2 = x * w                 # recorded by the tape

grad1 = tape.gradient(y, [w])
grad1   # [None] -- y was not computed inside the tape

with tf.GradientTape() as tape:
    tape.watch([w])
    y2 = x * w

grad2 = tape.gradient(y2, [w])
grad2   # [<tf.Tensor: id=7, shape=(), dtype=float32, numpy=2.0>]
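Note that tape.watch is only needed here because w is a tf.constant; trainable tf.Variables are watched automatically. A small sketch of the same computation with a Variable:

w = tf.Variable(1.)
x = tf.constant(2.)

with tf.GradientTape() as tape:    # no tape.watch needed for a tf.Variable
    y = x * w

tape.gradient(y, w)   # <tf.Tensor: shape=(), dtype=float32, numpy=2.0>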

Using a persistent GradientTape for multiple gradient calls

w = tf.constant(1.)
x = tf.constant(2.)

with tf.GradientTape() as tape:    # persistent=False by default
    tape.watch([w])
    y2 = x * w

grad = tape.gradient(y2, [w])
grad   # [<tf.Tensor: id=6, shape=(), dtype=float32, numpy=2.0>]

grad = tape.gradient(y2, [w])      # second call on the same non-persistent tape
# RuntimeError: GradientTape.gradient can only be called once on non-persistent tapes.

Setting persistent=True

w = tf.constant(1.)
x = tf.constant(2.)

with tf.GradientTape(persistent=True) as tape:
    tape.watch([w])
    y2 = x * w

grad = tape.gradient(y2, [w])
grad   # [<tf.Tensor: id=6, shape=(), dtype=float32, numpy=2.0>]
grad = tape.gradient(y2, [w])      # a persistent tape can be queried multiple times
grad   # [<tf.Tensor: id=10, shape=(), dtype=float32, numpy=2.0>]
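Resources held by a persistent tape are only released when the tape object is garbage collected, so once all gradients have been computed it can be dropped explicitly:

del tape   # release the resources held by the persistent tape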

Second-order gradients

  • $y = xw + b$
  • $\frac{\partial y}{\partial w} = x$
  • $\frac{\partial^2 y}{\partial w^2} = \frac{\partial y'}{\partial w} = \frac{\partial x}{\partial w} =$ None (x does not depend on w, so the second-order gradient is None)
w = tf.Variable(1.0)
b = tf.Variable(2.0)
x = tf.Variable(3.0)

with tf.GradientTape() as t1:            # outer tape: records how dy_dw is produced
    with tf.GradientTape() as t2:        # inner tape: records the forward pass
        y = x * w + b
    dy_dw, dy_db = t2.gradient(y, [w, b])
d2y_dw2 = t1.gradient(dy_dw, w)

dy_dw     # tf.Tensor(3.0, shape=(), dtype=float32)
dy_db     # tf.Tensor(1.0, shape=(), dtype=float32)
d2y_dw2   # None -- dy_dw = x does not depend on w
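For contrast, a small sketch where the second-order gradient is not None, using the illustrative choice y = w^2: here ∂y/∂w = 2w still depends on w, so ∂²y/∂w² = 2.

import tensorflow as tf

w = tf.Variable(3.0)

with tf.GradientTape() as t1:
    with tf.GradientTape() as t2:
        y = w ** 2
    dy_dw = t2.gradient(y, w)        # 2w = 6.0
d2y_dw2 = t1.gradient(dy_dw, w)

d2y_dw2   # tf.Tensor(2.0, shape=(), dtype=float32)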