
Function Optimization in Practice

Posted: 2019-05-24 00:11:12

Himmelblau function

  • \(f(x,y)=(x^2+y-11)^2+(x+y^2-7)^2\)
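
The gradient-descent experiment below needs \(\nabla f\); differentiating each squared term gives:

  • \(\partial f/\partial x = 4x(x^2+y-11) + 2(x+y^2-7)\)
  • \(\partial f/\partial y = 2(x^2+y-11) + 4y(x+y^2-7)\)

tf.GradientTape computes these automatically in the Gradient Descent section, so they are shown here only as a reference check.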

[Figure: 3-D surface plot of the Himmelblau function]

Minima

  • f(3.0, 2.0) = 0.0
  • f(-2.805118, 3.131312) = 0.0
  • f(-3.779310, -3.283186) = 0.0
  • f(3.584428, -1.848126) = 0.0
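
These locations can be checked numerically; a minimal standalone sketch (the function is redefined here so the snippet runs on its own):

def himmelblau(x):
    # Himmelblau's function: f(x, y) = (x^2 + y - 11)^2 + (x + y^2 - 7)^2
    return (x[0]**2 + x[1] - 11)**2 + (x[0] + x[1]**2 - 7)**2

# The four minima listed above
minima = [(3.0, 2.0), (-2.805118, 3.131312),
          (-3.779310, -3.283186), (3.584428, -1.848126)]
for p in minima:
    print(f'f{p} = {himmelblau(p):.2e}')  # each value is ~0 up to rounding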

Plot

import numpy as np
import matplotlib.pyplot as plt


def himmelblau(x):
    # Himmelblau's function: f(x, y) = (x^2 + y - 11)^2 + (x + y^2 - 7)^2
    return (x[0]**2 + x[1] - 11)**2 + (x[0] + x[1]**2 - 7)**2


# Sample both axes on [-6, 6) with step 0.1
x = np.arange(-6, 6, 0.1)
y = np.arange(-6, 6, 0.1)

print(f'x_shape: {x.shape}, y_shape: {y.shape}')

# Build the 2-D coordinate grid from the two axis vectors
X, Y = np.meshgrid(x, y)
print(f'X_shape: {X.shape}, Y_shape: {Y.shape}')

Z = himmelblau([X, Y])

# 3-D surface plot; fig.add_subplot(projection='3d') replaces the
# deprecated Axes3D(fig) constructor
fig = plt.figure('himmelblau')
ax = fig.add_subplot(projection='3d')
ax.plot_surface(X, Y, Z)
ax.view_init(60, -30)
ax.set_xlabel('x')
ax.set_ylabel('y')
plt.show()
Output:

x_shape: (120,), y_shape: (120,)
X_shape: (120, 120), Y_shape: (120, 120)

[Figure: surface plot produced by the code above]
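
The four basins are easier to see in a contour view. A minimal sketch reusing X, Y, Z from the code above (the log-spaced contour levels are an illustrative choice, not from the original post):

# Contour view of the same grid; log-spaced levels resolve the four
# shallow basins that a linear spacing would wash out
fig2, ax2 = plt.subplots()
ax2.contour(X, Y, Z, levels=np.logspace(-1, 3, 20))
ax2.set_xlabel('x')
ax2.set_ylabel('y')
plt.show()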

Gradient Descent

import tensorflow as tf

# Arbitrary starting point; a tf.constant must be watched explicitly
x = tf.constant([-4., 0.])

for step in range(200):

    with tf.GradientTape() as tape:
        tape.watch(x)
        y = himmelblau(x)

    # Plain gradient descent with a fixed learning rate of 0.01
    grads = tape.gradient(y, x)
    x -= 0.01 * grads

    if step % 20 == 0:
        print(f'step: {step}, x: {x}, f(x): {y}')
Output:

step: 0, x: [-2.98       -0.09999999], f(x): 146.0
step: 20, x: [-3.6890159 -3.1276689], f(x): 6.054703235626221
step: 40, x: [-3.7793102 -3.283186 ], f(x): 0.0
step: 60, x: [-3.7793102 -3.283186 ], f(x): 0.0
step: 80, x: [-3.7793102 -3.283186 ], f(x): 0.0
step: 100, x: [-3.7793102 -3.283186 ], f(x): 0.0
step: 120, x: [-3.7793102 -3.283186 ], f(x): 0.0
step: 140, x: [-3.7793102 -3.283186 ], f(x): 0.0
step: 160, x: [-3.7793102 -3.283186 ], f(x): 0.0
step: 180, x: [-3.7793102 -3.283186 ], f(x): 0.0
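
The run above converges to (-3.779310, -3.283186), but which minimum gradient descent reaches depends entirely on the starting point. A minimal sketch running the same update from several hand-picked initial points (the points are assumptions for illustration; each run can settle into a different one of the four basins):

# Same update rule, different starting points; the convergence target
# depends on which basin the initial point falls into
for init in [(4., 0.), (-4., 0.), (-4., 4.), (4., -4.)]:
    x = tf.constant(init)
    for step in range(200):
        with tf.GradientTape() as tape:
            tape.watch(x)
            y = himmelblau(x)
        x -= 0.01 * tape.gradient(y, x)
    print(f'init: {init} -> x: {x.numpy()}, f(x): {himmelblau(x).numpy()}')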


Original post: https://www.cnblogs.com/nickchen121/p/10915301.html
