Learning \(y = 2x\)
A BP (back-propagation) neural network with a single hidden layer and a single node.
Mean Square Error (MSE)
\[
MSE = \frac{1}{2}(\hat{y} - y)^2
\]
The model's objective is \(\min \frac{1}{2} (\hat{y} - y)^2\).
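For instance, a prediction of \(\hat{y} = 1\) against a label \(y = 6\) costs \(\frac{1}{2}(1-6)^2 = 12.5\); the squared term penalizes large deviations quadratically.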
Vanilla gradient descent: within each epoch, the model minimizes the error over all of the training data.
\[
\begin{aligned}
E &= \frac{1}{2}(\hat{Y}-Y)^2 \\
\hat{Y} &= \beta \\
\beta &= w\,b \\
b &= \mathrm{sigmoid}(\alpha) \\
\alpha &= v\,x
\end{aligned}
\]
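Plugging the initial weights \(v = w = 1\) and the sample \(x = 3,\ y = 6\) into these definitions gives a concrete forward pass (the numbers here are my own illustration):
\[
\alpha = 3, \quad b = \mathrm{sigmoid}(3) \approx 0.9526, \quad \hat{Y} = w\,b \approx 0.9526, \quad E = \frac{1}{2}(0.9526 - 6)^2 \approx 12.74
\]
This matches the initial prediction printed by the C++ program below, up to float rounding.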
The model's learnable parameters are \(w\) and \(v\); they are updated following the perceptron-style gradient rule:
Update rule for parameter \(w\)
\[
\begin{aligned}
w &\leftarrow w + \Delta w \\
\Delta w &= -\eta \frac{\partial E}{\partial w} \\
\frac{\partial E}{\partial w} &= \frac{\partial E}{\partial \hat{Y}} \frac{\partial \hat{Y}}{\partial \beta} \frac{\partial \beta}{\partial w} = (\hat{Y} - Y) \cdot 1 \cdot b
\end{aligned}
\]
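Continuing the worked example (my own numbers, with \(\eta = 0.01\)):
\[
\frac{\partial E}{\partial w} = (0.9526 - 6) \cdot 1 \cdot 0.9526 \approx -4.81, \qquad \Delta w = -\eta \cdot (-4.81) \approx 0.048
\]
so the first step raises \(w\), as expected when the prediction undershoots the label.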
Update rule for parameter \(v\)
\[
\begin{aligned}
v &\leftarrow v + \Delta v \\
\Delta v &= -\eta \frac{\partial E}{\partial v} \\
\frac{\partial E}{\partial v} &= \frac{\partial E}{\partial \hat{Y}} \frac{\partial \hat{Y}}{\partial \beta} \frac{\partial \beta}{\partial b} \frac{\partial b}{\partial \alpha} \frac{\partial \alpha}{\partial v} = (\hat{Y} - Y) \cdot 1 \cdot w \cdot \frac{\partial b}{\partial \alpha} \cdot x \\
\frac{\partial b}{\partial \alpha} &= \mathrm{sigmoid}(\alpha)\,[1 - \mathrm{sigmoid}(\alpha)] \\
\mathrm{sigmoid}(\alpha) &= \frac{1}{1+e^{-\alpha}}
\end{aligned}
\]
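For \(v\) on the same sample:
\[
\frac{\partial E}{\partial v} = (0.9526 - 6) \cdot 1 \cdot 1 \cdot 0.9526\,(1 - 0.9526) \cdot 3 \approx -0.684, \qquad \Delta v \approx 0.0068
\]
Note how the factor \(b(1-b) \approx 0.045\) shrinks this gradient: with the sigmoid nearly saturated at \(\alpha = 3\), \(v\) moves far more slowly than \(w\).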
#include <iostream>
#include <cmath>
using namespace std;

class Network {
public:
    explicit Network(float eta) : eta(eta) {}
    float predict(int x) { // forward propagation
        this->alpha = this->v * x;      // alpha = v * x
        this->b = this->sigmoid(alpha); // b = sigmoid(alpha)
        this->beta = this->w * this->b; // beta = w * b
        return this->beta;              // y_hat = beta
    }
    void step(int x, float prediction, float label) { // backward propagation
        // dE/dw = (y_hat - y) * b
        this->w = this->w
            - this->eta
            * (prediction - label)
            * this->b;
        // dE/dv = (y_hat - y) * w * sigmoid'(alpha) * x.
        // Note: w has already been updated above, so this gradient uses the
        // new w rather than the pre-update one from the derivation.
        this->alpha = this->v * x;
        this->v = this->v
            - this->eta
            * (prediction - label)
            * this->w
            * this->sigmoid(this->alpha) * (1 - this->sigmoid(this->alpha))
            * x;
    }
private:
    float sigmoid(float x) { return 1.0f / (1 + exp(-x)); }
    float v = 1, w = 1, alpha = 1, beta = 1, b = 1, eta;
};
int main() { // going to learn the linear relationship y = 2*x
    float loss, pred;
    Network model(0.01);
    cout << "x is " << 3 << " prediction is " << model.predict(3) << " label is " << 2*3 << endl;
    for (int epoch = 0; epoch < 500; epoch++) {
        loss = 0;
        for (int i = 0; i < 10; i++) {        // one pass over the samples x = 0..9
            pred = model.predict(i);
            loss += pow((pred - 2*i), 2) / 2; // accumulate E = (y_hat - y)^2 / 2
            model.step(i, pred, 2*i);         // per-sample (stochastic) update
        }
        loss /= 10;                           // mean loss over the 10 samples
        cout << "Epoch: " << epoch << " Loss:" << loss << endl;
    }
    cout << "x is " << 3 << " prediction is " << model.predict(3) << " label is " << 2*3 << endl;
    return 0;
}
With the initial network weights, the prediction for the data point x=3, y=6 is \(\hat{y} = 0.952534\).
After training for 500 epochs, the average loss drops to 7.82519, and the prediction for x=3, y=6 is \(\hat{y} = 11.242\).
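As a sanity check on the chain-rule derivation above (my own sketch, not part of the original post; the test point and step size are arbitrary choices), the analytic \(\partial E/\partial v\) can be compared against a central finite difference:

import math

# Finite-difference check of dE/dv at v = w = 1 on the sample x = 3, y = 6.
v, w, x, y = 1.0, 1.0, 3.0, 6.0
eps = 1e-6

def E(vv):
    b = 1 / (1 + math.exp(-vv * x))  # hidden activation b = sigmoid(v*x)
    y_hat = w * b                    # linear output layer
    return 0.5 * (y_hat - y) ** 2    # half squared error

b = 1 / (1 + math.exp(-v * x))
analytic = (w * b - y) * w * b * (1 - b) * x     # chain rule from the derivation
numeric = (E(v + eps) - E(v - eps)) / (2 * eps)  # central difference
print("analytic", analytic, "numeric", numeric)  # both come out near -0.684

The two values agree to several decimal places, confirming the sigmoid derivative was threaded through the chain rule correctly.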
# encoding: utf8
# A minimal neural network: single hidden layer, single node, single input, single output
import torch as t
import torch.nn as nn
import torch.optim as optim

class Model(nn.Module):
    def __init__(self, in_dim, out_dim):
        super(Model, self).__init__()
        self.hidden_layer = nn.Linear(in_dim, out_dim)

    def forward(self, x):
        out = self.hidden_layer(x)
        out = t.sigmoid(out)  # note: the final output passes through a sigmoid
        return out

if __name__ == '__main__':
    # X has shape (10, 1); Y has shape (10,), which MSELoss will broadcast against y_pred
    X, Y = [[i] for i in range(10)], [2*i for i in range(10)]
    X, Y = t.Tensor(X), t.Tensor(Y)
    model = Model(1, 1)
    optimizer = optim.SGD(model.parameters(), lr=0.01)
    criticism = nn.MSELoss(reduction='mean')
    y_pred = model(t.Tensor([[3]]))  # prediction before training
    print(y_pred.data)
    for i in range(500):
        optimizer.zero_grad()
        y_pred = model(X)            # full-batch forward pass
        loss = criticism(y_pred, Y)
        loss.backward()
        optimizer.step()             # one gradient step per epoch
        print(loss.data)
    y_pred = model(t.Tensor([[3]]))  # prediction after training
    print(y_pred.data)
With the initial network weights, the prediction for the data point x=3, y=6 is \(\hat{y} = 0.5164\).
After training for 500 epochs, the average loss drops to 98.8590, and the prediction for x=3, y=6 is \(\hat{y} = 0.8651\).
Surprisingly, the handwritten implementation learns better than the PyTorch one! The gap is not really about the optimizer, though (both do gradient descent; the C++ version updates after every sample, while the PyTorch version takes one full-batch step per epoch). The decisive difference is the architecture: the PyTorch Model passes its output through a sigmoid, so its predictions are confined to (0, 1) and can never reach labels as large as 18, whereas the handwritten network puts a linear output layer (beta = w*b) on top of the sigmoid hidden unit. In addition, Y has shape (10,) while y_pred has shape (10, 1), so MSELoss broadcasts them to a (10, 10) grid before averaging.
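For a fair comparison, the PyTorch model would need the same shape as the handwritten network: a sigmoid hidden unit followed by a linear output layer, with biases disabled so that \(\alpha = vx\) and \(\beta = wb\) hold exactly. A minimal sketch under those assumptions (class and attribute names are my own):

import torch as t
import torch.nn as nn

class MatchedModel(nn.Module):
    # Mirrors the handwritten C++ network: x -> v -> sigmoid -> w -> y_hat
    def __init__(self):
        super().__init__()
        self.hidden = nn.Linear(1, 1, bias=False)  # alpha = v * x
        self.output = nn.Linear(1, 1, bias=False)  # beta = w * b
    def forward(self, x):
        b = t.sigmoid(self.hidden(x))              # b = sigmoid(alpha)
        return self.output(b)                      # y_hat = beta

Reshaping the targets to t.Tensor([[2*i] for i in range(10)]) would also remove the broadcasting mismatch, since the targets would then share y_pred's (10, 1) shape.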
Original post: https://www.cnblogs.com/fengyubo/p/10554040.html