Learned from https://pytorch.org/tutorials/beginner/pytorch_with_examples.html
Concepts
A PyTorch Tensor is conceptually the same as a NumPy array: an n-dimensional array. Unlike a NumPy array, however, a Tensor can run computations on a GPU, and it can track the computational graph and gradients.
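To make this concrete, here is a minimal sketch (not from the original post) showing a tensor placed on the GPU with gradient tracking enabled:

import torch

# A tensor behaves like a NumPy array, but it can live on the GPU and
# record the operations applied to it for automatic differentiation.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

t = torch.tensor([1.0, 2.0, 3.0], device=device, requires_grad=True)
s = (t ** 2).sum()  # builds a computational graph
s.backward()        # fills t.grad with ds/dt = 2 * t
print(t.grad)       # tensor([2., 4., 6.])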
Fitting a function with manual gradient descent
We use a cubic function to fit an arbitrary function:

$\hat{y} = a + bx + cx^2 + dx^3$

Define the loss function $L = \sum(\hat{y} - y)^2$. The gradients are then:

$\frac{\partial L}{\partial \hat{y}} = 2(\hat{y} - y)$

$\frac{\partial L}{\partial a} = \sum 2(\hat{y} - y)$

$\frac{\partial L}{\partial b} = \sum 2(\hat{y} - y)\,x$

$\frac{\partial L}{\partial c} = \sum 2(\hat{y} - y)\,x^2$

$\frac{\partial L}{\partial d} = \sum 2(\hat{y} - y)\,x^3$
Code
import torch
import math

dtype = torch.float
device = torch.device("cuda:0")  # Run on GPU

# Create random input and output data
x = torch.linspace(-math.pi, math.pi, 2000, device=device, dtype=dtype)
y = torch.sin(x)

# Randomly initialize weights
a = torch.randn((), device=device, dtype=dtype)
b = torch.randn((), device=device, dtype=dtype)
c = torch.randn((), device=device, dtype=dtype)
d = torch.randn((), device=device, dtype=dtype)

learning_rate = 1e-6
for t in range(2000):
    # Forward pass: compute predicted y
    y_pred = a + b * x + c * x ** 2 + d * x ** 3

    # Compute and print loss
    loss = (y_pred - y).pow(2).sum().item()
    if t % 100 == 99:
        print(t, loss)

    # Backprop to compute gradients of a, b, c, d with respect to loss
    grad_y_pred = 2.0 * (y_pred - y)
    grad_a = grad_y_pred.sum()
    grad_b = (grad_y_pred * x).sum()
    grad_c = (grad_y_pred * x ** 2).sum()
    grad_d = (grad_y_pred * x ** 3).sum()

    # Update weights using gradient descent
    a -= learning_rate * grad_a
    b -= learning_rate * grad_b
    c -= learning_rate * grad_c
    d -= learning_rate * grad_d

print(f'Result: y = {a.item()} + {b.item()} x + {c.item()} x^2 + {d.item()} x^3')
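Before moving on, the hand-derived formulas above can be sanity-checked against autograd. This is a hypothetical verification snippet, not part of the original code:

import torch
import math

x = torch.linspace(-math.pi, math.pi, 2000)
y = torch.sin(x)

# Same cubic model, but let autograd track the parameters
a, b, c, d = (torch.randn((), requires_grad=True) for _ in range(4))
y_pred = a + b * x + c * x ** 2 + d * x ** 3
loss = (y_pred - y).pow(2).sum()
loss.backward()

# Compare autograd's gradient for d with the manual formula from above
grad_d_manual = (2.0 * (y_pred - y) * x ** 3).sum()
print(torch.allclose(d.grad, grad_d_manual))  # True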
Fitting a function with automatic gradient descent
We build the network with PyTorch's nn module. To fit with a cubic function, we need a network consisting of a single linear layer with three inputs and one output. That is,

$\hat{y} = \sum_{i=1}^{3} w_i x^i + b$
model = torch.nn.Sequential(
    torch.nn.Linear(3, 1),   # three input features, one output
    torch.nn.Flatten(0, 1)   # flatten the (N, 1) output to shape (N,)
)

Since the first layer of our network takes three inputs $x, x^2, x^3$, we need to preprocess the data first:
x = torch.linspace(-math.pi, math.pi, 2000)
y = torch.sin(x)

p = torch.tensor([1, 2, 3])
xx = x.unsqueeze(-1).pow(p)
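The unsqueeze/pow combination relies on broadcasting: x has shape (2000,), unsqueeze(-1) turns it into (2000, 1), and raising it to the powers in p (shape (3,)) yields a (2000, 3) matrix whose columns are x, x², x³. A quick illustrative check, continuing from the definitions above (not in the original post):

print(x.shape, xx.shape)                 # torch.Size([2000]) torch.Size([2000, 3])
print(torch.allclose(xx[:, 1], x ** 2))  # True: the second column is x squared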
Then we can get predictions by calling the model directly:

y_pred = model(xx)  # y_pred is also a tensor

Loss function
loss_fn = torch.nn.MSELoss(reduction='sum')  # define the loss: squared error, summed

loss = loss_fn(y_pred, y)  # compute the loss

model.zero_grad()  # first zero out the model's existing gradient information
loss.backward()    # backpropagate to compute the gradients
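After loss.backward(), each parameter's .grad attribute holds the gradient of the loss with respect to that parameter. An illustrative check, continuing from the code above (not in the original post):

print(model[0].weight.grad)  # gradients for the three weights, shape (1, 3)
print(model[0].bias.grad)    # gradient for the bias, shape (1,)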
Complete code

import torch
import math

x = torch.linspace(-math.pi, math.pi, 2000)
y = torch.sin(x)

p = torch.tensor([1, 2, 3])
xx = x.unsqueeze(-1).pow(p)

model = torch.nn.Sequential(
    torch.nn.Linear(3, 1),
    torch.nn.Flatten(0, 1)
)

loss_fn = torch.nn.MSELoss(reduction='sum')

learning_rate = 1e-6
for t in range(2000):
    y_pred = model(xx)
    loss = loss_fn(y_pred, y)
    if t % 100 == 99:
        print(t, loss.item())

    model.zero_grad()
    loss.backward()

    with torch.no_grad():  # update the parameters with gradient descent
        for param in model.parameters():
            param -= learning_rate * param.grad

linear_layer = model[0]
print(f'Result: y = {linear_layer.bias.item()} + {linear_layer.weight[:, 0].item()} x + {linear_layer.weight[:, 1].item()} x^2 + {linear_layer.weight[:, 2].item()} x^3')
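As a side note, the manual update inside torch.no_grad() can be replaced by an optimizer from torch.optim. A minimal sketch of the equivalent training step, assuming the same model, xx, y, and loss_fn as above (this variant is not in the original post):

optimizer = torch.optim.SGD(model.parameters(), lr=1e-6)

for t in range(2000):
    y_pred = model(xx)
    loss = loss_fn(y_pred, y)
    optimizer.zero_grad()  # replaces model.zero_grad()
    loss.backward()
    optimizer.step()       # replaces the manual parameter update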