PyTorch Deep Learning Practice (4): Building a Linear Regression Model

Bilibili video link: https://www.bilibili.com/video/BV1Y7411d7Ys?p=5

Deep learning with PyTorch breaks down into the following 4 steps:

  1. Prepare the dataset (using Dataset and DataLoader)
  2. Design the model (i.e., design the computation graph)
  3. Construct the loss function and optimizer
  4. Run the training cycle (the forward pass computes the loss, the backward pass computes the gradients, then update the weights)
The broadcasting mechanism

[slide omitted]

An introduction to the Linear class (very easy to understand):

[slide omitted]
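To make broadcasting concrete, here is a minimal sketch of my own (not from the lecture), mirroring the shapes of the dataset used below: x @ w yields a (3, 1) matrix, and the bias b of shape (1,) is stretched ("broadcast") across all three rows when added.

import torch

x = torch.tensor([[1.0], [2.0], [3.0]])  # shape (3, 1): 3 samples, 1 feature
w = torch.tensor([[2.0]])                # shape (1, 1): weight
b = torch.tensor([1.0])                  # shape (1,):   bias

y = x @ w + b       # b is broadcast over all 3 rows
print(y)            # tensor([[3.], [5.], [7.]])
print(y.shape)      # torch.Size([3, 1])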

Some introductions to Python magic methods:

https://blog.csdn.net/u012609509/article/details/78557650
https://blog.csdn.net/qq_40522828/article/details/89682452
https://zhuanlan.zhihu.com/p/57656253
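As a minimal illustration of the __call__ magic method (a toy class of my own, not PyTorch source code): defining __call__ lets an instance be invoked like a function, which is exactly the mechanism that lets model(x) run the model's forward() below.

class Adder:
    def __init__(self, offset):
        self.offset = offset

    def __call__(self, x):
        # executed when the instance is called like a function: adder(3)
        return x + self.offset

adder = Adder(10)
print(adder(3))  # prints 13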

Code:

import torch
import matplotlib.pyplot as plt

# prepare dataset
# x and y are matrices with 3 rows and 1 column: 3 samples, each with 1 feature
# rows = number of samples, columns = features
x_data = torch.Tensor([[1.0], [2.0], [3.0]])
y_data = torch.Tensor([[2.0], [4.0], [6.0]])

# for plotting the loss curve
epoch_list=[]
loss_list=[]
# design model using class
"""
关于torch.nn.Module的介绍
实现了__call__()函数,call中又有forward函数
our model class should be inherit from nn.Module, which is base class for all neural network modules.
member methods __init__() and forward() have to be implemented
class nn.linear contain two member Tensors: weight and bias
class nn.Linear has implemented the magic method __call__(),which enable the instance of the class can
be called just like a function.Normally the forward() will be called 

官网文档
https://pytorch.org/docs/1.7.0/generated/torch.nn.Linear.html#torch.nn.Linear
"""


class LinearModel(torch.nn.Module):
    def __init__(self):
        super(LinearModel, self).__init__()
        # Linear(1, 1): the arguments are the input (x) and output (y) feature dimensions;
        # in this dataset both x and y have 1 feature per sample
        # The layer's learnable parameters are w and b, accessible as linear.weight / linear.bias
        self.linear = torch.nn.Linear(1, 1)
    # forward() overrides the method of the same name in Module and must be implemented
    def forward(self, x):
        # linear implements __call__() (which calls forward()), so we can invoke it directly
        y_pred = self.linear(x)  # computes y = w * x + b
        return y_pred


model = LinearModel()

# construct loss and optimizer
# criterion = torch.nn.MSELoss(size_average=False)  # size_average is the deprecated spelling
criterion = torch.nn.MSELoss(reduction='sum')
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
# model.parameters() recursively collects every learnable parameter of the model
# (here just w and b, which nn.Linear has already initialized), so the optimizer
# knows which tensors to update
# Note: despite the name, this SGD call performs batch gradient descent here,
# because all 3 samples are fed as a single batch; SGD does not always mean
# per-sample stochastic gradient descent
# Other optimizers such as Adagrad, Adam, Adamax, ASGD, RMSprop, and Rprop can
# replace SGD, each behaving differently
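# For example (my illustration, not part of the lecture code), switching to
# Adam is a one-line change:
#     optimizer = torch.optim.Adam(model.parameters(), lr=0.01)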
# training cycle forward, backward, update
for epoch in range(100):
    y_pred = model(x_data)  # forward: predict (this line runs the forward pass)
    loss = criterion(y_pred, y_data)  # forward: compute the loss
    print(epoch, loss.item())
    # record values for the loss curve
    epoch_list.append(epoch)
    loss_list.append(loss.item())

    optimizer.zero_grad()  # gradients accumulate by default, so zero them before backward()
    loss.backward()  # backward: autograd computes all gradients automatically
    optimizer.step()  # update the parameters w and b, i.e. w = w - lr * grad

print('w = ', model.linear.weight.item())
print('b = ', model.linear.bias.item())

x_test = torch.Tensor([[4.0]])
y_test = model(x_test)
print('y_pred = ', y_test.data)

# Summary: implementing linear regression in PyTorch takes 4 steps:
# 1. prepare dataset
#
# 2. design model using Class  # the model's job is the forward pass, i.e. computing y_hat (the prediction)
#
# 3. construct loss and optimizer (using the PyTorch API); the loss drives backpropagation, and the optimizer uses the resulting gradients to update the parameters
#
# 4. training cycle (forward, backward, update)

# plot the loss curve
plt.plot(epoch_list,loss_list)
plt.xlabel("epoch")
plt.ylabel("loss")
plt.show()
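An optional refinement (my addition, not from the lecture): at inference time we do not need autograd to record operations, so the test prediction can be wrapped in torch.no_grad(), and .item() extracts the scalar value from the 1x1 output tensor.

with torch.no_grad():
    y_test = model(torch.Tensor([[4.0]]))
print('y_pred = ', y_test.item())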

Clean version of the code:

import torch
import matplotlib.pyplot as plt

x_data = torch.Tensor([[1.0], [2.0], [3.0]])
y_data = torch.Tensor([[2.0], [4.0], [6.0]])

epoch_list=[]
loss_list=[]

class LinearModel(torch.nn.Module):
    def __init__(self):
        super(LinearModel, self).__init__()
        self.linear = torch.nn.Linear(1, 1)
    def forward(self, x):
        y_pred = self.linear(x) 
        return y_pred


model = LinearModel()
criterion = torch.nn.MSELoss(reduction='sum')
optimizer = torch.optim.SGD(model.parameters(), lr=0.01) 

for epoch in range(100):
    y_pred = model(x_data)  
    loss = criterion(y_pred, y_data) 
    print(epoch, loss.item())

    epoch_list.append(epoch)
    loss_list.append(loss.item())

    optimizer.zero_grad() 
    loss.backward()  
    optimizer.step()  

print('w = ', model.linear.weight.item())
print('b = ', model.linear.bias.item())

x_test = torch.Tensor([[4.0]])
y_test = model(x_test)
print('y_pred = ', y_test.data)
plt.plot(epoch_list,loss_list)
plt.xlabel("epoch")
plt.ylabel("loss")
plt.show()