Image Classification with LeNet on the MNIST Dataset

    This tutorial is written for Paddle 2.1. If your environment runs a different version, please first install Paddle 2.1 by following the official installation guide.

    1. Environment Setup
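    Import Paddle and check the installed version. The version-check cell was lost in this copy, so the code below is a minimal reconstruction; the 2.1.0 output line is from the original run:

        import paddle
        # print the installed Paddle version; this tutorial expects 2.1.0
        print(paddle.__version__)

        2.1.0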

    2. Data Loading

    The MNIST dataset of handwritten digits contains 60,000 training examples and 10,000 test examples. The digits have been size-normalized and centered in fixed-size 28x28-pixel images, with grayscale values ranging from 0 to 255. The official home of the dataset is http://yann.lecun.com/exdb/mnist

    We use paddle.vision.datasets.MNIST, which ships with the Paddle framework, to load the MNIST dataset.

        import paddle
        from paddle.vision.transforms import Compose, Normalize

        # use a transform to normalize the dataset: map pixel values from [0, 255] to [-1, 1]
        transform = Compose([Normalize(mean=[127.5],
                                       std=[127.5],
                                       data_format='CHW')])

        # download (if needed) and load the train/test splits, applying the transform
        print('download training data and load training data')
        train_dataset = paddle.vision.datasets.MNIST(mode='train', transform=transform)
        test_dataset = paddle.vision.datasets.MNIST(mode='test', transform=transform)
        print('load finished')
        download training data and load training data
        Cache file /home/aistudio/.cache/paddle/dataset/mnist/train-images-idx3-ubyte.gz not found, downloading https://dataset.bj.bcebos.com/mnist/train-images-idx3-ubyte.gz
        Begin to download
        Download finished
        Cache file /home/aistudio/.cache/paddle/dataset/mnist/train-labels-idx1-ubyte.gz not found, downloading https://dataset.bj.bcebos.com/mnist/train-labels-idx1-ubyte.gz
        Begin to download
        ........
        Download finished
        Cache file /home/aistudio/.cache/paddle/dataset/mnist/t10k-images-idx3-ubyte.gz not found, downloading https://dataset.bj.bcebos.com/mnist/t10k-images-idx3-ubyte.gz
        Begin to download
        Download finished
        Cache file /home/aistudio/.cache/paddle/dataset/mnist/t10k-labels-idx1-ubyte.gz not found, downloading https://dataset.bj.bcebos.com/mnist/t10k-labels-idx1-ubyte.gz
        Begin to download
        ..
        Download finished
        load finished
    Take a look at one sample from the training set:

        import numpy as np
        import matplotlib.pyplot as plt

        train_data0, train_label_0 = train_dataset[0][0], train_dataset[0][1]
        # samples are in CHW layout with shape (1, 28, 28); reshape to 28x28 for plotting
        train_data0 = train_data0.reshape([28, 28])
        plt.figure(figsize=(2, 2))
        plt.imshow(train_data0, cmap=plt.cm.binary)
        print('train_data0 label is: ' + str(train_label_0))
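    As a quick, optional sanity check (this cell is not in the original notebook), you can confirm the split sizes and the sample layout described above:

        # expected: 60000 training samples and 10000 test samples
        print(len(train_dataset), len(test_dataset))
        # expected: (1, 28, 28), i.e. one channel in CHW layout
        print(train_dataset[0][0].shape)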

    3. Build the Network

    Use the APIs under paddle.nn, such as Conv2D, MaxPool2D, and Linear, to build LeNet.

        import paddle
        import paddle.nn.functional as F

        class LeNet(paddle.nn.Layer):
            def __init__(self):
                super(LeNet, self).__init__()
                self.conv1 = paddle.nn.Conv2D(in_channels=1, out_channels=6, kernel_size=5, stride=1, padding=2)
                self.max_pool1 = paddle.nn.MaxPool2D(kernel_size=2, stride=2)
                self.conv2 = paddle.nn.Conv2D(in_channels=6, out_channels=16, kernel_size=5, stride=1)
                self.max_pool2 = paddle.nn.MaxPool2D(kernel_size=2, stride=2)
                self.linear1 = paddle.nn.Linear(in_features=16*5*5, out_features=120)
                self.linear2 = paddle.nn.Linear(in_features=120, out_features=84)
                self.linear3 = paddle.nn.Linear(in_features=84, out_features=10)

            def forward(self, x):
                x = self.conv1(x)
                x = F.relu(x)
                x = self.max_pool1(x)
                x = self.conv2(x)
                x = F.relu(x)
                x = self.max_pool2(x)
                # flatten the (N, 16, 5, 5) feature maps to (N, 400) for the linear layers
                x = paddle.flatten(x, start_axis=1, stop_axis=-1)
                x = self.linear1(x)
                x = F.relu(x)
                x = self.linear2(x)
                x = F.relu(x)
                x = self.linear3(x)
                return x
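    As an optional check (not part of the original tutorial), paddle.summary prints a layer-by-layer summary and confirms the 16*5*5 flatten size; the input size (1, 1, 28, 28) assumes a single one-channel 28x28 image:

        # sanity check: summarize LeNet for one sample of shape (1, 28, 28)
        paddle.summary(LeNet(), (1, 1, 28, 28))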

    4. Approach 1: Training and Prediction with the High-Level API

    Build a Model instance provided by Paddle and use its built-in training and testing interfaces to train and test the model quickly.

    4.1 Train the Model with Model.fit

        from paddle.metric import Accuracy

        model = paddle.Model(LeNet())   # wrap the network in a Model instance
        optim = paddle.optimizer.Adam(learning_rate=0.001, parameters=model.parameters())

        # configure the model with optimizer, loss, and metric
        model.prepare(
            optim,
            paddle.nn.CrossEntropyLoss(),
            Accuracy()
            )
        # train the model
        model.fit(train_dataset,
                  epochs=2,
                  batch_size=64,
                  verbose=1
                  )
        The loss value printed in the log is the current step, and the metric is the average value of previous steps.
        Epoch 1/2
        Epoch 2/2
        step 938/938 [==============================] - loss: 0.0127 - acc: 0.9844 - 9ms/step

    4.2 Evaluate the Model with Model.evaluate

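    The cell that produced the log below is missing from this copy; a minimal call that matches it, assuming the model prepared above, is:

        # evaluate on the test split; batch_size and verbose mirror the training setup
        model.evaluate(test_dataset, batch_size=64, verbose=1)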
        Eval begin...
        step 157/157 [==============================] - loss: 1.2412e-04 - acc: 0.9872 - 8ms/step
        Eval samples: 10000
        {'loss': [0.0001241174], 'acc': 0.9872}
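    If you want raw class scores rather than aggregate metrics, the high-level API also offers Model.predict; a minimal sketch, not shown in the original run:

        # returns one entry per model output; results[0] is a list of
        # per-batch logit arrays, each of shape (batch_size, 10)
        results = model.predict(test_dataset)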

    5. Approach 2: Training and Prediction with the Basic API

    5.1 Model Training

    With the network built, we can train the model: first construct train_loader to load the training data, then define a train function that sets up the loss function, loads the data batch by batch, and trains the model.

        import paddle.nn.functional as F

        # load the training set with batch_size set to 64
        train_loader = paddle.io.DataLoader(train_dataset, batch_size=64, shuffle=True)

        def train(model):
            model.train()
            epochs = 2
            # use Adam as the optimizer
            optim = paddle.optimizer.Adam(learning_rate=0.001, parameters=model.parameters())
            for epoch in range(epochs):
                for batch_id, data in enumerate(train_loader()):
                    x_data = data[0]
                    y_data = data[1]
                    predicts = model(x_data)
                    # compute the loss and accuracy
                    loss = F.cross_entropy(predicts, y_data)
                    acc = paddle.metric.accuracy(predicts, y_data)
                    loss.backward()
                    if batch_id % 300 == 0:
                        print("epoch: {}, batch_id: {}, loss is: {}, acc is: {}".format(epoch, batch_id, loss.numpy(), acc.numpy()))
                    optim.step()
                    optim.clear_grad()

        model = LeNet()
        train(model)
        epoch: 0, batch_id: 0, loss is: [3.0527446], acc is: [0.09375]
        epoch: 0, batch_id: 300, loss is: [0.05049332], acc is: [1.]
        epoch: 0, batch_id: 600, loss is: [0.109704], acc is: [0.953125]
        ...

    5.2 Model Evaluation

    After training, we need to verify the model's performance: load the test dataset, run the trained model over it, and compute the loss and accuracy.

        # load the test dataset
        test_loader = paddle.io.DataLoader(test_dataset, places=paddle.CPUPlace(), batch_size=64)

        def test(model):
            model.eval()
            for batch_id, data in enumerate(test_loader()):
                x_data = data[0]
                y_data = data[1]
                # get predictions, then compute the loss and accuracy
                predicts = model(x_data)
                loss = F.cross_entropy(predicts, y_data)
                acc = paddle.metric.accuracy(predicts, y_data)
                if batch_id % 20 == 0:
                    print("batch_id: {}, loss is: {}, acc is: {}".format(batch_id, loss.numpy(), acc.numpy()))

        test(model)

    End of Approach 2

    That concludes Approach 2. With the basic API you can see every step of training and testing clearly, but this approach is more involved. Paddle therefore also provides Approach 1: using the high-level API to complete model training and prediction. Compared with the basic API, the high-level API gets training and testing done faster and more efficiently.

    6. Summary