In [ ]:

    path = untar_data(URLs.IMAGENETTE_160)

    To access the image files, we can use get_image_files:

    In [ ]:

    t = get_image_files(path)
    t[0]

    Out[ ]:

    Path('/home/jhoward/.fastai/data/imagenette2-160/val/n03417042/n03417042_3752.JPEG')

    Or we could do the same thing using just Python’s standard library, with glob:

    In [ ]:

    from glob import glob
    files = L(glob(f'{path}/**/*.JPEG', recursive=True)).map(Path)
    files[0]

    Out[ ]:

    Path('/home/jhoward/.fastai/data/imagenette2-160/val/n03417042/n03417042_3752.JPEG')

    If you look at the source for get_image_files, you’ll see it uses Python’s os.walk; this is a faster and more flexible function than glob, so be sure to try it out.
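
    For comparison, here’s a minimal sketch of the same listing done with os.walk directly (it hard-codes the '.JPEG' extension, whereas get_image_files checks against the full set of image extensions):

        import os
        from pathlib import Path

        # Walk the tree once, collecting every .JPEG file found under path:
        jpegs = [Path(root)/f
                 for root, _, fnames in os.walk(path)
                 for f in fnames if f.endswith('.JPEG')]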

    We can open an image with the Python Imaging Library’s Image class:

    In [ ]:

    im = Image.open(files[0])
    im

    Out[ ]:

    [Image output: the opened image is displayed inline]
    In [ ]:

    im_t = tensor(im)
    im_t.shape

    Out[ ]:

    torch.Size([160, 213, 3])

    That’s going to be the basis of our independent variable. For our dependent variable, we can use Path.parent from pathlib. First we’ll need our vocab:

    In [ ]:

    lbls = files.map(Self.parent.name()).unique(); lbls

    Out[ ]:

    (#10) ['n03417042','n03445777','n03888257','n03394916','n02979186','n03000684','n03425413','n01440764','n03028079','n02102040']

    In [ ]:

    v2i = lbls.val2idx(); v2i

    Out[ ]:

    {'n03417042': 0,
     'n03445777': 1,
     'n03888257': 2,
     'n03394916': 3,
     'n02979186': 4,
     'n03000684': 5,
     'n03425413': 6,
     'n01440764': 7,
     'n03028079': 8,
     'n02102040': 9}

    That’s all the pieces we need to put together our Dataset.

    A Dataset in PyTorch can be anything that supports indexing (__getitem__) and len:

    In [ ]:

    class Dataset:
        def __init__(self, fns): self.fns=fns
        def __len__(self): return len(self.fns)
        def __getitem__(self, i):
            # Open, resize, and convert the image, then scale pixels to [0,1];
            # the label index comes from the parent directory name.
            im = Image.open(self.fns[i]).resize((64,64)).convert('RGB')
            y = v2i[self.fns[i].parent.name]
            return tensor(im).float()/255, tensor(y)
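
    As an aside, even a plain Python list satisfies this indexing-plus-len protocol, which is why such a small class is all we need:

        ds = ['a', 'b', 'c']
        len(ds), ds[1]   # (3, 'b')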

    We need a list of training and validation filenames to pass to Dataset.__init__:

    In [ ]:

    train_filt = L(o.parent.parent.name=='train' for o in files)
    train,valid = files[train_filt],files[~train_filt]
    len(train),len(valid)

    Out[ ]:

    (9469, 3925)
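
    Note that files[train_filt] relies on fastcore’s L, which can be indexed with a boolean mask and negated with ~. With plain Python lists, an equivalent split would be a pair of comprehensions:

        # Equivalent split using only the standard library:
        train = [o for o in files if o.parent.parent.name == 'train']
        valid = [o for o in files if o.parent.parent.name != 'train']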

    Now we can try it out:

    In [ ]:

    train_ds,valid_ds = Dataset(train),Dataset(valid)
    x,y = train_ds[0]
    x.shape,y

    Out[ ]:

    (torch.Size([64, 64, 3]), tensor(0))

    In [ ]:

    show_image(x, title=lbls[y]);

    [Image: the first training item, shown with its label as the title]

    As you see, our dataset is returning the independent and dependent variables as a tuple, which is just what we need. We’ll need to be able to collate these into a mini-batch. Generally this is done with torch.stack, which is what we’ll use here:

    In [ ]:

    def collate(idxs, ds):
        xb,yb = zip(*[ds[i] for i in idxs])
        return torch.stack(xb),torch.stack(yb)
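
    torch.stack joins a list of same-shaped tensors along a new leading dimension, which is exactly what turns a list of individual items into a batch. For instance:

        import torch

        # Two (2,3) tensors stack into one (2,2,3) tensor:
        torch.stack([torch.zeros(2,3), torch.ones(2,3)]).shape   # torch.Size([2, 2, 3])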

    Here’s a mini-batch with two items, for testing our collate:

    In [ ]:

    x,y = collate([1,2], train_ds)
    x.shape,y
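
    Since each item is a (64, 64, 3) image tensor paired with a scalar label, stacking two items should give an x of shape torch.Size([2, 64, 64, 3]) and a y of shape torch.Size([2]).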

    Now that we have a dataset and a collation function, we’re ready to create a DataLoader. We’ll add two more things here: an optional shuffle for the training set, and a ProcessPoolExecutor to do our preprocessing in parallel. A parallel data loader is very important, because opening and decoding a JPEG image is a slow process: a single CPU core can’t decode images fast enough to keep a modern GPU busy. Here’s our DataLoader class:

    In [ ]:

    class DataLoader:
        def __init__(self, ds, bs=128, shuffle=False, n_workers=1):
            self.ds,self.bs,self.shuffle,self.n_workers = ds,bs,shuffle,n_workers
        def __len__(self): return (len(self.ds)-1)//self.bs+1
        def __iter__(self):
            # Build a (possibly shuffled) index list, cut it into batches,
            # and collate the batches in parallel worker processes.
            idxs = L.range(self.ds)
            if self.shuffle: idxs = idxs.shuffle()
            chunks = [idxs[n:n+self.bs] for n in range(0, len(self.ds), self.bs)]
            with ProcessPoolExecutor(self.n_workers) as ex:
                yield from ex.map(collate, chunks, ds=self.ds)
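
    One detail worth noting: ex.map(collate, chunks, ds=self.ds) works because the ProcessPoolExecutor here is fastcore’s, whose map forwards extra keyword arguments to the mapped function. With only the standard library you’d bind the argument yourself, for instance with functools.partial; here’s a sketch of the last two lines of __iter__ under that assumption:

        from functools import partial
        from concurrent.futures import ProcessPoolExecutor

        # Bind ds up front, since the stdlib map doesn't forward kwargs:
        with ProcessPoolExecutor(self.n_workers) as ex:
            yield from ex.map(partial(collate, ds=self.ds), chunks)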

    Let’s try it out with our training and validation datasets:

    In [ ]:

    n_workers = min(16, defaults.cpus)
    train_dl = DataLoader(train_ds, bs=128, shuffle=True, n_workers=n_workers)
    valid_dl = DataLoader(valid_ds, bs=256, shuffle=False, n_workers=n_workers)
    xb,yb = first(train_dl)
    xb.shape,yb.shape,len(train_dl)

    Out[ ]:

    (torch.Size([128, 64, 64, 3]), torch.Size([128]), 74)
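
    The 74 is the number of batches per epoch: 9,469 training items in batches of 128 give (9469-1)//128+1 = 74, i.e. 73 full batches plus one final partial batch.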

    This data loader is not much slower than PyTorch’s, but it’s far simpler. So if you’re debugging a complex data loading process, don’t be afraid to try doing things manually to help you see exactly what’s going on.

    For normalization, we’ll need image statistics. Generally it’s fine to calculate these on a single training mini-batch, since precision isn’t needed here:

    In [ ]:

    stats = [xb.mean((0,1,2)),xb.std((0,1,2))]
    stats

    Out[ ]:

    [tensor([0.4544, 0.4453, 0.4141]), tensor([0.2812, 0.2766, 0.2981])]

    Our Normalize class just needs to store these stats and apply them (to see why the to_device is needed, try commenting it out, and see what happens later in this notebook):

    In [ ]:

    class Normalize:
        def __init__(self, stats): self.stats=stats
        def __call__(self, x):
            # Move the stats to the batch's device the first time we see it
            if x.device != self.stats[0].device:
                self.stats = to_device(self.stats, x.device)
            return (x-self.stats[0])/self.stats[1]

    We always like to test everything we build in a notebook, as soon as we build it:

    In [ ]:

    norm = Normalize(stats)
    def tfm_x(x): return norm(x).permute((0,3,1,2))
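
    Note the order of operations in tfm_x: Normalize runs while the batch is still NHWC, so the shape-(3,) stats tensors broadcast against the trailing channel axis; if we permuted to NCHW first, the stats would no longer line up with the channel axis.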

    In [ ]:

    t = tfm_x(x)
    t.mean((0,2,3)),t.std((0,2,3))

    Out[ ]:

    [Output elided: after tfm_x, the per-channel means should be roughly 0 and the standard deviations roughly 1.]

    Here tfm_x isn’t just applying Normalize, but is also permuting the axis order from NHWC to NCHW (see <> if you need a reminder of what these acronyms refer to). PIL uses HWC axis order, which we can’t use with PyTorch, hence the need for this permutation.
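
    As a quick sanity check of what that permute((0,3,1,2)) does to the axes, here’s a standalone sketch with a dummy tensor:

        import torch

        nhwc = torch.zeros(128, 64, 64, 3)   # batch, height, width, channels
        nchw = nhwc.permute(0, 3, 1, 2)      # batch, channels, height, width
        nchw.shape                           # torch.Size([128, 3, 64, 64])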

    That’s all we need for the data for our model. So now we need the model itself!