
    In [ ]:

    Or, if you’re comfortable at the command line, you can set it in your terminal with:

    and then restart Jupyter Notebook, and use the above line without editing it.
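As a sketch of the key-setup step (the environment-variable name `AZURE_SEARCH_KEY` is an assumption based on a typical Azure setup, and `'XXX'` is just a placeholder), the notebook line might look like:

```python
import os

# Sketch only: read the Bing Image Search key from an environment variable.
# AZURE_SEARCH_KEY is an assumed variable name; 'XXX' is a placeholder you
# would replace with your own key if the variable isn't set.
key = os.environ.get('AZURE_SEARCH_KEY', 'XXX')
```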

    Once you’ve set key, you can use search_images_bing. This function is provided by the small utils class included with the notebooks online. If you’re not sure where a function is defined, you can just type it in your notebook to find out:

    In [ ]:

search_images_bing

    Out[ ]:

<function fastbook.search_images_bing(key, term, min_sz=128, max_images=150)>

    In [ ]:

    results = search_images_bing(key, 'grizzly bear')
    ims = results.attrgot('contentUrl')
    len(ims)

    Out[ ]:

150

    We’ve successfully downloaded the URLs of 150 grizzly bears (or, at least, images that Bing Image Search finds for that search term).
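To make the attrgot step concrete without fastai, here is a plain-Python stand-in showing the kind of field extraction it performs (the sample records below are fabricated for illustration, not real search results):

```python
# Stand-in for results.attrgot('contentUrl'): pull one field from each
# search-result record. These records are made up for illustration.
results = [
    {'contentUrl': 'https://example.com/bear1.jpg'},
    {'contentUrl': 'https://example.com/bear2.jpg'},
]
ims = [r['contentUrl'] for r in results]
```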

    NB: there’s no way to be sure exactly what images a search like this will find. The results can change over time. We’ve heard of at least one case of a community member who found some unpleasant pictures of dead bears in their search results. You’ll receive whatever images are found by the web search engine. If you’re running this at work, or with kids, etc, then be cautious before you display the downloaded images.

    Let’s look at one:

    In [ ]:

    dest = 'images/grizzly.jpg'
    download_url(ims[0], dest)

    In [ ]:

    im = Image.open(dest)
    im.to_thumb(128,128)

    Out[ ]:
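fastai's to_thumb is a small convenience over PIL. Assuming it simply returns an aspect-preserving thumbnail copy (this sketch is not fastai's actual implementation), a plain-PIL version would be:

```python
from PIL import Image

def to_thumb_sketch(im, h, w=None):
    # Assumption: behaves like fastai's to_thumb, returning an
    # aspect-preserving thumbnail copy no larger than (w, h).
    w = w or h
    im = im.copy()        # thumbnail() works in place, so copy first
    im.thumbnail((w, h))
    return im
```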

    This seems to have worked nicely, so let’s use fastai’s download_images to download all the URLs for each of our search terms. We’ll put each in a separate folder:

    In [ ]:

    path = Path('bears')

    In [ ]:

    path.mkdir(exist_ok=True)
    for o in bear_types:
        dest = (path/o)
        dest.mkdir(exist_ok=True)
        results = search_images_bing(key, f'{o} bear')
        download_images(dest, urls=results.attrgot('contentUrl'))

    Our folder has image files, as we’d expect:

    In [ ]:

    fns = get_image_files(path)
    fns

    Out[ ]:
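For context, get_image_files recursively collects image files under a path. A rough pure-pathlib approximation (the function name and the extension set here are assumptions for illustration, not fastai's actual list) is:

```python
from pathlib import Path

def image_files_sketch(root):
    # Rough stand-in for fastai's get_image_files: recursively gather
    # files whose extension looks like an image format.
    exts = {'.jpg', '.jpeg', '.png', '.gif', '.bmp'}
    return sorted(p for p in Path(root).rglob('*')
                  if p.suffix.lower() in exts)
```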

    In [ ]:

    failed = verify_images(fns)
    failed

    Out[ ]:

(#11) [Path('bears/black/00000147.jpg'),Path('bears/black/00000057.jpg'),Path('bears/black/00000140.jpg'),Path('bears/black/00000129.jpg'),Path('bears/teddy/00000006.jpg'),Path('bears/teddy/00000048.jpg'),Path('bears/teddy/00000076.jpg'),Path('bears/teddy/00000125.jpg'),Path('bears/teddy/00000090.jpg'),Path('bears/teddy/00000075.jpg')...]

    To remove all the failed images, you can use unlink on each of them. Note that, like most fastai functions that return a collection, verify_images returns an object of type L, which includes the map method. This calls the passed function on each element of the collection:

    In [ ]:

    failed.map(Path.unlink)
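If you'd rather not rely on fastai's L, the same cleanup can be sketched with a plain loop (the failed list here is a fabricated stand-in path, not real output):

```python
from pathlib import Path

# Stand-in cleanup: delete each unreadable file. The 'failed' path
# below is fabricated for illustration.
failed = [Path('bears/black/00000147.jpg')]
for p in failed:
    p.unlink(missing_ok=True)  # remove the file; ignore it if absent
```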

      Sidebar: Getting Help in Jupyter Notebooks

      Jupyter notebooks are great for experimenting and immediately seeing the results of each function, but there is also a lot of functionality to help you figure out how to use different functions, or even directly look at their source code. For instance, if you type in a cell:

      ??verify_images

      a window will pop up with:

      Signature: verify_images(fns)
      Source:
      def verify_images(fns):
          "Find images in `fns` that can't be opened"
          return L(fns[i] for i,o in
                   enumerate(parallel(verify_image, fns)) if not o)
      File:      ~/git/fastai/fastai/vision/utils.py
      Type:      function

      This tells us what argument the function accepts (fns), then shows us the source code and the file it comes from. Looking at that source code, we can see it applies the function verify_image in parallel and only keeps the image files for which the result of that function is False, which is consistent with the doc string: it finds the images in fns that can’t be opened.
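As a serial, pure-PIL sketch of the same idea (no parallelism and no fastai L; the function name is made up, and this is not fastai's code):

```python
from pathlib import Path
from PIL import Image

def verify_images_sketch(fns):
    "Serial sketch of verify_images: return the paths that can't be opened."
    bad = []
    for fn in fns:
        try:
            Image.open(fn).verify()  # raises if the file isn't a valid image
        except Exception:
            bad.append(Path(fn))
    return bad
```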

      Here are some other features that are very useful in Jupyter notebooks:

      • At any point, if you don’t remember the exact spelling of a function or argument name, you can press Tab to get autocompletion suggestions.
      • When inside the parentheses of a function, pressing Shift and Tab simultaneously will display a window with the signature of the function and a short description. Pressing these keys twice will expand the documentation, and pressing them three times will open a full window with the same information at the bottom of your screen.
      • In a cell, typing ?func_name and executing will open a window with the signature of the function and a short description.
      • In a cell, typing ??func_name and executing will open a window with the signature of the function, a short description, and the source code.
      • Unrelated to the documentation but still very useful: to get help at any point if you get an error, type %debug in the next cell and execute to open the Python debugger, which will let you inspect the content of every variable.

      End sidebar

      One thing to be aware of in this process: as we discussed in <>, models can only reflect the data used to train them. And the world is full of biased data, which ends up reflected in, for example, Bing Image Search (which we used to create our dataset). For instance, let’s say you were interested in creating an app that could help users figure out whether they had healthy skin, so you trained a model on the results of searches for (say) “healthy skin.” <> shows you the kinds of results you would get.

      Figure: results of a web image search for "healthy skin"

      With this as your training data, you would end up not with a healthy skin detector, but a young white woman touching her face detector! Be sure to think carefully about the types of data that you might expect to see in practice in your application, and check carefully to ensure that all these types are reflected in your model’s source data. footnote:[Thanks to Deb Raji, who came up with the “healthy skin” example. See her paper “Actionable Auditing: Investigating the Impact of Publicly Naming Biased Performance Results of Commercial AI Products” for more fascinating insights into model bias.]