    1. If the dataset for your project is so big and complicated that working with it takes a significant amount of time, what should you do?
    2. Why do we concatenate the documents in our dataset before creating a language model?
    3. To use a standard fully connected network to predict the fourth word given the previous three words, what two tweaks do we need to make to our model?
    4. How can we share a weight matrix across multiple layers in PyTorch? (A minimal sketch follows this questionnaire.)
    5. Write a module that predicts the third word given the previous two words of a sentence, without peeking.
    6. What is a recurrent neural network?
    7. What is “hidden state”?
    8. What is the equivalent of hidden state in LMModel1?
    9. To maintain the state in an RNN, why is it important to pass the text to the model in order?
    10. What is an “unrolled” representation of an RNN?
    11. Why can maintaining the hidden state in an RNN lead to memory and performance problems? How do we fix this problem?
    12. What is “BPTT”?
    13. Write code to print out the first few batches of the validation set, including converting the token IDs back into English strings, as we showed for batches of IMDb data in the chapter on NLP.
    14. What are the downsides of predicting just one output word for each three input words?
    15. Why do we need a custom loss function for LMModel4?
    16. Why is the training of LMModel4 unstable?
    17. In the unrolled representation, we can see that a recurrent neural network actually has many layers. So why do we need to stack RNNs to get better results?
    18. Draw a representation of a stacked (multilayer) RNN.
    19. Why should we get better results in an RNN if we call detach less often? Why might this not happen in practice with a simple RNN?
    20. Why can a deep network result in very large or very small activations? Why does this matter?
    21. In a computer’s floating-point representation of numbers, which numbers are the most precise?
    22. Why do vanishing gradients prevent training?
    23. Why does it help to have two hidden states in the LSTM architecture? What is the purpose of each one?
    24. What are these two states called in an LSTM?
    25. What is tanh, and how is it related to sigmoid?
    26. What is the purpose of this code in LSTMCell:
    27. What does chunk do in PyTorch? (A small example follows this questionnaire.)
    28. Why can we use a higher learning rate for LMModel6?
    29. What are the three regularization techniques used in an AWD-LSTM model?
    30. What is “dropout”?
    31. Why do we scale the activations with dropout? Is this applied during training, inference, or both? (See the dropout sketch after this questionnaire.)
    32. What is the purpose of this line from Dropout: if not self.training: return x
    33. Experiment with bernoulli_ to understand how it works.
    34. How do you set your model in training mode in PyTorch? In evaluation mode?
    35. Write the equation for activation regularization (in math or code, as you prefer). How is it different from weight decay?
    36. Write the equation for temporal activation regularization (in math or code, as you prefer). Why wouldn’t we use this for computer vision problems? (A sketch of both regularization penalties follows this questionnaire.)
    37. What is “weight tying” in a language model?
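A minimal sketch for question 4 (and question 37, weight tying): the class and layer names below are illustrative rather than the chapter's exact code, but the mechanism is the standard PyTorch one of assigning the same Parameter object to two layers so that they literally share one weight matrix.

```python
import torch.nn as nn

class TiedLM(nn.Module):
    "Toy model whose input embedding and output projection share one weight matrix."
    def __init__(self, vocab_sz, n_hidden):
        super().__init__()
        self.i_h = nn.Embedding(vocab_sz, n_hidden)            # input -> hidden
        self.h_o = nn.Linear(n_hidden, vocab_sz, bias=False)   # hidden -> output
        # Weight tying: both layers now hold the *same* nn.Parameter,
        # so gradients from both uses accumulate into a single matrix.
        self.h_o.weight = self.i_h.weight

    def forward(self, x):
        return self.h_o(self.i_h(x))
```

This works because an Embedding(vocab_sz, n_hidden) weight and a Linear(n_hidden, vocab_sz) weight both have shape (vocab_sz, n_hidden).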
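For question 27: torch.chunk splits a tensor into a requested number of pieces along one dimension, which is how the output of a single linear layer can be divided into the four LSTM gates. A tiny standalone example:

```python
import torch

x = torch.arange(12).reshape(3, 4)
a, b = x.chunk(2, dim=1)        # split the 4 columns into two (3, 2) tensors
print(a.shape, b.shape)         # torch.Size([3, 2]) torch.Size([3, 2])
```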
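Questions 30 through 34 revolve around the dropout implementation. The sketch below is a minimal inverted-dropout module (the names are illustrative, not fastai's exact code); it shows the scaling asked about in question 31 and the early return asked about in question 32.

```python
import torch.nn as nn

class Dropout(nn.Module):
    def __init__(self, p=0.5):
        super().__init__()
        self.p = p

    def forward(self, x):
        # In evaluation mode dropout does nothing; this is the line question 32 asks about.
        if not self.training:
            return x
        # Keep each activation with probability 1-p ...
        mask = x.new_empty(x.shape).bernoulli_(1 - self.p)
        # ... and rescale the survivors by 1/(1-p) so the expected activation is the
        # same during training (with dropout) and inference (without it).
        return x * mask / (1 - self.p)
```

Calling model.train() or model.eval() flips the training attribute on every submodule, which is the switch question 34 refers to.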
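For checking your answers to questions 35 and 36, here is one way to write activation regularization (AR) and temporal activation regularization (TAR) in code; alpha and beta are hypothetical coefficient names, and acts is assumed to have shape (batch, sequence_length, hidden_size).

```python
def ar_tar_penalty(acts, alpha=2.0, beta=1.0):
    # AR: like weight decay, but applied to the activations rather than the weights.
    ar = alpha * acts.pow(2).mean()
    # TAR: penalize large jumps between consecutive time steps; this only makes
    # sense when one dimension is time, which is why it is not used for vision.
    tar = beta * (acts[:, 1:] - acts[:, :-1]).pow(2).mean()
    return ar + tar
```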
    1. In LMModel2, why can forward start with h=0? Why don’t we need to say h=torch.zeros(...)?
    2. Write the code for an LSTM from scratch (you may refer to the LSTM section of this chapter).
    3. Search the internet for the GRU architecture and implement it from scratch, and try training a model. See if you can get results similar to those we saw in this chapter. Compare your results to the results of PyTorch’s built-in GRU module.
    4. Take a look at the source code for AWD-LSTM in fastai, and try to map each of the lines of code to the concepts shown in this chapter.