Jeremy Howard, Sylvain Gugger - Deep Learning for Coders with fastai and PyTorch - O'Reilly Media
The most important piece of this is the special method called __init__ (pronounced dunder init). In Python, any method surrounded in double underscores like this is considered special. It indicates that some extra behavior is associated with this method name. In the case of __init__, this is the method Python will call when your new object is created, so this is where you can set up any state that needs to be initialized upon object creation. Any parameters included when the user constructs an instance of your class will be passed to the __init__ method as parameters. Note that the first parameter to any method defined inside a class is self, so you can use this to set and get any attributes that you will need:

```python
ex = Example('Sylvain')
ex.say('nice to meet you')
```

'Hello Sylvain, nice to meet you.'
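The Example class itself is defined earlier in the book; a minimal version consistent with the output shown above would look like this:

```python
class Example:
    def __init__(self, a):
        # __init__ stores the constructor argument as an attribute on self
        self.a = a

    def say(self, x):
        # Methods can read attributes set in __init__ via self
        return f'Hello {self.a}, {x}.'

ex = Example('Sylvain')
ex.say('nice to meet you')  # → 'Hello Sylvain, nice to meet you.'
```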
Also note that creating a new PyTorch module requires inheriting from Module. Inheritance is an important object-oriented concept that we will not discuss in detail here; in short, it means that we can add additional behavior to an existing class. PyTorch already provides a Module class, which provides some basic foundations that we want to build on, so we add the name of this superclass after the name of the class that we are defining, as shown in the following examples. The final thing that you need to know to create a new PyTorch module is that when your module is called, PyTorch will call a method in your class called forward, and will pass along to it any parameters that are included in the call. Here is the class defining our dot product model:

```python
class DotProduct(Module):
    def __init__(self, n_users, n_movies, n_factors):
        self.user_factors = Embedding(n_users, n_factors)
        self.movie_factors = Embedding(n_movies, n_factors)

    def forward(self, x):
        users = self.user_factors(x[:,0])
        movies = self.movie_factors(x[:,1])
        return (users * movies).sum(dim=1)
```

If you haven't seen object-oriented programming before, don't worry; you won't need to use it much in this book. We are just mentioning this approach here because most online tutorials and documentation will use the object-oriented syntax.

Note that the input of the model is a tensor of shape batch_size x 2, where the first column (x[:,0]) contains the user IDs, and the second column (x[:,1]) contains the movie IDs. As explained before, we use the embedding layers to represent our matrices of user and movie latent factors:

```python
x,y = dls.one_batch()
x.shape
```

torch.Size([64, 2])

Now that we have defined our architecture and created our parameter matrices, we need to create a Learner to optimize our model. In the past, we have used special functions, such as cnn_learner, which set up everything for us for a particular application. Since we are doing things from scratch here, we will use the plain Learner class:

```python
model = DotProduct(n_users, n_movies, 50)
learn = Learner(dls, model, loss_func=MSELossFlat())
```

We are now ready to fit our model:

```python
learn.fit_one_cycle(5, 5e-3)
```

epoch  train_loss
0      1.326261
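fastai's Module and Embedding are thin wrappers over torch.nn, so as a check of the shapes involved, here is a sketch of the same model in plain PyTorch. The sizes below are made up for illustration, not taken from the book's dataset, and note that a plain nn.Module subclass needs an explicit super().__init__() call, which fastai's Module handles for you:

```python
import torch
from torch import nn

class DotProduct(nn.Module):
    def __init__(self, n_users, n_movies, n_factors):
        super().__init__()  # required for plain nn.Module subclasses
        self.user_factors = nn.Embedding(n_users, n_factors)
        self.movie_factors = nn.Embedding(n_movies, n_factors)

    def forward(self, x):
        users = self.user_factors(x[:, 0])    # (batch_size, n_factors)
        movies = self.movie_factors(x[:, 1])  # (batch_size, n_factors)
        return (users * movies).sum(dim=1)    # (batch_size,)

# Assumed sizes, for illustration only
model = DotProduct(n_users=100, n_movies=100, n_factors=50)
x = torch.randint(0, 100, (64, 2))  # a fake batch of (user ID, movie ID) pairs
preds = model(x)
preds.shape  # torch.Size([64]): one predicted rating per row of the batch
```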
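The Learner and fit_one_cycle hide a standard training loop. As a rough sketch of what happens under the hood, with made-up random data in place of dls, plain SGD in place of the one-cycle policy, and nn.MSELoss standing in for fastai's MSELossFlat:

```python
import torch
from torch import nn

# Made-up data standing in for dls: 640 (user ID, movie ID) pairs with ratings
n_users, n_movies, n_factors = 100, 100, 50
x = torch.randint(0, n_users, (640, 2))
y = torch.rand(640) * 5

user_factors = nn.Embedding(n_users, n_factors)
movie_factors = nn.Embedding(n_movies, n_factors)
# Small initial weights (roughly in the spirit of fastai's Embedding init)
user_factors.weight.data.normal_(0, 0.01)
movie_factors.weight.data.normal_(0, 0.01)

def model(batch):
    # Dot product of each user's and movie's latent factors: one score per row
    return (user_factors(batch[:, 0]) * movie_factors(batch[:, 1])).sum(dim=1)

params = list(user_factors.parameters()) + list(movie_factors.parameters())
opt = torch.optim.SGD(params, lr=5e-3)
loss_func = nn.MSELoss()

for epoch in range(5):
    for i in range(0, len(x), 64):  # mini-batches of 64
        loss = loss_func(model(x[i:i+64]), y[i:i+64])
        opt.zero_grad()
        loss.backward()
        opt.step()
    print(epoch, loss.item())
```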