This is one of the most influential pieces of research on neural networks, and it is essential reading for anyone working with AI-related technologies.
5.1 Training Data and Batching
We trained on the standard WMT 2014 English-German dataset consisting of about 4.5 million sentence pairs. Sentences were encoded using byte-pair encoding, which has a shared source-target vocabulary of about 37000 tokens. For English-French, we used the significantly larger WMT 2014 English-French dataset consisting of 36M sentences and split tokens into a 32000 word-piece vocabulary. Sentence pairs were batched together by approximate sequence length. Each training batch contained a set of sentence pairs containing approximately 25000 source tokens and 25000 target tokens.
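To make the batching scheme concrete, here is a minimal sketch of length-based batch packing under a budget of roughly 25000 source and 25000 target tokens per batch. This is not the authors' actual data pipeline; the function name, data layout, and sorting strategy are illustrative assumptions.

```python
from typing import Iterable, List, Tuple

def batch_by_length(
    pairs: Iterable[Tuple[List[int], List[int]]],  # (source token ids, target token ids)
    max_tokens_per_side: int = 25000,              # ~25000 source and ~25000 target tokens per batch
) -> Iterable[List[Tuple[List[int], List[int]]]]:
    """Group sentence pairs of similar length, then pack batches under a token budget."""
    # Sorting by length keeps sequences of similar size together, which reduces padding waste.
    sorted_pairs = sorted(pairs, key=lambda p: (len(p[0]), len(p[1])))

    batch, src_tokens, tgt_tokens = [], 0, 0
    for src, tgt in sorted_pairs:
        # Start a new batch once adding this pair would exceed either token budget.
        if batch and (src_tokens + len(src) > max_tokens_per_side
                      or tgt_tokens + len(tgt) > max_tokens_per_side):
            yield batch
            batch, src_tokens, tgt_tokens = [], 0, 0
        batch.append((src, tgt))
        src_tokens += len(src)
        tgt_tokens += len(tgt)
    if batch:
        yield batch
```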
5.2 Hardware and Schedule
We trained our models on one machine with 8 NVIDIA P100 GPUs. For our base models, using the hyperparameters described throughout the paper, each training step took about 0.4 seconds. We trained the base models for a total of 100,000 steps or 12 hours. For our big models (described on the bottom line of table 3), step time was 1.0 seconds. The big models were trained for 300,000 steps (3.5 days).

5.3 Optimizer
We used the Adam optimizer with β1 = 0.9, β2 = 0.98 and ε = 10^-9. We varied the learning rate over the course of training, according to the formula:

$$lrate = d_{\mathrm{model}}^{-0.5} \cdot \min\!\left(step\_num^{-0.5},\; step\_num \cdot warmup\_steps^{-1.5}\right) \tag{3}$$

This corresponds to increasing the learning rate linearly for the first warmup_steps training steps, and decreasing it thereafter proportionally to the inverse square root of the step number. We used warmup_steps = 4000.
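As a concrete illustration, equation (3) can be written in a few lines of plain Python; the sketch below is not the authors' code, the function name and the printed example steps are ours, and d_model = 512 (the base model's dimension) is assumed as the default. The returned value would be fed to the optimizer's learning rate at each training step.

```python
def transformer_lrate(step_num: int, d_model: int = 512, warmup_steps: int = 4000) -> float:
    """Equation (3): linear warmup for warmup_steps steps, then inverse-square-root decay."""
    step_num = max(step_num, 1)  # guard against 0 ** -0.5 on the very first step
    return d_model ** -0.5 * min(step_num ** -0.5, step_num * warmup_steps ** -1.5)

# With d_model = 512 the rate rises linearly to roughly 7e-4 at step 4000, then decays.
for step in (100, 1000, 4000, 100000):
    print(step, round(transformer_lrate(step), 6))
```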
5.4 Regularization
We employ three types of regularization during training:

Residual Dropout  We apply dropout to the output of each sub-layer, before it is added to the sub-layer input and normalized. In addition, we apply dropout to the sums of the embeddings and the positional encodings in both the encoder and decoder stacks. For the base model, we use a rate of P_drop = 0.1.

Label Smoothing  During training, we employed label smoothing of value ε_ls = 0.1. This hurts perplexity, as the model learns to be more unsure, but improves accuracy and BLEU score.
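To illustrate the two regularizers just described, here is a minimal NumPy sketch. It is not the authors' code: the helper names are ours, layer normalization is omitted, and the label-smoothing variant shown (spreading ε_ls uniformly over the non-target classes) is one common formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def residual_dropout(x: np.ndarray, sublayer_out: np.ndarray,
                     p_drop: float = 0.1, training: bool = True) -> np.ndarray:
    """Drop units of the sub-layer output, then add the residual (layer norm omitted)."""
    if training:
        keep = rng.random(sublayer_out.shape) >= p_drop
        sublayer_out = sublayer_out * keep / (1.0 - p_drop)  # inverted-dropout scaling
    return x + sublayer_out

def label_smoothed_cross_entropy(logits: np.ndarray, targets: np.ndarray,
                                 eps_ls: float = 0.1) -> float:
    """Cross-entropy against smoothed targets: 1 - eps_ls on the true class,
    eps_ls spread uniformly over the remaining classes."""
    vocab = logits.shape[-1]
    z = logits - logits.max(axis=-1, keepdims=True)                 # stabilized log-softmax
    log_probs = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    smooth = np.full_like(log_probs, eps_ls / (vocab - 1))
    smooth[np.arange(len(targets)), targets] = 1.0 - eps_ls
    return float(-(smooth * log_probs).sum(axis=-1).mean())

# Example: a batch of 3 positions over a toy vocabulary of 5 tokens.
logits = rng.normal(size=(3, 5))
targets = np.array([0, 3, 1])
print(label_smoothed_cross_entropy(logits, targets))
```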
Table 2: The Transformer achieves better BLEU scores than previous state-of-the-art models on the English-to-German and English-to-French newstest2014 tests at a fraction of the training cost.

Model                        BLEU EN-DE   BLEU EN-FR   Training Cost (FLOPs) EN-DE   Training Cost (FLOPs) EN-FR
ByteNet                      23.75        -            -                             -
Deep-Att + PosUnk            -            39.2         -                             1.0 · 10^20
GNMT + RL                    24.6         39.92        2.3 · 10^19                   1.4 · 10^20
ConvS2S                      25.16        40.46        9.6 · 10^18                   1.5 · 10^20
MoE                          26.03        40.56        2.0 · 10^19                   1.2 · 10^20
Deep-Att + PosUnk Ensemble   -            40.4         -                             8.0 · 10^20
GNMT + RL Ensemble           26.30        41.16        1.8 · 10^20                   1.1 · 10^21
ConvS2S Ensemble             26.36        41.29        7.7 · 10^19                   1.2 · 10^21
Transformer (base model)     27.3         38.1         3.3 · 10^18                   3.3 · 10^18
Transformer (big)            28.4         41.8         2.3 · 10^19                   2.3 · 10^19

6 Results