Created at 4pm, Apr 4
Ms-RAG · Artificial Intelligence
Linear Attention Sequence Parallelism
Contract ID: ABBAYmWaVltQNK5CAHJEPbBqGgzt04APLWzNt0EczOA
File Type: PDF
Entry Count: 67
Embed. Model: jina_embeddings_v2_base_en
Index Type: hnsw

Weigao Sun*1, Zhen Qin*2, Dong Li1, Xuyang Shen1, Yu Qiao1, Yiran Zhong1

Abstract

Sequence Parallel (SP) serves as a prevalent strategy to handle long sequences that exceed the memory limit of a single GPU. However, existing SP methods do not take advantage of linear attention features, resulting in sub-optimal parallelism efficiency and usability for linear attention-based language models. In this paper, we introduce Linear Attention Sequence Parallel (LASP), an efficient SP method tailored to linear attention-based language models. Specifically, we design an efficient point-to-point communication mechanism to leverage the right-product kernel trick of linear attention, which sharply decreases the communication overhead of SP. We also enhance the practical efficiency of LASP by performing kernel fusion and intermediate state caching, making the implementation of LASP hardware-friendly on GPU clusters. Furthermore, we meticulously ensure the compatibility of sequence-level LASP with all types of batch-level data parallel methods, which is vital for distributed training on large clusters with long sequences and large batches. We conduct extensive experiments on two linear attention-based models with varying sequence lengths and GPU cluster sizes. LASP scales sequence length up to 4096K using 128 A100 80G GPUs on 1B models, which is 8× longer than existing SP methods while being significantly faster. The code is available at https://github.com/OpenNLPLab/LASP.
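The communication saving claimed above comes from attention's associativity: computing $Q(K^\top V)$ instead of $(QK^\top)V$ replaces the $n \times n$ attention matrix with a $d \times d$ state, which is the kind of small quantity an SP rank can cheaply pass to its neighbor. Below is a minimal NumPy sketch of this right-product trick, not the LASP implementation; the ELU+1 feature map and the non-causal setting are illustrative assumptions.

import numpy as np

# Feature map phi: ELU(x) + 1, a common positivity-preserving choice in the
# linear attention literature (an assumption here, not specified by LASP).
def phi(x):
    return np.where(x > 0, x + 1.0, np.exp(x))

n, d = 1024, 64                      # sequence length, head dimension
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))

# Left product: (phi(Q) phi(K)^T) V materializes an n x n matrix -> O(n^2 d).
left = (phi(Q) @ phi(K).T) @ V

# Right product: phi(Q) (phi(K)^T V) keeps only a d x d KV state -> O(n d^2).
kv_state = phi(K).T @ V
right = phi(Q) @ kv_state

assert np.allclose(left, right)      # associativity: both orders agree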

We first calculate the mean of all training target solutions and use this mean to predict the testing solutions of the three equations separately. The relative L2 errors for the degree two, three, and four porous media equations are 15.43%, 17.93%, and 20.29%, respectively. We then present the numerical results in Table 2.

Operator                  Degree 2   Degree 3   Degree 4
Single DON (100% data)      2.52%      2.06%      2.60%
MODNO (100% data)           1.14%      2.03%      2.54%
MODNO (90% data)            1.30%      2.11%      2.79%
MODNO (80% data)            1.38%      2.25%      3.39%

Table 2: Numerical results for the nonlinear porous media equations with different degrees. The Single DON row trains three individual DONs with identical structures on the full data, one for each operator.
id: 990916ed3aea2240a017077294c3e272 - page: 10
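For reference, the mean-of-training-targets baseline and the relative L2 error quoted above can be computed as in the following sketch; the array shapes and names are hypothetical.

import numpy as np

# Hypothetical data: rows are discretized solutions on a shared grid.
train_targets = np.random.randn(900, 128)   # (n_train, n_grid)
test_targets = np.random.randn(100, 128)    # (n_test, n_grid)

# Constant baseline: predict every test solution with the training mean.
baseline = train_targets.mean(axis=0)

# Relative L2 error ||u_pred - u_true|| / ||u_true||, averaged over the test set.
num = np.linalg.norm(test_targets - baseline, axis=1)
den = np.linalg.norm(test_targets, axis=1)
print(f"relative L2 error: {(num / den).mean():.2%}")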
4.2.1 Experiment Two Results Analysis

Observations from Table 2 highlight that MODNO surpasses the accuracy of training a single DON across three distinct operators. Even for the equations of degree two and three, MODNO maintains superior accuracy compared to training a single DON. These findings are consistent with most of the outcomes from the other numerical experiments, showing one potential benefit for the entire MOL (multi-operator learning) community.

4.3 Example Three

In this section, we study applying a single network to learn the parabolic equation
$$u_t + u_{xx} = 1, \quad x \in [0, 2], \; t \in [0, 0.5], \tag{9}$$
the Viscous Burgers equation
$$u_t + \left(\frac{u^2}{2}\right)_x = \nu u_{xx}, \quad x \in [0, 2], \; t \in [0, 1], \tag{10}$$
where $\nu = 0.1$, and the Burgers equation,
id: 026d15d55017f9dc00cd1d89ebdc8d7d - page: 11
… $t \in [0, 0.4]$. All equations are equipped with periodic boundary conditions. We study the mapping from the initial condition to the solutions. The initial conditions are generated as $w_1 \mathcal{N}(\mu_1, \sigma_1^2) + w_2 \mathcal{N}(\mu_2, \sigma_2^2)$, adjusted to be periodic, with $w_1, w_2 \sim U(0, 0.5)$, $\mu_1, \mu_2 \sim U(2, 4)$, and $\sigma_1, \sigma_2 \sim U(0.3, 1)$. For demonstration, we present one realization of the terminal solution for the three types of equations in Figure 4. We can observe from the figure that the three equations' solutions exhibit different behaviors, making the problem more challenging.

Figure 4: The terminal solutions of the parabolic equation (9), the Viscous Burgers equation (10), and the Burgers equation (11) with the same initial conditions. The three equations' terminal simulation times are 0.5, 1, and 0.4, respectively.
id: 30e9a7118655527e877923b55da48060 - page: 11
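To make the setup concrete, here is a small sketch of how such initial conditions could be sampled; the exact periodization used by the authors is not specified, so summing period-shifted copies of the Gaussian densities is an assumption, as are all names below.

import numpy as np

rng = np.random.default_rng(0)

def gaussian_pdf(x, mu, sig):
    return np.exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * np.sqrt(2.0 * np.pi))

def sample_initial_condition(x, period=2.0, n_images=3):
    # u0(x) = w1*N(x; mu1, sig1^2) + w2*N(x; mu2, sig2^2), periodized on [0, 2].
    w1, w2 = rng.uniform(0.0, 0.5, size=2)
    mu1, mu2 = rng.uniform(2.0, 4.0, size=2)
    sig1, sig2 = rng.uniform(0.3, 1.0, size=2)
    u0 = np.zeros_like(x)
    for k in range(-n_images, n_images + 1):  # sum shifted copies -> periodic
        u0 += w1 * gaussian_pdf(x + k * period, mu1, sig1)
        u0 += w2 * gaussian_pdf(x + k * period, mu2, sig2)
    return u0

x = np.linspace(0.0, 2.0, 256, endpoint=False)
u0 = sample_initial_condition(x)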
We first introduce the relative errors for the three distinct equations when forecasted using the mean of all training target solutions. Specifically, the errors amount to 68.10%, 293.10%, and 281.34% for the Parabolic, Viscous Burgers, and Burgers equations, respectively. We perform a sequence of numerical experiments to investigate the error fluctuation as we vary the number of samples used to train the shared branch nets. Additionally, we incorporate supplementary experiments where a single neural operator is trained to learn a single operator from the entire dataset (comprising all training samples), enabling a comparison with the multi-operator scenario. The outcomes are presented in Table 3.
id: ce27a098600af5aeedb3d9663d431158 - page: 11
How to Retrieve?
# Search

curl -X POST "https://search.dria.co/hnsw/search" \
-H "x-api-key: <YOUR_API_KEY>" \
-H "Content-Type: application/json" \
-d '{"rerank": true, "top_n": 10, "contract_id": "ABBAYmWaVltQNK5CAHJEPbBqGgzt04APLWzNt0EczOA", "query": "What is alexanDRIA library?"}'
        
# Query

curl -X POST "https://search.dria.co/hnsw/query" \
-H "x-api-key: <YOUR_API_KEY>" \
-H "Content-Type: application/json" \
-d '{"vector": [0.123, 0.5236], "top_n": 10, "contract_id": "ABBAYmWaVltQNK5CAHJEPbBqGgzt04APLWzNt0EczOA", "level": 2}'