Created at 11am, Apr 20
Evolutionary Optimization of Model Merging Recipes
Contract ID: 9MkWQfBc_uCGzwYL0vbEq1pGDUub-8PItL4IsuiOuYo
File Type: PDF
Entry Count: 99
Embed. Model: jina_embeddings_v2_base_en
Index Type: hnsw

We present a novel application of evolutionary algorithms to automate the creation of powerful foundation models. While model merging has emerged as a promising approach for LLM development due to its cost-effectiveness, it currently relies on human intuition and domain knowledge, limiting its potential. Here, we propose an evolutionary approach that overcomes this limitation by automatically discovering effective combinations of diverse open-source models, harnessing their collective intelligence without requiring extensive additional training data or compute. Our approach operates in both parameter space and data flow space, allowing for optimization beyond just the weights of the individual models. This approach even facilitates cross-domain merging, generating models like a Japanese LLM with Math reasoning capabilities. Surprisingly, our Japanese Math LLM achieved state-of-the-art performance on a variety of established Japanese LLM benchmarks, even surpassing models with significantly more parameters, despite not being explicitly trained for such tasks. Furthermore, a culturally-aware Japanese VLM generated through our approach demonstrates its effectiveness in describing Japanese culture-specific content, outperforming previous Japanese VLMs. This work not only contributes new state-of-the-art models back to the open-source community, but also introduces a new paradigm for automated model composition, paving the way for exploring alternative, efficient approaches to foundation model development.
https://arxiv.org/abs/2403.13187
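To make the parameter-space side of this concrete, the sketch below shows a minimal evolutionary search over per-layer mixing weights between two fine-tunes of a shared base model. It is an illustration under stated assumptions, not the paper's implementation: the linear interpolation, the simple (1+lambda)-style strategy, and the evaluate fitness function are all placeholders.

# Illustrative sketch only: evolve per-layer mixing weights between two fine-tunes
# of the same base model. `evaluate` is a placeholder fitness function (e.g., accuracy
# on a small validation set); the paper's actual recipe search is more elaborate.
import numpy as np

def merge(state_a, state_b, weights, layer_names):
    """Linearly interpolate parameters, one mixing weight per layer."""
    return {name: (1.0 - w) * state_a[name] + w * state_b[name]
            for name, w in zip(layer_names, weights)}

def evolve(state_a, state_b, layer_names, evaluate, pop=16, gens=30, sigma=0.1):
    best = np.full(len(layer_names), 0.5)                    # start from a uniform average
    best_fit = evaluate(merge(state_a, state_b, best, layer_names))
    for _ in range(gens):
        for _ in range(pop):
            cand = np.clip(best + sigma * np.random.randn(len(layer_names)), 0.0, 1.0)
            fit = evaluate(merge(state_a, state_b, cand, layer_names))
            if fit > best_fit:                               # keep the best recipe found so far
                best, best_fit = cand, fit
        sigma *= 0.95                                        # slowly narrow the search
    return best, best_fit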

4.2.1 Multi-modality Extension
We now extend our method to multi-modal models and evolve a Japanese VLM that is aware of culturally specific content. VLMs have recently shown remarkable progress by applying the powerful instruction-following capabilities of pre-trained LLMs. The architecture of a VLM generally consists of three components: (1) a vision encoder to extract image features; (2) an LLM to generate text (for the purpose of describing an image); and (3) a projection network to map image features into the LLM's embedding space [5, 9, 29, 30, 32]. Crucially, the LLM component is initialized with powerful pre-trained LLMs for their text generation capabilities. During training, the projection network and optionally the LLM are trained on various vision-language datasets, while the vision encoder is fixed.
id: 6668ab31faa8b7b3f6fd988fcce0a9bd - page: 9
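As a reading aid for the three-component description above, here is a small PyTorch-style sketch of how the pieces fit together. The module names and the inputs_embeds interface are assumptions for illustration, not the specific models used in the paper.

# Sketch of the generic VLM wiring described above (hypothetical modules).
import torch
import torch.nn as nn

class TinyVLM(nn.Module):
    def __init__(self, vision_encoder, llm, vision_dim, llm_dim):
        super().__init__()
        self.vision_encoder = vision_encoder               # (1) extracts image features; kept frozen
        self.projector = nn.Linear(vision_dim, llm_dim)    # (3) maps features into the LLM embedding space
        self.llm = llm                                      # (2) pre-trained LLM used for text generation

    def forward(self, image, text_embeds):
        with torch.no_grad():                               # the vision encoder stays fixed during training
            feats = self.vision_encoder(image)              # [batch, n_patches, vision_dim]
        soft_prompt = self.projector(feats)                 # visual "soft prompt" tokens for the LLM
        inputs = torch.cat([soft_prompt, text_embeds], dim=1)
        return self.llm(inputs_embeds=inputs)               # assumes an HF-style inputs_embeds interface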
4.2.2 Setup
Source Models: The LLM component inside a VLM can be regarded as a standalone LLM, with the extra capability of understanding visual soft prompts. From this perspective, by fixing the vision encoder and the projection network and focusing only on the LLM component, it is straightforward to apply the methodologies detailed in Section 3 to produce a new LLM with expanded capabilities. In this experiment, we merge a Japanese LLM and the LLM component of a VLM in the parameter space. We select shisa-gamma-7b-v1 as the Japanese LLM and LLaVA-1.6-Mistral-7B as the VLM. Both models are fine-tunes of the Mistral-7B-v0.1 base model.
Dataset: To the best of our knowledge, publicly accessible Japanese VLM datasets are scarce. In response, we created a new open Japanese VLM benchmark and assessed our VLM on a widely recognized Japanese VQA dataset. Our new benchmark dataset consists of:
id: 833e618ed3d76ffe7a313284cc0b5c0f - page: 9
JA-VG-VQA-500: A 500-sample test set extracted from the Japanese Visual Genome VQA dataset.
JA-VLM-Bench-In-the-Wild: A Japanese version of LLaVA-Bench-In-the-Wild. We compiled a rich collection of 42 images, accompanied by a total of 50 questions, featuring a variety of Japanese cultural elements and objects found in Japan. The QAs were crafted with the assistance of GPT-4V and underwent a human-in-the-loop filtering process to eliminate nonsensical outcomes. Compared to the JA-VG-VQA-500 dataset, our set poses more complex challenges, demanding more nuanced and detailed responses.
We used another subset of the Japanese Visual Genome VQA dataset during the evolutionary search. This subset does not overlap with examples in the JA-VG-VQA-500 dataset, to avoid leakage in the optimization process.
id: 2e61e9cf40f25d29abef72b12394a149 - page: 9
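The parameter-space merge described under Source Models above can be sketched as follows. This is a hedged illustration: the language_model. key prefix and the fixed 0.5 mixing ratio are assumptions rather than the evolved recipe, and the approach relies on both source models being fine-tunes of the same Mistral-7B-v0.1 base, so their LLM weights line up key by key.

# Illustration only: merge the Japanese LLM into the VLM's LLM component, leaving
# the vision encoder and projection network untouched. The key prefix and alpha
# are placeholders, not the paper's evolved merging recipe.
def merge_llm_into_vlm(vlm_state, ja_llm_state, llm_prefix="language_model.", alpha=0.5):
    merged = dict(vlm_state)                               # start from the full VLM checkpoint
    for key, value in vlm_state.items():
        if key.startswith(llm_prefix):                     # touch only the LLM component
            ja_key = key[len(llm_prefix):]                 # matching key in the Japanese LLM
            if ja_key in ja_llm_state:                     # keys align because both share a base model
                merged[key] = (1 - alpha) * value + alpha * ja_llm_state[ja_key]
    return merged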
Evaluation: We consider two baselines in our experiments: LLaVA-1.6-Mistral-7B, one of our source models, and Japanese Stable VLM, a Japanese VLM trained from scratch on Japanese datasets. All models adopt the same generation configurations, with deterministic decoding. We compute ROUGE-L with a Japanese language detector to replace non-Japanese responses with empty texts, resulting in a score of zero for non-Japanese responses. To be consistent with our LLM experiments in Section 4.1, we also employed fasttext [23, 24] for this language detection task. However, we made an exception for cases where the ground-truth answer itself contains non-Japanese but commonly seen words in Japanese texts (e.g., a widely recognized acronym such as "UFO"). In these instances, non-Japanese responses from models are not converted to empty texts.
id: c20f5d14b18bfcb3675793e0fef0b73d - page: 10
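The scoring procedure above can be approximated with a short sketch. The lid.176.bin fasttext model, the character-level LCS, and the ref_has_non_japanese_terms flag are assumptions standing in for the paper's exact detector configuration, tokenization, and exception handling.

# Sketch of language-filtered ROUGE-L scoring (assumptions noted in the lead-in).
import fasttext

lid = fasttext.load_model("lid.176.bin")          # off-the-shelf language identification model

def is_japanese(text):
    labels, _ = lid.predict(text.replace("\n", " "))
    return labels[0] == "__label__ja"

def rouge_l(pred, ref):
    """Character-level ROUGE-L F1 via longest common subsequence."""
    m, n = len(pred), len(ref)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = dp[i][j] + 1 if pred[i] == ref[j] else max(dp[i][j + 1], dp[i + 1][j])
    lcs = dp[m][n]
    if lcs == 0:
        return 0.0
    p, r = lcs / m, lcs / n
    return 2 * p * r / (p + r)

def score(pred, ref, ref_has_non_japanese_terms=False):
    # Non-Japanese responses are replaced with empty text (score 0), unless the
    # ground-truth answer itself legitimately contains non-Japanese words.
    if not is_japanese(pred) and not ref_has_non_japanese_terms:
        pred = ""
    return rouge_l(pred, ref)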
How to Retrieve?
# Search

curl -X POST "https://search.dria.co/hnsw/search" \
-H "x-api-key: <YOUR_API_KEY>" \
-H "Content-Type: application/json" \
-d '{"rerank": true, "top_n": 10, "contract_id": "9MkWQfBc_uCGzwYL0vbEq1pGDUub-8PItL4IsuiOuYo", "query": "What is alexanDRIA library?"}'
        
# Query

curl -X POST "https://search.dria.co/hnsw/query" \
-H "x-api-key: <YOUR_API_KEY>" \
-H "Content-Type: application/json" \
-d '{"vector": [0.123, 0.5236], "top_n": 10, "contract_id": "9MkWQfBc_uCGzwYL0vbEq1pGDUub-8PItL4IsuiOuYo", "level": 2}'