DYNAMIC GENERATION OF PERSONALITIES WITH LARGE LANGUAGE MODELS
Contract ID: IkHb_tXo5xumwOFbRf8bgQBrSW1ahDt078FO8d7YEqs
File Type: PDF
Entry Count: 83
Embed. Model: jina_embeddings_v2_base_en
Index Type: hnsw

In the realm of mimicking human deliberation, large language models (LLMs) show promising performance, amplifying the importance of this research area. Deliberation is influenced by both logic and personality. However, previous studies have predominantly focused on the logic of LLMs, neglecting the exploration of personality. In this work, we introduce Dynamic Personality Generation (DPG), a dynamic personality generation method based on Hypernetworks. First, we embed the Big Five personality theory into GPT-4 to form a personality assessment machine, enabling it to evaluate characters' personality traits from dialogues automatically. Based on this evaluation method, we propose a new metric to assess personality generation capability. We then use the personality assessment machine to evaluate dialogues in script data, producing a personality-dialogue dataset. Finally, we fine-tune DPG on this dataset. Experiments show that, after fine-tuning on this dataset, DPG's personality generation capability is stronger than that of traditional fine-tuning methods and surpasses prompt-based GPT-4.
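
To make the hypernetwork idea concrete, the following is a minimal PyTorch sketch of a network that maps a Big Five trait vector to a low-rank weight update for a frozen transformer layer. It illustrates the general technique of Ha et al. (2016) in this setting, not the authors' DPG implementation; the module names, dimensions, and low-rank parameterization are assumptions.

# Minimal illustrative hypernetwork: maps a 5-dimensional Big Five trait vector
# to a low-rank additive update for one frozen weight matrix. The dimensions
# (768, rank 8) and the low-rank form are assumptions made for this sketch.
import torch
import torch.nn as nn

class PersonalityHypernetwork(nn.Module):
    def __init__(self, trait_dim=5, hidden_dim=128, target_dim=768, rank=8):
        super().__init__()
        self.target_dim, self.rank = target_dim, rank
        self.net = nn.Sequential(
            nn.Linear(trait_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 2 * target_dim * rank),  # parameters of A and B
        )

    def forward(self, traits):
        # traits: (batch, 5) Big Five scores, e.g. scaled to [0, 1]
        params = self.net(traits)
        a, b = params.split(self.target_dim * self.rank, dim=-1)
        A = a.view(-1, self.target_dim, self.rank)  # down-projection factor
        B = b.view(-1, self.rank, self.target_dim)  # up-projection factor
        return A @ B  # (batch, target_dim, target_dim) weight delta

hyper = PersonalityHypernetwork()
traits = torch.tensor([[0.9, 0.2, 0.7, 0.4, 0.1]])  # one personality profile
delta_w = hyper(traits)
print(delta_w.shape)  # torch.Size([1, 768, 768])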

Ethics Statement
The character dialogue data mentioned in this work comes entirely from scripts or is generated by large language models. Most of it concerns fictional characters from novels, films, and television works and does not involve any personal private information. All characters and data assets mentioned in this work are used solely for scientific research purposes. If anything infringes upon the rights of the characters themselves or their creators, please contact us and we will remove the infringing information. Fine-tuning LLMs with scripted dialogues may lead to jailbreaking behaviors, undermining the original human safety alignment. This could result in LLMs generating responses that are biased, violent, or otherwise undesirable. All outcomes of this work are intended solely for research purposes. Researchers and users of this work must ensure that LLMs adhere to human ethical standards.
id: 8ff5f565c0e5fbd98f9ba1120135e3d5 - page: 9
References
Achiam, J., Adler, S., Agarwal, S., Ahmad, L., Akkaya, I., Aleman, F. L., Almeida, D., Altenschmidt, J., Altman, S., Anadkat, S., et al. (2023). GPT-4 technical report. arXiv preprint arXiv:2303.08774.
01.AI: Young, A., Chen, B., Li, C., Huang, C., Zhang, G., Zhang, G., Li, H., Zhu, J., et al. (2024). Yi: Open foundation models by 01.AI.
Botella, J., Suero, M., and Gambara, H. (2010). Psychometric inferences from a meta-analysis of reliability and internal consistency coefficients. Psychological Methods, 15(4):386.
Cantor, N. (1990). From thought to behavior: "Having" and "doing" in the study of personality and cognition. American Psychologist.
id: a99cfad44dea0f0a465d2661d16a0a82 - page: 9
Caron, G. and Srivastava, S. (2022). Identifying and manipulating the personality traits of language models.
Chen, W., Su, Y., Zuo, J., Yang, C., Yuan, C., Qian, C., Chan, C.-M., Qin, Y., Lu, Y., Xie, R., et al. (2023). AgentVerse: Facilitating multi-agent collaboration and exploring emergent behaviors in agents. arXiv preprint arXiv:2308.10848.
Chowdhery, A., Narang, S., Devlin, J., Bosma, M., Mishra, G., Roberts, A., Barham, P., Chung, H. W., Sutton, C., Gehrmann, S., et al. (2023). PaLM: Scaling language modeling with pathways. Journal of Machine Learning Research, 24(240):1-113.
Costa Jr, P. T. and McCrae, R. R. (1992). The five-factor model of personality and its relevance to personality disorders.
id: d83acb19e574b2f2fa8ead7eb99fb1e2 - page: 9
Journal of Personality Disorders, 6(4):343-359.
Crowne, D. P. (2007). Personality Theory. Oxford University Press.
Fu, J., Ng, S.-K., Jiang, Z., and Liu, P. (2023). GPTScore: Evaluate as you desire. arXiv preprint arXiv:2302.04166.
Gilardi, F., Alizadeh, M., and Kubli, M. (2023). ChatGPT outperforms crowd-workers for text-annotation tasks. arXiv preprint arXiv:2303.15056.
Griffin, A. S., Guillette, L. M., and Healy, S. D. (2015). Cognition and personality: an analysis of an emerging field. Trends in Ecology & Evolution, 30(4):207-214.
Ha, D., Dai, A., and Le, Q. V. (2016). HyperNetworks.
Hiyouga (2023). LLaMA Factory.
id: 788891ae08bb79456e93a778ed1b1fbe - page: 9
How to Retrieve?
# Search

curl -X POST "https://search.dria.co/hnsw/search" \
-H "x-api-key: <YOUR_API_KEY>" \
-H "Content-Type: application/json" \
-d '{"rerank": true, "top_n": 10, "contract_id": "IkHb_tXo5xumwOFbRf8bgQBrSW1ahDt078FO8d7YEqs", "query": "What is alexanDRIA library?"}'
        
# Query

curl -X POST "https://search.dria.co/hnsw/query" \
-H "x-api-key: <YOUR_API_KEY>" \
-H "Content-Type: application/json" \
-d '{"vector": [0.123, 0.5236], "top_n": 10, "contract_id": "IkHb_tXo5xumwOFbRf8bgQBrSW1ahDt078FO8d7YEqs", "level": 2}'