Created at 4pm, Mar 24
Artificial Intelligence
ChatGPT: The Impact of Large Language Models on Law Enforcement
Contract ID: iapHpb3fCK5T0EoWM_qytNo1PKfdIM1VrX9Nga9dHaY
File Type: PDF
Entry Count: 49
Embed. Model: jina_embeddings_v2_base_en
Index Type: hnsw

The release and widespread use of ChatGPT – a large language model (LLM) developed by OpenAI – has created significant public attention, chiefly due to its ability to quickly provide ready-to-use answers that can be applied to a vast amount of different contexts.

These models hold masses of potential. Machine learning, once expected to handle only mundane tasks, has proven itself capable of complex creative work. LLMs are being refined and new versions rolled out regularly, with technological improvements coming thick and fast. While this offers great opportunities to legitimate businesses and members of the public, it can also be a risk for them and for the respect of fundamental rights, as criminals and bad actors may wish to exploit LLMs for their own nefarious purposes.

In response to the growing public attention given to ChatGPT, the Europol Innovation Lab organised a number of workshops with subject matter experts from across the organisation to explore how criminals can abuse LLMs such as ChatGPT, as well as how it may assist investigators in their daily work. The experts who participated in the workshops represented the full spectrum of Europol's expertise, including operational analysis, serious and organised crime, cybercrime, counterterrorism, as well as information technology.

Thanks to the wealth of expertise and specialisations represented in the workshops, these hands-on sessions stimulated discussions on the positive and negative potential of ChatGPT, and collected a wide range of practical use cases. While these use cases do not reflect an exhaustive overview of all potential applications, they provide a glimpse of what is possible.

The objective of this report is to examine the outcomes of the dedicated expert workshops and to raise awareness of the impact LLMs can have on the work of the law enforcement community. As this type of technology is undergoing rapid progress, this document further provides a brief outlook of what may still be to come, and highlights a number of recommendations on what can be done now to better prepare for it.

Important notice: The LLM selected to be examined in the workshops was ChatGPT. ChatGPT was chosen because it is the highest-profile and most commonly used LLM currently available to the public. The purpose of the exercise was to observe the behaviour of an LLM when confronted with criminal and law enforcement use cases. This will help law enforcement understand what challenges derivative and generative AI models could pose.

A longer and more in-depth version of this report was produced for law enforcement consumption only.

To date, these types of deceptive communications have been something criminals would have to produce on their own. In the case of mass-produced campaigns, targets of these types of crime would often be able to identify the inauthentic nature of a message due to obvious spelling or grammar mistakes or its vague or inaccurate content. With the help of LLMs, these types of phishing and online fraud can be created faster, much more authentically, and at significantly increased scale. The ability of LLMs to detect and reproduce language patterns not only facilitates phishing and online fraud, but can also be used more generally to impersonate the style of
10 WithSecure 2023, Creatively malicious prompt engineering.
id: 06b6ce2b742ea4f97755f6b20e9960e2 - page: 8
speech of specific individuals or groups. This capability can be abused at scale to mislead potential victims into placing their trust in the hands of criminal actors. In addition to the criminal activities outlined above, the capabilities of ChatGPT lend themselves to a number of potential abuse cases in the area of terrorism, propaganda, and disinformation. As such, the model can be used to gather more information that may facilitate terrorist activities, such as, for instance, terrorism financing or anonymous file sharing.
id: f498ddbaf653e2fa55cdd0f7a56e5833 - page: 9
ChatGPT excels at producing authentic-sounding text at speed and scale. This makes the model ideal for propaganda and disinformation purposes, as it allows users to generate and spread messages reflecting a specific narrative with relatively little effort. For instance, ChatGPT can be used to generate online propaganda on behalf of other actors to promote or defend certain views that have been debunked as disinformation or fake news (11).
id: bba3a3a7419ed61046b157ff7e9b3c24 - page: 9
These examples provide merely a glimpse of what is possible. While ChatGPT refuses to provide answers to prompts it considers obviously malicious, it is possible, as with the other use cases detailed in this report, to circumvent these restrictions. Not only would this type of application facilitate the perpetration of disinformation, hate speech and terrorist content online, it would also allow users to give it misplaced credibility, having been generated by a machine and thus possibly appearing more objective to some than if it were produced by a human (12).
id: 4695e6db2b9a66271404df8f7ff19ca1 - page: 9
How to Retrieve?
# Search

curl -X POST "https://search.dria.co/hnsw/search" \
-H "x-api-key: <YOUR_API_KEY>" \
-H "Content-Type: application/json" \
-d '{"rerank": true, "top_n": 10, "contract_id": "iapHpb3fCK5T0EoWM_qytNo1PKfdIM1VrX9Nga9dHaY", "query": "What is alexanDRIA library?"}'
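When scripting against this endpoint, hand-assembling the JSON body risks broken escaping if the query contains quotes. A minimal sketch, assuming `jq` is available, that builds the same search payload safely (the endpoint, headers, and `contract_id` are exactly those from the curl example above; the piped usage at the end is an assumption about how you would wire it up):

```shell
# Build the /hnsw/search request body with jq so the query string is
# always correctly JSON-escaped.
CONTRACT_ID="iapHpb3fCK5T0EoWM_qytNo1PKfdIM1VrX9Nga9dHaY"

build_search_body() {
  # $1: free-text query
  jq -cn --arg q "$1" --arg cid "$CONTRACT_ID" \
    '{rerank: true, top_n: 10, contract_id: $cid, query: $q}'
}

build_search_body 'What does the report say about phishing?'

# Assumed usage, sending the body via stdin with the same headers as above:
#   build_search_body "..." | curl -s -X POST "https://search.dria.co/hnsw/search" \
#     -H "x-api-key: <YOUR_API_KEY>" -H "Content-Type: application/json" -d @-
```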
        
# Query

curl -X POST "https://search.dria.co/hnsw/query" \
-H "x-api-key: <YOUR_API_KEY>" \
-H "Content-Type: application/json" \
-d '{"vector": [0.123, 0.5236], "top_n": 10, "contract_id": "iapHpb3fCK5T0EoWM_qytNo1PKfdIM1VrX9Nga9dHaY", "level": 2}'
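Note that the two-element vector above is only a placeholder: for raw vector queries the vector must match the dimensionality of the index's embedding model, and jina_embeddings_v2_base_en produces 768-dimensional embeddings. A minimal sketch, again assuming `jq`, that builds the query body from a JSON array of floats (payload fields taken from the curl example above):

```shell
# Build the /hnsw/query request body from a JSON array of floats.
CONTRACT_ID="iapHpb3fCK5T0EoWM_qytNo1PKfdIM1VrX9Nga9dHaY"

build_query_body() {
  # $1: query vector as a JSON array, e.g. '[0.123, 0.5236]'.
  # A real vector for this index would have 768 components.
  jq -cn --argjson vec "$1" --arg cid "$CONTRACT_ID" \
    '{vector: $vec, top_n: 10, contract_id: $cid, level: 2}'
}

build_query_body '[0.123, 0.5236]'
```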