The unprecedented success of AI tools like ChatGPT caught many by surprise. But large language models (LLMs) are here to stay and will continue to grow in sophistication. These models use natural language processing algorithms to interpret and respond to text-based human input. Whilst it is possible to grasp the basic principles which drive models such as ChatGPT, the companies behind them, mostly US-based, are becoming increasingly coy about releasing detailed information on the code and parameters which determine the way they generate their outputs. That makes it more challenging to assess the implications and impact of integrating large language models into the workplace. At the current rate of expansion, it's only a matter of time before such models are integrated into the public sector, with wide practical applications, advantages, and possible efficiency gains, from 24/7 availability to managing large volumes of inquiries simultaneously.

But there are also limitations. While sophisticated AI such as ChatGPT may seem extremely intelligent, capable, and reliable, this is not a wholly accurate picture. ChatGPT can certainly perform some tasks at a speed and scale that humans cannot, but it sometimes provides responses which are inaccurate, biased, or nonsensical. Its purely mathematical approach to reasoning should not be mistaken for human-like intelligence.

If ChatGPT and similar tools become part of daily workflows, this trend will also affect public institutions. Because it provides services which are instrumental to the functioning of the State and which affect the rights and obligations of citizens, the public sector is particularly sensitive to the introduction of such AI-based technologies. Public administration has its own characteristics and principles which distinguish it from the private sector. By extension, the key principles of public administration, such as accountability, transparency, impartiality, and reliability, need to be considered thoroughly in the integration process.

To benefit from the advantages offered by ChatGPT and similar tools, risks should be recognised, managed and, where possible, mitigated. Whilst some of the existing limitations will be addressed by technological advances, others, such as biases, are of a more structural nature and cannot be fully corrected. Measures are therefore needed to ensure that appropriate procedures and human controls are in place, together with feedback loops from citizens and independent audits.

In the absence of clear regulation on ChatGPT accountability, humans are needed to monitor its output, especially when considering what lies ahead. And only humans can provide the personalised services, flexibility, emotional intelligence, and critical thinking needed to fulfil the requirements of public service.
# Search: retrieve the most relevant results for a natural-language query
curl -X POST "https://search.dria.co/hnsw/search" \
-H "x-api-key: <YOUR_API_KEY>" \
-H "Content-Type: application/json" \
-d '{"rerank": true, "top_n": 10, "contract_id": "z8NwvJSagmlH_AWCVDn8IzZfixXx5eOV_KuCN50j0Fk", "query": "What is alexanDRIA library?"}'
# Query: nearest-neighbour lookup for a raw embedding vector
curl -X POST "https://search.dria.co/hnsw/query" \
-H "x-api-key: <YOUR_API_KEY>" \
-H "Content-Type: application/json" \
-d '{"vector": [0.123, 0.5236], "top_n": 10, "contract_id": "z8NwvJSagmlH_AWCVDn8IzZfixXx5eOV_KuCN50j0Fk", "level": 2}'