
Since the introduction of ChatGPT at the end of 2022, generative AI, in particular large language models (LLMs), has taken the IT world by storm. Whether as a chat assistant, for summarising texts, for extracting (domain) knowledge or as a programming aid - the possibilities of LLMs seem endless, and new use cases are added every day. So it is no wonder that public administration has also recognised the enormous potential of generative AI and wants a seat at the table. This blog post takes a closer look at the opportunities as well as the specific challenges and risks of using generative AI in public administration, based on ten key theses.


Figure 1: Requirements for the use of generative AI in public administration.

1. Generative AI offers diverse potential in public administration

The use of generative AI offers great potential for the public sector in particular, above all for increasing efficiency and reducing bureaucracy, both of which are core concerns of the current federal government. In addition to AI-supported audio, image and video generation, it is above all text generation using large language models that is currently used most frequently in the public sector.

In mid-2023, the Baden-Württemberg Innovation Lab, in cooperation with the German LLM hopeful Aleph Alpha, implemented a blueprint for the prototypical use of generative AI in public administration with the "F13" project. The project offers various functions, including:

  • 1) the summarisation of texts,
  • 2) the indexing of texts,
  • 3) the conversion of texts into template formats (cabinet templates),
  • 4) research assistance for the preparation and retrieval of large amounts of internally available information and
  • 5) continuous text generation for merging individual documents and texts.

Even if this is only a small selection of the wide range of functions, this prototype very clearly demonstrates the possibilities of this new technology and gives interested parties an initial indication of the direction that generative AI support could take.

In general, the use of LLMs can be roughly divided into three overarching application scenarios:

  • I. Domain knowledge agents (example: F13 function 4)
  • II. Document processing (example: F13 functions 2 and 3)
  • III. Text manipulation and generation (example: F13 functions 1 and 5)

A tried-and-tested system architecture for scenario I is so-called "Retrieval Augmented Generation" (RAG). This makes it possible to enrich the existing knowledge of an LLM with, for example, locally available domain knowledge from public administration. You can find out more about this in the blog post "Retrieval augmented generation: LLM on steroids".
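To make the pattern more concrete, the following Python sketch shows the basic RAG flow under simplifying assumptions: the embed() and generate() functions are placeholders for whichever embedding model and locally hosted LLM are actually chosen, and the simple similarity search stands in for a real vector database.

```python
# Minimal RAG sketch, illustrative only. embed() and generate() are
# hypothetical placeholders for the concrete models used in a project.

from dataclasses import dataclass

import numpy as np


@dataclass
class Document:
    source: str          # e.g. file name or record ID, returned later as a citation
    text: str
    embedding: np.ndarray


def embed(text: str) -> np.ndarray:
    """Placeholder for a sentence-embedding model; returns a vector."""
    raise NotImplementedError("plug in the embedding model of your choice")


def generate(prompt: str) -> str:
    """Placeholder for a call to a locally hosted LLM."""
    raise NotImplementedError("plug in the on-premises LLM of your choice")


def answer(question: str, corpus: list[Document], top_k: int = 3) -> str:
    """Retrieve the most similar documents and let the LLM answer from them only."""
    q = embed(question)
    scored = sorted(
        corpus,
        key=lambda d: float(
            np.dot(q, d.embedding)
            / (np.linalg.norm(q) * np.linalg.norm(d.embedding))
        ),
        reverse=True,
    )
    context = "\n\n".join(f"[{d.source}] {d.text}" for d in scored[:top_k])
    prompt = (
        "Answer the question using ONLY the context below and cite the sources "
        f"in square brackets.\n\nContext:\n{context}\n\nQuestion: {question}"
    )
    return generate(prompt)
```

In practice, the corpus would live in a vector database and the placeholders would be replaced by the locally operated components chosen for the use case.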

2. Use cases for generative AI in the public sector should be scoped

Despite this potential, a dedicated scoping and narrowing down of possible use cases, usually as part of a workshop, is in the vast majority of cases the first sensible and necessary step. Such a workshop should not only highlight potential, but also discuss risks and alternatives to AI. After all, just because generative AI can solve a certain problem does not mean that it is the best way to solve it.

In many cases, the implementation of a (generative) AI project is seen as an innovative way of solving complex problems or optimising processes. However, before fully committing to such a project, it is important to check whether it is really necessary. There are situations in which conventional methods or existing technologies can be just as effective without requiring the effort and resources of an AI project.

First of all, the specific requirements and objectives of the project should be analysed in detail. Is the complexity of the task so high that conventional methods have reached their limits? Are there already existing technologies or processes that could deliver the desired results with minor adjustments? A thorough assessment of these aspects can help determine whether an AI project is the best option.

It is also important to consider the cost and time required to develop and implement an AI solution. Often, alternative solutions can be implemented faster and more cost-effectively, especially if the data required for an AI application is unavailable or of poor quality.

Ultimately, the decision for or against a (generative) AI project should be carefully weighed up, and alternative approaches should also be considered. A thorough analysis of requirements, costs and available resources is crucial to ensure that the chosen solution delivers the best possible results.

3. Generative AI in the public sector often requires local solutions

Every industry has its own requirements and challenges when it comes to implementing (generative) AI, and this applies to public administration in particular. A key reason is the frequent use of personal or otherwise security-critical data, which places correspondingly high security requirements on a potential (generative) AI solution. While the industry leader OpenAI is often the best (but also most expensive) choice in other areas, it is not an option in the public sector, or only in exceptional cases. One of the reasons for this is that OpenAI, as a US company, is subject to US jurisdiction; among other things, this means that US security authorities can gain access to user data under certain interpretations of the so-called Patriot Act and CLOUD Act. Although it is now possible to instantiate OpenAI models on Microsoft Azure servers located in Europe, this is not an optimal solution for security-critical use cases in public administration. In the vast majority of cases, operating the systems locally ("on-premises") is therefore an essential requirement for a possible LLM solution.

4. Generative AI solutions in the public sector must be developed individually

There are currently two main alternatives for local operation: 1. the commercial LLM solutions from the Heidelberg-based company Aleph Alpha ("Luminous") or 2. the use of an open-source solution based, for example, on Meta's "Llama 2" or the Mistral AI model series.

The exact model to be used or tested must be evaluated individually for each use case. In principle, however, it is not enough to focus "only" on the language model. The LLM must be embedded in an AI architecture developed specifically for each use case. This is why we also speak of "AI systems": in addition to the actual language model, these comprise other relevant components such as (vector) databases, user interfaces and interfaces (APIs) to surrounding systems. There is no such thing as THE solution that can be used via "plug & play". Every solution requires customised development.
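The following sketch illustrates this point: the language model is only one exchangeable component behind an interface, while the use-case-specific logic lives in the surrounding system. All class and method names are hypothetical, not a product API.

```python
# Sketch of an "AI system": the LLM sits behind an interface and can be
# swapped; the use-case logic around it is what must be built individually.

from typing import Protocol


class LanguageModel(Protocol):
    def complete(self, prompt: str) -> str: ...


class VectorStore(Protocol):
    def search(self, query: str, top_k: int) -> list[str]: ...


class CaseAssistant:
    """Use-case-specific glue: retrieval, prompt logic, output formatting."""

    def __init__(self, llm: LanguageModel, store: VectorStore) -> None:
        self.llm = llm
        self.store = store

    def summarise_for_template(self, query: str) -> str:
        passages = self.store.search(query, top_k=5)
        prompt = "Summarise the following passages for a cabinet template:\n" + "\n".join(passages)
        return self.llm.complete(prompt)


# The same CaseAssistant could be wired to Luminous, a Llama 2 derivative or a
# Mistral model behind the LanguageModel interface; the surrounding system,
# not the model alone, determines whether the use case works.
```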

5. Generative AI in the public sector must be trialled

AI projects and traditional projects basically take two different approaches to implementing the requirements, as shown in the following diagram.


Figure 2: Differences in the development of traditional software projects and AI projects.

In traditional projects, realisation is based on fixed rules and algorithms. The developer writes the code that directly implements the desired functionality. The code is usually written by hand and is explicitly tailored to the specific task or application. Changes or adaptations usually require the developer to change the code directly. Development is easy to plan and can follow the waterfall principle, for example.

AI, on the other hand, enables a system to learn from data instead of being explicitly programmed. The system learns patterns and correlations in the data in order to make predictions or decisions without the developer having to explicitly define all the rules. Instead of giving direct instructions on how to solve a problem, training data is used to train the model to recognise patterns and relationships. The trained model is then used to make predictions for new data. It can also be iteratively improved by training it further with new data.

Overall, the main difference between the two approaches lies in the way problems are approached. Traditional projects use explicit instructions and rules, whereas AI is data-based and the system learns to perform tasks without being explicitly programmed.

Due to their complexity and their probabilistic, non-deterministic nature, LLMs in particular are difficult to assess, and their responses are only predictable to a very limited extent. This makes it all the more important to first test a generative AI solution as part of a PoC ("proof of concept"). As a rule, different model, parameter and prompt combinations are therefore tested and evaluated against each other as part of an agile development approach.
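The following sketch outlines what such a PoC evaluation loop can look like. The model names, parameters, prompts and the scoring function are illustrative assumptions, not recommendations.

```python
# Sketch of a PoC evaluation grid: several model, parameter and prompt
# combinations are run against the same curated test questions and scored,
# so the non-deterministic behaviour becomes comparable.

from itertools import product

models = ["llama-2-13b-chat", "mistral-7b-instruct"]       # candidate local models (assumption)
temperatures = [0.1, 0.7]                                  # sampling parameters to compare
prompts = {
    "strict": "Answer only from the provided context.",
    "open": "Answer the question as helpfully as possible.",
}
test_questions = ["Which form is required for application X?"]  # curated test set


def run_model(model: str, temperature: float, system_prompt: str, question: str) -> str:
    """Placeholder for a call to the locally hosted candidate model."""
    raise NotImplementedError


def score(answer: str, question: str) -> float:
    """Placeholder: human review or automatic metrics (faithfulness, completeness)."""
    raise NotImplementedError


results = []
for model, temp, (prompt_name, system_prompt) in product(models, temperatures, prompts.items()):
    scores = [score(run_model(model, temp, system_prompt, q), q) for q in test_questions]
    results.append((model, temp, prompt_name, sum(scores) / len(scores)))

# Rank the combinations; the best one moves forward into the pilot.
results.sort(key=lambda r: r[-1], reverse=True)
```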

6. Generative AI in the public sector is not cheap

Another important point when using an "on-premises" solution is cost. In addition to the pure licence costs (approx. €0.02-0.06 per 1,000 words when using a commercial LLM), a sufficiently powerful hardware infrastructure must be available or, if necessary, purchased. In addition to operating and server costs, dedicated graphics processors (GPUs) are the main cost drivers (approx. €12,000-20,000 per unit). Accordingly, costs in the low to mid six-figure range can be expected for most use cases.
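As a rough illustration, the following back-of-envelope calculation combines the ranges quoted above with an assumed usage volume and GPU count; the assumed figures are marked as such and will differ from case to case.

```python
# Back-of-envelope cost estimate using the ranges quoted above.
# The usage volume and GPU count are purely illustrative assumptions.

words_per_year = 50_000_000              # assumed annual processing volume (assumption)
licence_per_1000_words = (0.02, 0.06)    # €, range quoted above for commercial LLMs
gpu_unit_cost = (12_000, 20_000)         # €, range quoted above per GPU
gpu_count = 8                            # assumed size of an on-premises cluster (assumption)

licence_cost = tuple(p * words_per_year / 1000 for p in licence_per_1000_words)
gpu_cost = tuple(c * gpu_count for c in gpu_unit_cost)

print(f"Licence costs per year: {licence_cost[0]:,.0f} - {licence_cost[1]:,.0f} €")
print(f"One-off GPU hardware:   {gpu_cost[0]:,.0f} - {gpu_cost[1]:,.0f} €")
# Together with servers, operation and integration work, this quickly reaches
# the low to mid six-figure range mentioned above.
```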

7. Generative AI in the public sector should be developed in a sustainable and reusable way

Generative AI harbours great potential for use in public administration. However, in order for this to be utilised in the long term, corresponding AI systems should be developed to be sustainable and reusable. This means that during development, care should be taken to create systems that are not only useful at the present time, but can also be adapted and further developed in the future. Sustainability also includes the responsible use of resources and the minimisation of environmental impact. Complex AI applications in particular are extremely resource-intensive, which is why it is all the more important to implement generative AI as energy-efficiently as possible. Taking these principles into account, generative AI systems in the public sector can be developed in such a way that they offer sustainable added value and at the same time meet ethical and ecological standards.

8. Generative AI in the public sector should be interoperable

In many public administration contexts, the interoperability of generative AI also plays an important role. Interoperability ensures that AI systems can ideally interact seamlessly with the existing digital infrastructures, databases and applications of different authorities. Among other things, this is important so that data can be used across silos and services can be integrated. Accordingly, system boundaries and interfaces of AI systems must be considered from the outset. The additional effort pays off: ideally, generative AI solutions can then be scaled across authority boundaries, making public administration significantly more agile.

9. Generative AI in the public sector should be comprehensible

Especially when generative AI is to be used in application-critical and safety-critical contexts, it is important to be able to understand the decisions made by the AI. While transparency can be ensured with conventional AI systems, for example by using so-called white-box models, this is not possible with generative AI, which is usually implemented using complex black-box models. However, transparency can be achieved in other ways. A major danger with LLMs is that answers are hallucinated, i.e. false statements are presented as facts without any basis. This can be prevented, or at least made traceable, by implementing AI systems in such a way that the LLM only draws on local domain knowledge instead of its pre-trained world knowledge and also outputs the sources for its statements. In this way, the user can verify the AI's statements in case of doubt.
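One simple way to support this is a prompt pattern that restricts the model to supplied excerpts and forces it to cite them or decline. The following sketch shows one possible wording; it is an illustrative assumption, not a fixed standard.

```python
# Sketch of a prompt pattern that makes answers checkable: the model is
# restricted to supplied excerpts and must cite them or explicitly decline.

def build_grounded_prompt(question: str, excerpts: dict[str, str]) -> str:
    """excerpts maps a source label (file, paragraph, record ID) to its text."""
    context = "\n".join(f"[{label}] {text}" for label, text in excerpts.items())
    return (
        "Answer exclusively on the basis of the excerpts below.\n"
        "Cite the source label in square brackets after every statement.\n"
        "If the excerpts do not contain the answer, reply: "
        "'Not contained in the documents.'\n\n"
        f"Excerpts:\n{context}\n\nQuestion: {question}"
    )
```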

10. Generative AI in the public sector is (only) a tool. In the end, it is the human who decides.

A key advantage of generative AI is its ability to automate processes and reduce bureaucracy in public administration, among other things. Even if such AI systems can be, or already are, very reliable, they should and must never make fully automated decisions, especially in safety-critical application scenarios. In accordance with the so-called "human-in-the-loop" principle, generative AI can only play a supporting role here as a tool. It remains the responsibility of the human user to take up the AI's suggestions and make the final decisions. This also means that the fear that AI could destroy jobs, which is not entirely unfounded in the private sector, has so far not materialised in public administration.
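Translated into an AI system, the principle can be as simple as a release step that only a named person can perform, as the following sketch (with hypothetical field and function names) illustrates.

```python
# Minimal sketch of the human-in-the-loop principle: the AI drafts,
# a named member of staff reviews and releases.

from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Draft:
    case_id: str
    text: str
    sources: list[str] = field(default_factory=list)
    approved_by: Optional[str] = None  # stays None until a person signs off


def release(draft: Draft, reviewer: str, approved: bool) -> Optional[Draft]:
    """Only a human decision turns an AI draft into an official document."""
    if not approved:
        return None  # the draft is discarded or sent back for rework
    draft.approved_by = reviewer
    return draft
```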


Figure 3: Support through empowerment. Generative AI as a tool.

Generative AI in the public sector is challenging but worthwhile

To summarise, generative AI in public administration, as in other areas, offers a wide range of potential for making processes more efficient. However, the high demands on security and transparency in the public sector are accompanied by special challenges that need to be taken into account from the outset. This makes early planning and, if necessary, consultation all the more important in order to weigh up all the opportunities and risks of using generative AI in public administration.

Would you like to find out more about exciting topics from the world of adesso? Then take a look at our previous blog posts.

GenAI@adesso

Would you like to find out more about GenAI and how we can support you? Then take a look at our website. Podcasts, blog posts, events, studies and much more - we offer you a compact overview of all topics relating to GenAI.

Learn more about GenAI on our website


Author Sascha Windisch

Sascha Windisch is Competence Centre Manager at adesso and a consultant specialising in business and software architecture, AI, GenAI and data science, requirements analysis and requirements management for complex distributed systems.


Author Immo Weber

Immo Weber is a Senior Consultant at adesso who holds a habilitation and specialises in AI, GenAI and data science in public administration.

