
In today's digital era, artificial intelligence (AI) is woven into almost every aspect of our lives and businesses, from smartwatches that wake us up and track the quality of our sleep to ChatGPT assistants at work. Yet although AI absorbs so much of our time, energy and attention, few people understand its basics: just one in four say they understand how AI is used. It is hardly surprising, then, that 24 per cent of Germans consider AI a danger. This makes it essential to develop a thorough understanding of AI – a skill known as AI literacy.

AI literacy encompasses not only the ability to use AI technologies, but also to critically question how they work and to evaluate their impact. It is about understanding the basic mechanisms behind AI applications and being able to follow their decisions.

AI literacy in the EU AI Act and its impact on companies

The EU AI Act is the first comprehensive regulation of artificial intelligence in the EU. It sets uniform standards for the safe and trustworthy use of AI, minimising risks and creating a clear framework for innovation. Companies that develop or use AI systems must ensure that they comply with the requirements to avoid legal consequences.

At the heart of the EU AI Act is the risk-based categorisation of AI systems, which determines the level of regulatory requirements. AI applications are divided into four risk classes, which we have already presented in a previous blog post.
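
To make this categorisation more tangible, the following Python sketch models the four risk classes and a hypothetical internal system register. The class descriptions are heavily simplified, and the example systems and their mappings are illustrative assumptions, not a legal assessment:

```python
from enum import Enum

class AIActRiskClass(Enum):
    """The four risk classes of the EU AI Act, in simplified terms."""
    UNACCEPTABLE = "prohibited practices, e.g. social scoring"
    HIGH = "strict obligations, e.g. credit scoring or medical devices"
    LIMITED = "transparency obligations, e.g. chatbots"
    MINIMAL = "no specific obligations, e.g. spam filters"

# Hypothetical internal systems mapped to risk classes (illustrative only)
system_register = {
    "cv-screening-tool": AIActRiskClass.HIGH,
    "customer-service-chatbot": AIActRiskClass.LIMITED,
    "email-spam-filter": AIActRiskClass.MINIMAL,
}

for name, risk in system_register.items():
    print(f"{name}: {risk.name} ({risk.value})")
```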

Article 4 of the EU AI Act has applied since 2 February 2025:
'Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used.'

This means that AI literacy is no longer a voluntary additional qualification, but a legal requirement. Employees who develop, operate or use AI must have an appropriate level of knowledge and understanding. This is not about general knowledge of technology, but about application-related competence. The requirements differ considerably from industry to industry and depending on the area of application.




In critical infrastructure, a single wrong decision can have fatal consequences – power outages, chaos, security risks. IT security managers need to know exactly how AI detects threats and where its blind spots lie. After all, misplaced confidence in faulty algorithms can jeopardise the security of supply.

In the banking sector, trust is the foundation. But when black-box models decide on loans, ‘what the AI says’ is not enough. Employees need to understand how these decisions are made – not only to meet compliance requirements, but also to treat customers fairly.

And in healthcare? Here, an algorithmic error can mean the difference between life and death. Doctors need to know when to question AI results – and when to rely on their own expertise instead. After all, unbalanced training data can lead to misdiagnoses, and a lack of understanding has not only ethical but also legal consequences.

AI can do a lot, but it shouldn't have the last word. That's why all employees who work with or on AI must be trained in its fundamentals.

AI literacy is the key to trustworthy AI

Trustworthy AI is created when systems are not only used in accordance with the rules, but also in a comprehensible, fair and responsible manner. Technical knowledge alone is not enough – it requires the ability to critically question AI and make informed decisions.

To use AI safely and responsibly, three key areas of competence are essential:


Figure 1: The three central areas of competence

AI does not make neutral decisions – it reflects the data it was trained on. But is this data truly representative? Or does it contain biases that go unnoticed and influence entire careers, loans or diagnoses? Employees need to know which data drives their AI, where blind spots may lurk and which biases may be creeping in. After all, the greatest danger is not the flawed decision itself, but that no one realises it was flawed.
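
What might such a check look like in practice? The following minimal sketch computes approval rates per group in a small, entirely hypothetical loan dataset; the column names, values and the 20 per cent threshold are illustrative assumptions:

```python
import pandas as pd

# Hypothetical loan decisions; column names and values are illustrative
df = pd.DataFrame({
    "gender":   ["f", "m", "f", "m", "m", "f", "m", "m"],
    "approved": [0,   1,   0,   1,   1,   1,   0,   1],
})

# Approval rate per group: a large gap is a first hint at bias in the data
rates = df.groupby("gender")["approved"].mean()
print(rates)

# Flag the disparity if the gap exceeds a chosen (here: arbitrary) threshold
if rates.max() - rates.min() > 0.2:
    print("Warning: approval rates differ noticeably between groups.")
```

A check like this is only a starting point, but it illustrates the competence in question: knowing that such questions must be asked of the data at all.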

Model understanding and decision traceability are also essential. Not everyone has to be able to program neural networks, but everyone who works with AI should know how a model makes decisions, which factors influence it and when results are comprehensible. Blind trust in AI is just as problematic as excessive scepticism. A sound understanding of the model helps to avoid both.
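
For simple, linear models, this kind of traceability can start with something as basic as inspecting learned coefficients. The sketch below uses scikit-learn on purely synthetic data; the feature names and values are assumptions for illustration only:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Purely synthetic credit data; feature names and values are assumptions
features = ["income_k", "debt_ratio", "years_employed"]
X = np.array([[55, 0.2, 8], [22, 0.7, 1], [40, 0.4, 5],
              [18, 0.9, 0], [60, 0.1, 12], [30, 0.6, 2]])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = loan approved

model = LogisticRegression(max_iter=1000).fit(X, y)

# For a linear model, the signed coefficients show which factors push a
# decision towards approval or rejection, a simple form of traceability
for name, coef in zip(features, model.coef_[0]):
    print(f"{name}: {coef:+.3f}")
```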

Ethical and legal issues round off the necessary skills profile. Employees need to know which rules – such as the EU AI Act – apply to their use of AI and where ethical principles are crucial. Especially in the judiciary, lending and healthcare, AI must not decide alone. Here, human supervision is mandatory to ensure fairness and security.
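
One common way to operationalise this human supervision is a confidence gate: the system acts automatically only above a defined confidence level, and everything else is routed to a person. A minimal sketch, with an assumed threshold and scores:

```python
# Minimal sketch of a human-oversight gate: the system only decides
# automatically above a confidence threshold; everything else goes to
# a person. The threshold and scores are illustrative assumptions.
CONFIDENCE_THRESHOLD = 0.9

def decide(case_id: str, ai_score: float) -> str:
    if ai_score >= CONFIDENCE_THRESHOLD:
        return f"{case_id}: auto-processed (score {ai_score:.2f})"
    return f"{case_id}: routed to human review (score {ai_score:.2f})"

print(decide("loan-0017", 0.95))
print(decide("loan-0018", 0.62))
```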

These areas of expertise are essential for the responsible use of AI. In particular, AI-supported applications that process sensitive data must be handled securely. A lack of AI literacy can lead to employees unwittingly entering confidential information into chatbots or generative AI models, creating data protection risks.
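
Alongside training, a simple technical safeguard can help here: a redaction step that strips obvious identifiers before a prompt leaves the company. The following sketch is illustrative only; the patterns are deliberately simple and far from exhaustive:

```python
import re

# Illustrative (and deliberately incomplete) patterns for sensitive data
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "IBAN":  re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def redact(text: str) -> str:
    """Replace matches with a placeholder before text leaves the company."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Summarise the complaint from max.mustermann@example.com, IBAN DE89370400440532013000."
print(redact(prompt))
```

A filter like this does not replace trained judgement, but it reduces the risk of accidental leaks while employees build that judgement.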

In addition, AI literacy is crucial to building trust in AI systems. Those who are confident in their dealings with AI develop a greater sense of control, make better decisions and promote the acceptance of AI within the company.



Implementing AI literacy in companies

Introducing AI literacy in companies is not a one-off measure, but an ongoing process that needs to be planned strategically. Training, interdisciplinary collaboration and suitable tools are key to ensuring that employees not only operate AI systems, but can also question them critically and deploy them responsibly.

A structured approach is crucial here: how can companies effectively integrate AI literacy into their processes? Which departments are involved? And which methods help to meet the requirements of the EU AI Act?

The first step towards responsible AI use? Targeted, practical training. Because only those who understand AI can use it safely and effectively. This training must be tailored to the specific requirements of different business units.

  • Managers and decision-makers should understand how AI models make decisions, what risks exist and where human control is needed.
  • Employees in specialist departments need to know the limitations of AI systems and how they can use them meaningfully in their day-to-day work – be it in human resources, finance or critical infrastructure.
  • Technical teams need in-depth knowledge of model training, data quality and regulatory requirements to make AI applications safe and transparent.

AI is powerful – but without trained people, it remains an uncontrolled variable.

To ensure that AI is not only deployed but also used responsibly, companies need a smart approach: inventory, internal guidelines, training and audits, meaningfully combined rather than considered in isolation.

Successful AI integration also requires collaboration across departmental boundaries. Regular workshops, clear governance models and joint decision-making processes create the basis for safe and efficient use.

Finally, knowledge only sticks if it can be experienced. Interactive workshops, practical case studies and simulation-based training turn abstract risks into tangible scenarios and give employees confidence in dealing with AI. AI literacy requires more than theory – it requires a system.


Figure 2: Example of how AI literacy can be implemented in a company

A structured framework for AI literacy enables companies to use AI systems securely, transparently and in compliance with the rules.
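
As a small building block of such a framework, the following sketch cross-checks an AI system inventory against training records to surface gaps; all system names, risk labels and records are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    risk_class: str   # e.g. "high", "limited", "minimal"
    users: list[str]

# Employees with completed AI literacy training (illustrative records)
trained = {"alice", "bob"}

inventory = [
    AISystem("cv-screening-tool", "high", ["alice", "carol"]),
    AISystem("customer-service-chatbot", "limited", ["bob", "dave"]),
]

# Audit step: flag users of high-risk systems who lack training
for system in inventory:
    if system.risk_class == "high":
        for user in system.users:
            if user not in trained:
                print(f"Training gap: {user} uses {system.name}")
```

Connecting the inventory, training records and audits in this way is what turns isolated measures into a system.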

Conclusion: AI literacy as the basis for trustworthy AI and sustainable compliance

It is clear that AI literacy is no longer an optional additional qualification, but essential for the safe and compliant use of AI. However, AI literacy is more than a duty – it is an opportunity. Companies that invest in AI expertise at an early stage secure competitive advantages and strengthen the trust of their customers, partners and regulators. Targeted training, interdisciplinary collaboration and clear AI governance minimise risks and promote innovation.

The requirements of the EU AI Act are already in force. Those who act now will set the course for the successful and sustainable use of AI. We support companies with practical training programmes and compliance strategies so that AI is used not only compliantly, but also profitably.


Author Alisa Küper

Alisa Küper has an interdisciplinary academic background in applied cognitive and media studies and completed her PhD at the intersection of computer science and psychology, focusing on human-AI interaction. She now shares her knowledge of trust building and explainable artificial intelligence in an advisory role, with a particular focus on the EU AI Act and on trustworthy, transparent and explainable AI to promote the acceptance and effective use of new AI systems in companies.


Author Paula Johanna Andreeva

Paula Johanna Andreeva began her career in the field of trustworthy AI seven years ago while studying at the University of Oxford. Since then, she has been helping clients build and analyse AI through governance, with a focus on building understanding of the EU AI Act and ensuring the trustworthiness of systems. She is also pursuing a doctorate on ethical data sharing, examining the socio-economic consequences of the EU Data Act, in particular the monetary exchange of data.


