23 June 2020, by Volker Illguth
Trust AI – A call for greater trust in AI
While extremely ambitious AI projects are currently being launched in many industries, scepticism about the use of AI applications remains widespread in large parts of the population. Public debate tends to invoke worst-case scenarios for the future, while nuanced discussion of feasible use cases is lacking. That alone is reason enough to take a closer look at the situation.
The reason for all the scepticism
People judge new technologies by the benefit they promise: is there something to be gained, or something to be lost? When a technology is not yet understood on a fundamental level, people approach it first and foremost in terms of avoiding risk. In most cases, it is too unclear what happens to personal data and, in particular, how it is actually used, so a certain reservation is understandable. But think back: remember when people were reluctant to rely on smartphone navigation because phones lacked real-time traffic and roadworks updates? Today, traffic forecasts from mobile services are more accurate than those from some built-in navigation systems. And while older generations often had reservations about sharing information on social media or using online communication services, Covid-19 has changed this across the board. So why are people still so sceptical about AI?
Outside of your comfort zone – employment aspects vs professional change
AI is often blamed for putting a variety of jobs at risk. A recent study on robotic process automation (RPA) and robotics examined figures from 1990 to 2016 and came up with astonishing results: companies using robots increased their production by an average of 20 to 25 per cent, while their number of employees also grew by 10 per cent. The use of RPA also led to a redistribution of market shares, which underlines its competitive relevance.
Now, RPA is not AI. The Institute for Applied Work Science assumes that digital networking and the use of artificial intelligence will change about 75 per cent of all work systems in the areas of recognition, processing, interaction and control. The question is therefore which work it will make sense for humans to do in the future and which for AI components. It seems certain that negative employment effects are to be expected for executing or analytical activities in particular: the simpler the activity, the greater the risk of being affected. The decisive factor in the future will therefore be which packages of measures are used to counteract this development and to support employment through education and further training opportunities.
Companies with a strategic HR department can potentially solve important allocation problems by supporting employees from an early stage and using targeted qualification measures to deploy them in areas where they can work more efficiently. This also gives many employees the opportunity to grow into new fields of expertise and new professional roles and thus find rewarding new perspectives. This is how the number-crunching administrator becomes an IT project manager, or the paper-bound desk officer becomes an agile master. This certainly seems possible. In any case, general labour market trends do not support the hasty conclusion that the use of AI will be dominated by negative developments.
The tale of the super AI
Based on the current state of the art, AI uses algorithms that operate according to mathematical optimisation rules. Machine learning techniques enable an application to decide how to handle the next case based on the experience gained from the large number of cases analysed before it – a classification task, or a decision by analogy. What all current AI applications have in common, however, is that it is still human beings who decide on the quality delivered by the application. In other words, sovereignty over the machine remains with a decision-maker who is responsible for the quality assurance of the machine's output. The supervisory authorities also adhere to this proven principle of human responsibility. BaFin, for example, by its own account does not want to accept models or algorithms that function as a black box. For AI to 'take on a life of its own', it would require far greater leaps in innovation than the technology has seen up to this point.
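To make the 'decision by analogy' idea concrete, here is a minimal sketch in Python (scikit-learn and synthetic data are assumed, since the article names no specific application): a k-nearest-neighbours classifier decides each new case by looking at the most similar previously analysed cases.

```python
# Minimal sketch of 'decision by analogy': a k-nearest-neighbours classifier
# assigns the next case to the class suggested by its most similar precedents.
# The synthetic data stands in for the 'large number of previously analysed cases'.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Each new case is compared with its five closest known cases
model = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
print("accuracy on unseen cases:", model.score(X_test, y_test))
```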
XAI – The new AI that explains it all
In the past, AI applications were rightly referred to as a black box, since it was not apparent to the user on what basis the software arrived at a decision. This argument has been repeated time and time again, to the point that it has become an eternal mantra. From this, companies and authorities such as BaFin derived the requirement that AI algorithms must be developed carefully and transparently, and that the decisions made by the AI must be understandable. Today, careful and transparent development is ensured by a methodical approach specifically geared towards AI. The issue of understanding AI decisions has also been addressed: new algorithms have been developed that essentially work their way back from the final decision to the starting point of the process and identify the influencing factors or variables on which the decision is based. We are talking about a further development of AI: explainable artificial intelligence (XAI). This allows the decision made by the AI to be substantiated and retraced in each individual case, which is particularly important for audit purposes, for example.
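As a rough illustration of this 'working backwards', the sketch below uses permutation importance from scikit-learn, one simple attribution technique among several (dedicated XAI libraries such as SHAP or LIME produce per-case explanations). The model and data set here are illustrative choices, not those of any authority.

```python
# Minimal sketch of tracing a model's decisions back to the influencing variables.
# Permutation importance shuffles each input feature in turn and measures how much
# the model's accuracy drops: the bigger the drop, the more the decisions relied
# on that variable.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

result = permutation_importance(model, data.data, data.target,
                                n_repeats=10, random_state=0)
ranked = sorted(zip(data.feature_names, result.importances_mean),
                key=lambda pair: -pair[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")  # top variables behind the model's decisions
```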
A force for good – example #1: AI as a diagnostic tool
Let's make our case watertight and take a look at what AI is already doing today. We'll start with an example from healthcare: cancer is the second most common cause of death in Germany, and lung cancer is one of the most common types. Researchers are currently developing an AI-based assistance system that will help doctors make decisions in the foreseeable future. It relies on the AI-assisted analysis of CT images of the lungs, based on neural networks trained on thousands of previous cases. In this way, local deviations from healthy tissue can be detected quickly. This should significantly improve the prevention, early detection and treatment of lung cancer in particular. The same applies to the early detection of cancers of the digestive tract and of breast cancer.
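To give a flavour of what such a system looks like under the hood, here is a minimal sketch of a convolutional classifier for CT image patches. PyTorch is assumed, and the architecture, patch size and 'healthy vs suspicious' labels are illustrative, not the researchers' actual model.

```python
# Minimal sketch of a convolutional network that labels CT image patches.
import torch
import torch.nn as nn

class PatchClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 16 * 16, 2)  # two classes: healthy / suspicious

    def forward(self, x):
        x = self.features(x)
        return self.head(x.flatten(1))

model = PatchClassifier()
dummy_ct_patch = torch.randn(1, 1, 64, 64)  # one greyscale 64x64 CT patch
logits = model(dummy_ct_patch)
print(logits.softmax(dim=1))                # class probabilities for the patch
```

In practice such a network would be trained on thousands of annotated patches, so that local deviations from healthy tissue show up as high 'suspicious' probabilities.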
A study on the early detection of colorectal cancer showed that AI detected the disease approximately 20 per cent more reliably than conventional endoscopy alone. The rate of correct recognition was 96 per cent, achieved in real time, meaning the results could be determined during the examination itself. Only recently, an AI solution for breast cancer screening received CE certification in Germany for the first time, under Class IIb of the German Act on Medical Devices (Medizinproduktegesetz, MPG). This certification can be considered a milestone for AI-based diagnostic tools. More than a few medical experts believe that AI will become a 'secret weapon in the fight against cancer' in the coming years.
A force for good – example #2: saving the environment with AI
Companies are already using intelligent algorithms to increase the energy efficiency of machines. Machines are registered in an energy management platform in order to detect deviations in energy consumption and to absorb possible load peaks. All of this is having a positive impact – for example, one company was able to reduce the carbon dioxide emissions of its plants by about ten per cent within two years. In addition, the combination of sensor technology and AI can be used to forecast and reduce energy consumption in the long term.
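A minimal sketch of the deviation-detection step might look like this. The hourly kWh readings and the three-sigma threshold are assumptions for illustration; real platforms combine many sensors and more sophisticated models.

```python
# Minimal sketch: flag readings that deviate sharply from the recent baseline.
import numpy as np

readings = np.random.normal(100.0, 5.0, size=500)  # hypothetical hourly kWh readings
readings[300] = 160.0                              # an injected load peak

window = 48  # compare each reading with the previous 48 hours
for i in range(window, len(readings)):
    baseline = readings[i - window:i]
    if abs(readings[i] - baseline.mean()) > 3 * baseline.std():
        print(f"hour {i}: consumption {readings[i]:.1f} kWh deviates from baseline")
```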
In another example, a vineyard suffering from drought collected all available information on soil moisture, groundwater levels, temperature and wind in a central computer. Satellite, weather and climate data were added so that an AI system could learn to identify the important relationships. The AI calculates the optimal amount of irrigation for each individual vine, making it possible to determine precisely how much water is needed to irrigate the entire vineyard. As a result, water use fell by 25 per cent while the harvest yield increased by 30 per cent. AI can therefore make a significant contribution to sustainability and resource efficiency as well as to environmental and climate protection.
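As a rough illustration of the vineyard case, the sketch below trains a regression model to map sensor readings to the water a vine needs. The feature set, synthetic ground truth and units are assumptions, since the article does not describe the actual system.

```python
# Minimal sketch: learn per-vine water demand from sensor readings.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 1000
soil_moisture = rng.uniform(0.1, 0.5, n)
temperature = rng.uniform(10, 40, n)
wind = rng.uniform(0, 15, n)
# Hypothetical ground truth: drier, hotter, windier vines need more water
water_needed = (10 - 15 * soil_moisture + 0.2 * temperature
                + 0.1 * wind + rng.normal(0, 0.5, n))

X = np.column_stack([soil_moisture, temperature, wind])
model = RandomForestRegressor(random_state=0).fit(X, water_needed)

# Predict the irrigation amount for one vine from today's sensor readings;
# summing the per-vine predictions gives the water budget for the whole vineyard.
print(model.predict([[0.15, 32.0, 5.0]]))  # litres per vine (illustrative unit)
```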
A force for good – example #3: using AI as an early warning system for pandemics
Given the current Covid-19 pandemic, it makes sense to take a look at this use case, too: a company based in Canada has started predicting epidemics, relying on a combination of artificial and human intelligence to detect them early. The AI algorithm independently searches the Internet, scanning regional news in about 65 languages as well as various databases and health alerts. Information from forums and blogs is also included, as these often contain early references to anomalies. The results produced by the AI are then evaluated by experts, who issue a warning if the signal appears sufficiently plausible after all (scientific) points of view have been considered.
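The anomaly-flagging step might be sketched roughly as follows. Keywords, data and threshold are hypothetical; the real system scans sources in about 65 languages and hands flagged signals to human experts.

```python
# Minimal sketch: count disease-related mentions in news headlines per day and
# raise a warning when the count jumps well above its recent baseline.
from collections import Counter

KEYWORDS = {"pneumonia", "outbreak", "unknown virus"}  # hypothetical watch list

def mentions_per_day(headlines_by_day):
    """Count headlines per day containing any watched keyword."""
    counts = Counter()
    for day, headlines in headlines_by_day.items():
        counts[day] = sum(any(k in h.lower() for k in KEYWORDS) for h in headlines)
    return counts

def flag_anomalies(counts, factor=3.0):
    """Flag days whose mention count is far above the running average."""
    days = sorted(counts)
    for i, day in enumerate(days[1:], start=1):
        baseline = sum(counts[d] for d in days[:i]) / i
        if counts[day] > factor * max(baseline, 1.0):
            yield day, counts[day]

sample = {
    "2019-12-28": ["markets rally", "local pneumonia case reported"],
    "2019-12-29": ["weather update", "sports results"],
    "2019-12-30": ["pneumonia cluster in Wuhan", "unknown virus suspected",
                   "hospitals report pneumonia surge", "outbreak fears grow"],
}
for day, n in flag_anomalies(mentions_per_day(sample)):
    print(f"{day}: {n} suspicious reports, escalate to human analysts")
```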
For example, the AI predicted the current Covid-19 pandemic nine days before the WHO issued its first warning. The time gained here is extremely valuable for preventive measures, since it is almost the length of a regular quarantine cycle. In addition, the AI also correctly predicted, using airline ticket and booking data among other sources, where the virus, then referred to as the Wuhan virus, was likely to spread.
Conclusion
The technical possibilities of AI are already suitable for implementing promising use cases with undeniable benefits. Although there is currently no regulatory framework specific to AI, the critical discussion usually centres on the protection of personal data. In Germany, the Federal Data Protection Act (Bundesdatenschutzgesetz, BDSG) has always provided a comparatively high level of protection, and this has been extended by the General Data Protection Regulation (GDPR). The GDPR contains numerous principles that are also suitable for establishing a trustworthy legal basis for AI use cases.
But what is still needed to help AI make its breakthrough? Focus and courage: the focus to set the right priorities among the variety of use cases, and the courage to invest in this technology. However, the digital transformation through AI can only be shaped strategically if it is accompanied by a targeted personnel and further training policy. Employee and user confidence will grow over time as people see the benefits of AI applications without losing control of their personal data. If development principles and the traceability of AI decisions are taken seriously, scepticism about allowing AI into broader fields of application will dissipate.
We too are all about AI: find out more about what AI means for specific industries and take a look at what AI is already changing here and now. Our articles, interviews, videos and events on AI at ki.adesso.de/en/ will make sure you stay up to date.