The term artificial intelligence (AI) is ubiquitous these days and is used so freely that the boundaries of its definition are becoming increasingly blurred. Smartphone manufacturers tout AI when their devices automatically sort users' photos into albums. Car manufacturers tout AI when their navigation systems route around the next traffic jam. And start-ups tout AI to attract investors.
As a user, it's easy to lose track, and to resort to mockery when an AI fails to live up to its advertising promises and makes the most foolish mistakes. The gap between expectation and reality is already built into the vague term "intelligence": it suggests that a machine could be smarter than a human. In reality, that holds only for very narrow applications and depends heavily on the data an AI was trained with and on which algorithms are actually behind it.
There are nevertheless numerous examples of AI increasing productivity, for instance when detecting incorrectly installed car parts or sorting out toxic grain kernels. This growth potential is why many companies pour enormous resources into developing AI systems, hoping for an edge in international competition. And they would rather not have lawmakers dictating terms to them.
However, individual AI successes cannot be generalized. If you want to drive a nail into the wall, you use a hammer. But a hammer is poorly suited to pulling the nail back out; pliers work better. The same differentiation applies to AI and machine-learning systems: sometimes a problem is best solved with a deep-learning approach, sometimes a classic statistical method is the right fit, and sometimes every form of AI is out of its depth. Good developers don't deploy AI at any price, but with a sense of proportion.
However, society and those affected can hardly understand how exactly a system labeled AI classifies data, because "artificial intelligence" has established itself as an umbrella term for almost all machine-learning methods: from classic statistical techniques such as decision trees to artificial neural networks and deep convolutional neural networks (deep CNNs). The latter made modern image and speech recognition possible in the first place, because they derive complex features from training data on their own.
Incidentally, the EU defines the term particularly broadly in its draft "Artificial Intelligence Act", which is intended to regulate the technology: the AI Act counts all machine-learning techniques, statistical methods, and logic- and knowledge-based approaches as AI. In doing so, it sows as much confusion as companies that label every conceivable kind of software AI for marketing purposes.
It is therefore more precise to speak of classic machine learning or of artificial neural networks: both ultimately go back to statistical methods with similar basic principles. This shows how limited current AI approaches are and how much their training regimes differ from the way a human learns.
Transparency vs Efficiency
This does not make AI any less valuable for companies, but it deconstructs AI's apparent superiority. That matters all the more as business and government plan to delegate ever more high-risk tasks to AI systems, tasks in which errors can have fatal consequences. This calls for a sober view of the technology and the will to make the process comprehensible, from training to result. Otherwise, people will pay unreasonably high insurance premiums or be arbitrarily rejected when applying for jobs.
Those affected must always be able to understand why a decision was made about them. Under the EU's planned AI Act, this is to apply throughout Europe in the future. Depending on the risk level, only relatively simple algorithms may be used in certain areas, or decisions must be left to humans. That may be less efficient than automation with deep neural networks, but humans retain control.
This is important, because an AI can easily learn systematic errors if it is trained on poorly curated data. Even small image artifacts or watermarks in large training databases are enough to lead an AI astray. The resulting errors can have fatal consequences, for example when assessing melanomas or selecting targets for air-defense systems.
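How such a shortcut arises can be sketched with a deliberately tiny toy model (the data and the perceptron here are hypothetical illustrations, not from the article): if a spurious feature such as a watermark correlates perfectly with the label in the training data, the model learns the watermark instead of the genuinely relevant feature, and then fails on unwatermarked inputs.

```python
# Toy training set (hypothetical): each sample is ([watermark, relevant], label).
# Every positive training sample carries the watermark; the relevant
# feature alone does not separate the classes cleanly.
train = [
    ([1, 1], 1),   # positive, watermarked
    ([0, 0], 0),   # negative
    ([0, 1], 0),   # negative that shares the relevant feature
]

def train_perceptron(data, epochs=10, lr=0.1):
    """Classic perceptron learning rule on two features plus a bias."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = y - pred
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

w, b = train_perceptron(train)

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# The model has latched onto the watermark: a positive sample without
# it is missed at deployment time.
print(predict([1, 1]))  # watermarked positive -> 1
print(predict([0, 1]))  # same object without watermark -> 0
```

The model ends up weighting the watermark feature more than the relevant one, which is exactly the failure mode described above: the error is invisible on the (watermarked) training data and only surfaces in the field.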
These problems and their potential solutions must not remain the preserve of a few scientists and computer scientists; a broad social debate is needed. Decision-makers on executive boards and in politics in particular must subject AI systems to critical evaluation and carefully weigh the advantages against the disadvantages. Only then can society benefit from the technology's strengths without losing control to its opacity.
In c't 17/2022 we examine what artificial intelligence actually achieves today. We present apps and gadgets for the holidays and test bicycle navigation systems so you never take a detour again. Also tested: energy-cost meters for tracking down power guzzlers in the household, web whiteboards for digital meetings, and inverters for balcony solar systems. The current issue of c't also explains how the James Webb Space Telescope works.