Three Questions and Answers: How to develop an ethical AI?

When AI systems deal with people, it is important that their calculations are not based on prejudice and that their decisions do not deviate from ethical principles. In an interview, Gery Zollinger, Head of Data Science & Analytics at Avaloq, explains what ethical AI should do and how it can be developed.



Gery Zollinger is Head of Data Science & Analytics at Avaloq, a provider of wealth management technology and services for financial institutions.

Mr. Zollinger, the EU is currently working on regulations for artificial intelligence; the buzzwords Explainable AI, Trustworthy AI and Responsible AI run through research, legislation and marketing. You are now bringing ethical AI into play. What does it mean?

Basically, the point is that an AI should act without prejudice and without unfounded, irrational discrimination. So far, there is no globally uniform definition of – or globally valid guidelines for – what constitutes an ethical AI. What most definitions and research reports have in common, however, is that they assess an AI solution by whether it prevents harm and meets criteria such as transparency, justice and fairness, accountability, and data protection and privacy. Each of these points leaves considerable room for interpretation, and the concrete implementation always depends on which person or team is responsible for it.

What is justified or rational discrimination in this context?

In specific cases, it is important that the AI system can discriminate and, for example, evaluate the source, quality or timeliness of data. In statistics, discrimination is just another word for distinguishing – there is nothing negative about it. Every classification model works with statistical discrimination, for example whether someone is creditworthy or not, or whether someone is interested in ESG investing. In this context, this is intentional or justified and rational discrimination.

But now specifically: How do companies have to develop AI algorithms so that they are unprejudiced – and how can data sets be cleansed of the stereotypes and discrimination they have already absorbed?

Roughly three phases can be distinguished in which the course for an ethical AI is set. First, before actually developing the AI model, it is about cleaning up the input data set. Second, it is important to eliminate discriminatory features during development. And third, after developing the AI model, it is important to monitor the results. In the first, the pre-dev phase, the greatest challenge is to even discover potential discrimination in a data set. Here you can work well with hypotheses and test them statistically. If necessary, one can remove such discriminating observations as outliers from the training data set.
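The pre-dev step – forming a hypothesis about discrimination in the data and testing it statistically – can be sketched as follows. This is a minimal illustration, not Avaloq's method; the two-proportion z-test is one common choice, and the numbers (approval counts by gender in a hypothetical loan data set) are invented for the example.

```python
# Hypothetical pre-dev check: do loan-approval rates in a training set
# differ by gender more than chance would allow? (two-proportion z-test)
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z statistic for H0: both groups share one underlying approval rate."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Toy numbers (assumed, not from the interview):
# 400 of 1000 women approved vs. 520 of 1000 men approved.
z = two_proportion_z(400, 1000, 520, 1000)
print(abs(z) > 1.96)  # True -> reject H0 at the 5% level: inspect the data set
```

If the test flags a significant gap, the affected observations can then be examined and, as described above, removed from the training data where the gap turns out to be unjustified.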

In the second, the dev phase, it is important to exclude all potentially discriminating variables, also called features, from the development of the AI algorithm. Gender would be a typical feature here. But one must also consider other features that show a high correlation with the potentially discriminating variable. Finally, the post-dev phase is about determining whether the AI model might become discriminatory over time. This has nothing to do with the AI algorithm changing. Rather, the output of the AI model can change if the input data changes over time. A technical monitoring framework can diagnose this by continuously observing the output of the AI model depending on discriminating variables such as gender.
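The dev-phase point about correlated proxy features can be sketched like this. The feature names, values and the 0.8 threshold are all assumptions for illustration; a Pearson correlation against the protected attribute is just one simple proxy detector.

```python
# Hypothetical dev-phase sketch: besides dropping the protected attribute
# itself (e.g. gender), drop any feature strongly correlated with it,
# since such a feature would act as a proxy for the protected attribute.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def screen_features(features, protected, threshold=0.8):
    """Keep only features whose |correlation| with the protected
    attribute stays below the (assumed) threshold."""
    return {name: vals for name, vals in features.items()
            if abs(pearson(vals, protected)) < threshold}

gender = [0, 0, 1, 1, 0, 1]                     # protected attribute, encoded
features = {
    "income":    [50, 30, 40, 55, 45, 35],      # weakly related to gender
    "job_title": [0, 0, 1, 1, 0, 1],            # proxy: identical to gender
}
kept = screen_features(features, gender)
print(sorted(kept))  # ['income'] -> job_title is dropped as a proxy
```

The same correlation machinery can serve the post-dev phase: recomputing group-wise output statistics on each new batch of input data and alerting when the gap between groups starts to grow.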

Mr. Zollinger, thank you very much for your answers. An extended version of the interview with Gery Zollinger can be found in the upcoming iX 12/2022.

In the “Three Questions and Answers” series, iX wants to get to the heart of today’s IT challenges – whether it’s the user’s point of view in front of the PC, the manager’s point of view or the everyday life of an administrator. Do you have suggestions from your daily practice or that of your users? Whose tips on which topic would you like to read in a nutshell? Then please write to us or leave a comment in the forum.

More from iX Magazine


