Ethical AI is an increasingly discussed topic. But as long as universities leave ethics out of their curricula and we leave ethical standards up to CEOs, we are far from achieving ethical AI. We talked about AI ethics with DN19 speakers Gunay Kazimzade, doctoral AI researcher at TU Berlin, and Vince Madai, senior medical AI researcher at Charité.
While AI has been implemented in every sector of society over the past years, ethical regulations have only slowly been established. Last April, the European Union published its ethical AI guidelines, and companies like Microsoft and Google have their own principles for ethical AI. Still, many countries, including the United States and China, haven't implemented ethical AI guidelines yet.
But it's one thing to establish ethical guidelines, and another to actually create ethical AI. Sometimes a company only discovers late that it has created unethical AI, because the AI merely amplifies a biased reality within the company. That happened, for example, with Amazon's AI hiring system, which favored men over women for leadership and tech positions.
Gunay Kazimzade, a researcher at the Weizenbaum Institute for the Networked Society, believes that AI is biased because we humans are biased. “Fighting bias should start from being aware of the unconscious and conscious human biases,” she says. In her research, she sees bias slip into every part of the AI pipeline: at the data mining, model design, and application levels. Many encoded stereotypes originate even earlier, in the decision-making process.
Even though some mitigation methods sold by tech companies are helpful, they are not yet sufficient, she says: “Bias is a very use-case specific problem, so the existing methods don’t always work.” Instead, she advises companies to approach their AI with a critical eye: “We should question the decisions one can make in every step of the AI pipeline and involve diverse teams in design and development processes.”
Moving too fast to keep up
Still, the implementation of AI is moving so fast that it can be difficult to step back and look critically at what we are creating. But according to Kazimzade, we really do need to take the time to assess: “There is always a danger in innovations that change the way humans live, work and operate,” she says. “We should always question those innovations, without blindly trusting and accepting them into mass production.”
Vince Madai, a senior researcher at Charité, agrees with her view: “I fear that the current political processes in our representative democracies will soon not be able to keep up with the technological progress,” he says. Governmental oversight is crucial, he argues: “As much as any company initiative aimed at ethical AI is laudable, ethical AI guidelines cannot be left in the hands of the companies, since ethical AI cannot rely on a CEO’s decision to terminate or change them.”
As in any other field, says Madai, the course of AI has to be determined by regulatory bodies in dialogue with policymakers, NGOs and the public. “Key technologies which affect many people and may lead to casualties in case of misuse are usually regulated on a government level,” he says. “Healthcare and transportation are key examples here. We have just seen recently in aviation that the moment governmental regulatory pressure decreased, fatal mistakes happened.”
Healthcare traditionally has stricter regulatory oversight, and the sector will therefore be less affected by biased AI in the future, Madai believes. Regulatory bodies for medical products are already developing AI guidelines. Where no such regulatory body exists, the consequences could be terrifying. “In the legal field, for example, AI might be used to influence the outcome of trials without any systematic oversight or regulation,” he says.
Awareness is key
Madai is concerned that we are not doing enough to establish ethical AI. He thinks we are in dire need of broad AI education, because most of the public, policymakers and the media do not understand the technology, with all its promises and challenges. “There is, in my opinion, an overlooked danger that this will increase the likelihood of authoritarian responses, which may go hand in hand with the abuse of AI for public surveillance, police and intelligence agency work.”
It is Gunay Kazimzade’s mission at the Weizenbaum Institute to produce research findings that can steer government regulations in the right direction. “Strategies and regulations should be developed in an informed way with AI experts, institutions and organizations involving an interdisciplinary group of experts.”
She believes raising awareness is essential to secure ethical AI. “Currently there are few programs for future data scientists and general computer scientists where ethics is included in the curriculum,” she says. “When we conduct experiments with data scientists, or simply discuss this problem with them, only a minority are aware of ethics in AI.”
About the author
This article was published by our friends at Data Natives, an online community of data science, machine learning, and all data-infused tech enthusiasts. Attend their upcoming Data Natives Conference on the 25–26th of November and use code DN19_ArcticStartup to get 25% off your ticket.