Ethical AI
When people think of advanced artificial intelligence, many picture Hollywood films such as The Terminator, Ex Machina, and Transcendence: films in which machines imbued with artificial intelligence rise up against humans. Such scenarios seem more fiction than fact, yet humanity is hurtling towards a world where machines could influence our lives in myriad ways. Fortunately, industry watchdogs are beginning to re-evaluate the dangers of self-learning artificial intelligence (AI), encouraged by a growing awareness of the need for ethics in the development and deployment of AI. This brings us to the topic of ethical AI.
What is ethical AI?
In a nutshell, ethical AI attempts to tackle the ethical concerns that people have with how AI and robots are designed and used. Today, the field of ethical AI has broadened to include theories about the consciousness and rights of artificial intelligence itself.
Roboethics
Robot ethics, or roboethics, is the concept of designing robots with artificial intelligence using codes of conduct that ensure the automated system responds to circumstances and situations ethically. At its crux, roboethics calls for ensuring that when an automated system makes its own decisions in contact with human beings, those decisions never lead to circumstances where human life, safety, or dignity are put in danger. Roboethics is primarily concerned with the actions of the robot, but it also covers the thoughts and actions of the human developers who create the robot and its AI.
Why is there a need for ethical AI?
For too long we have convinced ourselves that technology is neutral. Sadly, the reality is different. In the past, technology has been used to conduct psychological experiments on social media users by manipulating their emotions and using the resulting data for advertising or political purposes. Here are a few more reasons why ethical AI is so important, along with ways AI can be and has been misused:
- Biased AI: Biased artificial intelligence can reinforce discrimination and put minorities, women, and disadvantaged groups at risk. A study by UNESCO showed that potentially damaging stereotypes are the norm among AI chatbots. In 2019, Amazon scrapped its AI-powered recruitment tool because the system was biased against female applicants, since previous hires had been predominantly male. Such bias leads to inequality and negative outcomes in recruitment, healthcare, education, and many other areas (a minimal bias-audit sketch follows this list).
- Errors in facial recognition: It has been shown that even leading facial recognition technology can mismatch people. For example, in a study by the American Civil Liberties Union (ACLU), the software misidentified 27 professional athletes by matching them to people in a database of criminals.
- Deepfakes: Deepfakes are artificial-intelligence-generated audio or video content created with the intention to mislead. This technology can cause tremendous harm to society as it contributes to the proliferation of cybercrime and misinformation. Deepfakes have already been used for targeted social engineering attacks, and in 2019 a journalist created a deepfake video of Mark Zuckerberg with a popular smartphone app.
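To make the recruitment example concrete, here is a minimal sketch of how a team might audit a hiring model's decisions for group bias. The candidate records, the decisions, and the 0.8 rule-of-thumb threshold are illustrative assumptions rather than a prescribed audit procedure:

```python
# A minimal sketch of a bias audit on a hiring model's decisions.
# The candidate records and decisions below are hypothetical stand-ins;
# the point is the group-level comparison, not the model itself.
from collections import defaultdict

def selection_rates(candidates, decisions, group_key):
    """Return the fraction of positive decisions per demographic group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for cand, decision in zip(candidates, decisions):
        group = cand[group_key]
        totals[group] += 1
        selected[group] += int(decision)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest to the highest selection rate (1.0 = parity).
    A common rule of thumb flags ratios below 0.8 for human review."""
    return min(rates.values()) / max(rates.values())

# Hypothetical audit data: model decisions on a held-out candidate pool.
candidates = [
    {"id": 1, "gender": "female"}, {"id": 2, "gender": "male"},
    {"id": 3, "gender": "female"}, {"id": 4, "gender": "male"},
    {"id": 5, "gender": "male"},
]
decisions = [0, 1, 0, 1, 1]   # 1 = shortlisted by the model

rates = selection_rates(candidates, decisions, "gender")
print(rates)                    # {'female': 0.0, 'male': 1.0}
print(disparate_impact(rates))  # 0.0 -> strong evidence of bias
```

In practice such checks would run on far larger held-out samples and across several protected attributes, but even this simple comparison makes a skewed system visible.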
Ethical AI benchmarks
Here are some moral concepts that should be common among guidelines for ethical AI:
- Transparency: The decision-making process of the artificial intelligence system must be transparent to its users (a minimal sketch of explaining a decision follows this list).
- Nonmaleficence: This term, usually encountered in the medical field, simply means "to do no harm". The developers of AI-powered algorithms must ensure that the decisions taken by the system do not cause mental or physical harm to users.
- Justice: AI systems must be monitored regularly and closely to ensure that bias does not develop. AI systems should also be accessible to people of all genders and races to ensure equality.
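As an illustration of the transparency principle, the following sketch returns a human-readable explanation alongside every automated decision. The feature names, weights, and threshold are hypothetical; the point is that the factors behind a decision are surfaced to the user rather than hidden:

```python
# A minimal sketch of decision transparency: alongside every automated
# decision, the system returns the per-feature contributions that produced
# it, so users can see why they were scored the way they were.
# The weights, threshold, and feature names are hypothetical.
WEIGHTS = {"years_experience": 0.6, "relevant_skills": 1.2, "referral": 0.3}
THRESHOLD = 3.0

def score_with_explanation(applicant):
    contributions = {
        feature: WEIGHTS[feature] * applicant.get(feature, 0)
        for feature in WEIGHTS
    }
    total = sum(contributions.values())
    return {
        "decision": "shortlist" if total >= THRESHOLD else "reject",
        "score": round(total, 2),
        # Sorted so the user sees the most influential factors first.
        "explanation": sorted(contributions.items(),
                              key=lambda kv: kv[1], reverse=True),
    }

# Example: shortlisted, with years_experience as the biggest factor.
print(score_with_explanation(
    {"years_experience": 4, "relevant_skills": 1, "referral": 0}))
```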
How to build an ethical AI program
Here are some steps that can be used to create a customised, sustainable, and scalable ethical AI program:
- The first step is to identify any existing infrastructure that the AI program can leverage. It is ideal to build an AI ethics program on the authority of existing infrastructure, such as a governance board that meets to consider and review privacy and data-related risks. If such a body does not already exist, companies should set up a dedicated committee for ethics-related issues.
- Companies should create a customised ethical framework that includes an elaboration of the company’s ethical standards. It should also establish relevant key performance indicators as well as a quality assurance program to appraise the effectiveness of the strategy. Moreover, the framework can show how ethical risk reduction is worked into business operations.
- Learn how
industries such as healthcare approach AI ethically. Regulators, lawyers,
medical professionals, and medical ethicists have explored the meaning of
various topics such as informed consent, data privacy, and so on.
- Organisational awareness should be built and increased. There was a time when companies hardly paid attention to cybersecurity; those times are over. Cybersecurity matters to every company, and every employee is expected to understand its risks. In the same way, all departments and employees that come into contact with AI products should be made well aware of the company’s ethics framework and guidelines.
- Employees should be encouraged, both formally and informally, to identify AI ethical risks. This encouragement can take the form of incentives such as financial rewards.
- Companies need to keep track of the impact of their AI technology. Thorough research and review should be done to ensure that the product is used ethically. Stakeholders should be identified early on and consulted regularly to find out how the product affects them (a minimal decision-logging sketch follows this list).
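As a rough illustration of the last step, the sketch below logs every AI decision to an audit file and checks simple ethics KPIs against thresholds taken from a company's framework. The KPI names, thresholds, and log format are assumptions made for the example:

```python
# A minimal sketch of impact tracking: every prediction is appended to an
# audit log, and a periodic review compares simple ethics KPIs against
# thresholds from the company's framework. Names and thresholds are
# hypothetical placeholders.
import json
import time

AUDIT_LOG = "ai_decisions.log"
KPI_THRESHOLDS = {"complaint_rate": 0.02, "override_rate": 0.10}

def log_decision(model_version, inputs, output, reviewer_override=None):
    """Append one decision record (as a JSON line) to the audit log."""
    record = {
        "ts": time.time(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "override": reviewer_override,  # set when a human reversed the AI
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")

def review_kpis(records, complaints):
    """Compare KPIs against thresholds; assumes records is non-empty."""
    total = len(records)
    kpis = {
        "override_rate": sum(r["override"] is not None for r in records) / total,
        "complaint_rate": complaints / total,
    }
    return {k: (v, "OK" if v <= KPI_THRESHOLDS[k] else "ESCALATE")
            for k, v in kpis.items()}

# Example usage with a hypothetical decision.
log_decision("v1.3", {"applicant_id": 17}, "reject")
```

A log like this also supports the regular monitoring called for under the Justice principle above.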
The future of ethical AI
Google has called for governments around the world to increase the regulation of AI to prevent misuse, mass surveillance programs, and human rights violations. Today, there are several global initiatives committed to the cause of ethical AI. Regulation, frameworks, and guidelines can be used to ensure that AI makes the world safer for everyone.