
Each of us makes numerous decisions daily that touch on essential ethical issues. Our everyday choices between good and evil are largely habitual; many of them pass unnoticed. In the world of artificial intelligence, however, morality is becoming increasingly problematic. Can robots sin?

Algorithm-based technologies, machine learning and process automation have reached a point where questions about morality, and the answers we give to them, may seriously affect the future of all our modern technologies and even our entire civilization.

The development of artificial intelligence rests on the assumption that the world can be improved. People may be healthier and live longer, customers may be ever happier with the products and services they receive, driving may become more comfortable and safer, and smart homes may learn to understand our intentions and needs. Such a possibly utopian vision had to crystallize in the minds of IT system developers to make possible the huge technological advances that are still continuing. When we finally found that innovative products and services (computers that understand natural language, facial recognition systems, autonomous vehicles, smart homes and robots) can really be made, we began to have doubts and misgivings, and started to ask questions. Tech companies realize that their abstract, intangible products (software, algorithms) inevitably entail fundamental, classic and serious questions about good and evil. Below are a few basic ethical challenges that sooner or later will force us to make definitive choices.

Revolution in law

Large US-based law firms have recently begun working closely with ethicists and with the programmers who develop new algorithms on a day-to-day basis. Such activities are driven largely by initiatives from US politicians who are increasingly aware that legal systems are failing to keep up with technological advances. I believe that one of the biggest challenges for large communities, states and nations is to modify their legislative systems so that they regulate artificial intelligence and respond to the major issues it raises. We need this to feel safe and to allow entire IT-related fields to continue to grow. Technology rollouts in business must not rely exclusively on intuition, common sense and the rule that "everything which is not forbidden is allowed". Sooner or later, the absence of proper regulations will claim victims, not only among innocent people but also among today's key decision-makers.

[Image: Foxconn factory production line]

Labor market regulation

The robotization of entire sectors of industry is now a fact of life. Fields such as logistics, big data, and warehousing are poised to steadily increase the number of installed industrial robots. There is a good reason why the use of robots is the most common theme of artificial intelligence debates. Such debates are accompanied by fears that robots will take human jobs. What can be done to keep the prospect of a robot takeover from frightening people? And how do we help those who will soon see their jobs, or at least aspects of them, done by machines? Although automation and robotization benefit many industries, they may also give rise to exclusion mechanisms and contribute to greater social inequality. These are the true challenges of our time, and we can't pretend they are inconsequential or irrelevant.

Autonomous vehicles make choices

A few months ago I wrote about the ethical issues raised by the appearance of autonomous vehicles on our roads. I raised the issue of cars having to make ethical choices on the road, which in turn raises the question of what responsibility this places in the hands of specific professionals, such as the programmers who write the algorithms and the CEOs who run car manufacturing companies. Take the scenario of a child running into the street as a self-driving car approaches. The car would face a choice, which it would make according to the algorithms hardwired into its system. Theoretically speaking, there are three options available in what we might call "ethical programming." One holds that what counts in an accident or a threat to human life is the collective interest of all participants (the driver, the passengers, and the child in the road). Another seeks to protect the lives of pedestrians and other road users. Yet another gives priority to protecting the lives of the driver and the passengers. The algorithms that determine how the car responds will have to take these ethical questions into consideration, and that puts manufacturers in the position of having to protect their algorithm-writing programmers against liability.
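To make the three options concrete, here is a minimal sketch of how such priorities might look once "hardwired" into software. Everything in it is hypothetical: the policy names, the maneuvers and the risk scores are invented for illustration, not taken from any real manufacturer's code.

```python
from enum import Enum

class EthicalPolicy(Enum):
    """Three hypothetical priority schemes a manufacturer might hardwire."""
    COLLECTIVE = "minimize total harm to all accident participants"
    PEDESTRIANS_FIRST = "protect pedestrians and other road users"
    OCCUPANTS_FIRST = "protect the driver and the passengers"

def choose_maneuver(maneuvers, policy):
    """Pick the maneuver that best satisfies the configured policy.

    Each maneuver carries estimated risk scores (0.0-1.0, lower is
    safer) for the car's occupants and for pedestrians. All numbers
    are invented for illustration.
    """
    if policy is EthicalPolicy.COLLECTIVE:
        key = lambda m: m["occupant_risk"] + m["pedestrian_risk"]
    elif policy is EthicalPolicy.PEDESTRIANS_FIRST:
        key = lambda m: (m["pedestrian_risk"], m["occupant_risk"])
    else:  # OCCUPANTS_FIRST
        key = lambda m: (m["occupant_risk"], m["pedestrian_risk"])
    return min(maneuvers, key=key)

# A child runs into the road; the car has three possible maneuvers.
options = [
    {"name": "brake hard",          "occupant_risk": 0.2, "pedestrian_risk": 0.6},
    {"name": "swerve into a wall",  "occupant_risk": 0.7, "pedestrian_risk": 0.1},
    {"name": "swerve to the verge", "occupant_risk": 0.4, "pedestrian_risk": 0.3},
]

for policy in EthicalPolicy:
    print(policy.name, "->", choose_maneuver(options, policy)["name"])
```

Run on these made-up numbers, each policy selects a different maneuver, which is exactly the point: the "right" answer changes with a single configuration choice, and that choice is made long before the accident, by a programmer.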

How to retain privacy rights?

Privacy is an exceptionally sensitive issue, and it is getting more so as ever more of our information is collected to feed the algorithm-driven technology that improves our lives. Our personal data is processed incessantly by automated business and marketing systems. Our social security numbers, names, internet browsing histories, and purchase histories have tremendous value. We want the convenience of having our interactions and experiences tailored to our interests, yet we still want privacy.

What is privacy today, and what right do we have to protect it? In the age of social media, in which so much information about our lives has been "bought out" by major portals, our privacy has been redefined. As we move into electronics-filled smart homes in which every device learns about our needs, we must realize that their knowledge consists of specific data that can be processed and disclosed. As a result, the question of whether our right to privacy will become even more vulnerable to external influences and social processes is fundamental.

Bots poised to rule?

In bot tournaments, programmers compete to pass the Turing test, which assesses a machine's ability to exhibit behavior indistinguishable from a human's. No one should be surprised that the best bots now exceed 50 percent success rates. This means that half of the people participating in the experiment are unable to tell whether they have communicated with a human or a machine. While such laboratory competitions are fun, the widespread use of bots in real life raises a number of ethical questions. Can a bot cheat? Can a bot manipulate me? Influence my relationships with my co-workers, service providers, and supervisors? And when I feel cheated, where do I turn for redress? Accusing a bot of manipulation sounds preposterous, but machines are increasing their presence in our lives and beginning to play with our emotions. Here, the question of good versus evil acquires particular significance.

[Image: Frank, a bionic robot built of prostheses and synthetic organs]

Will robots tell good from evil?

The most interesting ethical dilemmas concern robotization specifically. The questions are analogous to those asked with regard to autonomous vehicles. Today's robots are learning to walk, answer questions, hold a beverage bottle, open a fridge, and run; some are more natural at these tasks than others. They could be genuinely helpful with activities such as caring for the elderly, where constant daily assistance is often required. The time when robots become social beings and "persons" protected by special rights is still far off, but the legal and moral questions must be raised now. Today's robot manufacturers already face challenges that entail choosing between good and evil. How does one program a robot to always do good and never harm people? To help us under all circumstances and never stand in the way? If we are to trust technology and artificial intelligence, we must make sure that machines follow a plan. What does that mean in the case of a robot? Imagine we program one to dispense medications to a patient at specific times. Then imagine the patient refuses to take them. What is the robot to do? Respect the patient's choice? Who will take responsibility and bear the consequences of the machine's choice under such circumstances?
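Reduced to code, the dilemma becomes starkly visible. The following is a minimal sketch under invented assumptions; the schedule, the function name and the three policies are illustrative only, not a description of any real care robot.

```python
from datetime import datetime, time

# Hypothetical medication schedule for one patient.
SCHEDULE = {
    time(8, 0): "blood pressure pill",
    time(20, 0): "anticoagulant",
}

def decide(now: datetime, patient_accepts: bool) -> str:
    """Decide what the robot should do at a scheduled dose time.

    The three branches mirror the choices in the text: follow the plan,
    respect the patient's refusal, or hand the decision to a human.
    Which branch is 'correct' is precisely the open moral question.
    """
    dose = SCHEDULE.get(now.time().replace(second=0, microsecond=0))
    if dose is None:
        return "nothing scheduled"
    if patient_accepts:
        return f"dispense {dose}"
    # The patient refuses; the programmer must commit to one policy here:
    # return f"insist and dispense {dose}"      # follow the plan
    # return "respect the refusal and log it"   # patient autonomy
    return "alert a human caregiver"            # escalate the choice

print(decide(datetime(2020, 5, 1, 8, 0), patient_accepts=False))
# -> alert a human caregiver
```

Whichever line is left uncommented, someone wrote it in advance, and that person, or their employer, carries the responsibility the paragraph above asks about.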

[Image: Isaac Asimov's three laws of robotics]

A time of new morality?

These are just a few of the moral and ethical questions we are beginning to face with AI. Current technological breakthroughs bring not only unquestionable benefits that make our lives better and more comfortable, but also a potentially huge challenge to our value system. As technology advances, machines may well push us to evolve our sense of ethics, of right and wrong.

Related articles:

- A machine will not hug you … but it may listen and offer advice

- Can machines tell right from wrong?

- Machine Learning. Computers coming of age

- What a machine will think when it looks us in the eye?

- Fall of the hierarchy. Who really rules in your company?

- The brain — the device that becomes obsolete

- Modern technologies, old fears: will robots take our jobs?
