Advances in artificial intelligence pose a myriad of ethical questions, but the most incisive thinking on this subject says more about humans than it does about machines, argues Paula Boddington, Oxford academic and author of Towards a Code of Ethics for Artificial Intelligence.
What do you mean by ethics for artificial intelligence?
Well, that’s a good starting point, because there are different sorts of questions being asked in the books I’m looking at. One of the kinds of questions we’re asking is: what sorts of ethical issues is artificial intelligence (AI) going to bring? AI encompasses so many different applications that it could raise a really wide variety of questions.
For instance, what’s going to happen to the workforce if AI makes lots of people redundant? That raises ethical issues because it affects people’s well-being and employment. There are also questions about whether we somehow need to build ethics into the sorts of decisions that AI devices are making on our behalf, especially as AI becomes more autonomous and more powerful. For example, one question which is debated a lot at the moment is: what sorts of decisions should be programmed into autonomous vehicles? If a vehicle ends up in a situation where it’s going to have to crash one way or the other, killing a bystander or killing the driver, what sort of ethics might go into that decision?
But there are also ethical questions about AI in medicine. For instance, there’s already work developing virtual psychological therapies delivered by AI, such as cognitive behavioural therapy. This might be useful, since it seems people may sometimes open up more freely online. But, obviously, there are going to be ethical issues about how the system responds to someone saying that they’re going to kill themselves, or something along those lines, and about how you program that response in.
I suppose work in AI ethics can be divided according to whether you’re talking about the sorts of issues we’re facing now or in the very near future, or about more speculative scenarios. The issues we are facing now concern ‘narrow’ AI, which is focused on particular tasks. But there is also speculative work about whether we might develop an artificial general intelligence or, going on from that, a superintelligence. If we’re looking at an artificial general intelligence, one that would mimic human intelligence in general, then whether or not we retain control of it, lots of people are arguing that we need to build in some kind of ethics, some way to make certain that the AI isn’t going to do something like turn around and decide to rebel, the sort of thing that happens in many of the Isaac Asimov robot stories. So there are many ethical questions that arise from AI, both as it is now and as it will be in the future.
Read the source article at FiveBooks.com.
Source: AI Trends