Good and Bad Beyond the Control of Researchers: Who Controls AI?

Artificial intelligence development, application, and production are growing rapidly. As we begin to understand the scope of the change that lies ahead, two simple questions come to mind.

First, is it even possible to ensure that the technology is developed solely for the benefit of humankind and not to cause harm?

Second, if such an ideal is achieved, who controls and monitors it?

The first assumption we have to make – and frankly it is not a big leap – is that artificial intelligence, like other technologies, can be used to do both good and bad. If we simply look at the development and adoption of nuclear technology, we can observe that while it can be used to do a lot of good (e.g. nuclear medicine, energy), it can also be used to kill people, destroy entire countries, and even destroy the entire world.

Notice that in the first sentence of the previous paragraph I used the words “technology… can be used” – which implies that humans control the use or deployment of the technology. So when harm happens, it happens either through intentional misuse (or harmful use) or through a design or engineering flaw, maintenance lapse, or operator error (e.g. the Chernobyl disaster). We don’t expect technologies to have a mind of their own.

And that is the massive difference between the technologies we have known to date and artificial intelligence: AI has the ability to have a mind of its own.

In a recent paper, Seth Baum of the Global Catastrophic Risk Institute argued that artificial intelligence should be built for the benefit of society as a whole. He posed the challenge as multifaceted: there is “… the technical challenge of developing safe and beneficial technology designs, and there is the social challenge of ensuring that such designs are used” (Baum, 2016). For the former, he proposed that builders should not develop technology for their own benefit if it comes at the cost of society as a whole, and he acknowledged that this aspiration may be unpopular with capitalist entrepreneurship and intellectual progress. He also recognized that the current focus of designers and developers is not necessarily the safety and benefit of society as a whole.

Baum then proposed two kinds of solutions: extrinsic measures and intrinsic measures. Extrinsic measures simply place a ban on harmful technologies – if a design is determined to be dangerous, just don’t build it. Intrinsic measures focus on cultivating norms and values that encourage safe and beneficial designs and discourage harmful ones. Baum provides several ideas on how such norms and values can be instilled in the AI community.

While Baum proposes a valid approach, his proposal faces many practical and historical limitations. For example:

  • AI is not like other technologies – a learning system can outmaneuver and outperform its expected functionality in unpredictable ways. Even something as simple as Microsoft Tay’s recent debacle highlights the problem. When we consider AI–human and AI–AI interaction, the space of possible behaviors can be huge, and those interactions can display the characteristics of, or lead to, emergent complex-system dynamics (the toy sketch after this list illustrates the point). When (or even whether) a good technology turns bad or harmful may be completely unpredictable.
  • Despite best efforts to cultivate ethics and values, forces of nationalism and greed can easily overpower the best intentions. For example, countries such as France, Pakistan, India, Israel, and North Korea proceeded to develop nuclear weapons despite knowing the technology’s lethal potential. In fact, the populations of Pakistan, India, and North Korea celebrated when their countries successfully tested nuclear bombs. There was no remorse about joining the nuclear club.
  • Even the extrinsic measures appear far-fetched given that the software industry has tremendous clout and is now formally lobbying to remove obstacles.
  • Despite having the rule of law on their side and control over the military, governments, I believe, will become weaker and weaker in the upcoming years, losing power to the entities that control information. This will be true in both cases: democracies and dictatorships. In democracies, the entities that control information will be able to influence elections. In dictatorships, those entities will use the information to create revolutions.
  • In many cases the developers of a technology may not even have insight into, or choice over, what they are developing. For them, it could just be a job.
  • Human history is a history of conflict and war, of domination and oppression. The honorable trait of self-regulation will most certainly be overlooked when it stands in the way of the interests of local populations or influential citizens (e.g. the US response to climate change).
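
To make the AI–AI interaction point concrete, here is a minimal sketch in Python – my own toy example, not something from Baum’s paper. Two trivially simple adaptive agents play matching pennies, each best-responding to the other’s observed history. Every name in it is hypothetical, and each agent’s rule is individually deterministic and easy to predict; yet their joint behavior never settles down.

```python
# Toy sketch (hypothetical example, not from Baum's paper): two adaptive
# agents play matching pennies. Agent A ("matcher") wins when the coins
# match; agent B ("mismatcher") wins when they differ. Each agent keeps
# counts of the other's past moves and best-responds to that history.

ROUNDS = 20

counts_a = [1, 1]  # A's counts of B's past moves: [heads, tails]
counts_b = [1, 1]  # B's counts of A's past moves: [heads, tails]

def matcher_move(opponent_counts):
    # A expects B's most frequent move and plays the same, to match.
    return 0 if opponent_counts[0] >= opponent_counts[1] else 1

def mismatcher_move(opponent_counts):
    # B expects A's most frequent move and plays the opposite, to mismatch.
    return 1 if opponent_counts[0] >= opponent_counts[1] else 0

for t in range(ROUNDS):
    a = matcher_move(counts_a)
    b = mismatcher_move(counts_b)
    counts_a[b] += 1  # A records what B just played
    counts_b[a] += 1  # B records what A just played
    winner = "A" if a == b else "B"
    print(f"round {t + 1:2d}: A plays {'HT'[a]}, B plays {'HT'[b]} -> {winner} wins")

# The joint play cycles in ever-shifting runs instead of converging:
# simple emergent dynamics from the interaction of two adaptive systems.
```

Run it and the winner flips in lengthening streaks; the cycling was designed into neither agent – it is a property of the interaction itself, which is exactly why the behavior of interacting AI systems is hard to predict from their individual designs.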

The solution, therefore, seems to be a mix of self-regulation and external regulation (as proposed by Baum), but with the humble understanding that: a) artificial intelligence is already on the path to commercialization and the competition has begun; and b) the technology may very well be uncontrollable.

References:

Baum, S. D. (2016). On the Promotion of Safe and Socially Beneficial Artificial Intelligence.

“Microsoft’s disastrous Tay experiment shows the hidden dangers of AI.”
