Don’t drop the AI ball

Director of the Broadcasting Commission Cordel Green says Jamaica must, as a matter of urgency, craft legislation and create a framework to regulate the use of artificial intelligence (AI), while warning that the “mistakes” experienced with the advent of social media must not be replicated.

Artificial intelligence is essentially the simulation of human intelligence in machines that are programmed to think and act like humans.

Speaking with the Jamaica Observer on Monday, Green said, “there is no specific piece of legislation dealing with artificial intelligence in Jamaica and there is a huge gap that needs to be addressed and very quickly. All across the world — in China, in the European Union, in the United Kingdom — there are specific pieces of legislation that are being crafted.”

He said where Jamaica is concerned, the conversation has to involve looking at AI as a general-purpose technology that affects all sectors.

“It can affect labour, for example, and we need to be determining what aspects of our economy we would be comfortable automating, utilising artificial intelligence,” he said.

Using tourism as an example, Green asked whether Jamaica would, as a policy, allow any direct tourist-facing activity — such as front desk responsibilities, which place so much value on our people — to move to the use of robots as receptionists.

According to Green, who is also an attorney-at-law and a former assistant attorney general of Jamaica, “those are the types of discussions that must take place because artificial intelligence law is going to have to be contextual”.

“There are some people who are way ahead in the development of the technology. The context for them is very different from Jamaica which will be impacted by the technology,” he said.

He pointed out that Jamaica has not been playing the role of a mere observer as, through the Broadcasting Commission, the country has collaborated with the UNESCO Caribbean Office on the Caribbean Artificial Intelligence Initiative to create an AI road map, the first of its kind for small island developing states.

“We definitely need to accelerate our understanding of artificial intelligence as a general-purpose technology that cuts across all sectors with a lot of opportunities but a lot of risk,” Green argued.

“I would like to suggest also that whilst we are paying attention to artificial intelligence, we must also keep our attention focused on neuro-technology because the brain, which is the last bastion of freedom, is also under assault. Our ability to think freely is also at risk of incursion by machines. Some of those incursions will help in the field of medicine but manipulation of people’s thought processes is a very dangerous undertaking and if you combine developments in neuro-technology with those in AI, you are really talking about a completely different game for humanity, which requires us now to rethink fundamental rights such as the right to mental autonomy,” Green said.

“This is not a technology to be ignored or feared, it is a technology to be managed. It is not a god, and in the very same way we have a framework for biotechnology and also how we deal with genes, we need a framework for artificial intelligence and neuro-technology. Those two are major scientific developments that government and ordinary people need to pay attention to because they can reorder our societies in ways that we can’t compare to any other disruption in all of human history,” Green cautioned further.

In the meantime, he said the issue cannot be approached in silos or politicised.

“It requires an all-of-society approach. Governments are not capable of dealing with this alone and it’s actually quite dangerous to allow governments to deal with this alone, both in terms of the capabilities that these technologies can give to government and also the threat they pose to government. It does require a real joined up approach; civil society needs to be very, very involved in this process. Even at the multilateral level we need a new multilateralism,” he stated.

In arguing that AI is particularly dangerous because the very engineers behind its creation cannot fully explain it, Green said, “everybody can talk about what AI can do and we can all describe how AI is impacting, [but] what you do not hear is anybody explaining precisely how these things came about because of its black box nature.”

“So we cannot leave it up to engineers dealing with AI to shape the future of the world, that’s a huge mistake. We have to learn from our social media experience. A great invention, but one of the most divisive, destructive tools that exists today. We all thought it was funny until we are now seeing a broken society and now everybody is saying we need to put some order to what we created. We cannot afford to replicate the mistakes we made with social media with AI, we need some order. We don’t want to stifle innovation; you really shouldn’t be putting people at the mercy of AI without educating them,” he told the Observer.

In the meantime, while chafing at the subpar levels of digital literacy, Green said, “We have two choices, we can create a better future with the help of technology or we can create better technology at the expense of humanity, and if you don’t have a proper framework, what you are going to do is unleash on people better technology at the expense of humanity, and that is why AI must be about the flourishing of human beings, not the ascendancy of machines. We shouldn’t suffer as human beings or be diminished because of AI.”