Governments should close the AI trust gap with businesses

13 August 2020 4 min. read

Experts at EY suggest that the absence of a focused approach to ethical artificial intelligence (AI) poses a huge risk to the business environment. Governments and private organisations need to collaborate to define ethical AI and bridge the gaps. 

The remarks come against the backdrop of a global EY study that surveyed more than 70 policymakers and over 280 private organisations across the globe, to gauge perspectives on ethics in artificial intelligence. The field is a relatively new one, and many agree that collaboration among stakeholders will be key to developing a clear ethical framework.

Private companies and regulators are central players here. According to EY, these groups have to be aligned within themselves and with each other for there to be coherent AI ethics. However, the Big Four accounting and advisory firm found that while policymakers have made significant advances in the ethical AI space, the private sector lags behind by some distance.

Ethical AI principles for facial recognition tech and home virtual assistants

In Singapore, for instance, EY found in January this year that only 15 companies had actually adopted an ethical AI framework. Worryingly, Singapore is rated a world leader in AI preparedness; if adoption is this low there, the picture for the rest of the global market is bleaker still. Compounding this lag is the internal discord among companies when it comes to AI ethics. As explained by EY Singapore’s Head of Consulting Cheang Wai Keat, a disconnect can have severe consequences.

“Significant misalignments around fairness and avoiding bias generate substantial market risks, as companies may be deploying products and services poorly aligned to emerging social values and regulatory guidance. However, companies that are able to establish trust in their AI-powered products and services will be at an advantage,” he said.

Singapore and the surrounding Asian market together form one of the most vibrant tech hubs in the world. Businesses in the region are digitalising rapidly, leveraging the latest in AI as well as the Internet of Things, data analytics, cloud technology and other tools to improve their product and service offerings. At the same time, advancements are being made in the autonomous vehicle, voice recognition and facial recognition spheres.

Trust deficit in the government

According to EY, an ethical framework around these advancements will address certain key questions, such as: “Will the algorithms powering autonomous vehicles keep passengers safe? Will automated loan application decisions be transparent and non-discriminatory? Will facial recognition cameras violate citizens’ privacy or mistakenly target innocent individuals?”

Policymakers have been quick to think about these issues and provide safeguards. Examples include the General Data Protection Regulation (GDPR) in Europe, the LGPD in Brazil, and the California Consumer Privacy Act, among others. Each is aimed at ensuring that technology does not go too far. In addition, more than 100 ethical guidelines have been published in the AI sphere since 2016.

Global policymakers also appear to be on the same wavelength when it comes to ethical priorities. An example is that 'privacy & data' rights as well as 'safety & security' rank as top considerations for policymakers when developing facial recognition technology and virtual home assistants.

Disagreement between policymakers and companies

While some private companies also rank these factors highly, there is little consensus among businesses on any of these issues. On the whole, EY pointed out that private organisations are more concerned about compliance than actually addressing ethical issues. As a result, their top priorities often align with stipulations in regulatory frameworks such as the GDPR.

Aligning priorities is therefore a challenge. Although policymakers and private organisations agree that a multi-stakeholder approach is crucial to building an ethical framework, there is a fundamental disagreement on who should lead the process.

EY reports that private companies distrust the government, and believe the private sector should be allowed to take the lead and “self-regulate” when it comes to ethical AI. Policymakers, on the other hand, think a comprehensive framework set by an intergovernmental organisation is the answer. EY Asean & Singapore’s Government & Public Sector lead Benjamin Chiang stressed that this deadlock is far from sustainable.

“It will not be easy to bridge the disconnect between the public and private sector; however it must be a national imperative. Governments must make closing the AI trust gap a top priority, as this will allow the acceleration of digital transformation necessary to address the urgent public health, social and economic challenges before us,” he said.