Without Inclusive AI, Sustainable Development is Impossible
2024-05-09   |   ICCSD

2023 was generative AI’s year, when the huge potential of the technology began to be realized in real-world settings. 2024 must be the year we get serious about its risks – especially the serious implications it could have for widening existing inequalities and creating new ones.

It is incumbent on international leaders and policymakers to ensure that all of humanity – not just the fortunate few – reaps the benefits of AI.

The capabilities of AI systems are maturing, and many people in the developed world now use large language models – which understand language, generate images and engage in reasoning – in their everyday lives. In the near future, these models are expected to support improvements in productivity, boosting economic growth, empowering individuals in fields from the arts to scientific research, and perhaps helping humankind address large-scale social and technological challenges.

But will these effects be evenly distributed? Will they focus on the needs of communities that are already under-represented? And how can we ensure that the benefits don’t just go to developed countries that have the resources, infrastructure, digital literacy and training to best take advantage of frontier AI?

In AI development, several issues need to be addressed around access and inclusion.

For example, the majority of training data is in English. This is natural, given that most of the data produced is linked to English, but it is important to recognize that significant local innovation can equally be unlocked when large language models are specialized for low-resource languages, where data is not as readily available.

The challenge of unlocking local innovation in AI tends to be compounded by inadequate access to internet services, limited computing power, and a lack of sectoral training. Those groups and nations already struggling to take advantage of current AI systems will probably fall further behind unless some of these trends are actively reversed.

To redress this, governments, private-sector leaders and technical experts need to build support for, and create, rules and norms for the equitable development and distribution of, and access to, AI. They will also need to consider a range of other issues, including bias, privacy, the need for shared, precise terminology, accountability, transparency and the development of trust.

Without some degree of intervention, governance, safeguards and, importantly, consensus, we cannot ensure inclusive AI. To address this divide, the World Economic Forum is using its multistakeholder model to bring together government and business leaders to ensure that key issues such as inclusivity and equitable access are on the AI agenda.

Building on its Presidio AI Framework for Responsible and Optimised AI Development and Deployment, released in June last year, the Forum’s AI Governance Alliance is bringing experts from different sectors together to promote cross-border data quality and availability. It aims to mobilize resources to explore the benefits of AI in important sectors such as education and healthcare.

The AI Governance Alliance is advancing a three-pronged approach to ensure the equitable distribution of the technology worldwide. Its Briefing Paper Series, published during our annual meeting, presents recommendations on safe systems and technologies, responsible applications and transformations, and resilient governance and regulation.

Among the central tenets identified for success are the need for a standardized view of the model lifecycle, shared responsibility, proactive risk management, multistakeholder governance, transparent communication, and international coordination and standards to help prevent fragmentation.

Generative AI is rapidly becoming a defining feature of modernity. Unlike other technologies of national and international importance, AI can be used by anyone with access, anywhere in the world. The challenge is to ensure that it is part of everyone's future, and this can only be achieved if we build in inclusivity now.

Open, transparent innovation and international collaboration are essential to AI's continued responsible development, ensuring that it upholds shared human values and promotes inclusive societal progress. To date, such collaboration has been lacking, but we are well placed to see where the problems lie and how they can be addressed.

We have a narrow window in which to act, underscoring why we need to work quickly, efficiently and together during 2024.

Editor: 孫麗晨