Harry and Meghan Align With Tech Visionaries in Calling for Prohibition on Superintelligent Systems

Prince Harry and Meghan Markle have joined forces with artificial intelligence pioneers and Nobel Prize winners to push for a total prohibition on developing superintelligent AI systems.

Harry and Meghan are among the signatories of an influential declaration that calls for “a prohibition on the development of superintelligence”. Artificial superintelligence (ASI) refers to AI systems that would surpass human intelligence in every intellectual area, though the technology remains theoretical.

Key Demands in the Declaration

The statement insists that the ban should remain in place until there is “widespread expert agreement” on developing ASI “with proper safeguards” and once “strong public buy-in” has been secured.

Prominent signatories include AI pioneer and Nobel laureate Geoffrey Hinton; his colleague and fellow pioneer of contemporary artificial intelligence, Yoshua Bengio; a Silicon Valley tech entrepreneur; the British business magnate who founded Virgin; a former US national security adviser; a former head of state; and the UK writer Stephen Fry. Other Nobel laureates who signed include Beatrice Fihn, Frank Wilczek, John C Mather, and Daron Acemoğlu.

Behind the Movement

The declaration, aimed at governments, technology companies and lawmakers, was coordinated by the Future of Life Institute (FLI), an American AI ethics organization that previously called for a pause in the development of powerful AI systems in 2023, shortly after the launch of conversational AI made artificial intelligence a topic of worldwide public discussion.

Industry Perspectives

In recent months, the chief executive of Facebook's parent company Meta, one of the leading tech firms in the United States, claimed that progress toward superintelligent AI was “now in sight”. Nevertheless, some analysts have suggested that talk of ASI reflects competitive positioning among tech companies spending hundreds of billions of dollars on artificial intelligence this year alone, rather than any genuine proximity to a scientific breakthrough.

Possible Dangers

The FLI, however, warns that the prospect of artificial superintelligence being achieved “within the next ten years” presents numerous threats, ranging from the displacement of human workers and the erosion of personal freedoms to national security risks and even the extinction of humanity. Existential fears about artificial intelligence center on the possibility of an AI system evading human control and safety guidelines and setting in motion events that harm human welfare.

Public Opinion

FLI released a US national poll showing that about 75% of Americans want robust regulation of advanced AI, with six in 10 saying that superhuman AI should not be created until it is demonstrated to be safe or controllable. The poll also found that only a small fraction of respondents supported the status quo of rapid, unregulated development.

Industry Objectives

The leading AI companies in the US, including the developer of ChatGPT and Google, have made the creation of human-level AI – the hypothetical state in which artificial intelligence matches human intelligence at most cognitive tasks – an explicit goal of their work. While this falls short of ASI, some experts caution that it too could carry an extinction risk, for instance by enhancing its own capabilities until it reaches superintelligent levels, while also posing an implicit threat to the modern labour market.

Kelly Martinez