
US group rallies media, tech figures in call to check artificial superintelligence

2025/10/22 21:37

A coalition of US right-wing media personalities, scientists, and tech leaders is calling for a global ban on the development of superintelligent artificial intelligence (AI) until science shows it can be controlled safely and the public supports moving forward.

According to a Wednesday report by Reuters, the plea coordinated by the Future of Life Institute (FLI) was announced through a joint statement signed by more than 850 public figures. 

The document asks governments and AI companies to suspend all work on superintelligence, meaning AI systems that would far exceed human cognitive abilities, until publicly approved safety mechanisms are in place.

Allies converge to halt superintelligent AI development

The coalition's signatories are led by right-wing media personalities Steve Bannon and Glenn Beck, alongside leading AI researchers Geoffrey Hinton and Yoshua Bengio. Other signatories include Virgin Group founder Richard Branson, Apple co-founder Steve Wozniak, and former US military and political officials.

The list also features former Chairman of the Joint Chiefs of Staff Mike Mullen, former National Security Advisor Susan Rice, the Duke and Duchess of Sussex, Prince Harry and Meghan Markle, and former President of Ireland Mary Robinson.

Renowned computer scientist Yoshua Bengio spoke about the coalition’s fears in a statement on the initiative’s website, saying AI systems may soon outperform most humans at cognitive tasks. Bengio added that the technology could help solve global problems but poses immense dangers if developed recklessly.

“To safely advance toward superintelligence, we must scientifically determine how to design AI systems that are fundamentally incapable of harming people, whether through misalignment or malicious use,” he said. “We also need to make sure the public has a much stronger say in decisions that will shape our collective future.”

The Future of Life Institute, a nonprofit founded in 2014 with early backing from Tesla CEO Elon Musk and tech investor Jaan Tallinn, is among the groups campaigning for responsible AI governance.

The organization warns that the race to build superintelligent AI, or artificial superintelligence (ASI), could create irreversible risks for humanity if not properly regulated.

In its latest statement, the group noted superintelligence could lead to “human economic obsolescence, disempowerment, losses of freedom, civil liberties, dignity, and control, and national security threats and even the potential extinction of humanity.”

FLI is asking policymakers to impose a full ban on superintelligence research and development until there is “strong public support” and “scientific consensus that such systems can be safely built and controlled.”

Tech industry split on AI development

Tech giants continue to push the boundaries of AI capabilities, even as some groups object to the technology’s impact on jobs and product development. Elon Musk’s xAI, Sam Altman’s OpenAI, and Meta are all racing to develop more powerful large language models (LLMs).

In July, Meta CEO Mark Zuckerberg said during a conference that the development of superintelligent systems was “now in sight.” However, some AI experts claim the Meta CEO is using marketing tactics to intimidate competitors by suggesting his company is “ahead” in a sector expected to attract hundreds of billions of dollars in the coming years.

The US government and the technology industry have resisted demands for moratoriums, arguing that fears of an “AI apocalypse” are wildly exaggerated. Critics of a development pause say it would stifle innovation, slow economic growth, and delay the benefits AI could bring to medicine, climate science, and automation.

Yet, according to a national poll commissioned by FLI, the American public is largely in favor of stricter oversight. The survey of 2,000 adults found that three-quarters of respondents support more regulation of advanced AI, and six in ten believe that superhuman AI should not be developed until it is proven controllable. 

Before becoming OpenAI’s chief executive, Sam Altman warned in a 2015 blog post that “superhuman machine intelligence is probably the greatest threat to the continued existence of humanity.”

Similarly, Elon Musk, who has simultaneously funded and fought against AI advancement, said earlier this year on Joe Rogan’s podcast that there was “only a 20% chance of annihilation” from AI surpassing human intelligence.


Source: https://www.cryptopolitan.com/media-tech-figures-halt-superintelligent-ai/
