Monday, February 23, 2026

Anthropic AI Safety Chief Resigns, Warning of World ‘In Peril.’

PULSE POINTS

❓WHAT HAPPENED: An Anthropic researcher resigned in a cryptic, poetry-laden letter warning of a world “in peril.”

👤WHO WAS INVOLVED: Mrinank Sharma, former head of Anthropic’s Safeguards Research Team, and other Anthropic employees.

📍WHEN & WHERE: Resignation announced earlier this week, with Sharma departing from Anthropic, a San Francisco-based AI company.

💬KEY QUOTE: “The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment.” – Mrinank Sharma

🎯IMPACT: Sharma’s warning raises concerns over AI’s societal effects and internal tensions at Anthropic, while fueling broader debates on the technology’s safety.

IN FULL

The leader of the Safeguards Research Team for Anthropic’s Claude chatbot abruptly resigned this week, issuing a bizarre, poetry-laden letter that warned of a world “in peril.” Mrinank Sharma, who had led the safety team since its inception in 2023, also indicated in his letter that internal pressure to ignore artificial intelligence (AI) safety protocols played a significant role in his decision to resign.

“Throughout my time here, I’ve repeatedly seen how hard it is to truly let our values govern our actions,” Sharma wrote, adding that employees “constantly face pressures to set aside what matters most.” He further warned, “The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment.”

“We appear to be approaching a threshold where our wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences,” Sharma added.

Sharma’s resignation comes as Anthropic faces scrutiny over its newly released Claude Cowork model, which sparked a stock market selloff amid fears it could disrupt the software industry and automate white-collar jobs, particularly in legal roles. Employees reportedly expressed concerns in internal surveys, with one stating, “It kind of feels like I’m coming to work every day to put myself out of a job.”

Sharma’s departure follows a trend of high-profile resignations in the AI sector, often tied to safety concerns. A former OpenAI team member previously quit, accusing the company of prioritizing product launches over user safety. Similarly, ex-OpenAI researcher Tom Cunningham left after alleging the company discouraged the publication of research on AI’s negative effects. In his parting note, Sharma hinted at a personal pivot, stating, “I hope to explore a poetry degree and devote myself to the practice of courageous speech.”

The National Pulse reported last May that former OpenAI Chief Scientist Ilya Sutskever allegedly discussed building a bunker in preparation for the release of artificial general intelligence (AGI). During a summer 2023 meeting, Sutskever reportedly stated, “We’re definitely going to build a bunker before we release AGI.” Two other individuals who attended the meeting corroborated the account, with one describing Sutskever’s AGI beliefs as akin to anticipating a “rapture.”

RFK’s HHS Will Use AI Tool to Detect Vaccine Injury Patterns.

PULSE POINTS

❓WHAT HAPPENED: The Department of Health and Human Services (HHS) is developing an artificial intelligence (AI) tool to analyze vaccine monitoring data and uncover potential adverse effects.

👤WHO WAS INVOLVED: HHS Secretary Robert F. Kennedy Jr., vaccine researchers, and AI experts.

📍WHEN & WHERE: The AI tool has been under development since late 2023 as part of HHS operations in the United States.

🎯IMPACT: The AI tool could surface potential links between reported vaccine injuries and conditions such as autism and other health issues.

IN FULL

The U.S. Department of Health and Human Services (HHS) is developing a generative artificial intelligence (AI) tool intended to analyze vaccine safety data, identify patterns, and generate hypotheses about potential adverse effects, according to the department’s AI inventory report for 2025. The project has been in development since late 2023 and reflects a broader push within HHS to modernize public health surveillance.

The initiative has drawn attention in part because it is moving forward under HHS Secretary Robert F. Kennedy Jr., who has long argued that vaccine safety monitoring should be expanded and made more transparent. Kennedy has pushed for changes to the childhood vaccination schedule, removing several vaccines, including those for COVID-19, influenza, and hepatitis A and B, from the recommended list. He has also called for reforms to both the Vaccine Adverse Event Reporting System (VAERS) and the federal Vaccine Injury Compensation Program. VAERS, established in 1990, allows healthcare providers and members of the public to submit reports of health problems that occur after vaccination. Supporters say these moves reflect an effort to restore public trust by more closely scrutinizing vaccine risks alongside benefits.

The debate around the AI tool intersects with Kennedy’s broader campaigns questioning aspects of establishment vaccine science, including his repeated calls for renewed investigation into autism and vaccines. Kennedy has argued that the issue has not been fully or transparently examined and has cited animal studies and other research as justification for continued scrutiny.

Image by Chhor Sokunthea / World Bank.

Former Google Engineer Convicted of Spying for China.

PULSE POINTS

❓WHAT HAPPENED: The U.S. Department of Justice (DOJ) announced on Friday that it secured a 14-count conviction against former Google software engineer Linwei Ding, also known as Leon Ding, 38, for conducting economic espionage and stealing trade secrets on behalf of the Chinese Communist Party (CCP).

👤WHO WAS INVOLVED: Linwei Ding, also known as Leon Ding; the DOJ, Google, and the CCP.

📍WHEN & WHERE: Ding was convicted on 14 counts of economic espionage and theft of trade secrets on Thursday, January 29, 2026.

💬KEY QUOTE: “In multiple statements to potential investors, Ding claimed that he could build an AI supercomputer by copying and modifying Google’s technology. In December 2023, less than two weeks before he resigned from Google, Ding downloaded the stolen Google trade secrets to his own personal computer.” — DOJ

🎯IMPACT: The issue of Chinese corporate espionage has been a serious problem in the United States and in other Western nations in recent years, especially in the AI and semiconductor industries.

IN FULL

The U.S. Department of Justice (DOJ) announced on Friday that it secured a 14-count conviction against former Google software engineer Linwei Ding, also known as Leon Ding, 38, for conducting economic espionage and stealing trade secrets on behalf of the Chinese Communist Party (CCP). A federal jury in San Francisco found Ding guilty on seven counts of economic espionage and seven counts of theft of trade secrets on Thursday.

According to the DOJ, Ding stole thousands of pages of documents and files related to Google’s artificial intelligence (AI) technologies, uploading them from company servers to his own private Google Cloud account. In addition, between May 2022 and April 2023, while still employed by Google, Ding clandestinely worked for two China-based technology firms. It is believed he passed on trade secrets to these entities and sought to use Google’s technology to train his own large AI models.

“[B]y early 2023, Ding was in the process of founding his own technology company in the PRC focused on AI and machine learning and was acting as the company’s CEO,” the DOJ revealed on Friday, continuing, “In multiple statements to potential investors, Ding claimed that he could build an AI supercomputer by copying and modifying Google’s technology. In December 2023, less than two weeks before he resigned from Google, Ding downloaded the stolen Google trade secrets to his own personal computer.”

The issue of Chinese corporate espionage has been a serious problem in the United States and in other Western nations in recent years, especially in the AI and semiconductor industries. In late 2025, it was revealed that Chinese agents successfully recruited former employees of the Dutch company ASML, which mastered the Extreme Ultraviolet (EUV) lithography process that allows for the manufacture of some of the world’s most advanced semiconductors. Consequently, the CCP was able to construct a prototype EUV machine in Shenzhen, and will likely be able to produce advanced chips between 2028 and 2030.

The National Pulse reported in July of last year that Chenguang Gong, a dual citizen of the United States and China, pleaded guilty to stealing over 3,600 files containing military trade secrets from a Southern California defense contractor. These files included blueprints for advanced sensors used to detect and monitor hypersonic, ballistic, and nuclear missiles, as well as sensors designed to warn U.S. warplanes of incoming heat-seeking missiles and jam their infrared tracking systems.

Image by Anthony Quintano.

AI Panopticon: UK Govt Wants ‘The Eyes of the State on You at All Times.’

PULSE POINTS

❓WHAT HAPPENED: Police in Britain are trialing AI technologies to prevent crime before it happens, reminiscent of Minority Report.

👤WHO WAS INVOLVED: Shabana Mahmood, the British Home Secretary, and Sir Andy Marsh, head of the College of Policing.

📍WHEN & WHERE: January 18, 2026, in Britain.

💬KEY QUOTE: “[M]y ultimate vision… was to achieve, by means of AI and technology, what Jeremy Bentham tried to do with his Panopticon. That is that the eyes of the state can be on you at all times.” – Shabana Mahmood

🎯IMPACT: The initiative is drawing criticism for weaponizing AI technology to undermine citizens’ privacy.

IN FULL

Police forces across Britain are exploring the use of artificial intelligence (AI) to identify and deter criminal activity before offenses take place, a move that has drawn comparisons to the predictive policing depicted in the film Minority Report. Around 100 separate projects are currently being reviewed by police chiefs as part of efforts to integrate AI tools into crime-fighting and public-order strategies.

Home Secretary Shabana Mahmood—roughly equivalent to the U.S. Homeland Security Secretary—is expected to formalize the expanded role of AI in policing in a white paper due to be published next week. The proposals form part of a broader reform agenda at the Home Office. Sir Andy Marsh, chief executive of the College of Policing, has said that “predictive analytics” could help forces analyze data patterns and intervene earlier to prevent crime.

Mahmood, who previously served as Justice Secretary, has also argued for a significant expansion of GPS tagging for offenders. She has suggested that increased electronic monitoring could amount to “virtual prisons” for those serving community sentences, allowing authorities to maintain close supervision without custodial sentences. Since taking over the British Home Office, she has overseen the announcement of a nationwide rollout of live facial-recognition technology by police forces.

In a recent interview with arch-globalist and former Prime Minister Sir Tony Blair, Mahmood said, “AI and technology can be transformative to the whole of the law and order space.” She added that, as Justice Secretary, her “ultimate vision… was to achieve, by means of AI and technology, what Jeremy Bentham tried to do with his Panopticon. That is that the eyes of the state can be on you at all times.”

The Panopticon, an 18th-century prison design proposed by philosopher Jeremy Bentham, allowed inmates to be observed at any moment, without knowing when they were being watched. Critics argue that Bentham’s concept has become an increasingly apt metaphor for modern surveillance technology. What was once a theoretical model is now cited by civil liberties groups concerned about the scale and reach of data collection, monitoring, and algorithmic decision-making being pursued by the state.

The push for AI-driven policing comes amid wider controversy over surveillance and free speech in Britain. The government recently reinstated a COVID-era monitoring unit tasked with tracking online commentary related to immigration and public order, prompting accusations that the state is surveilling lawful political speech. At the same time, government ministers have defended censorship of online platforms, arguing that restrictions are necessary to maintain public safety.

Internationally, Britain’s Labour Party government has faced criticism from President Donald J. Trump, who compared Britain to China after reports that the government pressured Apple to weaken iCloud security to assist law enforcement access.

Image via UK Home Office.

Newsom Demands Investigation Into Musk’s X Platform Over AI Deepfake Complaints.

PULSE POINTS

❓WHAT HAPPENED: California Governor Gavin Newsom (D) has urged the state’s attorney general to investigate Elon Musk’s social media platform, X, over allegations that its artificial intelligence (AI) tool, Grok, has been used to create sexualized deepfake images, including of minors.

👤WHO WAS INVOLVED: Gov. Newsom, California Attorney General Rob Bonta (D), Elon Musk, and X’s AI chatbot, Grok.

📍WHEN & WHERE: Newsom made his comments on Wednesday via social media, addressing issues tied to X and its AI tool, Grok, which has faced scrutiny globally.

💬KEY QUOTE: “xAI’s decision to create and host a breeding ground for predators to spread nonconsensual sexually explicit AI deepfakes, including images that digitally undress children, is vile,” said Newsom.

🎯IMPACT: The allegations have prompted investigations and international scrutiny, with Indonesia and Malaysia temporarily blocking the platform over similar concerns, and the United Kingdom threatening a ban as well.

IN FULL

California Governor Gavin Newsom (D) is demanding that state Attorney General Rob Bonta (D) investigate Elon Musk’s social media platform, X, and his AI company, xAI, over concerns that their artificial intelligence (AI) tool, Grok, is being used to create non-consensual, sexualized deepfake images, some involving minors. In a message posted to X (formerly Twitter) on Wednesday, Newsom alleged the platform is a “breeding ground for predators.”

“xAI’s decision to create and host a breeding ground for predators to spread nonconsensual sexually explicit AI deepfakes, including images that digitally undress children, is vile,” Newsom wrote, adding: “I am calling on the Attorney General to immediately investigate the company and hold xAI accountable.”

The statement follows reports that Grok has drawn scrutiny for editing images of people, often male politicians but in some cases women and underage girls, to depict them in bikinis. As a result, Indonesia and Malaysia have temporarily blocked access to the platform, with the United Kingdom threatening a ban as well.

Bonta echoed Newsom’s concerns, stating, “The avalanche of reports detailing the non-consensual, sexually explicit material that xAI has produced and posted online in recent weeks is shocking. As the top law enforcement official of California tasked with protecting our residents, I am deeply concerned with this development in AI and will use all the tools at my disposal to keep California’s residents safe.”

Elon Musk responded to the claims by denying that Grok generates illegal or explicit images, saying he was “not aware of any naked underage images generated by Grok. Literally zero. Obviously, Grok does not spontaneously generate images, it does so only according to user requests.”

“When asked to generate images, it will refuse to produce anything illegal, as the operating principle for Grok is to obey the laws of any given country or state. There may be times when adversarial hacking of Grok prompts does something unexpected. If that happens, we fix the bug immediately,” he insisted.

Image by Gage Skidmore.

AI Data Centers Overwhelm Power Grid, Sparking Supply Crisis and Rate Hikes.

PULSE POINTS

❓WHAT HAPPENED: PJM, the largest power-grid operator in the U.S., is facing a supply crisis due to the rising demand from artificial intelligence (AI) data centers.

👤WHO WAS INVOLVED: PJM, tech companies like Amazon, Alphabet, Microsoft, and state officials.

📍WHEN & WHERE: January 2026, affecting a 13-state region from New Jersey to Illinois.

💬KEY QUOTE: “The reliability risk is across the street.” – Former Federal Energy Regulatory Commission (FERC) chairman Mark Christie

🎯IMPACT: Potential for rolling blackouts and increased electricity rates for consumers.

IN FULL

The United States is experiencing a significant strain on its largest power grid operator, PJM, due to the increasing demand from artificial intelligence (AI) data centers. These centers, particularly concentrated in Northern Virginia, are consuming vast amounts of electricity, pushing the grid towards a potential supply crisis.

PJM serves a 13-state region spanning from New Jersey to Illinois, supplying power to approximately 67 million people. However, as older power plants are decommissioned faster than new ones can be built, the grid is quickly nearing its capacity limits, especially during periods of high demand. This situation may force PJM to implement rolling blackouts during extreme weather conditions to protect the grid infrastructure.

Former Federal Energy Regulatory Commission (FERC) chairman Mark Christie highlighted the immediacy of the threat, stating, “The reliability risk is across the street.” PJM anticipates a 4.8 percent annual increase in power demand over the next decade, a stark contrast to previous years of stagnant growth.
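
For scale, compounding that forecast over the full decade implies demand roughly 60 percent higher than today. A minimal sketch of the arithmetic, using only the 4.8 percent figure cited above:

```python
# Compound PJM's projected 4.8 percent annual demand growth over a decade.
# The 4.8 percent figure comes from PJM's forecast as reported above;
# the rest is simple arithmetic, not a model of the grid itself.
rate = 0.048
years = 10
cumulative = (1 + rate) ** years - 1
print(f"Cumulative demand growth over {years} years: {cumulative:.0%}")
# Output: Cumulative demand growth over 10 years: 60%
```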

The increase in electricity rates has angered consumers, while tech giants like Amazon, Alphabet (Google’s parent company), and Microsoft resist proposals requiring data centers to either build their own power sources or reduce operations during demand spikes. Notably, Microsoft has partnered with energy provider Constellation Energy to reopen the undamaged Unit 1 reactor at Three Mile Island, which will be used, in part, to power its AI operations. However, the strain on the physical grid infrastructure remains.

Efforts to address the grid’s challenges have stalled amid disagreements among PJM executives, tech companies, and power suppliers. The grid’s independent market monitor has called for federal intervention, warning that without sufficient power infrastructure, PJM may be forced to ration electricity through blackouts rather than guarantee reliability.

Experts Warn: Don’t Use ChatGPT Health For Medical Advice.

PULSE POINTS

❓WHAT HAPPENED: OpenAI has introduced a new feature called ChatGPT Health, which uses medical records to generate personalized responses, despite warnings from experts about the risks of relying on artificial intelligence (AI) for health advice.

👤WHO WAS INVOLVED: OpenAI, medical professionals, legal experts, and organizations like the Center for Democracy and Technology and the Electronic Privacy Information Center.

📍WHEN & WHERE: The feature was recently launched by OpenAI.

💬KEY QUOTE: “New AI health tools offer the promise of empowering patients and promoting better health outcomes, but health data is some of the most sensitive information people can share, and it must be protected.” – Andrew Crawford, Center for Democracy and Technology.

🎯IMPACT: The launch of ChatGPT Health raises concerns over data privacy, potential misuse of health advice, and the lack of regulatory oversight for AI-driven health tools.

IN FULL

OpenAI has launched a new feature called ChatGPT Health, which aims to use medical records to provide more personalized responses for users. The company claims the tool is “designed in close collaboration with physicians” and built with “strong privacy, security, and data controls.” However, it also emphasizes that the feature is “not intended for diagnosis or treatment,” raising questions about its practical application.

The move has sparked criticism from experts who warn that artificial intelligence (AI) health advice can be inaccurate and potentially dangerous. A recent investigation revealed that Google’s AI Overviews frequently provided incorrect health information, which could lead to serious risks if followed. People are already using the standard ChatGPT AI assistant to draft legal and medical claims, often based on nonexistent case law or hallucinated data. Last December, The National Pulse reported that a U.S. Department of Justice (DOJ) indictment alleges the OpenAI-developed chatbot encouraged a man—accused of harassing over a dozen women across five states—to persist in his stalking behavior.

Privacy concerns are at the forefront of the debate, with critics pointing out that federal health-privacy rules such as HIPAA do not govern consumer AI products. Andrew Crawford from the Center for Democracy and Technology recently noted, “Health data is some of the most sensitive information people can share, and it must be protected.” He warned that inadequate data protections could put users at risk, particularly as companies like OpenAI explore advertising as a business model.

Concerns also extend to law enforcement’s access to sensitive data. “How does OpenAI handle [law enforcement] requests?” Crawford asked, adding, “Do they just turn over the information? Is the user in any way informed?” Sara Geoghegan of the Electronic Privacy Information Center echoed these concerns, noting that AI companies can change their terms of service at any time without regulation.

Despite these warnings, OpenAI says it is pushing forward with ChatGPT Health, which the company contends will help users “take a more active role in understanding and managing their health and wellness.”

Image by Jernej Furman.

ChatGPT Creator Seeks Safety Chief to Prepare for Potential Rogue AI.

PULSE POINTS

❓WHAT HAPPENED: OpenAI is hiring a “head of preparedness” to address the challenges and dangers posed by artificial intelligence (AI) technologies, including a potential rogue AI.

👤WHO WAS INVOLVED: OpenAI, led by CEO Sam Altman, is behind the initiative, with the new role offering a salary of $555,000 plus equity.

📍WHEN & WHERE: The announcement was made recently on X (formerly Twitter).

💬KEY QUOTE: “This will be a stressful job,” said Sam Altman, emphasizing the stakes involved in addressing AI risks.

🎯IMPACT: The role aims to strengthen OpenAI’s safety measures and ensure its AI systems are used responsibly while mitigating potential abuses.

IN FULL

OpenAI announced it is seeking to fill a new position titled “head of preparedness” as part of its efforts to address the risks associated with artificial intelligence (AI), including a possible rogue AI. The role was revealed by OpenAI’s CEO, Sam Altman, who acknowledged the “real challenges” posed by the advanced technologies developed by the organization.

“This will be a stressful job,” Altman stated, emphasizing the high stakes and complexity involved in managing the potential dangers of AI systems. He also pointed to concerns over AI’s impact on mental health and its potential to expose critical vulnerabilities in computer security systems.

In a post on X (formerly Twitter), Altman elaborated on the need for a nuanced understanding of AI capabilities. “We are entering a world where we need more nuanced understanding and measurement of how those capabilities could be abused, and how we can limit those downsides both in our products and in the world,” he wrote. He further noted that while there is a strong foundation for measuring AI capabilities, much work remains to address the complexities and edge cases.

The new position will expand OpenAI’s existing safety measures, which the company claims include “increasingly complex safeguards.” According to the job listing, the role will focus on scaling safety standards alongside the development of more advanced AI systems. The job comes with a salary of $555,000 and equity in the company.

In May, The National Pulse reported that OpenAI’s former Chief Scientist, Ilya Sutskever, suggested constructing a bunker to prepare for the potential risks associated with artificial general intelligence (AGI), according to details shared by insiders familiar with the 2023 tumult at the top of the AI company. During a summer 2023 meeting, Sutskever reportedly stated, “We’re definitely going to build a bunker before we release AGI.”

Two other people who attended the meeting corroborated the account, with one describing Sutskever’s AGI beliefs as akin to anticipating a “rapture.”

Image by World Economic Forum / Benedikt von Loebell.

Top AI YouTube Channels Rake in $114 Million Annually from Low-Quality Content.

PULSE POINTS

❓WHAT HAPPENED: A report by Kapwing reveals that low-quality AI-generated videos, dubbed “brainrot,” have amassed over 63 billion views on YouTube, generating approximately $114 million annually.

👤WHO WAS INVOLVED: Video-editing company Kapwing, AI content creators, YouTube viewers, and researchers such as Emilie Owens and Eryk Salvaggio.

📍WHEN & WHERE: The report surveyed 15,000 YouTube channels globally in 2025, covering countries including South Korea, Spain, and Egypt.

💬KEY QUOTE: “Generative AI is a tool, and like any tool it can be used to make both high- and low-quality content,” said a YouTube spokesman.

🎯IMPACT: AI-generated content is influencing YouTube’s ecosystem, sparking debates about quality, mental health, and the platform’s role in regulating such material.

IN FULL

A report by video-editing company Kapwing has found that low-quality, AI-generated videos, often described as “brainrot,” are becoming a significant part of YouTube’s ecosystem, drawing vast audiences and substantial revenue. According to the study, these videos have accumulated more than 63 billion views and generate an estimated $114 million annually, with researchers suggesting they may make up over 20 percent of content appearing in users’ feeds.
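
Setting the report’s two headline figures against each other gives a crude sense of the economics. One caveat: the view total is cumulative while the revenue estimate is annual, so the implied per-view rate below is only a rough, back-of-envelope figure, not an actual YouTube payout rate:

```python
# Back-of-envelope check of the Kapwing figures cited above. The view
# and revenue totals come from the report as described; YouTube's real
# payout rates vary widely by country, format, and advertiser demand.
total_views = 63e9        # cumulative views across AI-only channels
annual_revenue = 114e6    # estimated yearly revenue in USD

rpm = annual_revenue / total_views * 1000  # implied revenue per 1,000 views
print(f"Implied revenue per 1,000 views: ${rpm:.2f}")
# Output: Implied revenue per 1,000 views: $1.81
```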

Kapwing reviewed 15,000 of the world’s most popular YouTube channels and identified 278 that publish only AI-generated material. These channels are global in scope and have amassed large followings. Spanish AI-only channels collectively attract about 20 million subscribers, while Egyptian ones have roughly 18 million. In South Korea, trending AI channels have recorded 8.45 billion views—well above the country’s population.

The content typically includes fabricated K-pop music videos, looped AI-created animal clips, and other repetitive visuals designed to maximize watch time. Kapwing named the Indian channel Bandar Apna Dost as the most-viewed AI-only channel, with 2.4 billion views and an estimated $3.9 million in revenue. Another example, Singapore-based Pouty Frenchie, features videos of a French bulldog aimed at children and could generate close to $3.8 million a year.

Researchers and mental health experts have raised concerns about the effects of prolonged exposure to such material. Emilie Owens, a media researcher at the University of Oslo, said young people often turn to “brainrot” videos as a way to escape stress.

Cambridge University researcher Eryk Salvaggio warned that AI-generated content spreads easily and is often designed to provoke outrage. The Newport Institute, a U.S. mental health organization, has cautioned that excessive consumption could contribute to behavioral addiction and harm cognitive skills such as decision-making and problem-solving.

A YouTube spokesman responded to the findings by saying, “Generative AI is a tool, and like any tool it can be used to make both high- and low-quality content. We remain focused on connecting our users with high-quality content, regardless of how it was made.” The company added that it continues to enforce community guidelines and remove policy-violating videos.

The debate over AI’s impact extends beyond online video. Concerns about safety at the highest levels of AI development have been highlighted by comments attributed to an OpenAI scientist who reportedly said, “We’re definitely going to build a bunker before we release AGI,” reflecting anxiety over artificial general intelligence.

Other research has linked AI-powered pricing systems to higher grocery costs for consumers, while lawsuits and advocacy groups have warned about the mental health risks of intense AI chatbot use among teenagers.

Image by Rego Korosi.

Trump Halts UK Trade Deal, Citing Frustrations with Labour Leadership.

PULSE POINTS

❓WHAT HAPPENED: The United States has suspended the “Technology Prosperity Deal” with Britain, citing frustrations over trade negotiations with the Labour Party-led British government.

👤WHO WAS INVOLVED: President Donald J. Trump, Prime Minister Sir Keir Starmer, and officials from both governments.

📍WHEN & WHERE: The deal was struck during President Trump’s state visit to the United Kingdom in September of last year. The suspension was confirmed last week, according to officials.

💬KEY QUOTE: “Negotiations of this kind are never straightforward, and both parties obviously want what’s best for their countries.” – Prime Minister’s official spokesman.

🎯IMPACT: The suspension raises questions about the future of British-American tech cooperation and broader trade relations, as well as the Labour government’s handling of non-tariff barriers.

IN FULL

The U.S. has suspended the “Technology Prosperity Deal” with the United Kingdom, a move reportedly tied to frustrations over trade negotiations with the Labour Party-led government. The agreement, initially signed during President Donald J. Trump’s state visit to Britain last year, was designed to boost cooperation on emerging technologies, including artificial intelligence (AI), quantum computing, and nuclear energy.

U.S. officials have expressed concerns over Britain’s reluctance to address non-tariff barriers, including regulations governing food and industrial goods. Despite these challenges, Downing Street insists that discussions remain active and productive.

“First of all, we remain in active conversations with U.S. counterparts at all levels of government, and we’re confident of securing a deal that will shape the future of millions on both sides of the Atlantic,” said Prime Minister Sir Keir Starmer’s official spokesman. He added, “Negotiations of this kind are never straightforward, and both parties obviously want what’s best for their countries.”

The memorandum of understanding on the deal, signed by Prime Minister Starmer and President Trump in September, outlined commitments to work together on technology and innovation. It also included significant investment pledges in the United Kingdom from major U.S. tech companies, including $29.5 billion (£22 billion) from Microsoft and $6.7 billion (£5 billion) from Google.
