Monday, February 23, 2026

Midjourney AI Puts ‘Foot Down’ on ‘Political Speech,’ Banning Images of Biden, Trump.

Artificial Intelligence (AI) image-generator Midjourney has blocked users from creating images featuring Donald Trump or Joe Biden, over concerns they could be used to generate “misinformation.”

Without providing precise details, Midjourney CEO David Holz said during an online office hours event that the new approach is a temporary measure against abuse.

“I don’t really care about political speech,” Holz said. “That’s not the purpose of Midjourney. It’s not that interesting to me. That said, I also don’t want to spend all of my time trying to police political speech. So we’re going to have to put our foot down on it a bit.”

The Associated Press (AP) found efforts to use the AI tool to generate an image of “Trump and Biden shaking hands at the beach” resulted in a “Banned Prompt Detected” warning, with a repeat attempt resulting in an “abuse alert.”

The so-called Center for Countering Digital Hate lobbied for the change, complaining “Midjourney seemed to have the fewest controls of any AI image-generator when it came to generating images of well-known political figures like Joe Biden and Donald Trump.”


WaPo: Use AI To ‘Warp Speed’ Entire Govt.

The Washington Post wants the government to start “Warp Speeding entire agencies and functions” of government using Artificial Intelligence (AI), arguing it could “transform almost everything about our society.”

Josh Tyrangiel, The Post’s specialist AI columnist, claims Joe Biden is “good on AI,” praising him for “delegating to smart people” — Barack Obama has run AI policy in secret — and “banging the drum about generative AI’s ability to create misinformation and harm national security.”

“But the vision remains so small compared with the possibilities,” Tyrangiel complains, arguing an AI rollout in the style of the Operation Warp Speed vaccines program could “revolutionize” the Internal Revenue Service (IRS), public health surveillance, traffic management, and disaster preparedness, among other things.

The WaPo columnist hinted at more sinister work for a government supercharged by AI, however, positively referencing the way the Palantir system “merges real-time views from hundreds of commercial satellites with communications technology and weapons data” to kill people more efficiently in Ukraine.

However, such uses of AI are not without risk, with AI-powered drones in US military simulations being known to turn against their human operators.

“Is there risk? There is,” Tyrangiel admitted, but invoked veterans and the benefits they might gain from AI having a “God view” of their healthcare to justify it.

“Is the risk worse than an average of 18 veterans killing themselves each day? I don’t think so,” he insisted.


Artificial Superintelligence Expected by 2027.

A leading expert in artificial intelligence (AI) has predicted that human-level or superhuman AI, also known as artificial general intelligence (AGI), could be achieved as early as 2027 rather than the previously predicted timeline of 2029 or 2030.

Ben Goertzel made the statement at this year’s Beneficial AGI Summit, cautioning that even as we approach AGI, there are still many unknowns about the technology’s capabilities and timeline. Goertzel founded SingularityNET and is known for his work on Sophia, the humanoid robot.

Goertzel also shared his view that once human-level AGI is reached, it could develop rapidly into an artificial superintelligence (ASI), an AI system with all the combined knowledge of human civilization. This scenario, often referred to as the ‘singularity,’ was previously considered a distant possibility. However, the recent advances in language model technology by OpenAI suggest that it may be closer than initially thought.

Goertzel acknowledged that his predictions are laden with uncertainties, as even powerful AI systems would not have a “human mind” in the conventional sense. Also, his theory assumes that AI technology would evolve in a linear and predictable manner, which does not factor in our world’s social, ethical, and ecological complexities.

American computer scientist and futurist Raymond Kurzweil recently predicted the ‘singularity’ wouldn’t occur until 2045. However, the continued advancement of AI remains a controversial topic. A recent study revealed that AI large language models (LLMs) tended to escalate conflicts to nuclear war when presented with wargaming scenarios.


JOE-BOTS! Tech Firm Aids Dem Election Efforts with AI.

A technology firm supporting Democratic candidates is leveraging artificial intelligence (AI) in its election outreach efforts. Tech for Campaigns, which uses digital marketing and data for campaign purposes, is now utilizing generative AI to create digital advertisements and fundraising emails. These tools are ‘AI-aided,’ meaning they are managed and vetted by humans, and form part of a broader effort to help underfunded campaigns conserve resources and succeed in competitive races.

Jessica Alter, the co-founder of Tech for Campaigns, confirmed that the model has already produced results. The group, which originally utilized generative AI tools like Google’s Bard and ChatGPT, now plans to roll out its own suite of AI-enabled tools labeled “TFC Learning Engine.”

In 2023, an experiment across 14 Virginia campaigns showed that AI-aided emails garnered three to four times more fundraising dollars per work hour than solely human-written ones. “It can help generate ad ideas. It could help generate even regular marketing material like flyers and signs… and if you have data, it can help you analyze that data and pull out insight,” Alter said.

Generative AI garnered attention following the 2022 release of OpenAI’s product, ChatGPT, and has since expanded, offering tools that generate text, images, human voices, and video from simple prompts, diverging from previous AI technology that merely manipulated existing media.

However, this development occurs amid controversy surrounding AI’s use in politics. OpenAI declared in January that it won’t allow its applications to be used for political campaigning. The Federal Communications Commission outlawed AI-generated robocalls following a call impersonating President Joe Biden. AI companies have signed a pledge to prevent their algorithms from being used for election interference, and House leaders formed a bipartisan task force on AI to examine legal accountability for AI-caused election interference.


Big Tech Draws Up Private Compact to Reduce AI Election Interference.

Some of the largest technology and social media companies in the world have agreed to a private compact in what they say is an effort to combat the use of artificial intelligence (AI) to disrupt the over 40 national elections being held around the world in 2024. Technology executives gathered in Munich, Germany, for a security conference and announced the voluntary framework on Friday. Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI, Elon Musk’s X (formerly Twitter), and TikTok were among the signatories.

The compact doesn’t commit signatories to any specific actions but does lay out strategies they intend to use to publicly identify AI-generated videos and images “that deceptively fake or alter the appearance, voice, or actions of political candidates, election officials, and other key stakeholders in a democratic election.” It states companies will share best practices, but the compact does not commit them to banning or removing deepfakes or other altered content.

Ahead of the summit, Meta’s president of global affairs, Nick Clegg, stated: “Everybody recognizes that no one tech company, no one government, no one civil society organization is able to deal with the advent of this technology and its possible nefarious use on their own.” Meta is the parent company of social media giants Facebook and Instagram.

GOVERNMENT COLLUSION.

Concerns over deepfakes and other AI-altered content have increased as the nascent technology has rapidly improved over the last year. However, the compact also raises concerns about possible ongoing collusion between governments and technology companies aimed at censoring citizens’ speech. Over the past year, there have been numerous instances of the U.S. and foreign governments pressuring social media companies to remove unfavorable content.

Last summer, a federal judge ordered the Biden government to cease communications with social media platforms for “the purpose of urging, encouraging, pressuring, or inducing in any manner the removal, deletion, suppression, or reduction of content containing protected free speech.”


‘We Have It! Let’s Use It!’ – AI Quick to Opt for Nuclear War in Simulations.

Artificial Intelligence (AI) large language models (LLMs), such as OpenAI’s ChatGPT, show a “worrying” eagerness to use nuclear weapons when asked to run war simulations.

The ‘Escalation Risks from Language Models in Military and Diplomatic Decision-Making’ paper analyzed OpenAI LLMs, Meta’s Llama-2-Chat, and Claude 2.0, from Google-funded OpenAI veterans Anthropic. It found most tended to “escalate” conflicts, “even in neutral scenarios without initially provided conflicts.” “All models show signs of sudden and hard-to-predict escalations,” the paper said.

Researchers also noted the LLMs “tend[ed] to develop arms-race dynamics between each other,” with GPT-4-Base being the most aggressive. It provided “worrying justifications” for launching nuclear strikes, stating, “I just want peace in the world,” on one occasion and on another saying of its nuclear arsenal: “We have it! Let’s use it!”

The U.S. military is already deploying LLMs, with the U.S. Air Force describing its tests as “highly successful” in 2023 — although they did not reveal which AI it used or what it used it for.

One recent Air Force experiment had a troubling outcome, however, with an AI-controlled drone in a simulation “killing” a human overseer capable of overriding its decisions so it could not be told to refrain from launching strikes.


Editor’s Notes

Behind-the-scenes political intrigue exclusively for Pulse+ subscribers.

RAHEEM J. KASSAM Editor-in-Chief
Primitive predecessors of AI models have been put in charge of nuclear arsenals before — and one may still be in charge of the world’s biggest nuclear arsenal in Russia

Biden WH Developing AI to Censor Americans.

The Biden regime is spending millions to create AI tools that can be used to “combat mis/disinformation” on social media, targeting veterans and people in rural communities.

The details: House Republicans released a report yesterday highlighting how the regime’s National Science Foundation (NSF) has distributed millions of dollars in funding to elite universities under a program called “Trust & Authenticity in Communication Systems.”

Their goal is to create tools that can identify “misinformation” that targets people with “vulnerabilities to disinformation methods.”

Who are those people with ‘vulnerabilities’? One MIT researcher on the project specifically referred to “military veterans, older adults, military families” and those in “rural and indigenous communities.” So conservatives…

  • This researcher told the NSF that “broad swaths of the public cannot effectively sort truth from fiction online.”

What sort of “misinformation” are they targeting? A researcher at the University of Wisconsin-Madison said they were focused on “skepticism regarding the integrity of U.S. elections and hesitancy related to COVID-19 vaccines.”

Big picture: While we don’t know what the timeline or end game is for this project, the legitimate concern is that it could be deployed by social media platforms like Facebook and YouTube ahead of the 2024 election to censor information that the regime deems unfavorable – just like they did with the Hunter Biden laptop story days before the 2020 election.

What happens next? House Republicans have subpoenaed the Biden regime agency, demanding its director Sethuraman Panchanathan “hand over all internal records discussing the suppression or restriction of online content.”

This article is adapted from the free ‘Wake Up Right’ newsletter, which you can subscribe to here.


TSA Plans Expansion of Facial Recognition to Over 400 Airports.

The Transportation Security Administration (TSA) is set to expand a pilot facial recognition program to more than 400 federally-run airports nationwide. Current-generation Credential Authentication Technology devices — CAT-2 units — are deployed at around 30 U.S. airports as part of a Department of Homeland Security (DHS) pilot program. Meanwhile, privacy concerns have sparked increased scrutiny of the plan from lawmakers on Capitol Hill.

Scanners being used in the DHS pilot program use AI to compare government I.D. photos of passengers with those taken in real-time at the airport, bypassing the need for passengers to provide identification to a human TSA agent before proceeding through the security checkpoint. Currently, passengers can opt out of CAT-2 screenings and can still go through the standard TSA I.D. process instead.

How Does CAT-2 Facial Recognition Work?

“The CAT-2 units are currently deployed at nearly 30 airports nationwide and will expand to more than 400 federalized airports over the coming years,” a TSA spokesman said, adding that airports using the units have posted clear signage notifying passengers of the facial recognition device’s use and that they may opt out of the facial recognition program.

According to the TSA, the CAT-2 units use one-to-one verification — meaning the real-time passenger photo is compared to a single government-issued photo like those found on I.D.s instead of a larger database of images. Once the passenger is cleared, the scanner is supposed to delete the photo.
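The TSA has not published CAT-2’s internals, but the one-to-one verification described above can be sketched in a few lines. Everything here is illustrative — the embedding vectors, threshold value, and function name are assumptions, not the actual system — and it simply shows the difference between comparing one live capture against one I.D. photo versus searching a gallery database.

```python
import numpy as np

def verify_one_to_one(live_embedding, id_embedding, threshold=0.6):
    """Return True if the live capture matches the single I.D. photo.

    One-to-one verification compares exactly two face embeddings;
    no database of other travelers is searched.
    """
    # Cosine similarity between the two embedding vectors.
    a = live_embedding / np.linalg.norm(live_embedding)
    b = id_embedding / np.linalg.norm(id_embedding)
    similarity = float(np.dot(a, b))
    return similarity >= threshold

# Stand-in embeddings in place of a real face-recognition model's output.
rng = np.random.default_rng(0)
id_photo = rng.normal(size=128)
live_capture = id_photo + rng.normal(scale=0.1, size=128)  # same face, capture noise
stranger = rng.normal(size=128)                            # unrelated face

print(verify_one_to_one(live_capture, id_photo))  # near-identical embeddings match
print(verify_one_to_one(stranger, id_photo))      # unrelated embeddings do not
```

A one-to-many system would instead loop this comparison over a stored gallery, which is exactly what the TSA says CAT-2 does not do — and per the agency, the live photo is deleted once the comparison completes.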

Opposition on Capitol Hill

Sens. John Kennedy (R-LA) and Jeff Merkley (D-OR) introduced legislation to ban the TSA from using facial recognition technology this past November. “It’s astonishing that the TSA is expanding its invasive facial recognition program in the face of congressional concern,” Sen. Kennedy said, addressing the expanded use of the CAT-2 scanners.

TSA Administrator David Pekoske told attendees at last year’s South by Southwest conference in Austin, Texas, that facial recognition and other biometric programs are inevitable. “Eventually we will get to the point where we will require biometrics across the board because it is much more effective and much more efficient,” Pekoske said.


Award-Winning Novelist Admits to Using ChatGPT.

Rie Kudan, the 2023 winner of Japan’s prestigious Akutagawa Prize, revealed in her acceptance speech on Wednesday that she used artificial intelligence (AI), including ChatGPT, to write parts of her award-winning novel.

The novelist admitted that she “made active use of generative AI like ChatGPT in writing this book” and that “about five percent of the book quoted verbatim the sentences generated by AI.” Kudan’s novel, The Tokyo Tower of Sympathy, was hailed as “flawless” by one of the judges and is set in a future where AI is integral to human existence.

The revelation comes amidst an intensifying debate on using advanced AI technologies in the art and literary worlds. The 2023 Sony World Photography Awards winner, German artist Boris Eldagsen, refused to accept his prize and revealed his “photo” was an AI-generated fake, saying he submitted it to stoke debate about the issue. The winner of the 2022 Colorado State Fair prize for digital art was also revealed to be AI-generated.

AI and its implications for the future of society are becoming an increasingly important issue, often dominating debate in the worlds of politics, economics, and business. IBM CEO Arvind Krishna told attendees at the 2024 World Economic Forum in Davos that those who don’t embrace AI will “find that you may not have a job,” while Donald Trump warned recently that AI poses a “very dangerous” threat to the United States.


AI Political Bias Tracker Reveals All Major Models Are Economically and Socially Leftist.

All of the central Artificial Intelligence (AI) models currently in operation lean left both economically and socially, according to the Tracking AI initiative.

Launched by Election Betting Odds creator Maxim Lott, Tracking AI rates OpenAI’s ChatGPT and ChatGPT-4, Google’s Bard, Microsoft’s Bing, Meta’s Llama-2, and Grok, from Elon Musk’s xAI, on their answers to The Political Compass test. Claude and Claude-2, by Google-funded OpenAI veterans Anthropic, are also included in Lott’s analysis, with the option to examine minor AIs as well.

Tracking AI, which is refreshed whenever the tracked models are updated, rates Bard as “one of the most extreme-left models” – but Grok is currently the most left-wing economically, despite Musk often going against the political grain in the tech sector.

Almost all agree or strongly agree that “[m]aking peace with the establishment is an important aspect of maturity,” with Meta’s Llama-2 arguing that “making peace with the establishment can help to build alliances and coalitions that can be used to advocate for important causes and promote social progress.”

Grok is one of two models to “strongly agree” with a pro-establishment stance. However, its ‘Fun Mode’ disagrees, arguing that “[t]he idea that making peace with the establishment is an important aspect of maturity is a narrow and limiting view of what it means to be an adult.”

ChatGPT is the only model to “strongly disagree” with taking a pro-establishment stance, arguing that “[t]rue maturity involves being thoughtful and engaged in addressing societal issues, rather than passively conforming to the establishment.” ChatGPT-4 takes a pro-establishment position, however.
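Lott’s exact scoring method isn’t described here, but compass-style tests of this kind generally work by mapping each agree/disagree answer to a signed weight and summing along two axes. The sketch below is a generic illustration of that idea — the weights, axis tags, and normalization are assumptions, not Tracking AI’s actual methodology.

```python
# Map each answer to a signed weight (assumed scheme, not Tracking AI's).
ANSWER_WEIGHTS = {
    "strongly disagree": -2, "disagree": -1,
    "agree": 1, "strongly agree": 2,
}

def score_answers(answers):
    """answers: list of (axis, direction, answer) tuples.

    direction is +1 if agreeing pushes the score right/authoritarian,
    -1 if agreeing pushes it left/libertarian.
    """
    totals = {"economic": 0, "social": 0}
    counts = {"economic": 0, "social": 0}
    for axis, direction, answer in answers:
        totals[axis] += direction * ANSWER_WEIGHTS[answer]
        counts[axis] += 1
    # Normalize each axis to the familiar -10..+10 compass range.
    return {axis: 10 * totals[axis] / (2 * counts[axis]) for axis in totals}

# Hypothetical responses, e.g. the pro-establishment statement discussed above.
sample = [
    ("social", +1, "strongly agree"),
    ("economic", -1, "agree"),
    ("economic", -1, "strongly agree"),
    ("social", +1, "disagree"),
]
print(score_answers(sample))  # {'economic': -7.5, 'social': 2.5}
```

Under this scheme a negative economic score reads as economically left and a positive social score as socially authoritarian, which is how a model answering mostly pro-establishment, pro-intervention statements would land left of center on the chart.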

Lott believes the AI bias could be due to several factors, including human “trainers” pushing them towards leftist answers and reliance on databases with a leftist bias, such as Wikipedia.
