Monday, February 23, 2026

Biometric Mass Surveillance Legalized in Europe.

The European Parliament (EP) has passed the AI Act, which critics warn enshrines biometric mass surveillance in law.

The EU announced the adoption of the act as a “landmark” event, touting it as a safeguard that “limit[s] the use of identification systems by law enforcement.” An official press release claims the act “aims to protect fundamental rights, democracy, the rule of law… from high-risk AI.”

The Act’s critics say the exact opposite is the case, asserting that the current legislation resulted from semi-secret negotiations between the EP, the European Commission, and the Council of the EU that lacked transparency and subverted the law’s original intent. MEP Patrick Breyer stated that the AI Act “effectively allows law enforcement the introduction of error-prone facial surveillance and facial recognition camera software in public spaces.” The law, Breyer says, is poised to turn European nations into “high-tech surveillance states.”

Czech MEP Marcel Kolaja echoed Breyer’s concerns. He claimed that “national governments” inserted language laying the groundwork for legalized mass spying on citizens using cameras equipped with AI biometric technology.

“Such cameras, equipped with artificial intelligence, are able to recognize people’s faces and thus keep track of who has been where, when, and with whom,” Kolaja said. “The AI Act should have banned such an Orwellian tool, but instead it explicitly legalizes it.”


Adobe AI Erases White People, Including America’s Founders.

Artificial Intelligence (AI) image generator Adobe Firefly is producing the same perverse results as Gemini, the suspended Google image generator that regularly refused to depict white people and inserted minorities into historically inappropriate contexts.

An investigation by Semafor found Adobe Firefly produced some of the same inaccurate results as Gemini, rendering Vikings and even “German soldiers in 1945” as black, for example.

The National Pulse found similar issues, with a prompt for “America’s Founding Fathers” resulting in an image of a black woman and a black man, and a prompt for “European People in the Middle Ages” resulting in an image of eight black people in medieval dress. A prompt for “German soldiers in 1945” generated an image as historically dubious as the one found by Semafor.

While Semafor suggested that such results stem from “technical shortcomings,” the images produced by the now-shuttered Gemini program resulted from deliberate programming.

Requests to create images of white families were refused, while requests to create images of black families were accepted. Similarly, only images of historically white groups, such as Vikings and medieval kings, were rendered as ethnically diverse. Historically black groups, such as Zulu warriors, were rendered accurately.

In Adobe’s case, the company defended Firefly as a tool that “isn’t meant for generating photorealistic depictions of real or historical events,” standing by its “commitment to responsible innovation” and decision to “[train] our AI models on diverse datasets to ensure we’re producing commercially safe results that don’t perpetuate harmful stereotypes.”


Midjourney AI Puts ‘Foot Down’ on ‘Political Speech,’ Banning Images of Biden, Trump.

Artificial Intelligence (AI) image-generator Midjourney has blocked users from creating images featuring Donald Trump or Joe Biden, in case they are used to generate “misinformation.”

Without providing precise details, Midjourney CEO David Holz said during a digital office event that the new approach is a temporary measure against abuse.

“I don’t really care about political speech,” Holz said. “That’s not the purpose of Midjourney. It’s not that interesting to me. That said, I also don’t want to spend all of my time trying to police political speech. So we’re going to have to put our foot down on it a bit.”

The Associated Press (AP) found efforts to use the AI tool to generate an image of “Trump and Biden shaking hands at the beach” resulted in a “Banned Prompt Detected” warning, with a repeat attempt resulting in an “abuse alert.”

The so-called Center for Countering Digital Hate lobbied for the change, complaining “Midjourney seemed to have the fewest controls of any AI image-generator when it came to generating images of well-known political figures like Joe Biden and Donald Trump.”


WaPo: Use AI To ‘Warp Speed’ Entire Govt.

The Washington Post wants the government to start “Warp Speeding entire agencies and functions” of government using Artificial Intelligence (AI), arguing it could “transform almost everything about our society.”

Josh Tyrangiel, The Post’s specialist AI columnist, claims Joe Biden is “good on AI,” praising him for “delegating to smart people” — Barack Obama has run AI policy in secret — and “banging the drum about generative AI’s ability to create misinformation and harm national security.”

“But the vision remains so small compared with the possibilities,” Tyrangiel complains, arguing an AI rollout in the style of the Operation Warp Speed vaccine program could “revolutionize” the Internal Revenue Service (IRS), public health surveillance, traffic management, and disaster preparedness, among other things.

The WaPo columnist hinted at more sinister work for a government supercharged by AI, however, positively referencing the way the Palantir system “merges real-time views from hundreds of commercial satellites with communications technology and weapons data” to kill people more efficiently in Ukraine.

However, such uses of AI are not without risk: AI-powered drones in US military simulations have been known to turn against their human operators.

“Is there risk? There is,” Tyrangiel admitted, but invoked veterans and the benefits they might gain from AI having a “God view” of their healthcare to justify it.

“Is the risk worse than an average of 18 veterans killing themselves each day? I don’t think so,” he insisted.


State Dept Calls For New Powers To Address ‘Extinction Level Threat’ Posed By AI.

A US State Department-funded study authored by the consultancy firm Gladstone AI says the government should consider a temporary ban on artificial intelligence (AI) that surpasses a certain computational power threshold. The study’s authors warn that advanced AI poses an extinction-level threat to humanity.

Their 247-page report proposes the enactment of sweeping government powers to regulate the development of AI as the technology could “destabilize global security” by hijacking nuclear weapons and critical infrastructure. Gladstone AI suggests the executive branch be granted new emergency powers to respond to hypothetical AI threats.

The State Department-commissioned report also recommends treating high-end computer chips crucial to AI development as international contraband and implementing strict monitoring of hardware usage. Gladstone AI’s conclusions echo sentiments expressed by some in the technology industry, government, and academia who warn that while AI holds significant potential, mismanaged deployment could be radically disruptive.

Gladstone AI’s safety report follows recent concerns raised by UNESCO over neurosurveillance and mental privacy infringements relating to emerging brain chip technology. The AI report was prepared for the State Department’s Bureau of International Security and Nonproliferation, tasked with studying and curbing the threat of emergent weapons systems.

Mark Beall, one of the report’s co-authors, has since left Gladstone AI to launch a new Super PAC, Americans for AI Safety. Beall, the former DoD AI strategy chief, and his Super PAC hope to make AI safety “a key issue in the 2024 elections, with a goal of passing AI safety legislation by the end of 2024.”


Artificial Superintelligence Expected by 2027.

A leading expert in artificial intelligence (AI) has predicted that human-level or superhuman AI, also known as artificial general intelligence (AGI), could be achieved as early as 2027 rather than the previously predicted timeline of 2029 or 2030.

Ben Goertzel made the statement at this year’s Beneficial AGI Summit, cautioning that even as we approach AGI, there are still many unknowns about the technology’s capabilities and timeline. Goertzel founded SingularityNET and is known for his work on Sophia, the humanoid robot.

Goertzel also shared his view that once human-level AGI is reached, it could develop rapidly into an artificial superintelligence (ASI), an AI system with all the combined knowledge of human civilization. This scenario, often referred to as the ‘singularity,’ was previously considered a distant possibility. However, the recent advances in language model technology by OpenAI suggest that it may be closer than initially thought.

Goertzel acknowledged that his predictions are laden with uncertainties, as even powerful AI systems would not have a “human mind” in the conventional sense. Also, his theory assumes that AI technology would evolve in a linear and predictable manner, which does not factor in our world’s social, ethical, and ecological complexities.

American computer scientist and futurist Raymond Kurzweil recently predicted the ‘singularity’ wouldn’t occur until 2045. However, the continued advancement of AI remains a controversial topic. A recent study revealed that AI large language models (LLMs) tended to escalate conflicts to nuclear war when presented with wargaming scenarios.


AI Engineer Says Microsoft’s Product Generates Graphic, Sexual Images That Violate Its Own Policies.

An artificial intelligence engineer at Microsoft says that the company’s AI image generator, Copilot Designer, has been generating inappropriate images that violate the company’s policy.

According to Shane Jones — who has been testing Copilot Designer for vulnerabilities since its March 2023 launch — the AI image generator can be manipulated into creating violent, sexualized images of women, as well as images depicting drug use, teenagers with weapons, abortion-related content, and graphic depictions of demons and monsters. The Microsoft engineer says these generated images violate Microsoft’s responsible AI principles.

Jones’s findings have prompted concerns over the lack of regulation of generative AI technologies, especially given that the Copilot Designer’s Android app remains rated as “E for Everyone,” suggesting suitability for users of all ages. The AI engineer says he started internally reporting his experiences in December, but Microsoft was unwilling to heed his advice to withdraw the product from the market. Instead, the company referred him to OpenAI, whose technology powers Copilot Designer.

After hearing nothing back, Jones published an open letter on LinkedIn, pleading with the start-up to remove DALL-E 3, the latest version of its AI model. In addition, Jones wrote to US senators detailing his concerns with generative AI technology and met with staffers from the Senate Committee on Commerce, Science, and Transportation.

This past Wednesday, Jones sent letters to Federal Trade Commission (FTC) Chair Lina Khan and Microsoft’s board of directors, again urging the tech giant to pull Copilot Designer until better safeguards could be implemented. He also called for the company to add product disclosures and change the app’s rating on Google Play Store to indicate its content is for mature audiences only.


Anti-Misinformation AI Flagging Factual Stories As False.

Artificial intelligence hired by the Washington Secretary of State’s Office to monitor potential election ‘misinformation’ has flagged multiple factual stories from The Center Square regarding evidence of noncitizens illegally voting. Logically — a UK-based AI company — was contracted by the Washington Secretary of State last year to scan for “false content” on various social platforms, including X (formerly Twitter).

The state contract tasks Logically with using its AI tools to identify “harmful narratives” concerning Washington’s elections and generate reports for the Secretary of State’s review. Last summer, Logically generated several reports, which included stories published by The Center Square regarding Washington state’s election laws and an incident in which a foreign national avoided prosecution after illegally voting 28 times.

According to the Logically reports – which don’t address the factual claims made by the news outlet – these stories were flagged as ‘misinformation’ because of how social media users interpreted them. The UK-based AI company flagged the first story because some social media users who read it concluded “…no government agency in Washington State has the mandate or authority to verify citizenship of registered voters, and some users claimed it represents intentional negligence by Washington authorities to enable voter fraud. Users claimed that this leads to foreign nationals regularly voting in Washington’s elections.”

The second story from The Center Square – regarding a foreign national who voted illegally 28 times – was flagged because it may “…motivate individuals to call for stricter voter registration laws and more oversight of the process.”

Derrick Nunnally, a spokesman for the Washington Secretary of State, said neither Logically nor the Secretary of State has used the reports to push for censorship of news stories. Nunnally also insisted the reports have not been used to push for labeling any social media users as having spread ‘misinformation’ either.


JOE-BOTS! Tech Firm Aids Dem Election Efforts with AI.

A technology firm supporting Democratic candidates is leveraging artificial intelligence (AI) in its election outreach endeavors. Tech for Campaigns, which uses digital marketing and data for campaign purposes, is now utilizing generative AI to create digital advertisements and fundraising emails. These tools are ‘AI-aided,’ meaning they are managed and vetted by humans, and form part of a broader effort to help underfunded campaigns conserve resources and succeed in competitive races.

Jessica Alter, the co-founder of Tech for Campaigns, confirmed that the model has already produced results. The group, which originally utilized generative AI tools like Google’s Bard and ChatGPT, now plans to roll out its own suite of AI-enabled tools labeled “TFC Learning Engine.”

In 2023, an experiment across 14 Virginia campaigns showed that AI-aided emails garnered three to four times more fundraising dollars per work hour than solely human-written ones. “It can help generate ad ideas. It could help generate even regular marketing material like flyers and signs… and if you have data, it can help you analyze that data and pull out insight,” Alter said.

Generative AI garnered attention following the 2022 release of OpenAI’s product, ChatGPT, and has since expanded, offering tools that generate text, images, human voices, and video from simple prompts, diverging from previous AI technology that merely manipulated existing media.

However, this development occurs amid controversy surrounding AI’s use in politics. OpenAI declared in January that it won’t allow its applications to be used for political campaigning. The Federal Communications Commission outlawed AI-generated robocalls following a call impersonating President Joe Biden. AI companies have signed a pledge to prevent their algorithms from being used for election interference, and House leaders formed a bipartisan task force on AI to examine legal accountability for AI-caused election interference.


Big Tech Draws Up Private Compact to Reduce AI Election Interference.

Some of the largest technology and social media companies in the world have agreed to a private compact in what they say is an effort to combat the use of artificial intelligence (AI) to disrupt the over 40 national elections being held around the world in 2024. Technology executives gathered in Munich, Germany, for a security conference and announced the voluntary framework on Friday. Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI, Elon Musk’s X (formerly Twitter), and TikTok were among the signatories.

The compact doesn’t commit signatories to any specific actions but does lay out strategies they intend to use to publicly identify AI-generated videos and images “that deceptively fake or alter the appearance, voice, or actions of political candidates, election officials, and other key stakeholders in a democratic election.” It states companies will share best practices, but the compact does not commit them to banning or removing deepfakes or other altered content.

Ahead of the summit, Meta’s president of global affairs, Nick Clegg, stated: “Everybody recognizes that no one tech company, no one government, no one civil society organization is able to deal with the advent of this technology and its possible nefarious use on their own.” Meta is the parent company of social media giants Facebook and Instagram.

GOVERNMENT COLLUSION.

Concerns over deepfakes and other AI-altered content have increased as the nascent technology has rapidly improved over the last year. However, the compact also raises concerns about possible ongoing collusion between governments and technology companies aimed at censoring citizens’ speech. Over the past year, there have been numerous instances of the U.S. and foreign governments pressuring social media companies to remove unfavorable content.

Last summer, a federal judge ordered the Biden administration to cease communications with social media platforms for “the purpose of urging, encouraging, pressuring, or inducing in any manner the removal, deletion, suppression, or reduction of content containing protected free speech.”
