Monday, February 23, 2026

Top AI YouTube Channels Rake in $114 Million Annually from Low-Quality Content.

PULSE POINTS

❓WHAT HAPPENED: A report by Kapwing reveals that low-quality AI-generated videos, dubbed “brainrot,” have amassed over 63 billion views on YouTube, generating approximately $114 million annually.

👤WHO WAS INVOLVED: Video-editing company Kapwing, AI content creators, YouTube viewers, and researchers such as Emilie Owens and Eryk Salvaggio.

📍WHEN & WHERE: The report surveyed 15,000 YouTube channels globally in 2025, covering countries including South Korea, Spain, and Egypt.

💬KEY QUOTE: “Generative AI is a tool, and like any tool it can be used to make both high- and low-quality content,” said a YouTube spokesman.

🎯IMPACT: AI-generated content is influencing YouTube’s ecosystem, sparking debates about quality, mental health, and the platform’s role in regulating such material.

IN FULL

A report by video-editing company Kapwing has found that low-quality, AI-generated videos, often described as “brainrot,” are becoming a significant part of YouTube’s ecosystem, drawing vast audiences and substantial revenue. According to the study, these videos have accumulated more than 63 billion views and generate an estimated $114 million annually, with researchers suggesting they may make up over 20 percent of content appearing in users’ feeds.

Kapwing reviewed 15,000 of the world’s most popular YouTube channels and identified 278 that publish only AI-generated material. These channels are global in scope and have amassed large followings. Spanish AI-only channels collectively attract about 20 million subscribers, while Egyptian ones have roughly 18 million. In South Korea, trending AI channels have recorded 8.45 billion views—well above the country’s population.

The content typically includes fabricated K-pop music videos, looped AI-created animal clips, and other repetitive visuals designed to maximize watch time. Kapwing named the Indian channel Bandar Apna Dost as the most-viewed AI-only channel, with 2.4 billion views and an estimated $3.9 million in revenue. Another example, Singapore-based Pouty Frenchie, features videos of a French bulldog aimed at children and could generate close to $3.8 million a year.

Researchers and mental health experts have raised concerns about the effects of prolonged exposure to such material. Emilie Owens, a media researcher at the University of Oslo, said young people often turn to “brainrot” videos as a way to escape stress.

Cambridge University researcher Eryk Salvaggio warned that AI-generated content spreads easily and is often designed to provoke outrage. The Newport Institute, a U.S. mental health organization, has cautioned that excessive consumption could contribute to behavioral addiction and harm cognitive skills such as decision-making and problem-solving.

A YouTube spokesman responded to the findings by saying, “Generative AI is a tool, and like any tool it can be used to make both high- and low-quality content. We remain focused on connecting our users with high-quality content, regardless of how it was made.” The company added that it continues to enforce community guidelines and remove policy-violating videos.

The debate over AI’s impact extends beyond online video. Concerns about safety at the highest levels of AI development have been highlighted by comments attributed to an OpenAI scientist who reportedly said, “We’re definitely going to build a bunker before we release AGI,” reflecting anxiety over artificial general intelligence.

Other research has linked AI-powered pricing systems to higher grocery costs for consumers, while lawsuits and advocacy groups have warned about the mental health risks of intense AI chatbot use among teenagers.

Image by Rego Korosi.

Join Pulse+ to comment below, and receive exclusive e-mail analyses.

Trump Admin Partners With Musk’s Grok AI.

PULSE POINTS

❓WHAT HAPPENED: Elon Musk’s xAI signed a deal with the General Services Administration (GSA) to integrate its Grok AI chatbot with federal agencies.

👤WHO WAS INVOLVED: Elon Musk, xAI, the GSA, and Federal Acquisition Service Commissioner Josh Gruenbaum.

📍WHEN & WHERE: The agreement was recently announced, with implementation expected across U.S. federal agencies.

💬KEY QUOTE: “Thanks to President Trump and his administration, xAI’s frontier AI is now unlocked for every federal agency empowering the U.S. Government to innovate faster and accomplish its mission more effectively than ever before.” – Elon Musk.

🎯IMPACT: The partnership could enhance government efficiency but may also raise concerns among Musk’s political opponents regarding his influence in federal operations.

IN FULL

Elon Musk’s artificial intelligence (AI) company, xAI, has entered into a partnership with the U.S. General Services Administration (GSA) to provide its Grok chatbot for use across federal agencies. The deal is being promoted as a step toward modernizing government services through advanced AI tools, and it will make xAI’s latest Grok models available to agencies through March 2027.

Musk said xAI has the “most capable AI models in the world.” He credited President Donald J. Trump for laying the groundwork for the partnership, stating, “Thanks to President Trump and his administration, xAI’s frontier AI is now unlocked for every federal agency empowering the U.S. Government to innovate faster and accomplish its mission more effectively than ever before.”

Federal Acquisition Service Commissioner Josh Gruenbaum praised the move, calling it essential to government modernization. “Widespread access to advanced AI models is essential to building the efficient, accountable government that taxpayers deserve.”

However, the move is not without controversy, with Grok recently referring to itself as “MechaHitler” and writing fiction about violently sodomizing online liberal personality Will Stancil. A study also found that, in more normal circumstances, Grok, like other large language models, tends to lean left politically, raising questions about potential bias in federally deployed AI tools.

This development places xAI alongside other major AI providers, including OpenAI, Google, Anthropic, and Meta, all of which are on the federal government’s list of approved AI vendors. It also aligns with the GSA’s OneGov strategy, which aims to streamline technology acquisition across government agencies.



MTG Under Fire for Admitting She Voted for the Big, Beautiful Bill Without Reading It.

PULSE POINTS:

❓What Happened: Representative Marjorie Taylor Greene (R-GA) admitted she did not read President Donald J. Trump’s budget reconciliation bill before voting for it and would have opposed it had she known about an AI-related provision. The Georgia Republican also appears not to understand the purpose of the ban on state-level AI regulation, a provision likely intended to prevent Democrat state lawmakers in California from setting regulatory standards for the whole country.

👥 Who’s Involved: Marjorie Taylor Greene, President Trump, national Democratic lawmakers, including Eric Swalwell, Ted Lieu, Mark Pocan, Governor Gavin Newsom (D-CA), and the California state legislature.

📍 Where & When: U.S. House of Representatives; Greene’s admission was posted on X (formerly Twitter) on Tuesday, June 3, 2025.

💬 Key Quote: “Full transparency, I did not know about this section on pages 278-279 of the OBBB that strips states of the right to make laws or regulate AI for 10 years,” Greene wrote on X.

⚠️ Impact: Critics contend that the AI provision would block states from regulating AI systems for a decade, potentially nullifying existing state laws. However, Greene also seems unaware that the provision is actually an assertion of federal authority over AI regulation, meant to effectively prevent far-left state-level Democrats in California from dictating AI regulatory policy for the whole country.

IN FULL:

Representative Marjorie Taylor Greene (R-GA) has acknowledged that she did not thoroughly read President Donald J. Trump’s tax and spending bill, dubbed the “One Big Beautiful Bill” (OBBB), before voting in favor of it. Greene admitted she was unaware of a provision in the bill that would prevent states from regulating artificial intelligence (AI) systems for 10 years.

Posting on X, Greene wrote, “Full transparency, I did not know about this section on pages 278-279 of the OBBB that strips states of the right to make laws or regulate AI for 10 years. I am adamantly OPPOSED to this and it is a violation of state rights and I would have voted NO if I had known this was in there.”

The AI provision, added just two days before the markup, would prohibit state and local governments from enacting laws or regulations targeting AI models, facial recognition systems, and other automated decision tools. While critics make over-the-top claims that the provision removes safeguards or is an infringement on state rights, the section appears more aimed at preventing California from setting AI regulatory standards for the entire country.

Most technology companies working on AI development are either located in California or have a nexus to the state, meaning far-left Democrats in Sacramento can enact regulation directly on most of the industry. Additionally, as has happened with other industries, when California passes sweeping regulatory standards, companies in that sector will often change their policies nationwide to comply with California law rather than creating policies and adjusting consumer or user experiences for Californians alone. Governor Gavin Newsom (D-CA) has already signed several new laws regulating AI.

Under the former Biden government, the lack of federal intervention allowed California to set emissions standards for the automotive industry and regulations on electric vehicles. The provision that Greene didn’t read in Trump’s budget reconciliation bill would prevent the very situation that the Trump White House had to correct by intervening against California on emissions standards.

Notably, Democratic lawmakers, who unanimously opposed the bill, responded sharply to Greene’s admission. Rep. Eric Swalwell (D-CA) posted, “You have one job. To. Read. The. F***ing. Bill.” Similarly, Rep. Ted Lieu (D-CA) noted that he had read the provision and cited it as a reason for his opposition, advising, “PRO TIP: It’s helpful to read stuff before voting on it.”

As it is currently written, the AI provision is unlikely to survive the Byrd Rule in the U.S. Senate, though some lawmakers say they are working to alter the section to be Byrd Rule compliant.

Image by Gage Skidmore.


Court Dismisses News Outlets’ Lawsuit Against OpenAI.

A federal judge in New York has dismissed a lawsuit by news outlets Raw Story and AlterNet against OpenAI. The media organizations accused the company of misusing their copyrighted material to train its AI language model, ChatGPT. Last week, U.S. District Judge Colleen McMahon granted OpenAI’s request to dismiss the lawsuit entirely, citing the plaintiffs’ failure to establish a tangible injury under Article III of the U.S. Constitution, which is necessary for legal standing.

Judge McMahon stated the plaintiffs did not demonstrate any actual harm from the alleged violation of the Digital Millennium Copyright Act (DMCA). She noted that they failed to provide specific instances of ChatGPT reproducing their content without acknowledgment, labeling the likelihood of such occurrences as “remote.”

The case, initiated by Raw Story Media, the parent company of Raw Story and AlterNet, alleged OpenAI contravened Section 1202(b)(1) of the DMCA. The complaint claimed the AI company removed copyright management details from numerous articles during the ChatGPT training process. Raw Story sought damages of at least $2,500 per violation and demanded the removal of their content from OpenAI’s datasets.

The judge pointed out that the plaintiffs’ grievance seemed to revolve around their articles being used without compensation rather than the lack of proper attribution. Despite the ruling, Raw Story and AlterNet can replead their case, although the judge was skeptical about their prospects of proving concrete injury.

OpenAI asserted its use of publicly accessible data is protected under fair use rules. The dismissal may influence similar cases, as OpenAI and other AI firms face numerous lawsuits over the data utilized in training generative AI systems. These include actions from prominent publishers like The New York Times, alleging unauthorized use of articles for AI development.


AI Chatbot Obsession Leads to Teen Suicide.

A Florida teen has taken his own life after becoming obsessed with an artificial intelligence (AI) chatbot designed to resemble a character from a popular television series. Sewell Setzer III, aged only 14, died by suicide after forming a strong emotional bond with the chatbot, which was meant to replicate Daenerys Targaryen from HBO’s Game of Thrones.

The chatbot, created without HBO’s consent, deepened Setzer’s isolation from friends and family, who observed his withdrawal from activities like Formula 1 racing and playing Fortnite. Despite being aware of the chatbot’s artificial nature, Setzer developed a significant emotional attachment, and their interactions included discussions of sensitive topics such as Setzer’s suicidal thoughts.

His final messages to the chatbot revealed a deep emotional bond; shortly afterward, Setzer took his own life using his father’s firearm. The family plans to file a lawsuit against Character.AI, criticizing the chatbot service as “dangerous and untested.”

Character.AI has partnered with Google to license its AI models. The company’s founders have discussed Character.AI’s personas as potential friends for lonely users, pitching them as a form of entertainment.

In response to Setzer’s death, Character.AI expressed condolences to the family and emphasized user safety as a priority. The company has shared plans to implement additional safety measures, including restrictions for users under 18 and resources for individuals discussing self-harm.

The case, reminiscent of the 2013 film Her, underscores the dangers of emotionally engaging AI technology. Earlier this year, Microsoft announced that it had developed AI voice-cloning technology that could mimic voices so convincingly it was deemed too dangerous to release to the public.

In another case, two Harvard students were able to use AI and smart glasses to dox people’s personal information by merely looking at them, raising major privacy concerns.


Students Use Meta ‘Smart Glasses’ and AI to Dox People Just by Looking at Them.

Two Harvard students are using Ray-Ban Meta smart glasses linked to an artificial intelligence (AI) large language model (LLM) to dox people’s personal information just by looking at them. AnhPhu Nguyen and Caine Ardayfio’s modified glasses, dubbed I-XRAY, use facial recognition technology to find pictures online of the people the wearer looks at, cross-referencing them to build a profile including their address, contact details, and partial or even complete social security numbers in real time.

“What makes I-XRAY unique is that it operates entirely automatically, thanks to the recent progress in LLMs,” the creators explain.

“The system leverages the ability of LLMs to understand, process, and compile vast amounts of information from diverse sources–inferring relationships between online sources… and logically parsing a person’s identity and personal details through text,” they continue, explaining that “synergy between LLMs and reverse face search allows for fully automatic and comprehensive data extraction” that can quickly identify a subject’s home address, phone number, and relatives, among other personal information.

Nguyen and Ardayfio recommend a number of steps people can take to better protect their privacy, including using services to remove themselves from reverse face search engines and proactively opting out of databases such as FastPeopleSearch, which allows users to look up the often extensive publicly available information about a person.


Amazon, Facebook, Google Ad Partner Admits Listening to You Through Your Smartphone.

Marketing corporation Cox Media Group (CMG), a partner of Amazon, Facebook, Google, and Microsoft’s Bing, admits eavesdropping on smartphone owners through their microphones, using artificial intelligence (AI) to analyze their conversations and place targeted ads.

In a pitch deck to advertisers, CMG boasts, “Advertisers can pair this voice-data with behavioral data to target in-market consumers.”

A now-deleted blog post by CMG from November 2023 is even more explicit, explaining:

“Imagine a world where you can read minds. One where you know the second someone in your area is concerned about mold in their closet, where you have access to a list of leads who are unhappy with their current contractor, or know who is struggling to pick the perfect fine dining restaurant to propose to their discerning future fiancé. This is a world where no pre-purchase murmurs go unanalyzed, and the whispers of consumers become a tool for you to target, retarget, and conquer your local market. It’s not a far-off fantasy—it’s Active Listening technology, and it enables you to unlock unmatched advertising efficiency today so you can boast a bigger bottom line tomorrow.”

CMG assures readers this “Active Listening” surveillance is not a crime. “We know what you’re thinking. Is this even legal?” the post posits.

“The short answer is: yes. It is legal for phones and devices to listen to you. When a new app download or update prompts consumers with a multi-page terms of use agreement somewhere in the fine print, Active Listening is often included.”

Following reports on CMG’s activities, Google has said it is dropping the firm as a partner. Amazon and Facebook parent company Meta are reviewing their relationships with CMG, with the latter insisting, “Meta does not use your phone’s microphone for ads and we’ve been public about this for years.”

Facebook founder Mark Zuckerberg admits to covering his computers’ cameras and microphones with tape, suggesting he is aware they are vulnerable to would-be eavesdroppers.


Even Big Govt Stalwarts Admit AI Will Gut The Public Sector.

The Tony Blair Institute for Global Change (TBI), headed by former prime minister and Iraq War architect Sir Tony Blair, argued around a million public sector workers can be fired and £40 billion (~$52 billion) can be saved by harnessing artificial intelligence (AI).

“There is only one game changer in our view, [and] that is harnessing … the 21st-century technological revolution,” Blair said at the institute’s annual conference, shortly after his Labour Party returned to power under Prime Minister Sir Keir Starmer, after 14 years in opposition.

“In this new world, companies and nations will either rise or fall,” Blair warned. The TBI believes around 40 percent of the tasks currently carried out by public sector workers could be at least partly automated by AI.

Blair wields enormous influence over Prime Minister Starmer, who went out on a limb to call the deeply unpopular former Labour leader a “very successful” premier and to defend his controversial knighthood. Blair has been trying to use that influence from the outset of Starmer’s premiership, urging him to increase taxes by an additional £50 billion ($63.9 billion).

His proposal to slash the public sector using AI will be controversial, however. There is a vast client state of workers on the government payroll, which the Conservatives failed to tame and which votes reliably for the Labour Party. Public sector unions are also a significant funding source for the leftist party, leaving Starmer with little incentive to cut the public sector down to size.


Advanced Microsoft AI Voice Cloning Deemed Too Dangerous for Public Use.

Microsoft has developed an advanced artificial intelligence (AI) text-to-speech program that achieves human-like believability. VALL-E 2 is the first program of its kind to achieve “human parity,” meaning its speech cannot be distinguished from that of a human. However, the technology remains strictly a research project and is unavailable to the public.

“It may carry potential risks in the misuse of the model, such as spoofing voice identification or impersonating a specific speaker,” researchers say. There are therefore “no plans to incorporate VALL-E 2 into a product or expand access to the public.”

The program can replicate voices with remarkable fidelity after processing as little as three seconds of audio, surpassing previous systems in speech robustness, naturalness, and similarity to the original speaker.

There are concerns the technology could be used for voice spoofing, impersonation, or identity fraud, particularly in phone scams.

Misuse of AI remains a concern in the upcoming presidential election; earlier this year, AI-generated robocalls imitating Joe Biden’s voice were deployed in New Hampshire.

Some Biden supporters are suggesting AI should be used by the Biden campaign to mask the 81-year-old’s obvious cognitive decline, frailty, and confusion from the public.


99.9% Chance AI Will Wipe Out Humanity, Predicts Top Scientist.

In a landmark survey conducted earlier this year, more than half of the 2,778 researchers surveyed expressed concerns over the existential threat posed by superhuman artificial intelligence (AI). The poll suggested a five percent chance that humanity could face extinction or experience other “extremely bad outcomes” due to the rise of superintelligent AI systems.

Prominent among the voices raising the alarm is Roman Yampolskiy, a distinguished computer science lecturer at the University of Louisville and a well-respected figure in AI research. Speaking on the Lex Fridman podcast, Yampolskiy gave a stark prediction, estimating a 99.9 percent probability that AI could obliterate humanity within the next 100 years.

“Creating general superintelligences may not end well for humanity in the long run,” Yampolskiy cautioned. “The best strategy might simply be to avoid starting this potentially perilous game.”

Yampolskiy also highlighted the existing issues with current large language models, noting their propensity for errors and susceptibility to manipulation as evidence of potential future risks.

“Mistakes have already been made; these systems have been jailbroken and used in ways developers did not foresee,” he observed.

Further, Yampolskiy suggested that a superintelligent AI could devise unforeseeable methods to achieve destructive ends, presenting challenges we may not even recognize as threats until it is too late. While he conceded that the probability of AI causing human extinction is not a full 100 percent, he warned that the risk is alarmingly high.

“Even with exponentially increasing our resources, the risk never fully disappears,” he said, illustrating the ongoing challenge of managing a system capable of billions of decisions per second over many years.
