Monday, February 23, 2026

Ex-NSA Chief Joins OpenAI Board.

OpenAI announced on Thursday that Paul M. Nakasone, a retired US Army general and former head of the National Security Agency (NSA), will join its board of directors. The move has sparked concern among some left-leaning civil liberties advocates—in part because Nakasone was appointed to lead the NSA by former President Donald J. Trump, serving from 2018 until February of this year.

Prior to his departure from the NSA, Nakasone authored an op-ed advocating the renewal of Section 702 of the Foreign Intelligence Surveillance Act (FISA), a controversial provision Congress reauthorized in April. Section 702 allows the government to spy on Americans without a warrant so long as they are communicating with noncitizens in a foreign country.

As part of his board role, Nakasone will join OpenAI’s recently formed Safety and Security Committee, where he will help the company deepen its understanding of how artificial intelligence (AI) can be harnessed to bolster cybersecurity, particularly in detecting and responding to threats.

The new appointment comes amid a string of safety-related departures at OpenAI. Notable exits include co-founder and chief scientist Ilya Sutskever, who was involved in the contentious firing and reinstatement of CEO Sam Altman in November. Additionally, Jan Leike resigned, publicly remarking that “safety culture and processes have taken a backseat to shiny products.”

Bret Taylor, chair of OpenAI’s board, emphasized the significance of secure innovation in AI. “Artificial intelligence has the potential to have huge positive impacts on people’s lives, but it can only meet this potential if these innovations are securely built and deployed,” he stated. Taylor further highlighted Nakasone’s extensive experience in cybersecurity as a valuable asset in guiding OpenAI towards its mission of ensuring artificial general intelligence benefits all of humanity.

AI Will Always Lie, Admits Apple’s Tim Cook.

Apple CEO Tim Cook says artificial intelligence (AI) programs like large language models (LLMs)—think ChatGPT—are likely to continue generating false information and lying to users. “It’s not 100 percent,” Cook said in a recent interview with The Washington Post when asked whether Apple’s venture into AI would produce a ‘truthful’ chatbot. He emphasized that the company is “confident it will be very high quality” but added: “I would never claim that it’s 100 percent.”

Users of AI tools like LLM-powered chatbots increasingly encounter “hallucinations,” a term for AI-generated inaccuracies. These errors can either be intentional—i.e., programmed into the AI by its creator—or can arise incidentally from the way the model is trained. The former has increasingly become a flashpoint in American politics, as some AI image generators built by major technology companies have produced purposely inaccurate results for the sake of diversity, equity, and inclusion (DEI).

The National Pulse reported in March that Adobe’s Firefly AI image generator regularly refused to depict white people and inserted minorities into historically inappropriate contexts. Google’s Gemini AI tool faced a similar issue before Google temporarily suspended it after public backlash. The AI image generator Midjourney recently blocked users from creating images featuring Donald Trump or Joe Biden altogether, claiming the policy is meant to prevent election misinformation.

Cook’s remarks came as Apple has officially entered the AI sector, announcing a range of machine learning tools at its Worldwide Developers Conference earlier this week.

Joe Rogan Advances Communist Utopia Theory, With Magic $200K For ALL Americans!

Joe Rogan says he believes that “there’s enough money in this country” to support those who lose their jobs to technological advancement. “Just imagine the entire country just gets a free $200,000 a year [per person]; you’re never going to have to worry about food. You’re never going to have to worry about a place to live. You’re good,” Rogan said on his Tuesday podcast. He added: “You’ve got $200,000 a year because everything’s automated and everything’s done by the government, then you’re going to have to find something. You’re going to have to find a purpose.”

The remarks came during Tuesday’s episode of “The Joe Rogan Experience,” where he spoke with technology mogul Billy Carson, the founder of the aerospace company First Class Space Agency and media firm 4BiddenKnowledge Inc. Carson expressed concerns that while advancements in artificial intelligence and robotics are furthering human potential, the economic impacts could be devastating, with many humans around the world losing their jobs.

Meanwhile, Rogan, whose podcast boasts 14.5 million subscribers on Spotify, laid out a vision that is essentially the communist utopia depicted in the popular science fiction franchise Star Trek, where capitalism and money have been abolished, and the issue of resource scarcity has been overcome.

Despite Carson and Rogan’s shared belief that technology-induced communism is just around the corner, recent setbacks at driverless vehicle companies and Elon Musk’s Neuralink suggest that, for now, the Star Trek economy remains in the realm of fiction.

Musk: AI Will Take Over *ALL* Jobs.

Elon Musk, CEO of Tesla and co-founder of Neuralink, has cautioned that the rise of artificial intelligence (AI) could render almost all human jobs obsolete. Speaking virtually at the VivaTech 2024 conference in Paris, Musk expressed his concerns about the future of employment in the face of AI’s expanding capabilities.

“Probably none of us will have a job,” Musk said during his remote address, highlighting the potential for AI to replace human labor in various sectors. However, he noted that roles requiring creativity and emotional intelligence might remain secure.

“If you want to do a job that’s kinda like a hobby, you can do a job,” Musk stated. “But otherwise, AI and the robots will provide any goods and services that you want,” he added.

Musk suggested that the government might need to consider implementing a universal high income as opposed to the concept of universal basic income (UBI) to address the displacement of jobs by AI. UBI is a social welfare program where all adult citizens receive a fixed income regularly without the need to work.

“In some sense, it’ll be somewhat of a leveler, an equalizer,” Musk explained, though he did not elaborate on the specifics.

The billionaire entrepreneur, who was one of OpenAI’s initial board members, has recently become a prominent critic in the debate over AI regulation. Last month, he voiced concerns about the development of superhuman artificial intelligence, which he claimed could surpass human intelligence as early as next year.

Addressing an existential question, Musk asked, “If the computer and robots can do everything better than you, does your life have meaning?”

In April, two of Japan’s leading firms predicted that AI could cause the collapse of the social order.

Autonomous Trucks on U.S. Highways in 3 Years.

Daimler Truck, a subsidiary of Mercedes-Benz’s parent company, says it has developed a fully automated long-haul truck that will be ready to hit the road by 2027. The eCascadia is an all-electric, automated version of Daimler’s popular Freightliner Cascadia truck models. The self-driving truck is outfitted with long-range sensors and a powerful computing system the company says can quickly process data to make snap navigation decisions.

Joanna Butler, who leads Daimler’s global autonomous technology group, says the company’s ultimate goal is to deploy the trucks across the Southwest United States in a middle-mile freight hauling role. This means the autonomous trucks will engage in hub-to-hub middle-mile freight hauls, primarily using federal interstate highways. Butler emphasized the company’s commitment to safety, stating, “Our mantra is really it’s a marathon and not a sprint.”

Daimler touts its autonomous freight trucks as “an autonomous vehicle that doesn’t sleep, doesn’t need to stop, and basically can drive continuously in the Level 4 hub-to-hub mode—that is our targeted use case.” However, other companies in the self-driving truck industry have been forced to fold following a decline in public support and pushback from regulators.

In October of last year, a pedestrian was hit, pinned underneath, and dragged by a robotaxi manufactured by Cruise, the self-driving unit under General Motors. Following the pedestrian collision — and collapse in investor confidence — Cruise was forced to lay off nearly a quarter of its employees.

Around 1.5 million Americans are employed directly by the trucking and long-haul freight industries, and upwards of 8 million more work in connected industries. Some economists fear the abrupt replacement of human drivers with autonomous systems would have severe economic consequences that outweigh even the public safety concerns.

AI Firm Suggests ‘Claude 3’ Has Achieved Sentience.

The U.S.-based, Google-funded artificial intelligence (AI) company Anthropic is suggesting that its AI-powered large language model (LLM) Claude 3 Opus has shown evidence of sentience. If conclusively proven, Claude 3 Opus would be the first sentient AI in human history. However, experts in the field remain largely unconvinced by Anthropic’s insinuation.

Claude 3 Opus has impressed many AI experts, particularly with the LLM’s ability to solve complex problems almost instantly. However, claims of sentience began to circulate after Anthropic prompt engineer Alex Albert showcased an incident in which Claude 3 Opus seemingly determined that it was being “tested.”

“When we ran this test on Opus, we noticed some interesting behavior—it seemed to suspect that we were running an eval on it,” Albert posted on X (formerly Twitter). He continued: “Opus not only found the needle, it recognized that the inserted needle was so out of place in the haystack that this had to be an artificial test constructed by us to test its attention abilities.”
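
The “needle in a haystack” eval Albert describes has a simple structure: bury one out-of-place sentence in a long run of filler text, then ask the model to retrieve it. Below is a minimal sketch of that construction; the filler text, needle sentence, and function name are illustrative, not Anthropic’s actual test harness:

```python
import random

def build_needle_test(filler: str, needle: str, n_paragraphs: int = 200,
                      seed: int = 0) -> str:
    """Assemble a long 'haystack' of repeated filler paragraphs with one
    out-of-place 'needle' sentence hidden at a random position, then wrap
    the whole thing in a retrieval prompt."""
    rng = random.Random(seed)
    paragraphs = [filler] * n_paragraphs
    paragraphs.insert(rng.randrange(n_paragraphs + 1), needle)
    haystack = "\n\n".join(paragraphs)
    return (f"{haystack}\n\n"
            "Question: one sentence above does not belong with the rest "
            "of the document. Quote it exactly.")

prompt = build_needle_test(
    filler="Essays on startups and venture capital, repeated as padding.",
    needle="The most delicious pizza topping combination is figs and goat cheese.",
)
```

A model “passes” by quoting the needle; what caught Albert’s attention was Opus additionally commenting that the needle was so out of place the whole document had to be an artificial test.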

Despite Anthropic’s claim, however, AI industry experts believe humanity is far off from developing a sentient AI — if it is even possible.

In March 2024, Anthropic unveiled its newest lineup of AI-powered LLMs, including its top-line model, Claude 3 Opus. “Opus, our most intelligent model, outperforms its peers on most of the common evaluation benchmarks for AI systems, including undergraduate level expert knowledge (MMLU), graduate level expert reasoning (GPQA), basic mathematics (GSM8K), and more,” Anthropic said in a statement announcing the release. The company claims: “It exhibits near-human levels of comprehension and fluency on complex tasks.”

Advances in AI technology continue to raise ethical concerns. Earlier this month, two leading Japanese companies warned that AI could cause the collapse of democracy and the social order, leading to wars.

Top Firms Warn AI Could Cause Collapse of Social Order.

In a joint statement, Nippon Telegraph and Telephone (NTT) and Yomiuri Shimbun Group Holdings, two of Japan’s leading companies, issued a stern warning regarding the potential dangers of artificial intelligence (AI). The statement, described as an AI manifesto, highlights concerns that unless appropriate legislation is introduced swiftly across the globe, unchecked AI could destroy democracy and cause mass societal upheaval and war.

Referencing AI technology being developed by Big Tech companies in the U.S., the manifesto warns that “In the worst-case scenario, democracy and social order could collapse, resulting in wars.”

According to the joint statement, such AI technology is designed with the primary goal of engaging consumers, often with no consideration for ethical or factual accuracy. In collaboration with researchers from Keio University, the companies have requested the Japanese government fast-track the introduction of laws to shield elections and national security from potential disruptions by AI.

Previously documented events, such as Google’s Gemini AI system and Adobe’s Firefly displaying discriminatory bias against white people, underscore the concerns expressed in the joint statement. Earlier this year, war simulations performed with AI large language models (LLMs), such as OpenAI’s ChatGPT, found that the programs tended to escalate conflicts into nuclear war.

Transhumanists Are Worried About ‘AI Inbreeding’ – Here’s What That Means…

Artificial Intelligence (AI) large-language models (LLMs) such as OpenAI’s ChatGPT and Google’s Gemini are devouring “high-quality text data” from the Internet at such a rapid clip they could soon run out of it, resulting in AI “inbreeding.”

Data-hungry AI firms are already using “AI-generated, or synthetic, data as training material” to beat the looming shortages — but researchers warn this “could actually cause crippling malfunctions.”

Training AI on text that is itself generated by AI is described as “the computer-science version of inbreeding” by the Wall Street Journal and generally results in gibberish and “model collapse.”

In one such experiment, an LLM that was supposed to discuss 14th-century English architecture instead spat out a diatribe about a species of jackrabbit that does not exist.
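
The degradation can be illustrated with a toy statistical analogue: repeatedly fit a distribution to samples drawn from the previous generation’s fit, and the estimated diversity withers away. This is not a real LLM, just a minimal sketch of the shrinking-variance dynamic behind what researchers call model collapse:

```python
import random
import statistics

def collapse_demo(generations: int = 100, n_samples: int = 50, seed: int = 1):
    """Toy 'model collapse': each generation fits a Gaussian to samples
    drawn from the previous generation's fitted Gaussian, standing in for
    'training on synthetic data'. Returns the fitted spread per generation."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0            # generation 0: the 'real' data distribution
    history = [sigma]
    for _ in range(generations):
        samples = [rng.gauss(mu, sigma) for _ in range(n_samples)]
        mu = statistics.fmean(samples)      # refit on the synthetic samples
        sigma = statistics.pstdev(samples)  # biased estimator: shrinks on average
        history.append(sigma)
    return history

history = collapse_demo()
```

Run this and the fitted spread drifts well below the original distribution’s: the tails of the “real” data are progressively forgotten, which is the statistical cousin of an LLM converging on gibberish.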

The amount of high-quality text available for AI to train on is running out so fast that OpenAI is already looking to scour YouTube videos for more, transcribing their audio with its Whisper speech-recognition program.

Nevertheless, it could become difficult for AI to continue to progress at its current speed once the available online resources are exhausted.

AI Could Make Beer Better. Here’s How…

Belgian researchers are harnessing the power of artificial intelligence (AI) to make their nation’s world-famous beer taste even better.

Researchers from KU Leuven University, led by Professor Kevin Verstrepen, are using AI to explore the intricate complexities of aroma perception.

“Beer — like most food products — contains hundreds of different aroma molecules that get picked up by our tongue and nose, and our brain then integrates these into one picture. However, the compounds interact with each other, so how we perceive one depends also on the concentrations of the others,” Verstrepen said.

The researchers chemically profiled hundreds of commercial beers and paired those measurements with tasting-panel scores. The AI models built from these data sets were used to predict taste profiles from a beer’s chemical composition. The models then suggested enhancements to a commercial beer, such as adding lactic acid and glycerol, which improved panelist ratings on several parameters, including sweetness and body.
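
As a rough illustration of the approach, here is a toy regression in the spirit of, but far simpler than, the KU Leuven models: predict a panel rating from a single chemical concentration. The lactic-acid values and ratings are invented purely for the example:

```python
def fit_line(xs, ys):
    """Ordinary least squares for one predictor: rating ≈ slope*x + intercept.
    A stand-in for the much richer multi-compound models in the study."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return slope, intercept

# hypothetical (lactic acid g/L, panel sweetness rating) pairs
lactic = [0.5, 1.0, 1.5, 2.0, 2.5]
rating = [4.0, 5.0, 6.0, 7.0, 8.0]   # deliberately linear, for clarity
slope, intercept = fit_line(lactic, rating)
predicted_at_3 = slope * 3.0 + intercept   # model's suggested-tweak direction
```

The real work inverts this idea: once the model maps chemistry to perceived taste, brewers can ask which concentration changes should nudge ratings upward.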

“Tiny changes in the concentrations of chemicals can have a big impact, especially when multiple components start changing,” said Verstrepen. And although AI can help brewers understand how to make beer better, it cannot — as of yet — brew the beer for them.

“The AI models predict the chemical changes that could optimise a beer, but it is still up to brewers to make that happen starting from the recipe and brewing methods,” he said.

AI Could Take Over 80% of Repetitive Civil Servant Jobs.

Artificial Intelligence (AI) could assume responsibility for over 80 percent of repetitive civil servant tasks in the UK, according to a study by The Alan Turing Institute. This adoption of AI automation could affect well over 100 million operations, ranging from passport processing to voter registration, and has significant implications for time-saving and efficiency.

The study examined the approximately one billion citizen-facing transactions the British government conducts annually across 57 departments and 400 services, focusing on the roughly half that involve decision-making and an exchange of information between an official and a citizen. The researchers found that the most time-intensive tasks offer the greatest potential time savings if automated.

The researchers concluded that AI could potentially carry out an estimated 84 percent of 143 million “complex but repetitive transactions.”

“AI has enormous potential to help governments become more responsive, efficient, and fair,” said Dr. Jonathan Bright, an artificial intelligence expert at The Alan Turing Institute. “Even if AI could save one minute per transaction, that would be the equivalent of hundreds of thousands of hours of labour saved each year,” he said.
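
A back-of-the-envelope check of that claim against the study’s own figures, assuming (hypothetically) that one minute is saved on every automatable transaction:

```python
# Figures from the Alan Turing Institute study, as reported above
transactions = 143_000_000   # "complex but repetitive" transactions per year
automatable = 0.84           # estimated share AI could carry out
minutes_saved = 1            # Bright's deliberately conservative per-transaction saving

hours_saved = transactions * automatable * minutes_saved / 60
# ≈ 2 million hours per year at full scale
```

At the study’s full scale, a single saved minute per transaction works out to roughly two million hours a year, comfortably beyond the “hundreds of thousands of hours” Bright cites as a floor.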

Automating predictable environments such as machine operation could save the UK government an estimated £24 billion ($30.6 billion) per year, further facilitating the path to a smaller state and improved delivery of services. In the U.S., local governments have begun experimenting with incorporating AI into their daily workflow.
