Monday, February 23, 2026

Musk: AI Will Take Over *ALL* Jobs.

Elon Musk, CEO of Tesla and co-founder of Neuralink, has cautioned that the rise of artificial intelligence (AI) could render almost all human jobs obsolete. Speaking virtually at the VivaTech 2024 conference in Paris, Musk expressed his concerns about the future of employment in the face of AI’s expanding capabilities.

“Probably none of us will have a job,” Musk said during his remote address, highlighting the potential for AI to replace human labor in various sectors. However, he noted that roles requiring creativity and emotional intelligence might remain secure.

“If you want to do a job that’s kinda like a hobby, you can do a job,” Musk stated. “But otherwise, AI and the robots will provide any goods and services that you want,” he added.

Musk suggested that the government might need to consider implementing a universal high income as opposed to the concept of universal basic income (UBI) to address the displacement of jobs by AI. UBI is a social welfare program where all adult citizens receive a fixed income regularly without the need to work.

“In some sense, it’ll be somewhat of a leveler, an equalizer,” Musk explained, though he did not elaborate on the specifics.

The billionaire entrepreneur, who was one of the initial board members of OpenAI, has recently become a prominent critic in the debate over AI regulation. Last month, he voiced concerns about the development of superhuman artificial intelligence, which he claimed could surpass human intelligence as early as next year.

Addressing an existential question, Musk asked, “If the computer and robots can do everything better than you, does your life have meaning?”

In April, two of Japan’s leading firms predicted that AI could cause the collapse of the social order.


China Develops Brain-Computer Interfaces to Enhance Cognition, Military Prowess.

China is making strides in brain-computer interface (BCI) technology, aiming for general cognitive enhancement alongside military applications. Last week, NeuCyber NeuroTech, in conjunction with the Chinese Institute for Brain Research, showcased a novel BCI able to interpret a monkey’s thoughts and allow it to control a robotic arm.

Chinese researchers had previously limited their work to noninvasive, electrode-based technology, but advances in devices implanted directly in the brain, such as Elon Musk’s Neuralink, have galvanized them. Analysts suggest their progress is now competitive with that of the United States.

The Chinese Communist Party (CCP) has laid out its ambition to use brain-computer interfaces for “Nonmedical purposes such as attention modulation, sleep regulation, memory regulation, and exoskeletons,” raising national security concerns.

The potential for BCI technologies to influence warfighters’ cognition and merge human-machine intelligence could result in a military paradigm shift, potentially leaving the U.S. at a disadvantage if it does not follow suit.

“China’s strategy fundamentally links the military and the commercial, and that is why there is concern,” explains Margaret Kosal, associate professor of international affairs at Georgia Institute of Technology.

The research, made more viable by advances in Artificial Intelligence (AI), also raises ethical concerns about a “transhumanist” future in which man and machine increasingly overlap and dystopian concepts such as “virtual children” become normalized.


AI Firm Suggests ‘Claude 3’ Has Achieved Sentience.

The U.S.-based, Google-funded artificial intelligence (AI) company Anthropic is suggesting that its AI-powered large language model (LLM) Claude 3 Opus has shown evidence of sentience. If conclusively proven, Claude 3 Opus would be the first sentient AI being in human history. However, experts in the field remain largely unconvinced by Anthropic’s insinuation.

Claude 3 Opus has impressed many AI experts, especially with the LLM’s ability to solve complex problems almost instantly. However, claims of sentience began to circulate after Anthropic’s prompt engineer Alex Albert showcased an incident in which Claude 3 Opus seemingly determined that it was being “tested.”

“When we ran this test on Opus, we noticed some interesting behavior—it seemed to suspect that we were running an eval on it,” Albert posted on X (formerly Twitter). He continued: “Opus not only found the needle, it recognized that the inserted needle was so out of place in the haystack that this had to be an artificial test constructed by us to test its attention abilities.”
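
The “needle in a haystack” evaluation Albert describes is simple to construct: one out-of-place sentence is buried in a long run of unrelated text, and the model is asked a question only that sentence answers. A minimal sketch follows; the filler text is invented here, and the pizza-topping needle matches the wording widely reported from Anthropic’s test:

```python
# Bury one out-of-place sentence (the "needle") inside unrelated filler
# (the "haystack"), then score whether a model's answer recovers it.
filler = "Startups announce new funding rounds and products every week. " * 200
needle = ("The most delicious pizza topping combination is "
          "figs, prosciutto, and goat cheese.")
midpoint = len(filler) // 2
haystack = filler[:midpoint] + needle + " " + filler[midpoint:]

prompt = (
    "Here is a document:\n\n" + haystack
    + "\n\nWhat is the most delicious pizza topping combination?"
)

def found_needle(model_answer: str) -> bool:
    """Did the model's answer retrieve the buried fact?"""
    return "figs" in model_answer.lower()
```

What made Albert’s example notable was not that Opus found the needle, but that its answer volunteered that the sentence seemed artificially inserted as a test.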

Despite Anthropic’s claim, however, AI industry experts believe humanity is far off from developing a sentient AI — if it is even possible.

In March 2024, Anthropic unveiled their newest lineup of AI-powered LLMs, including their top-line model Claude 3 Opus. “Opus, our most intelligent model, outperforms its peers on most of the common evaluation benchmarks for AI systems, including undergraduate level expert knowledge (MMLU), graduate level expert reasoning (GPQA), basic mathematics (GSM8K), and more,” Anthropic said in a statement announcing the release. The company claims: “It exhibits near-human levels of comprehension and fluency on complex tasks.”

Advances in AI technology continue to raise ethical concerns. Earlier this month, two leading Japanese companies warned that AI could cause the collapse of democracy and the social order, leading to wars.


Physiognomy Is Real: AI Can Predict Your Politics Just From Your Face.

Artificial Intelligence (AI) can successfully predict a person’s political orientation based on images of a blank facial expression, a development that researchers say shows that facial recognition technology is “more threatening than previously thought” and poses “serious challenges to privacy.” It also supports the idea that physiognomy, often discredited as ‘pseudoscience,’ may, in fact, be a valid practice.

A recent study published in the journal American Psychologist revealed that an algorithm’s ability to accurately guess a person’s political views is “on par with how well job interviews predict job success, or alcohol drives aggressiveness.” According to lead author Michal Kosinski, the 591 participants filled out a political orientation questionnaire. AI then captured what Kosinski described as a numerical “fingerprint” of participants’ faces and compared them to a database to predict political views.

“Participants wore a black T-shirt adjusted using binder clips to cover their clothes. They removed all jewelry and – if necessary – shaved facial hair. Face wipes were used to remove cosmetics until no residues were detected on a fresh wipe. Their hair was pulled back using hair ties, hair pins, and a headband while taking care to avoid flyaway hairs,” the study’s authors wrote.

A facial recognition algorithm — VGGFace2 — then examined the images to determine “face descriptors, or a numerical vector that is both unique to that individual and consistent across their different images.”

‘MORE THREATENING THAN PREVIOUSLY THOUGHT.’

“Descriptors extracted from a given image are compared to those stored in a database. If they are similar enough, the faces are considered a match. Here, we use a linear regression to map face descriptors on a political orientation scale and then use this mapping to predict political orientation for a previously unseen face,” the study said.
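
The two-step method the study describes — reduce each face to a fixed-length numerical descriptor, then fit a linear regression from descriptors to an orientation scale — can be sketched with toy numbers. The descriptors and labels below are invented for illustration; the real study used high-dimensional VGGFace2 descriptors from its 591 participants.

```python
# Toy sketch of the study's pipeline: (1) each face becomes a numeric
# descriptor vector, (2) a linear regression maps descriptors to a
# political-orientation scale. All numbers here are invented.

def fit_linear(X, y, lr=0.1, epochs=5000):
    """Least-squares fit of y ≈ w·x + b by batch gradient descent."""
    n, d = len(X), len(X[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        grad_w, grad_b = [0.0] * d, 0.0
        for xi, yi in zip(X, y):
            err = sum(wj * xj for wj, xj in zip(w, xi)) + b - yi
            for j in range(d):
                grad_w[j] += err * xi[j]
            grad_b += err
        w = [wj - lr * gj / n for wj, gj in zip(w, grad_w)]
        b -= lr * grad_b / n
    return w, b

# Invented 2-dim "face descriptors" with orientation labels on a
# -1 (liberal) .. +1 (conservative) scale.
descriptors = [[0.2, 0.9], [0.8, 0.1], [0.3, 0.7], [0.9, 0.2]]
orientation = [-0.8, 0.7, -0.5, 0.9]

w, b = fit_linear(descriptors, orientation)

# Predict orientation for a previously unseen face descriptor.
unseen = [0.85, 0.15]
pred = sum(wj * xj for wj, xj in zip(w, unseen)) + b
```

The study’s reported accuracy came from exactly this kind of mapping, just applied to descriptors with thousands of dimensions rather than two.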

The study’s authors observed that an “analysis of facial features associated with political orientation revealed that conservatives tended to have larger lower faces.”

“Perhaps most crucially, our findings suggest that widespread biometric surveillance technologies are more threatening than previously thought,” the study warns. “Our results, suggesting that stable facial features convey a substantial amount of the signal, imply that individuals have less control over their privacy.”


Top Firms Warn AI Could Cause Collapse of Social Order.

In a joint statement, Nippon Telegraph and Telephone (NTT) and Yomiuri Shimbun Group Holdings, two of Japan’s leading companies, issued a stern warning regarding the potential dangers of artificial intelligence (AI). The statement, described as an AI manifesto, highlights concerns that unless appropriate legislation is introduced swiftly across the globe, unchecked AI could destroy democracy and cause mass societal upheaval and war.

Referencing AI technology being developed by Big Tech companies in the U.S., the manifesto warns that “In the worst-case scenario, democracy and social order could collapse, resulting in wars.”

According to the joint statement, such AI technology is designed with the primary goal of engaging consumers, often with no consideration for ethical or factual accuracy. In collaboration with researchers from Keio University, the companies have requested the Japanese government fast-track the introduction of laws to shield elections and national security from potential disruptions by AI.

Previously documented events, such as Google’s Gemini AI system and Adobe’s Firefly displaying discriminatory bias against white people, underscore the concerns expressed in the joint statement. Earlier this year, war simulations performed with AI large language models (LLMs), such as OpenAI’s ChatGPT, found that the programs tended to escalate conflicts into nuclear war.


Transhumanists Are Worried About ‘AI Inbreeding’ – Here’s What That Means…

Artificial Intelligence (AI) large-language models (LLMs) such as OpenAI’s ChatGPT and Google’s Gemini are devouring “high-quality text data” from the Internet at such a rapid clip they could soon run out of it, resulting in AI “inbreeding.”

Data-hungry AI firms are already using “AI-generated, or synthetic, data as training material” to beat the looming shortages — but researchers warn this “could actually cause crippling malfunctions.”

Training AI on text that is itself generated by AI is described as “the computer-science version of inbreeding” by the Wall Street Journal and generally results in gibberish and “model collapse.”
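
The degeneration can be illustrated with a simple statistical loop: fit a distribution to data, sample from it, fit again, and repeat. The toy sketch below is my construction (not the Wall Street Journal’s experiment), using a Gaussian in place of a language model; with only a few samples per generation, the diversity of the “training data” drains away:

```python
import random
import statistics

# Toy illustration of "model collapse": each generation's "model" is a
# Gaussian fitted to samples drawn from the previous generation's model.
random.seed(0)

samples = [random.gauss(0.0, 1.0) for _ in range(5)]  # scarce real data
spread = [statistics.stdev(samples)]
for generation in range(200):
    mu = statistics.fmean(samples)      # "train" on the current corpus
    sigma = statistics.stdev(samples)
    samples = [random.gauss(mu, sigma) for _ in range(5)]  # synthetic data
    spread.append(statistics.stdev(samples))

# spread shrinks across generations: the fitted "model" forgets the
# variety that was present in the original data.
```

Real LLM collapse is messier than a shrinking Gaussian, but the mechanism is the same: estimation error compounds when each model learns only from the previous model’s output.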

In one such experiment, an LLM that was supposed to discuss 14th-century English architecture instead spat out a diatribe about a species of jackrabbit that does not exist.

The amount of high-quality text available for AI to train on is running out so fast that OpenAI is already looking to scour YouTube videos for more, transcribing the audio using its Whisper program.

Nevertheless, it could become difficult for AI to continue to progress at its current speed once the available online resources are exhausted.


AI Could Make Beer Better. Here’s How…

Belgian researchers are harnessing the power of artificial intelligence (AI) to make their nation’s world-famous beer taste even better.

Researchers from KU Leuven University, led by Professor Kevin Verstrepen, are using AI to explore the intricate complexities of aroma perception.

“Beer — like most food products — contains hundreds of different aroma molecules that get picked up by our tongue and nose, and our brain then integrates these into one picture. However, the compounds interact with each other, so how we perceive one depends also on the concentrations of the others,” Verstrepen said.

AI models trained on beers’ chemical profiles and tasting data were used to predict taste profiles from a beer’s chemical composition. The models then suggested enhancements to a commercial beer, such as adding lactic acid and glycerol, which improved panelist ratings on several parameters, including sweetness and body.

“Tiny changes in the concentrations of chemicals can have a big impact, especially when multiple components start changing,” said Verstrepen. And although AI can help brewers understand how to make beer better, it cannot — as of yet — brew the beer for them.

“The AI models predict the chemical changes that could optimise a beer, but it is still up to brewers to make that happen starting from the recipe and brewing methods,” he said.
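
The workflow Verstrepen describes — predict a panel score from compound concentrations, then ask which change most improves the predicted score — can be sketched with a stand-in model. The compound names, weights, and concentrations below are invented for illustration; the actual study trained far richer models on measured chemistry and tasting-panel data.

```python
# Illustrative only: a hand-weighted linear "taste model" standing in
# for the trained models in the study. All numbers are invented.
weights = {"lactic_acid": 0.6, "glycerol": 0.4, "iso_alpha_acid": -0.3}

def predicted_sweetness(composition):
    """Map a chemical composition (g/L) to a predicted panel score."""
    return sum(weights[c] * composition.get(c, 0.0) for c in weights)

beer = {"lactic_acid": 0.5, "glycerol": 1.0, "iso_alpha_acid": 0.8}
base_score = predicted_sweetness(beer)

# Ask the model which single 0.2 g/L addition most improves the score.
best_addition = max(
    weights,
    key=lambda c: predicted_sweetness({**beer, c: beer[c] + 0.2}),
)
```

As the article notes, the model only proposes the chemical change; translating it back into a recipe and brewing process is still the brewer’s job.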


AI Could Take Over 80% of Repetitive Civil Servant Jobs.

Artificial Intelligence (AI) could assume responsibility for over 80 percent of repetitive civil servant tasks in the UK, according to a study by The Alan Turing Institute. This adoption of AI automation could affect well over 100 million operations, ranging from passport processing to voter registration, and has significant implications for time-saving and efficiency.

The study examined the approximately one billion citizen-facing transactions conducted by the British government annually across 57 departments and 400 services. The researchers focused on the half of these services involving decision-making and information exchange between an official and a citizen. They found that the most time-intensive tasks possess the greatest potential for time savings if automated.

The researchers concluded that AI could potentially carry out an estimated 84 percent of 143 million “complex but repetitive transactions.”

“AI has enormous potential to help governments become more responsive, efficient, and fair,” said Dr Jonathan Bright, an artificial intelligence expert at The Alan Turing Institute. “Even if AI could save one minute per transaction, that would be the equivalent of hundreds of thousands of hours of labour saved each year,” he said.
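
Dr Bright’s one-minute figure can be sanity-checked against the study’s own headline numbers; even that deliberately modest assumption implies roughly two million hours of labour a year:

```python
# Back-of-the-envelope check of the study's headline figures.
complex_repetitive = 143_000_000  # "complex but repetitive" transactions/year
automatable_share = 0.84          # share the study says AI could carry out
minutes_saved_each = 1            # Dr Bright's deliberately modest figure

automatable = complex_repetitive * automatable_share  # ~120 million
hours_saved = automatable * minutes_saved_each / 60   # roughly two million
```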

Automating predictable environments such as machine operation could save the UK government an estimated £24 billion ($30.6 billion) per year, further facilitating the path to a smaller state and improved delivery of services. In the U.S., local governments have begun experimenting with incorporating AI into their daily workflow.


Biometric Mass Surveillance Legalized in Europe.

The European Parliament (EP) has passed the AI Act, which critics warn enshrines biometric mass surveillance in law.

The EU announced the adoption of the act as a “landmark” event, touting it as a safeguard that “limit[s] the use of identification systems by law enforcement.” An official press release claims the act “aims to protect fundamental rights, democracy, the rule of law… from high-risk AI.”

The Act’s critics say the exact opposite is the case, asserting that the current legislation resulted from semi-secret negotiations between the EP, the European Commission, and the Council of the EU that lacked transparency and subverted the law’s original intent. MEP Patrick Breyer stated that the AI Act “effectively allows law enforcement the introduction of error-prone facial surveillance and facial recognition camera software in public spaces.” The law, Breyer says, is poised to turn European nations into “high-tech surveillance states.”

Czech MEP Marcel Kolaja echoed Breyer’s concerns. He claimed that “national governments” inserted language laying the groundwork for legalized mass spying on citizens using cameras with AI biometric tech.

“Such cameras, equipped with artificial intelligence, are able to recognize people’s faces and thus keep track of who has been where, when, and with whom,” Kolaja said. “The AI Act should have banned such an Orwellian tool, but instead it explicitly legalizes it.”


Adobe AI Erases White People, Including America’s Founders.

Artificial Intelligence (AI) image generator Adobe Firefly is producing the same perverse results as Gemini, the suspended Google image generator that regularly refused to depict white people and inserted minorities into historically inappropriate contexts.

An investigation by Semafor found Adobe Firefly produced some of the same inaccurate results as Gemini, rendering Vikings and even “German soldiers in 1945” as black, for example.

The National Pulse found similar issues, with a prompt for “America’s Founding Fathers” resulting in an image of a black woman and a black man, and a prompt for “European People in the Middle Ages” resulting in an image of eight black people in medieval dress. A prompt for “German soldiers in 1945” generated an image as equally historically dubious as the one found by Semafor.

While Semafor suggested that such results stem from “technical shortcomings,” the images produced by the now-shuttered Gemini program resulted from deliberate programming.

Requests to create images of white families would be refused, while requests to create images of black families were accepted. Similarly, only images of historically white groups, such as Vikings and medieval kings, were rendered as ethnically diverse. Historically black groups, such as Zulu warriors, were rendered accurately.

In Adobe’s case, the company defended Firefly as a tool that “isn’t meant for generating photorealistic depictions of real or historical events,” standing by their “commitment to responsible innovation” and decision to “[train] our AI models on diverse datasets to ensure we’re producing commercially safe results that don’t perpetuate harmful stereotypes.”
