Apple CEO Tim Cook says artificial intelligence (AI) programs like large language models (LLMs), such as ChatGPT, are likely to continue generating false information for users. “It’s not 100 percent,” Cook said in a recent interview with The Washington Post when asked whether Apple’s venture into AI would produce a “truthful” chatbot. He emphasized that the company is “confident it will be very high quality” but added: “I would never claim that it’s 100 percent.”
Users of AI tools like LLM-powered chatbots have increasingly encountered what are known as “hallucinations,” a term for AI-generated inaccuracies. These errors can be intentional, meaning programmed into the AI by its creator, or can arise incidentally from how the model is trained and generates responses. The former has increasingly become a flashpoint in American politics, as AI image generators built by major technology companies have produced purposefully inaccurate results for the sake of diversity, equity, and inclusion (DEI).
The National Pulse reported in March that Adobe’s Firefly AI image generator regularly refused to depict white people and inserted minorities into historically inappropriate contexts. Google’s Gemini AI tool faced a similar issue before the company temporarily suspended it amid public backlash. The AI image generator Midjourney recently blocked users altogether from creating images featuring Donald Trump or Joe Biden, saying the policy is meant to prevent election misinformation.
Cook’s remarks came as Apple officially entered the AI sector, announcing a range of machine learning tools at its Worldwide Developers Conference earlier this week.