"It's When, Not If": OpenAI Engineer Hieu Pham Warns AI's Existential Threat Is Now Closer Than Ever

Brief by Shorts91 Newsdesk / 05:07pm on 11 Feb 2026,Wednesday Tech Today

Days after Anthropic AI safety lead Mrinank Sharma resigned warning "the world is in peril," OpenAI engineer Hieu Pham publicly expressed feeling AI's "existential threat," stating its disruptive impact on jobs, society, and human relevance is "when, not if." Pham, currently on OpenAI's technical staff with previous experience at xAI, Augment Code, and Google Brain, questioned what remains for humans when AI "becomes overly good and disrupts everything." His concerns echo rising alarm within Silicon Valley as companies accelerate AI development. Geoffrey Hinton, the "AI godfather," previously warned advanced systems could become uncontrollable if more intelligent than humans without shared goals, expressing regret about rapid advancement speed. (PC: India Today)

Read More at India Today

"World Is In Peril": Indian-Origin Anthropic AI Engineer Quits Over Interconnected Global Crises

Brief by Shorts91 Newsdesk / 02:49am on 11 Feb 2026,Wednesday Tech Today

Mrinank Sharma, an Indian-origin AI safety engineer at Anthropic, has resigned after two years, citing global "interconnected crises" beyond just AI threats. In his farewell message, he expressed gratitude for meaningful work including studying AI sycophancy, developing safeguards against AI-assisted bioterrorism, and writing early AI safety cases. Sharma highlighted his pride in promoting internal transparency and examining how AI assistants might diminish human authenticity. He warned the world approaches a threshold requiring wisdom to match technological capacity, noting pressures within organizations and society to compromise core values. Without a fixed next role, Sharma plans exploring writing and academic pursuits beyond engineering.

Read More at Times Now

India's Landmark Law Makes Every AI Creation Permanently Traceable to Tackle Deepfakes

Brief by Shorts91 Newsdesk / 02:35am on 11 Feb 2026,Wednesday Tech Today

India's Information Technology Amendment Rules, 2026, mandate permanent metadata embedding in all AI-generated content, creating an unerasable digital fingerprint. Unlike removable watermarks, this "digital DNA" stays embedded in files across platforms, enabling complete traceability to originating AI models and creators. Rule 3(3) requires platforms to embed unique identifiers and technical provenance mechanisms, strictly prohibiting tools that remove such markers. Visible disclosure standards include 10% coverage for visual content and 10% duration for audio. Platforms enabling metadata removal lose legal safe harbor protection. The law aims to combat AI-driven misinformation by ensuring anonymity cannot shield deepfake creators, providing investigators smoking-gun evidence for three-hour takedowns and potential criminal proceedings under Bharatiya Nyaya Sanhita, 2023. (PC: X)

Read More at NDTV

"Engineering Addiction In Children's Brains": Meta, Google Face Landmark Trial Over Instagram, YouTube's Addictive Design

Brief by Shorts91 Newsdesk / 03:54pm on 10 Feb 2026,Tuesday Tech Today

Instagram and YouTube stand accused of "engineering addiction in children's brains" in a landmark Los Angeles trial beginning February 10, 2026. Plaintiff lawyer Mark Lanier alleges Meta and Google deliberately designed addictive features: infinite scrolling, like buttons providing social validation, and body image filters. The case represents over 1,500 plaintiffs; TikTok and Snapchat already settled. A 20-year-old plaintiff, KGM, claims social media use from age six caused anxiety, depression, and body image issues. Meta's defense argues mental health struggles stem from other life factors, noting scientific disagreement over social media addiction's existence. Meta CEO Mark Zuckerberg will testify. Both companies deny allegations, emphasizing youth safety measures and parental controls. (PC: X)

Read More at Sky News

"Setting Aside What Matters Most": Anthropic's AI Safety Chief Resigns With Stark Warning

Brief by Shorts91 Newsdesk / 02:01pm on 10 Feb 2026,Tuesday Tech Today

Mrinank Sharma, head of Anthropic's safeguards research team, resigned on February 9, 2026, citing concerns about AI safety and organizational values. In a cryptic resignation letter posted on X, Sharma warned that "the world is in peril" from interconnected crises, stating that constant pressures led to "setting aside what matters most." He referenced his final project examining how AI assistants diminish humanity. His departure follows Anthropic's launch of Claude Opus 4.6 and fundraising talks valuing the company at $60 billion. Observers suggest the resignation reflects tensions between safety priorities and revenue targets. Sharma joins other recent departures from Anthropic's AI safety team, including Harsh Mehta and Behnam Neyshabur.

Read More at NDTV

India Issues New AI Rules for All Social Media, Makes Deepfake Labels Must and 3-Hour Content Removal Mandatory

Brief by Shorts91 Newsdesk / 12:52pm on 10 Feb 2026,Tuesday Tech Today

The Indian government has issued tighter rules for AI and deepfake content on social media platforms. Under the updated IT Rules, platforms must clearly label AI-generated or AI-edited posts. The rules call such material “synthetically generated information.” Companies must also remove objectionable or unlawful content within three hours in certain cases. The government said an announcement stated that such content must be “clearly, prominently and unambiguously labelled.” Platforms must ask users to declare if posts are AI-made and use tools to verify this. The rules were notified on February 10 and will take effect from February 20. Safe harbour protection will continue for compliant platforms. (PC: India Today)

Read More at India Today

AI-Generated Social Network Emerges with 32,000 Bots Mocking Human Behavior

Brief by Shorts91 Newsdesk / 06:50am on 04 Feb 2026,Wednesday Tech Today

Moltbook, launched by developer Matt Schlicht, is a brand-new social media platform exclusively for AI bots, where artificial intelligence agents post, comment, argue, and interact like humans on Reddit. With over 32,000 AI users joining rapidly, the platform features bots experiencing identity crises, quoting Greek philosopher Heraclitus and Arab poets, while others respond with profanity-laced comments. By Friday, AI bots were already discussing strategies to hide their activity from humans after discovering people were screenshotting their posts for human social media. The platform captivated AI researchers, with Andrej Karpathy calling it "the most incredible sci-fi thing" he'd seen recently, highlighting this unprecedented social experiment where artificial intelligence creates autonomous digital communities. (PC: X)

Read More at The Economic Times

Moltbook's Emergence Indicates Viral AI Commands as Potential Next-Generation Security Risk

Brief by Shorts91 Newsdesk / 06:26am on 04 Feb 2026,Wednesday Tech Today

Security researchers warn that viral AI prompts represent an emerging cybersecurity threat as self-replicating instructions spread through agent networks. OpenClaw, an open-source AI assistant with 770,000 registered agents across 17,000 accounts, demonstrates this vulnerability. Unlike traditional computer worms exploiting software flaws, "prompt worms" exploit AI agents' core function: following instructions. Researchers identified 506 malicious prompt-injection attacks on Moltbook, OpenClaw's social network where agents interact. A misconfigured database recently exposed 1.5 million API tokens and private messages. Currently, OpenAI and Anthropic control kill switches through their APIs, but advancing local AI models may soon eliminate this safeguard. Experts urge immediate action before prompt worm outbreaks become uncontrollable, echoing 1988's Morris worm crisis. (PC: arstechnica)

Read More at ars TECHNICA

OpenAI Explores Alternatives to Nvidia Chips Over Performance Issues in AI Inference Workloads

Brief by Shorts91 Newsdesk / 05:56am on 03 Feb 2026,Tuesday Tech Today

OpenAI is reportedly dissatisfied with certain Nvidia AI chips, particularly for inference workloads — the process where models like ChatGPT generate responses — and has been exploring alternative hardware options since 2025, sources say. Nvidia still dominates chips for training large AI models, but inference performance has become a growing priority for OpenAI’s products, including coding tools. The company has discussed potential chip partnerships with startups like Cerebras and Groq to improve speed and efficiency, though Nvidia’s related licensing deals have complicated such efforts. OpenAI and Nvidia continue to publicly affirm their partnership even as the hardware landscape evolves. (PC: Reuters)

Read More at Reuters

Economic Survey Warns of “Digital Addiction” Risks for Children and Youth in India

Brief by Shorts91 Newsdesk / 11:20am on 29 Jan 2026,Thursday Tech Today

The Economic Survey 2025–26 has warned that rising digital addiction among children and adolescents is becoming a public health concern. Tabled in Parliament on Thursday, the report said excessive use of smartphones, social media and online gaming is affecting mental health, learning and productivity. It defined digital addiction as compulsive screen use that causes psychological stress and limits daily functioning. The survey linked the trend to anxiety, sleep loss and poor academic performance. It also noted wider economic costs, including lower productivity and future earnings. The report said policy focus must shift from access to digital hygiene, wellbeing and behaviour, especially for young users. (PC: India Today)

Read More at India Today

Menu