Author: V Diwahar

V Diwahar is a final-year B.E Cybersecurity student, independent security researcher, and founder of CyberInfos.in, a global cybersecurity analysis blog delivering technical depth, expert threat intelligence, and actionable security guidance to readers across the US, UK, Europe, Asia, and beyond. With hands-on academic and practical experience in ethical hacking, network security, malware analysis, penetration testing, vulnerability research, and digital forensics, he brings a practitioner's perspective to every article, going beyond headlines to analyse what vulnerabilities and breaches actually mean, who is genuinely at risk, and what every reader should do about it right now.

Every article published on CyberInfos.in is built on verified technical research: CVE details cross-referenced with nvd.nist.gov, attack mechanics explained using real tools and lab environments, and expert analysis that challenges official statements when the evidence demands it. He founded CyberInfos.in with a single mission: to fill the gap between generic press-release rewrites and inaccessible technical papers, delivering cybersecurity analysis that is deep enough for security professionals, clear enough for business owners, and actionable enough for everyone.

Anthropic has accused three major Chinese AI companies of orchestrating large-scale Claude distillation attacks, involving more than 16 million exchanges with its Claude models. According to the company, the activity violated its terms of service and sidestepped regional access restrictions designed to limit availability. Anthropic says the campaigns were engineered to extract advanced reasoning, coding, and agentic capabilities from its frontier systems using AI model distillation. Inside a lab, distillation is routine: smaller models learn from larger ones all the time. But Anthropic’s position is blunt: when you point that technique at a rival’s model without permission, it stops being research…

Read More
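Classic lab-style distillation, as the teaser notes, trains a smaller "student" model to match a larger "teacher" model's softened output distribution. A minimal sketch of the core loss is below; the function names are illustrative (not from any of the systems mentioned), and API-based extraction of a rival's model works on sampled text outputs rather than raw logits like these:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher T softens the distribution."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between teacher and student soft targets.

    Minimizing this trains the student to absorb the teacher's
    "dark knowledge" encoded in relative class probabilities.
    """
    p = softmax(teacher_logits, temperature)  # teacher soft targets
    q = softmax(student_logits, temperature)  # student predictions
    return float(np.sum(p * (np.log(p) - np.log(q))))

# A student whose logits match the teacher's incurs zero loss;
# a mismatched student incurs a positive penalty.
teacher = [4.0, 1.0, 0.5]
aligned = distillation_loss(teacher, [4.0, 1.0, 0.5])   # ~0.0
mismatch = distillation_loss(teacher, [0.5, 1.0, 4.0])  # > 0
```

In practice this term is blended with a standard cross-entropy loss on ground-truth labels; the dispute here is not about the math, which is well established, but about running it against a competitor's hosted model without permission.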

Google’s Antigravity suspension wave has sent a jolt through the AI developer ecosystem, leaving thousands of OpenClaw users abruptly cut off from Gemini model access. What initially looked like a routine backend capacity hiccup quickly spiraled into account restrictions, persistent 403 errors, and growing accusations that Google had overcorrected. And here’s where it gets uncomfortable. The enforcement action zeroes in on developers who used the OpenClaw OAuth plugin to tap into subsidized Gemini model tokens through Google’s Antigravity platform, Google DeepMind’s developer-facing gateway to Gemini AI infrastructure. Google says the setup violated its AI terms of service by funneling…

Read More

The recent PayPal data breach didn’t begin with ransomware headlines or a dramatic network intrusion. Instead, it unfolded quietly inside a specialized financial workflow used by small businesses. For roughly six months in 2025, a software error in PayPal’s Working Capital loan system exposed highly sensitive customer data, including Social Security numbers and dates of birth. According to reporting from BleepingComputer and Cybernews, approximately 100 customers were affected. That number may sound limited. But when SSNs are involved, impact matters more than scale. Some affected customers experienced unauthorized transactions. PayPal says those transactions were reversed. Still, for business owners who depend on…

Read More

When a brand as globally recognized as Adidas makes headlines for a data breach, the world pays attention. And right now, Adidas is doing exactly what no billion-dollar company wants to do: investigate a potential breach of customer and partner data. On February 16, 2026, a threat actor going by the name “LAPSUS-GROUP” posted on the dark web forum BreachForums, claiming to have infiltrated Adidas’ extranet and extracted 815,000 rows of sensitive data. If that number sounds alarming, that’s because it’s meant to, but as we dig deeper, the real story is far more nuanced. How Did This All Start? Like many…

Read More

For years, security researchers have warned that generative AI would eventually move beyond phishing emails and scripted scams and become embedded directly inside live malware. That shift no longer feels theoretical. A newly discovered strain called PromptSpy Android malware is the first documented example of Android malware using generative AI during runtime execution. Rather than relying entirely on hardcoded logic, it consults Google’s Gemini model mid-operation to determine how to stay persistent on an infected device. It’s not cinematic. It’s not dramatic. But it is meaningful. Discovered in February 2026 by researchers at ESET, PromptSpy integrates AI in a focused…

Read More

When critical infrastructure software is exposed to the internet, attackers rarely wait. That pattern has repeated itself with the recent SmarterMail vulnerabilities, which were weaponized within days of disclosure and are now tied to real-world ransomware activity. Security researchers monitoring underground Telegram channels and cybercrime forums observed threat actors rapidly sharing proof-of-concept (PoC) exploit code, offensive tooling, and even stolen administrator credentials linked to CVE-2026-24423 and CVE-2026-23760. What stands out isn’t just the severity of the flaws; it’s the speed at which they were operationalized. Email servers have quietly become one of the most strategic entry points into corporate networks.…

Read More