Cybersecurity Update 12-25 July 2025
- Melissa Hathaway

United States of America
Stablecoin Now Recognized Currency and Regulated. On 18 July 2025, President Trump
signed the GENIUS Act into law. It establishes the first comprehensive federal framework for regulating payment stablecoins, a type of cryptocurrency designed to maintain a stable value. The law aims to provide clear regulatory pathways for stablecoin issuers while also imposing significant compliance requirements. Today, some $265 billion in stablecoins are in circulation, and Citigroup has forecast that the market could swell to as much as $3.7 trillion by 2030. Citigroup, JP Morgan, and Bank of America have described the “digital dollar” as a potential threat to the banking industry’s grip on payments — and signaled they’re preparing to respond. Banks, payment companies, and stablecoin issuers must now prepare for significant changes in the regulatory landscape. Any firm that wants to continue offering dollar-backed stablecoins to U.S. users will need to qualify either as a permitted payment stablecoin issuer under federal oversight or as a state-qualified issuer operating below the Act’s $10 billion issuance cap. Issuers remain subject to Bank Secrecy Act compliance obligations, including anti-money-laundering controls, know-your-customer requirements, and suspicious-activity monitoring. Wallet providers, custodians, and payment processors that plug into an issuer’s ecosystem will likely fall under the same scrutiny if they hold customer assets or directly touch transaction flows. Accordingly, issuers may need to expand their vendor-risk frameworks, diligence, contracts, and ongoing monitoring to cover such third-party service providers. Stablecoins are not backed by federal deposit insurance or subject to share insurance by the National Credit Union Administration (NCUA), and permitted issuers may not represent that stablecoins are backed by the federal government or federal deposit insurance. GENIUS will take effect either 18 months after its passage or 120 days after final regulations are issued — whichever comes first. Regulations implementing the Act must be issued within one year of enactment. (Bloomberg, Genius Act, CoinGecko, Axios, Venable, Wilmer Hale, Wired)
America’s AI Action Plan. On 23 July 2025, the Trump Administration released its 28-page “America’s AI Action Plan.” This document presents a significant policy shift. It abandons the Biden Administration’s safety-first approach in favor of an innovation-first strategy designed to win what the White House calls “the AI race.” AI development is framed as a zero-sum global competition: “Whoever has the largest AI ecosystem will set global AI standards and reap broad economic and military benefits.” The Plan has three pillars: innovation, infrastructure, and international diplomacy and security.
• Accelerate AI Innovation: The plan aims to foster a pro-innovation environment by
removing bureaucratic "red tape" and burdensome regulations. It promotes open-source and open-weight AI models, viewing them as valuable for innovation and as having geostrategic value. It also emphasizes enabling AI adoption across various sectors, empowering American workers through skills development and retraining, and supporting next-generation manufacturing.
• Build American AI Infrastructure: This pillar focuses on creating the physical and digital
foundation for AI dominance. It calls for streamlined permitting for data centers,
semiconductor manufacturing facilities, and energy infrastructure. The plan also details
strategies to strengthen the U.S. electric grid, restore American semiconductor manufacturing, and build high-security data centers for military and intelligence use. A key component is training a skilled workforce to build and maintain this infrastructure.
• Lead in International AI Diplomacy and Security: The plan seeks to establish American AI as the global gold standard by exporting its technology stack to allies and partners. The
White House defined “full stack” as “including hardware, models, software, applications, and standards.” It aims to counter Chinese influence in international governance bodies and strengthen export controls on advanced AI compute and semiconductor manufacturing sub-systems to deny adversaries access to these critical resources. The plan also addresses national security risks, including biosecurity and malicious synthetic media, or “deepfakes.”
The Plan also calls for multiple actions to advance the military’s adoption of the technology, including the standup of an “AI and Autonomous Systems Virtual Proving Ground” at the Department of Defense. The plan outlines over 90 federal actions, including ensuring the government buys AI free from “ideological bias.” The actions were developed with “overwhelming input from industry, academia and civil society,” including a request for
information that generated over 10,000 responses. (America’s AI Action Plan, MeriTalk)
Microsoft SharePoint Exploitation. As early as 7 July 2025, attackers began exploiting Microsoft SharePoint on-premises servers. On 20 July 2025, Microsoft issued an urgent alert stating that it was aware of active exploitation of SharePoint on-premises servers worldwide. In the alert, Microsoft said that a vulnerability “allows an authorized attacker to perform spoofing over a network.” It issued recommendations to stop attackers from exploiting it — primarily disconnecting the server from the Internet. At least 10,000 servers are at risk, and at least 400 organizations were penetrated by Chinese malicious actors. Microsoft released a security patch for customers to apply to their SharePoint servers and is working to roll out others. On 23 July 2025, Microsoft warned that ransomware syndicates were now exploiting the flaws. Malicious actors successfully penetrated the National Nuclear Security Administration, a semi-autonomous part of the Department of Energy. Senator Wyden, a Democrat from Oregon, said government agencies have become dependent on “a company that not only doesn’t care about security but is making billions of dollars selling premium cybersecurity services to address the flaws in its products.” (Reuters, HackerNews, Microsoft, WP, EyeResearch, Microsoft, Data Breach Today, Bloomberg, Bleeping Computer, Krebs)
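Public incident-response write-ups on these attacks pointed defenders at one simple triage step: searching web-server logs for POST requests to SharePoint’s ToolPane.aspx endpoint, which reporting identified as the exploitation path. The sketch below illustrates that kind of log triage; the log-line format is an assumption (IIS-style text logs), and the indicator should be taken from current vendor guidance rather than hard-coded.

```python
import re

# Indicator drawn from public reporting on the SharePoint exploitation wave:
# exploitation attempts reportedly appeared as POST requests to ToolPane.aspx.
# The log format below is an assumption (space-separated IIS-style lines).
SUSPICIOUS = re.compile(r"POST\s+\S*/_layouts/15/ToolPane\.aspx", re.IGNORECASE)

def flag_exploit_attempts(log_lines):
    """Return log lines matching the published SharePoint exploitation indicator."""
    return [line for line in log_lines if SUSPICIOUS.search(line)]

# Hypothetical sample entries, for illustration only.
sample = [
    "2025-07-19 10:02:11 POST /_layouts/15/ToolPane.aspx?DisplayMode=Edit - 200",
    "2025-07-19 10:02:12 GET /sites/team/home.aspx - 200",
]
hits = flag_exploit_attempts(sample)
```

A match is only a starting point for investigation — patched servers still log failed attempts — but it gives incident responders a quick way to scope exposure.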
Microsoft is Using Chinese Tech Support for DoD Systems. On 15 July 2025, ProPublica exposed the fact that Microsoft has been using Chinese engineers to help maintain the Defense Department’s Azure cloud — with minimal supervision by U.S. personnel. Upon learning of this, Defense Secretary Hegseth announced that DoD would be “looking into” the security and espionage risks the arrangement may have created. On 18 July 2025, Microsoft stated, “In response to concerns raised earlier this week about U.S.-supervised foreign engineers, Microsoft has made changes to our support for U.S. government customers to assure that no China-based engineering teams are providing technical assistance for DOD government cloud and related services.” Microsoft stated that it had been using “digital escorts” to supervise the Chinese engineers, and that its global workers “have no direct access to customer data or customer systems.” Escorts “with the appropriate clearances and training provide direct support. These personnel are provided specific training on protecting sensitive data, preventing harm, and use of the specific commands/controls within the environment.” Yet some with deep knowledge of the hiring process stated that the $18-per-hour “digital escort” positions lacked the technical expertise to prevent a rogue Chinese employee from hacking the system or turning over classified information to the CCP. Microsoft outsourced the digital-escort work to Leidos, Accenture, Insight Global, and other consulting firms. Each month, the roughly 50-person escort team fields hundreds of interactions with Microsoft’s China-based engineers and developers, inputting those workers’ commands into federal networks — updating a firewall, installing an update to fix a bug, or reviewing logs to troubleshoot a problem. (ProPublica, Reuters, FoxNews, SeekingAlpha)
Clorox Sues Cognizant for 2023 Breach. On 22 July 2025, Clorox filed a $380 million lawsuit
against Cognizant, the company responsible for managing its IT infrastructure. In August 2023, Cognizant staff were socially engineered by Scattered Spider and provided credentials to the malicious actors without proper authentication or identity checks. Clorox’s manufacturing environments and IT infrastructure were disrupted for months. “Clorox entrusted Cognizant with the critical responsibility of safeguarding Clorox’s corporate systems — and Cognizant failed miserably,” said Mary Rose Alexander, outside counsel for The Clorox Company and partner at Latham & Watkins. “Cognizant didn’t just drop the ball. They handed over the keys to Clorox’s corporate network to a notorious cybercriminal group in reckless disregard for Clorox’s policies and long-established cybersecurity standards. It’s all captured on call recordings, and it’s indefensible.” Cognizant criticized Clorox for the lawsuit and said questions remained about how Clorox managed its own internal cybersecurity protocols. “Clorox has tried to blame us for these failures, but the reality is that Clorox hired Cognizant for a narrow scope of help desk services
which Cognizant reasonably performed. Cognizant did not manage cybersecurity for Clorox.” (Cybersecurity Dive, ArsTechnica, CSOOnline)
China’s Salt Typhoon Breached a US State’s National Guard Network. On 11 June 2025, the Department of Homeland Security published its assessment of a breach of an Army National Guard network. “Between March and December 2024, Salt Typhoon extensively compromised a US state’s Army National Guard’s network and, among other things, collected its network configuration and its data traffic with its counterparts’ networks in every other US state and at least four US territories, according to a DOD report. This data also included these networks’ administrator credentials and network diagrams—which could be used to facilitate follow-on Salt Typhoon hacks of these units.” The analysis also noted that the National Guard in 14 U.S. states works with law-enforcement “fusion centers” to share intelligence, and that the hackers accessed a map of geographic locations in the targeted state, diagrams of how internal networks are set up, and personal information of service members. A National Guard Bureau (NGB) spokesperson confirmed the compromise, telling NBC that the attack “has not prevented the National Guard from accomplishing assigned state or federal missions” and that NGB continues to investigate the intrusion to determine its full scope. (NBC News, DHS Document)
Dell Confirms Breach of Test Lab Platform. In July 2025, Dell stated that a malicious actor, World Leaks (formerly known as Hunters International), gained access to its Solution Center, an environment designed to demonstrate Dell’s products and test proofs-of-concept for Dell’s commercial customers. World Leaks is a ransomware group that specializes in extortion: victims are pressured into paying a ransom to avoid the release of sensitive information. It is estimated that in the last two years the group has successfully attacked 280 organizations worldwide. The breach underscores a broader enterprise challenge: securing demonstration environments that must balance accessibility for sales purposes with adequate security controls. (BleepingComputer, CSOOnline)
International Items of Interest
Europe Publishes its AI Code of Practice. On 10 July 2025, the EU published the AI Code of Practice. In the following weeks, Member States and the Commission will assess its adequacy. Additionally, the code will be complemented by Commission guidelines on key concepts related to general-purpose AI models, to be published later in July. The General-Purpose AI (GPAI) Code of Practice is a voluntary tool, prepared by independent experts in a multi-stakeholder process, designed to help industry comply with the AI Act’s obligations for providers of general-purpose AI models. It aims to help companies implement processes and systems to comply with the EU’s legislation for regulating AI. Among other things, the code requires companies to provide and regularly update documentation about their AI tools and services and bans developers from training AI on pirated content; companies must also comply with content owners’ requests to not use their works in their datasets. The EU AI Act goes into effect on 2 August 2025 and includes copyright protections for creators and transparency requirements for advanced AI models. Companies operating highly capable models are expected to sign on to the Code of Practice while the European Commission works out more harmonized and longstanding controls. Those that don’t sign will have to prove to the Commission that they’re complying with the AI Act. Meta has stated that it will not sign on to the Code of Practice. These models would also have to: (1) Report their energy consumption; (2) Perform red-teaming, or adversarial tests,
either internally or externally; (3) Assess and mitigate possible systemic risks, and report any incidents; (4) Ensure they’re using adequate cybersecurity controls; (5) Report the information used to fine-tune the model, and their system architecture; and (6) Conform to more energy-efficient standards if they’re developed. France and Germany have previously voiced concerns that applying too much regulation to general-purpose AI models risks killing off European competitors such as France’s Mistral AI and Germany’s Aleph Alpha. (AI Code of Practice, TechCrunch, Bloomberg, Reuters)
Russia’s Cyber Actors Prompt LLMs to Create Malicious Windows Commands. On 18 July 2025, the Ukraine CERT published its analysis of how Russia’s cyber actors (APT 28) are developing malware capable of querying LLMs to generate Windows shell commands as part of its attack chain. This group is linked to Russia's General Staff Main Intelligence Directorate (GRU) 85th Main Special Service Center (GTsSS). The malware, named LAMEHUG by the Ukrainian CERT, was used in recent spear phishing attacks against Ukrainian government entities and represents a new example of how attackers are using AI in their attacks. The phishing emails were sent from a compromised email account and impersonated a representative of a Ukrainian ministry, according to the CERT-UA report. The malware was contained in a ZIP archive. LAMEHUG’s creators are building the ability to query LLMs directly into the malware. To do so, LAMEHUG leverages the APIs from Hugging Face, the largest platform for hosting LLMs and AI models. The malware instructs the model to behave as a Windows system administrator and to generate a list of shell commands that create a folder, collect computer, network, and Active Directory information, and store it in a text file. By introducing variety into the commands executed on each infection through real-time LLM queries, attackers may aim to evade detection via traditional signature-based methods. (CSOOnline, Ukraine CERT, Ukraine Blog, UK NCSC Report, Bleeping Computer, HackerNews)
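The mechanism CERT-UA describes can be illustrated with a short sketch. Everything below is illustrative, not the actual LAMEHUG code: the prompt wording paraphrases the report’s description, and the LLM call is stubbed out rather than being a real Hugging Face API request.

```python
# Illustrative sketch of the technique CERT-UA describes -- NOT actual
# LAMEHUG code. Prompt text, responses, and the API call are assumptions.

RECON_PROMPT = (
    "Act as a Windows system administrator. Output only a list of cmd.exe "
    "commands that create a folder, collect computer, network, and Active "
    "Directory information, and save the results to a text file."
)

def query_llm(prompt: str) -> str:
    # Stand-in for a hosted-model API call, stubbed so the sketch runs
    # offline. The real malware reportedly sent its prompt to models
    # hosted on Hugging Face and executed whatever commands came back.
    return "mkdir %TEMP%\\info\nsysteminfo >> %TEMP%\\info\\host.txt"

def build_recon_commands() -> list[str]:
    # Because each infection requests freshly generated commands, the
    # executed strings can vary between victims, which is what undermines
    # static signature matching.
    return query_llm(RECON_PROMPT).splitlines()

commands = build_recon_commands()
```

The defensive implication is that detection has to key on behavior (a process making outbound calls to model-hosting APIs, then spawning shells) rather than on fixed command strings.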
Russian Hackers Are Exploring Ways to Inject Propaganda into the Training Data of Generative AI Models. On 11 July 2025, the American Sunlight Project published a blog post on its research showing that GenAI-powered chatbots’ lack of reasoning can directly contribute to the nefarious effects of LLM grooming: the mass production and duplication of false narratives online with the intent of manipulating LLM outputs. There is increasing evidence that malign individuals, organizations, and states are attempting to “groom” generative AI models. In fact, OpenAI’s ChatGPT-4o model cited Pravda content in response to five out of seven prompts on contested subjects. A report last month from the UK think tank Royal United Services Institute found that Russian hackers were actively exploring ways of injecting propaganda or biased material into the training data of generative AI models to skew their output. “This tactic represents a shift from targeting audiences directly to subtly shaping the tools these audiences use.” (FT, RUSI Report, American Sunlight)
Taiwan Semiconductor Industry Targeted. On 16 July 2025, Proofpoint published a research paper highlighting that at least three different Chinese malicious actors are targeting Taiwan’s semiconductor industry. Between March and June 2025, observed targets ranged from organizations involved in the manufacturing, design, and testing of semiconductors and integrated circuits to wider equipment and services supply-chain entities within the sector, as well as financial investment analysts specializing in the Taiwanese semiconductor market. (Proofpoint, CSOOnline)
North Korea Floods NPM Registry with Malware. On 16 July 2025, the Socket Research team reported that North Korean malicious actors escalated their software supply-chain attacks by uploading 67 new malicious packages to the Node package manager (npm) registry as part of the ongoing Contagious Interview campaign. The campaign targets open-source JavaScript developers with malware loaders. The 67 malicious packages have been downloaded more than 17,000 times, and 27 of them remain live on the npm registry. Socket submitted takedown requests to the npm security team and petitioned for the suspension of the associated accounts. The Contagious Interview campaign is designed to be persistent, evasive, and modular. Its reliance on memory-only execution, JavaScript-based payload delivery, and legitimate cloud infrastructure reduces visibility and complicates incident response. (Socket Research, BankInfoSecurity)
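For defenders, the practical response to reports like Socket’s is to check project lockfiles against the published package names. A minimal sketch, assuming an npm v2/v3 package-lock.json layout; the blocklisted name here is a placeholder, not one of the 67 real packages, which should be taken from the advisory itself.

```python
import json

# Minimal lockfile audit against a blocklist of known-bad package names.
# "example-malicious-package" is a placeholder; real indicator lists come
# from advisories such as Socket's report.
BLOCKLIST = {"example-malicious-package"}

def audit_lockfile(lockfile_text: str, blocklist: set[str]) -> list[str]:
    """Return dependency names from an npm v2/v3 package-lock found in the blocklist."""
    lock = json.loads(lockfile_text)
    flagged = []
    for path in lock.get("packages", {}):
        # Keys look like "node_modules/<name>"; the root project is "".
        name = path.rpartition("node_modules/")[2]
        if name in blocklist:
            flagged.append(name)
    return flagged

# Constructed example lockfile for illustration.
sample_lock = json.dumps({
    "packages": {
        "": {"name": "my-app"},
        "node_modules/left-pad": {"version": "1.3.0"},
        "node_modules/example-malicious-package": {"version": "0.0.1"},
    }
})
flagged = audit_lockfile(sample_lock, BLOCKLIST)
```

A name match only catches the exact packages already identified; it does not detect typosquats or future uploads, so it complements rather than replaces dependency-scanning tooling.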
UK Pet Owners Targeted by Fake Microchip Renewal Scams. On 15 July 2025, UK pet
owners began receiving convincing scam emails demanding microchip registration renewals, and the source of the problem appears to lie deeper than mere spam. A recent investigation by Pen Test Partners revealed serious security issues in how microchip data is stored and accessed, giving scammers the tools they need to convincingly imitate official registries. Recipients were asked to verify information about their animals and confirm their own details before being asked to click a link to make a payment, usually £29. (HackRead, SundayPost)
NCSC Launches Vulnerability Research Initiative. On 14 July 2025, the UK’s National Cyber Security Centre (NCSC) publicly launched the Vulnerability Research Initiative (VRI), designed to enhance its understanding of vulnerability research (VR) and improve the sharing of best practices across the external cybersecurity community. Yet the VRI is not new: it has been running quietly as a formal initiative since at least 2019. The VRI comprises a core team of technical experts, relationship managers, and project managers. Their job is to pass on requirements from the NCSC’s in-house vulnerability research team to its VRI industry partners and then monitor the progress of the research. “This successful way of working increases NCSC’s capacity to do VR and shares VR expertise across the UK’s VR ecosystem,” said the NCSC. “Developing deep understanding and expertise of technologies, security mitigations and products takes time. Technology growth is constant, ever complex, security is improving, and thus VR is getting harder. This means the NCSC demand for VR continues to grow.” (InfoSec Magazine, NCSC, The Stack, BleepingComputer)
