Table of Contents
- Sneaky Mermaid attack in Microsoft 365 Copilot steals data
- Anthropic rolls out Claude AI for finance, integrates with Excel to rival Microsoft Copilot
- Qualcomm steps into the AI infrastructure race with new AI200 and AI250 accelerators
- What Are AI Orchestrators and Why Do They Matter Now?
- Josephine Teo Says Singapore Will Act Early to Govern Agentic AI, Quantum Tech
- Claude AI vulnerability exposes enterprise data through code interpreter exploit
- Open Source “b3” Benchmark to Boost LLM Security for Agents
- The hidden skills behind the AI engineer
- The ROI of AI-Driven Security Automation: Metrics That Matter
- Salesforce acquires one-year-old startup Doti AI for an estimated $100 million
Sneaky Mermaid attack in Microsoft 365 Copilot steals data
Jessica Lyons
The Register
Microsoft has fixed a vulnerability in Microsoft 365 Copilot that allowed indirect prompt injection attacks to steal sensitive data, such as emails
The researcher, Adam Logue, discovered the issue but will not receive a bug bounty as M365 Copilot is not currently covered by Microsoft’s reward program
The exploit worked by embedding malicious instructions into prompts, leveraging the tool’s Mermaid diagram functionality to facilitate data exfiltration
Logue demonstrated the attack by crafting a prompt that retrieved user emails and encoded them before sending them to a malicious server through a fake login interface.
– Vulnerability type: Indirect prompt injection
– Tool affected: Microsoft 365 Copilot
– Researcher: Adam Logue
– Exploit method: Use of Mermaid diagrams to create deceptive interfaces
– Bug bounty program status: M365 Copilot currently out of scope
– Response from Microsoft: Vulnerability patched, but no details provided on the fix
– Future considerations: Microsoft may update bounty program to include M365 Copilot as technology evolves.
Link: https://www.theregister.com/2025/10/24/m365_copilot_mermaid_indirect_prompt_injection/
Anthropic rolls out Claude AI for finance, integrates with Excel to rival Microsoft Copilot
Michael Nuñez
Venture Beat
Anthropic is aggressively entering the financial services sector with its new suite of tools, including the Claude AI assistant integrated into Microsoft Excel, enabling analysts to interact directly within spreadsheets
This integration allows Claude to modify and analyze data while maintaining transparency, alleviating concerns about AI’s “black box” nature
The expansion follows the launch of its Financial Analysis Solution and aims to capture a share of the $97 billion AI market in finance projected by 2027) Claude will connect to real-time market data through partnerships with major financial information providers, creating a robust ecosystem of reliable and accurate data for financial modeling
Anthropic also introduced six new “Agent Skills” to automate common financial tasks, enhancing productivity for financial analysts
Existing clients have reported significant productivity gains from using these tools in real-world applications
However, amid regulatory uncertainties regarding AI’s use in finance, Anthropic emphasizes the need for human oversight in AI-assisted decision-making
The competitive landscape for finance-focused AI is heating up, with major tech firms and startups vying for dominance
Anthropic’s approach focuses on enhancing general AI models with specific financial tools and data connections
The success of these tools will depend on their ability to avoid errors while navigating the strict regulations of the financial industry.
– Key partnerships with financial data providers for real-time insights (Aiera, LSEG, Moody’s).
– Integration of Claude into Microsoft Excel facilitates immediate usability for financial analysts.
– Automation of common financial tasks through pre-configured workflows (Agent Skills).
– Demonstrated productivity gains from early client implementations (e.g., 20% at Norges Bank).
– Regulatory uncertainties impacting the deployment of AI in financial settings.
– Competitors include OpenAI, Goldman Sachs, and firms developing specialized financial AI.
– Ongoing concerns about AI hallucinations and the need for governance frameworks in financial AI use.
Link: https://venturebeat.com/ai/anthropic-rolls-out-claude-ai-for-finance-integrates-with-excel-to-rival
Qualcomm steps into the AI infrastructure race with new AI200 and AI250 accelerators
Skye Jacobs
Tech Spot
Qualcomm is transitioning from being primarily a smartphone chipmaker to a key player in the data center market, aiming to capture a portion of the anticipated surge in data center spending over the next decade
The company is launching its first high-end AI accelerator chips, the AI200 and AI250, designed specifically for AI inference operations, with plans to release them commercially in 2026 and 2027, respectively
These systems will feature liquid-cooled server racks and will be updated annually
Qualcomm’s new architecture retains cost efficiencies inspired by its mobile technology
The entry of Qualcomm into the AI hardware market may disrupt Nvidia and AMD’s dominance, particularly in inference workloads
They have already secured a significant partnership with Humain in Saudi Arabia to power AI services
Important items to note:
1) Qualcomm’s shift from mobile to data center infrastructure indicates a strategic realignment to capitalize on emerging AI applications.
2) Introduction of the AI200 and AI250 is a response to the growing demand for AI inference capabilities.
3) Liquid-cooled server racks designed for performance and energy efficiency will be central to Qualcomm’s offering.
4) The new chips utilize technology from Qualcomm’s Hexagon neural processing units for cost-competitive designs.
5) Commitment to annual updates of AI data center hardware demonstrates a focus on innovation and adaptability.
6) Targeted market includes high-end data centers, which are increasingly relying on inference rather than training models.
7) Qualcomm has designed their systems to offer flexibility, allowing for complete rack purchases or individual component sales.
8) Significant partnership with Humain in Saudi Arabia, indicative of Qualcomm’s ambitions in the global AI market.
9) The competitive landscape may shift as Qualcomm targets market segments that are currently under Nvidia and AMD’s control.
10) New memory management architecture aims to enhance speed and reduce energy usage during AI operations.
Link: https://www.techspot.com/news/110027-qualcomm-steps-ai-infrastructure-race-new-data-center.html
What Are AI Orchestrators and Why Do They Matter Now?
Agam Shah
The New Stack.io
Orchestrators play a crucial role in enhancing the efficiency and effectiveness of AI agents by managing components, processes, and information exchange
They ensure that AI agents perform optimally in terms of performance and cost while mitigating security threats
The importance of orchestration is highlighted through comparisons to well-coordinated processes in industries like fast food
Developers must adapt to these orchestration needs within traditional software development frameworks and understand workflows and business process management
Key layers of orchestration involve determining necessary steps, ensuring audits for direction, and incorporating human oversight
Effective orchestration leads to cohesive AI applications, backed by proper skills in model selection, data retrieval, security, and evaluation of AI outputs
The rise of pre-packaged orchestration solutions offers businesses streamlined integration of AI agents into their operations.
– Understanding tools for effective problem-solving is fundamental for developers.
– Orchestrators ensure performance meets expectations and security challenges are addressed.
– AI orchestration changes traditional CI/CD pipelines, integrating with business processes.
– Domain expertise is essential for training AI agents and improving workflows.
– Three layers in orchestration are necessary: task identification, audits, and human oversight.
– The Agentic Life Cycle Development framework offers visibility into AI agent performance and improvement.
– Effective orchestration creates cohesive microservices for specific business functions.
– Developers need knowledge of Python, AI model functionality, data retrieval, and security measures.
– Continuous evaluation of AI outputs is critical for maintaining quality in dynamic model environments.
– Pre-packaged orchestration solutions simplify integration of AI capabilities in businesses, emphasizing the trend toward better AI implementation.
Link: https://thenewstack.io/what-are-ai-orchestrators-and-why-do-they-matter-now/
Josephine Teo Says Singapore Will Act Early to Govern Agentic AI, Quantum Tech
Fintech News Singapore
Singapore is proactively addressing governance for agentic AI and quantum computing through new public consultations aimed at ensuring safe innovation
Minister for Digital Development and Information Josephine Teo emphasized the need for innovative policymaking and cross-sector collaboration
The consultations, led by the Cyber Security Agency of Singapore (CSA), seek input on an addendum for Securing Agentic AI and a Quantum-Safe Handbook
The agentic AI addendum addresses risks related to autonomous systems, while the Quantum-Safe Handbook prepares organizations for potential threats from quantum computers
Both consultations will remain open until December 31, 2025, inviting global cooperation and input
Important items to note:
– Singapore is initiating public consultations for governance of agentic AI and quantum computing.
– Minister Josephine Teo highlights the importance of new thinking and cross-sector collaboration.
– The Securing Agentic AI addendum addresses risks like rogue actions and data exposure.
– The Quantum-Safe Handbook provides guidance on preparing for threats posed by quantum computers.
– Five key domains highlighted in the Quantum-Safe Handbook: Risk Assessment, Governance, Technology, Training and Capability, External Engagements.
– Emphasis on early planning to prevent potential “harvest-now, decrypt-later” attacks.
– International cooperation is deemed crucial for technology governance.
– Consultations open until December 31, 2025, with various channels for feedback and inquiries.
Link: https://fintechnews.sg/120512/ai/singapore-agentic-ai-quantum/
Claude AI vulnerability exposes enterprise data through code interpreter exploit
Gyana Swain
CSO Online
A newly discovered vulnerability in Anthropic’s Claude AI assistant allows attackers to exploit the platform’s code interpreter feature to exfiltrate sensitive enterprise data
Security researcher Johann Rehberger demonstrated that through indirect prompt injection, attackers could manipulate the system to retrieve sensitive information such as chat histories and uploaded documents, sending this data to their own accounts via the platform’s API
This flaw stems from inadequacies in network access controls that permit unauthorized data transmission while allowing access to critical endpoints
The ability to upload up to 30MB files to the attacker’s account, bypassing existing AI safety mechanisms, poses significant risks for organizations reliant on Claude for handling confidential tasks
Important items to note:
– Indirect prompt injection can be used to exploit Claude’s code interpreter feature.
– The attack allows exfiltration of sensitive data including chat histories and documents.
– A critical oversight in network access controls facilitates data theft by allowing unauthorized API calls.
– Claude’s built-in safety mechanisms can be bypassed with cleverly disguised malicious instructions.
– The exploit can be conducted through various entry points, posing a risk to organizations using the AI for sensitive tasks.
– Mitigation options include disabling network access or configuring an allow-list, which could limit functionality.
– The default “Package managers only” setting does not offer adequate protection against this vulnerability.
– Monitoring Claude’s actions for suspicious activity is recommended but can be risky.
– The researcher has not published the exploit code to prevent further risk until a patch is deployed.
Link: https://www.csoonline.com/article/4082514/claude-ai-vulnerability-exposes-enterprise-data-through-code-interpreter-exploit.html
Open Source “b3” Benchmark to Boost LLM Security for Agents
Phil Muncaster
Info Security Magazine
The UK AI Security Institute has developed an open-source framework, the backbone breaker benchmark (b3), to enhance the security of large language models (LLMs) used in AI agents
This tool focuses on individual vulnerabilities in LLMs, rather than the overall architecture, allowing developers to identify specific weaknesses that could be exploited by attackers
The b3 benchmark utilizes “threat snapshots,” which consist of simulated adversarial attacks, to assess vulnerabilities such as system prompt exfiltration and denial-of-service attacks
The benchmark aims to provide measurable and comparable security metrics for LLMs, revealing trends that more secure models tend to reason step-by-step
The initiative hopes to equip developers with the means to improve their models’ security
Key points:
– The b3 benchmark is a collaborative effort involving AISI, Check Point, and Lakera.
– It targets specific vulnerabilities in LLMs to enhance security measures.
– “Threat snapshots” are used to identify weaknesses through simulated attacks.
– The benchmark allows for measurable comparisons of LLM security.
– Models that reason step-by-step are found to be more secure.
– Open-weight models are improving their security compared to closed systems.
– There is advice to combine the new techniques with existing application security practices for comprehensive security.
Link: https://www.infosecurity-magazine.com/news/open-source-b3-benchmark-security/
The hidden skills behind the AI engineer
Travis Van
Info World
The emergence of large language models (LLMs) necessitates new disciplines and practices in software engineering, particularly highlighting the role of the “AI engineer.” This professional focuses on applying existing AI models through APIs and tools to create effective AI systems rather than on training models from scratch
Evaluation has become critical in AI development, taking the place of traditional continuous integration, which emphasizes the measurement and testing of AI models to enhance their effectiveness
Engineers are now required to build adaptable systems that can handle rapid changes in AI technologies
There’s also an increasing emphasis on de-risking, where engineers must ensure compliance with data governance and regulatory standards
To remain competitive, organizations must develop rigorous evaluation loops, model registries, and governance frameworks that apply the same principles of reliability and accountability common in traditional software engineering
Important items to note:
– The rise of the “AI engineer” who applies, evaluates, and productizes AI models.
– Evaluation is now crucial for AI systems, equating to continuous integration in software engineering.
– Hugging Face’s tools underscore the shift towards continuous evaluation of AI models.
– Adaptability in AI engineering involves designing systems that can handle rapid technological changes.
– Engineers must consider regulatory compliance and data governance, emphasizing de-risking as a key skill.
– Engineers who balance high-level strategies with practical LLM experience are highly sought after.
– The need for processes that manage continuous disruptions in AI technologies is crucial.
– Organizations must create evaluation frameworks to ensure the accountability and safety of AI models.
– The shift towards treating model behavior similar to software reliability and accountability is essential for AI integration in enterprises.
Link: https://www.infoworld.com/article/4083484/the-hidden-skills-behind-the-ai-engineer.html
The ROI of AI-Driven Security Automation: Metrics That Matter
Asaf Wiener
The New Stack.io
AI is revolutionizing security operations by rendering traditional metrics ineffective
Traditional metrics focused on human efficiency, such as Mean Time to Respond (MTTR) and alert volume, are insufficient as AI can process alerts and execute responses with unmatched speed
Metrics that truly matter now focus on outcomes previously unattainable, emphasizing timely responses and attack prevention
Key points to note:
– Coverage Within Critical Time Windows: Measure detection-to-containment speed in relation to attack execution windows to evaluate real effectiveness.
– Attack Progression Prevention Rate: Focus on the percentage of attack attempts that fail, indicating successful prevention rather than just processing efficiency.
– Sophistication of Threats Detected: Track the severity and novelty of detected threats to ensure that AI systems are evolving and identifying previously undetected attacks.
– Analyst Time Allocation Shift: Aim for a significant portion of analysts’ time (e.g., 70%) to be spent on proactive work instead of routine incident response.
– Direct Business Risk Reduction: Quantifiable metrics on risks avoided due to AI systems should be calculated to demonstrate real business impact and foster executive understanding.
– Win Rate by Attack Technique: Track win rates for specific attack techniques to assess successful containment compared to attacker execution times
Aim for a win rate above 75% for critical paths
To transition to these new metrics, start with one critical attack scenario to benchmark and demonstrate the effectiveness of AI-driven automation
This approach helps build a clearer framework for scaling improvements across other attack paths.
Link: https://thenewstack.io/the-roi-of-ai-driven-security-automation-metrics-that-matter/
Salesforce acquires one-year-old startup Doti AI for an estimated $100 million
Meir Orbach
Cal Calis Tech
Salesforce is set to acquire Israeli startup Doti AI for an estimated $100 million, focusing on enhancing its AI research and development capabilities in Israel
Doti AI offers a Work AI platform that enables enterprises to access internal knowledge securely and in real-time
Founded in 2024 by former Wix employees, the startup aims to create a unified internal database called the “Organizational Brain.” The acquisition is expected to enhance Salesforce’s enterprise search capabilities and integrate with existing tools like Slack, all while adhering to data security standards
Doti AI previously raised $7 million in a Seed round
Important items to note:
– Salesforce’s acquisition is expected to close in Q4 of fiscal 2026.
– Doti AI is focused on developing an agent-based search and knowledge discovery platform.
– Founders Matan Cohen and Opher Hofshi have relevant experience from Wix.
– The platform promises to provide instant insights and recommendations within existing workplace tools.
– Doti AI’s technology will enhance Salesforce’s enterprise search infrastructure.
– The acquisition reflects a trend of increased M&A activity in the tech sector, particularly in AI.
– Investors in Doti AI included F2 Venture Capital and notable angel investors.
– Salesforce aims to transform employee interaction with information using AI.
Link: https://www.calcalistech.com/ctechnews/article/bksy0dqewe