CoPhish: Copilot Studio Agents Used to Steal OAuth Tokens via Trusted Microsoft Domains
Overview
Security researchers have identified a new phishing technique named “CoPhish” that leverages Microsoft Copilot Studio agents to deliver fraudulent OAuth consent prompts via legitimate Microsoft domains. The campaign uses the trust provided by Microsoft-owned infrastructure to present users with what appear to be valid consent dialogs, inducing them to grant permissions to malicious applications and thereby handing over OAuth tokens that can be used to access corporate resources.
CoPhish weaponizes Copilot Studio agents to deliver fraudulent OAuth consent requests via legitimate and trusted Microsoft domains.
Background and context: why this matters
OAuth consent phishing — where attackers trick users into approving access for malicious third‑party applications — is a well-established social engineering technique. Unlike password theft, a successful OAuth consent grant can give attackers persistent API‑level access to mail, files, calendars and other enterprise resources without ever seeing a user’s password. Because access tokens and refresh tokens are valid credentials in their own right, they can bypass protections that depend on passwords, including some forms of multi‑factor authentication.
The CoPhish pattern matters because it combines two risk multipliers: the platformization of AI tooling and the implicit trust users place in services hosted on trusted vendor domains. Copilot Studio is an environment intended for building and running AI agents; when those agents are used to surface links or consent flows that resolve on Microsoft’s own domains, users are more likely to accept the request. That blending of legitimate infrastructure with fraudulent intent makes detection and user education harder.
There is precedent for consent‑based malware and phishing: the 2017 Google Docs “OAuth worm” is a high‑profile example where users granted a malicious app access to their Google accounts, allowing the attack to propagate. Security teams have since repeatedly warned about consent phishing as part of the broader identity‑first threat landscape.
Technical analysis and attacker techniques (practitioner view)
- Abuse of trusted hosting: By leveraging Microsoft domains to host or redirect to consent dialogs, attackers reduce obvious visual cues that normally alert users — such as unfamiliar domains — and increase the perceived legitimacy of the consent prompt.
- Use of platform features: Copilot Studio agents can generate and deliver content at scale. When those agents are used as distribution vectors, they can automate persuasion, tailor content to targets and deliver consent requests in contextually relevant ways (for example, as part of a workflow or ticket).
- OAuth token acquisition: When a user consents to an application, Azure AD (or another identity provider) issues access and refresh tokens scoped to the permissions granted. Attackers can use those tokens to call APIs on behalf of the user — exfiltrating mail, files or other data — without needing the user’s password.
- Persistence and lateral movement: Refresh tokens allow attackers to obtain new access tokens without returning to the user for consent. Combined with API access, this capability can enable prolonged access, data staging and stealthy lateral activity within an environment.
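To make the token-acquisition step concrete, the sketch below decodes the payload of a JWT access token to show the delegated scopes it carries. The token here is fabricated for illustration (the claims, user and audience are invented); during a real investigation you would decode a token captured from logs the same way. Note the decode is unverified, which is fine for inspection but never for authorization decisions.

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode the (unverified) payload segment of a JWT to inspect its claims.

    This does NOT validate the signature; it is only useful for seeing
    what scopes a token carries, e.g. during incident triage.
    """
    payload_b64 = token.split(".")[1]
    # Restore base64url padding before decoding
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Fabricated example claims illustrating delegated Graph permissions an
# attacker would hold after a successful consent grant.
claims = {
    "aud": "https://graph.microsoft.com",
    "scp": "Mail.Read Files.Read.All offline_access",
    "upn": "user@contoso.example",
}
header = base64.urlsafe_b64encode(b'{"alg":"none"}').rstrip(b"=").decode()
body = base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=").decode()
fake_token = f"{header}.{body}."

decoded = decode_jwt_payload(fake_token)
print(decoded["scp"])  # the delegated permissions granted to the app
```

The `offline_access` scope is the one to watch: it is what entitles the application to a refresh token, and therefore to long-lived access.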
Risks, implications and comparable cases
The primary risks from a CoPhish‑style campaign are unauthorized data access and account takeover via delegated permissions. Specific implications for organizations include:
- Data exfiltration from mailboxes, file stores and collaboration platforms.
- Privilege escalation if high‑scope permissions are granted (for example, directory read/write access).
- Compromise of third‑party integrations and downstream services that trust compromised accounts.
- Operational disruption from cleanup, token revocation and forensic investigations.
- Reputational damage if attacker access leads to leak of sensitive customer or employee data.
Comparable cases are well known in the security community. The 2017 Google Docs incident demonstrated how OAuth consent flows can be abused to propagate malware and harvest data. Since then, security practitioners have repeatedly observed consent phishing campaigns targeting corporate users across different identity providers. The CoPhish vector is notable primarily for its use of Copilot Studio agents and Microsoft domains to increase the social engineering success rate.
Actionable recommendations for detection and mitigation
Practitioners should treat consent phishing as a critical identity risk and implement layered technical controls, governance and user education. Recommended actions include:
Harden app consent policies in Azure AD:
- Require administrator consent for applications requesting high‑risk permissions.
- Disable or restrict the “Users can consent to apps” setting, or limit user consent to a curated list of pre‑approved applications.
- Use “Verified publisher” and application publisher restrictions to reduce acceptance of unknown apps.
Inventory and audit enterprise app permissions:
- Regularly review enterprise applications, registered apps and user consent grants; revoke any unfamiliar or excessive permissions.
- Audit OAuth grant events and application registrations for anomalies (sudden spikes, new redirect URIs, unusual scopes).
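One of the anomalies above, a sudden spike in consent grants for a single app, is straightforward to flag once audit events are exported. The record shape and threshold below are illustrative assumptions, not a real log schema.

```python
from collections import Counter
from datetime import date

# Simplified audit records: (event_date, app_id) pairs, e.g. exported from
# Azure AD audit logs filtered to "Consent to application" events.
grants = [
    (date(2025, 10, 1), "app-a"), (date(2025, 10, 2), "app-a"),
    (date(2025, 10, 3), "app-x"), (date(2025, 10, 3), "app-x"),
    (date(2025, 10, 3), "app-x"), (date(2025, 10, 3), "app-x"),
]

def flag_grant_spikes(events, threshold=3):
    """Flag (day, app) pairs whose consent-grant count meets the threshold,
    which may indicate a campaign pushing one malicious app to many users."""
    counts = Counter(events)
    return [key for key, n in counts.items() if n >= threshold]

print(flag_grant_spikes(grants))  # → [(datetime.date(2025, 10, 3), 'app-x')]
```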
Monitor identity and cloud activity:
- Enable and review Azure AD sign‑in and audit logs for anomalous token issuance, unfamiliar IP addresses, impossible travel between sign‑in locations, and post‑consent API calls that transfer large volumes of data.
- Use Cloud Access Security Broker (CASB) tooling such as Microsoft Defender for Cloud Apps to detect unsanctioned applications and risky OAuth grants.
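The impossible-travel signal mentioned above reduces to comparing consecutive sign-ins per user. The record format and one-hour window below are illustrative assumptions; production detections (e.g. in Defender for Cloud Apps) use richer geolocation and velocity models.

```python
from datetime import datetime, timedelta

# Simplified per-user sign-in records: (timestamp, country), as might be
# exported from Azure AD sign-in logs.
signins = [
    (datetime(2025, 10, 3, 9, 0), "US"),
    (datetime(2025, 10, 3, 9, 40), "US"),
    (datetime(2025, 10, 3, 10, 5), "RO"),  # different country 25 minutes later
]

def flag_geo_hops(events, window=timedelta(hours=1)):
    """Flag consecutive sign-ins from different countries inside the window,
    a crude proxy for a stolen token being replayed from attacker infrastructure."""
    hops = []
    for (t1, c1), (t2, c2) in zip(events, events[1:]):
        if c1 != c2 and (t2 - t1) <= window:
            hops.append((t1, c1, t2, c2))
    return hops

print(flag_geo_hops(signins))
```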
Enforce conditional access and least privilege:
- Apply conditional access policies that restrict app access by device compliance, location, or risk signals.
- Require step‑up authentication for sensitive operations and scope‑based least privilege for applications.
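A conditional access decision of the kind described here is, at its core, a small policy function over device, location and risk signals. This is a toy model of the decision flow, not how Azure AD policies are authored.

```python
def access_decision(device_compliant: bool, location_allowed: bool, risk: str) -> str:
    """Toy conditional-access evaluation: block high-risk sessions outright,
    require step-up authentication on any anomaly, otherwise allow."""
    if risk == "high":
        return "block"
    if not device_compliant or not location_allowed or risk == "medium":
        return "require_mfa"
    return "allow"

print(access_decision(True, True, "low"))      # allow
print(access_decision(True, True, "medium"))   # require_mfa
print(access_decision(False, True, "high"))    # block
```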
Incident response and remediation:
- When a consent grant is suspected, revoke the app’s permissions and any associated refresh tokens immediately, and rotate affected credentials.
- Perform forensic collection of sign‑in logs, token issuance events and API call history to determine scope and duration of access.
- Notify affected stakeholders and follow breach reporting obligations where applicable.
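The revocation steps above can be sketched as two calls against Microsoft Graph: deleting the delegated permission grant, then invalidating the user's refresh tokens. The paths mirror real Graph operations (`oauth2PermissionGrants`, `revokeSignInSessions`), but the client here is a stand-in so the flow can be shown without credentials; a real runbook would use an authenticated HTTP client with admin privileges and error handling.

```python
class GraphClient:
    """Minimal stand-in for an authenticated Microsoft Graph client;
    it records calls instead of sending HTTP requests."""
    def __init__(self):
        self.calls = []

    def delete(self, path: str):
        self.calls.append(("DELETE", path))

    def post(self, path: str):
        self.calls.append(("POST", path))

def remediate_consent_grant(client, user_id: str, grant_id: str):
    # Remove the delegated permission grant for the malicious app...
    client.delete(f"/oauth2PermissionGrants/{grant_id}")
    # ...then revoke the user's sessions so existing refresh tokens
    # can no longer mint new access tokens.
    client.post(f"/users/{user_id}/revokeSignInSessions")

client = GraphClient()
remediate_consent_grant(client, "user-123", "grant-456")
print(client.calls)
```

Order matters: revoking sessions without removing the grant leaves the app able to re-acquire tokens if the user signs in again.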
User awareness and operational playbooks:
- Train users to scrutinize OAuth consent screens — check the requesting application name, publisher and scopes being requested. Encourage reporting of unexpected consent prompts even when they appear on trusted domains.
- Develop runbooks for rapid response to consent phishing: app revocation, token revocation, account containment and forensic analysis.
Expert commentary and defensive considerations
From a defender’s perspective, CoPhish underscores the importance of treating identity as a primary attack surface. Traditional phishing training and email filtering remain essential, but identity controls and OAuth governance are equally important because they close a vector that bypasses password‑centric defenses.
Architectural controls are critical. Limiting application registration to admins, enforcing admin consent for high‑risk scopes, and maintaining strict publisher verification reduce the attack surface. Continuous monitoring of consent grants and leveraging behavior analytics to flag unusual API access patterns will help detect abuse faster.
Operationally, organizations should take a zero‑trust posture toward any new consent grant. Even when flows originate on vendor domains, validate that the requesting application and requested scopes align with business need. Where possible, require multi‑party approval for the most sensitive entitlements.
Conclusion
CoPhish demonstrates how attackers are adapting social engineering to exploit modern collaboration and AI development platforms. By using Copilot Studio agents and trusted Microsoft domains, attackers increase the credibility of OAuth consent prompts and raise the success rate of consent phishing. Defenders must respond by tightening OAuth governance, monitoring delegated access, and ensuring operational readiness to revoke malicious app grants quickly. In an identity‑first threat landscape, robust consent management and continuous monitoring are essential controls for reducing the risk of token‑based compromise.
Source: www.bleepingcomputer.com