How can we prepare our data for secure AI adoption?
Before you deploy AI broadly, it helps to treat data security as the foundation of your AI strategy. The white paper outlines a four-step approach you can follow:
1. Know your data
Many organizations simply don’t know where their sensitive data lives. In fact, about 30% of decision-makers say they lack visibility into all their business-critical data. Start by:
- Using Microsoft Purview Content Explorer and Activity Explorer to locate sensitive data and see how it’s being used.
- Classifying and labeling sensitive data with built-in or custom Sensitive Information Types (SITs) in Microsoft Purview.
- Enabling users to apply sensitivity labels directly in Microsoft 365 apps as they work.
This gives you a clear view of which data could be referenced by AI and where you’re most exposed.
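To make the idea of Sensitive Information Types concrete, here is a deliberately simplified sketch of what pattern-based classification does: scan text for patterns that look like sensitive data and report which types matched. The pattern names and regular expressions are illustrative assumptions; real Purview SITs combine patterns with keywords, checksums, and confidence levels.

```python
import re

# Simplified stand-ins for Sensitive Information Types (SITs).
# Real Purview SITs are richer: patterns plus keywords, checksum
# validation, and confidence levels. These two are illustrative only.
SIT_PATTERNS = {
    "Credit Card Number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "U.S. Social Security Number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def classify(text: str) -> list[str]:
    """Return the names of the SITs whose pattern matches the text."""
    return [name for name, pattern in SIT_PATTERNS.items() if pattern.search(text)]
```

Running `classify` over documents (or file metadata exports) gives a rough picture of where sensitive values appear, which is the same visibility question Content Explorer answers at enterprise scale.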
2. Clean up and govern your data
Non-compliant AI usage can lead to regulatory issues and fines, especially if permissions and content governance are weak. To strengthen your posture:
- Review SharePoint sites and file permissions to identify overshared or open access content.
- Remediate risky permissions and apply SharePoint-wide policies for content management.
- Delete old or obsolete data that no longer needs to be retained.
- Use Microsoft Purview machine learning classifiers to detect and mitigate risks at scale.
Doing this before AI deployment reduces the chance that Copilot or other AI tools surface content that should never have been accessible in the first place.
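The oversharing review above can be modeled as a simple check: flag any site whose permissions include a tenant-wide group. The site records and group names below are hypothetical; in practice this data would come from SharePoint admin reports or permission audits.

```python
# Groups that effectively grant tenant-wide access. Names here mirror
# common SharePoint defaults but are used illustratively.
BROAD_GROUPS = {"Everyone", "Everyone except external users"}

def find_overshared(sites: list[dict]) -> list[str]:
    """Return the names of sites granted to a tenant-wide group."""
    return [s["name"] for s in sites if BROAD_GROUPS & set(s["grantedTo"])]

# Hypothetical permission export.
sites = [
    {"name": "HR-Reviews", "grantedTo": ["HR Team", "Everyone"]},
    {"name": "Marketing", "grantedTo": ["Marketing Team"]},
]
```

A site flagged this way is exactly the kind of content an AI assistant could surface to any user, which is why this cleanup belongs before deployment.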
3. Protect your sensitive data
Protection should be tied to the sensitivity of the data itself. Microsoft Purview Information Protection lets you:
- Classify, label, and protect data based on sensitivity (for example, “General” vs. “Highly Confidential”).
- Apply protections such as encryption, rights management, and watermarks automatically based on labels.
- Enable secure collaboration so that less sensitive content can be shared broadly, while highly confidential data is restricted to a small, authorized group.
Copilot respects these labels. Any output generated from labeled content inherits the highest sensitivity label among the referenced files the user is allowed to access, so protection carries from source content through to AI output.
4. Prevent data loss
Data loss prevention (DLP) becomes even more important when AI is in the mix, because users may paste or upload sensitive data into prompts. To reduce this risk:
- Configure Microsoft Purview Data Loss Prevention policies to monitor and block risky activities such as uploading sensitive data to cloud services, copying to USB, or sharing externally.
- Extend DLP controls to Windows 10 devices, the Chrome browser, on-premises file shares, SharePoint libraries, and Microsoft Teams chats and channels.
- Use DLP to help keep users from sending sensitive data in AI prompts, and to protect AI-generated content that itself needs safeguarding from loss.
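Conceptually, a prompt-level DLP check is a gate that runs before the prompt ever reaches an AI tool. The sketch below uses a single SSN-style pattern as a stand-in for a full DLP policy; the pattern and function names are illustrative assumptions.

```python
import re

# Stand-in for a DLP policy condition: one SSN-like pattern.
# A real Purview DLP policy evaluates many SITs, locations, and actions.
SENSITIVE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def allow_prompt(prompt: str) -> bool:
    """Return True when the prompt contains no monitored pattern."""
    return SENSITIVE.search(prompt) is None
```

A blocked prompt would typically also generate a policy tip for the user and an audit event for the security team.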
By following these four steps—know, govern, protect, and prevent—you create a data environment that is better prepared for AI and aligned with your security and compliance requirements before you turn on tools like Copilot.
Why choose Copilot for Microsoft 365 from a security and compliance perspective?
Many employees are already using AI at work—75% of knowledge workers, according to recent research—and about 78% are bringing their own AI tools into the workplace. That creates shadow AI and increases the risk of data oversharing, data leakage, and non-compliant usage, especially when tools lack enterprise-grade controls.
Copilot for Microsoft 365 is designed to fit into an enterprise security and compliance strategy rather than sit outside it. Key reasons to consider it include:
1. Built on your existing Microsoft 365 security and compliance posture
Copilot runs on top of your existing Microsoft 365 environment. It:
- Uses the same identity, access, and permissions model you already have in place.
- Only accesses content that the user is already authorized to see.
- Is managed with the same tools and standards you use today for Microsoft 365.
This means you don’t have to reinvent your security model just to adopt AI.
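The "only accesses content the user is already authorized to see" principle amounts to permission-trimmed retrieval: filter candidate content against the user's existing access before anything reaches the model. The ACL structure and sample items below are simplified assumptions.

```python
# Hypothetical content items with simplified access control lists.
items = [
    {"title": "Q3 board deck", "acl": {"cfo", "ceo"}},
    {"title": "Team handbook", "acl": {"cfo", "ceo", "alex"}},
]

def accessible_content(user: str, items: list[dict]) -> list[str]:
    """Return only the items whose ACL already includes the user."""
    return [i["title"] for i in items if user in i["acl"]]
```

The point of the model: the AI layer adds no new access; a user who cannot open the board deck today cannot have Copilot summarize it either.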
2. Strong data protection and privacy commitments
Copilot is designed so that:
- You control your data, including where it is stored and processed (data residency and EU data boundary options for data at rest).
- Your data is encrypted.
- Your organization’s data is not used to train the foundational large language models that power Copilot.
- Commercial data protection applies to web-grounded prompts, which draw on the latest web data.
These commitments help you adopt AI while maintaining control over sensitive information.
3. Label-aware and protection-aware AI
Because Copilot is integrated with Microsoft Purview Information Protection:
- It recognizes sensitivity labels and protections applied to files and emails.
- Any Copilot-generated response inherits the highest sensitivity label of the referenced content that the user can access.
- Protections such as encryption and rights management can carry through from source content to AI-generated output.
This label inheritance helps reduce the risk of accidental oversharing through AI responses.
4. Flexible security paths: Core vs. Best-in-class
Microsoft offers two deployment paths:
- Core path: Uses Copilot’s built-in security, compliance, privacy, and responsible AI capabilities.
- Best-in-class path: Adds more advanced data security and compliance controls on top, such as richer labeling policies, DLP, and governance features.
You can choose the path that aligns with your current maturity and then evolve over time.
5. Integrated governance and compliance tooling
Copilot is treated as a core Microsoft 365 service, so it is subject to the same terms and conditions and can be governed with:
- Audit logs to capture when Copilot interactions occur.
- Data lifecycle management to retain or delete interaction content according to policy.
- Communication compliance to detect non-compliant prompts or responses.
- eDiscovery to search Copilot interactions as part of investigations.
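The data lifecycle piece can be reduced to one decision per interaction record: purge it once it exceeds the retention period, unless it is under hold (for example, for an eDiscovery case). The retention period, field names, and record shape below are illustrative assumptions, not actual policy settings.

```python
from datetime import date, timedelta

# Illustrative retention period; real policies are configured per
# organization and per content type.
RETENTION = timedelta(days=365)

def should_purge(interaction: dict, today: date) -> bool:
    """True when the record is past retention and not under legal hold."""
    expired = today - interaction["created"] > RETENTION
    return expired and not interaction["on_hold"]
```

This is the same retain-or-delete logic the document describes, applied uniformly to Copilot interaction content alongside other Microsoft 365 data.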
For organizations that want to reimagine work with AI but still meet regulatory and internal policy requirements, this integrated approach can be more manageable than trying to secure multiple disconnected AI tools.
How do we secure and govern Copilot usage after deployment?
Deploying Copilot is only the beginning. To use AI responsibly at scale, you need ongoing visibility, protection, and governance. The white paper outlines a three-step approach using Microsoft Purview and related capabilities:
1. Discover AI-related data risks with Microsoft Purview AI Hub
A common challenge is lack of visibility into how sensitive data flows through AI tools: around 30% of decision-makers say they don't know where their sensitive, business-critical data resides or what it contains. Microsoft Purview AI Hub helps you:
- See how AI applications—including Copilot and third-party tools—are being used in your organization.
- View total AI interactions and assess the risk level associated with those interactions.
- Identify sensitive data being shared with Copilot.
- Detect unlabeled files and SharePoint sites referenced in prompts and responses.
- Use audit logging and SharePoint permission checks to spot overshared content.
These insights help you refine policies and close gaps as AI usage grows.
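The kind of roll-up AI Hub provides can be approximated from raw interaction logs: total interactions, interactions per app, and how many touched sensitive data. The log records below are hypothetical; real telemetry comes from audit logging.

```python
from collections import Counter

def summarize(logs: list[dict]) -> dict:
    """Roll up AI interaction logs into a small usage/risk summary."""
    per_app = Counter(e["app"] for e in logs)
    sensitive = sum(1 for e in logs if e["sensitive"])
    return {"total": len(logs), "per_app": dict(per_app), "sensitive": sensitive}

# Hypothetical interaction log entries.
logs = [
    {"app": "Copilot", "sensitive": True},
    {"app": "Copilot", "sensitive": False},
    {"app": "OtherAI", "sensitive": True},
]
```

Even this crude summary answers the first governance questions: which AI apps are in use, how often, and how frequently sensitive data is involved.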
2. Protect sensitive data throughout its AI journey
You want to avoid scenarios where confidential project details or intellectual property are exposed through AI tools. Microsoft Purview provides several layers of protection:
- Permission-aware responses: Copilot responses are based on the user’s existing permissions, so only authorized users see sensitive content.
- Information Protection controls: Encryption, watermarking, autolabeling, and label inheritance can be applied to prompts and responses to prevent oversharing.
- Autolabeling: Sensitivity labels can be automatically applied to files and emails based on detected sensitive data, making it easier to scale protection.
- Broad label coverage: Sensitivity labels can be applied across Microsoft 365 apps and services, SQL Server, Azure Data Lake Storage, and Microsoft Fabric, and Copilot inherits these labels.
For third-party generative AI apps, Microsoft Purview Data Loss Prevention (DLP) adds another safeguard:
- DLP policies can restrict users from pasting sensitive data into AI prompts.
- Adaptive protection can block high-risk users from sharing sensitive data with AI tools while allowing lower-risk users more flexibility.
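Adaptive protection's core idea is that enforcement scales with assessed user risk rather than being one-size-fits-all. The sketch below maps risk tiers to actions; the tier names and actions are illustrative, not Purview's actual policy engine.

```python
def dlp_action(risk_level: str) -> str:
    """Map an assessed user risk level to a DLP enforcement action."""
    return {
        "high": "block",    # high-risk users are blocked outright
        "medium": "warn",   # medium-risk users see a policy tip first
        "low": "audit",     # low-risk users proceed, with logging
    }[risk_level]
```

The design choice this models: stricter controls where the risk is concentrated, so lower-risk users keep flexibility and productivity.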
3. Govern Copilot usage and support compliance with AI regulations
As AI regulations and internal policies evolve, governance becomes critical. Microsoft Purview offers integrated compliance tools that work with Copilot:
- Audit: Capture when Copilot interactions occur for accountability and review.
- Data lifecycle management: Retain or delete Copilot interaction content according to your retention policies.
- Communication compliance: Detect non-compliant usage in Copilot prompts and responses, such as attempts to generate unethical or high-risk content.
- eDiscovery: Search and review Copilot interactions as part of legal or regulatory investigations.
Combined, these capabilities help you:
- Align Copilot usage with regulatory requirements and industry standards.
- Enforce company policies around acceptable AI use.
- Reduce the risk of regulatory liability tied to non-compliant AI behavior.
By continuously discovering risks, protecting data, and governing usage, you can use Copilot to reshape how your organization works while maintaining a clear, defensible security and compliance posture.