AI Connectors — Data Handling & Security
How Section reads usage data from your enterprise AI tools without touching prompts, responses, or conversation content.
Connector Data Access
How do connectors access data from our AI tools?
We connect through each provider's official enterprise API. No proxies, no scraping, no interception. Each connector requires your admin to authorize access via a standard credential or OAuth flow — one-time setup per provider.
Currently supported:
- ChatGPT Enterprise / OpenAI — OpenAI Platform APIs and ChatGPT Compliance APIs. Admin generates the appropriate API key and enables the required enterprise scopes.
- Microsoft Copilot — Microsoft Graph API via Azure AD. Admin registers an Azure AD application and grants consent.
- Claude Enterprise — Anthropic Admin, Compliance, and Analytics APIs.
Each provider exposes different data and permission models. Section only collects what the customer authorizes and what the provider makes available.
What data do connectors collect?
Connectors are built to collect metadata about AI adoption, usage, models, and proficiency signals — not private conversation content. Depending on the provider, this may include:
- User information, such as email, name, role, account status, or provider user ID
- Activity timestamps
- Message, chat, or interaction counts
- Model information, when the provider makes it available
- Workspace, project, app, license, or tool identifiers
- Usage summaries and adoption metrics
- Admin, audit, or compliance event metadata
- File or attachment counts when available
This lets us help customers answer questions like: who is using enterprise AI tools, which teams are adopting them, which tools and models are in use, whether licenses are being utilized, and what the data says about AI proficiency across the organization.
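The metadata fields above can be pictured as a single connector record. The sketch below is purely illustrative; field names and types are assumptions, and the exact fields vary by provider. The point is structural: there is no field anywhere in the record that could hold prompt or response text.

```python
# Illustrative shape of a connector usage record. Field names are
# hypothetical; each provider exposes a different subset of metadata.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class UsageRecord:
    provider: str                  # e.g. "openai", "copilot", "claude"
    provider_user_id: str          # stable ID issued by the provider
    user_email: str
    event_type: str                # e.g. "message", "session", "audit_event"
    occurred_at: datetime          # activity timestamp
    message_count: int = 0         # counts, never content
    model: Optional[str] = None    # only when the provider exposes it
    workspace_id: Optional[str] = None
    # Note: no prompt text, response text, or title fields exist here.
```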
Do connectors pull prompt or conversation content?
Our current connector implementation is designed not to store prompt text, response text, conversation titles, uploaded file contents, or custom GPT / assistant instructions.
What this means by provider:
- ChatGPT / OpenAI — We store message and audit metadata, not message text or conversation titles.
- Copilot — We store interaction metadata and usage/adoption data. We do not store attachment content, attachment names, file URLs, link URLs, link labels, context labels, or mention text.
- Claude — We store user, usage, cost, project, API key, analytics, and chat metadata. We do not store chat names or message bodies.
What is Section's approach to prompt data when it is accessed?
Our design principle is "Zero Human Access." If prompt data is accessed for classification purposes in an explicitly approved workflow, it is evaluated against our Proficiency Index engine and discarded from memory. No permanent storage, no human review, no database writes of raw prompt content. What we retain are aggregated classification outputs: task types, sophistication scores, and usage patterns.
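The "Zero Human Access" flow described above can be sketched in a few lines. This is a simplified illustration, not the actual Proficiency Index engine: the function name and placeholder classification logic are assumptions. What it demonstrates is the data-flow property: raw prompt text exists only transiently inside the function, and only aggregated classification outputs leave it.

```python
# Minimal sketch of the "Zero Human Access" pattern: prompt text is held
# only in memory during classification, and only aggregate outputs are
# returned. The classification rules here are placeholders, not the real
# Proficiency Index logic.

def classify_prompt(prompt_text: str) -> dict:
    """Return aggregated classification outputs only; never the raw text."""
    result = {
        "task_type": "analysis" if "?" in prompt_text else "generation",  # placeholder rule
        "sophistication": min(100, len(prompt_text.split()) // 2),        # placeholder score
    }
    # prompt_text goes out of scope here: no database write, no log line,
    # no human-readable copy of the raw content is retained.
    return result

record = classify_prompt("Summarize Q3 revenue drivers as a bullet list?")
assert "prompt_text" not in record   # the output carries no raw content
```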
We are also exploring options for running the Proficiency Index engine within a customer's own secured environment, so that classification happens before any data reaches Section.
Can you control what data is shared?
Yes. API scopes are transparent and visible in your admin consoles. We configure access on a client-by-client basis.
Customers control the credentials and permissions granted to Section. Access can be revoked from the provider's admin console at any time. If a permission is removed or a credential is revoked, the affected connector stops syncing and surfaces a connection issue.
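The revocation behavior above can be sketched as a simple response-handling rule. This assumes the common provider behavior of returning HTTP 401/403 once a credential is revoked or a permission removed; the class and function names are illustrative.

```python
# Sketch of how a connector might react when access is revoked. Assumes the
# provider returns HTTP 401/403 after revocation (typical, but provider-
# specific behavior may vary).

class ConnectorDisconnected(Exception):
    """Raised to stop syncing and surface a connection issue."""

def handle_sync_response(status_code: int) -> str:
    if status_code in (401, 403):
        # Do not retry with a dead credential; flag the connector instead.
        raise ConnectorDisconnected("credential revoked or permission removed")
    if status_code == 200:
        return "synced"
    return "retry-later"   # transient errors get retried on the next schedule
```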
How does Section avoid storing sensitive content?
We treat employee-written content as sensitive by default. The connectors are implemented with explicit redaction rules so that fields likely to contain private content are excluded or reduced to safe metadata.
Examples:
- Conversation titles are not stored
- Message text and AI response text are not stored
- Uploaded file contents are not stored
- Custom GPT or assistant instructions are not stored
- Copilot attachment content, attachment names, URLs, link labels, context labels, and mention text are not stored
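The redaction rules above amount to an allow-list: only known-safe metadata fields survive, and anything not explicitly allow-listed (message text, titles, file contents) is dropped by construction. The sketch below illustrates the pattern with hypothetical field names; the real rules are provider-specific.

```python
# Allow-list redaction sketch: a field survives only if it is explicitly
# known to be safe metadata. Field names are illustrative.

SAFE_FIELDS = {"provider_user_id", "timestamp", "message_count",
               "model", "workspace_id"}

def redact(raw_event: dict) -> dict:
    """Keep only allow-listed metadata fields from a provider event."""
    return {k: v for k, v in raw_event.items() if k in SAFE_FIELDS}

event = {
    "provider_user_id": "u_123",
    "timestamp": "2025-01-01T12:00:00Z",
    "message_count": 4,
    "conversation_title": "Board deck draft",  # dropped
    "message_text": "Please rewrite ...",      # dropped
}
assert redact(event) == {
    "provider_user_id": "u_123",
    "timestamp": "2025-01-01T12:00:00Z",
    "message_count": 4,
}
```

An allow-list fails safe: a new content-bearing field added by a provider is excluded by default, rather than stored until someone notices.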
How do we know the connector data is complete?
For high-volume activity data, Section validates connector output by comparing raw provider API data with the records stored in Section. This helps confirm that syncs are not missing expected records.
Recent validation confirmed complete matches for key activity data from the OpenAI Compliance API, Microsoft Copilot, and Claude connectors.
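The comparison described above reduces to a set difference over stable record IDs: for the same time window, any ID the provider reports that is absent from Section's store indicates a missed record. A minimal sketch, with an illustrative function name:

```python
# Completeness check sketch: compare record IDs reported by the provider
# API with the IDs stored by the sync, over the same time window.

def missing_records(provider_ids: set, stored_ids: set) -> set:
    """IDs present at the provider but absent from storage."""
    return provider_ids - stored_ids

assert missing_records({"a", "b", "c"}, {"a", "b", "c"}) == set()  # complete match
assert missing_records({"a", "b", "c"}, {"a", "c"}) == {"b"}       # "b" was missed
```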
How often does connector data sync?
Connector data syncs on a scheduled basis, typically hourly. Some provider data is available almost immediately; other data may be delayed on the provider's side.
For example, Microsoft usage reports can lag behind real-time activity, and OpenAI audit logs may appear some time after the events they describe. Section accounts for this by safely re-checking recent time windows. Records use stable provider IDs, so re-checking recent data does not create duplicates.
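The re-check pattern above combines an overlapping lookback window with an idempotent upsert keyed by the stable provider ID. The sketch below is illustrative: the 24-hour lookback and the in-memory store are assumptions chosen for clarity, not Section's actual configuration.

```python
# Sketch of overlapping-window re-sync with idempotent upsert. Each sync
# re-reads a recent window; records are keyed by their stable provider ID,
# so re-reading the same events never creates duplicates.
from datetime import datetime, timedelta

LOOKBACK = timedelta(hours=24)   # illustrative overlap for delayed provider data

def sync_window(now: datetime) -> tuple:
    """The window each scheduled sync re-reads."""
    return (now - LOOKBACK, now)

def upsert(store: dict, events: list) -> None:
    for e in events:
        store[e["id"]] = e       # same ID overwrites, so re-reads are idempotent

store = {}
upsert(store, [{"id": "evt_1", "n": 1}])
# Next sync re-reads the overlap and sees evt_1 again, plus a late arrival:
upsert(store, [{"id": "evt_1", "n": 1}, {"id": "evt_2", "n": 2}])
assert len(store) == 2   # no duplicate despite evt_1 being read twice
```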
How far back does the first sync go?
For high-volume activity data, the first sync starts with a limited recent window. This avoids collecting large volumes of historical interaction data all at once.
For lower-volume inventory data — such as users, projects, API keys, license data, or daily usage summaries — Section can sync the provider's available list or report history.
Historical backfills can be discussed separately depending on the customer's goals, provider capabilities, and provider retention limits.
Compliance and Security Posture
Is Section SOC 2 compliant?
Yes. Section achieved SOC 2 Type II compliance in March 2026, attested by an independent auditor. We are in the process of launching a public Trust Center where we can provide access to our security policies and certifications.
Is any data used for AI model training?
No. Our AI models are provided by Anthropic and OpenAI. We do not fine-tune or train on customer data.
Who controls the credentials?
The customer does. Credentials are created and managed in the customer's provider admin console. Section uses those credentials only for the authorized connector sync. Customers can rotate or revoke credentials at any time.
Are all providers supported in the same way?
No. Each provider exposes different APIs, fields, permissions, and retention windows. Some providers expose detailed audit logs, while others expose usage summaries or delayed reports.
Section does not invent data that a provider does not expose. If a field is unavailable from a provider, we will make that limitation clear.
What is the short version we can share with IT or Security teams?
Section's enterprise LLM connectors help customers understand AI adoption, usage, license utilization, model usage, and AI proficiency across tools like ChatGPT Enterprise, Claude Enterprise, and Microsoft Copilot. We collect metadata, user attribution, and usage signals, plus model fields and audit events where providers expose them. We do not store prompts, AI responses, conversation titles, uploaded file contents, or custom GPT / assistant instructions. Customers control the credentials and permissions, and access can be revoked from the provider admin console at any time.
The Browser Extension: Data Handling and Security
What data does the extension collect?
The extension observes how users interact with AI tools (ChatGPT, Claude, Gemini, Copilot, Perplexity). It classifies prompts across structural dimensions: task type (12 categories), cognitive depth, interaction mode, and domain. It also tracks coaching interactions (suggestions offered, accepted, dismissed), session metadata (platform, duration, prompt count), and a sophistication score from 0 to 100 based on detection of prompting techniques — including role assignment, examples, constraints, multi-step reasoning, structured output, context setting, chain-of-thought, and evaluation criteria.
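Technique-based scoring of the kind described above can be sketched as pattern detection mapped onto a 0 to 100 scale. The real engine's detection rules and weights are not public; the patterns, equal weighting, and function name below are all assumptions made for illustration.

```python
# Illustrative sketch of technique-detection scoring: count how many known
# prompting techniques a prompt exhibits and map the count to 0-100.
# Patterns and equal weights are assumptions, not the production rules.
import re

TECHNIQUES = {
    "role_assignment":   r"\byou are\b|\bact as\b",
    "examples":          r"\bfor example\b|\bsuch as\b",
    "constraints":       r"\bmust\b|\blimit\b|\bonly\b",
    "structured_output": r"\bbullet\b|\btable\b|\bjson\b",
}

def sophistication_score(prompt: str) -> int:
    """Fraction of detected techniques, scaled to 0-100."""
    hits = sum(bool(re.search(p, prompt, re.I)) for p in TECHNIQUES.values())
    return round(100 * hits / len(TECHNIQUES))
```

Because the score is derived entirely from which patterns matched, only the integer result needs to leave the browser, never the prompt itself.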
Does the extension capture or transmit prompt text?
No. The classification engine runs entirely in the user's browser. It analyzes prompt structure to produce scores and categorical labels, then sends only those scores and metadata upstream. Prompt text, AI responses, and conversation content are never captured, transmitted, or stored by Section.
What data does the extension NOT collect?
The extension does not collect actual prompt text, AI responses, conversation content, specific projects or topics, or output quality. We know the domain (e.g., "business" or "code") but not the specific subject matter.
