OpenAI Agents SDK

AsyncOpenAI-compatible client for seamless integration with the OpenAI Agents SDK framework.

Installation

To use the OpenAI Agents SDK integration, install Sequrity with the agents dependency group:

# Using pip
pip install sequrity[agents]

# Using uv (for development)
uv sync --group agents

This installs the required openai-agents package along with Sequrity.

API Reference

OpenAI Agents SDK integration for Sequrity Control.

This module provides an AsyncOpenAI-compatible client that routes requests through Sequrity's secure orchestrator with automatic session management and security features.

Example
from sequrity.control.integrations.openai_agents_sdk import create_sequrity_openai_agents_sdk_client
from sequrity.control import FeaturesHeader, SecurityPolicyHeader
from agents import Agent, Runner, RunConfig

# Create client with Sequrity security features
provider = create_sequrity_openai_agents_sdk_client(
    sequrity_api_key="your-sequrity-key",
    features=FeaturesHeader.dual_llm(),
    security_policy=SecurityPolicyHeader.dual_llm()
)

# Use with OpenAI Agents SDK
agent = Agent(name="Assistant", instructions="You are helpful.")
config = RunConfig(model="gpt-5-mini", model_provider=provider)
result = await Runner.run(agent, input="Hello!", run_config=config)

Classes:

  • SequrityAsyncOpenAI

    AsyncOpenAI client configured to route requests through Sequrity's secure orchestrator.

Functions:

  • create_sequrity_openai_agents_sdk_client

    Create a ModelProvider for use with the OpenAI Agents SDK and Sequrity.

SequrityAsyncOpenAI

SequrityAsyncOpenAI(
    sequrity_api_key: str,
    features: FeaturesHeader | None = None,
    security_policy: SecurityPolicyHeader | None = None,
    fine_grained_config: FineGrainedConfigHeader | None = None,
    service_provider: LlmServiceProvider | LlmServiceProviderStr = OPENROUTER,
    llm_api_key: str | None = None,
    base_url: str | None = None,
    endpoint_type: EndpointType | str = CHAT,
    timeout: float = 60.0,
    **kwargs: Any,
)

AsyncOpenAI client configured to route requests through Sequrity's secure orchestrator.

This client is a drop-in replacement for AsyncOpenAI that automatically:

  • Adds Sequrity security headers (features, policies, configuration)
  • Tracks session IDs across multiple requests
  • Routes requests to Sequrity's API endpoint

The client maintains session state across multiple chat completion requests, which is essential for Sequrity's dual-LLM architecture to maintain context.

Parameters:

  • sequrity_api_key

    (str) –

    Sequrity API key (used as bearer token)

  • features

    (FeaturesHeader | None, default: None ) –

    Security features configuration (LLM mode, taggers, constraints)

  • security_policy

    (SecurityPolicyHeader | None, default: None ) –

    Security policy configuration (SQRT/Cedar policies)

  • fine_grained_config

    (FineGrainedConfigHeader | None, default: None ) –

    Fine-grained session configuration

  • service_provider

    (LlmServiceProvider | LlmServiceProviderStr, default: OPENROUTER ) –

    LLM service provider (LlmServiceProvider enum or string literal)

  • llm_api_key

    (str | None, default: None ) –

    Optional API key for the LLM provider

  • base_url

    (str | None, default: None ) –

    Sequrity base URL (default: https://api.sequrity.ai)

  • endpoint_type

    (EndpointType | str, default: CHAT ) –

    Endpoint type (chat, code, lang-graph). Defaults to chat.

  • timeout

    (float, default: 60.0 ) –

    Request timeout in seconds (default: 60.0)

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to AsyncOpenAI

Example
features = FeaturesHeader.dual_llm()

client = SequrityAsyncOpenAI(
    sequrity_api_key="your-key",
    features=features,
)

# Session ID is automatically tracked
response1 = await client.chat.completions.create(...)
response2 = await client.chat.completions.create(...)  # Uses same session

Methods:

  • reset_session –

    Reset the session ID, starting a new conversation context.

  • set_session_id –

    Manually set the session ID.

Attributes:

  • session_id (str | None) –

    Get the current session ID, if any.

session_id property

session_id: str | None

Get the current session ID, if any.

reset_session

reset_session() -> None

Reset the session ID, starting a new conversation context.

Call this method when you want to start a fresh conversation without carrying over context from previous requests.

Example
client = SequrityAsyncOpenAI(...)
await client.chat.completions.create(...)
client.reset_session()  # Start fresh conversation
await client.chat.completions.create(...)

set_session_id

set_session_id(session_id: str | None) -> None

Manually set the session ID.

Use this to resume a previous conversation or share sessions across clients.

Parameters:

  • session_id

    (str | None) –

    Session ID to use, or None to clear

Example
client = SequrityAsyncOpenAI(...)
client.set_session_id("existing-session-id")
await client.chat.completions.create(...)  # Uses existing session

create_sequrity_openai_agents_sdk_client

create_sequrity_openai_agents_sdk_client(
    sequrity_api_key: str,
    features: FeaturesHeader | None = None,
    security_policy: SecurityPolicyHeader | None = None,
    fine_grained_config: FineGrainedConfigHeader | None = None,
    service_provider: LlmServiceProvider | LlmServiceProviderStr = OPENROUTER,
    llm_api_key: str | None = None,
    base_url: str | None = None,
    endpoint_type: EndpointType | str = CHAT,
    timeout: float = 60.0,
    **kwargs: Any,
) -> SequrityModelProvider

Create a ModelProvider for use with OpenAI Agents SDK and Sequrity.

This factory function creates a SequrityModelProvider that wraps a SequrityAsyncOpenAI client, providing the ModelProvider interface required by the OpenAI Agents SDK's RunConfig.
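The wrap-and-resolve shape of this factory can be sketched with self-contained stand-ins. Everything below is illustrative: the stub types, the dict return value, and the method bodies are assumptions, not the actual SequrityModelProvider internals; only the factory's documented signature above is authoritative.

```python
from dataclasses import dataclass


@dataclass
class StubClient:
    """Stand-in for the wrapped SequrityAsyncOpenAI client."""
    api_key: str


class StubModelProvider:
    """Stand-in for SequrityModelProvider: holds one client and
    resolves model names against it on demand."""

    def __init__(self, client: StubClient) -> None:
        self.client = client

    def get_model(self, model_name: str) -> dict:
        # The real provider returns a Model object the Agents SDK can
        # invoke; here we just bind the name to the wrapped client.
        return {"model": model_name, "client": self.client}


def create_stub_provider(api_key: str) -> StubModelProvider:
    # Factory shape: build the client once, hand back the provider
    # that RunConfig(model_provider=...) would consume.
    return StubModelProvider(StubClient(api_key))


provider = create_stub_provider("your-key")
bound = provider.get_model("gpt-5-mini")
```

Building the client inside the factory means every model the Agents SDK resolves through this provider shares the same session-tracking client.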

Parameters:

  • sequrity_api_key

    (str) –

    Sequrity API key (required)

  • features

    (FeaturesHeader | None, default: None ) –

    Security features configuration (LLM mode, taggers, etc.)

  • security_policy

    (SecurityPolicyHeader | None, default: None ) –

    Security policy configuration (SQRT/Cedar policies)

  • fine_grained_config

    (FineGrainedConfigHeader | None, default: None ) –

    Fine-grained session configuration

  • service_provider

    (LlmServiceProvider | LlmServiceProviderStr, default: OPENROUTER ) –

    LLM service provider (LlmServiceProvider enum or string literal)

  • llm_api_key

    (str | None, default: None ) –

    Optional API key for the LLM provider

  • base_url

    (str | None, default: None ) –

    Sequrity base URL (default: https://api.sequrity.ai)

  • endpoint_type

    (EndpointType | str, default: CHAT ) –

    Endpoint type (chat, code, lang-graph). Defaults to chat.

  • timeout

    (float, default: 60.0 ) –

    Request timeout in seconds (default: 60.0)

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to AsyncOpenAI

Returns:

  • SequrityModelProvider

    Configured SequrityModelProvider instance

Example
from sequrity.control.integrations.openai_agents_sdk import create_sequrity_openai_agents_sdk_client
from sequrity.control import FeaturesHeader
from agents import Agent, Runner, RunConfig

# Create provider with dual-LLM
provider = create_sequrity_openai_agents_sdk_client(
    sequrity_api_key="your-key",
    features=FeaturesHeader.dual_llm()
)

# Use with OpenAI Agents SDK
agent = Agent(name="Assistant", instructions="You are helpful.")
config = RunConfig(model="gpt-5-mini", model_provider=provider)
result = await Runner.run(agent, input="Hello", run_config=config)