OpenAI Agents SDK
AsyncOpenAI-compatible client for seamless integration with the OpenAI Agents SDK framework.
Installation
To use the OpenAI Agents SDK integration, install Sequrity with the agents dependency group:
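The install command itself is not shown here; assuming the package is published as `sequrity` with an `agents` extra (an assumption based on the surrounding text), it would look like:

```shell
# Install Sequrity together with the "agents" dependency group
pip install "sequrity[agents]"
```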
This installs the required openai-agents package along with Sequrity.
API Reference
OpenAI Agents SDK integration for Sequrity Control.
This module provides an AsyncOpenAI-compatible client that routes requests through Sequrity's secure orchestrator with automatic session management and security features.
Example

```python
from sequrity.control.integrations.openai_agents_sdk import create_sequrity_openai_agents_sdk_client
from sequrity.control import FeaturesHeader, SecurityPolicyHeader
from agents import Agent, Runner, RunConfig

# Create a model provider with Sequrity security features
provider = create_sequrity_openai_agents_sdk_client(
    sequrity_api_key="your-sequrity-key",
    features=FeaturesHeader.dual_llm(),
    security_policy=SecurityPolicyHeader.dual_llm(),
)

# Use with the OpenAI Agents SDK
agent = Agent(name="Assistant", instructions="You are helpful.")
config = RunConfig(model="gpt-5-mini", model_provider=provider)
result = await Runner.run(agent, input="Hello!", run_config=config)
```
Classes:

- SequrityAsyncOpenAI – AsyncOpenAI client configured to route requests through Sequrity's secure orchestrator.

Functions:

- create_sequrity_openai_agents_sdk_client – Create a ModelProvider for use with the OpenAI Agents SDK and Sequrity.
SequrityAsyncOpenAI
```python
SequrityAsyncOpenAI(
    sequrity_api_key: str,
    features: FeaturesHeader | None = None,
    security_policy: SecurityPolicyHeader | None = None,
    fine_grained_config: FineGrainedConfigHeader | None = None,
    service_provider: LlmServiceProvider | LlmServiceProviderStr = OPENROUTER,
    llm_api_key: str | None = None,
    base_url: str | None = None,
    endpoint_type: EndpointType | str = CHAT,
    timeout: float = 60.0,
    **kwargs: Any,
)
```
AsyncOpenAI client configured to route requests through Sequrity's secure orchestrator.
This client is a drop-in replacement for AsyncOpenAI that automatically:

- Adds Sequrity security headers (features, policies, configuration)
- Tracks session IDs across multiple requests
- Routes requests to Sequrity's API endpoint
The client maintains session state across multiple chat completion requests, which is essential for Sequrity's dual-LLM architecture to maintain context.
Parameters:

- sequrity_api_key (str) – Sequrity API key (used as the bearer token)
- features (FeaturesHeader | None, default: None) – Security features configuration (LLM mode, taggers, constraints)
- security_policy (SecurityPolicyHeader | None, default: None) – Security policy configuration (SQRT/Cedar policies)
- fine_grained_config (FineGrainedConfigHeader | None, default: None) – Fine-grained session configuration
- service_provider (LlmServiceProvider | LlmServiceProviderStr, default: OPENROUTER) – LLM service provider (LlmServiceProvider enum or string literal)
- llm_api_key (str | None, default: None) – Optional API key for the LLM provider
- base_url (str | None, default: None) – Sequrity base URL (default: https://api.sequrity.ai)
- endpoint_type (EndpointType | str, default: CHAT) – Endpoint type (chat, code, lang-graph); defaults to chat
- timeout (float, default: 60.0) – Request timeout in seconds
- **kwargs (Any, default: {}) – Additional arguments passed to AsyncOpenAI
Methods:

- reset_session – Reset the session ID, starting a new conversation context.
- set_session_id – Manually set the session ID.

Attributes:

- session_id (str | None) – Get the current session ID, if any.
reset_session
Reset the session ID, starting a new conversation context.
Call this method when you want to start a fresh conversation without carrying over context from previous requests.
set_session_id
```python
set_session_id(session_id: str | None) -> None
```
Manually set the session ID.
Use this to resume a previous conversation or share sessions across clients.
Parameters:

- session_id (str | None) – Session ID to use, or None to clear
create_sequrity_openai_agents_sdk_client
```python
create_sequrity_openai_agents_sdk_client(
    sequrity_api_key: str,
    features: FeaturesHeader | None = None,
    security_policy: SecurityPolicyHeader | None = None,
    fine_grained_config: FineGrainedConfigHeader | None = None,
    service_provider: LlmServiceProvider | LlmServiceProviderStr = OPENROUTER,
    llm_api_key: str | None = None,
    base_url: str | None = None,
    endpoint_type: EndpointType | str = CHAT,
    timeout: float = 60.0,
    **kwargs: Any,
) -> SequrityModelProvider
```
Create a ModelProvider for use with the OpenAI Agents SDK and Sequrity.
This factory function creates a SequrityModelProvider that wraps a SequrityAsyncOpenAI client, providing the ModelProvider interface required by the OpenAI Agents SDK's RunConfig.
Parameters:

- sequrity_api_key (str) – Sequrity API key (required)
- features (FeaturesHeader | None, default: None) – Security features configuration (LLM mode, taggers, etc.)
- security_policy (SecurityPolicyHeader | None, default: None) – Security policy configuration (SQRT/Cedar policies)
- fine_grained_config (FineGrainedConfigHeader | None, default: None) – Fine-grained session configuration
- service_provider (LlmServiceProvider | LlmServiceProviderStr, default: OPENROUTER) – LLM service provider (LlmServiceProvider enum or string literal)
- llm_api_key (str | None, default: None) – Optional API key for the LLM provider
- base_url (str | None, default: None) – Sequrity base URL (default: https://api.sequrity.ai)
- endpoint_type (EndpointType | str, default: CHAT) – Endpoint type (chat, code, lang-graph); defaults to chat
- timeout (float, default: 60.0) – Request timeout in seconds
- **kwargs (Any, default: {}) – Additional arguments passed to AsyncOpenAI
Returns:

- SequrityModelProvider – Configured SequrityModelProvider instance
Example

```python
from sequrity.control.integrations.openai_agents_sdk import create_sequrity_openai_agents_sdk_client
from sequrity.control import FeaturesHeader
from agents import Agent, Runner, RunConfig

# Create a provider with dual-LLM security features
provider = create_sequrity_openai_agents_sdk_client(
    sequrity_api_key="your-key",
    features=FeaturesHeader.dual_llm(),
)

# Use with the OpenAI Agents SDK
agent = Agent(name="Assistant", instructions="You are helpful.")
config = RunConfig(model="gpt-5-mini", model_provider=provider)
result = await Runner.run(agent, input="Hello", run_config=config)
```