
LangGraph

A ChatOpenAI-compatible client for integrating Sequrity with the LangChain and LangGraph frameworks.

Installation

To use the LangGraph integration, install Sequrity with the langgraph dependency group:

# Using pip
pip install "sequrity[langgraph]"

# Using uv (for development)
uv sync --group langgraph

This installs the required langgraph package along with Sequrity.
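A quick way to verify the install is to import the integration module and the langgraph package (module paths are taken from the API reference below):

# Sanity check: both imports should succeed after installation
from sequrity.control.integrations.langgraph import create_sequrity_langgraph_client
import langgraph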

API Reference

LangChain/LangGraph integration for Sequrity Control.

This module provides a ChatOpenAI-compatible client that routes requests through Sequrity's secure orchestrator with automatic session management and security features.

Example
from sequrity.control.integrations.langgraph import create_sequrity_langgraph_client
from sequrity.control import FeaturesHeader, SecurityPolicyHeader

# Create client with Sequrity security features
llm = create_sequrity_langgraph_client(
    sequrity_api_key="your-sequrity-key",
    features=FeaturesHeader.dual_llm(),
    security_policy=SecurityPolicyHeader.dual_llm()
)

# Use with LangChain
response = llm.invoke([{"role": "user", "content": "Hello!"}])

# Use with LangGraph
from langgraph.graph import StateGraph
llm_with_tools = llm.bind_tools([...])
# ... build your graph

Classes:

  • LangGraphChatSequrityAI

    ChatOpenAI client configured to route requests through Sequrity's secure orchestrator.

Functions:

  • create_sequrity_langgraph_client

    Create a ChatOpenAI-compatible client with Sequrity security features for LangGraph.

LangGraphChatSequrityAI

LangGraphChatSequrityAI(
    sequrity_api_key: str,
    features: FeaturesHeader | None = None,
    security_policy: SecurityPolicyHeader | None = None,
    fine_grained_config: FineGrainedConfigHeader | None = None,
    service_provider: LlmServiceProvider | LlmServiceProviderStr = OPENROUTER,
    llm_api_key: str | None = None,
    base_url: str | None = None,
    endpoint_type: EndpointType | str = CHAT,
    model: str = "gpt-4",
    **kwargs: Any,
)

ChatOpenAI client configured to route requests through Sequrity's secure orchestrator.

This client is a drop-in replacement for ChatOpenAI that automatically:

  • Adds Sequrity security headers (features, policies, configuration)
  • Tracks session IDs across multiple requests
  • Routes to Sequrity's API endpoint

The client maintains session state across multiple chat completion requests, which is essential for Sequrity's dual-LLM architecture to maintain context.

Parameters:

  • sequrity_api_key

    (str) –

    Sequrity API key (used as bearer token)

  • features

    (FeaturesHeader | None, default: None ) –

    Security features configuration (LLM mode, taggers, constraints)

  • security_policy

    (SecurityPolicyHeader | None, default: None ) –

    Security policy configuration (SQRT/Cedar policies)

  • fine_grained_config

    (FineGrainedConfigHeader | None, default: None ) –

    Fine-grained session configuration

  • service_provider

    (LlmServiceProvider | LlmServiceProviderStr, default: OPENROUTER ) –

    LLM service provider (LlmServiceProvider enum or string literal)

  • llm_api_key

    (str | None, default: None ) –

    Optional API key for the LLM provider

  • base_url

    (str | None, default: None ) –

    Sequrity base URL (default: https://api.sequrity.ai)

  • endpoint_type

    (EndpointType | str, default: CHAT ) –

    Endpoint type (chat, code, lang-graph). Defaults to chat.

  • model

    (str, default: 'gpt-4' ) –

    Model name to use (default: gpt-4)

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to ChatOpenAI

Example
features = FeaturesHeader.dual_llm()

llm = LangGraphChatSequrityAI(
    sequrity_api_key="your-key",
    features=features,
)

# Session ID is automatically tracked
response1 = llm.invoke([{"role": "user", "content": "Hello"}])
response2 = llm.invoke([{"role": "user", "content": "Continue"}])  # Uses same session
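The tracked session can also be inspected through the session_id property documented below; a minimal sketch (the exact point at which the ID is assigned is an assumption):

llm = LangGraphChatSequrityAI(sequrity_api_key="your-key")
print(llm.session_id)  # None before the first request (assumed)

llm.invoke([{"role": "user", "content": "Hello"}])
print(llm.session_id)  # session ID captured from the first completion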

Methods:

  • reset_session –

    Reset the session ID, starting a new conversation context.

  • set_session_id –

    Manually set the session ID.

Attributes:

  • session_id (str | None) –

    Get the current session ID, if any.

session_id property

session_id: str | None

Get the current session ID, if any.

reset_session

reset_session() -> None

Reset the session ID, starting a new conversation context.

Call this method when you want to start a fresh conversation without carrying over context from previous requests.

Example
llm = LangGraphChatSequrityAI(...)
llm.invoke([{"role": "user", "content": "Hello"}])
llm.reset_session()  # Start fresh conversation
llm.invoke([{"role": "user", "content": "Hello"}])

set_session_id

set_session_id(session_id: str | None) -> None

Manually set the session ID.

Use this to resume a previous conversation or share sessions across clients.

Parameters:

  • session_id

    (str | None) –

    Session ID to use, or None to clear

Example
llm = LangGraphChatSequrityAI(...)
llm.set_session_id("existing-session-id")
llm.invoke([{"role": "user", "content": "Continue"}])  # Uses existing session
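The cross-client case mentioned above can look like the following sketch, assuming both clients use the same Sequrity API key:

# Hand one conversation off to a second client
llm_a = LangGraphChatSequrityAI(sequrity_api_key="your-key")
llm_a.invoke([{"role": "user", "content": "Hello"}])

llm_b = LangGraphChatSequrityAI(sequrity_api_key="your-key")
llm_b.set_session_id(llm_a.session_id)  # llm_b continues llm_a's conversation
llm_b.invoke([{"role": "user", "content": "Continue"}])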

create_sequrity_langgraph_client

create_sequrity_langgraph_client(
    sequrity_api_key: str,
    features: FeaturesHeader | None = None,
    security_policy: SecurityPolicyHeader | None = None,
    fine_grained_config: FineGrainedConfigHeader | None = None,
    service_provider: LlmServiceProvider | LlmServiceProviderStr = OPENROUTER,
    llm_api_key: str | None = None,
    base_url: str | None = None,
    endpoint_type: EndpointType | str = CHAT,
    model: str = "gpt-4",
    **kwargs: Any,
) -> LangGraphChatSequrityAI

Create a ChatOpenAI-compatible client with Sequrity security features for LangGraph.

This is a convenience factory function that creates a LangGraphChatSequrityAI instance configured to route requests through Sequrity's secure orchestrator.

Parameters:

  • sequrity_api_key

    (str) –

    Sequrity API key (required)

  • features

    (FeaturesHeader | None, default: None ) –

    Security features configuration (LLM mode, taggers, etc.)

  • security_policy

    (SecurityPolicyHeader | None, default: None ) –

    Security policy configuration (SQRT/Cedar policies)

  • fine_grained_config

    (FineGrainedConfigHeader | None, default: None ) –

    Fine-grained session configuration

  • service_provider

    (LlmServiceProvider | LlmServiceProviderStr, default: OPENROUTER ) –

    LLM service provider (LlmServiceProvider enum or string literal)

  • llm_api_key

    (str | None, default: None ) –

    Optional API key for the LLM provider

  • base_url

    (str | None, default: None ) –

    Sequrity base URL (default: https://api.sequrity.ai)

  • endpoint_type

    (EndpointType | str, default: CHAT ) –

    Endpoint type (chat, code, lang-graph). Defaults to chat.

  • model

    (str, default: 'gpt-4' ) –

    Model name to use (default: gpt-4)

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to ChatOpenAI

Returns:

  • LangGraphChatSequrityAI –

    Configured client instance

Example
from sequrity.control.integrations.langgraph import create_sequrity_langgraph_client
from sequrity.control import FeaturesHeader

# Basic usage with dual-LLM
llm = create_sequrity_langgraph_client(
    sequrity_api_key="your-key",
    features=FeaturesHeader.dual_llm()
)

# With security policy
from sequrity.control import SecurityPolicyHeader
llm = create_sequrity_langgraph_client(
    sequrity_api_key="your-key",
    features=FeaturesHeader.dual_llm(),
    security_policy=SecurityPolicyHeader.dual_llm()
)

# Use with LangChain
response = llm.invoke([{"role": "user", "content": "Hello!"}])

# Use with LangGraph
from langgraph.graph import StateGraph
from langgraph.prebuilt import ToolNode
llm_with_tools = llm.bind_tools([...])
# ... build your graph
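
To make the "# ... build your graph" step concrete, below is a minimal sketch of a tool-calling graph built with standard LangGraph primitives (StateGraph, ToolNode, tools_condition). The get_weather tool is a hypothetical placeholder; everything Sequrity-specific is unchanged from the examples above:

from typing import Annotated, TypedDict

from langchain_core.tools import tool
from langgraph.graph import StateGraph, START
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition

@tool
def get_weather(city: str) -> str:
    """Return a canned weather report for a city (placeholder tool)."""
    return f"It is sunny in {city}."

# Conversation state: add_messages appends new messages instead of replacing
class State(TypedDict):
    messages: Annotated[list, add_messages]

llm_with_tools = llm.bind_tools([get_weather])

def chatbot(state: State) -> dict:
    return {"messages": [llm_with_tools.invoke(state["messages"])]}

builder = StateGraph(State)
builder.add_node("chatbot", chatbot)
builder.add_node("tools", ToolNode([get_weather]))
builder.add_edge(START, "chatbot")
builder.add_conditional_edges("chatbot", tools_condition)  # route to tools or end
builder.add_edge("tools", "chatbot")
graph = builder.compile()

result = graph.invoke({"messages": [{"role": "user", "content": "Weather in Paris?"}]})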