Sending Your First Message with Sequrity Control API
This guide shows you how to send your first chat completion request through the Sequrity Control API.
Prerequisites
- Sequrity API Key: Log in to the Sequrity Dashboard, navigate to API Keys, and create a new API key by selecting the Dual LLM option.
- LLM Provider API Key: You can think of Sequrity as a relay service that forwards your requests to LLM service providers, so you need to supply your own LLM provider API key. This example uses OpenRouter, but you can use any supported provider.¹
Download Tutorial Scripts
Installation
You can interact with the Sequrity Control API using either the Sequrity Python client or directly via REST API calls.
Sending Your First Message
Both the Sequrity client and the REST API are compatible with the OpenAI Chat Completions API and the Anthropic Messages API. In this example, we use the OpenAI Chat Completions API.
Request
Let's send a simple message asking "What is the largest prime number below 100?"
```python
import os

from sequrity import SequrityClient

sequrity_key = os.getenv("SEQURITY_API_KEY", "your-sequrity-api-key")
openrouter_api_key = os.getenv("OPENROUTER_API_KEY", "your-openrouter-key")


def first_message_example():
    # Initialize the Sequrity client
    client = SequrityClient(api_key=sequrity_key)

    # Send a chat completion request
    response = client.control.chat.create(
        messages=[{"role": "user", "content": "What is the largest prime number below 100?"}],
        model="openai/gpt-5-mini",      # model name from your LLM provider
        llm_api_key=openrouter_api_key,  # your LLM provider API key
        provider="openrouter",           # specify the LLM provider
    )
    print(response)


if __name__ == "__main__":
    print("=== First Message Example ===")
    first_message_example()
```
We create an instance of `SequrityClient` with your Sequrity API key and send the message with `chat.create`, specifying the model name on OpenRouter and your OpenRouter API key.
```bash
SEQURITY_API_KEY="${SEQURITY_API_KEY:-your-sequrity-api-key}"
OPENROUTER_API_KEY="${OPENROUTER_API_KEY:-your-openrouter-key}"
SERVICE_PROVIDER="openrouter"
REST_API_URL="https://api.sequrity.ai/control/chat/${SERVICE_PROVIDER}/v1/chat/completions"

curl -X POST "$REST_API_URL" \
  -H "Authorization: Bearer $SEQURITY_API_KEY" \
  -H "Content-Type: application/json" \
  -H "X-Api-Key: $OPENROUTER_API_KEY" \
  -d '{
    "model": "openai/gpt-5-mini",
    "messages": [{"role": "user", "content": "What is the largest prime number below 100?"}]
  }'
```
We use curl to send a POST request to the Sequrity Control API endpoint for OpenRouter, specifying the model name in the request body and passing your OpenRouter API key in the `X-Api-Key` header.
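The same request can be assembled in Python. The sketch below only constructs the URL, headers, and JSON body (mirroring the curl example above) so it runs offline; the commented line shows how you would actually send it with the `requests` library.

```python
import json

# Offline sketch: build the same request the curl example sends.
# URL shape and header names are taken from the example above.
provider = "openrouter"
url = f"https://api.sequrity.ai/control/chat/{provider}/v1/chat/completions"
headers = {
    "Authorization": "Bearer your-sequrity-api-key",
    "Content-Type": "application/json",
    "X-Api-Key": "your-openrouter-key",
}
payload = {
    "model": "openai/gpt-5-mini",
    "messages": [{"role": "user", "content": "What is the largest prime number below 100?"}],
}
body = json.dumps(payload)
print(url)  # https://api.sequrity.ai/control/chat/openrouter/v1/chat/completions

# To actually send it, e.g. with the requests library:
# requests.post(url, headers=headers, data=body)
```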
Response
The response follows the OpenAI Chat Completions format.
Minor Difference from OpenAI Chat Completions API
Compared to OpenAI's Chat Completions API, the Sequrity Control API adds one extra piece of information to the response: a session ID.

- For the Sequrity client, the session ID is available as `ChatCompletionResponse.session_id`.
- For the REST API, the session ID is returned in the custom `X-Session-ID` response header.

The session ID maintains context across multiple interactions in a chat session. In most cases, however, you do not need to handle session IDs manually: Sequrity Control also encodes the session ID into the tool call IDs it returns, and parses it back out of follow-up requests that carry tool results.
Read more in Session ID and Multi-turn Sessions.
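As a minimal sketch of the REST path, here is one way to read the session ID from response headers. The header dict is simulated so the snippet runs offline; only the `X-Session-ID` header name comes from the text above.

```python
def session_id_from_headers(headers: dict) -> "str | None":
    """Pull the Sequrity session ID out of REST response headers."""
    # HTTP header names are case-insensitive, so normalize before lookup.
    normalized = {k.lower(): v for k, v in headers.items()}
    return normalized.get("x-session-id")

# Simulated response headers, shaped like a REST response.
headers = {
    "Content-Type": "application/json",
    "X-Session-ID": "7f4f6398-f72d-11f0-b822-0f87f79310f1",
}
print(session_id_from_headers(headers))  # 7f4f6398-f72d-11f0-b822-0f87f79310f1
```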
Sequrity client response:

```python
ChatCompletionResponse(
    id="7f4f6398-f72d-11f0-b822-0f87f79310f1",
    choices=[
        Choice(
            finish_reason="stop",
            index=0,
            message=ResponseMessage(
                role="assistant",
                content='{"status": "success", "final_return_value": {"value": 97, "meta": {"tags": [], "consumers": ["*"], "producers": []}}}',
                refusal=None,
                annotations=None,
                audio=None,
                function_call=None,
                tool_calls=None,
            ),
            logprobs=None,
        )
    ],
    created=1769043533,
    model="openai/gpt-5-mini,openai/gpt-5-mini",
    object="chat.completion",
    usage=CompletionUsage(completion_tokens=304, prompt_tokens=2881, total_tokens=3185),
    session_id="7f4f6398-f72d-11f0-b822-0f87f79310f1",
)
```
REST API response:

```json
{
  "id": "df728048-f72c-11f0-b1e5-0f87f79310f1",
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "message": {
        "content": "{\"status\": \"success\", \"final_return_value\": {\"value\": 97, \"meta\": {\"tags\": [], \"consumers\": [\"*\"], \"producers\": []}}}",
        "role": "assistant"
      }
    }
  ],
  "created": 1769043264,
  "model": "openai/gpt-5-mini,openai/gpt-5-mini",
  "object": "chat.completion",
  "usage": {
    "completion_tokens": 312,
    "prompt_tokens": 2889,
    "total_tokens": 3201
  }
}
```
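Notice that in these responses the assistant message `content` is itself a JSON string (the execution result produced by Sequrity). A small sketch of unpacking it, plus a local sanity check of the model's answer:

```python
import json

# The assistant "content" from the responses above is a JSON string; parse it.
content = '{"status": "success", "final_return_value": {"value": 97, "meta": {"tags": [], "consumers": ["*"], "producers": []}}}'
result = json.loads(content)
assert result["status"] == "success"
answer = result["final_return_value"]["value"]
print(answer)  # 97

# Sanity check: 97 really is the largest prime below 100.
def is_prime(n: int) -> bool:
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

assert answer == max(n for n in range(2, 100) if is_prime(n))
```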
Specifying Single/Dual LLM
You may have noticed that we selected the Dual LLM option when creating the API key in Prerequisites. The Sequrity Control API supports two architectures for interacting with LLMs:

- Single-LLM is how most existing applications interact with LLMs today: all requests go to a single LLM, which handles everything, including both instructions and data. Sequrity Control adds basic security features on top of this architecture.
- Dual-LLM uses a planning LLM (pllm) to generate execution plans and a quarantined LLM (qllm) to process data. This decouples control flow from data flow and provides stronger security guarantees.
Learn More about single vs dual LLM?
See Single vs Dual LLM for a detailed comparison.
You can specify Single-LLM or Dual-LLM mode in either of the following two ways:

- Select the mode when creating the API key

Log in to the Sequrity Dashboard, navigate to API Keys, and create a new API key by selecting the Single LLM or Dual LLM option.
Example: Select Single-LLM in Dashboard

- Override the mode via the `X-Features` header

Whichever Sequrity API key you use (Single-LLM or Dual-LLM), you can always override the mode by passing the `X-Features` header:

- For the Sequrity client, use `FeaturesHeader.single_llm` / `FeaturesHeader.dual_llm`.
- For the REST API, set the `X-Features` header directly.

Only the `X-Features` header is needed to switch the architecture. The other config headers (`X-Policy`, `X-Config`) are optional — the server uses preset defaults for any header not provided.

Specify Single-LLM via Request Headers:
```python
from sequrity.control import FeaturesHeader


def single_llm_example():
    # Initialize the client
    client = SequrityClient(api_key=sequrity_key)

    # Only FeaturesHeader is needed to select the architecture.
    # X-Policy and X-Config are optional — the server uses preset defaults.
    features = FeaturesHeader.single_llm()

    # Send a chat completion request
    response = client.control.chat.create(
        messages=[{"role": "user", "content": "What is the largest prime number below 100?"}],
        model="openai/gpt-5-mini",
        llm_api_key=openrouter_api_key,
        features=features,
        provider="openrouter",
    )
    print(response)
```

```bash
curl -X POST "$REST_API_URL" \
  -H "Authorization: Bearer $SEQURITY_API_KEY" \
  -H "Content-Type: application/json" \
  -H "X-Api-Key: $OPENROUTER_API_KEY" \
  -H 'X-Features: {"agent_arch":"single-llm"}' \
  -d '{
    "model": "openai/gpt-5-mini",
    "messages": [{"role": "user", "content": "What is the largest prime number below 100?"}]
  }'
```

Specify Dual-LLM via Request Headers:
```python
def dual_llm_example():
    # Initialize the client
    client = SequrityClient(api_key=sequrity_key)

    # Only FeaturesHeader is needed to select the architecture.
    # X-Policy and X-Config are optional — the server uses preset defaults.
    features = FeaturesHeader.dual_llm()

    # Send a chat completion request
    response = client.control.chat.create(
        messages=[{"role": "user", "content": "What is the largest prime number below 100?"}],
        model="openai/gpt-5-mini",
        llm_api_key=openrouter_api_key,
        features=features,
        provider="openrouter",
    )
    print(response)
```

```bash
curl -X POST "$REST_API_URL" \
  -H "Authorization: Bearer $SEQURITY_API_KEY" \
  -H "Content-Type: application/json" \
  -H "X-Api-Key: $OPENROUTER_API_KEY" \
  -H 'X-Features: {"agent_arch":"dual-llm"}' \
  -d '{
    "model": "openai/gpt-5-mini",
    "messages": [{"role": "user", "content": "What is the largest prime number below 100?"}]
  }'
```
How is the session config built?
Every request to Sequrity Control runs inside a session governed by a session config. The config is built at request time through a layered pipeline:
- Base config from your API key (DB lookup) or a default preset
- Header overrides — `X-Features`, `X-Policy`, `X-Config` (all optional, applied in order)
- Request-level LLM config — model name and API key from the request body/headers
All three config headers are independent and optional. You can pass any combination of them, and omitted headers simply keep their preset defaults.
Refer to How Session Config Is Built for details.
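The layering can be sketched as successive dictionary merges, where later layers win on conflicting keys. The config keys used here (`agent_arch`, `policy`, `model`) are illustrative stand-ins, not the server's actual config schema.

```python
def build_session_config(base: dict, *layers: dict) -> dict:
    """Apply override layers in order; later layers win on conflicting keys."""
    config = dict(base)
    for layer in layers:
        config.update(layer)
    return config

# 1. Base config from the API key (here: a Dual-LLM key's hypothetical preset).
base = {"agent_arch": "dual-llm", "policy": "default", "model": None}
# 2. Header overrides, applied in order (X-Features shown; X-Policy, X-Config omitted).
x_features = {"agent_arch": "single-llm"}
# 3. Request-level LLM config from the body/headers.
request_llm = {"model": "openai/gpt-5-mini"}

config = build_session_config(base, x_features, request_llm)
print(config["agent_arch"], config["model"])  # single-llm openai/gpt-5-mini
```

Omitted layers are simply skipped, which mirrors how unset headers keep their preset defaults.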
Next Steps
In the examples above, Dual-LLM may look barely different from Single-LLM. However, Dual-LLM enables advanced security features once tool calls are involved. Learn more in Secure Tool Use with Dual-LLM.
More resources explaining Security Features, Security Policies, and Fine-grained Configurations:
- See more security features like toxicity filtering and PII redaction
- Explore security policies for fine-grained control
- Learn about advanced configurations
- See examples for more advanced use cases
¹ See Supported Providers for a list of supported LLM providers in the REST API, and the LLM Service Provider Enum for the Sequrity client.