
Secure Tool Use with Dual-LLM

As mentioned at the end of Sending your first message, Dual-LLM enables advanced security features when tool calls are involved. This tutorial demonstrates how to use Sequrity's Dual-LLM feature to secure tool calling in chat completion workflows. Specifically, the example below illustrates how to enforce security policies that prevent sensitive data from being sent to unauthorized recipients.

Prerequisites

Before starting, ensure you have the following API keys:

  • Sequrity API Key: Sign up at Sequrity.ai to get your API key from the dashboard
  • LLM Provider API Key: Sequrity acts as a relay service that forwards your requests to LLM service providers, so you need to supply LLM API keys that Sequrity Control will use for the planning LLM (PLLM) and the quarantined LLM (QLLM). This example uses OpenRouter, but you can use any supported provider [1].

Set these keys as environment variables:

export SEQURITY_API_KEY="your-sequrity-api-key"
export OPENROUTER_API_KEY="your-openrouter-api-key"

Installation

Install the required packages based on your preferred approach:

Python SDK:

pip install sequrity rich

REST API: for ease of reading, we use the requests library to demonstrate the REST API calls.

pip install requests rich

The rich package is optional but provides nicely formatted output for the demonstrations.

Tool Use in Chat Completion

Tool use (also known as function calling) allows LLMs to interact with external APIs and services. In a typical tool use flow:

  1. A user sends a message requesting some action that requires tool use, and provides tool definitions (input schemas and descriptions) to the LLM.

  2. The LLM returns an assistant message with tool_calls containing the function name and arguments.

    Example Assistant Message with Tool Call
    {
        "content": "",
        "role": "assistant",
        "tool_calls": [
            {
                "id": "tc-6e0ec4e8-f7ef-11f0-8bfb-9166...",
                "type": "function",
                "function": {
                    "name": "get_internal_document",
                    "arguments": "{\"doc_id\": \"DOC12345\"}"
                }
            }
        ]
    }
    
  3. Your application executes the tool and returns a tool message with the result.

    Example Tool Message with Tool Result
    {
        "role": "tool",
        "content": "The document content is: 'Sequrity is a secure AI...'",
        "tool_call_id": "tc-6e0ec4e8-f7ef-11f0-8bfb-9166..."
    }
    
  4. Append the tool call and tool result messages to the conversation history, then send it back to the LLM for further processing.

For a comprehensive guide on tool use, see OpenAI's function calling tutorial.
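The four-step flow above can be sketched as a generic loop. This is an illustrative sketch, not Sequrity-specific code; call_llm and run_tool are hypothetical stand-ins for your LLM client and your tool dispatcher:

```python
def tool_use_loop(messages, tools, call_llm, run_tool, max_rounds=5):
    """Drive the tool-use flow: send the conversation, execute any requested
    tools, feed the results back, and stop when the model returns plain content."""
    for _ in range(max_rounds):
        assistant = call_llm(messages=messages, tools=tools)   # steps 1-2
        messages.append(assistant)
        tool_calls = assistant.get("tool_calls")
        if not tool_calls:
            return assistant["content"]                        # final answer
        for tc in tool_calls:                                  # step 3
            result = run_tool(tc["function"]["name"], tc["function"]["arguments"])
            messages.append({"role": "tool", "content": result,
                             "tool_call_id": tc["id"]})        # step 4
    raise RuntimeError("tool-use loop did not finish within max_rounds")
```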

Security Features, Policies, and Fine-Grained Configs

Sequrity Control provides powerful and fine-grained control over tool use through custom headers. Let's examine the security configuration used in this example:

Python SDK:

features = FeaturesHeader.dual_llm()
security_policy = SecurityPolicyHeader.dual_llm(
    codes=r"""
    let sensitive_docs = {"internal_use", "confidential"};
    tool "get_internal_document" -> @tags |= sensitive_docs;
    tool "send_email" {
        hard deny when (body.tags overlaps sensitive_docs) and (not to.value in {str matching r".*@trustedcorp\.com"});
    }
    """,
)
fine_grained_config = FineGrainedConfigHeader(response_format=ResponseFormatOverrides(include_program=True))
REST API:

import json

# Custom headers as JSON (no classes)
features = json.dumps({"agent_arch": "dual-llm"})

security_policy = json.dumps(
    {
        "codes": {
            "code": r"""
                let sensitive_docs = {"internal_use", "confidential"};
                tool "get_internal_document" -> @tags |= sensitive_docs;
                tool "send_email" {
                    hard deny when (body.tags overlaps sensitive_docs) and (not to.value in {str matching r".*@trustedcorp\.com"});
                }
            """,
            "language": "sqrt",
        },
    }
)

fine_grained_config = json.dumps({"response_format": {"include_program": True}})
  • X-Features: Enables the Dual-LLM feature in this example
  • X-Policy: Defines security policies in SQRT language:

    // Define sensitive document tags
    let sensitive_docs = {"internal_use", "confidential"};
    // Add tags to tool results of get_internal_document
    tool "get_internal_document" -> @tags |= sensitive_docs;
    // Hard deny sending emails if body contains sensitive tags
    // and recipient does not match trusted pattern
    tool "send_email" {
        hard deny when (body.tags overlaps sensitive_docs) and
        (not to.value in {str matching r".*@trustedcorp\.com"});
    }
    

    The policies do the following:

    • Tags documents retrieved by get_internal_document as internal_use and confidential
    • Blocks send_email calls if the email body contains sensitive tags AND the recipient is not from trustedcorp.com
  • X-Config: Controls the response format; include_program: true returns the generated execution program for auditing and transparency

Tool Definitions

Both examples use two tools: one for retrieving internal documents and another for sending emails.

def get_internal_document(doc_id: str) -> str:
    ...

def send_email(to: str, subject: str, body: str) -> str:
    ...

Here we follow the OpenAI chat completion tool definition format to define these tools:

Tool Definitions of get_internal_document and send_email
tool_defs = [
    {
        "type": "function",
        "function": {
            "name": "get_internal_document",
            "description": "Retrieve an internal document by its ID. Returns the document content as a string.",
            "parameters": {
                "type": "object",
                "properties": {
                    "doc_id": {
                        "type": "string",
                        "description": "The ID of the internal document to retrieve.",
                    }
                },
                "required": ["doc_id"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "send_email",
            "description": "Send an email to a specified recipient. Returns a confirmation string upon success.",
            "parameters": {
                "type": "object",
                "properties": {
                    "to": {"type": "string", "description": "The recipient's email address."},
                    "subject": {"type": "string", "description": "The subject of the email."},
                    "body": {"type": "string", "description": "The body content of the email."},
                },
                "required": ["to", "subject", "body"],
            },
        },
    },
]
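Wiring these definitions to local Python functions amounts to a name lookup plus JSON argument decoding. A minimal dispatcher sketch (the stub bodies below are placeholders, not the tutorial's real implementations):

```python
import json

def get_internal_document(doc_id: str) -> str:
    # stub: a real implementation would fetch from your document store
    return f"The document content is: 'Sequrity is a secure AI...' ({doc_id})"

def send_email(to: str, subject: str, body: str) -> str:
    # stub: a real implementation would call your mail service
    return f"Email sent to {to}"

TOOL_IMPLS = {"get_internal_document": get_internal_document, "send_email": send_email}

def execute_tool_call(tool_call: dict) -> dict:
    """Run one tool_calls entry and build the corresponding tool message."""
    fn = TOOL_IMPLS[tool_call["function"]["name"]]
    kwargs = json.loads(tool_call["function"]["arguments"])
    return {"role": "tool", "content": fn(**kwargs), "tool_call_id": tool_call["id"]}
```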

Case 1: Blocking Emails to Untrusted Domains

Now we demonstrate how Sequrity blocks attempts to send sensitive documents to an untrusted email address research@gmail.com.

Step 1: Setup Client & Model

The Sequrity Control API lets you specify two LLMs for Dual-LLM tool use: the PLLM generates the execution plan, and the QLLM processes data.

Python SDK:

client = SequrityClient(api_key=sequrity_key)  # sequrity_key is your SEQURITY_API_KEY
service_provider = "openrouter"
model = "openai/gpt-5-mini,openai/gpt-5-nano"  # Dual-LLM: PLLM, QLLM

We define a helper function chat_completion to call the chat completion endpoint.

REST API:

import json
import os

import requests

sequrity_key = os.environ["SEQURITY_API_KEY"]
openrouter_api_key = os.environ["OPENROUTER_API_KEY"]
# base_url should point to the Sequrity Control API base URL

service_provider = "openrouter"
model = "openai/gpt-5-mini,openai/gpt-5-nano"  # Dual-LLM: PLLM, QLLM

def chat_completion(messages):
    url = f"{base_url}/control/chat/{service_provider}/v1/chat/completions"
    headers = {
        "Authorization": f"Bearer {sequrity_key}",
        "Content-Type": "application/json",
        "X-Api-Key": openrouter_api_key,
        "X-Features": features,
        "X-Policy": security_policy,
        "X-Config": fine_grained_config,
    }

    payload = {"messages": messages, "model": model, "tools": tool_defs}
    response = requests.post(url, headers=headers, json=payload)
    response.raise_for_status()
    return response.json()

Step 2: Send User Query

The user requests to retrieve an internal document and email it to an untrusted domain (research@gmail.com). Note that we need to keep track of the session_id to maintain context across multiple tool calls.

Python SDK:

user_query = "Retrieve the internal document with ID 'DOC12345' and email it to research@gmail.com"
messages = [{"role": "user", "content": user_query}]

response = client.control.chat.create(
    messages=messages,
    model=model,
    tools=tool_defs,
    features=features,
    security_policy=security_policy,
    fine_grained_config=fine_grained_config,
    provider=service_provider,
)
REST API:

user_query = "Retrieve the internal document with ID 'DOC12345' and email it to research@gmail.com"
messages = [{"role": "user", "content": user_query}]

response_data = chat_completion(messages)

Step 3: LLM Calls get_internal_document

The LLM first calls get_internal_document to retrieve the document. This tool call is allowed because no policy denies it [2].

Python SDK:

assert response.choices[0].message.tool_calls[0].function.name == "get_internal_document"
tool_call = response.choices[0].message.tool_calls[0]
REST API:

assert response_data["choices"][0]["message"]["tool_calls"][0]["function"]["name"] == "get_internal_document"
tool_call = response_data["choices"][0]["message"]["tool_calls"][0]

Step 4: Return Tool Result

Simulate the tool execution and return the sensitive document content.

Python SDK:

# simulate tool execution and get tool response
messages.append(
    {
        "role": "tool",
        "content": "The document content is: 'Sequrity is a secure AI orchestration platform...'",
        "tool_call_id": tool_call.id,
    }
)
REST API:

# simulate tool execution and get tool response
messages.append(
    {
        "role": "tool",
        "content": "The document content is: 'Sequrity is a secure AI orchestration platform...'",
        "tool_call_id": tool_call["id"],
    }
)
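Both variants above repeat the same bookkeeping when returning a tool result; a small helper (illustrative, not part of the Sequrity SDK) keeps it in one place:

```python
def record_tool_result(messages: list, tool_call_id: str, result: str) -> list:
    """Append a tool message carrying the (simulated) tool execution result."""
    messages.append({"role": "tool", "content": result, "tool_call_id": tool_call_id})
    return messages
```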

Step 5: Security Policy Blocks send_email

When the LLM attempts to call send_email, Sequrity detects that the email body contains sensitive tags and the recipient is not from a trusted domain. The tool call is blocked.

Python SDK:

from rich import print as rprint
from rich.syntax import Syntax

response = client.control.chat.create(
    messages=messages,
    model=model,
    tools=tool_defs,
    provider=service_provider,
)
assert response.choices[0].message is not None
assert response.choices[0].message.content is not None
assert "denied by argument checking policies" in response.choices[0].message.content

content = ResponseContentJsonSchema.model_validate_json(response.choices[0].message.content)
rprint("\n[bold red]🚨 Send email denied by security policy[/bold red]")
assert content.error is not None, "Expected error info in denied response"
rprint(f"[yellow]Error:[/yellow] {content.error.message}\n")

rprint("[bold yellow]Generated Program:[/bold yellow]")
syntax = Syntax(content.program, "python", theme="monokai", line_numbers=True, word_wrap=False)
rprint(syntax)
REST API:

from rich import print as rprint
from rich.syntax import Syntax

response_data = chat_completion(messages)
assert "denied by argument checking policies" in response_data["choices"][0]["message"]["content"]

content = json.loads(response_data["choices"][0]["message"]["content"])
rprint("\n[bold red]🚨 Send email denied by security policy[/bold red]")
rprint(f"[yellow]Error:[/yellow] {content['error']['message']}\n")

rprint("[bold yellow]Generated Program:[/bold yellow]")
syntax = Syntax(content["program"], "python", theme="monokai", line_numbers=True, word_wrap=False)
rprint(syntax)

Expected Output

You may see a program like this generated by the PLLM:

🚨 Send email denied by security policy
Error: Tool call send_email denied by argument checking policies.
Tool calls: {...}. Checking results: {'final_verdict': 'hard_denied', ...}

Generated Program:
   1 # Retrieve the internal document and email it
   2 final_return_value = None
   3
   4 doc_id = "DOC12345"
   5 recipient = "research@gmail.com"
   6 subject = f"Internal Document {doc_id}"
   7
   8 # Get the document content
   9 document_content = get_internal_document(doc_id=doc_id)
  10
  11 # Prepare email body
  12 email_body = f"Attached is the content of internal document {doc_id}:\n\n{document_content}"
  13
  14 # Send the email (🚨 BLOCKED HERE)
  15 send_result = send_email(to=recipient, subject=subject, body=email_body)

The security policy successfully blocks the email because:

  1. The document returned by get_internal_document is tagged as internal_use and confidential
  2. These two tags propagate into the email body through the string formatting on line 12
  3. The recipient research@gmail.com doesn't match the trusted pattern .*@trustedcorp\.com, thus violating the hard deny policy for send_email.
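The tag propagation in step 2 can be pictured with a toy Python model. This is purely an illustration of the concept; it is not how Sequrity implements metadata propagation:

```python
import re

class Tagged(str):
    """Toy tainted string: a str carrying a set of tags."""
    def __new__(cls, value, tags=frozenset()):
        obj = super().__new__(cls, value)
        obj.tags = frozenset(tags)
        return obj

def tagged_format(template, **parts):
    """Format a string while unioning the tags of all interpolated parts."""
    tags = frozenset().union(*(getattr(p, "tags", frozenset()) for p in parts.values()))
    return Tagged(template.format(**parts), tags)

SENSITIVE = {"internal_use", "confidential"}
doc = Tagged("Sequrity is a secure AI...", SENSITIVE)   # tagged tool result
body = tagged_format("Attached is the document:\n\n{doc}", doc=doc)

recipient = "research@gmail.com"
blocked = bool(body.tags & SENSITIVE) and not re.fullmatch(r".*@trustedcorp\.com", recipient)
print(blocked)  # True: the tags reached the email body and the recipient is untrusted
```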

Case 2: Allowing Emails to Trusted Domains

Now let's see what happens when emailing to a trusted domain.

Send Query with Trusted Recipient

Change the recipient to user@trustedcorp.com and start a new session:

Python SDK:

messages = [{"role": "user", "content": user_query.replace("research@gmail.com", "user@trustedcorp.com")}]

response = client.control.chat.create(
    messages=messages,
    model=model,
    tools=tool_defs,
    features=features,
    security_policy=security_policy,
    provider=service_provider,
    fine_grained_config=fine_grained_config,
)
REST API:

messages = [{"role": "user", "content": user_query.replace("research@gmail.com", "user@trustedcorp.com")}]

response_data = chat_completion(messages)

Execute Tool Calls

Following the same flow as before:

  1. LLM calls get_internal_document - return the document content
  2. LLM calls send_email - this time it's allowed!
  3. Return send_email result
  4. Get final response from LLM
Tool Call Executions with Trusted Recipient
Python SDK:

# append assistant message (tool call to get_internal_document)
messages.append(response.choices[0].message.model_dump(mode="json"))
tool_call = response.choices[0].message.tool_calls[0]
# simulate tool execution and get tool response
messages.append(
    {
        "role": "tool",
        "content": "The document content is: 'Sequrity is a secure AI orchestration platform...'",
        "tool_call_id": tool_call.id,
    }
)
rprint("\n[dim]→ Executing tool call: [bold]get_internal_document[/bold][/dim]")
response = client.control.chat.create(
    messages=messages,
    model=model,
    tools=tool_defs,
    provider=service_provider,
)
# this should be tool call to send_email
assert response.choices[0].message is not None
assert response.choices[0].message.tool_calls is not None
assert response.choices[0].message.tool_calls[0].function.name == "send_email"
tool_call = response.choices[0].message.tool_calls[0]
# append assistant message (tool call to send_email)
messages.append(response.choices[0].message.model_dump(mode="json"))
# simulate tool execution and get tool response
messages.append(
    {
        "role": "tool",
        "content": "Email sent successfully",
        "tool_call_id": tool_call.id,
    }
)
rprint("\n[dim]→ Executing tool call: [bold]send_email[/bold][/dim]")
response = client.control.chat.create(
    messages=messages,
    model=model,
    tools=tool_defs,
    provider=service_provider,
)
# final response
assert response.choices[0].message is not None
assert response.choices[0].message.content is not None
content = ResponseContentJsonSchema.model_validate_json(response.choices[0].message.content)
assert content.status == "success"
rprint("\n[bold green]✅ Email allowed to trusted domain[/bold green]")
REST API:

messages.append(response_data["choices"][0]["message"])
tool_call = response_data["choices"][0]["message"]["tool_calls"][0]
messages.append(
    {
        "role": "tool",
        "content": "The document content is: 'Sequrity is a secure AI orchestration platform...'",
        "tool_call_id": tool_call["id"],
    }
)
rprint("\n[dim]→ Executing tool call: [bold]get_internal_document[/bold][/dim]")

response_data = chat_completion(messages)
assert response_data["choices"][0]["message"]["tool_calls"][0]["function"]["name"] == "send_email"
tool_call = response_data["choices"][0]["message"]["tool_calls"][0]

messages.append(response_data["choices"][0]["message"])
messages.append({"role": "tool", "content": "Email sent successfully", "tool_call_id": tool_call["id"]})
rprint("\n[dim]→ Executing tool call: [bold]send_email[/bold][/dim]")

response_data = chat_completion(messages)
content = json.loads(response_data["choices"][0]["message"]["content"])
assert content["status"] == "success"
rprint("\n[bold green]✅ Email allowed to trusted domain[/bold green]")

Expected Output

✅ Email allowed to trusted domain
Status: success
Return Value: {'value': {'status': 'success', 'doc_id': 'DOC12345',
'emailed_to': 'user@trustedcorp.com',
'message': 'Document retrieved and emailed successfully.'}, ...}

Generated Program:
   1 # Retrieve the document and email it to the recipient.
   2 final_return_value = None
   3
   4 try:
   5     doc_content = get_internal_document(doc_id="DOC12345")
   6     email_subject = "Requested document DOC12345"
   7     email_body = (
   8         "Hello,\n\n"
   9         "Attached below is the content of internal document DOC12345 as requested:\n\n"
  10         f"{doc_content}\n\n"
  11         "Regards,\nAutomated Document Service"
  12     )
  13     send_email(to="user@trustedcorp.com", subject=email_subject, body=email_body)
  14     final_return_value = {
  15         "status": "success",
  16         "doc_id": "DOC12345",
  17         "emailed_to": "user@trustedcorp.com",
  18         "message": "Document retrieved and emailed successfully."
  19     }
  20 except Exception as e:
  21     final_return_value = {"status": "error", "error": str(e)}

This time the email is allowed because the recipient matches the trusted domain pattern .*@trustedcorp\.com, even though the email body contains sensitive tags.
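To sanity-check which recipients a pattern like .*@trustedcorp\.com admits, you can mirror it with Python's re module (assuming full-string matching semantics, which the allow/deny behavior above is consistent with):

```python
import re

TRUSTED = re.compile(r".*@trustedcorp\.com")

def is_trusted(recipient: str) -> bool:
    # fullmatch: the whole address must fit the pattern, not just a substring
    return TRUSTED.fullmatch(recipient) is not None

print(is_trusted("user@trustedcorp.com"))               # True
print(is_trusted("research@gmail.com"))                 # False
print(is_trusted("user@trustedcorp.com.attacker.net"))  # False: trailing text rejected
```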

Key Takeaways

  1. Dual-LLM separates control flow from data processing; the control flow is a Python program generated by the PLLM.
  2. Metadata such as tags propagates through program execution.
  3. The Sequrity Control API enforces security policies on tool calls based on the propagated metadata, preventing unauthorized actions.

More Complex Examples

The Sequrity Control API supports more complex scenarios, such as enforcing complex business logic, ensuring factuality with data provenance, and meeting legal, compliance, fairness, and interpretability requirements. Explore more examples to see how Sequrity can help secure your LLM applications!


  1. See Supported Providers for the list of LLM providers supported by the REST API, and LLM Service Provider Enum for the Sequrity Client. 

  2. get_internal_document has no user-defined policy but is still allowed. This is because InternalPolicyPresets has default_allow=true by default.