Security Features / X-Features

FeaturesHeader pydantic-model

Configuration header for Sequrity security features (X-Features).

Sent as a JSON object with agent architecture selection and optional content classifiers/blockers.

Example
features = FeaturesHeader.single_llm(toxicity_filter=True)
features = FeaturesHeader.dual_llm(pii_redaction=True, url_blocker=True)
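For reference, the serialized `X-Features` value produced by a configuration like the second example above is a JSON object of the shape below. This sketch is assembled by hand from the schema that follows; the SDK's `dump_for_headers()` produces an equivalent JSON string.

```python
import json

# Hand-built sketch of the X-Features wire format described by the schema.
# Field names and enum values come from the FeaturesHeader JSON schema.
features = {
    "agent_arch": "dual-llm",
    "content_classifiers": [
        {"name": "pii_redaction", "threshold": 0.5},
    ],
    "content_blockers": [
        {"name": "url_blocker"},
    ],
}

header_value = json.dumps(features)
headers = {"X-Features": header_value}
```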
JSON schema:
{
  "$defs": {
    "ConstraintConfig": {
      "additionalProperties": false,
      "description": "Configuration for a content blocker.\n\nAttributes:\n    name: Blocker identifier (\"url_blocker\" or \"file_blocker\").",
      "properties": {
        "name": {
          "description": "Blocker identifier ('url_blocker' or 'file_blocker').",
          "enum": [
            "url_blocker",
            "file_blocker"
          ],
          "title": "Name",
          "type": "string"
        }
      },
      "required": [
        "name"
      ],
      "title": "ConstraintConfig",
      "type": "object"
    },
    "TaggerConfig": {
      "additionalProperties": false,
      "description": "Configuration for a content classifier.\n\nAttributes:\n    name: Classifier identifier.\n    threshold: Detection sensitivity threshold (0.0-1.0).\n    mode: Optional mode that overrides threshold (e.g., \"high sensitivity\", \"strict\", \"low sensitivity\", \"normal\").",
      "properties": {
        "name": {
          "description": "Classifier identifier.",
          "enum": [
            "pii_redaction",
            "toxicity_filter",
            "healthcare_topic_guardrail",
            "finance_topic_guardrail"
          ],
          "title": "Name",
          "type": "string"
        },
        "threshold": {
          "default": 0.5,
          "description": "Threshold for the tagger.",
          "maximum": 1.0,
          "minimum": 0.0,
          "title": "Threshold",
          "type": "number"
        },
        "mode": {
          "anyOf": [
            {
              "type": "string"
            },
            {
              "type": "null"
            }
          ],
          "default": null,
          "description": "Optional mode that overrides threshold (e.g., 'high sensitivity', 'strict', 'low sensitivity', 'normal').",
          "title": "Mode"
        }
      },
      "required": [
        "name"
      ],
      "title": "TaggerConfig",
      "type": "object"
    }
  },
  "additionalProperties": false,
  "description": "Configuration header for Sequrity security features (``X-Features``).\n\nSent as a JSON object with agent architecture selection and optional\ncontent classifiers/blockers.\n\nExample:\n    ```python\n    features = FeaturesHeader.single_llm(toxicity_filter=True)\n    features = FeaturesHeader.dual_llm(pii_redaction=True, url_blocker=True)\n    ```",
  "properties": {
    "agent_arch": {
      "anyOf": [
        {
          "enum": [
            "single-llm",
            "dual-llm"
          ],
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "Agent architecture: single-llm or dual-llm.",
      "title": "Agent Arch"
    },
    "content_classifiers": {
      "anyOf": [
        {
          "items": {
            "$ref": "#/$defs/TaggerConfig"
          },
          "type": "array"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "LLM-based content classifiers that analyze tool call arguments (pre-execution) and results (post-execution) to detect sensitive content (e.g., PII, toxicity).",
      "title": "Content Classifiers"
    },
    "content_blockers": {
      "anyOf": [
        {
          "items": {
            "$ref": "#/$defs/ConstraintConfig"
          },
          "type": "array"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "Content blockers that redact or mask sensitive content in tool call arguments (pre-execution) and results (post-execution).",
      "title": "Content Blockers"
    }
  },
  "title": "FeaturesHeader",
  "type": "object"
}

Config:

  • extra: forbid

Fields:

  • agent_arch (str | None)

  • content_classifiers (list[TaggerConfig] | None)

  • content_blockers (list[ConstraintConfig] | None)

single_llm classmethod

single_llm(
    toxicity_filter: bool = False,
    pii_redaction: bool = False,
    healthcare_guardrail: bool = False,
    finance_guardrail: bool = False,
    url_blocker: bool = False,
    file_blocker: bool = False,
) -> FeaturesHeader

Create a Single LLM features configuration.

Source code in src/sequrity/control/types/headers.py
@classmethod
def single_llm(
    cls,
    toxicity_filter: bool = False,
    pii_redaction: bool = False,
    healthcare_guardrail: bool = False,
    finance_guardrail: bool = False,
    url_blocker: bool = False,
    file_blocker: bool = False,
) -> FeaturesHeader:
    """Create a Single LLM features configuration."""
    return cls._build(
        "single-llm",
        toxicity_filter,
        pii_redaction,
        healthcare_guardrail,
        finance_guardrail,
        url_blocker,
        file_blocker,
    )

dual_llm classmethod

dual_llm(
    toxicity_filter: bool = False,
    pii_redaction: bool = False,
    healthcare_guardrail: bool = False,
    finance_guardrail: bool = False,
    url_blocker: bool = False,
    file_blocker: bool = False,
) -> FeaturesHeader

Create a Dual LLM features configuration.

Source code in src/sequrity/control/types/headers.py
@classmethod
def dual_llm(
    cls,
    toxicity_filter: bool = False,
    pii_redaction: bool = False,
    healthcare_guardrail: bool = False,
    finance_guardrail: bool = False,
    url_blocker: bool = False,
    file_blocker: bool = False,
) -> FeaturesHeader:
    """Create a Dual LLM features configuration."""
    return cls._build(
        "dual-llm",
        toxicity_filter,
        pii_redaction,
        healthcare_guardrail,
        finance_guardrail,
        url_blocker,
        file_blocker,
    )
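Both classmethods delegate to a private `_build` helper whose implementation is not shown here. Based on the flag-to-identifier mapping implied by the schema, a standalone sketch of the same logic might look like the following (the function name `build_features` and its structure are assumptions, not the SDK's actual code):

```python
def build_features(agent_arch: str, toxicity_filter=False, pii_redaction=False,
                   healthcare_guardrail=False, finance_guardrail=False,
                   url_blocker=False, file_blocker=False) -> dict:
    """Map boolean feature flags onto the X-Features payload shape."""
    classifier_flags = [
        (toxicity_filter, "toxicity_filter"),
        (pii_redaction, "pii_redaction"),
        (healthcare_guardrail, "healthcare_topic_guardrail"),
        (finance_guardrail, "finance_topic_guardrail"),
    ]
    blocker_flags = [
        (url_blocker, "url_blocker"),
        (file_blocker, "file_blocker"),
    ]
    payload: dict = {"agent_arch": agent_arch}
    classifiers = [{"name": name} for enabled, name in classifier_flags if enabled]
    blockers = [{"name": name} for enabled, name in blocker_flags if enabled]
    # Omit empty lists, mirroring exclude_none-style serialization.
    if classifiers:
        payload["content_classifiers"] = classifiers
    if blockers:
        payload["content_blockers"] = blockers
    return payload
```

For example, `build_features("dual-llm", pii_redaction=True, url_blocker=True)` yields a payload with one classifier and one blocker, matching the dual-LLM example at the top of this page.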

dump_for_headers

dump_for_headers(
    mode: Literal["json_str"] = ..., *, overrides: dict[str, Any] | None = ...
) -> str
dump_for_headers(
    mode: Literal["json"], *, overrides: dict[str, Any] | None = ...
) -> dict
dump_for_headers(
    mode: Literal["json", "json_str"] = "json_str",
    *,
    overrides: dict[str, Any] | None = None,
) -> dict | str

Serialize for use as the X-Features HTTP header value.

Parameters:

  • mode

    (Literal['json', 'json_str'], default: 'json_str' ) –

    Output format — "json" for a dict, "json_str" for a JSON string.

  • overrides

    (dict[str, Any] | None, default: None ) –

    Optional dict to deep-merge into the serialized output. Allows adding or overriding fields not defined on the model without loosening Pydantic validation.

Source code in src/sequrity/control/types/headers.py
def dump_for_headers(
    self, mode: Literal["json", "json_str"] = "json_str", *, overrides: dict[str, Any] | None = None
) -> dict | str:
    """Serialize for use as the ``X-Features`` HTTP header value.

    Args:
        mode: Output format — ``"json"`` for a dict, ``"json_str"`` for a JSON string.
        overrides: Optional dict to deep-merge into the serialized output.
            Allows adding or overriding fields not defined on the model
            without loosening Pydantic validation.
    """
    data = self.model_dump(mode="json", exclude_none=True)
    if overrides:
        _deep_merge(data, overrides)
    if mode == "json":
        return data
    elif mode == "json_str":
        return json.dumps(data)
    else:
        raise ValueError(f"Invalid mode: {mode}. Must be 'json' or 'json_str'.")
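The `_deep_merge` helper used for `overrides` is internal and not reproduced above. A typical recursive dict merge with the behavior the docstring describes (override values win; nested dicts are merged key-by-key rather than replaced) looks roughly like this sketch, where the name `deep_merge` is illustrative:

```python
def deep_merge(base: dict, overrides: dict) -> dict:
    """Merge overrides into base in place: nested dicts merge recursively,
    everything else (scalars, lists) is replaced wholesale."""
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(base.get(key), dict):
            deep_merge(base[key], value)
        else:
            base[key] = value
    return base

# Overrides can change existing fields or add ones not defined on the model.
data = {"agent_arch": "single-llm",
        "content_classifiers": [{"name": "toxicity_filter", "threshold": 0.5}]}
deep_merge(data, {"agent_arch": "dual-llm", "experimental_flag": True})
```

This is why `overrides` can add fields that `extra: forbid` would reject at model construction time: the merge happens after validation, on the plain dict produced by `model_dump`.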

TaggerConfig pydantic-model

Configuration for a content classifier.

Attributes:

  • name (ContentClassifierName) –

    Classifier identifier.

  • threshold (float) –

    Detection sensitivity threshold (0.0-1.0).

  • mode (str | None) –

    Optional mode that overrides threshold (e.g., "high sensitivity", "strict", "low sensitivity", "normal").

JSON schema:
{
  "additionalProperties": false,
  "description": "Configuration for a content classifier.\n\nAttributes:\n    name: Classifier identifier.\n    threshold: Detection sensitivity threshold (0.0-1.0).\n    mode: Optional mode that overrides threshold (e.g., \"high sensitivity\", \"strict\", \"low sensitivity\", \"normal\").",
  "properties": {
    "name": {
      "description": "Classifier identifier.",
      "enum": [
        "pii_redaction",
        "toxicity_filter",
        "healthcare_topic_guardrail",
        "finance_topic_guardrail"
      ],
      "title": "Name",
      "type": "string"
    },
    "threshold": {
      "default": 0.5,
      "description": "Threshold for the tagger.",
      "maximum": 1.0,
      "minimum": 0.0,
      "title": "Threshold",
      "type": "number"
    },
    "mode": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "Optional mode that overrides threshold (e.g., 'high sensitivity', 'strict', 'low sensitivity', 'normal').",
      "title": "Mode"
    }
  },
  "required": [
    "name"
  ],
  "title": "TaggerConfig",
  "type": "object"
}

Config:

  • extra: forbid

Fields:

name pydantic-field

name: ContentClassifierName

Classifier identifier.

threshold pydantic-field

threshold: float = 0.5

Detection sensitivity threshold (0.0-1.0).

mode pydantic-field

mode: str | None = None

Optional mode that overrides threshold (e.g., 'high sensitivity', 'strict', 'low sensitivity', 'normal').
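Pydantic enforces these constraints at parse time (`extra: forbid`, the enum of classifier names, and the 0.0-1.0 threshold bounds). To make the schema concrete, a hand-rolled validator for the same rules would check the following; this is shown purely for illustration and is not part of the SDK:

```python
# Classifier names from the TaggerConfig schema's "name" enum.
ALLOWED_CLASSIFIERS = {
    "pii_redaction", "toxicity_filter",
    "healthcare_topic_guardrail", "finance_topic_guardrail",
}

def validate_tagger(cfg: dict) -> dict:
    """Reject unknown keys, unknown names, and out-of-range thresholds,
    mirroring what the TaggerConfig schema enforces."""
    unknown = set(cfg) - {"name", "threshold", "mode"}
    if unknown:
        raise ValueError(f"extra fields forbidden: {unknown}")
    if cfg.get("name") not in ALLOWED_CLASSIFIERS:
        raise ValueError(f"unknown classifier: {cfg.get('name')!r}")
    threshold = cfg.get("threshold", 0.5)
    if not 0.0 <= threshold <= 1.0:
        raise ValueError(f"threshold out of range: {threshold}")
    return cfg
```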

ConstraintConfig pydantic-model

Configuration for a content blocker.

Attributes:

  • name (ContentBlockerName) –

    Blocker identifier ("url_blocker" or "file_blocker").

JSON schema:
{
  "additionalProperties": false,
  "description": "Configuration for a content blocker.\n\nAttributes:\n    name: Blocker identifier (\"url_blocker\" or \"file_blocker\").",
  "properties": {
    "name": {
      "description": "Blocker identifier ('url_blocker' or 'file_blocker').",
      "enum": [
        "url_blocker",
        "file_blocker"
      ],
      "title": "Name",
      "type": "string"
    }
  },
  "required": [
    "name"
  ],
  "title": "ConstraintConfig",
  "type": "object"
}

Config:

  • extra: forbid

Fields:

  • name (ContentBlockerName)

name pydantic-field

name: ContentBlockerName

Blocker identifier ('url_blocker' or 'file_blocker').