The New Threat Vector
Deploying a read-only chatbot internally carries relatively low risk. But the moment you grant an AI agent the ability to *write* to your database, *issue* refunds, or *provision* infrastructure, the security paradigm changes entirely.
If an attacker can manipulate an LLM's prompt via indirect injection (e.g., hiding malicious instructions in a document the agent is instructed to summarize), they can potentially hijack the agent to execute unauthorized actions on their behalf.
Agentic Zero Trust
At BasaltHQ, security is not an afterthought; it is the foundational layer of our entire infrastructure stack. We operate on a model of Agentic Zero Trust.
Strict Tool Permissioning
Agents in the BasaltHQ ecosystem do not have blanket access to APIs. Access is strictly scoped using granular Role-Based Access Control (RBAC). An agent deployed for customer support can issue refunds up to $50, but any refund above that threshold automatically triggers a human-in-the-loop approval workflow.Cryptographic Provenance
Every action taken by an agent is cryptographically signed and logged to an immutable ledger. When an auditor asks *why* an action was taken, they can trace the exact prompt, the specific LLM version, the context window, and the resulting API payload.Semantic Firewalls
Traditional Web Application Firewalls (WAFs) look for SQL injection patterns. BasaltHQ employs Semantic Firewalls that analyze the *intent* of an agent's proposed action before it is executed. If an agent suddenly attempts to query the entirety of the employee salary table, the semantic firewall intercepts and blocks the anomaly.
Integrating with Onyx
For enterprises in highly regulated sectors, we pair this infrastructure with BASALTONYX, our dedicated legal and compliance engine. Onyx continuously monitors agent activity against a matrix of regulatory requirements (GDPR, SOC2, HIPAA), ensuring that autonomous actions never breach compliance boundaries.
Security is the prerequisite for scale. With BasaltHQ, you can deploy agentic AI with the confidence that your operations remain mathematically secure.

