Security & Guardrails

Powerful Hallucination Prevention

In financial applications, generative AI must never guess. If data is not present in the primary sources, the AI must fail gracefully. We built three layers of systemic guardrails to prevent mathematical and factual hallucinations.

1. The "Strict Grounding" Policy

The final synthesis prompt explicitly forbids the LLM from drawing on its pretrained GPT-4 weights for financial data: the model is ordered to answer only from the retrieved context.
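The exact production prompt is not reproduced here, so the following is an illustrative sketch in our own wording of what such a strict-grounding instruction can look like. The template placeholders and helper name are assumptions, not the real implementation.

```python
# Illustrative only: the wording below is ours, not the production prompt.
STRICT_GROUNDING_PROMPT = """\
You are a financial assistant. Answer ONLY using the data inside the
<context> block below. Never use your pretrained knowledge for any
price, figure, or company fact. If the context does not contain the
answer, reply exactly:
"I do not have enough information available in the internal sources to answer this."

<context>
{context}
</context>

Question: {question}
"""

def build_prompt(context: str, question: str) -> str:
    """Fill the grounding template with retrieved context and the user question."""
    return STRICT_GROUNDING_PROMPT.format(context=context, question=question)
```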

2. Sanitized Entity Extraction (Anti-SQL Injection)

Instead of allowing the LLM to write raw SQL against the Supabase database (which would expose us to SQL injection and other serious security flaws), the Python backend uses the LLM solely to extract the entity name.

The extracted string is then safely passed to Supabase's native ORM via a parameterized Python ilike() query.
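A minimal sketch of that pattern, assuming a hypothetical `companies` table with a `name` column (the real schema is not shown in this document). The LLM-extracted string is first escaped so `ILIKE` wildcards in user input are matched literally, then passed through the client's parameterized `ilike()` filter; no SQL string is ever concatenated by hand.

```python
def escape_like_wildcards(entity: str) -> str:
    """Escape characters with special meaning in SQL (I)LIKE patterns,
    so a user-supplied entity name is matched literally."""
    return (entity.replace("\\", "\\\\")
                  .replace("%", "\\%")
                  .replace("_", "\\_"))

def lookup_company(supabase_client, entity: str):
    """Look up a company by the LLM-extracted entity name using the
    parameterized ilike() filter (table/column names are assumptions)."""
    pattern = f"%{escape_like_wildcards(entity)}%"
    return (supabase_client.table("companies")
                           .select("*")
                           .ilike("name", pattern)
                           .execute())
```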

3. The "Data Not Available" Fallback

If the user asks for the price of a fictional company (e.g. "Stark Industries") or a company whose region was not ingested (e.g. "Where is Marks & Spencer from?"), the SQL query returns an empty result set. The LLM reads an empty context block and triggers the systemic fallback rule, gracefully replying:

"I do not have enough information available in the internal sources to answer this."