Prompt Engineering

The Hybrid Orchestrator

Standard AI agents typically query either SQL or Vector DBs, but rarely fuse the two gracefully. Our custom Python backend acts as a two-stage Hybrid Orchestrator, driven by system prompt engineering.

Stage 1: Safe Text-to-SQL

Preventing SQL Injections

Instead of blindly asking an LLM to generate raw SELECT * FROM ... queries (which exposes the backend to prompt injection and severe hallucination risks), we use a system-level Guardrail Prompt:

"Analyze the following question: '...'. Extract ONLY the company name or financial ticker mentioned. If none is mentioned, answer 'NONE'."

The Python backend then sanitizes the extracted entity and queries Supabase through standard ORM-style filter methods (ilike), so raw user text is never interpolated into a SQL string.
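A sketch of the sanitize-then-query step. The table and column names (`reports`, `company`) are illustrative, not the real schema; the `ilike` filter builder shown is the supabase-py client API, which constructs the filter server-side rather than concatenating SQL:

```python
import re

def sanitize_entity(entity: str) -> str:
    """Keep only characters plausible in a company name or ticker.

    Strips ilike wildcards (% and _) and anything else an extracted
    entity should never contain before it reaches the query builder.
    """
    return re.sub(r"[^A-Za-z0-9 .&-]", "", entity).strip()

def fetch_company_row(supabase, entity: str):
    """Look up the entity in Supabase via a parameterized ilike filter.

    `supabase` is an initialized supabase-py client; table/column
    names here are placeholders for the real schema.
    """
    safe = sanitize_entity(entity)
    return (
        supabase.table("reports")
        .select("*")
        .ilike("company", f"%{safe}%")
        .execute()
    )
```

Even with the client doing parameterization, the allow-list regex is cheap insurance against wildcard abuse (a bare `%` would otherwise match every row).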

Stage 2: The Final Synthesis

Dynamic Context Injection

Once we have both the raw SQL row (e.g., Apple's target price) and the raw semantic chunks (e.g., geopolitical macro risks), we inject them together into a strict synthesis prompt.

Dynamically Extracted Context: {context_text}

Critical Rules:
1. You MUST answer ONLY using the provided Extracted Context. Do not invent financial math...
2. If there is not enough information... firmly declare: "I do not have enough information"
3. YOUR FINAL RESPONSE MUST ALWAYS BE 100% IN ENGLISH.
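The synthesis prompt above can be assembled from a template that takes the fused context as its only variable; the rule text below quotes the excerpt verbatim, including its elisions:

```python
SYNTHESIS_TEMPLATE = """Dynamically Extracted Context: {context_text}

Critical Rules:
1. You MUST answer ONLY using the provided Extracted Context. Do not invent financial math...
2. If there is not enough information... firmly declare: "I do not have enough information"
3. YOUR FINAL RESPONSE MUST ALWAYS BE 100% IN ENGLISH.
"""

def build_synthesis_prompt(context_text: str) -> str:
    """Inject the fused context into the strict synthesis template."""
    return SYNTHESIS_TEMPLATE.format(context_text=context_text)
```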