AIGQLUnify • Architecture patent pending
Your REST · Our Graph
AI • Policy-Aware • In-Path

AI that adds real-time intelligence to your data — and explains why.

AIGQLUnify doesn’t guess against a blank schema. It reads your OpenAPI files, builds a graph, then lets AI propose joins, SDL patches, and GraphQL queries — all visible and overridable in the UI. Every AI call is downstream of the policy decision point (PDP), so prompts and outputs never bypass policy.

1. AI only runs when PDP says it can.

AI helpers in AIGQLUnify are just another consumer of the governed graph. Before an AI resolver runs, the data plane calls your PDP with the same subject, resource, and context used for normal queries. If the PDP obligations say features.ai = false, the AI resolver simply does not execute.

  • Obligations-driven: PDP returns features.ai and a masked selection set.
  • No side doors: AI never sees fields that PDP removed or masked.
  • Auditable: PDP decision + AI call share one trace / span tree.
PDP: gate AI with obligations
# PDP evaluates whether AI is allowed for this query
curl -sS -X POST https://<cp-host>/pdp/decision.v2 \
  -H "content-type: application/json" \
  -d '{
    "tenant":"t_demo",
    "workspace":"ws_primary",
    "action":"read",
    "resource":{"type":"GraphQuery","name":"orders"},
    "context":{
      "role":"analyst",
      "selection":["orders.id","orders.total","orders.userEmail"],
      "client":"console",
      "useAI":true
    }
  }' | jq

AIGQLUnify uses allowFields, mask, and obligations.features.ai to decide which fields exist in the AI plan and whether AI runs at all.
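As a sketch of that gating logic, the data plane's check reduces to reading obligations.features.ai from the decision. The payload shape below is illustrative only — the field names are assumptions, not the authoritative decision.v2 response schema:

```shell
# Illustrative PDP decision payload; field names are assumptions,
# not the authoritative decision.v2 response schema.
cat > decision.json <<'EOF'
{
  "allow": true,
  "allowFields": ["orders.id", "orders.total"],
  "mask": ["orders.userEmail"],
  "obligations": { "features": { "ai": true } }
}
EOF

# Gate the AI resolver on obligations.features.ai
ai_allowed=$(jq -r '.obligations.features.ai // false' decision.json)
if [ "$ai_allowed" = "true" ]; then
  echo "AI resolver may run"
else
  echo "AI resolver skipped by policy"
fi
```

Fields outside allowFields never reach the AI plan; fields under mask reach it only in masked form.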

2. Natural language → GraphQL, over the same governed graph.

NL → GraphQL → PDP → AI
# 1) User types: "Show risky orders from last week."
# 2) Modeling helper proposes a GraphQL query plan.
# 3) Data plane calls PDP. Only if features.ai = true:
#    - execute GraphQL selection
#    - run AI summarizer on the masked result.

curl -sS https://<dp-host>/graphql \
  -H "content-type: application/json" \
  -H "x-tenant-id: t_demo" \
  -H "x-workspace-id: ws_primary" \
  -H "authorization: Bearer <jwt>" \
  -d '{
    "query": "query AskAIOverOrders($prompt: String!) { askAI { riskyOrdersSummary(prompt: $prompt) { text traceId usedFields } } }",
    "variables": {
      "prompt": "Summarize the riskiest orders from last week."
    }
  }' | jq

If PDP returns features.ai = false, askAI short-circuits with a policy error and emits a span so you can see who asked and why it was denied.
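As a sketch, that denial surfaces as a standard GraphQL error. The extensions fields below are illustrative, not the product's exact error contract:

```json
{
  "data": { "askAI": null },
  "errors": [
    {
      "message": "askAI denied by policy: features.ai = false",
      "path": ["askAI"],
      "extensions": {
        "code": "POLICY_DENIED",
        "traceId": "4f2b9c1d7a3e4f12"
      }
    }
  ]
}
```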

Modeling helpers do three things:

  • Read OpenAPI and suggest joins and SDL patches.
  • Turn natural language into a candidate GraphQL query, scoped to PDP-allowed fields.
  • Optionally run post-query analysis (summaries, clustering, comparisons) on the masked result set.

In every case, they operate on the same shape and the same decisions as your regular GraphQL clients. No secret data paths, no “AI shadow API.”
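For example, a join suggestion might arrive as a small SDL patch like the one below. The type and field names are illustrative assumptions, not AIGQLUnify's actual schema:

```graphql
# Illustrative only: type and field names are assumptions.
# Proposed join: orders-api Order.userId -> users-api User.id
extend type Order {
  user: User
}
```

Because the patch is plain SDL, you can review, edit, or reject it in the UI before it touches the graph.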

3. AI that’s traceable and DSAR-friendly.

AI outputs carry a traceId that ties them back to:

  • the original GraphQL query and PDP decision,
  • the masked response that fed the prompt,
  • and any DSAR (data subject access request) exports that later touch the same subject.

That means “what data fed this AI answer?” goes from hand-wavy to one search in your tracing backend.

One trace for query, policy, and AI
# Find the AI span by traceId
# (traceId is returned in askAI.riskyOrdersSummary.traceId)

# Example Jaeger / OpenTelemetry search
traceId="4f2b9c1d7a3e4f12"
open "https://<jaeger-host>/trace/$traceId"

Because AI runs in-path rather than as a sidecar, DSAR and policy logs already include the context for what each AI call saw and why it was allowed.