What this builds
The agentic_credit example is a complete loan approval system
that shows exactly where a language model fits inside a structured workflow
engine — and, equally important, where it does not.
The final program runs three normal applicants and one suspicious high-value application through a BPMN loan process, detects the unusual process path using geometric memory, then opens a CMMN adaptive fraud investigation case. At every step where a human needs guidance, a language response is generated by the Noetic bridge — without any network call, GGUF model file, or paid API.
- Alice Martin — €25 000, low risk → auto-approved
- Bob Chen — €50 000, medium risk → routed to manual review
- Clara Diallo — €12 000, low risk → auto-approved
- Dave Anomaly — €500 000, risk assessment bypassed → unusual pattern score 1.0 → fraud case opened
Two-layer architecture
The system is split into two completely independent layers. Neither layer knows about the internal implementation of the other.
Layer A — Process orchestration (BPMN 2.0 · DMN 1.3 · CMMN 1.1 · ArchiMate 4.0)
- credit_policy.dmn — DMN 1.3 credit acceptance decision table
- risk_policy.dmn — DMN 1.3 risk escalation decision table
- loan_approval.bpmn — BPMN 2.0 loan approval process (6 tasks, 1 XOR gateway)
- fraud_investigation.cmmn — CMMN 1.1 adaptive case (2 stages, 3 milestones, sentries)
- ea_model.archimate — ArchiMate 4.0 enterprise architecture (Strategy → Technology)
Layer B — Noetic language intelligence (zero external dependencies)
- NoeticBridge — credit-domain CorpusRetriever (38 KA-pairs)
- PATH A — hashProjectEmbed + cosine-sort + synthesise
- No GGUF model · No external API · No network · Deterministic output
The key insight is the separation of concerns: Layer A is responsible for what the process does and whether it is structurally correct. Layer B is responsible only for producing human-readable text at the four points where a person needs to act or understand a decision.
You can swap either layer independently. Replace the NoeticBridge with a call to a real LLM API without touching a single line of BPMN. Replace the BPMN model with a different process without touching the language layer.
The four integration points:
- Reviewer brief: BPMN manual_review UserTask — natural-language brief for the loan officer.
- Fraud narrative: unusual pattern score > 0.5 — human-readable investigation summary.
- DMN explanation: credit_policy output — plain-language compliance note for the audit record.
- CMMN instruction: blocking CMMN task — task instruction shown to the human worker.
Prerequisites
- Go 1.22 or later
- The ideaswave.com/qubit/core module (this repository)
Clone and verify the build:
git clone https://github.com/yourorg/qubit-core-v1.0
cd qubit-core-v1.0
go build ./examples/agentic_credit/ # must produce no errors
go run ./examples/agentic_credit/ # should complete in <100ms
Layer A — DMN decision tables
Two DMN 1.3 tables drive the credit logic. They are pure data — any
business analyst can open credit_policy.dmn in Camunda Modeler
or any DMN-compatible editor, change a threshold, save, and redeploy.
No Go code changes required.
credit_policy.dmn
Evaluates identity_verified (bool) and risk_score
(0.0–1.0) and emits decision: approved |
review_required | rejected.
Hit policy is FIRST: rules are evaluated top-to-bottom and
the first match wins.
<?xml version="1.0" encoding="UTF-8"?>
<definitions xmlns="https://www.omg.org/spec/DMN/20191111/MODEL/"
id="def_credit_policy" name="Credit Acceptance Policy"
namespace="http://qubit.ideaswave.com/dmn">
<decision id="credit_policy" name="Credit Acceptance Policy">
<decisionTable hitPolicy="FIRST">
<input label="Identity Verified">
<inputExpression typeRef="xs:boolean"><text>identity_verified</text></inputExpression>
</input>
<input label="Risk Score">
<inputExpression typeRef="xs:decimal"><text>risk_score</text></inputExpression>
</input>
<output name="decision" typeRef="xs:string"/>
<rule id="r1"><!-- Identity not verified → always reject -->
<inputEntry><text>== false</text></inputEntry>
<inputEntry><text>-</text></inputEntry>
<outputEntry><text>"rejected"</text></outputEntry>
</rule>
<rule id="r2"><!-- Low risk → auto-approve -->
<inputEntry><text>== true</text></inputEntry>
<inputEntry><text>&lt; 0.3</text></inputEntry>
<outputEntry><text>"approved"</text></outputEntry>
</rule>
<rule id="r3"><!-- Medium risk → human review -->
<inputEntry><text>== true</text></inputEntry>
<inputEntry><text>&lt; 0.7</text></inputEntry>
<outputEntry><text>"review_required"</text></outputEntry>
</rule>
<rule id="r4"><!-- High risk → reject -->
<inputEntry><text>== true</text></inputEntry>
<inputEntry><text>>= 0.7</text></inputEntry>
<outputEntry><text>"rejected"</text></outputEntry>
</rule>
</decisionTable>
</decision>
</definitions>
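The FIRST hit policy can be mirrored in plain Go. This is a minimal sketch of the table's semantics, not engine code — the function name evalCreditPolicy is illustrative:

```go
package main

import "fmt"

// evalCreditPolicy mirrors credit_policy.dmn under the FIRST hit policy:
// rules are checked top-to-bottom and the first match wins.
func evalCreditPolicy(identityVerified bool, riskScore float64) string {
	switch {
	case !identityVerified: // r1: identity not verified → always reject
		return "rejected"
	case riskScore < 0.3: // r2: low risk → auto-approve
		return "approved"
	case riskScore < 0.7: // r3: medium risk → human review
		return "review_required"
	default: // r4: high risk → reject
		return "rejected"
	}
}

func main() {
	fmt.Println(evalCreditPolicy(true, 0.15)) // approved
	fmt.Println(evalCreditPolicy(true, 0.5))  // review_required
}
```

Note how rule order carries meaning: a risk score of 0.5 matches both "< 0.7" and "not < 0.3", and FIRST resolves the overlap in favour of the earlier rule.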
risk_policy.dmn
Takes the decision output from credit_policy and
produces two additional fields used downstream: risk_level
(critical | high | medium | low) and escalate_to_compliance
(bool). These two tables chain — the output of one is input to the other.
Both tables are parsed with bpm.ParseDMNXML() and deployed to the same
DecisionEngine instance. They are evaluated sequentially within the
assess_risk service task, and the variable map is merged after each
evaluation so the second table always sees the first table's output.
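The chaining amounts to a fold over a shared variable map. A hedged sketch, assuming each table is modelled as a function from variables to outputs (evaluateChained and the closures are illustrative, not the DecisionEngine API):

```go
package main

import "fmt"

// A decision table, modelled as a function from the variable map to its outputs.
type table func(vars map[string]any) map[string]any

// evaluateChained merges each table's output back into the shared map,
// so a later table always sees an earlier table's output.
func evaluateChained(vars map[string]any, tables ...table) map[string]any {
	for _, t := range tables {
		for k, v := range t(vars) {
			vars[k] = v
		}
	}
	return vars
}

func main() {
	creditPolicy := func(v map[string]any) map[string]any {
		if s := v["risk_score"].(float64); s >= 0.3 && s < 0.7 {
			return map[string]any{"decision": "review_required"}
		}
		return map[string]any{"decision": "approved"}
	}
	riskPolicy := func(v map[string]any) map[string]any {
		// Reads the first table's output from the merged map.
		if v["decision"] == "review_required" {
			return map[string]any{"risk_level": "medium"}
		}
		return map[string]any{"risk_level": "low"}
	}
	out := evaluateChained(
		map[string]any{"identity_verified": true, "risk_score": 0.5},
		creditPolicy, riskPolicy)
	fmt.Println(out["decision"], out["risk_level"]) // review_required medium
}
```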
Layer A — BPMN loan approval process
The BPMN process is defined in loan_approval.bpmn — a standard
BPMN 2.0 XML file that opens in Camunda Modeler, Signavio, or any
BPMN-compatible editor. At deployment, Priostack compiles it into an
execution graph. Every flow guarantee in the BPMN diagram is enforced at
runtime — the engine will not advance past a gateway unless the condition
is satisfied.
<?xml version="1.0" encoding="UTF-8"?>
<definitions xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL"
targetNamespace="http://qubit.ideaswave.com/bpmn"
id="def_loan_approval_bpmn">
<process id="loan_approval_bpmn" name="Loan Approval" isExecutable="true">
<startEvent id="start"/>
<serviceTask id="validate_identity" name="Validate Identity"/>
<serviceTask id="assess_risk" name="Assess Risk"/>
<serviceTask id="approve_loan" name="Approve Loan"/>
<userTask id="manual_review" name="Manual Review"/>
<serviceTask id="notify_applicant" name="Notify Applicant"/>
<exclusiveGateway id="gw_credit_decision" name="Credit Decision Gateway"/>
<endEvent id="end"/>
<sequenceFlow id="f1" sourceRef="start" targetRef="validate_identity"/>
<sequenceFlow id="f2" sourceRef="validate_identity" targetRef="assess_risk"/>
<sequenceFlow id="f3" sourceRef="assess_risk" targetRef="gw_credit_decision"/>
<sequenceFlow id="f_approve" sourceRef="gw_credit_decision" targetRef="approve_loan">
<conditionExpression>decision == "approved"</conditionExpression>
</sequenceFlow>
<sequenceFlow id="f_review" sourceRef="gw_credit_decision" targetRef="manual_review">
<conditionExpression>decision != "approved"</conditionExpression>
</sequenceFlow>
<sequenceFlow id="f5a" sourceRef="approve_loan" targetRef="notify_applicant"/>
<sequenceFlow id="f5b" sourceRef="manual_review" targetRef="notify_applicant"/>
<sequenceFlow id="f6" sourceRef="notify_applicant" targetRef="end"/>
</process>
</definitions>
The gateway uses a FEEL condition (decision == "approved").
Execution can only reach approve_loan when that condition is
true — the structural guarantee holds even if a worker passes incorrect
variables.
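The routing decision at the gateway reduces to one comparison. A toy sketch of the semantics (chooseFlow is a hypothetical name; the real engine compiles the condition from the BPMN XML):

```go
package main

import "fmt"

// chooseFlow sketches the XOR gateway: exactly one outgoing flow is taken,
// decided by the condition over the process variables.
func chooseFlow(vars map[string]any) string {
	if d, _ := vars["decision"].(string); d == "approved" {
		return "f_approve"
	}
	return "f_review" // anything else — including a missing or wrongly typed value
}

func main() {
	fmt.Println(chooseFlow(map[string]any{"decision": "approved"}))        // f_approve
	fmt.Println(chooseFlow(map[string]any{"decision": "review_required"})) // f_review
	fmt.Println(chooseFlow(map[string]any{}))                              // f_review
}
```

The third call shows the structural guarantee in miniature: a worker that forgets to set `decision` cannot accidentally route an application to approval.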
Process flow
StartEvent
│
▼
validate_identity (ServiceTask) — sets identity_verified=true
│
▼
assess_risk (ServiceTask) — evaluates credit_policy + risk_policy DMN
│ calls noetic.ExplainDecision() ← Integration point 3
▼
gw_credit_decision (XOR Gateway) — FEEL condition: decision == "approved"
│ │
▼ ▼
approve_loan manual_review (UserTask) ← Integration point 1
│ │
└───────────┬───────────┘
▼
notify_applicant (ServiceTask)
│
▼
EndEvent
Running an instance
runLoanInstance() drives the process step by step.
Each step advances one node in the execution graph. As it runs, geometric
cell IDs accumulate into a trace that is later used for pattern analysis.
trace, err := runLoanInstance(
ctx,
processEngine,
dmnEngine,
pnet, idx, condIdx,
newLoanItem("Alice Martin", 25000),
false, // skipRisk=false → normal path
noetic,
)
The skipRisk=true flag simulates Dave's anomalous instance,
where the risk assessment task completes without evaluating the DMN tables.
This produces a process path that diverges from all normal paths, which the
pattern analysis engine detects.
Layer A — CMMN fraud investigation case
CMMN (Case Management Model and Notation) is the right tool for the fraud investigation because, unlike BPMN, it imposes no fixed sequence. Stages activate on sentries — conditions that watch for milestones completed in other stages.
<?xml version="1.0" encoding="UTF-8"?>
<definitions xmlns="http://www.omg.org/spec/CMMN/20151109/MODEL"
targetNamespace="http://qubit.ideaswave.com/cmmn">
<case id="fraud_investigation_case" name="Fraud Investigation Case">
<casePlanModel id="cpm_fraud" name="Fraud Case Plan">
<!-- Stage 1: gather evidence (human + decision tasks) -->
<planItem id="stage_fraud" definitionRef="stage_fraud_def"/>
<!-- Stage 2: only starts after evidence_collected milestone OCCURS -->
<planItem id="stage_compliance" definitionRef="stage_compliance_def">
<entryCriterion sentryRef="sentry-crit_compliance_entry"/>
</planItem>
<stage id="stage_fraud_def" name="Fraud Investigation" autoComplete="true">
<!-- Integration point 4 -->
<planItem id="collect_evidence" name="Collect Evidence"
definitionRef="collect_evidence_def"/>
<planItem id="evidence_collected" name="Evidence Collected"
definitionRef="evidence_collected_def">
<entryCriterion><planItemOnPart sourceRef="collect_evidence">
<standardEvent>complete</standardEvent>
</planItemOnPart></entryCriterion>
</planItem>
</stage>
<stage id="stage_compliance_def" name="Compliance Review" autoComplete="true">
<!-- Integration point 4 -->
<planItem id="legal_review" name="Legal Review"
definitionRef="legal_review_def"/>
<planItem id="compliance_cleared" name="Compliance Cleared"
definitionRef="compliance_cleared_def">
<entryCriterion><planItemOnPart sourceRef="legal_review">
<standardEvent>complete</standardEvent>
</planItemOnPart></entryCriterion>
</planItem>
</stage>
</casePlanModel>
</case>
</definitions>
The Compliance Review stage does not start until the
evidence_collected milestone occurs. This is enforced by the
CMMN sentry mechanism, not by application code. The engine won't even
activate legal_review until the sentry fires, regardless of what
variables the worker sets.
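The mechanism can be pictured as a tiny state machine. A hypothetical sketch, assuming the sentry tracks which lifecycle events it has seen (the type and event names are illustrative, not the engine's):

```go
package main

import "fmt"

// sentry fires only once every required lifecycle event has occurred.
type sentry struct {
	requires []string        // e.g. "evidence_collected:occur"
	occurred map[string]bool // events observed so far
}

func (s *sentry) notify(event string) { s.occurred[event] = true }

func (s *sentry) satisfied() bool {
	for _, r := range s.requires {
		if !s.occurred[r] {
			return false
		}
	}
	return true
}

func main() {
	entry := &sentry{
		requires: []string{"evidence_collected:occur"},
		occurred: map[string]bool{},
	}
	fmt.Println(entry.satisfied()) // false — Compliance Review stays dormant
	entry.notify("evidence_collected:occur")
	fmt.Println(entry.satisfied()) // true — the stage may now activate
}
```

Because activation is driven by observed events rather than by variables, a worker cannot force the Compliance Review stage open by writing into the case data.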
Layer A — ArchiMate enterprise model
The ea_model.archimate file captures the why and the
who of the loan origination system in ArchiMate 4.0 — the open
Enterprise Architecture modelling standard. Open it in
Archi
to navigate from strategy goals down to the technology layer.
The model is organised into five folders that correspond to ArchiMate viewpoints:
<archimate:model name="qubit-core ArchiMate Model" version="4.0">
<folder name="Strategy">
<element type="Resource" name="Human Underwriter Pool"/>
<element type="CourseOfAction" name="Accelerate Digital Lending"/>
<element type="ValueStream" name="Loan Origination Value Stream"/>
</folder>
<folder name="Motivation">
<element type="Stakeholder" name="Customer"/>
<element type="Stakeholder" name="Regulatory Authority"/>
<element type="Driver" name="Customer Experience Demand"/>
<element type="Driver" name="Regulatory Compliance Pressure"/>
<element type="Goal" name="Approve 90% of clean applications within 48 hours"/>
<element type="Goal" name="Detect fraudulent applications with ≥99% accuracy"/>
<element type="Constraint" name="Identity Verification Mandatory"/>
</folder>
<folder name="Business">
<element type="BusinessRole" name="Loan Officer"/>
<element type="BusinessRole" name="Risk Analyst"/>
<element type="BusinessRole" name="Fraud Investigator"/>
<element type="BusinessRole" name="Compliance Officer"/>
<element type="BusinessProcess" name="Loan Approval"/> <!-- realises loan_approval.bpmn -->
<element type="BusinessProcess" name="Fraud Investigation"/>
</folder>
<folder name="Application">
<element type="ApplicationComponent" name="Priostack Process Engine"/>
<element type="ApplicationComponent" name="Noetic Language Layer"/>
<element type="DataObject" name="Loan Application"/>
<element type="DataObject" name="Audit Record"/>
</folder>
<folder name="Technology">
<element type="SystemSoftware" name="Go Runtime"/>
<element type="TechnologyService" name="BPMN Engine"/>
<element type="TechnologyService" name="DMN Engine"/>
<element type="TechnologyService" name="CMMN Engine"/>
</folder>
</archimate:model>
The ArchiMate model documents compliance requirements (GDPR, AML, KYC) as Constraint elements linked to process steps, making it easy for auditors to trace every regulatory obligation back to the BPMN or DMN artefact that enforces it.
Open examples/e2e/models/ea_model.archimate in Archi and switch to the
Viewpoints tab to see the motivation-to-technology traceability diagram.
Layer A — Unusual pattern detection
After all four instances run, each one has left a sequence of execution
step IDs (cell IDs) that describe the exact path it took through the process.
These sequences are passed to trajectoryAnalysis().
The three normal traces (Alice, Bob, Clara) are registered as the reference corpus. Dave's trace is then scored against that corpus using geometric distance in step-ID space. A score of 0.0 means the path is indistinguishable from normal; 1.0 means it is maximally different.
// Phase 6 — compare Dave's path against the normal corpus
anomalyScore, frechetDist := trajectoryAnalysis(ctx, wrapper, allTraces)
// Dave's path bypassed assess_risk, producing cell IDs [15–19]
// Normal paths use IDs [0–14] — completely disjoint
// anomalyScore = 1.0000 (maximally unusual)
// frechetDist = 48.0208 (how far the paths diverge)
if anomalyScore > 0.5 {
narrative := noetic.AnomalyNarrative(ctx,
"Dave Anomaly", 500000, anomalyScore, frechetDist)
// → open CMMN fraud investigation case
}
Dave's trace produces cell IDs [15, 16, 17, 18, 19]. The normal
traces use IDs in the range [0, 14]. The paths are
completely disjoint, so the unusual pattern score is 1.0 and
the path divergence is 48.02.
Layer B — NoeticBridge
NoeticBridge is the language intelligence layer. It is a
self-contained Go struct that owns a bridgeCorpusRetriever
pre-warmed with 38 credit-domain knowledge–answer pairs (KA-pairs).
Language generation uses PATH A from the Noetic module: pure-Go cosine retrieval with no model file, no IPC, and no network. Every word is projected to a deterministic 64-dimensional unit vector via FNV64a hashing. A query is the centroid of its key-token vectors. The top-4 corpus entries by cosine similarity are merged into a response.
// Creating and warming the bridge takes <1ms
noetic := NewNoeticBridge(ctx)
// noetic.Len() == 38
// Each of the four methods builds a targeted prompt from structured data
// and retrieves the best-matching KA-pairs from the corpus
brief := noetic.ReviewerBrief(ctx, "Bob Chen", 50000, 0.5, "review_required")
// → "manual review applicant loan amount risk score required officer verify
// documentation income…"
Why pure-Go cosine retrieval is sufficient here
The prompts and corpus entries are structured — the prompt for a reviewer brief always contains applicant, amount, risk score, and decision. The corpus entries are authored to contain exactly those tokens. Cosine similarity in the 64-D hash-projected space reliably identifies the correct KA-pairs for each use case. The output is not poetic prose — it is a token-joined summary of the best-matching knowledge, which is exactly what a compliance audit record needs.
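A deterministic hash-projected embedding of this kind can be sketched as follows. This assumes an FNV64a-seeded Gaussian projection — the actual hashProjectEmbed may use a different projection, but the key properties (deterministic, 64-D, unit length) are the same:

```go
package main

import (
	"fmt"
	"hash/fnv"
	"math"
	"math/rand"
)

// embed maps a word to a deterministic 64-dimensional unit vector:
// the FNV64a hash of the word seeds a PRNG whose draws become components.
func embed(word string) []float64 {
	h := fnv.New64a()
	h.Write([]byte(word))
	rng := rand.New(rand.NewSource(int64(h.Sum64())))
	v := make([]float64, 64)
	var norm float64
	for i := range v {
		v[i] = rng.NormFloat64()
		norm += v[i] * v[i]
	}
	norm = math.Sqrt(norm)
	for i := range v {
		v[i] /= norm
	}
	return v
}

// cosine of two unit vectors is just their dot product.
func cosine(a, b []float64) float64 {
	var dot float64
	for i := range a {
		dot += a[i] * b[i]
	}
	return dot
}

func main() {
	// Same word → same vector, every run, on every machine.
	fmt.Println(cosine(embed("risk"), embed("risk")) > 0.999) // true
	// Distinct words land at distinct directions in 64-D space.
	fmt.Println(cosine(embed("risk"), embed("applicant")) < 0.999)
}
```

Because there is no learned model, identical inputs always retrieve identical KA-pairs — which is what makes the bridge's output reproducible in an audit.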
When you replace NoeticBridge with a call to a real LLM, the
four method signatures stay identical. The upgrade is a drop-in.
Integration point 1 — Reviewer brief
BPMN
Fires inside simulateLoanWorker() when the current task is
manual_review. Bob's application hits this path because his risk
score (0.5) triggers the review_required DMN rule.
case "manual_review":
applicant, _ := vars["applicant"].(string)
amount, _ := item.Variables["amount"].(float64)
riskScore, _ := vars["risk_score"].(float64)
decision, _ := vars["decision"].(string)
brief := noetic.ReviewerBrief(ctx, applicant, amount, riskScore, decision)
fmt.Printf("🧠 Noetic [brief]: %s\n", brief)
item.Variables["reviewer_brief"] = brief // stored in audit trail
The brief is stored in item.Variables["reviewer_brief"]. Via the
Priostack operate API this variable is visible to any monitoring dashboard
that queries GET /api/v1/process-instances/{key}.
Integration point 2 — Fraud narrative
Pattern
Fires in main.go after trajectoryAnalysis() returns.
The unusual pattern score and path divergence measure are forwarded directly
to Noetic so the narrative is grounded in the actual detection numbers.
anomalyScore, frechetDist := trajectoryAnalysis(ctx, wrapper, allTraces)
narrative := noetic.AnomalyNarrative(ctx,
"Dave Anomaly", 500000, anomalyScore, frechetDist)
fmt.Printf("🧠 Noetic [fraud narrative]:\n%s\n", narrative)
anomalousItem.Variables["fraud_narrative"] = narrative
This is the only integration point whose firing depends on a numerical threshold from Layer A; all the others fire unconditionally when the relevant process step is reached.
Integration point 3 — DMN explanation
DMN
Fires inside simulateLoanWorker() immediately after the
credit_policy and risk_policy tables have been
evaluated. The explanation is generated from the structured DMN output, not
from BPMN state.
case "assess_risk":
// … run DMN tables …
decision, _ := vars["decision"].(string)
riskLevel, _ := vars["risk_level"].(string)
escalate, _ := vars["escalate_to_compliance"].(bool)
explanation := noetic.ExplainDecision(ctx, decision, riskLevel, escalate)
fmt.Printf("🧠 Noetic [DMN]: %s\n", explanation)
item.Variables["dmn_explanation"] = explanation
Integration point 4 — CMMN task instruction
CMMN
Fires inside runFraudCase() before each blocking task is handed
to the simulated worker. In production this text would be displayed in the
Priostack tasklist UI.
for _, task := range bpm.ActiveCaseTasks(marking, waitIdx) {
instruction := noetic.CaseTaskInstruction(ctx, task.JobType, vars)
fmt.Printf("🧠 Noetic [task]: %s\n", instruction)
// Store for tasklist display
vars["task_instruction_"+task.JobType] = instruction
simulateFraudWorker(vars, task)
marking[task.ResumePlace] = 1
}
ActiveCaseTasks() returns only the tasks whose sentry conditions
have fired and which currently hold a token. If the Compliance Review stage
hasn't started yet (because the evidence_collected milestone hasn't occurred),
legal_review will not appear in the list — no instruction will be
generated for it, because the human shouldn't be thinking about it yet.
Run the example
cd qubit-core-v1.0
go run ./examples/agentic_credit/
The example builds and runs with a single command. All models are loaded
from examples/e2e/models/ at startup — no extra binary required.
Reading the output
Every line prefixed with 🧠 Noetic is a language-layer output.
Every other line is a structural engine event. The two streams are
intentionally interleaved to show exactly where language generation happens
inside the process flow.
Extending it
Replace Noetic with a real LLM
The four public methods on NoeticBridge each take a
context.Context as their first argument precisely so you can add
a timeout for a remote call. Swap the implementation:
// Replace generate() in bridge.go with an OpenAI / Ollama / Anthropic call
func (b *NoeticBridge) generate(prompt string, maxTokens int) string {
resp, _ := openaiClient.CreateCompletion(ctx, openai.CompletionRequest{
Model: "gpt-4o-mini",
Prompt: prompt,
MaxTokens: maxTokens,
})
return resp.Choices[0].Text
}
Add your own BPMN process
Deploy any BPMN 2.0 XML to the Priostack API:
curl -X POST https://priostack.com/api/v1/process-definitions \
-H "X-API-Key: $YOUR_KEY" \
-H "Content-Type: application/xml" \
--data-binary @your-process.bpmn
Add Noetic integration points by registering job workers that call the language layer before completing the job:
// Poll for manual_review jobs
jobs := activateJobs("manual_review", 10)
for _, job := range jobs {
brief := noetic.ReviewerBrief(ctx,
job.Variables["applicant"].(string),
job.Variables["amount"].(float64),
job.Variables["risk_score"].(float64),
job.Variables["decision"].(string),
)
completeJob(job.Key, map[string]interface{}{"reviewer_brief": brief})
}
Grow the pattern corpus from production data
In production, register each completed instance's path into the corpus after it finishes, not just at test time. As the corpus grows the unusual pattern score distribution tightens and the false-positive rate drops. Read the geometric memory deep-dive for a full technical walkthrough.
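A sketch of that feedback loop, using a toy novelty score (the corpus type and method names are illustrative):

```go
package main

import "fmt"

// corpus accumulates the cell-ID paths of completed process instances.
type corpus struct{ seen map[int]bool }

func newCorpus() *corpus { return &corpus{seen: map[int]bool{}} }

// register folds a completed instance's path into the reference set.
func (c *corpus) register(trace []int) {
	for _, id := range trace {
		c.seen[id] = true
	}
}

// score is the fraction of novel cell IDs — a toy unusual-pattern score.
func (c *corpus) score(trace []int) float64 {
	if len(trace) == 0 {
		return 0
	}
	novel := 0
	for _, id := range trace {
		if !c.seen[id] {
			novel++
		}
	}
	return float64(novel) / float64(len(trace))
}

func main() {
	c := newCorpus()
	c.register([]int{0, 1, 2, 3, 4})
	rare := []int{0, 1, 7, 8, 4} // a legitimate but rare variant
	fmt.Println(c.score(rare))   // 0.4 — looks somewhat unusual at first
	c.register(rare)             // after it completes cleanly, fold it in
	fmt.Println(c.score(rare))   // 0 — no longer a false positive
}
```

Each legitimate variant that completes cleanly and is registered stops scoring as novel, which is exactly how the false-positive rate falls over time.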