The moat shows up where the queue changes, not where the demo shines.
Read the scenes first. Then test the same business pattern against shared rows, proof cues, and a visible hold rule.
AI margin stays where the system changes the operating queue
The durable AI business usually sees proprietary operating data, changes the next decision, and writes the result back into the workflow before the metric moves.
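As a minimal sketch of that loop (every name here is hypothetical and illustrative, not from the source): score each queue item on a proprietary signal, apply a visible hold rule when confidence is low, and write the decision back into the workflow rather than into a side report.

```python
# Illustrative only: sees operating data, changes the next decision,
# writes back into the workflow. Names and thresholds are assumptions.

def next_action(row, confidence_threshold=0.8):
    """Decide for one queue item, or hold when the model is unsure."""
    score = row["model_score"]           # proprietary signal, assumed precomputed
    if score < confidence_threshold:     # the visible hold rule
        return {"id": row["id"], "action": "hold"}
    return {"id": row["id"], "action": "approve" if row["ok"] else "reject"}

queue = [
    {"id": 1, "model_score": 0.95, "ok": True},
    {"id": 2, "model_score": 0.55, "ok": True},   # low confidence -> held
    {"id": 3, "model_score": 0.90, "ok": False},
]

# The write-back is the point: decisions land in the operating queue itself,
# not in a dashboard someone re-enters by hand.
decisions = [next_action(row) for row in queue]
```

The hold branch is what separates this from a demo: the system changes the queue only where it has standing to, and visibly declines everywhere else.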
The comparison layer keeps the same six questions visible while the examples change.
Challenge the thesis before you read anything deeper.
Activate one module, read the spotlight, then open the drawers only if you need more proof.
What queue changes, what system gets updated, and what metric moves if this actually works?
If the answer is still abstract after the scenes, the rows, and this board, the moat claim is probably too soft.
The supporting intelligence engine matters because it knows when to hold a decision, not just when to push one through.
Recurring automation memory matters here because the thesis was built from patterns that persisted, not from one loud week of launches.
Useful software can still lose on economics. If the decision still has to be re-entered into another system, the incumbent rail often keeps the durable margin, even when the new interface feels smarter.