NORNR
Mandates, approvals and evidence for autonomous agents.
Guide / LangChain · 10 minutes
How to give your LangChain agent a budget in 10 minutes
Add a governed wallet, budget threshold and decision handling to a LangChain workflow with NORNR in 10 minutes.
1. Why this guide matters
LangChain makes it easy to compose prompts, tools and retrieval. Once those flows start touching paid APIs or downstream vendors, the missing piece is usually not capability. It is spend control.
This guide adds a NORNR wallet in front of the billable step so the workflow gets a budget, an approval threshold and a readable decision trail.
2. Install what you need
pip install agentpay langchain langchain-openai
This guide uses the hosted NORNR path at https://nornr.com, so you can validate the decision flow without standing up the full local stack first.
3. Create the governed wallet
from agentpay import Wallet

wallet = Wallet.create(
    owner="research-agent",
    daily_limit=50,
    require_approval_above=20,
    base_url="https://nornr.com",
)
This wallet is the mandate. It sets the budget and review threshold before the framework-specific workflow is allowed to continue.
4. Apply it in the workflow
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

decision = wallet.pay(
    amount=12.50,
    to="openai",
    purpose="model inference",
)

if decision.get("status") == "approved":
    prompt = ChatPromptTemplate.from_messages(
        [("system", "You are a concise research assistant."), ("human", "{question}")]
    )
    chain = prompt | ChatOpenAI(model="gpt-4o-mini")
    result = chain.invoke({"question": "Summarize the latest SOC 2 requirements."})
    print(result.content)
elif decision.get("status") == "queued":
    print("Approval required before the workflow can continue.", decision)
else:
    print("Spend blocked by policy.", decision)
The key pattern stays the same across frameworks: ask NORNR for a decision first, then let the expensive or externally billable step run only if policy says yes.
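If you want that pattern reusable across more than one billable step, you can factor it into a small helper. The sketch below is framework-agnostic and hypothetical: `run_if_policy_allows` is not a NORNR API, and it only relies on the decision dict shape shown above (a `status` key of approved, queued or rejected).

```python
from typing import Any, Callable, Optional

def run_if_policy_allows(
    get_decision: Callable[[], dict],
    run_step: Callable[[], Any],
) -> tuple[str, Optional[Any]]:
    """Ask for a policy decision first; run the billable step only on approval.

    Returns (status, result); result is None unless status == "approved".
    Both callables are placeholders for wallet.pay(...) and the chain invocation.
    """
    decision = get_decision()
    status = decision.get("status", "rejected")
    if status == "approved":
        return status, run_step()
    # "queued" and "rejected" both stop the step; callers handle them differently.
    return status, None

# Example with a stubbed decision instead of a live wallet.pay() call:
status, result = run_if_policy_allows(
    lambda: {"status": "approved", "requiresApproval": False},
    lambda: "model output",
)
```

Passing callables keeps the helper lazy: the expensive step is never constructed or invoked unless the policy answer comes back approved.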
5. What to expect
- approved means the workflow can continue immediately inside its mandate.
- queued means the request crossed an approval threshold and should wait for review.
- rejected means policy did not allow the action to proceed.
That three-way split is what makes the pattern useful: low-risk work stays fast, higher-risk work becomes reviewable, and clearly out-of-policy work never leaves the workflow.
6. When to use this pattern
- your LangChain workflow already touches paid APIs
- you want a budget before you redesign the whole stack
- you need a clear reviewed state for larger requests
7. What the output should look like
{'status': 'approved', 'requiresApproval': False}
{'status': 'queued', 'requiresApproval': True}
{'status': 'rejected', 'reasons': ['counterparty_not_allowed']}
You do not need the exact same payload shape everywhere. What matters is that the workflow can clearly distinguish approved, queued and rejected outcomes and persist the decision context.
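One way to keep that distinction explicit, whatever the raw payload looks like, is to normalize it into a three-value type at the boundary. This is a minimal sketch, not part of the NORNR SDK; the enum and helper names are illustrative.

```python
from enum import Enum

class SpendDecision(Enum):
    APPROVED = "approved"
    QUEUED = "queued"
    REJECTED = "rejected"

def normalize_decision(payload: dict) -> SpendDecision:
    """Map a raw decision payload onto the three outcomes the workflow cares about.

    Unknown or missing statuses fall through to REJECTED, the safe default:
    a spend the workflow cannot classify should never execute.
    """
    status = str(payload.get("status", "")).lower()
    try:
        return SpendDecision(status)
    except ValueError:
        return SpendDecision.REJECTED

outcome = normalize_decision({"status": "queued", "requiresApproval": True})
```

Downstream code then branches on the enum instead of string-matching raw payloads in several places.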
8. Live proof and operator reality
- The same approved, queued and rejected model already runs in the hosted quickstart.
- The control room already exposes approval and evidence states after a governed run.
- You can start with one wallet in front of one paid model call and expand later.
The point of this pattern is not just better code structure. It is a workflow that operators can actually inspect, explain and intervene in when a request leaves its normal mandate.
9. Common mistakes
- Adding a budget check after the model call instead of before it.
- Using one global limit without a separate approval threshold.
- Letting retry loops call the model again without re-checking the mandate.
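The retry mistake in particular is easy to make, because most retry wrappers only see the model call. A sketch of the safer shape, with stubs standing in for `wallet.pay(...)` and the chain invocation (both callables are illustrative, not NORNR API):

```python
import time

def call_with_recheck(pay, run_model, attempts=3, backoff=0.0):
    """Ask for a fresh policy decision before every attempt, not just the first.

    The budget may have drained between retries, so each loop iteration
    re-checks the mandate before the model is called again.
    """
    last_decision = None
    for attempt in range(attempts):
        last_decision = pay()  # fresh decision on every retry
        if last_decision.get("status") != "approved":
            return last_decision, None
        try:
            return last_decision, run_model()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(backoff * (attempt + 1))
    return last_decision, None

# Stubbed run: the model fails once, then succeeds; pay() is consulted twice.
calls = {"pay": 0}
def fake_pay():
    calls["pay"] += 1
    return {"status": "approved"}

steps = iter([RuntimeError("transient"), "model output"])
def fake_model():
    step = next(steps)
    if isinstance(step, Exception):
        raise step
    return step

decision, result = call_with_recheck(fake_pay, fake_model)
```

The stub run shows the property you want: two model attempts mean two mandate checks, so a retry storm cannot quietly outspend the wallet.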
10. When not to use this pattern
- you only need to cap monthly usage after the fact, not govern live calls
- your workflow never touches billable or externally committed steps
- you are still deciding whether LangChain is even the right runtime
11. What this replaces and what it does not
- OpenAI account quotas cap usage globally, but they do not let this workflow decide approved vs queued in real time.
- Prompt instructions can ask the agent to be frugal, but they do not create operator-reviewable state.
- Manual spend review catches issues later, but it does not prevent the specific paid call from executing.
12. Implementation checklist
- Put the NORNR decision right before the paid model call.
- Keep the amount estimate explicit so operators know what the workflow asked for.
- Handle queued and rejected separately instead of treating both as generic failure.
- Log the decision payload next to the LangChain run output.
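For the last checklist item, one lightweight option is to bundle the decision payload and the run output into a single record before it goes to whatever log store you already use. The field names below are illustrative, not a NORNR-mandated schema.

```python
import json
import time

def decision_record(decision: dict, run_output: str) -> str:
    """Serialize the policy decision next to the LangChain run output as one
    JSON line, ready to append to a JSONL log or ship to a log pipeline."""
    return json.dumps({
        "logged_at": time.time(),
        "decision": decision,
        "output": run_output,
    })

line = decision_record(
    {"status": "approved", "requiresApproval": False},
    "SOC 2 summary...",
)
```

Keeping both halves in one record means an operator reviewing a run never has to join the spend decision back onto the model output by hand.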
13. Where to go next
These are the adjacent guides most likely to help you turn this guide into a real rollout path instead of a one-off demo. They start from the same control problem.
- How to add approval rules to an OpenAI Agents SDK workflow: add approval thresholds to an OpenAI Agents SDK workflow so expensive or risky steps pause before money moves.
- LangGraph, 12 minutes. How to gate paid tool calls in LangGraph: gate paid LangGraph tool calls with NORNR so graph edges only reach expensive actions after policy says yes.
- Research agents, 9 minutes. How to govern OpenAI API spend for research agents: put OpenAI API spend for research agents behind budget, approval and evidence controls with NORNR.