NORNR — Mandates, approvals and evidence for autonomous agents.
How to add spend control to ChatGPT agents
Intercept ChatGPT agent tool calls with a NORNR approval gate before paid actions execute — using the Responses API alongside wallet.pay().
1. Why ChatGPT agent tool calls need a spend gate
The OpenAI Responses API makes it straightforward to give a ChatGPT agent a set of tools — functions it can call autonomously to complete tasks. Some of those tools involve real-world actions: sending emails, making API calls to paid services, triggering purchases, or provisioning cloud resources. Once the model decides to use a tool, there is no built-in mechanism to pause and verify that the intended spend is within policy before the action fires.
NORNR provides that interception point. By inserting a wallet.pay() call between the moment the model requests a tool and the moment your code executes it, you get a policy evaluation, a decision record, and a human escalation path — all before any money moves. The tool definition schema you provide to OpenAI stays exactly the same; NORNR only affects what happens on your server when a tool call arrives.
2. Agent loop structure with the governance gate
The standard Responses API agentic loop looks like this: send a message, receive a response, check for tool calls, execute them, send results back. NORNR slots into the "execute them" step:
- Send message — user query or agent-initiated task
- Receive response — the model may include one or more tool_call events
- For each tool_call: call wallet.pay() — evaluate the spend intent against policy
- On approved — execute the tool, collect the result
- On queued — pause the loop, notify the reviewer, await webhook
- On rejected — return an error result to the model without executing the tool
- Send tool results back — continue the agent loop with approved results only
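The three decision branches above reduce to a small dispatch. As a sketch, assuming wallet.pay() returns a decision object with a status field as shown in the later sections (the Decision dataclass here is a stand-in, not the NORNR SDK type):

```python
from dataclasses import dataclass

@dataclass
class Decision:
    """Stand-in for the decision record wallet.pay() returns."""
    status: str            # "approved" | "queued" | "rejected"
    id: str = "dec_123"
    reasons: tuple = ()

def loop_action(decision: Decision) -> str:
    """Map a policy decision to the agent loop's next move."""
    if decision.status == "approved":
        return "execute_tool"             # run the action, collect the result
    if decision.status == "queued":
        return "pause_and_notify"         # notify the reviewer, await webhook
    return "return_error_to_model"        # rejected: the tool never runs
```

The model only ever sees results from the first branch; the other two surface as structured errors.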
3. Installing NORNR and creating a wallet
Install both SDKs in the same environment. Create the NORNR wallet once and store the ID as an environment variable. Set allowed_counterparties to the specific vendors your tools call so that any unexpected spend destination is blocked at the policy level without additional code changes.
pip install openai nornr
import nornr, os

nornr.api_key = os.environ["NORNR_API_KEY"]

wallet = nornr.Wallet.create(
    owner="chatgpt-research-agent",
    daily_limit=50.00,
    require_approval_above=10.00,
    allowed_counterparties=["openai", "stripe", "sendgrid"],
)
print(wallet.id)  # save as NORNR_WALLET_ID
4. Intercepting tool_call results and gating with wallet.pay()
The key is to intercept every tool call before your execution handler runs the actual action. The pattern below shows a generic tool dispatcher that checks NORNR first. You define the cost estimate per tool in a lookup table so the governance layer always works with a concrete amount rather than an unknown variable.
import openai, nornr, json, os

openai.api_key = os.environ["OPENAI_API_KEY"]
nornr.api_key = os.environ["NORNR_API_KEY"]
wallet = nornr.Wallet(os.environ["NORNR_WALLET_ID"])

# Estimated cost and counterparty per tool name
TOOL_POLICY = {
    "send_email": {"amount": 0.01, "to": "sendgrid"},
    "charge_card": {"amount": 50.0, "to": "stripe"},
    "run_web_search": {"amount": 0.05, "to": "openai"},
}

def governed_dispatch(tool_name: str, tool_args: dict) -> str:
    policy = TOOL_POLICY.get(tool_name, {"amount": 1.0, "to": "unknown"})
    # Gate: call NORNR before executing the tool
    decision = wallet.pay(
        amount=policy["amount"],
        to=policy["to"],
        purpose=f"Tool call: {tool_name}",
    )
    if decision.status == "approved":
        return execute_tool(tool_name, tool_args)
    elif decision.status == "queued":
        notify_reviewer(decision.id, tool_name)
        return json.dumps({"error": "spend_queued", "decision_id": decision.id})
    else:  # rejected
        return json.dumps({"error": "spend_rejected", "reasons": decision.reasons})
5. Returning the gated result back to the Responses API
The tool result — whether it is the actual output or an error response from a queued or rejected decision — is returned to the model exactly the same way as any other tool result. The model receives the error message and can either stop, ask for clarification, or try an alternative approach. This keeps the agent loop intact while preventing unauthorized spend from reaching real vendors.
client = openai.OpenAI()

def run_agent(user_message: str):
    input_items = [{"role": "user", "content": user_message}]
    while True:
        response = client.responses.create(
            model="gpt-4o",
            input=input_items,
            tools=TOOLS,  # standard tool definitions, unchanged
        )
        # Keep the model's output (including any function calls) in context
        input_items += response.output
        tool_calls = [item for item in response.output if item.type == "function_call"]
        if not tool_calls:
            return response.output_text
        # Process each tool call through the governance gate
        for call in tool_calls:
            result = governed_dispatch(call.name, json.loads(call.arguments))
            input_items.append({
                "type": "function_call_output",
                "call_id": call.call_id,
                "output": result,
            })
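When a decision comes back queued, the loop above returns a structured error and the real action waits for the reviewer. A minimal resume path, assuming a webhook payload carrying decision_id and status (the exact field names NORNR sends are an assumption here — adjust to the actual payload), with queued calls recorded in a pending map:

```python
import json

# decision_id -> (tool_name, tool_args), recorded when a call is queued
PENDING = {}

# Stand-in for the real dispatcher from section 4
execute_tool = lambda name, args: f"executed {name}"

def handle_decision_webhook(payload: dict) -> str:
    """Resume or cancel a queued tool call once the reviewer decides.

    Assumed payload shape: {"decision_id": ..., "status": ...}.
    """
    decision_id = payload["decision_id"]
    pending = PENDING.pop(decision_id, None)
    if pending is None:
        return json.dumps({"error": "unknown_decision", "decision_id": decision_id})
    tool_name, tool_args = pending
    if payload["status"] == "approved":
        return execute_tool(tool_name, tool_args)  # reviewer approved: run it now
    return json.dumps({"error": "spend_rejected", "decision_id": decision_id})
```

Wire this into whatever HTTP framework serves your webhook endpoint; the important part is that the decision_id ties the reviewer's verdict back to the exact tool call that was paused.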
6. Common mistakes
- Gating after tool execution. The call to wallet.pay() must happen before execute_tool(). Checking after the fact creates an audit record but does not prevent the spend.
- Returning a hard exception on queued. A queued result should be communicated to the model as a structured response so it can pause gracefully. Raising an unhandled exception crashes the agent loop instead of routing cleanly to a human reviewer.
- Not including the decision ID in the tool result. When a decision is queued, the decision_id is the only link between the reviewer's action in the control room and your agent's resume path. Always include it.
- Using a single wallet for all agents. Different agents have different risk profiles. Separate wallets make it straightforward to attribute spend, apply different approval thresholds, and audit each agent independently.