Python SDK
Drop-in wrappers for Anthropic, OpenAI, and xAI that capture every call to your TokenID dashboard.
Installation
```shell
pip install --extra-index-url https://tokenid:@pypi.token.audit.id/simple/ tokenid
```
What gets captured
| Field | Description |
|---|---|
| `provider` | `anthropic`, `openai`, or `xai` |
| `model` | Exact model string from the response |
| `input_tokens` | Prompt tokens |
| `output_tokens` | Completion tokens |
| `cache_read` | Anthropic prompt cache read tokens |
| `cache_creation` | Anthropic prompt cache creation tokens |
| `cost_usd` | Computed from the live rate card at call time |
| `session_id` | Your identifier; groups calls into sessions |
| `tool_name` / `tool_input` / `tool_output` | Captured per `tool_use` and `tool_result` block |
| `duration_ms` | Wall-clock latency |
| `stop_reason` | `end_turn`, `max_tokens`, etc. |
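To make the schema concrete, here is a hypothetical captured record. The field names come from the table above; the values and the per-million-token rates are illustrative placeholders, not real pricing:

```python
# Hypothetical example of a single captured record.
# Field names match the table above; values are made up.
record = {
    "provider": "anthropic",
    "model": "claude-opus-4-7",
    "input_tokens": 1200,
    "output_tokens": 350,
    "cache_read": 0,
    "cache_creation": 0,
    "session_id": "checkout-flow-42",
    "duration_ms": 1840,
    "stop_reason": "end_turn",
}

# cost_usd is computed from a rate card (USD per million tokens).
# These rates are placeholders, not actual provider pricing.
RATE_IN, RATE_OUT = 15.0, 75.0
record["cost_usd"] = round(
    record["input_tokens"] / 1_000_000 * RATE_IN
    + record["output_tokens"] / 1_000_000 * RATE_OUT,
    6,
)
print(record["cost_usd"])
```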
Wrappers
The SDK ships one wrapper per provider:

- `JournaledAnthropic` for Anthropic
- `JournaledOpenAI` for OpenAI
- `JournaledGrok` for xAI
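The wrapper pattern itself is easy to sketch. The class below is not the TokenID implementation; it is a minimal illustration, built on a stub client, of how a drop-in wrapper can delegate every call unchanged and journal the usage on the way out:

```python
import time

class StubMessages:
    """Stands in for a provider's messages API in this sketch."""
    def create(self, **kwargs):
        return {"model": kwargs.get("model"),
                "usage": {"input_tokens": 3, "output_tokens": 5}}

class StubClient:
    def __init__(self):
        self.messages = StubMessages()

class JournaledMessages:
    """Delegates to the inner messages API and records each call."""
    def __init__(self, inner, journal):
        self._inner, self._journal = inner, journal

    def create(self, **kwargs):
        start = time.monotonic()
        response = self._inner.create(**kwargs)  # original call, unchanged
        self._journal.append({
            "model": response["model"],
            "input_tokens": response["usage"]["input_tokens"],
            "output_tokens": response["usage"]["output_tokens"],
            "duration_ms": int((time.monotonic() - start) * 1000),
        })
        return response

class JournaledStubClient:
    """Drop-in wrapper: same surface as StubClient, plus a journal."""
    def __init__(self, inner):
        self.journal = []
        self.messages = JournaledMessages(inner.messages, self.journal)

client = JournaledStubClient(StubClient())
client.messages.create(model="stub-model", max_tokens=16)
print(client.journal[0]["output_tokens"])  # 5
```

Because the wrapper exposes the same attribute surface as the client it wraps, existing call sites need no changes.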
Streaming
All wrappers support streaming responses. Usage is captured from the final `message_delta` event.
```python
with client.messages.stream(
    model="claude-opus-4-7",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello"}],
) as stream:
    for text in stream.text_stream:
        print(text, end="", flush=True)
```
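Capturing usage from a stream works because Anthropic-style streams end with a `message_delta` event carrying the final output token count. The sketch below simulates that event sequence with plain dicts; the event shapes are simplified for illustration:

```python
# Simulated Anthropic-style stream events (simplified shapes).
events = [
    {"type": "message_start", "usage": {"input_tokens": 12}},
    {"type": "content_block_delta", "text": "Hel"},
    {"type": "content_block_delta", "text": "lo"},
    {"type": "message_delta", "usage": {"output_tokens": 7}},
]

usage = {}
for event in events:
    if event["type"] == "message_start":
        usage["input_tokens"] = event["usage"]["input_tokens"]
    elif event["type"] == "message_delta":  # final event: output totals
        usage["output_tokens"] = event["usage"]["output_tokens"]

print(usage)  # {'input_tokens': 12, 'output_tokens': 7}
```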
Error handling
If the ingest call to TokenID fails, the original API call still succeeds and your code runs normally. TokenID never blocks your critical path.
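The non-blocking guarantee can be sketched as fire-and-forget journaling: the ingest runs only after the provider response is already in hand, and any ingest failure is swallowed. The ingest function here is a stand-in that always fails, to show the caller is unaffected:

```python
def failing_ingest(record):
    # Stand-in for the TokenID ingest call; always fails in this sketch.
    raise ConnectionError("ingest endpoint unreachable")

def journaled_call(api_call, ingest=failing_ingest):
    response = api_call()  # the real provider call happens first
    try:
        ingest({"model": response["model"]})  # best-effort journaling
    except Exception:
        pass  # never block the critical path on ingest errors
    return response

result = journaled_call(lambda: {"model": "stub-model", "text": "ok"})
print(result["text"])  # ok
```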