Install the MarginDash SDK for your language:

```bash
npm install margindash
```
Track events using the MarginDash TypeScript SDK:

```ts
import { MarginDash } from "margindash";
import OpenAI from "openai";

const md = new MarginDash({ apiKey: "YOUR_API_KEY_HERE" }); // Get from Settings page
const openai = new OpenAI();

const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello!" }],
});

// No prompts or responses ever leave your servers
md.addUsage({
  vendor: "openai",
  model: response.model, // "gpt-4o"
  inputTokens: response.usage!.prompt_tokens, // 1200
  outputTokens: response.usage!.completion_tokens, // 340
});

md.track({
  customerId: user.id, // 8291
  eventType: "summarize",
  revenueAmountInCents: 500,
});

// Events are flushed automatically in the background.
// Before your process exits:
await md.shutdown();
```
Privacy first: Only the model name and token counts are sent to MarginDash — no request or response content ever leaves your infrastructure. Pass the vendor name and usage via addUsage(), and MarginDash calculates cost from your configured vendor rates. For agent sessions with multiple AI calls, call addUsage() once per call, then track() once to attach them all.
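Since MarginDash computes cost from per-token vendor rates, the underlying arithmetic is a rate-times-tokens sum. Here is a minimal sketch of that calculation; the rate values and helper name below are illustrative placeholders, not MarginDash's actual configuration:

```typescript
// Hypothetical per-million-token rates, for illustration only
const RATES: Record<string, { inputPerMTok: number; outputPerMTok: number }> = {
  "gpt-4o": { inputPerMTok: 2.5, outputPerMTok: 10 },
};

// Estimate the cost in USD of a single AI call
function estimateCostUsd(model: string, inputTokens: number, outputTokens: number): number {
  const rate = RATES[model];
  if (!rate) throw new Error(`no rate configured for ${model}`);
  return (
    (inputTokens / 1_000_000) * rate.inputPerMTok +
    (outputTokens / 1_000_000) * rate.outputPerMTok
  );
}

console.log(estimateCostUsd("gpt-4o", 1200, 340)); // ~0.0064 USD
```

In practice you never do this math yourself; you send token counts and MarginDash applies the rates you configured in the dashboard.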
All fields accepted by track() and addUsage():

track():

| Field | Type | Required | Description |
|---|---|---|---|
| customerId | string | yes | Your customer identifier. Auto-creates the customer if it doesn’t exist. |
| eventType | string | no | Category for this event, max 255 characters (default: ai_request) |
| revenueAmountInCents | number | no | Revenue for this event in cents, 0–10,000,000, e.g. 1500 = $15.00 (default: 0) |
| occurredAt | string | no | ISO 8601 timestamp, e.g. 2025-06-15T14:30:00Z. Must be within the last 90 days and no more than 1 hour in the future. (default: current time) |

addUsage():

| Field | Type | Required | Description |
|---|---|---|---|
| vendor | string | yes | AI vendor name, e.g. openai, anthropic, google. See all vendors and model slugs below. |
| model | string | yes | AI model slug, e.g. gpt-4o, claude-4-opus. See all vendors and model slugs below. |
| inputTokens | number | yes | Input/prompt token count, 0–100,000,000. At least one of inputTokens or outputTokens must be > 0. |
| outputTokens | number | yes | Output/completion token count, 0–100,000,000 |
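The occurredAt window (no older than 90 days, no more than 1 hour in the future) can be pre-checked before sending to avoid rejected events. A small sketch of that check; the helper name is ours, not part of the SDK, and the server validates regardless:

```typescript
// Check a timestamp against the documented occurredAt window:
// no older than 90 days, no more than 1 hour in the future.
function isValidOccurredAt(iso: string, now: Date = new Date()): boolean {
  const t = new Date(iso).getTime();
  if (Number.isNaN(t)) return false; // not a parseable ISO 8601 timestamp
  const ninetyDaysMs = 90 * 24 * 60 * 60 * 1000;
  const oneHourMs = 60 * 60 * 1000;
  return t >= now.getTime() - ninetyDaysMs && t <= now.getTime() + oneHourMs;
}
```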
The following are the valid slugs for the vendor and model fields. You can also fetch this list programmatically via GET /api/v1/models.
jamba-1-5-large
jamba-1-5-mini
jamba-1-6-large
jamba-1-6-mini
jamba-1-7-large
jamba-1-7-mini
jamba-reasoning-3b
qwen1.5-110b-chat
qwen2-5-coder-32b-instruct
qwen2-5-coder-7b-instruct
qwen2.5-32b-instruct
qwen2-5-72b-instruct
qwen-2-5-max
qwen-turbo
qwen2-72b-instruct
qwen3-0.6b-instruct
qwen3-0.6b-instruct-reasoning
qwen3-14b-instruct
qwen3-14b-instruct-reasoning
qwen3-1.7b-instruct
qwen3-1.7b-instruct-reasoning
qwen3-235b-a22b-instruct-2507
qwen3-235b-a22b-instruct-2507-reasoning
qwen3-235b-a22b-instruct
qwen3-235b-a22b-instruct-reasoning
qwen3-30b-a3b-2507
qwen3-30b-a3b-2507-reasoning
qwen3-30b-a3b-instruct
qwen3-30b-a3b-instruct-reasoning
qwen3-32b-instruct
qwen3-32b-instruct-reasoning
qwen3-4b-2507-instruct
qwen3-4b-2507-instruct-reasoning
qwen3-4b-instruct
qwen3-4b-instruct-reasoning
qwen3-5-397b-a17b-non-reasoning
qwen3-5-397b-a17b
qwen3-8b-instruct
qwen3-8b-instruct-reasoning
qwen3-coder-30b-a3b-instruct
qwen3-coder-480b-a35b-instruct
qwen3-coder-next
qwen3-max
qwen3-max-preview
qwen3-max-thinking
qwen3-max-thinking-preview
qwen3-next-80b-a3b-instruct
qwen3-next-80b-a3b-reasoning
qwen3-omni-30b-a3b-instruct
qwen3-omni-30b-a3b-reasoning
qwen3-vl-235b-a22b-instruct
qwen3-vl-235b-a22b-reasoning
qwen3-vl-30b-a3b-instruct
qwen3-vl-30b-a3b-reasoning
qwen3-vl-32b-instruct
qwen3-vl-32b-reasoning
qwen3-vl-4b-instruct
qwen3-vl-4b-reasoning
qwen3-vl-8b-instruct
qwen3-vl-8b-reasoning
qwen-chat-14b
qwen-chat-72b
qwq-32b
QwQ-32B-Preview
tulu3-405b
molmo2-8b
molmo-7b-d
olmo-2-32b
olmo-2-7b
olmo-3-1-32b-instruct
olmo-3-1-32b-think
olmo-3-32b-think
olmo-3-7b-instruct
olmo-3-7b-think
nova-2-0-lite-reasoning-low
nova-2-0-lite-reasoning-medium
nova-2-0-lite
nova-2-0-omni-reasoning-low
nova-2-0-omni-reasoning-medium
nova-2-0-omni
nova-2-0-pro-reasoning-low
nova-2-0-pro-reasoning-medium
nova-2-0-pro
nova-lite
nova-micro
nova-premier
nova-pro
claude-2
claude-21
claude-3-5-haiku
claude-35-sonnet-june-24
claude-35-sonnet
claude-3-7-sonnet
claude-3-7-sonnet-thinking
claude-3-haiku
claude-3-opus
claude-3-sonnet
claude-4-1-opus
claude-4-1-opus-thinking
claude-4-5-haiku
claude-4-5-haiku-reasoning
claude-4-5-sonnet
claude-4-5-sonnet-thinking
claude-4-opus
claude-4-opus-thinking
claude-4-sonnet
claude-4-sonnet-thinking
claude-instant
claude-opus-4-5
claude-opus-4-5-thinking
claude-opus-4-6-adaptive
claude-opus-4-6
claude-sonnet-4-6-adaptive
claude-sonnet-4-6
claude-sonnet-4-6-non-reasoning-low-effort
ernie-4-5-300b-a47b
ernie-5-0-thinking-preview
doubao-seed-1-8
doubao-seed-code
seed-oss-36b-instruct
command-a
command-r-plus-04-2024
command-r-03-2024
tiny-aya-global
dbrx
cogito-v2-1-reasoning
deepseek-coder-v2
deepseek-coder-v2-lite
deepseek-llm-67b-chat
deepseek-ocr
deepseek-r1
deepseek-r1-qwen3-8b
deepseek-r1-distill-llama-70b
deepseek-r1-distill-llama-8b
deepseek-r1-distill-qwen-14b
deepseek-r1-distill-qwen-1-5b
deepseek-r1-distill-qwen-32b
deepseek-r1-0120
deepseek-v2-5-sep-2024
deepseek-v2-5
deepseek-v2
deepseek-v3-0324
deepseek-v3-1
deepseek-v3-1-reasoning
deepseek-v3-1-terminus
deepseek-v3-1-terminus-reasoning
deepseek-v3-2-0925
deepseek-v3-2-reasoning-0925
deepseek-v3-2
deepseek-v3-2-reasoning
deepseek-v3-2-speciale
deepseek-v3
gemini-1-0-pro
gemini-1-0-ultra
gemini-1-5-flash-8b
gemini-1-5-flash-may-2024
gemini-1-5-flash
gemini-1-5-pro-may-2024
gemini-1-5-pro
gemini-2-0-flash-experimental
gemini-2-0-flash
gemini-2-0-flash-lite-001
gemini-2-0-flash-lite-preview
gemini-2-0-flash-thinking-exp-1219
gemini-2-0-flash-thinking-exp-0121
gemini-2-0-pro-experimental-02-05
gemini-2-5-flash-lite
gemini-2-5-flash-lite-preview-09-2025
gemini-2-5-flash-lite-preview-09-2025-reasoning
gemini-2-5-flash-lite-reasoning
gemini-2-5-flash
gemini-2-5-flash-04-2025
gemini-2-5-flash-reasoning-04-2025
gemini-2-5-flash-preview-09-2025
gemini-2-5-flash-preview-09-2025-reasoning
gemini-2-5-flash-reasoning
gemini-2-5-pro
gemini-2-5-pro-03-25
gemini-2-5-pro-05-06
gemini-3-1-pro-preview
gemini-3-flash
gemini-3-flash-reasoning
gemini-3-pro
gemini-3-pro-low
gemma-3-12b
gemma-3-1b
gemma-3-270m
gemma-3-27b
gemma-3-4b
gemma-3n-e2b
gemma-3n-e4b
gemma-3n-e4b-preview-0520
palm-2
granite-3-3-8b-instruct
granite-4-0-nano-1b
granite-4-0-350m
granite-4-0-h-nano-1b
granite-4-0-h-350m
granite-4-0-h-small
granite-4-0-micro
ling-1t
ling-flash-2-0
ling-mini-2-0
ring-1t
ring-flash-2-0
kimi-k2
kimi-k2-0905
kimi-k2-5-non-reasoning
kimi-k2-5
kimi-k2-thinking
kimi-linear-48b-a3b-instruct
mi-dm-k-2-5-pro-dec28
midm-250-pro-rsnsft
kat-coder-pro-v1
exaone-4-0-1-2b
exaone-4-0-1-2b-reasoning
exaone-4-0-32b
exaone-4-0-32b-reasoning
k-exaone-non-reasoning
k-exaone
lfm2-1-2b
lfm2-2-6b
lfm2-5-1-2b-instruct
lfm2-5-1-2b-thinking
lfm2-5-vl-1-6b
lfm2-8b-a1b
lfm-40b
k2-think-v2
k2-v2
k2-v2-low
k2-v2-medium
llama-2-chat-13b
llama-2-chat-70b
llama-2-chat-7b
llama-3-1-instruct-405b
llama-3-1-instruct-70b
llama-3-1-instruct-8b
llama-3-2-instruct-11b-vision
llama-3-2-instruct-1b
llama-3-2-instruct-3b
llama-3-2-instruct-90b-vision
llama-3-3-instruct-70b
llama-3-instruct-70b
llama-3-instruct-8b
llama-4-maverick
llama-4-scout
llama-65b
phi-3-mini
phi-4
phi-4-mini
phi-4-multimodal
minimax-m1-40k
minimax-m1-80k
minimax-m2
minimax-m2-1
minimax-m2-5
devstral-2
devstral-medium
devstral-small-2
devstral-small
devstral-small-2505
magistral-medium
magistral-medium-2509
magistral-small
magistral-small-2509
ministral-3-14b
ministral-3-3b
ministral-3-8b
mistral-7b-instruct
mistral-large-2407
mistral-large-2
mistral-large-3
mistral-large
mistral-medium
mistral-medium-3
mistral-medium-3-1
mistral-saba
mistral-small-3
mistral-small-3-1
mistral-small-3-2
mistral-small-2402
mistral-small
mistral-8x22b-instruct
mixtral-8x7b-instruct
pixtral-large-2411
motif-2-12-7b
hyperclova-x-seed-think-32b
deephermes-3-llama-3-1-8b-preview
deephermes-3-mistral-24b-preview
hermes-3-llama-3-1-70b
hermes-4-llama-3-1-405b
hermes-4-llama-3-1-405b-reasoning
hermes-4-llama-3-1-70b
hermes-4-llama-3-1-70b-reasoning
llama-3-1-nemotron-instruct-70b
llama-3-1-nemotron-nano-4b-reasoning
llama-3-1-nemotron-ultra-253b-v1-reasoning
llama-3-3-nemotron-super-49b
llama-3-3-nemotron-super-49b-reasoning
llama-nemotron-super-49b-v1-5
llama-nemotron-super-49b-v1-5-reasoning
nvidia-nemotron-3-nano-30b-a3b
nvidia-nemotron-3-nano-30b-a3b-reasoning
nvidia-nemotron-nano-12b-v2-vl
nvidia-nemotron-nano-12b-v2-vl-reasoning
nvidia-nemotron-nano-9b-v2
nvidia-nemotron-nano-9b-v2-reasoning
gpt-35-turbo
gpt-3-5-turbo-0613
gpt-4
gpt-4-1
gpt-4-1-mini
gpt-4-1-nano
gpt-4-5
gpt-4o-2024-08-06
gpt-4o-chatgpt
gpt-4o-chatgpt-03-25
gpt-4o-2024-05-13
gpt-4o-mini
gpt-4o-mini-realtime-dec-2024
gpt-4o
gpt-4o-realtime-dec-2024
gpt-4-turbo
gpt-5-1-codex
gpt-5-1-codex-mini
gpt-5-1
gpt-5-1-non-reasoning
gpt-5-2-codex
gpt-5-2-medium
gpt-5-2-non-reasoning
gpt-5-2
gpt-5-chatgpt
gpt-5-codex
gpt-5
gpt-5-low
gpt-5-medium
gpt-5-mini
gpt-5-minimal
gpt-5-mini-medium
gpt-5-mini-minimal
gpt-5-nano
gpt-5-nano-medium
gpt-5-nano-minimal
gpt-oss-120b
gpt-oss-120b-low
gpt-oss-20b
gpt-oss-20b-low
o1
o1-mini
o1-preview
o1-pro
o3
o3-mini
o3-mini-high
o3-pro
o4-mini
openchat-35
r1-1776
sonar
sonar-pro
sonar-reasoning
sonar-reasoning-pro
intellect-3
reka-flash-3
reka-flash
apriel-v1-5-15b-thinker
apriel-v1-6-15b-thinker
arctic-instruct
step3-vl-10b
falcon-h1r-7b
tri-21b-think-v0-5
tri-21b-think-preview
solar-mini
solar-open-100b-reasoning
solar-pro-2
solar-pro-2-preview
solar-pro-2-preview-reasoning
solar-pro-2-reasoning
grok-1
grok-2-1212
grok-3
grok-3-mini-reasoning
grok-3-reasoning
grok-4
grok-4-1-fast
grok-4-1-fast-reasoning
grok-4-fast
grok-4-fast-reasoning
grok-beta
grok-code-fast-1
grok-voice
mimo-v2-0206
mimo-v2-flash
mimo-v2-flash-reasoning
glm-4-5-air
glm-4.5
glm-4-5v
glm-4-5v-reasoning
glm-4-6
glm-4-6-reasoning
glm-4-6v
glm-4-6v-reasoning
glm-4-7-flash-non-reasoning
glm-4-7-flash
glm-4-7-non-reasoning
glm-4-7
glm-5-non-reasoning
glm-5
Install the MarginDash SDK for your language:

```bash
pip install margindash
```
Track events using the MarginDash Python SDK:

```python
from openai import OpenAI
from margindash import MarginDash

openai = OpenAI()
md = MarginDash(api_key="YOUR_API_KEY_HERE")  # Get from Settings page

response = openai.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)

# No prompts or responses ever leave your servers
md.add_usage(
    vendor="openai",
    model=response.model,  # "gpt-4o"
    input_tokens=response.usage.prompt_tokens,  # 1200
    output_tokens=response.usage.completion_tokens,  # 340
)

md.track(
    customer_id=user.id,  # 8291
    event_type="summarize",
    revenue_amount_in_cents=500,
)

# Events are flushed automatically in the background.
# Before your process exits:
md.shutdown()
```
Privacy first: Only the model name and token counts are sent to MarginDash — no request or response content ever leaves your infrastructure. Pass the vendor name and usage via add_usage(), and MarginDash calculates cost from your configured vendor rates. For agent sessions with multiple AI calls, call add_usage() once per call, then track() once to attach them all.
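The multi-call pattern described above amounts to accumulating one usage record per AI call and attaching them all to a single tracked event. A self-contained sketch of that shape, with plain dicts and local helpers standing in for the SDK's internal buffer (the functions below are illustrative, not the margindash package itself):

```python
# Accumulate one usage record per AI call, then attach them all to one event.
pending_usage = []

def add_usage(vendor, model, input_tokens, output_tokens):
    pending_usage.append({
        "vendor_name": vendor,
        "ai_model_name": model,
        "input_tokens": input_tokens,
        "output_tokens": output_tokens,
    })

def track(customer_id, event_type="ai_request", revenue_amount_in_cents=0):
    # One event carries every usage record accumulated since the last track()
    event = {
        "customer_id": customer_id,
        "event_type": event_type,
        "revenue_amount_in_cents": revenue_amount_in_cents,
        "vendor_responses": list(pending_usage),
    }
    pending_usage.clear()
    return event

# An agent session with two AI calls attached to one tracked event
add_usage("openai", "gpt-4o", 1200, 340)
add_usage("anthropic", "claude-4-sonnet", 800, 200)
event = track("8291", event_type="agent_session", revenue_amount_in_cents=500)
print(len(event["vendor_responses"]))  # 2
```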
All fields accepted by track() and add_usage():

track():

| Field | Type | Required | Description |
|---|---|---|---|
| customer_id | str | yes | Your customer identifier. Auto-creates the customer if it doesn’t exist. |
| event_type | str | no | Category for this event, max 255 characters (default: ai_request) |
| revenue_amount_in_cents | int | no | Revenue for this event in cents, 0–10,000,000, e.g. 1500 = $15.00 (default: 0) |
| occurred_at | str | no | ISO 8601 timestamp, e.g. 2025-06-15T14:30:00Z. Must be within the last 90 days and no more than 1 hour in the future. (default: current time) |

add_usage():

| Field | Type | Required | Description |
|---|---|---|---|
| vendor | str | yes | AI vendor name, e.g. openai, anthropic, google. See all vendors and model slugs below. |
| model | str | yes | AI model slug, e.g. gpt-4o, claude-4-opus. See all vendors and model slugs below. |
| input_tokens | int | yes | Input/prompt token count, 0–100,000,000. At least one of input_tokens or output_tokens must be > 0. |
| output_tokens | int | yes | Output/completion token count, 0–100,000,000 |
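revenue_amount_in_cents takes an integer cent amount, which sidesteps floating-point drift on money. If your internal prices are in dollars, a conversion helper along these lines keeps the value inside the documented 0–10,000,000 range (the helper name is ours, not part of the SDK):

```python
def dollars_to_cents(dollars):
    """Convert a dollar amount to integer cents for revenue_amount_in_cents."""
    cents = round(dollars * 100)
    # Enforce the documented range: 0 to 10,000,000 cents ($100,000)
    if not 0 <= cents <= 10_000_000:
        raise ValueError(f"revenue out of range: {cents} cents")
    return cents

print(dollars_to_cents(15.00))  # 1500
```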
Valid slugs for the vendor and model fields are the same as those listed in the TypeScript section above. You can also fetch the list programmatically via GET /api/v1/models.
Use the REST API to send events from any language. The TypeScript and Python SDKs wrap this endpoint with automatic batching and retries — use the REST API directly for Go, Rust, Java, or any other language.
```
POST https://margindash.com/api/v1/events
```
Pass your API key as a Bearer token in the Authorization header. You can find your API key on the Settings page.
SDKs recommended for TypeScript & Python: The official SDKs (npm install margindash / pip install margindash) handle automatic batching, retries, and graceful shutdown. Use this REST API for languages that don’t have an SDK yet.
Send a single event with one vendor response:

```bash
curl -X POST https://margindash.com/api/v1/events \
  -H "Authorization: Bearer YOUR_API_KEY_HERE" \
  -H "Content-Type: application/json" \
  -d '{
    "events": [
      {
        "customer_id": "12345",
        "event_type": "summarize",
        "vendor_responses": [
          {
            "vendor_name": "openai",
            "ai_model_name": "gpt-4o",
            "input_tokens": 1200,
            "output_tokens": 340
          }
        ]
      }
    ]
  }'
```

Response:

```json
{
  "results": [
    {
      "id": 42,
      "status": "created"
    }
  ]
}
```
The request body is a JSON object with an events array. Each event accepts these fields:
| Field | Type | Required | Description |
|---|---|---|---|
| customer_id | string | yes | Your customer identifier. Auto-creates the customer if it doesn’t exist. |
| event_type | string | no | Category for this event, max 255 characters (default: ai_request) |
| vendor_responses | array | yes | Array of vendor response objects (see below). At least one required. |
| revenue_amount_in_cents | number | no | Revenue for this event in cents, 0–10,000,000, e.g. 1500 = $15.00 (default: 0) |
| occurred_at | string | no | ISO 8601 timestamp, e.g. 2025-06-15T14:30:00Z. Must be within the last 90 days and no more than 1 hour in the future. (default: current time) |
Each object in vendor_responses accepts these fields:

| Field | Type | Required | Description |
|---|---|---|---|
| vendor_name | string | yes | AI vendor name, e.g. openai, anthropic, google |
| ai_model_name | string | yes | AI model slug, e.g. gpt-4o, claude-4-opus |
| input_tokens | number | yes | Input/prompt token count, 0–100,000,000. At least one of input_tokens or output_tokens must be > 0. |
| output_tokens | number | yes | Output/completion token count, 0–100,000,000 |
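When assembling this body by hand in a language without an SDK, the shape is a plain JSON object; the sketch below uses Python purely to illustrate that shape and the documented token rule (the builder function is ours; sending it is just an HTTP POST with the Bearer header):

```python
import json

def build_event(customer_id, vendor_name, ai_model_name, input_tokens, output_tokens,
                event_type="ai_request", revenue_amount_in_cents=0):
    """Build one event object for POST /api/v1/events."""
    # The API requires at least one of the token counts to be > 0
    if input_tokens <= 0 and output_tokens <= 0:
        raise ValueError("at least one of input_tokens or output_tokens must be > 0")
    return {
        "customer_id": customer_id,
        "event_type": event_type,
        "revenue_amount_in_cents": revenue_amount_in_cents,
        "vendor_responses": [{
            "vendor_name": vendor_name,
            "ai_model_name": ai_model_name,
            "input_tokens": input_tokens,
            "output_tokens": output_tokens,
        }],
    }

# Request body: a JSON object with an "events" array
body = json.dumps({"events": [build_event("12345", "openai", "gpt-4o", 1200, 340,
                                          event_type="summarize")]})
```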
Valid slugs for the vendor_name and ai_model_name fields are the same as those listed in the TypeScript section above. You can also fetch the list programmatically via GET /api/v1/models.
Store your customer_id in the Stripe customer's metadata, and our invoice sync will automatically match Stripe payments to the correct customer in your dashboard. This ensures that invoice revenue is attributed to the same customer as your events, enabling accurate margin calculations.
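Once revenue and AI cost land on the same customer, the margin math is direct; a sketch of the calculation (the function and field names here are ours, for illustration, not a MarginDash API):

```typescript
// Gross margin for one customer once invoice revenue and AI cost are linked
function grossMarginPct(revenueCents: number, aiCostCents: number): number {
  if (revenueCents <= 0) return 0; // no revenue recorded yet
  return ((revenueCents - aiCostCents) / revenueCents) * 100;
}

console.log(grossMarginPct(50_000, 6_400)); // ≈ 87.2
```

This is exactly the number MarginDash can only compute correctly when the Stripe metadata and event customer_id values match.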
Store your customer ID in the Stripe customer's metadata:

```bash
curl https://api.stripe.com/v1/customers \
  -u "sk_live_YOUR_KEY:" \
  -d "name=Acme Corp" \
  -d "email=billing@acme.com" \
  -d "metadata[customer_id]=12345"
```
Developer Note

The ID you pass to Stripe (12345) must exactly match the customer_id you send in your events. If these values don't match, MarginDash cannot link subscription revenue to usage costs, and margin calculations will be inaccurate.
If you already have Stripe customers, you can backfill metadata[customer_id] with a one-time script: loop through your customers and update each one with your internal ID. Existing invoices are automatically re-linked when you update the metadata; no manual re-sync is needed.
```ts
import Stripe from "stripe";

const stripe = new Stripe("sk_live_YOUR_KEY");

// Your mapping of Stripe customer IDs to internal IDs
const customerMap: Record<string, string> = {
  "cus_ABC123": "12345",
  "cus_DEF456": "67890",
};

for (const [stripeId, internalId] of Object.entries(customerMap)) {
  await stripe.customers.update(stripeId, {
    metadata: { customer_id: internalId },
  });
}
```
```python
import stripe

stripe.api_key = "sk_live_YOUR_KEY"

# Your mapping of Stripe customer IDs to internal IDs
customer_map = {
    "cus_ABC123": "12345",
    "cus_DEF456": "67890",
}

for stripe_id, internal_id in customer_map.items():
    stripe.Customer.modify(
        stripe_id,
        metadata={"customer_id": internal_id},
    )
```