OpenAI LLM Costs
When using the gpt-4.1 model, every one of my calls is billed at exactly $2 per 1M input tokens and $8 per 1M output tokens. That matches the rates shown here: https://platform.openai.com/docs/models/compare?model=gpt-4.1
But that would mean that none of my input tokens are ever cached, which surprises me. Is it normal that no input tokens are cached on any call?
You can check any call from my org. Here are a few examples, from org id 94a0b476-b4f6-4f9c-89e8-7a8475c70adb:
- Call id 0298ba6e-cd87-4a79-92dd-9f6e66f5e4a1
- Call id 33aaac31-0a0a-489a-a98b-56bbbb8af13c
- Call id 9f14b1dc-fae7-49b2-81d7-eef2b33c6e86
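For reference, here is the arithmetic I'm checking against, as a minimal Python sketch. It computes the expected cost of a call from its usage numbers, applying the cached-input discount; the $0.50/M cached-input rate is my reading of the pricing page linked above and should be verified there, and `cached_tokens` corresponds to the `usage.prompt_tokens_details.cached_tokens` field in the API response.

```python
# Sketch: expected cost of a gpt-4.1 call from its token usage.
# Rates per 1M tokens; cached-input rate is an assumption from the
# pricing page and should be double-checked against current pricing.
PRICE_PER_M = {
    "input": 2.00,          # uncached input tokens
    "cached_input": 0.50,   # cached input tokens (assumed rate)
    "output": 8.00,
}

def call_cost(prompt_tokens: int, cached_tokens: int, completion_tokens: int) -> float:
    """Cost in dollars. cached_tokens is usage.prompt_tokens_details.cached_tokens."""
    uncached = prompt_tokens - cached_tokens
    return (
        uncached * PRICE_PER_M["input"]
        + cached_tokens * PRICE_PER_M["cached_input"]
        + completion_tokens * PRICE_PER_M["output"]
    ) / 1_000_000

# With cached_tokens == 0 on every call, the cost collapses to exactly
# $2/M in + $8/M out, which is what I'm observing.
print(call_cost(10_000, 0, 1_000))      # no caching
print(call_cost(10_000, 8_000, 1_000))  # 8k input tokens served from cache
```

If the billed amounts matched the second case rather than the first for repeated-prefix calls, I would conclude caching is working.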
