The price of each query is based on the number of input words in the entire chat conversation, plus the number of output words in the AI's response. Output words are typically more expensive than input words.
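As a rough sketch of how this pricing adds up (the function name and the per-word rates here are illustrative, not our actual rates):

```python
def estimate_query_cost(input_words, output_words, input_rate, output_rate):
    """Estimate the cost of one query: input and output words are
    billed at separate per-word rates, with output usually costlier.

    Rates here are hypothetical; real rates come from the tables below.
    """
    return input_words * input_rate + output_words * output_rate

# Example with made-up rates: $0.00001 per input word, $0.00003 per output word.
cost = estimate_query_cost(1500, 300, 0.00001, 0.00003)
print(f"${cost:.4f}")  # → $0.0240
```

Note that because the *entire* conversation counts as input, the input term grows with every turn, which is why a context cap (discussed at the end) matters for cost.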
Here's a breakdown of our recommended chat models and their pricing:
| Model Name | Model ID | Provider | Max Context | Date Added | Input Rate | Output Rate |
| --- | --- | --- | --- | --- | --- | --- |
| No models available | | | | | | |
| Model Name | Model ID | Provider | Date Added | Quality | Size | Cost per image |
| --- | --- | --- | --- | --- | --- | --- |
| No image models available | | | | | | |
| Model Name | Model ID | Provider | Date Added | Quality | Size | Cost per video |
| --- | --- | --- | --- | --- | --- | --- |
| No video models available | | | | | | |
Note: prices are averages and may vary with your actual usage.
Lastly, to keep users from unknowingly being charged more than they intend as a conversation grows indefinitely long, I've set the default max context to 2048 tokens.
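One simple way such a cap can work (this is a sketch, not the actual implementation; the function and the one-token-per-word heuristic are assumptions) is to drop the oldest messages until the conversation fits the budget:

```python
MAX_CONTEXT_TOKENS = 2048  # the default cap described above

def approx_tokens(text):
    # Rough heuristic: ~1 token per word; real tokenizers count differently.
    return len(text.split())

def trim_history(messages, budget=MAX_CONTEXT_TOKENS):
    """Drop the oldest messages until the total fits within the budget.

    `messages` is a list of strings, oldest first. This is a
    hypothetical helper, shown only to illustrate the context cap.
    """
    trimmed = list(messages)
    while trimmed and sum(approx_tokens(m) for m in trimmed) > budget:
        trimmed.pop(0)  # discard the oldest message first
    return trimmed
```

With the cap in place, the input-word cost of each query is bounded no matter how long the chat runs.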