The Prompt Analyser API calculates the token count of a given text prompt for a selected model. It is designed to help users estimate the token usage, and therefore the cost, of processing text prompts with various models.
The API endpoint for accessing the token count functionality is:
POST https://promptanalyser.pythonanywhere.com/api/token_count
The API accepts a POST request with the following JSON payload:
{
    "prompt": "Your text prompt here",
    "model": "Selected model name"
}
Replace "Your text prompt here"
with the text you want to analyze and "Selected model name"
with one of the supported model names.
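For instance, a minimal Python sketch of such a request, assuming the third-party requests package is installed (the example prompt text is only an illustration):

import requests

API_URL = "https://promptanalyser.pythonanywhere.com/api/token_count"

# Build the payload exactly as documented above.
payload = {
    "prompt": "How many tokens does this sentence use?",
    "model": "gpt-4",
}

# json= serializes the payload and sets the Content-Type header.
response = requests.post(API_URL, json=payload)
print(response.json())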
The following models are supported:
gpt-4
gpt-3.5-turbo
text-embedding-ada-002
text-embedding-3-small
text-embedding-3-large
text-davinci-002
text-davinci-003
davinci
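Because only these names are accepted, a client can validate the model before making a request. The helper below is a hypothetical convenience, not part of the API; the names come from the list above:

SUPPORTED_MODELS = {
    "gpt-4",
    "gpt-3.5-turbo",
    "text-embedding-ada-002",
    "text-embedding-3-small",
    "text-embedding-3-large",
    "text-davinci-002",
    "text-davinci-003",
    "davinci",
}

def validate_model(model: str) -> None:
    # Fail early on the client side rather than waiting for an API error.
    if model not in SUPPORTED_MODELS:
        raise ValueError(f"Unsupported model: {model!r}")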
The API responds with a JSON object containing the following fields:
{
    "tokens": "Number of tokens",
    "token_list": ["List of tokens"],
    "notification": "Any additional notification"
}
"tokens"
represents the total number of tokens, "token_list"
is an array of tokens (limited to the first 50), and "notification"
provides any relevant warnings or messages.
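A minimal sketch of reading these fields in Python (again using requests; the sample prompt, and the assumption that "notification" is an empty string when there is nothing to report, are illustrative):

import requests

response = requests.post(
    "https://promptanalyser.pythonanywhere.com/api/token_count",
    json={"prompt": "Hello, world!", "model": "gpt-3.5-turbo"},
)
data = response.json()

print("Total tokens:", data["tokens"])
print("Token preview:", data["token_list"])  # at most the first 50 tokens
if data["notification"]:                     # assumed empty when there is no message
    print("Note:", data["notification"])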
Visit the frontend at https://promptanalyser.pythonanywhere.com/ to interact with the Prompt Analyser. Use the form to enter a prompt and select a model to get the token count and related details.
To integrate the API into your own project, make a POST request to the endpoint with the appropriate payload. Handle the JSON response as described to display token counts and other relevant information to your users.
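Putting the pieces together, a small wrapper might look like the following; the name count_prompt_tokens, the default model, and the timeout are illustrative choices rather than anything the API prescribes:

import requests

API_URL = "https://promptanalyser.pythonanywhere.com/api/token_count"

def count_prompt_tokens(prompt: str, model: str = "gpt-4") -> dict:
    """POST a prompt to the Prompt Analyser API and return the parsed JSON."""
    response = requests.post(
        API_URL,
        json={"prompt": prompt, "model": model},
        timeout=10,  # illustrative; avoids hanging on a slow connection
    )
    response.raise_for_status()  # surface HTTP errors to the caller
    return response.json()

result = count_prompt_tokens("Estimate the cost of this prompt.")
print(result["tokens"], "tokens")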