POST /v2/bots/ask-question
import requests
endpoint = "https://api-prod.usefini.com/v2/bots/ask-question"

question = "How do I upgrade to premium?"
instruction = "Answer as if you are replying to an email."

# The instruction entry (role: "system") in messageHistory is optional.
# If you skip it (set messageHistory to []), the instruction from Fini's UI setup page is used instead.
messageHistory = [{"content": instruction, "role": "system"}]

# Optional parameters: stream, temperature, stop, categories
data = {"question": question, "messageHistory": messageHistory}

token = "your_api_key_here"  # Best stored in secrets management
headers = {"Authorization": "Bearer " + token}

# The 'Content-Type: application/json' header is set automatically when using json=
response = requests.post(endpoint, json=data, headers=headers)
# The response contains an updated message history in the "messages" field.
# Re-use it as messageHistory in a new request to enable chat mode.

print(response.json())
{
    "answer": "Hello there! Fini is an AI tool that helps companies turn their knowledgebase into AI chat in 2 minutes. Let me know if you need more details. Have a fantastic day!",
    "answer_uuid": "f4ead1e0-576b-489f-bc59-7b1c6c43ee27",
    "based_on": [
        {
            "answer": "Fini is an AI tool that helps companies turn their knowledgebase into AI chat in 2 minutes",
            "score": 0.726,
            "source_id": "https://usefini.com",
            "source_type": "url",
        }
    ],
   "categories": [
           "Fini intro"
       ],
    "messages": [
        {
            "content": "Answer in friendly tone",
            "role": "system"
        },
        {
            "content": "What is Fini?",
            "role": "user"
        },
        {
            "content": "Hello there! Fini is an AI tool that helps companies turn their knowledgebase into AI chat in 2 minutes. Let me know if you need more details. Have a fantastic day!",
            "role": "assistant"
        }
    ]
}
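
To continue the conversation in chat mode, pass the "messages" list returned in the response back as messageHistory in the next request, as noted in the code comments above. A minimal sketch, assuming the returned "messages" list is accepted as messageHistory as-is:

import requests

endpoint = "https://api-prod.usefini.com/v2/bots/ask-question"
headers = {"Authorization": "Bearer " + "your_api_key_here"}

# First turn
first = requests.post(
    endpoint,
    json={"question": "What is Fini?", "messageHistory": []},
    headers=headers,
).json()

# Follow-up turn: pass the returned "messages" back as messageHistory
# so the bot has the full conversation context.
follow_up = requests.post(
    endpoint,
    json={
        "question": "How do I upgrade to premium?",
        "messageHistory": first["messages"],
    },
    headers=headers,
).json()

print(follow_up["answer"])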

Body

instruction
string

Prompt used to generate completions. If you omit this parameter from your request, the instruction from Fini's UI setup page will be used instead. We recommend always providing it for faster responses, since it skips the database lookup of the instruction.

question
string
required

The user question for which an answer should be fetched.

messageHistory
array
required

A list of messages comprising the conversation so far.

functions
array

A list of functions the model may generate JSON inputs for.

function_call
string or object

Controls how the model responds to function calls. "none" means the model does not call a function and responds to the end-user. "auto" means the model can pick between responding to the end-user or calling a function. Specifying a particular function via {"name": "my_function"} forces the model to call that function. "none" is the default when no functions are present; "auto" is the default if functions are present.
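
For illustration, a request that supplies functions might look like the sketch below. The function definition shape (name/description/parameters as JSON Schema) is an assumption based on the descriptions above, and create_support_ticket is a hypothetical function name:

# Sketch only: assumes "functions" entries use the common
# name/description/parameters (JSON Schema) shape; adjust to your setup.
functions = [
    {
        "name": "create_support_ticket",  # hypothetical function
        "description": "Open a support ticket for the user",
        "parameters": {
            "type": "object",
            "properties": {
                "subject": {"type": "string"},
                "priority": {"type": "string"},
            },
            "required": ["subject"],
        },
    }
]

data = {
    "question": "Please open a ticket about my failed payment",
    "messageHistory": [],
    "functions": functions,
    # Force the model to call the function above instead of answering directly
    "function_call": {"name": "create_support_ticket"},
}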

stream
boolean
default: "false"

Whether to stream back partial progress.
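
With stream set to true, partial progress is sent back incrementally. A minimal sketch using requests; the exact chunk format is not documented here, so this example simply prints raw lines as they arrive:

import requests

endpoint = "https://api-prod.usefini.com/v2/bots/ask-question"
headers = {"Authorization": "Bearer " + "your_api_key_here"}
data = {"question": "What is Fini?", "messageHistory": [], "stream": True}

# stream=True tells requests not to buffer the whole body;
# iterate over lines as the server sends them.
with requests.post(endpoint, json=data, headers=headers, stream=True) as resp:
    for line in resp.iter_lines(decode_unicode=True):
        if line:
            print(line)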

temperature
number
default: "0.4"

What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We recommend using 0.4 as the default.

stop
string

A sequence where the API will stop generating further tokens. The returned text will not contain the stop sequence. It can also be set to a list, for example ["Optional", "stop", "words"].

categories
array

List of categories that you want to use to categorize the Q&A pair.
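
Putting the optional generation parameters together, a request body might look like this sketch (the values are illustrative):

data = {
    "question": "How do I upgrade to premium?",
    "messageHistory": [],
    "temperature": 0.4,                     # recommended default
    "stop": ["Optional", "stop", "words"],  # generation stops before any of these
    "categories": ["Billing", "Upgrades"],  # illustrative category labels
}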

Response

success
number

Indicates whether the call was successful. 1 if successful, 0 if not.

based_on
object

The contents of the "based_on" field: a list of all data blocks from the knowledge base that were used to generate the answer.

messages
object

A list of messages comprising the conversation so far.