Response generation settings

To access the parameters for generating responses to user queries, go to Project settings → Generation.

System prompt

On request

The system prompt is used when generating responses to user queries. The system prompt can provide the model with additional information — for example, describe your company’s area of business; specify requirements for the response’s style, tone, and formatting; and define options for handling special cases, such as when the knowledge base has no relevant information.

Prompt editing is available on request. If you require this feature, please contact support at support@tovie.ai.

caution

A poorly written prompt can significantly degrade response quality. Please make changes with caution and test the outcome thoroughly. Always keep a backup of the working version of your prompt.

LLM settings

The LLM settings apply to:

  • generating responses to user queries
  • chunk retrieval within the agentic pipeline
  • rephrasing queries and considering history within the semantic pipeline

Main settings:

  • Model: select one of the available language models. For the agentic pipeline, only models that support function calling are available, as the model calls functions to request chunks.
  • Max tokens in request: limits the number of tokens that can be sent to the LLM.
  • Max tokens in response: limits the number of tokens that the LLM can generate in one iteration.
  • Temperature: adjusts the creativity level of responses. Higher values produce more creative and less predictable results. We recommend adjusting either Temperature or Top P, but not both at once.
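The settings above correspond to standard LLM sampling parameters. A minimal sketch of how they might map onto a chat-completion request payload (the field names follow common LLM API conventions and are illustrative, not a documented Tovie Data Agent schema):

```python
# Illustrative request payload; field names follow common LLM API
# conventions, not a documented Tovie Data Agent schema.
payload = {
    "model": "gpt-4o",       # must support function calling for the agentic pipeline
    "max_tokens": 1024,      # Max tokens in response (per iteration)
    "temperature": 0.7,      # adjust this OR top_p, not both at once
    # "top_p": 0.9,          # left at default because temperature is set
    "messages": [
        {"role": "system", "content": "Answer using the knowledge base only."},
        {"role": "user", "content": "What is the refund policy?"},
    ],
}
```

Note that only one of `temperature` and `top_p` is set, matching the recommendation above.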

Advanced settings:

  • Top P adjusts the diversity of responses. At lower values, the LLM selects words from a smaller, more likely set. At higher values, the response becomes more diverse. We recommend adjusting either Top P or Temperature, but not both at once.

  • Presence penalty: reduces the likelihood of repeated tokens in a response. By increasing the value, you decrease the likelihood of repeating words or phrases in the response.

    All repetitions are penalised equally, no matter how frequently they occur. For example, the second appearance of a token is penalised the same as the tenth.

  • Frequency penalty: reduces the likelihood of frequently occurring tokens in a response. By increasing the value, you reduce the likelihood of words or phrases appearing multiple times in the response.

    The impact of Frequency penalty grows with the number of times a token appears in the text.

Show source documents in bot response

If enabled, each knowledge base response includes a list of sources, specifically the files or pages the response is based on.

How sources appear in the response

The source list includes source names and links. In the Tovie Data Agent API, the source list is returned as a relevantSources array.

  • If the source document came from an integration, the link to its original location is provided, such as a link to a page or attachment in Confluence.
  • If the source document was uploaded manually as a file, a temporary download link is provided in channels and via the API. Such links are only valid for a limited time. The test chat displays a link to the Sources section and a download button.

Additionally, the API provides an endpoint to download a source from the knowledge base: GET /sources/{sourceId}/download.
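A sketch of consuming the source list on the client side. The `relevantSources` field and the `GET /sources/{sourceId}/download` path come from this page; the base URL and the keys inside each source object (`name`, `url`, `id`) are assumptions to be checked against the actual API schema:

```python
BASE_URL = "https://example.tovie.ai/api"  # placeholder base URL, not a real endpoint

def extract_source_links(response: dict) -> list:
    """Return (name, url) pairs from a response's relevantSources array.

    For manually uploaded files the temporary `url` may expire, so this
    sketch falls back to the GET /sources/{sourceId}/download endpoint.
    The `name`, `url`, and `id` keys are illustrative assumptions.
    """
    links = []
    for source in response.get("relevantSources", []):
        url = source.get("url") or f"{BASE_URL}/sources/{source['id']}/download"
        links.append((source.get("name", "unknown"), url))
    return links
```

Temporary links should be downloaded promptly or re-fetched via the download endpoint, since they are only valid for a limited time.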

Prompt additions for CSV

On request

If the knowledge base contains CSV files, a dedicated tabular pipeline is used to extract data from them. Based on the user’s query, the AI agent determines whether the tables contain relevant data, selects the relevant table, generates an SQL query, executes it, and presents the result in the response.
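The execution step of this pipeline can be approximated with an in-memory SQL engine: load the CSV into a table, then run the generated query against it. The sketch below uses `sqlite3` with a hardcoded query standing in for the LLM-generated one; the actual internals of the tabular pipeline are not documented:

```python
import csv
import io
import sqlite3

# Stand-in for a CSV file from the knowledge base.
CSV_DATA = "product,price\nwidget,9.99\ngadget,24.50\n"

def run_sql_over_csv(csv_text: str, sql: str):
    """Load CSV rows into an in-memory SQLite table named `t` and run a query."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    header, data = rows[0], rows[1:]
    conn = sqlite3.connect(":memory:")
    cols = ", ".join(f'"{c}"' for c in header)
    conn.execute(f"CREATE TABLE t ({cols})")
    conn.executemany(f"INSERT INTO t VALUES ({', '.join('?' * len(header))})", data)
    return conn.execute(sql).fetchall()

# In the real pipeline, the SQL query would be generated by the LLM
# from the user's natural-language question.
result = run_sql_over_csv(CSV_DATA, "SELECT product FROM t WHERE CAST(price AS REAL) > 10")
```

Here `result` contains only the products priced above 10, which the AI agent would then present in its response.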

The tabular pipeline uses a separate system prompt, which is not available for viewing or editing. However, you can append additional instructions to it — for example, to specify formatting requirements for the response.

Editing these prompt additions is available on request. If you require this feature, please contact support at support@tovie.ai.