Tovie Data Agent public API (1.0.0)
Get project info
Obtain information on the current knowledge base project.
Authorizations:
Responses
Response samples
- 200
{
  "id": 0,
  "name": "string",
  "status": "CREATED",
  "resources": {
    "llmModels": [
      "string"
    ]
  },
  "createdAt": "2019-08-24T14:15:22Z",
  "updatedAt": "2019-08-24T14:15:22Z"
}
Send requests to search for chunks and generate responses (without considering chat history).
Retrieve chunks
Retrieve chunks relevant to the user’s query from the knowledge base.
Authorizations:
Request Body schema: application/json (required)
query (string, required): Text of the user’s query.
history (Array of HistoryRecord objects): Dialogue history. Entries are displayed in reverse chronological order (from latest to earliest).
settings (RetrievingSettings object): Chunk search settings.
Responses
Request samples
- Payload
{
  "query": "string",
  "history": [
    {
      "content": "string",
      "role": "user"
    }
  ],
  "settings": {
    "pipeline": "semantic",
    "search": {
      "similarityTopK": 5,
      "candidateRadius": 10,
      "reranker": {
        "type": "manual",
        "minScore": -10,
        "maxChunksPerDocument": 1,
        "maxChunks": 1,
        "scoreReductionLimit": 1
      },
      "fullTextSearch": {
        "strategy": "hybrid",
        "semanticPortion": 10,
        "ftsPortion": 1,
        "threshold": 1
      },
      "rephraseUserQuery": {
        "prompt": "Read the dialogue history, rephrase the current user question considering the history by adding it as context. Make the question more understandable, clear, and structured. Add similar queries and a title to the question, and return the text with the title."
      },
      "segment": "FAQ"
    },
    "llm": {
      "model": "string",
      "contextWindow": 4000,
      "maxTokens": 500,
      "temperature": 1,
      "topP": 1,
      "frequencyPenalty": -2,
      "presencePenalty": -2
    }
  }
}
Response samples
- 200
- 400
{
  "chunks": [
    {
      "score": 0.1,
      "content": "string",
      "docId": "string",
      "metadata": {
        "sourcePath": "string",
        "sourceUrl": "string",
        "segment": "string"
      }
    }
  ]
}
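The history ordering above is easy to get wrong: the API expects entries latest-first. A minimal sketch of assembling the request body in Python (no HTTP involved; the example query text and settings values are illustrative only):

```python
def build_retrieve_payload(query, history=None, settings=None):
    """Assemble a retrieve-chunks request body.

    `history` is a list of (role, content) pairs in the order the messages
    were sent; it is reversed here because the API expects entries in
    reverse chronological order (latest first).
    """
    payload = {"query": query}
    if history:
        payload["history"] = [
            {"role": role, "content": content} for role, content in reversed(history)
        ]
    if settings:
        # A RetrievingSettings object, e.g. {"pipeline": "semantic", ...}
        payload["settings"] = settings
    return payload

payload = build_retrieve_payload(
    "How do I reset my password?",
    history=[("user", "Hi"), ("assistant", "Hello! How can I help?")],
    settings={"pipeline": "semantic", "search": {"similarityTopK": 5}},
)
print(payload["history"][0]["content"])  # latest message comes first
```

Omitting `history` and `settings` simply drops those keys, letting the server apply its defaults.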
Generate response
Synchronous request to generate a response to the user’s query.
Please note that request processing may take a significant amount of time. Ensure that the connection timeout set in your HTTP client is more than 1 minute.
Authorizations:
Request Body schema: application/json (required)
query (string, required): Text of the user’s query.
history (Array of HistoryRecord objects): Dialogue history. Entries are displayed in reverse chronological order (from latest to earliest).
settings (RagSettings object): Query processing settings.
Responses
Request samples
- Payload
{
  "query": "string",
  "history": [
    {
      "content": "string",
      "role": "user"
    }
  ],
  "settings": {
    "pipeline": "semantic",
    "search": {
      "similarityTopK": 5,
      "candidateRadius": 10,
      "reranker": {
        "type": "manual",
        "minScore": -10,
        "maxChunksPerDocument": 1,
        "maxChunks": 1,
        "scoreReductionLimit": 1
      },
      "fullTextSearch": {
        "strategy": "hybrid",
        "semanticPortion": 10,
        "ftsPortion": 1,
        "threshold": 1
      },
      "rephraseUserQuery": {
        "prompt": "Read the dialogue history, rephrase the current user question considering the history by adding it as context. Make the question more understandable, clear, and structured. Add similar queries and a title to the question, and return the text with the title."
      },
      "segment": "FAQ"
    },
    "llm": {
      "model": "string",
      "contextWindow": 4000,
      "maxTokens": 500,
      "temperature": 1,
      "topP": 1,
      "frequencyPenalty": -2,
      "presencePenalty": -2
    },
    "responseGeneration": {
      "prompt": "string"
    }
  }
}
Response samples
- 200
- 400
{
  "id": 0,
  "request": "string",
  "response": "string",
  "status": "READY_TO_PROCESS",
  "createdAt": "2019-08-24T14:15:22Z",
  "updatedAt": "2019-08-24T14:15:22Z",
  "comment": "string"
}
Generate response (asynchronous request)
Asynchronous request to generate a response to the user’s query.
The result can be obtained via the GET /api/knowledge-hub/query/{queryId} endpoint, where queryId is the request identifier received in the current response.
Authorizations:
Request Body schema: application/json (required)
query (string, required): Text of the user’s query.
history (Array of HistoryRecord objects): Dialogue history. Entries are displayed in reverse chronological order (from latest to earliest).
settings (RagSettings object): Query processing settings.
Responses
Request samples
- Payload
{
  "query": "string",
  "history": [
    {
      "content": "string",
      "role": "user"
    }
  ],
  "settings": {
    "pipeline": "semantic",
    "search": {
      "similarityTopK": 5,
      "candidateRadius": 10,
      "reranker": {
        "type": "manual",
        "minScore": -10,
        "maxChunksPerDocument": 1,
        "maxChunks": 1,
        "scoreReductionLimit": 1
      },
      "fullTextSearch": {
        "strategy": "hybrid",
        "semanticPortion": 10,
        "ftsPortion": 1,
        "threshold": 1
      },
      "rephraseUserQuery": {
        "prompt": "Read the dialogue history, rephrase the current user question considering the history by adding it as context. Make the question more understandable, clear, and structured. Add similar queries and a title to the question, and return the text with the title."
      },
      "segment": "FAQ"
    },
    "llm": {
      "model": "string",
      "contextWindow": 4000,
      "maxTokens": 500,
      "temperature": 1,
      "topP": 1,
      "frequencyPenalty": -2,
      "presencePenalty": -2
    },
    "responseGeneration": {
      "prompt": "string"
    }
  }
}
Response samples
- 200
- 400
{
  "id": 0,
  "request": "string",
  "response": "string",
  "status": "READY_TO_PROCESS",
  "createdAt": "2019-08-24T14:15:22Z",
  "updatedAt": "2019-08-24T14:15:22Z",
  "comment": "string"
}
Processing status of the response generation request
Get the current processing status of the response generation request.
Long polling is used if the waitTimeSeconds parameter is specified.
Authorizations:
path Parameters
queryId (integer &lt;int64&gt;, LongId, required): Identifier of the response generation request.
query Parameters
waitTimeSeconds (integer &lt;int32&gt;, 0 to 30, default: 3): HTTP request timeout, in seconds. Used for long polling.
Responses
Response samples
- 200
{
  "id": 0,
  "request": "string",
  "response": "string",
  "status": "READY_TO_PROCESS",
  "createdAt": "2019-08-24T14:15:22Z",
  "updatedAt": "2019-08-24T14:15:22Z",
  "comment": "string"
}
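A typical consumer wraps this endpoint in a long-polling loop. The sketch below abstracts the transport behind an injected `fetch(query_id, wait_time_seconds)` callable, which stands in for the authorized GET /api/knowledge-hub/query/{queryId}?waitTimeSeconds=... and returns the decoded JSON status object. The set of in-progress status names is an assumption, since this page only shows READY_TO_PROCESS:

```python
def poll_query(query_id, fetch, wait_time_seconds=30, max_attempts=20):
    """Long-poll a response generation request until it leaves an in-progress state.

    `fetch(query_id, wait_time_seconds)` must return the decoded JSON status
    object; in production it would perform the authorized HTTP GET.
    """
    # Assumption: any status outside this set means processing has finished.
    in_progress = {"READY_TO_PROCESS", "IN_PROGRESS"}
    for _ in range(max_attempts):
        # The server holds each request open for up to wait_time_seconds.
        status = fetch(query_id, wait_time_seconds)
        if status["status"] not in in_progress:
            return status
    raise TimeoutError(f"query {query_id} still in progress after {max_attempts} polls")
```

Passing the maximum waitTimeSeconds (30) keeps the number of round trips low; the per-call default of 3 seconds would poll far more often.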
Cancel response generation request
Authorizations:
path Parameters
queryId (integer &lt;int64&gt;, LongId, required): Identifier of the response generation request.
Responses
Response samples
- 200
{
  "id": 0,
  "request": "string",
  "response": "string",
  "status": "READY_TO_PROCESS",
  "createdAt": "2019-08-24T14:15:22Z",
  "updatedAt": "2019-08-24T14:15:22Z",
  "comment": "string"
}
Chat creation
Authorizations:
Request Body schema: application/json (required)
name (string): Name of the user’s chat.
settings (RagSettings object): Query processing settings.
Responses
Request samples
- Payload
{
  "name": "string",
  "settings": {
    "pipeline": "semantic",
    "search": {
      "similarityTopK": 5,
      "candidateRadius": 10,
      "reranker": {
        "type": "manual",
        "minScore": -10,
        "maxChunksPerDocument": 1,
        "maxChunks": 1,
        "scoreReductionLimit": 1
      },
      "fullTextSearch": {
        "strategy": "hybrid",
        "semanticPortion": 10,
        "ftsPortion": 1,
        "threshold": 1
      },
      "rephraseUserQuery": {
        "prompt": "Read the dialogue history, rephrase the current user question considering the history by adding it as context. Make the question more understandable, clear, and structured. Add similar queries and a title to the question, and return the text with the title."
      },
      "segment": "FAQ"
    },
    "llm": {
      "model": "string",
      "contextWindow": 4000,
      "maxTokens": 500,
      "temperature": 1,
      "topP": 1,
      "frequencyPenalty": -2,
      "presencePenalty": -2
    },
    "responseGeneration": {
      "prompt": "string"
    }
  }
}
Response samples
- 200
- 400
{
  "id": 0,
  "name": "string",
  "settings": {
    "pipeline": "semantic",
    "search": {
      "similarityTopK": 5,
      "candidateRadius": 10,
      "reranker": {
        "type": "manual",
        "minScore": -10,
        "maxChunksPerDocument": 1,
        "maxChunks": 1,
        "scoreReductionLimit": 1
      },
      "fullTextSearch": {
        "strategy": "hybrid",
        "semanticPortion": 10,
        "ftsPortion": 1,
        "threshold": 1
      },
      "rephraseUserQuery": {
        "prompt": "Read the dialogue history, rephrase the current user question considering the history by adding it as context. Make the question more understandable, clear, and structured. Add similar queries and a title to the question, and return the text with the title."
      },
      "segment": "FAQ"
    },
    "llm": {
      "model": "string",
      "contextWindow": 4000,
      "maxTokens": 500,
      "temperature": 1,
      "topP": 1,
      "frequencyPenalty": -2,
      "presencePenalty": -2
    },
    "responseGeneration": {
      "prompt": "string"
    }
  }
}
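Because the settings object is deeply nested, clients often keep one default RagSettings dict and override individual fields per chat. A small sketch of such a recursive overlay (the default values below simply mirror the sample above; they are not authoritative defaults):

```python
def merge_settings(defaults, overrides):
    """Recursively overlay `overrides` onto `defaults` without mutating either."""
    merged = dict(defaults)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_settings(merged[key], value)
        else:
            merged[key] = value
    return merged

# Illustrative baseline taken from the sample payload, not server defaults.
DEFAULT_SETTINGS = {
    "pipeline": "semantic",
    "search": {"similarityTopK": 5, "candidateRadius": 10},
    "llm": {"model": "string", "maxTokens": 500, "temperature": 1},
}

chat_body = {
    "name": "Support chat",
    "settings": merge_settings(DEFAULT_SETTINGS, {"llm": {"temperature": 0.2}}),
}
print(chat_body["settings"]["llm"]["temperature"])  # 0.2
print(chat_body["settings"]["llm"]["maxTokens"])    # 500, inherited from defaults
```

The overlay only touches the keys you override, so unrelated nested settings keep their baseline values.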
User chat information
Authorizations:
path Parameters
chatId (integer &lt;int64&gt;, LongId, required): Identifier of the chat in the knowledge base project.
Responses
Response samples
- 200
- 400
{
  "id": 0,
  "name": "string",
  "settings": {
    "pipeline": "semantic",
    "search": {
      "similarityTopK": 5,
      "candidateRadius": 10,
      "reranker": {
        "type": "manual",
        "minScore": -10,
        "maxChunksPerDocument": 1,
        "maxChunks": 1,
        "scoreReductionLimit": 1
      },
      "fullTextSearch": {
        "strategy": "hybrid",
        "semanticPortion": 10,
        "ftsPortion": 1,
        "threshold": 1
      },
      "rephraseUserQuery": {
        "prompt": "Read the dialogue history, rephrase the current user question considering the history by adding it as context. Make the question more understandable, clear, and structured. Add similar queries and a title to the question, and return the text with the title."
      },
      "segment": "FAQ"
    },
    "llm": {
      "model": "string",
      "contextWindow": 4000,
      "maxTokens": 500,
      "temperature": 1,
      "topP": 1,
      "frequencyPenalty": -2,
      "presencePenalty": -2
    },
    "responseGeneration": {
      "prompt": "string"
    }
  }
}
Retrieve chunks
Retrieve chunks from the knowledge base that are relevant to the user’s query within the chat.
Authorizations:
path Parameters
chatId (integer &lt;int64&gt;, LongId, required): Identifier of the chat in the knowledge base project.
Request Body schema: application/json (required)
query (string, required): Text of the user’s query.
settings (RetrievingSettings object): Chunk search settings.
Responses
Request samples
- Payload
{
  "query": "string",
  "settings": {
    "pipeline": "semantic",
    "search": {
      "similarityTopK": 5,
      "candidateRadius": 10,
      "reranker": {
        "type": "manual",
        "minScore": -10,
        "maxChunksPerDocument": 1,
        "maxChunks": 1,
        "scoreReductionLimit": 1
      },
      "fullTextSearch": {
        "strategy": "hybrid",
        "semanticPortion": 10,
        "ftsPortion": 1,
        "threshold": 1
      },
      "rephraseUserQuery": {
        "prompt": "Read the dialogue history, rephrase the current user question considering the history by adding it as context. Make the question more understandable, clear, and structured. Add similar queries and a title to the question, and return the text with the title."
      },
      "segment": "FAQ"
    },
    "llm": {
      "model": "string",
      "contextWindow": 4000,
      "maxTokens": 500,
      "temperature": 1,
      "topP": 1,
      "frequencyPenalty": -2,
      "presencePenalty": -2
    }
  }
}
Response samples
- 200
- 400
{
  "chunks": [
    {
      "score": 0.1,
      "content": "string",
      "docId": "string",
      "metadata": {
        "sourcePath": "string",
        "sourceUrl": "string",
        "segment": "string"
      }
    }
  ]
}
Generate response
Synchronous request to generate a response to the user’s query. The chat message history is taken into account.
Please note that request processing may take a significant amount of time. Ensure that the connection timeout set in your HTTP client is more than 1 minute.
Authorizations:
path Parameters
chatId (integer &lt;int64&gt;, LongId, required): Identifier of the chat in the knowledge base project.
Request Body schema: application/json (required)
query (string, required): Text of the user’s query.
settings (RagSettings object): Query processing settings.
Responses
Request samples
- Payload
{
  "query": "string",
  "settings": {
    "pipeline": "semantic",
    "search": {
      "similarityTopK": 5,
      "candidateRadius": 10,
      "reranker": {
        "type": "manual",
        "minScore": -10,
        "maxChunksPerDocument": 1,
        "maxChunks": 1,
        "scoreReductionLimit": 1
      },
      "fullTextSearch": {
        "strategy": "hybrid",
        "semanticPortion": 10,
        "ftsPortion": 1,
        "threshold": 1
      },
      "rephraseUserQuery": {
        "prompt": "Read the dialogue history, rephrase the current user question considering the history by adding it as context. Make the question more understandable, clear, and structured. Add similar queries and a title to the question, and return the text with the title."
      },
      "segment": "FAQ"
    },
    "llm": {
      "model": "string",
      "contextWindow": 4000,
      "maxTokens": 500,
      "temperature": 1,
      "topP": 1,
      "frequencyPenalty": -2,
      "presencePenalty": -2
    },
    "responseGeneration": {
      "prompt": "string"
    }
  }
}
Response samples
- 200
- 400
{
  "id": 0,
  "chatId": 0,
  "request": "string",
  "response": "string",
  "status": "READY_TO_PROCESS",
  "createdAt": "2019-08-24T14:15:22Z",
  "updatedAt": "2019-08-24T14:15:22Z",
  "comment": "string"
}
Generate response (asynchronous request)
Asynchronous request to generate a response to the user’s query. The chat message history is taken into account.
The result can be obtained via the GET /api/knowledge-hub/chat/{chatId}/query/{queryId} endpoint, where queryId is the request identifier received in the current response.
Authorizations:
path Parameters
chatId (integer &lt;int64&gt;, LongId, required): Identifier of the chat in the knowledge base project.
Request Body schema: application/json (required)
query (string, required): Text of the user’s query.
settings (RagSettings object): Query processing settings.
Responses
Request samples
- Payload
{
  "query": "string",
  "settings": {
    "pipeline": "semantic",
    "search": {
      "similarityTopK": 5,
      "candidateRadius": 10,
      "reranker": {
        "type": "manual",
        "minScore": -10,
        "maxChunksPerDocument": 1,
        "maxChunks": 1,
        "scoreReductionLimit": 1
      },
      "fullTextSearch": {
        "strategy": "hybrid",
        "semanticPortion": 10,
        "ftsPortion": 1,
        "threshold": 1
      },
      "rephraseUserQuery": {
        "prompt": "Read the dialogue history, rephrase the current user question considering the history by adding it as context. Make the question more understandable, clear, and structured. Add similar queries and a title to the question, and return the text with the title."
      },
      "segment": "FAQ"
    },
    "llm": {
      "model": "string",
      "contextWindow": 4000,
      "maxTokens": 500,
      "temperature": 1,
      "topP": 1,
      "frequencyPenalty": -2,
      "presencePenalty": -2
    },
    "responseGeneration": {
      "prompt": "string"
    }
  }
}
Response samples
- 200
- 400
{
  "id": 0,
  "chatId": 0,
  "request": "string",
  "response": "string",
  "status": "READY_TO_PROCESS",
  "createdAt": "2019-08-24T14:15:22Z",
  "updatedAt": "2019-08-24T14:15:22Z",
  "comment": "string"
}
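Putting the chat endpoints together: submit the query asynchronously, then read the result from GET /api/knowledge-hub/chat/{chatId}/query/{queryId}. The sketch below abstracts HTTP behind injected `post` and `get` callables so only the control flow is shown; the set of in-progress status names is an assumption:

```python
def run_chat_query(chat_id, query, post, get, max_polls=20):
    """Submit an async chat query and long-poll until it completes.

    `post(chat_id, body)` submits the query and returns the initial status
    object (which carries the request `id`); `get(chat_id, query_id)` fetches
    the current status. Both stand in for authorized HTTP calls.
    """
    submitted = post(chat_id, {"query": query})
    query_id = submitted["id"]
    in_progress = {"READY_TO_PROCESS", "IN_PROGRESS"}  # assumed status names
    for _ in range(max_polls):
        status = get(chat_id, query_id)
        if status["status"] not in in_progress:
            return status
    raise TimeoutError(f"chat {chat_id} query {query_id} did not finish")
```

Injecting the transport this way keeps the polling logic testable without a live server; in production, `get` would also pass waitTimeSeconds to enable long polling.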
Processing status of the response generation request
Get the current processing status of the response generation request within a user chat.
Long polling is used if the waitTimeSeconds parameter is specified.
Authorizations:
path Parameters
chatId (integer &lt;int64&gt;, LongId, required): Identifier of the chat in the knowledge base project.
queryId (integer &lt;int64&gt;, LongId, required): Identifier of the response generation request.
query Parameters
waitTimeSeconds (integer &lt;int32&gt;, 0 to 30, default: 3): HTTP request timeout, in seconds. Used for long polling.
Responses
Response samples
- 200
{
  "id": 0,
  "chatId": 0,
  "request": "string",
  "response": "string",
  "status": "READY_TO_PROCESS",
  "createdAt": "2019-08-24T14:15:22Z",
  "updatedAt": "2019-08-24T14:15:22Z",
  "comment": "string"
}
Cancel response generation request within a user chat
Authorizations:
path Parameters
chatId (integer &lt;int64&gt;, LongId, required): Identifier of the chat in the knowledge base project.
queryId (integer &lt;int64&gt;, LongId, required): Identifier of the response generation request.
Responses
Response samples
- 200
{
  "id": 0,
  "chatId": 0,
  "request": "string",
  "response": "string",
  "status": "READY_TO_PROCESS",
  "createdAt": "2019-08-24T14:15:22Z",
  "updatedAt": "2019-08-24T14:15:22Z",
  "comment": "string"
}