Overview
Knowledge Agent for Intelligence Query
KAI, which stands for Knowledge Agent for Intelligence Query, is an AI agent designed to change how data is queried, analyzed, and utilized. By embedding a Generative AI component into your database, KAI allows users to perform complex analytics and document searches using natural language queries. This brings a new level of accessibility and efficiency to data interaction, making it easier for both technical and non-technical users to extract valuable insights.
KAI is built to seamlessly integrate with existing databases and systems, enhancing them with powerful AI capabilities. Whether you need to search vast amounts of documents, perform complex data analytics, or interact with your data in a more intuitive way, KAI is equipped to meet those needs.
Natural Language Querying
Description: KAI enables users to interact with their databases using plain English, eliminating the need for complex SQL queries or other technical languages.
Benefit: Makes data access and analysis more accessible to non-technical users.
Generative AI Integration
Description: Incorporates state-of-the-art Generative AI models to assist with data retrieval, analysis, and content generation.
Benefit: Enhances the intelligence and flexibility of queries, enabling more accurate and insightful responses.
Real-time Analytics
Description: Provides real-time processing and analysis of data, allowing for immediate insights and decision-making.
Benefit: Supports timely and informed decisions, critical in fast-paced environments.
Document Search and Management
Description: KAI includes powerful tools for searching and managing large volumes of documents, making it easy to find relevant information quickly.
Benefit: Increases productivity by reducing the time spent on manual document searches.
Scalable and Flexible Architecture
Description: Designed to be highly scalable, KAI can be deployed across different environments, from local setups to cloud-based infrastructures.
Benefit: Ensures that KAI can grow with your organization’s needs and integrate with various systems.
Customizable AI Models
Description: Allows the use of custom AI models tailored to specific business needs.
Benefit: Provides flexibility to optimize the AI component for specialized tasks and industries.
Here is a quickstart guide for setting up and running KAI using Docker Compose.
Docker: Ensure Docker is installed on your system. You can download it from Docker's official website.
Docker Compose: Docker Compose is included with Docker Desktop. For standalone installations, you can follow Docker Compose installation instructions.
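To confirm both are available before continuing, you can check their versions from a terminal (if you installed the older standalone binary, the command is docker-compose rather than docker compose):

```bash
# Check that Docker and the Compose plugin are installed and on your PATH
docker --version
docker compose version
```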
Create a .env file using the .env.example file as a reference.
Make sure the required fields are configured for the engine to run; the full list of variables is documented in the environment variable reference below.
Generate an ENCRYPT_KEY and paste it into the .env file, like this: ENCRYPT_KEY=4Mbe2GYx0Hk94o_f-irVHk1fKkCGAt1R7LLw5wHVghI
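The exact key-generation commands are not reproduced here; one common way to produce a Fernet key, assuming Python with the cryptography package installed, is:

```bash
# Prints a fresh Fernet key suitable for ENCRYPT_KEY
python -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode())"
```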
Navigate to your project directory where the docker-compose.yml file is located:
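For example (the path below is a placeholder for wherever you cloned or extracted the project):

```bash
cd /path/to/kai
```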
Start the services using Docker Compose:
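```bash
# Build (if needed) and start the typesense and kai_engine containers in the background
docker compose up -d
```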
The -d flag runs the containers in detached mode (in the background).
Verify the containers are running:
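```bash
# List the Compose services and their current status
docker compose ps
```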
You should see output indicating that both the typesense and kai_engine services are up and running.
To stop the services, run:
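```bash
# Stop and remove the containers (data written to bind mounts on disk is kept)
docker compose down
```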
This will stop and remove the containers, but it will retain the data in the app/data/dbdata directory for Typesense.
Network Configuration: The services are connected via the kai_network network, allowing them to communicate with each other.
Data Persistence: The typesense container’s data is stored in ./app/data/dbdata to persist data across container restarts.
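The docker-compose.yml itself is not reproduced on this page. As a rough, illustrative sketch only (the image tag, port mappings, and Typesense startup flags are assumptions; refer to the file shipped with the project), it is shaped roughly like this:

```yaml
# Illustrative sketch, not the project's actual docker-compose.yml
services:
  typesense:
    image: typesense/typesense:0.25.2              # version tag is an assumption
    command: "--data-dir /data --api-key=${TYPESENSE_API_KEY}"
    volumes:
      - ./app/data/dbdata:/data                    # persists data across restarts
    ports:
      - "8108:8108"
    networks:
      - kai_network

  kai_engine:
    build: .
    env_file: .env
    ports:
      - "8015:8015"                                # APP_PORT from the .env example
    depends_on:
      - typesense
    networks:
      - kai_network

networks:
  kai_network:
```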
With this setup, you should be able to get KAI up and running with Docker Compose quickly.
KAI relies on several environment variables to configure and control its behavior. Below is a detailed description of each environment variable used in the project:
APP_HOST
Description: The host address on which the application will run.
Example: "0.0.0.0"
APP_PORT
Description: The port number on which the application will listen for incoming requests.
Example: "8015"
APP_ENABLE_HOT_RELOAD
Description: Enables or disables hot reloading of the application. Set to 1 to enable hot reload, or 0 to disable it.
Example: "0"
TYPESENSE_API_KEY
Description: The API key used to authenticate requests to the Typesense server.
Example: "kai_typesense"
TYPESENSE_HOST
Description: The host address of the Typesense server.
Example: "localhost"
TYPESENSE_PORT
Description: The port number on which the Typesense server listens.
Example: "8108"
TYPESENSE_PROTOCOL
Description: The protocol used to communicate with the Typesense server.
Example: "HTTP"
TYPESENSE_TIMEOUT
Description: The timeout value (in seconds) for requests to the Typesense server.
Example: "2"
CHAT_MODEL
Description: The model used for chat and natural language understanding tasks.
Example: "gpt-4o-mini"
EMBEDDING_MODEL
Description: The model used for generating embeddings from text data.
Example: "text-embedding-ada-002"
OPENAI_API_KEY
Description: The API key used to authenticate with OpenAI services.
Example: ""
(To be provided)
OLLAMA_API_BASE
Description: The base URL for the Ollama API.
Example: ""
(To be provided)
HUGGINGFACEHUB_API_TOKEN
Description: The API token for accessing Hugging Face Hub services.
Example: ""
(To be provided)
AGENT_MAX_ITERATIONS
Description: The maximum number of iterations the agent will perform. This is useful for controlling resource usage.
Example: "20"
DH_ENGINE_TIMEOUT
Description: The timeout value (in seconds) for the engine to return a response.
Example: "150"
SQL_EXECUTION_TIMEOUT
Description: The timeout (in seconds) for executing SQL queries. This is important for recovering from errors during execution.
Example: "60"
UPPER_LIMIT_QUERY_RETURN_ROWS
Description: The upper limit on the number of rows returned from the query engine. This acts similarly to the LIMIT clause in SQL.
Example: "50"
ENCRYPT_KEY
Description: The encryption key used for securely storing database connection data in Typesense. Use a Fernet-generated key for this.
Example: "f0KVMZHZPgdMStBmVIn2XD049e6Mun7ZEDhf1W7MRnw="
These environment variables provide flexibility and control over the behavior of the KAI API, ensuring that the application can be easily configured for different environments and use cases.
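Putting the documented example values together, a complete .env would look roughly like the sketch below (the quoting style and defaults should follow the project's .env.example; blank keys must be filled in before the corresponding provider can be used):

```bash
# Illustrative .env assembled from the example values above
APP_HOST="0.0.0.0"
APP_PORT="8015"
APP_ENABLE_HOT_RELOAD="0"

TYPESENSE_API_KEY="kai_typesense"
TYPESENSE_HOST="localhost"        # with Docker Compose this is often the service name, e.g. typesense
TYPESENSE_PORT="8108"
TYPESENSE_PROTOCOL="HTTP"
TYPESENSE_TIMEOUT="2"

CHAT_MODEL="gpt-4o-mini"
EMBEDDING_MODEL="text-embedding-ada-002"
OPENAI_API_KEY=""                 # to be provided
OLLAMA_API_BASE=""                # to be provided
HUGGINGFACEHUB_API_TOKEN=""       # to be provided

AGENT_MAX_ITERATIONS="20"
DH_ENGINE_TIMEOUT="150"
SQL_EXECUTION_TIMEOUT="60"
UPPER_LIMIT_QUERY_RETURN_ROWS="50"
ENCRYPT_KEY="f0KVMZHZPgdMStBmVIn2XD049e6Mun7ZEDhf1W7MRnw="
```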
In your browser, visit the application at the host and port configured by APP_HOST and APP_PORT to confirm it is running.
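For example, assuming the example APP_PORT value above (8015) and that you are on the same machine, you can also check that the service is reachable from the command line:

```bash
# Basic reachability check; the exact response depends on the KAI API routes
curl http://localhost:8015
```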