Rig - Build LLM Applications in Rust

Build modular and scalable LLM applications in Rust

cargo add rig-core

Rig Demo

basic_llm.rs
use rig::providers::openai;
use rig::completion::Prompt;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create an OpenAI client from the OPENAI_API_KEY environment variable
    let client = openai::Client::from_env();

    // Build an agent backed by the gpt-4 model
    let gpt4 = client.agent("gpt-4").build();

    // Send the prompt and await the model's completion
    let response = gpt4.prompt("Translate 'Hello, world!' to French.").await?;
    println!("Translation: {}", response);
    Ok(())
}

Core Features of Rig

Unified LLM Interface

Consistent API across different LLM providers, simplifying integration and reducing vendor lock-in.
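
A minimal sketch of what this buys you: any agent implements the same Prompt trait, so application code can stay provider-agnostic (the translate helper below is illustrative, not part of Rig):

use rig::completion::Prompt;

// Accepts any provider's agent, since they all implement `Prompt`.
async fn translate(model: &impl Prompt, text: &str) -> Result<String, Box<dyn std::error::Error>> {
    let prompt = format!("Translate '{}' to French.", text);
    Ok(model.prompt(prompt.as_str()).await?)
}

Swapping OpenAI for another supported provider changes only the client construction, not this function.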

Rust-Powered Performance

Leverage Rust's zero-cost abstractions and memory safety for high-performance LLM operations.

Advanced AI Workflow Abstractions

Implement complex AI systems like RAG and multi-agent setups with pre-built, modular components.
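
As a sketch, a minimal RAG-flavored agent can be assembled from the same builder used above. Method names follow recent rig-core releases and may differ in older ones; recent releases also support retrieving context dynamically from a vector index through the same builder:

use rig::providers::openai;
use rig::completion::Prompt;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = openai::Client::from_env();

    // A RAG-flavored agent: context documents are attached to every prompt.
    // Per-prompt retrieval from a vector index follows the same builder pattern.
    let agent = client
        .agent("gpt-4")
        .preamble("Answer strictly from the provided context.")
        .context("Rig supports OpenAI and Cohere model providers.")
        .build();

    let answer = agent.prompt("Which providers does Rig support?").await?;
    println!("{}", answer);
    Ok(())
}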

Type-Safe LLM Interactions

Utilize Rust's strong type system to ensure compile-time correctness in LLM interactions.
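
Rig's extractor pattern is one example: the model's reply is deserialized directly into a Rust struct, so malformed output surfaces as an error at the boundary instead of propagating as an untyped string. A sketch, with an illustrative Invoice type and derive bounds assumed from recent rig-core releases:

use rig::providers::openai;
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};

// Illustrative target type: the model's reply must parse into this struct.
#[derive(Debug, Serialize, Deserialize, JsonSchema)]
struct Invoice {
    vendor: String,
    total_cents: u64,
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = openai::Client::from_env();
    let extractor = client.extractor::<Invoice>("gpt-4").build();

    let invoice = extractor
        .extract("ACME Corp billed us $120.50 on March 3rd.")
        .await?;
    println!("{:?}", invoice);
    Ok(())
}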

Seamless Vector Store Integration

Built-in support for vector stores, enabling efficient similarity search and retrieval for AI applications.
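
A sketch of the query side, written against the VectorStoreIndex trait so it works with any backing store; the top_n result shape of (score, id, document) follows recent rig-core releases:

use rig::vector_store::{VectorStoreError, VectorStoreIndex};

// Illustrative helper: print the 3 stored documents most similar to `query`.
// Works with any index type implementing `VectorStoreIndex`.
async fn search(index: &impl VectorStoreIndex, query: &str) -> Result<(), VectorStoreError> {
    for (score, id, doc) in index.top_n::<String>(query, 3).await? {
        println!("{:.3} {}: {}", score, id, doc);
    }
    Ok(())
}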

Flexible Embedding Support

Easy-to-use APIs for working with embeddings, crucial for semantic search and content-based recommendations.
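
A sketch of batch-embedding a handful of documents; the model name and builder methods are taken from recent rig-core releases and may differ in older ones:

use rig::embeddings::EmbeddingsBuilder;
use rig::providers::openai;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = openai::Client::from_env();
    let model = client.embedding_model("text-embedding-ada-002");

    // Embed several documents in a single batched request.
    let embeddings = EmbeddingsBuilder::new(model)
        .simple_document("doc1", "Rig is a Rust library for building LLM applications.")
        .simple_document("doc2", "Embeddings map text into a shared vector space.")
        .build()
        .await?;

    println!("embedded {} documents", embeddings.len());
    Ok(())
}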

Why Developers Choose Rig for AI Development

Efficient Development

  • Type-safe API reduces runtime errors
  • Async-first design for optimal resource utilization (see the concurrency sketch after this list)
  • Seamless integration with Rust's ecosystem (Tokio, Serde, etc.)
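
For instance, independent completions can be awaited concurrently on Tokio with nothing beyond the standard macros; a minimal sketch:

use rig::providers::openai;
use rig::completion::Prompt;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = openai::Client::from_env();
    let agent = client.agent("gpt-4").build();

    // Await both prompts concurrently; `try_join!` fails fast on error.
    let (fr, de) = tokio::try_join!(
        async { agent.prompt("Translate 'Hello' to French.").await },
        async { agent.prompt("Translate 'Hello' to German.").await },
    )?;
    println!("{} / {}", fr, de);
    Ok(())
}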

Production-Ready Architecture

  • Modular design for easy customization and extension
  • Comprehensive error handling with custom error types
  • Built-in support for tracing and logging (see the setup sketch below)
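
Rig is instrumented with the tracing crate, so wiring up a standard subscriber is enough to surface its spans and events; a sketch (the DEBUG level here is just an illustration):

use tracing::Level;

fn main() {
    // Print spans and events to stdout at DEBUG and above.
    tracing_subscriber::fmt()
        .with_max_level(Level::DEBUG)
        .init();

    // ...build clients and agents as usual; Rig's instrumentation now
    // shows up in the logs.
}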

Connect with the Rig Community