llm.rb 4.13.0
llm.rb is a runtime for building AI systems that integrate directly with your application. It is not just an API wrapper. It provides a unified execution model for providers, tools, MCP servers, streaming, schemas, files, and state. It is built for engineers who want control over how these systems run. llm.rb stays close to Ruby, runs on the standard library by default, loads optional pieces only when needed, and remains easy to extend.

It also works well in Rails or ActiveRecord applications, where a small wrapper around context persistence is enough to save and restore long-lived conversation state across requests, jobs, or retries.

Most LLM libraries stop at request/response APIs. Building real systems means stitching together streaming, tools, state, persistence, and external services by hand. llm.rb provides a single execution model for all of these, so they compose naturally instead of becoming separate subsystems.
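To make the persistence idea concrete, here is a minimal sketch of the kind of "small wrapper around context persistence" described above. It uses only the Ruby standard library and does not show llm.rb's own API; the `ConversationStore` class, the file-based storage, and the role/content message format are illustrative assumptions, and in a real Rails application the same pattern would typically sit behind an ActiveRecord model instead of a file.

```ruby
require "json"
require "tmpdir"

# Hypothetical sketch: persist conversation state between requests,
# jobs, or retries. The class name and message format are assumptions
# for illustration, not part of llm.rb's API.
class ConversationStore
  def initialize(path)
    @path = path
  end

  # Save the message history (an array of role/content hashes) as JSON.
  def save(messages)
    File.write(@path, JSON.generate(messages))
  end

  # Restore the history, or an empty array if nothing was saved yet.
  def load
    return [] unless File.exist?(@path)
    JSON.parse(File.read(@path), symbolize_names: true)
  end
end

# Usage: restore state, append a turn, persist it again.
path = File.join(Dir.tmpdir, "llmrb_demo_conversation.json")
File.delete(path) if File.exist?(path)
store = ConversationStore.new(path)
history = store.load
history << { role: "user", content: "Hello" }
store.save(history)
```

A wrapper like this is all the glue a long-lived conversation needs: load before each turn, save after each turn, and the state survives process restarts.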