Orchestration
Build LLM agents as dynamic graphs.
Use our visual programming interface to rapidly build and experiment with complex LLM agents. Then export graphs to zero-abstraction code or host them on our scalable infrastructure.
RAG out of the box
Fully managed semantic search over your datasets. We handle chunking, embeddings, and the vector database.
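For intuition, here is a minimal sketch of the chunk → embed → search flow that a managed semantic-search layer automates. The function names are illustrative, and a toy bag-of-words vector stands in for a real embedding model:

```python
import math
from collections import Counter

def chunk(text: str, size: int = 50) -> list[str]:
    # Split text into fixed-size word chunks (production chunkers are smarter).
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a managed service uses a real model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def semantic_search(query: str, chunks: list[str], top_k: int = 1) -> list[str]:
    # Rank chunks by similarity to the query; a vector DB does this at scale.
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:top_k]
```

A vector database replaces the linear scan in `semantic_search` with an approximate nearest-neighbor index, which is what makes this fast at scale.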
Python code block
Need custom data transformation? Write custom Python code with access to all standard libraries.
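As an example of the kind of transformation such a node might perform (the node signature here is hypothetical, using only the standard library):

```python
import json

def transform(node_input: str) -> str:
    # Hypothetical custom node: parse JSON, keep only verified users,
    # and return their names sorted.
    users = json.loads(node_input)
    verified = [u["name"] for u in users if u.get("verified")]
    return json.dumps(sorted(verified))

raw = '[{"name": "ada", "verified": true}, {"name": "bob", "verified": false}]'
print(transform(raw))  # ["ada"]
```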
LLM providers
Effortlessly swap between GPT-4o, Claude, Llama 3, and many other models.
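Swappable providers typically come down to an adapter behind a shared call signature. A minimal sketch of that pattern, with stub adapters standing in for real API clients:

```python
from typing import Callable

# Hypothetical adapter registry: every provider exposes the same signature,
# so switching models is a one-line change in the node's config.
PROVIDERS: dict[str, Callable[[str], str]] = {
    "gpt-4o": lambda prompt: f"[gpt-4o] {prompt}",  # stub; a real adapter calls OpenAI
    "claude": lambda prompt: f"[claude] {prompt}",  # stub; a real adapter calls Anthropic
    "llama3": lambda prompt: f"[llama3] {prompt}",  # stub; a real adapter calls a local runtime
}

def run_llm_node(model: str, prompt: str) -> str:
    # The graph node only knows the shared interface, not the provider.
    return PROVIDERS[model](prompt)
```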
Real-time collaboration
Build and experiment with pipelines seamlessly as a team, with a Figma-like experience.
Local code interfacing
Seamlessly interface graph logic with local code execution. Call local functions in between node executions.
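One common shape for this is a registry of local callbacks that the graph invokes between nodes. A rough sketch, with all names hypothetical:

```python
from typing import Callable

# Hypothetical registry: the graph pauses at designated points and invokes
# locally registered functions before resuming the next node.
local_functions: dict[str, Callable[[str], str]] = {}

def register(name: str):
    def decorator(fn: Callable[[str], str]):
        local_functions[name] = fn
        return fn
    return decorator

@register("normalize")
def normalize(value: str) -> str:
    # A local function the remote graph cannot see into, only call.
    return value.strip().lower()

def run_graph(value: str, hooks: list[str]) -> str:
    # Between node executions, apply each requested local function in order.
    for name in hooks:
        value = local_functions[name](value)
    return value
```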
Remote debugger UI
Easily build and debug complex agents with many local function calls using our convenient UI.
Deployment
Deploy on our scalable Rust infrastructure.
Pipelines are executed on our custom async engine, written in Rust, and can be easily deployed as scalable API endpoints.
from lmnr import Laminar

l = Laminar("<YOUR_PROJECT_API_KEY>")
result = l.run(
    endpoint="my_endpoint_name",
    inputs={"input_node_name": "value"},
    env={"OPENAI_API_KEY": "key"},
    metadata={"session_id": "id"},
)
Generate zero-abstraction code from a graph using our open-source package.
Use our open-source package to convert nodes to pure functions. Code is generated right inside your repo.
import requests

def run_llm():
    res = requests.post(
        "https://api.openai.com/v1/chat/completions",
        ...
    )

def run_semantic_search():
    ...

def run_custom_code():
    ...
Observability
Monitor every trace.
All endpoint requests are logged, and you can easily inspect detailed traces of each pipeline execution. Logs are written asynchronously to ensure minimal latency overhead.
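The asynchronous write pattern can be sketched as a write-behind queue: requests enqueue a trace record and return immediately, while a background worker drains the queue off the hot path. All names below are illustrative:

```python
import queue
import threading

# Write-behind logging sketch: callers never block on the log sink.
log_queue: "queue.Queue[str | None]" = queue.Queue()
written: list[str] = []

def worker() -> None:
    while True:
        record = log_queue.get()
        if record is None:  # sentinel: shut the worker down
            break
        written.append(record)  # a real sink would persist to storage

t = threading.Thread(target=worker, daemon=True)
t.start()

def handle_request(trace: str) -> str:
    # Enqueue is effectively O(1); the request path pays no I/O cost.
    log_queue.put(trace)
    return "response"

handle_request("trace-1")
log_queue.put(None)
t.join()
```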
Evaluations
Run custom evaluations on thousands of datapoints in seconds.
Design flexible evaluator pipelines that seamlessly interface with local code. Run them on large datasets in parallel. Don't waste time maintaining custom evaluation infrastructure.
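Conceptually, a parallel evaluation run looks like mapping an evaluator over a dataset with a worker pool. A minimal sketch with a hypothetical exact-match evaluator (real evaluators might call an LLM judge or arbitrary local code):

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate(datapoint: dict) -> bool:
    # Hypothetical evaluator: case-insensitive exact match against the target.
    return datapoint["output"].strip().lower() == datapoint["target"].strip().lower()

def run_eval(dataset: list[dict]) -> float:
    # Score the whole dataset in parallel and return the pass rate.
    with ThreadPoolExecutor(max_workers=8) as pool:
        results = list(pool.map(evaluate, dataset))
    return sum(results) / len(results)

data = [
    {"output": "Paris", "target": "paris"},
    {"output": "Berlin", "target": "rome"},
]
print(run_eval(data))  # 0.5
```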