# HLLM (Higher-Level Language Models)

HLLM is a multi-agent orchestration playground for designing, testing, and visualizing LLM agent topologies with real-time execution and deep observability. It provides 14 configurable topologies, 100+ AI models via OpenRouter, and full execution tracing.

## Getting Started

- [Quick Start](https://www.hllm.dev/docs/quick-start.md): Set up HLLM and run your first topology in minutes.
- [Authentication](https://www.hllm.dev/docs/authentication.md): Configure API keys and manage user sessions.
- [API Key Configuration](https://www.hllm.dev/docs/api-keys.md): Add OpenRouter and provider API keys for model access.

## Topologies

HLLM supports 14 agent topologies organized by execution pattern.

### Linear

- [Single](https://www.hllm.dev/docs/guides/playground/topologies/single.md): Direct single-agent execution with one model.
- [Sequential](https://www.hllm.dev/docs/guides/playground/topologies/sequential.md): Chain multiple agents in sequence, passing output forward.

### Fan-Out

- [Parallel](https://www.hllm.dev/docs/guides/playground/topologies/parallel.md): Execute multiple agents simultaneously and collect all results.
- [Map-Reduce](https://www.hllm.dev/docs/guides/playground/topologies/map-reduce.md): Distribute work across agents, then aggregate with a reducer.
- [Scatter](https://www.hllm.dev/docs/guides/playground/topologies/scatter.md): Broadcast to multiple agents and gather diverse responses.

### Adversarial

- [Debate](https://www.hllm.dev/docs/guides/playground/topologies/debate.md): Two agents argue opposing positions with a judge synthesizing conclusions.

### Cyclic

- [Reflection](https://www.hllm.dev/docs/guides/playground/topologies/reflection.md): Self-improvement loop where an agent critiques and refines its own output.

### Mesh

- [Consensus](https://www.hllm.dev/docs/guides/playground/topologies/consensus.md): Multiple agents collaborate to reach agreement on a response.
- [Brainstorm](https://www.hllm.dev/docs/guides/playground/topologies/brainstorm.md): Agents generate ideas freely, then synthesize the best concepts.

### Hierarchical

- [Decomposition](https://www.hllm.dev/docs/guides/playground/topologies/decomposition.md): Break complex tasks into subtasks assigned to specialist agents.
- [Rhetorical Triangle](https://www.hllm.dev/docs/guides/playground/topologies/rhetorical-triangle.md): Analyze from ethos, pathos, and logos perspectives.

### Tree

- [Tree of Thoughts](https://www.hllm.dev/docs/guides/playground/topologies/tree-of-thoughts.md): Explore multiple reasoning paths with branching and evaluation.

### Agentic

- [ReAct](https://www.hllm.dev/docs/guides/playground/topologies/react.md): Reasoning and acting loop with tool use for complex problem-solving.

### Council

- [Karpathy Council](https://www.hllm.dev/docs/guides/playground/topologies/karpathy-council.md): Multi-agent council with diverse expert personas reaching consensus.

## API Reference

39 endpoints grouped by domain.

### Health & Monitoring

- [GET /health](https://www.hllm.dev/api/health.md): Check API health status.
- [GET /stats](https://www.hllm.dev/api/stats.md): Get public usage statistics.
- [GET /metrics](https://www.hllm.dev/api/metrics.md): Retrieve agent performance metrics.

### Chat Sessions

- [POST /sessions](https://www.hllm.dev/api/sessions-create.md): Create a new chat session.
- [GET /sessions](https://www.hllm.dev/api/sessions-list.md): List all user sessions.
- [GET /sessions/:id](https://www.hllm.dev/api/sessions-get.md): Get session details and messages.
- [PATCH /sessions/:id](https://www.hllm.dev/api/sessions-update.md): Update session title or system prompt.
- [DELETE /sessions/:id](https://www.hllm.dev/api/sessions-delete.md): Delete a session and its messages.
- [POST /sessions/:id/messages](https://www.hllm.dev/api/messages-add.md): Add a message to a session.
- [DELETE /sessions/:id/messages](https://www.hllm.dev/api/messages-clear.md): Clear all messages in a session.

### Execution

- [POST /execute](https://www.hllm.dev/api/execute.md): Execute a topology with streaming SSE response.
- [GET /logs](https://www.hllm.dev/api/logs.md): Get execution logs for topology runs.

### Prompts

- [POST /prompts](https://www.hllm.dev/api/prompts-create.md): Create a new prompt in the library.
- [GET /prompts](https://www.hllm.dev/api/prompts-list.md): List prompts with optional filtering.
- [GET /prompts/:id](https://www.hllm.dev/api/prompts-get.md): Get prompt details.
- [PATCH /prompts/:id](https://www.hllm.dev/api/prompts-update.md): Update a prompt.
- [DELETE /prompts/:id](https://www.hllm.dev/api/prompts-delete.md): Delete a prompt.
- [POST /prompts/:id/usage](https://www.hllm.dev/api/prompts-usage.md): Increment prompt usage count.
- [POST /prompts/generate](https://www.hllm.dev/api/prompts-generate.md): Generate a prompt using AI.
- [POST /prompts/improve](https://www.hllm.dev/api/prompts-improve.md): Improve an existing prompt.

### Files

- [POST /files](https://www.hllm.dev/api/files-upload.md): Upload a file for use in topologies.
- [GET /files/:id](https://www.hllm.dev/api/files-get.md): Get file metadata and download URL.
- [DELETE /files/:id](https://www.hllm.dev/api/files-delete.md): Delete an uploaded file.

### User & Profile

- [GET /user/profile](https://www.hllm.dev/api/user-profile.md): Get current user profile.
- [PATCH /user/profile](https://www.hllm.dev/api/user-profile-update.md): Update user profile and preferences.
- [GET /user/stats](https://www.hllm.dev/api/user-stats.md): Get usage statistics for the user.

### API Keys

- [POST /api-keys](https://www.hllm.dev/api/api-keys-create.md): Create a new API key.
- [GET /api-keys](https://www.hllm.dev/api/api-keys-list.md): List all API keys for the user.
- [DELETE /api-keys/:id](https://www.hllm.dev/api/api-keys-delete.md): Delete an API key.
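The `POST /execute` endpoint above streams results as server-sent events. A minimal sketch of parsing such a stream, assuming conventional `data:`-framed SSE payloads terminated by a `[DONE]` sentinel — the event payload shape and the sentinel are assumptions for illustration, not the documented HLLM wire format:

```typescript
// Hypothetical sketch: extract data payloads from a buffered SSE body.
// The "data:" framing follows the SSE spec; the "[DONE]" end-of-stream
// sentinel is an assumption about HLLM's /execute stream.
type SseEvent = { data: string };

function parseSseEvents(chunk: string): SseEvent[] {
  const events: SseEvent[] = [];
  // SSE events are separated by a blank line; each payload line
  // starts with the "data:" field name.
  for (const block of chunk.split("\n\n")) {
    const dataLines = block
      .split("\n")
      .filter((line) => line.startsWith("data:"))
      .map((line) => line.slice(5).trim());
    if (dataLines.length === 0) continue;
    const data = dataLines.join("\n"); // multi-line data fields are joined
    if (data === "[DONE]") break;      // assumed end-of-stream sentinel
    events.push({ data });
  }
  return events;
}
```

In practice you would feed decoded chunks from a `fetch` response body reader into a buffer and call the parser on complete (blank-line-terminated) events.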
### TPMJS Tools

- [GET /tools](https://www.hllm.dev/api/tools-list.md): List all available TPMJS tools.
- [GET /tools/:id](https://www.hllm.dev/api/tools-describe.md): Get detailed tool description.
- [POST /tools/:id/execute](https://www.hllm.dev/api/tools-execute.md): Execute a TPMJS tool directly.
- [GET /env](https://www.hllm.dev/api/env-list.md): List all TPMJS environment variables.
- [PUT /env/:key](https://www.hllm.dev/api/env-set.md): Set or update an environment variable.
- [DELETE /env/:key](https://www.hllm.dev/api/env-delete.md): Delete an environment variable.

### Import/Export

- [GET /export](https://www.hllm.dev/api/export.md): Export user data (sessions, prompts, settings, files).
- [POST /import](https://www.hllm.dev/api/import.md): Import previously exported data.

### Models

- [GET /models](https://www.hllm.dev/api/models-list.md): List all available AI models with provider filtering.

### Speech & TTS

- [POST /tts](https://www.hllm.dev/api/tts.md): Convert text to speech using configured TTS provider.

## Agent Configuration

- [Model Selection](https://www.hllm.dev/docs/agent-config/models.md): Choose from 100+ models via OpenRouter or direct providers.
- [Temperature](https://www.hllm.dev/docs/agent-config/temperature.md): Control response randomness (0.0-2.0).
- [Max Tokens](https://www.hllm.dev/docs/agent-config/max-tokens.md): Set maximum response length.
- [Top P](https://www.hllm.dev/docs/agent-config/top-p.md): Configure nucleus sampling parameter.
- [System Prompts](https://www.hllm.dev/docs/agent-config/system-prompts.md): Define agent behavior and persona.
- [Tool Configuration](https://www.hllm.dev/docs/agent-config/tools.md): Enable TPMJS tools, custom tools, or MCP servers.

## Executors

Execution engines that power each topology pattern.

### Linear Executors

- [SingleExecutor](https://www.hllm.dev/docs/executors/single.md): Direct model invocation with streaming.
- [SequentialExecutor](https://www.hllm.dev/docs/executors/sequential.md): Chained execution with context passing.

### Fan-Out Executors

- [ParallelExecutor](https://www.hllm.dev/docs/executors/parallel.md): Concurrent execution with result collection.
- [MapReduceExecutor](https://www.hllm.dev/docs/executors/map-reduce.md): Distributed processing with aggregation.
- [ScatterExecutor](https://www.hllm.dev/docs/executors/scatter.md): Broadcast and gather pattern.

### Adversarial Executors

- [DebateExecutor](https://www.hllm.dev/docs/executors/debate.md): Multi-round argumentation with judging.

### Cyclic Executors

- [ReflectionExecutor](https://www.hllm.dev/docs/executors/reflection.md): Iterative self-improvement loop.

### Mesh Executors

- [ConsensusExecutor](https://www.hllm.dev/docs/executors/consensus.md): Collaborative agreement building.
- [BrainstormExecutor](https://www.hllm.dev/docs/executors/brainstorm.md): Idea generation and synthesis.

### Hierarchical Executors

- [DecompositionExecutor](https://www.hllm.dev/docs/executors/decomposition.md): Task breakdown and delegation.
- [RhetoricalTriangleExecutor](https://www.hllm.dev/docs/executors/rhetorical-triangle.md): Multi-perspective analysis.

### Tree Executors

- [TreeOfThoughtsExecutor](https://www.hllm.dev/docs/executors/tree-of-thoughts.md): Branching exploration with evaluation.

### Agentic Executors

- [ReactExecutor](https://www.hllm.dev/docs/executors/react.md): Reasoning-action loop with tool integration.

### Council Executors

- [KarpathyCouncilExecutor](https://www.hllm.dev/docs/executors/karpathy-council.md): Expert panel deliberation.

## Components

### Chat System

- [ChatInterface](https://www.hllm.dev/docs/components/chat-interface.md): Main conversation UI with streaming support.
- [MessageList](https://www.hllm.dev/docs/components/message-list.md): Render messages with markdown and code highlighting.
- [InputArea](https://www.hllm.dev/docs/components/input-area.md): Message composition with file attachments.

### Topology Studio

- [TopologySelector](https://www.hllm.dev/docs/components/topology-selector.md): Browse and select from 14 topologies.
- [TopologyVisualizer](https://www.hllm.dev/docs/components/topology-visualizer.md): Real-time execution flow visualization.
- [AgentConfigPanel](https://www.hllm.dev/docs/components/agent-config-panel.md): Configure agents for each topology node.

### Observability

- [MetricsPanel](https://www.hllm.dev/docs/components/metrics-panel.md): View token usage, latency, and cost metrics.
- [ExecutionTrace](https://www.hllm.dev/docs/components/execution-trace.md): Step-by-step execution timeline.
- [TokenCounter](https://www.hllm.dev/docs/components/token-counter.md): Real-time token usage display.

### Media

- [FileAttachments](https://www.hllm.dev/docs/components/file-attachments.md): Upload and attach files to messages.
- [TtsButton](https://www.hllm.dev/docs/components/tts-button.md): Text-to-speech playback for responses.
- [ImageViewer](https://www.hllm.dev/docs/components/image-viewer.md): Display generated or uploaded images.

## SDK

### Installation

```bash
npm install @hllm/sdk
```

### Basic Usage

- [Client Setup](https://www.hllm.dev/docs/sdk/client.md): Initialize the HLLM client with API key.
- [Streaming Execution](https://www.hllm.dev/docs/sdk/streaming.md): Execute topologies with real-time streaming.
- [Session Management](https://www.hllm.dev/docs/sdk/sessions.md): Create, update, and manage chat sessions.
- [Topology Configuration](https://www.hllm.dev/docs/sdk/topologies.md): Configure and customize topology parameters.

### Examples

- [Single Agent](https://www.hllm.dev/docs/sdk/examples/single.md): Basic single-model execution.
- [Debate Topology](https://www.hllm.dev/docs/sdk/examples/debate.md): Set up adversarial debate between agents.
- [ReAct with Tools](https://www.hllm.dev/docs/sdk/examples/react-tools.md): Agentic reasoning with TPMJS tools.
- [Custom Topology](https://www.hllm.dev/docs/sdk/examples/custom.md): Build custom agent orchestration patterns.
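To give a feel for how the agent-configuration parameters above fit together, here is a hedged sketch of a topology configuration object with range validation. The interface and field names (`TopologyConfig`, `AgentConfig`, `systemPrompt`, etc.) are illustrative assumptions, not the SDK's documented schema; only the temperature range (0.0-2.0) comes from the Agent Configuration docs, and the top-p range follows standard nucleus-sampling convention.

```typescript
// Hypothetical shape of a topology config; field names are assumptions.
interface AgentConfig {
  model: string;        // e.g. an OpenRouter model id
  systemPrompt: string; // defines agent behavior and persona
  temperature: number;  // response randomness, documented range 0.0-2.0
  maxTokens?: number;   // maximum response length
  topP?: number;        // nucleus sampling parameter, conventionally [0, 1]
}

interface TopologyConfig {
  topology: string;     // one of the 14 topology ids, e.g. "debate"
  agents: AgentConfig[];
}

// Check documented parameter ranges before submitting a config.
function validateConfig(cfg: TopologyConfig): string[] {
  const errors: string[] = [];
  if (cfg.agents.length === 0) errors.push("at least one agent is required");
  cfg.agents.forEach((a, i) => {
    if (a.temperature < 0 || a.temperature > 2) {
      errors.push(`agent ${i}: temperature must be in [0.0, 2.0]`);
    }
    if (a.topP !== undefined && (a.topP < 0 || a.topP > 1)) {
      errors.push(`agent ${i}: topP must be in [0, 1]`);
    }
  });
  return errors;
}
```

Validating client-side like this surfaces out-of-range parameters before a topology run is submitted, rather than after a rejected API call.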