| Field | Value |
|---|---|
| Crates.io | autogpt |
| lib.rs | autogpt |
| version | 0.1.14 |
| created_at | 2024-04-06 13:31:48.32091+00 |
| updated_at | 2025-07-13 09:12:02.357662+00 |
| description | 🦀 A Pure Rust Framework For Building AGIs. |
| homepage | https://kevin-rs.dev |
| repository | https://github.com/kevin-rs/autogpt |
| max_upload_size | |
| id | 1198289 |
| size | 615,275 |
| 🐧 Linux (Recommended) | 🪟 Windows | 🐳 | 🐳 |
|---|---|---|---|
| Method 1: Download Executable File<br>Method 2: `cargo install autogpt --all-features` | Download `.exe` File<br>`cargo install autogpt --all-features` | `docker pull kevinrsdev/autogpt:0.1.14` | `docker pull kevinrsdev/orchgpt:0.1.14` |
| Set Environment Variables | Set Environment Variables | Set Environment Variables | Set Environment Variables |
| `autogpt -h`<br>`orchgpt -h` | `autogpt.exe -h` | `docker run kevinrsdev/autogpt:0.1.14 -h` | `docker run kevinrsdev/orchgpt:0.1.14 -h` |
AutoGPT is a pure Rust framework that simplifies AI agent creation and management for various tasks. Its remarkable speed and versatility are complemented by a mesh of built-in, interconnected GPT agents, ensuring exceptional performance and adaptability.
AutoGPT agents are modular and autonomous, built from composable components.
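As a loose illustration of this composability (hypothetical types and names, not the crate's actual API), an agent can be modeled as a trait that concrete agents implement with their own goals and execution logic:

```rust
// Hypothetical sketch of a composable agent; not autogpt's real API.
// A trait captures the shared agent contract, and each concrete agent
// plugs in its own goal and task-execution behavior.
trait Agent {
    fn name(&self) -> &str;
    fn execute(&mut self, task: &str) -> Result<String, String>;
}

struct ArchitectAgent {
    goal: String,
}

impl Agent for ArchitectAgent {
    fn name(&self) -> &str {
        "ArchitectGPT"
    }

    fn execute(&mut self, task: &str) -> Result<String, String> {
        // A real agent would call an LLM here; this stub returns a plan string.
        Ok(format!("{}: plan for '{}'", self.goal, task))
    }
}

fn main() {
    let mut agent = ArchitectAgent {
        goal: "Design system architecture".into(),
    };
    println!("{}", agent.execute("crud app").unwrap());
}
```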
AutoGPT is designed for flexibility, integration, and scalability.
Please refer to our tutorial for guidance on installing, running, and/or building the CLI from source using either Cargo or Docker.
> [!NOTE]
> For optimal performance and compatibility, we strongly advise using a Linux operating system to install this CLI.
AutoGPT supports three modes of operation: a non-agentic mode, plus agentic operation in both standalone and distributed (networked) setups:
In this mode, you can use the CLI to interact with the LLM directly; there is no need to define or configure agents. Use the `-p` flag to send prompts to your preferred LLM provider quickly and easily.
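For example, assuming a provider is already configured via environment variables, a one-off prompt (the prompt text below is purely illustrative) might look like:

```shell
# Send a single prompt straight to the configured LLM provider;
# no agents are defined or orchestrated in this mode.
autogpt -p "Summarize the tradeoffs of async Rust"
```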
In this mode, the user runs an individual autogpt agent directly via a subcommand (e.g., `autogpt arch`). Each agent operates independently, without needing a networked orchestrator.
```
                +------------------------------------+
                |                User                |
                |              Provides              |
                |           Project Prompt           |
                +------------------+-----------------+
                                   |
                                   v
                +------------------+-----------------+
                |             ManagerGPT             |
                |          Distributes Tasks         |
                |        to Backend, Frontend,       |
                |         Designer, Architect        |
                +------------------+-----------------+
                                   |
                                   v
+--------------------------+-----------+----------+----------------------+
|                          |                      |                      |
v                          v                      v                      v
+------------+   +-----------------+      +-------------+    +-------------+
|  Backend   |   |    Frontend     |      |  Designer   |    |  Architect  |
|    GPT     |   |      GPT        | ...  |    GPT      |    |     GPT     |
|            |   |                 |      | (Optional)  |    |             |
+------------+   +-----------------+      +-------------+    +-------------+
|                          |                      |                      |
v                          v                      v                      v
(Backend Logic)    (Frontend Logic)  ...  (Designer Logic)  (Architect Logic)
|                          |                      |                      |
+--------------------------+----------+-----------+----------------------+
                                   |
                                   v
                +------------------+-----------------+
                |             ManagerGPT             |
                |      Collects and Consolidates     |
                |        Results from Agents         |
                +------------------+-----------------+
                                   |
                                   v
                +------------------+-----------------+
                |                User                |
                |           Receives Final           |
                |             Output from            |
                |             ManagerGPT             |
                +------------------------------------+
```
In standalone mode, autogpt spawns ManagerGPT and the individual agent instances (ArchitectGPT, BackendGPT, FrontendGPT) locally. In networking mode, autogpt connects to an external orchestrator (orchgpt) over a secure TLS-encrypted TCP channel. This orchestrator manages agent lifecycles, routes commands, and enables rich inter-agent collaboration using a unified protocol.
AutoGPT introduces a novel and scalable communication protocol called IAC (Inter/Intra-Agent Communication), enabling seamless and secure interactions between agents and orchestrators, inspired by operating system IPC mechanisms.
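IAC's wire details aren't documented here, but as a rough sketch (hypothetical framing, not the actual protocol), messages on a TCP stream are typically length-prefixed so the receiver knows where each encoded payload ends:

```rust
// Hypothetical length-prefixed framing sketch; not autogpt's actual IAC
// wire format. Each frame is a 4-byte big-endian length followed by the
// payload (which in autogpt would be a protobuf-encoded command).
fn frame(payload: &[u8]) -> Vec<u8> {
    let mut buf = Vec::with_capacity(4 + payload.len());
    buf.extend_from_slice(&(payload.len() as u32).to_be_bytes());
    buf.extend_from_slice(payload);
    buf
}

/// Returns (payload, rest) once a full frame is available, else None.
fn deframe(buf: &[u8]) -> Option<(&[u8], &[u8])> {
    if buf.len() < 4 {
        return None; // Length prefix not yet complete.
    }
    let len = u32::from_be_bytes([buf[0], buf[1], buf[2], buf[3]]) as usize;
    if buf.len() < 4 + len {
        return None; // Payload not yet complete.
    }
    Some((&buf[4..4 + len], &buf[4 + len..]))
}

fn main() {
    let framed = frame(b"create fastapi app");
    let (payload, rest) = deframe(&framed).unwrap();
    assert_eq!(payload, &b"create fastapi app"[..]);
    assert!(rest.is_empty());
    println!("roundtrip ok: {} bytes", payload.len());
}
```

This framing style is what makes stream-based protocols like protobuf-over-TCP workable: the transport delivers bytes, and the prefix restores message boundaries.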
In networking mode, AutoGPT utilizes a layered architecture:
```
                +------------------------------------+
                |                User                |
                |        Sends Prompt via CLI        |
                +------------------+-----------------+
                                   |
                                   v
                     TLS + Protobuf over TCP to:
                +------------------+-----------------+
                |            Orchestrator            |
                |    Receives and Routes Commands    |
                +-----------+----------+-------------+
                            |          |
         +------------------+          +------------------+
         |                                                |
         v                                                v
+--------------------+                         +--------------------+
|    ArchitectGPT    |<---------- IAC -------->|     ManagerGPT     |
+--------------------+                         +--------------------+
|                         Agent Layer:                              |
|             (BackendGPT, FrontendGPT, DesignerGPT)                |
+----------------------------------+--------------------------------+
                                   |
                                   v
                     Task Execution & Collection
                                   |
                                   v
                    +---------------------------+
                    |           User            |
                    |   Receives Final Output   |
                    +---------------------------+
```
All communication happens securely over TLS + TCP, with messages encoded in Protocol Buffers (protobuf) for efficiency and structure.
User Input: The user provides a project prompt like:
`/architect create "fastapi app" | python`
This is securely sent to the Orchestrator over TLS.
Initialization: The Orchestrator parses the command and initializes the appropriate agent (e.g., ArchitectGPT).
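As a toy illustration of that parsing step (the real command grammar may differ), a command such as `/architect create "fastapi app" | python` can be split into an agent, an action, a quoted input, and a target language:

```rust
// Toy parser for commands of the shape:
//   /<agent> <action> "<input>" | <language>
// Illustrative sketch only; autogpt's real command grammar may differ.
#[derive(Debug, PartialEq)]
struct Command {
    agent: String,
    action: String,
    input: String,
    language: String,
}

fn parse(cmd: &str) -> Option<Command> {
    // Split off the language after the '|' separator.
    let (head, language) = cmd.split_once('|')?;
    // The head must start with '/<agent>'.
    let head = head.trim().strip_prefix('/')?;
    let (agent, rest) = head.split_once(' ')?;
    // Next token is the action; the remainder is the quoted input.
    let (action, quoted) = rest.trim().split_once(' ')?;
    let input = quoted.trim().trim_matches('"');
    Some(Command {
        agent: agent.to_string(),
        action: action.to_string(),
        input: input.to_string(),
        language: language.trim().to_string(),
    })
}

fn main() {
    let cmd = parse(r#"/architect create "fastapi app" | python"#).unwrap();
    println!("{:?}", cmd);
}
```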
Agent Configuration: Each agent is instantiated with its specialized goals.
Task Allocation: ManagerGPT dynamically assigns subtasks to agents using the IAC protocol. It determines which agent should perform what based on capabilities and the original user goal.
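A simplified sketch of such capability-based allocation (hypothetical names and data, not the crate's real scheduler) might map each subtask kind to the first agent whose declared capabilities cover it:

```rust
// Hypothetical capability-based task routing, illustrating how a manager
// could pick an agent for each subtask. Not autogpt's actual scheduler.
use std::collections::HashMap;

fn route<'a>(
    capabilities: &'a HashMap<&'a str, Vec<&'a str>>,
    subtask: &str,
) -> Option<&'a str> {
    // Pick an agent whose capability list covers the subtask kind.
    capabilities
        .iter()
        .find(|(_, caps)| caps.contains(&subtask))
        .map(|(agent, _)| *agent)
}

fn main() {
    let mut capabilities = HashMap::new();
    capabilities.insert("BackendGPT", vec!["api", "database"]);
    capabilities.insert("FrontendGPT", vec!["ui"]);
    println!("{:?}", route(&capabilities, "ui"));
}
```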
Task Execution: Agents execute their tasks, communicate with their subprocesses or other agents via IAC (inter/intra communication), and push updates or results back to the orchestrator.
Feedback Loop: Throughout execution, agents return status reports. The ManagerGPT collects all output, and the Orchestrator sends it back to the user.
As of the current release, AutoGPT ships with 8 built-in specialized autonomous AI agents ready to assist you in bringing your ideas to life! Refer to our guide to learn more about how the built-in agents work.
You can refer to our examples for guidance on how to use the CLI in a Jupyter environment.
For detailed usage instructions and API documentation, refer to the AutoGPT Documentation.
Contributions are welcome! See the Contribution Guidelines for more information on how to get started.
This project is licensed under the MIT License - see the LICENSE file for details.