G.O.A.P. AI
Goal-Oriented Action Planning AI
Goap-AI is a library for creating AI agents that plan their actions based on a set of overall goals. It is suitable for games and simulations where an agent needs to respond dynamically to a changing environment.
You can add the library to your project by listing it in your Cargo.toml dependencies:
[dependencies]
goap-ai = "0.2.0"
Alternatively, you can clone the repository and set your project to use the local copy:
git clone https://github.com/FreddyWoringham/goap.git goap
cd goap
Then build the goap binary with:
cargo build --release
You can then run the tool against a configuration file:
cargo run --release config.yml
A GOAP agent is configured with a set of Goals it is trying to achieve and a set of Actions it can perform. At each step, the agent selects which Action to perform given the current State of the environment and how unfulfilled its Goals are.
A State object is a list of key-value pairs representing a snapshot of the environment:
state:
  energy: 50
  health: 20
  num_apples: 2
  num_uncooked_meat: 0
  num_cooked_meat: 0
Goals are essentially target values of the State which the agent is trying to achieve:
goals:
  health:
    target: 100
    kind: GreaterThanOrEqualTo
    weight: 4
  energy:
    target: 100
    kind: GreaterThanOrEqualTo
    weight: 1
When planning its actions, an agent will try to minimise "discontentment": the total weighted shortfall between the current State and the Goals it is trying to achieve.
$\text{discontentment} = \sum_{i=1}^{n} w_i \times \max(0,\ \text{goal}_i - \text{state}_i)$
Note: this formula is representative of GreaterThanOrEqualTo goals; other kinds of goal use different formulae when calculating their discontentment with the current state.
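As a concrete illustration of this formula, the following standalone Rust sketch (not the crate's own API; the Goal struct and discontentment function here exist only for this example) computes the discontentment of the example state against the two GreaterThanOrEqualTo goals above, giving the 370.00 that appears at the start of the plan trace shown later:

use std::collections::BTreeMap;

// Illustrative only: a GreaterThanOrEqualTo goal with a target value and a weight.
struct Goal {
    target: f64,
    weight: f64,
}

// discontentment = sum over goals of weight * max(0, target - current value)
fn discontentment(state: &BTreeMap<&str, f64>, goals: &BTreeMap<&str, Goal>) -> f64 {
    goals
        .iter()
        .map(|(key, goal)| {
            let value = state.get(key).copied().unwrap_or(0.0);
            goal.weight * (goal.target - value).max(0.0)
        })
        .sum()
}

fn main() {
    let state = BTreeMap::from([("energy", 50.0), ("health", 20.0)]);
    let goals = BTreeMap::from([
        ("health", Goal { target: 100.0, weight: 4.0 }),
        ("energy", Goal { target: 100.0, weight: 1.0 }),
    ]);
    // 4 * (100 - 20) + 1 * (100 - 50) = 370
    println!("{}", discontentment(&state, &goals));
}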
Actions are the things an agent can do to change the State of the environment in order to achieve its Goals (i.e. minimise discontentment).
actions:
  gather:
    duration: 1
    deltas:
      energy: -5
      num_apples: 5
  hunt:
    duration: 20
    deltas:
      energy: -10
      num_uncooked_meat: 3
  cook:
    duration: 2
    deltas:
      energy: -5
      num_uncooked_meat: -1
      num_cooked_meat: 1
  eat_apple:
    duration: 1
    deltas:
      energy: 5
      health: 5
      num_apples: -1
  eat_cooked_meat:
    duration: 1
    deltas:
      energy: 20
      health: 30
      num_cooked_meat: -1
  rest:
    duration: 5
    deltas:
      energy: 10
  wait:
    duration: 1
    deltas:
      energy: -1
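Applying an action is conceptually simple: add each delta onto the matching state value and advance time by the action's duration. A minimal Rust sketch of that idea (illustrative only, not the crate's own types), using the hunt action from the configuration above:

use std::collections::BTreeMap;

// Illustrative only: an action is a duration plus a set of deltas to apply to the state.
struct Action {
    duration: u32,
    deltas: BTreeMap<&'static str, i64>,
}

// Applying an action adds each delta onto the matching state entry and returns the time spent.
fn apply(state: &mut BTreeMap<&'static str, i64>, action: &Action) -> u32 {
    for (key, delta) in &action.deltas {
        *state.entry(*key).or_insert(0) += *delta;
    }
    action.duration
}

fn main() {
    let mut state = BTreeMap::from([
        ("energy", 50),
        ("health", 20),
        ("num_apples", 2),
        ("num_uncooked_meat", 0),
        ("num_cooked_meat", 0),
    ]);
    let hunt = Action {
        duration: 20,
        deltas: BTreeMap::from([("energy", -10), ("num_uncooked_meat", 3)]),
    };
    let elapsed = apply(&mut state, &hunt);
    // energy drops to 40, num_uncooked_meat rises to 3, and 20 time units pass.
    println!("{state:?} after {elapsed} time units");
}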
Our planner offers three primary algorithms, each optimized for specific scenarios and requirements:
Traditional Planning
Description: Uses an exhaustive depth-first search to explore all possible action sequences up to a specified max_depth. This method aims to minimize total discontentment without considering the time efficiency of actions.
Use Case: Ideal for scenarios where achieving the lowest possible discontentment is crucial and there are no strict time constraints.
Efficiency-Based Planning
Description: Focuses on maximizing discontentment reduction per unit of time. This approach favors actions that offer the most significant improvement in discontentment for the least time investment.
Use Case: Suited to time-critical tasks where actions have varying durations and quick responsiveness is essential.
Hybrid Planning
Description: Combines the traditional and efficiency-based strategies, dynamically switching between minimizing discontentment and optimizing efficiency based on the current planning context.
Use Case: Suitable for complex environments where both optimal discontentment reduction and time efficiency are important, allowing the planner to adapt to changing priorities.
Each algorithm operates in one of two solution modes:
Fast solution mode
Description: Employs heuristic-based algorithms (e.g. A*) to generate plans swiftly. While faster, these plans may not always be the most optimal.
Advantages: Quick plan generation with low computational cost.
Trade-offs: The resulting plan may be suboptimal.
Exhaustive solution mode
Description: Uses exhaustive search techniques to explore all possible action sequences up to max_depth, ensuring the most optimal plan is found.
Advantages: Guarantees the best plan reachable within max_depth.
Trade-offs: Planning cost grows rapidly with max_depth and the number of available actions.
The table below summarises which algorithm suits which scenario:
Scenario | Efficiency-Based Planning | Traditional Planning | Hybrid Planning |
---|---|---|---|
Time-Critical Tasks | ✅ Best choice (maximizes gains per time unit) | Struggles with time limits | ✅ Balanced choice (optimizes speed and quality) |
Critical Threshold Goals | Struggles with thresholds | ✅ Best choice (directly minimizes discontentment) | ✅ Flexible choice (adapts to threshold needs) |
Long-Term Optimization | May overlook global optima | ✅ Best for overall balance | ✅ Adaptive choice (balances long-term and short-term) |
Dynamic, Real-Time Systems | ✅ Adapts well to changes | Struggles with rigid priorities | ✅ Highly adaptable (switches strategies as needed) |
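To make the difference concrete, consider two candidate plans from the example's initial state: eating an apple (1 time unit, discontentment 370.00 to 345.00) versus the hunt, cook, eat_cooked_meat chain (23 time units, 370.00 to 245.00, as in the plan trace shown later). The sketch below (illustrative scoring only, not the crate's internals) shows that traditional scoring prefers the larger absolute reduction of the meat chain, while efficiency-based scoring prefers the apple's larger reduction per unit of time:

// Illustrative only: two ways of ranking a candidate plan, given the discontentment
// before and after it runs and the plan's total duration.
fn traditional_score(before: f64, after: f64) -> f64 {
    before - after // absolute reduction in discontentment
}

fn efficiency_score(before: f64, after: f64, duration: f64) -> f64 {
    (before - after) / duration // reduction per unit of time
}

fn main() {
    // eat_apple: 370 -> 345 in 1 time unit.
    println!("eat_apple:  {} total, {} per time unit", traditional_score(370.0, 345.0), efficiency_score(370.0, 345.0, 1.0));
    // hunt, cook, eat_cooked_meat: 370 -> 245 in 20 + 2 + 1 = 23 time units.
    println!("meat chain: {} total, {} per time unit", traditional_score(370.0, 245.0), efficiency_score(370.0, 245.0, 23.0));
}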
You can set the maximum number of steps the agent will plan ahead, along with the planning algorithm and solution mode, using the following YAML configuration:
plan:
  max_depth: 10
  algorithm: Traditional
  solution: Fast
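The max_depth bound is what keeps an exhaustive search tractable: the planner only considers action sequences up to that length. As a rough illustration of the idea behind Traditional planning (a toy sketch, not the crate's implementation, with the example's goals hard-coded and only two actions so the search stays tiny), a depth-limited exhaustive search might look like this:

use std::collections::BTreeMap;

type State = BTreeMap<&'static str, i64>;

// Illustrative only: an action as a name plus the deltas it applies.
struct Action {
    name: &'static str,
    deltas: Vec<(&'static str, i64)>,
}

// Goals from the example, hard-coded: health >= 100 (weight 4), energy >= 100 (weight 1).
fn discontentment(state: &State) -> f64 {
    let shortfall = |key: &str, target: i64| (target - state.get(key).copied().unwrap_or(0)).max(0) as f64;
    4.0 * shortfall("health", 100) + 1.0 * shortfall("energy", 100)
}

// Apply an action's deltas to a copy of the state.
fn apply(state: &State, action: &Action) -> State {
    let mut next = state.clone();
    for (key, delta) in &action.deltas {
        *next.entry(*key).or_insert(0) += *delta;
    }
    next
}

// Try every action sequence up to `max_depth` and keep the one whose final state
// has the lowest discontentment. (No pruning, preconditions, or time costs here.)
fn best_plan(state: &State, actions: &[Action], max_depth: usize) -> (f64, Vec<&'static str>) {
    let mut best = (discontentment(state), Vec::new());
    if max_depth == 0 {
        return best;
    }
    for action in actions {
        let (score, mut tail) = best_plan(&apply(state, action), actions, max_depth - 1);
        if score < best.0 {
            tail.insert(0, action.name);
            best = (score, tail);
        }
    }
    best
}

fn main() {
    let state = State::from([("energy", 50), ("health", 20)]);
    let actions = [
        Action { name: "rest", deltas: vec![("energy", 10)] },
        Action { name: "wait", deltas: vec![("energy", -1)] },
    ];
    let (score, plan) = best_plan(&state, &actions, 10);
    // With only rest and wait available, the best the agent can do is rest until
    // energy reaches 100, leaving the unreachable health goal unsatisfied.
    println!("{plan:?} -> discontentment {score}");
}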
With a complete configuration such as the one below:
max_depth: 10
algorithm: Traditional
solution: Fast
state:
  energy: 50
  health: 20
  num_apples: 2
  num_uncooked_meat: 0
  num_cooked_meat: 0
goals:
  health:
    target: 100
    kind: GreaterThanOrEqualTo
    weight: 4
  energy:
    target: 100
    kind: GreaterThanOrEqualTo
    weight: 1
actions:
  gather:
    duration: 1
    deltas:
      energy: -5
      num_apples: 5
  hunt:
    duration: 20
    deltas:
      energy: -10
      num_uncooked_meat: 3
  cook:
    duration: 2
    deltas:
      energy: -5
      num_uncooked_meat: -1
      num_cooked_meat: 1
  eat_apple:
    duration: 1
    deltas:
      energy: 5
      health: 5
      num_apples: -1
  eat_cooked_meat:
    duration: 1
    deltas:
      energy: 20
      health: 30
      num_cooked_meat: -1
  rest:
    duration: 5
    deltas:
      energy: 10
  wait:
    duration: 1
    deltas:
      energy: -1
You can then generate a plan of action:
energy    health    num_apples  num_cooked_meat  num_uncooked_meat  discontentment  action
50        20        2           0                0                  (370.00)        [init]
40 -10    20        2           0                3 +3               (380.00)        hunt
35 -5     20        2           1 +1             2 -1               (385.00)        cook
55 +20    50 +30    2           0 -1             2                  (245.00)        eat_cooked_meat
50 -5     50        2           1 +1             1 -1               (250.00)        cook
70 +20    80 +30    2           0 -1             1                  (110.00)        eat_cooked_meat
65 -5     80        2           1 +1             0 -1               (115.00)        cook
85 +20    110 +30   2           0 -1             0                  (15.00)         eat_cooked_meat
95 +10    110       2           0                0                  (5.00)          rest
105 +10   110       2           0                0                  (0.00)          rest
Understanding the Output: each row shows the agent's state after taking the action named at the end of the line, the +/- values mark the changes that action made, and the number in parentheses is the total discontentment of that state. The plan above takes the agent from a discontentment of 370.00 down to 0.00, at which point both goals are satisfied.
If you're integrating this AI library into a game or simulation, use the following steps:
1. Define the Goals that the agent is trying to achieve.
2. Build a State object representing the current environment from the agent's perspective.
3. Define the Actions the agent can perform to change the State.
4. Generate a plan from the State, Goals, and Actions.
It's recommended that your agent re-plans its actions after each step it takes in the environment so that it regularly adapts to changing conditions.
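For example, the plan-act-replan loop might sit in your game's update code roughly as in the sketch below. Every name in it (WorldState, observe_world, plan_next_action, perform) is a hypothetical stand-in for your own glue code and for whatever planning entry point the crate exposes; consult the crate documentation for the real types and functions.

// Hypothetical integration sketch: none of these names come from goap-ai itself.

#[derive(Debug, Clone)]
struct WorldState {
    energy: i64,
    health: i64,
}

// Stub: read the relevant values out of your game or simulation.
fn observe_world() -> WorldState {
    WorldState { energy: 50, health: 20 }
}

// Stub: here you would hand the state, goals, and actions to the planner and
// take the first action of the plan it returns.
fn plan_next_action(state: &WorldState) -> Option<String> {
    if state.health < 100 || state.energy < 100 {
        Some("rest".to_string())
    } else {
        None
    }
}

// Stub: carry the chosen action out in your game, which changes the real world state.
fn perform(action: &str) {
    println!("performing {action}");
}

fn main() {
    // Re-plan after every step so the agent keeps adapting to a changing environment.
    for _step in 0..3 {
        let state = observe_world();
        match plan_next_action(&state) {
            Some(action) => perform(&action),
            None => break, // all goals satisfied, nothing left to do
        }
    }
}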