I wanted to use a Hierarchical Finite State Machine (HFSM) in my Pacman AI agent implementation to fully understand the concept and to compare it with the Behavior Tree. In my experience, after reading books and papers on a technical topic, my comprehension improves greatly when I implement the topic in code.
The Pacman AI framework is written in Python, so I searched GitHub for an HFSM implementation, but I couldn't find one that fit my needs, so I decided to write my own and release it on GitHub.
In this article, I want to share what I learned while implementing it. We will see the step-by-step transformation from concepts to code. Hopefully, this is useful to you and encourages you to code and share your work. …
In the previous post, we saw how powerful the behavior tree is.
It is hierarchical, modular, and, more importantly, reactive to changes in the agent's environment.
It can be used to replace Hierarchical Finite State Machines (HFSMs), making systems more scalable and easier for humans to understand.
However, as you may have noticed, behavior trees can become very complex if we want the agent to select among many methods to achieve a goal or a task.
Let’s look at an example.
When we plan a trip to a city in another country, we usually start with a higher-level plan and refine it to be more detailed plans. …
In our previous posts, we discussed Planning and Acting. Both involve planning: one with a Descriptive Model of actions and the other with an Operational Model.
Acting with an Operational Model uses Refinement Methods to refine abstract tasks into subtasks or into commands that can be executed by the Execution Platform. This approach excels in rather complex domains where there are multiple methods we can use to refine a task.
However, in many less complex systems, we have only one or two methods for refining an abstract task. One may argue that we don't need planning in these systems. …
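To make the idea of refinement concrete, here is a minimal sketch of a refinement method. All names here (the travel task, the state keys, the subtask tuples) are hypothetical illustrations, not part of any particular planning library: a refinement method checks a precondition on the state and, if applicable, maps an abstract task to a sequence of subtasks or primitive commands.

```python
# Illustrative sketch of a refinement method (names are hypothetical).
# A refinement method maps an abstract task to a sequence of subtasks or
# primitive commands, guarded by a precondition on the current state.

def travel_by_taxi(state, origin, destination):
    """Refine the abstract task travel(origin, destination) into subtasks."""
    if state["cash"] >= state["taxi_fare"]:          # method precondition
        return [("call_taxi", origin),
                ("ride_taxi", origin, destination),
                ("pay_driver",)]
    return None  # method is not applicable in this state

state = {"cash": 20, "taxi_fare": 15}
plan = travel_by_taxi(state, "home", "airport")
```

When several such methods exist for the same task, the acting engine can try them in turn, which is exactly where the single-method systems mentioned above have little to gain from full refinement planning.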
Letting AI agents work together to achieve common goals.
In previous posts about Automated Planning, we discussed planning that takes place inside our agent, embedded as part of it. To recall, let's look at the following diagram.
Implementing an artificially intelligent agent, such as a robot or a character in a video game, is becoming more complex, as these agents are required to exhibit complex behaviors to carry out their tasks in dynamic environments. Today, the Finite State Machine (FSM) is still the most widely used algorithm for modeling the behaviors of AI agents.
Despite its weaknesses, which have been partly addressed by the Hierarchical Finite State Machine (HFSM), its ease of understanding and implementation has kept it the most commonly used approach.
This post will look into the FSM, its advantages and disadvantages, and the HFSM, which was developed to alleviate some of the FSM's disadvantages. …
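To ground the discussion, here is a minimal FSM sketch in Python. The ghost states and events below are illustrative inventions loosely inspired by Pacman ghost behavior, not a full HFSM and not part of the Pacman AI framework:

```python
# A minimal finite state machine (illustrative sketch, not a full HFSM).
# States and events model a ghost's behavior in a Pacman-like game.

class FSM:
    def __init__(self, initial, transitions):
        # transitions: {(state, event): next_state}
        self.state = initial
        self.transitions = transitions

    def handle(self, event):
        # Unknown (state, event) pairs leave the machine in its current state.
        self.state = self.transitions.get((self.state, event), self.state)
        return self.state

ghost = FSM("chase", {
    ("chase", "power_pellet_eaten"): "frightened",
    ("frightened", "timer_expired"): "chase",
    ("frightened", "eaten_by_pacman"): "return_to_base",
    ("return_to_base", "reached_base"): "chase",
})
ghost.handle("power_pellet_eaten")  # transitions chase -> frightened
```

Even this tiny example hints at the FSM's scaling problem: every new state multiplies the transition pairs we must enumerate, which is the combinatorial growth the HFSM tries to tame by nesting machines inside states.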
In the previous post, we discussed the RAE (Refinement Acting Engine), which plans actions by refining tasks into executable actions and does so using the observed world state instead of a predicted world state. For the full article, please read here:
One thing that you may have noticed in that approach is the lack of planning in advance.
In some scenarios, planning in advance may give us a more optimal solution to our problem.
By planning, we can explore different courses of action and choose a good solution.
SeRPE is based on RAE, and it supports only a single task at a time. …
In the previous post, we discussed how Refinement Methods work, where the refinement process takes place, and more. You can read the post below if you haven't already.
In this post, we look into one algorithm that is based on Refinement Methods, the Refinement Acting Engine (RAE).
We will try to understand how the algorithm works and try it out on the Pacman Planning Problem.
The RAE has three main inputs: a library of refinement methods, the current (observed) world state, and the tasks and events it must address.
And it outputs commands (or primitive actions) to the Execution Platform.
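As a rough intuition for how RAE connects those inputs to its output, here is a heavily simplified sketch of a refinement loop. This is my own illustration, not RAE's actual pseudocode: real RAE maintains concurrent refinement stacks, reacts to events, and retries alternative methods on failure, all of which are omitted here.

```python
# Highly simplified refinement loop (illustrative; not RAE's real algorithm).
# `methods` maps each abstract task to a list of candidate method functions;
# a method inspects the state and returns subtasks/commands, or None if it
# is not applicable. Primitive commands are sent to the execution platform.

def refine_and_act(task, methods, state, execute):
    for method in methods.get(task, []):
        subtasks = method(state)
        if subtasks is None:
            continue                      # method not applicable, try next
        for sub in subtasks:
            if sub in methods:            # still abstract: refine recursively
                if not refine_and_act(sub, methods, state, execute):
                    return False
            else:
                execute(sub)              # primitive command to the platform
        return True
    return False                          # no applicable method for the task

log = []
methods = {
    "travel": [lambda s: ["pack", "go"]],
    "go": [lambda s: None,               # first method fails its precondition
           lambda s: ["walk"]],
}
refine_and_act("travel", methods, {}, log.append)
```

The key point the sketch preserves is that refinement happens against the state the methods actually observe, and that only primitive commands ever reach the execution platform.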
Telling our AI agents how to perform tasks.
In previous posts, we looked into how our agents plan their actions with deterministic search algorithms, in state-space and plan-space.
The agents plan their actions by transforming a state or a partial plan and searching through the space to find the goal.
The result, a solution plan, is then given to the Acting Engine for execution.
Searching for a solution plan in plan-space rather than state-space: another approach to solving planning problems.
In previous posts, we discussed various forward state-space search algorithms for solving planning problems.
Let’s now recall how they generally work so that we can compare them with plan-space search.
Those algorithms work by searching through the world states in state-space to find a solution plan that takes the agent from the initial state to the goal state.
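A tiny forward state-space search makes this concrete. The sketch below is a toy breadth-first planner over states represented as sets of atoms; the two-location domain in the example is invented for illustration and is not the Pacman domain:

```python
# A small forward state-space search (BFS), illustrative sketch.
# States are frozensets of atoms; actions are (name, preconds, adds, deletes).
from collections import deque

def forward_search(initial, goal, actions):
    frontier = deque([(frozenset(initial), [])])
    visited = {frozenset(initial)}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:                      # goal atoms all hold
            return plan
        for name, pre, add, dele in actions:
            if pre <= state:                   # action is applicable
                nxt = (state - dele) | add     # state transition
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, plan + [name]))
    return None                                # no plan exists

actions = [
    ("a_to_b", frozenset({"at_a"}), frozenset({"at_b"}), frozenset({"at_a"})),
    ("b_to_c", frozenset({"at_b"}), frozenset({"at_c"}), frozenset({"at_b"})),
]
plan = forward_search({"at_a"}, frozenset({"at_c"}), actions)
```

The returned action sequence is exactly the "solution plan" that would then be handed to the Acting Engine.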
Relaxing the planning problem by not deleting atoms from old states, only adding new ones.
Recall that the simple form of our planning is basically transforming states from one to another by applying the chosen action.
The consequence of applying an action to a state is that one or more atoms in that state are changed.
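The relaxation described above can be sketched in a few lines. In this illustrative snippet (using invented atoms and a toy move action), normal application removes an action's delete effects, while the relaxed application ignores them, so atoms only accumulate:

```python
# Delete-relaxation sketch (illustrative). An action is a tuple
# (name, preconditions, add effects, delete effects) over frozensets of atoms.

def apply(state, action):
    """Normal application: add effects are added, delete effects removed."""
    _, pre, add, dele = action
    return (state - dele) | add if pre <= state else state

def apply_relaxed(state, action):
    """Relaxed application: delete effects are ignored; atoms accumulate."""
    _, pre, add, _ = action
    return state | add if pre <= state else state

move = ("move_a_b",
        frozenset({"at_a"}),   # precondition
        frozenset({"at_b"}),   # add effect
        frozenset({"at_a"}))   # delete effect

s = frozenset({"at_a"})
apply(s, move)          # at_a is deleted, only at_b remains
apply_relaxed(s, move)  # both at_a and at_b hold in the relaxed state
```

Because relaxed states only grow, anything reachable in the original problem is reachable in the relaxed one, which is what makes the relaxation useful as an optimistic estimate.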