Learning the step-by-step implementation of HFSM concepts in Python.

HFSM in Python (Image by Author)

I wanted to use an HFSM (Hierarchical Finite State Machine) in my Pacman AI agent implementation, both to fully understand the concepts and to compare it with the Behavior Tree. In my experience, after reading books and papers on a technical topic, my comprehension improves greatly when I implement the ideas in code.

Pacman AI is written in Python, so I searched for HFSM implementations on Github, but I couldn’t find one that suited my needs, so I decided to write my own and release it on Github.

In this article, I want to share what I learned while implementing it. We will see the step-by-step transformation from concepts to code. Hopefully this is useful for you and encourages you to write code and share your own work. …
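To give a taste of that transformation from concept to code, here is a minimal, hedged sketch of a hierarchical state machine in Python. The class, state, and event names are illustrative and are not taken from the author's repository; the key idea shown is that a composite state is itself a state, so an event unhandled by the innermost state bubbles up to its parent.

```python
class State:
    def __init__(self, name):
        self.name = name
        self.transitions = {}          # event -> target state

    def add_transition(self, event, target):
        self.transitions[event] = target


class StateMachine(State):
    """A composite state: it is itself a State, so machines can nest."""
    def __init__(self, name, initial):
        super().__init__(name)
        self.current = initial

    def handle(self, event):
        # Give the innermost active state the first chance to react.
        if isinstance(self.current, StateMachine) and self.current.handle(event):
            return True
        target = self.current.transitions.get(event)
        if target is not None:
            self.current = target
            return True
        return False


# Usage: a "patrol" super-state with two sub-states, nested in a root
# machine. "ghost_near" is declared once at the patrol level, so every
# sub-state inherits the transition to "flee" -- the main win of HFSMs.
walk, wait, flee = State("walk"), State("wait"), State("flee")
walk.add_transition("tired", wait)
wait.add_transition("rested", walk)

patrol = StateMachine("patrol", initial=walk)
patrol.add_transition("ghost_near", flee)

root = StateMachine("root", initial=patrol)
root.handle("tired")        # handled inside patrol: walk -> wait
root.handle("ghost_near")   # "wait" can't handle it; patrol itself can
print(root.current.name)    # flee
```

In a flat FSM, the `ghost_near` transition would have to be duplicated on both `walk` and `wait`; the hierarchy lets it be declared once.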

Automatically build and update Behavior Trees at run-time.

Automated Planning with Behavior Trees (Image by Author)


We saw in the previous post how powerful the behavior tree is.

It is hierarchical, modular, and, more importantly, reactive to changes that happen in the agent’s environment.

It can be used to replace Hierarchical Finite State Machines (HFSMs), to make the systems more scalable and understandable to humans.
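These three properties can be seen in just a few lines of Python. The sketch below is illustrative, not the library the author uses: each tick re-evaluates the tree from the root, which is what makes behavior trees reactive, and the composite nodes (Sequence, Selector) are what make them hierarchical and modular.

```python
SUCCESS, FAILURE = "success", "failure"

class Action:
    """A leaf node wrapping a function of the world state."""
    def __init__(self, fn):
        self.fn = fn
    def tick(self, state):
        return self.fn(state)

class Sequence:
    """Succeeds only if all children succeed, in order."""
    def __init__(self, *children):
        self.children = children
    def tick(self, state):
        for child in self.children:
            if child.tick(state) == FAILURE:
                return FAILURE
        return SUCCESS

class Selector:
    """Tries children in order; succeeds on the first that succeeds."""
    def __init__(self, *children):
        self.children = children
    def tick(self, state):
        for child in self.children:
            if child.tick(state) == SUCCESS:
                return SUCCESS
        return FAILURE

# Usage: flee if a ghost is near, otherwise eat pellets.
ghost_near = Action(lambda s: SUCCESS if s["ghost_near"] else FAILURE)
flee       = Action(lambda s: s.update(action="flee") or SUCCESS)
eat        = Action(lambda s: s.update(action="eat") or SUCCESS)

root = Selector(Sequence(ghost_near, flee), eat)
state = {"ghost_near": True}
root.tick(state)
print(state["action"])  # flee
```

Because the root is re-ticked every frame, the moment `ghost_near` flips, the agent's behavior switches without any explicit transition being written, unlike an FSM.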

However, as you may have noticed, behavior trees can become very complex if we want the agent to select among many methods to achieve a goal or task.

Let’s look at an example.

Transportation System

When we plan a trip to a city in another country, we usually start with a higher-level plan and refine it into more detailed plans. …

Designing and Implementing Behaviors for AI Agents

A Behavior Tree (Image by author)

Introduction — Another way of implementing an Acting Engine

In our previous posts, we discussed Planning and Acting. Both involve planning: one with a Descriptive Model and the other with an Operational Model of actions.

Acting with an Operational Model uses Refinement Methods to refine abstract tasks into subtasks or commands that can be executed by the Execution Platform. This approach excels in rather complex domains where there are multiple methods we can use to refine a task.

However, in many less complex systems, we have only one or two methods for refining an abstract task. One may argue that we don’t need planning in these systems. …
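The core mechanic of a refinement method can be sketched in a few lines. This is a hedged illustration, not the author's implementation: a method library maps each abstract task to an ordered list of subtasks, and anything absent from the library is treated as a primitive command.

```python
# Methods map an abstract task to an ordered list of subtasks.
# Task and command names are illustrative.
methods = {
    "fetch_food": ["go_to_pellet", "eat_pellet"],
    "go_to_pellet": ["move_left", "move_left"],
}

def refine(task):
    """Recursively refine an abstract task into primitive commands."""
    if task not in methods:
        return [task]                 # already a primitive command
    commands = []
    for subtask in methods[task]:
        commands.extend(refine(subtask))
    return commands

print(refine("fetch_food"))
# ['move_left', 'move_left', 'eat_pellet']
```

When a task has several candidate methods, this flat dictionary becomes a list of alternatives per task, and that is exactly where the "multiple methods" complexity discussed above comes from.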

Letting AI agents work together to achieve common goals.

Photo by Brian McGowan on Unsplash

Moving Planning Engine out of our Agent’s brain

In previous posts about Automated Planning, we discussed planning that takes place inside our agent, embedded as part of it. To recall, let’s look at the following diagram.

Learning the details of Hierarchical Finite State Machine (HFSM), how it solves issues found in Finite State Machine (FSM), and how it compares with Behavior Trees.

Pacman HFSMs (Image by Author)


Implementing an artificially intelligent agent, such as a robot or a character in a video game, is becoming more complex, as these agents are required to exhibit complex behaviors to carry out their tasks in dynamic environments. Today, the Finite State Machine (FSM) is still the most widely used algorithm for modeling the behaviors of AI agents.

Despite its weaknesses, which have been partly solved by the Hierarchical Finite State Machine (HFSM), the fact that it is easy to understand and implement has kept it the most commonly used algorithm.

This post will look into the FSM, its advantages and disadvantages, and the HFSM, which was developed to alleviate some of the FSM’s disadvantages. …
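That ease of implementation is easy to demonstrate: a flat FSM is little more than a transition table. The states and events below are illustrative, not from the author's Pacman code.

```python
# An FSM as a plain transition table: (state, event) -> next state.
transitions = {
    ("chase", "ghost_scared"):     "hunt_ghost",
    ("hunt_ghost", "ghost_recovered"): "chase",
    ("chase", "ghost_near"):       "evade",
    ("evade", "ghost_far"):        "chase",
}

def step(state, event):
    # Unknown (state, event) pairs leave the state unchanged.
    return transitions.get((state, event), state)

state = "chase"
state = step(state, "ghost_near")   # -> evade
state = step(state, "ghost_far")    # -> chase
print(state)  # chase
```

The weakness is also visible here: every behavior shared by several states (say, reacting to `ghost_near` from anywhere) must be written as a separate table entry per state, which is exactly the duplication HFSMs remove.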

Planning actions by simulating the Acting Engine



In the previous post, we discussed the RAE (Refinement Acting Engine), which plans actions by refining tasks into executable actions, and does so with the observed world state instead of a predicted world state. For the full article, please read here:

One thing that you may have noticed in that approach is the lack of planning in advance.

In some scenarios, planning in advance may give us a more optimal solution to our problem.

By planning, we can explore different courses of action and choose a good solution.

Sequential Refinement Planning Engine

SeRPE is based on RAE, but it supports only a single task at a time. …

AI Agents plan the tasks using Refinement Methods

Image by OpenClipart-Vectors from Pixabay

In the previous post, we discussed how Refinement Methods work, where the refinement process takes place, and so on. You can read the post below if you haven’t already.

In this post, we look into one algorithm that is based on Refinement Methods, the Refinement Acting Engine (RAE).

We will try to understand how the algorithm works and try it out on the Pacman Planning Problem.

Refinement Acting Engine

The RAE has three main inputs:

  • Library of Methods
  • Tasks
  • Observed State of the World

And it outputs commands (or primitive actions) to the Execution Platform.
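The three inputs and one output above can be wired together in a simplified sketch of RAE's main loop. This is hedged heavily: the real RAE maintains an agenda of concurrent tasks, retries alternative methods on failure, and monitors execution; here a single task is refined against the observed state and commands are sent to a stub execution platform. All names are illustrative.

```python
def rae(task, methods, observe, execute):
    """Refine `task` with the first applicable method and execute it.

    methods: task -> list of {"applicable": state-predicate, "body": steps}
    observe: returns the observed world state (not a predicted one)
    execute: sends a primitive command to the execution platform
    """
    state = observe()                      # input 3: observed world state
    for method in methods.get(task, []):   # input 1: library of methods
        if method["applicable"](state):
            for step in method["body"]:
                if step in methods:        # abstract subtask: recurse
                    rae(step, methods, observe, execute)
                else:                      # primitive action: output it
                    execute(step)
            return True
    return False   # no applicable method: the task fails
                   # (failure recovery / retries omitted in this sketch)

# Usage with a toy method library.
methods = {
    "eat_pellet": [{
        "applicable": lambda s: s["pellet_adjacent"],
        "body": ["move_to_pellet", "eat"],
    }],
}
executed = []
rae("eat_pellet", methods,            # input 2: the task
    observe=lambda: {"pellet_adjacent": True},
    execute=executed.append)
print(executed)  # ['move_to_pellet', 'eat']
```

Note how `observe()` is called at refinement time: method applicability is checked against the world as it actually is, which is the defining trait of RAE versus simulate-ahead planners.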

Telling our AI agents how to perform tasks.

Photo by James Pond on Unsplash


In previous posts, we looked into how our agents plan their actions with deterministic search algorithms, in state-space and in plan-space.

The agents plan their actions by transforming a state or partial plan and searching through the space to find the goal.

The result, the solution plan, is then given to the Acting Engine to be executed.

Searching for a solution plan in plan-space rather than state-space: another approach to solving planning problems.

Photo by Jean-Frederic Fortier on Unsplash

State-space Search

In previous posts, we discussed various forward state-space search algorithms for solving planning problems.

Let’s now recall how they generally work so that we can compare them with plan-space search.

Those algorithms work by searching through world states in state-space to find a solution plan that takes the agent from the initial state to the goal state.
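A hedged sketch of that recipe, using breadth-first search as the simplest forward search: states are frozensets of atoms, each action has preconditions, an add list, and a delete list, and the search returns the action names that take the agent from the initial state to a goal state. The domain atoms are illustrative.

```python
from collections import deque

def forward_search(initial, goal, actions):
    """BFS over world states; returns a list of action names, or None."""
    start = frozenset(initial)
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:                       # all goal atoms hold
            return plan
        for name, pre, add, delete in actions:
            if pre <= state:                    # preconditions satisfied
                nxt = (state - delete) | add    # apply the action
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, plan + [name]))
    return None

# Toy domain: (name, preconditions, add list, delete list).
actions = [
    ("move_a_b", {"at_a"}, {"at_b"}, {"at_a"}),
    ("eat",      {"at_b", "pellet_b"}, {"fed"}, {"pellet_b"}),
]
plan = forward_search({"at_a", "pellet_b"}, {"fed"}, actions)
print(plan)  # ['move_a_b', 'eat']
```

Plan-space search, by contrast, searches over partial plans (adding actions and ordering constraints) rather than over the world states themselves.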

Relaxing planning by not deleting atoms from old states, only adding new ones.

Another approach for domain-independent heuristics

Delete Relaxation

Recall that, in its simplest form, planning is basically transforming states from one to another by applying the chosen action.

The consequence of applying an action to a state is that one or more atoms in that state change.
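The relaxation is a one-line change, shown in this illustrative sketch: the normal transition removes the atoms in the action's delete list, while the delete-relaxed transition ignores the delete list entirely, so the set of true atoms only ever grows.

```python
def apply(state, action):
    """Normal transition: delete list removes atoms, add list adds them."""
    pre, add, delete = action
    return (state - delete) | add

def apply_relaxed(state, action):
    """Delete-relaxed transition: the delete list is ignored."""
    pre, add, delete = action
    return state | add          # atoms are only ever added, never removed

# Toy action: (preconditions, add list, delete list); atoms illustrative.
move = ({"at_a"}, {"at_b"}, {"at_a"})
state = {"at_a", "pellet_b"}

print(sorted(apply(state, move)))          # ['at_b', 'pellet_b']
print(sorted(apply_relaxed(state, move)))  # ['at_a', 'at_b', 'pellet_b']
```

Because relaxed states only grow, the relaxed problem is much easier to solve, and the length of its solution can serve as a domain-independent heuristic for the original problem.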


Debby Nirwan

Software Engineering Manager who loves reading, writing, and coding.
