Attending mind-numbing meetings is not one of them.
Waking up in the morning, feeling refreshed after a good night’s sleep, and, more importantly, knowing what you want to do today.
Starting your day at the office with a 15-minute stand-up, sharing the progress and issues you and your team faced the previous day. Fifteen minutes sharp, and the stand-up ends.
You are clear about the priority of the tickets assigned to you; you work on them in order from highest to lowest, but today you will focus on just one of them.
You start your day, everything…
Step-by-step Algorithm Implementation: from Pseudocode and Equations to Python Code. In this article, we will implement the Planning Graph and its planner, the GraphPlanner, in Python: a data structure and a search algorithm for AI Planning.
The Planning Graph was developed to address the complexity issues found in classical AI Planning approaches, a.k.a. STRIPS-like planners. There are two main parts that we need to implement:
If you are not familiar with the Planning Graph and want to understand more, check out my post…
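As a rough illustration of the data structure (a hypothetical sketch, not the GraphPlanner code from the article), a planning graph alternates proposition layers and action layers: an action joins a layer when all its preconditions appear in the previous proposition layer, and the next proposition layer is the previous one (carried forward by no-op actions) plus the effects of the applicable actions.

```python
# Hypothetical sketch of planning-graph layer expansion.
# Action and proposition names are illustrative only.

def expand_layer(propositions, actions):
    """Build one action layer and the next proposition layer."""
    applicable = [a for a in actions
                  if a["preconditions"] <= propositions]
    next_props = set(propositions)  # no-op actions carry propositions forward
    for a in applicable:
        next_props |= a["effects"]
    return applicable, next_props

actions = [
    {"name": "move(a,b)", "preconditions": {"at(a)"}, "effects": {"at(b)"}},
    {"name": "pick(key)", "preconditions": {"at(b)"}, "effects": {"has(key)"}},
]

layer0 = {"at(a)"}
acts1, layer1 = expand_layer(layer0, actions)
acts2, layer2 = expand_layer(layer1, actions)
print(sorted(layer2))  # ['at(a)', 'at(b)', 'has(key)']
```

Expansion stops once the graph "levels off", i.e., a new proposition layer adds nothing over the previous one; mutex bookkeeping is omitted here for brevity.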
I wanted to use an HFSM (Hierarchical Finite State Machine) in my Pacman AI Agent implementation, both to fully understand the concepts and to compare it with the Behavior Tree. In my experience, after reading books and papers on a technical topic, my comprehension improves greatly when I implement the ideas in code.
The Pacman AI is written in Python, so I searched GitHub for an HFSM implementation, but I couldn’t find one that fit my needs, and I decided to write my own and release it on GitHub.
In this article, I want to share what I have learned…
In the previous post, we saw how powerful the behavior tree is.
It is hierarchical, modular, and, more importantly, reactive to changes that happen in the agent’s environment.
It can be used to replace Hierarchical Finite State Machines (HFSMs), making systems more scalable and easier for humans to understand.
However, as you may have noticed, behavior trees can become very complex if we want the agent to select among many methods to achieve a goal or a task.
Let’s look at an example.
When we plan a trip to a city in another country, we usually start with a…
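To make the core idea concrete, here is a minimal behavior-tree sketch (my own illustrative code, not taken from the post): a Sequence succeeds only if all its children succeed, while a Selector acts as a fallback that succeeds on the first child that succeeds, which is what lets a tree choose between several methods for a task.

```python
SUCCESS, FAILURE = "success", "failure"

class Sequence:
    """Tick children in order; fail fast on the first failure."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for child in self.children:
            if child.tick() == FAILURE:
                return FAILURE
        return SUCCESS

class Selector:
    """Tick children in order; succeed on the first success (a fallback)."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for child in self.children:
            if child.tick() == SUCCESS:
                return SUCCESS
        return FAILURE

class Condition:
    """Leaf node wrapping a boolean check."""
    def __init__(self, fn):
        self.fn = fn
    def tick(self):
        return SUCCESS if self.fn() else FAILURE

# Illustrative trip example: fly if a ticket is available, else take the train.
have_ticket = lambda: False
tree = Selector(
    Sequence(Condition(have_ticket), Condition(lambda: True)),  # fly method
    Condition(lambda: True),                                    # train method
)
print(tree.tick())  # "success", via the train branch
```

Each additional method for a task adds another branch under the Selector, which is exactly where the tree starts to sprawl.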
In our previous posts, we discussed Planning and Acting. Both are about planning: one with a Descriptive Model and the other with an Operational Model of actions.
Acting with an Operational Model uses Refinement Methods to refine abstract tasks into subtasks or commands that can be executed by the Execution Platform. This approach excels in rather complex domains where there are multiple methods we can use to refine a task.
However, many less complex systems have only one or two methods for refining an abstract task. One may argue that we don’t need planning…
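As an illustrative sketch of the refinement idea (the names and structure here are my own assumptions, not the posts' code), a refinement method maps an abstract task to an ordered list of subtasks or commands, and refinement recursively expands tasks until only executable commands remain:

```python
# Hypothetical refinement methods: each maps an abstract task to subtasks.
# Anything with no method of its own is treated as an executable command.
methods = {
    "travel": ["go_to_station", "ride_train", "walk_to_hotel"],
    "go_to_station": ["call_taxi", "ride_taxi"],
}

def refine(task):
    """Recursively refine a task into a flat list of commands."""
    if task not in methods:          # no method: task is a primitive command
        return [task]
    commands = []
    for subtask in methods[task]:
        commands.extend(refine(subtask))
    return commands

print(refine("travel"))
# ['call_taxi', 'ride_taxi', 'ride_train', 'walk_to_hotel']
```

With only one method per task, as here, the refinement is deterministic; the interesting planning questions arise when a task has several candidate methods to choose among.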
Using a new search space, the Planning Graph, to address the expressiveness and complexity issues found in Classical Planning approaches.
The classical approaches to AI Planning use state-space and plan-space search to find solution plans for planning problems. In state-space search, the initial world state goes through a series of transformations by applying applicable actions until either a solution plan that reaches the goal is found or the search algorithm terminates and returns failure. We can use search algorithms such as BFS, DFS, Dijkstra’s, A*, and others.
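A minimal sketch of that state-space search with BFS (illustrative only; the state and action encodings are my assumptions, and effects here only add propositions, with no delete lists):

```python
from collections import deque

def bfs_plan(initial, goal, actions):
    """Breadth-first search over the state space.

    `actions` maps an action name to a (preconditions, effects) pair of
    frozensets; a state is a frozenset of true propositions.
    """
    frontier = deque([(initial, [])])
    visited = {initial}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:
            return plan                      # solution plan found
        for name, (pre, eff) in actions.items():
            if pre <= state:                 # action applicable in this state
                nxt = frozenset(state | eff)
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, plan + [name]))
    return None                              # search exhausted: failure

actions = {
    "open_door": (frozenset({"has_key"}), frozenset({"door_open"})),
    "get_key":   (frozenset(),            frozenset({"has_key"})),
}
plan = bfs_plan(frozenset(), frozenset({"door_open"}), actions)
print(plan)  # ['get_key', 'open_door']
```

Swapping the queue for a priority queue keyed on path cost (Dijkstra’s) or cost plus a heuristic (A*) changes the search strategy without changing the state-transformation scheme.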
Implementing an artificially intelligent agent, such as a robot or a character in a video game, is becoming more complex, as agents are required to exhibit sophisticated behaviors to carry out their tasks in dynamic environments. Today, the Finite State Machine (FSM) is still the most widely used algorithm for modeling the behaviors of AI agents.
Despite its weaknesses, which have been partly addressed by the Hierarchical Finite State Machine (HFSM), the fact that it is easy to understand and implement has kept it the most commonly used algorithm.
This post will look into the FSM, its advantages and disadvantages, and the HFSM, which was developed…
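As a quick illustration of why the FSM is so easy to implement (a hypothetical sketch of my own, not the post's code), a finite state machine can be nothing more than a table mapping (state, event) pairs to next states:

```python
class StateMachine:
    """A minimal table-driven FSM: (state, event) -> next state."""
    def __init__(self, initial, transitions):
        self.state = initial
        self.transitions = transitions
    def handle(self, event):
        # Unknown (state, event) pairs leave the state unchanged.
        self.state = self.transitions.get((self.state, event), self.state)
        return self.state

# Illustrative ghost behavior from a Pacman-like game.
ghost = StateMachine("chase", {
    ("chase",       "power_pellet"): "flee",
    ("flee",        "timer_end"):    "chase",
    ("flee",        "eaten"):        "return_home",
    ("return_home", "arrived"):      "chase",
})
ghost.handle("power_pellet")
print(ghost.state)  # "flee"
```

The weakness is equally visible: every new behavior multiplies the transition table, and shared transitions must be duplicated per state, which is what the HFSM's nesting of machines is meant to relieve.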
In the previous post, we discussed the RAE (Refinement Acting Engine), which plans actions by refining tasks into executable actions and does so with the observed world state instead of a predicted world state. For the full article, please read here:
One thing that you may have noticed in that approach is the lack of planning in advance.
In some scenarios, planning in advance may give us a more optimal solution to our problem.
By planning, we can explore different courses of action and choose a good solution.
SeRPE is based on RAE and it only supports a single…
In the previous post, we discussed how Refinement Methods work, where the refinement process takes place, and so on. You can read the post below if you haven’t already.
In this post, we look into one algorithm that is based on Refinement Methods, the Refinement Acting Engine (RAE).
We try to understand how the algorithm works and try it out on the Pacman Planning Problem.
The RAE has three main inputs:
And it outputs commands (or primitive actions) to the Execution Platform.
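As a rough sketch of this input/output shape (my own illustrative code; the actual RAE pseudocode is considerably richer, with refinement stacks and retrial of alternative methods on failure), the engine takes a task, consults its method library against the observed state, and emits commands to the execution platform:

```python
def rae_step(task, state, methods, execute):
    """One simplified RAE-like refinement step.

    `methods` maps a task to a list of (guard, body) pairs; the guard is
    tested against the *observed* state, and body items are either
    further abstract tasks or primitive commands.
    """
    for guard, body in methods.get(task, []):
        if guard(state):
            for item in body:
                if item in methods:          # abstract task: refine further
                    rae_step(item, state, methods, execute)
                else:                        # primitive: send to the platform
                    execute(item)
            return
    raise RuntimeError(f"no applicable method for {task!r}")

sent = []                                    # stands in for the platform
methods = {
    "fetch": [(lambda s: s["door_open"], ["move", "grab"]),
              (lambda s: True,           ["open", "move", "grab"])],
}
rae_step("fetch", {"door_open": False}, methods, sent.append)
print(sent)  # ['open', 'move', 'grab']
```

Because the guards read the observed state at refinement time, the same task can yield different command sequences depending on what the world actually looks like, which is the point of acting with an operational model.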
Telling our AI agents how to perform tasks.
In previous posts, we looked into how our agents plan their actions with deterministic search algorithms, in state-space and plan-space.
The agents plan their actions by transforming a state or a partial plan, searching through the space to find the goal.
Then the result, the solution plan, is given to the Acting Engine to be executed.
Software Engineering Manager who loves reading, writing, and coding.