This is the second post in my Udacity AI Nanodegree series. Taking the course is my attempt at learning and re-familiarizing myself with AI/CS concepts.
Project #2 tasked students with implementing an AI agent capable of playing a game of isolation. The one twist on the usual isolation game was that moves had to follow a chess knight's pattern. The course provided the board implementation, and we had to write an AI agent that searches the board for optimal moves with the goal of defeating an opponent.
Sample isolation board. Visited squares are shaded grey/black.
Initially, the game itself was not something to get too excited about. But once the implementation of the algorithms started, the fun picked up! Let's start with a high-level definition of the problem. Given a board, it is not obvious what the next move should be in order to guarantee a win. One could run simulations to find it, but the search space can be too large even for small boards (7×7), making the game impossible to solve by brute force alone. Instead, the AI agent should focus on optimizing how it traverses the game tree by doing two things:
- Come up with a way to score board positions. A winning board has a score of positive infinity, a losing board a score of negative infinity, and anything in between should have a score that reflects how favorable the board is for winning or losing.
- Iterate over the possible solution tree quickly. You want to discard boards that are unfavorable to you and prioritize ones known to benefit you.
The first problem, coming up with a way to value a board, means defining a heuristic function over board positions. The strength of the heuristic function is the difference maker between AI agents. It requires a balance between accuracy and speed, so that your algorithm can evaluate many boards during a single turn. If the heuristic is too expensive to compute, the agent will be slow, and under rules that enforce time limits it will lose.
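To make this concrete, here is a minimal sketch of such a heuristic: the difference in available moves between the two players. The accessor names (`is_winner`, `is_loser`, `get_legal_moves`, `get_opponent`) are modeled loosely on the course's board interface, so treat them as assumptions rather than the actual API:

```python
def open_moves_heuristic(game, player):
    """Score a position by the difference in available moves.

    Positive values favor `player`; +/- infinity mark decided games.
    The board accessors used here are assumed names, not a real API.
    """
    if game.is_winner(player):
        return float("inf")
    if game.is_loser(player):
        return float("-inf")
    own_moves = len(game.get_legal_moves(player))
    opp_moves = len(game.get_legal_moves(game.get_opponent(player)))
    return float(own_moves - opp_moves)
```

Cheap to compute (two legal-move enumerations) yet surprisingly informative, which is exactly the trade-off described above.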
For the second problem, iterating over the possible solution tree, there are approaches such as minimax and alpha-beta pruning to optimize your move selection. Mix in iterative deepening and you have an effective agent.
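Here is a sketch of alpha-beta pruning over a toy game tree rather than real boards; representing internal nodes as `(static_estimate, children)` pairs and leaves as plain numbers is an assumption made purely for illustration:

```python
def alphabeta(node, depth, alpha=float("-inf"), beta=float("inf"),
              maximizing=True):
    """Alpha-beta pruned minimax over a toy tree.

    Leaves are exact numeric scores; internal nodes are
    (static_estimate, [children]) pairs so a depth cutoff can still
    return a heuristic value, as a depth-limited search must.
    """
    if not isinstance(node, tuple):
        return node  # exact leaf score
    estimate, children = node
    if depth == 0:
        return estimate  # depth cutoff: fall back to the heuristic
    if maximizing:
        value = float("-inf")
        for child in children:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if beta <= alpha:
                break  # prune: the minimizer will never allow this branch
        return value
    else:
        value = float("inf")
        for child in children:
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:
                break  # prune: the maximizer already has a better option
        return value
```

Iterative deepening is then just a loop that calls this with depth 1, 2, 3, ... until the turn's time budget runs out, keeping the best move from the last fully completed depth.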
The heuristic functions I tried scored a player's position by how many open moves the player had versus the opponent (the more moves you have over the opponent, the better), combined with how close the player was to the open squares. The idea is to avoid getting trapped. Another variation I tried was staying as far away from the walls as possible. That turned out to be a decent heuristic, but not as effective as staying close to the open squares. Lastly, I tried combining staying away from the walls with staying close to the open squares, and the results were still worse than just staying close to the open squares.
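As an illustration of blending two such signals, here is a sketch that weighs move advantage against distance from the board's center (i.e., penalizing positions near the walls). The accessor names and the weight value are assumptions for the sketch, not the functions I actually submitted:

```python
def combined_heuristic(game, player, wall_weight=0.5):
    """Blend move advantage with proximity to the board center.

    The board accessors (`get_legal_moves`, `get_player_location`,
    `height`, `width`) are assumed names for illustration only.
    """
    own = len(game.get_legal_moves(player))
    opp = len(game.get_legal_moves(game.get_opponent(player)))
    y, x = game.get_player_location(player)
    center_y, center_x = game.height / 2.0, game.width / 2.0
    dist_from_center = ((y - center_y) ** 2 + (x - center_x) ** 2) ** 0.5
    # More moves than the opponent is good; sitting far from the
    # center (near the walls) is penalized by wall_weight.
    return (own - opp) - wall_weight * dist_from_center
```

Tuning `wall_weight` against a fixed opponent is one way to test whether the wall term helps at all; in my runs, the plain move-count signal kept winning.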
All in all, it was a successful implementation that beat the baseline score set by the project's creators. Now I am trying to decide what to do with the agent and what could be added so that it can compete against other students' agents.
Some observations from going through the exercise:
- Visualization is king. Visualizing board positions and move calculations really helped me discover bugs in my implementation. I should always start with visualization when working on these problems and go from there.
- Iterative deepening was somewhat unintuitive at first. It is amazing how much it helps find a solution faster without committing too deeply to only certain parts of the tree.
- Alpha-beta can be confusing at first, and you definitely need a few manual/on-paper walkthroughs to see why it is effective.
Some things I did not implement as part of this exercise and still need to explore:
- Quiescence search. This one is a mystery and something I will bring up on the course forums. I have read the book material and other online resources, but something tells me it will not fully make sense until I try to implement it.
- Monte Carlo tree search rollouts. I really want to implement this one and see how it would make the agent better. It seems to be a big part of what makes game-playing AIs effective.
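As a starting point for that exploration, here is a sketch of just the rollout half of MCTS (no tree or UCT selection), written against a deliberately abstract interface: `legal_moves` and `apply_move` are hypothetical callables standing in for a board, and the isolation rule that a player with no legal moves loses decides the winner:

```python
import random

def random_rollout(state, legal_moves, apply_move, playouts=100, rng=random):
    """Estimate a position for the player to move via random playouts.

    Plays `playouts` games of uniformly random moves from `state` and
    returns the fraction won by the root player. `legal_moves(state)`
    and `apply_move(state, move)` form a hypothetical game interface,
    not the course's board class.
    """
    wins = 0
    for _ in range(playouts):
        s, root_to_move = state, True
        while True:
            moves = legal_moves(s)
            if not moves:
                # The player to move is stuck and loses (isolation rule).
                wins += 0 if root_to_move else 1
                break
            s = apply_move(s, rng.choice(moves))
            root_to_move = not root_to_move
    return wins / playouts
```

Full MCTS would use these rollout values to guide which branches of the tree to expand, which is the part I still want to study.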
This was fun. On to the advanced search concepts and the Pac-Man lab!