Markov Brain Controllers in Robots

Evolving Markov Brains in virtual environments is one thing, but what happens when you expose them to the real world? Here I show two examples where Thassyo Pinto and I tried exactly that. In the first example we evolved Markov Brains to follow a line. We used a fully virtual and fairly abstract environment and selected for controllers that steer a bot so that it first finds a line and then follows it. The bot either had to follow a simple circle, representing a static environment, or a curvy line whose curvature changed from generation to generation. The evolved brains were afterwards uploaded to the bot and tested in a real environment. As expected, we find that bots evolved in a noisy environment are much more robust to variation in the environment. This robustness comes at a price, however: the more robust bot is slower. Check the video:


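To make the two treatments concrete, here is a toy sketch of the evaluation, not our actual setup: the controller, the curvature ranges, and the fitness function are all made up for illustration. The point is only that the "noisy" treatment resamples the line's curvature every generation, while the "static" treatment always presents the same circle.

```python
import random

def make_line(noisy, rng, steps=10):
    """Curvature per step: a fixed circle (static) vs. a freshly
    sampled curvy line each generation (noisy treatment)."""
    if noisy:
        return [rng.uniform(-0.2, 0.2) for _ in range(steps)]
    return [0.1] * steps  # constant curvature = circle

def evaluate(controller, curvatures):
    """Toy fitness: reward the bot for staying close to the line.
    `offset` is the bot's lateral distance from the line."""
    offset, fitness = 0.0, 0.0
    for c in curvatures:
        turn = controller(offset)       # steering from the line sensor
        offset += c - turn              # line curves away; bot steers back
        fitness += max(0.0, 1.0 - abs(offset))
    return fitness

rng = random.Random(1)
bang_bang = lambda off: 0.15 if off > 0 else -0.15  # stand-in controller
static_fit = evaluate(bang_bang, make_line(False, rng))
noisy_fit = evaluate(bang_bang, make_line(True, rng))
```

A controller evolved only against the static circle can overfit to that one curvature, which is why the noisy treatment produces the more robust (if slower) real-world behavior.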
In the second example I used Markov Brains that have internal feedback (sorry, proprietary technology at present). I hardcoded the feedback mechanism so that bots are rewarded for keeping other bots in front of them, the idea being that agents end up in pairs or flocks. As a control I used random feedback, which should not lead to any meaningful behavior. In the video you see that the bots at the top (accurate feedback) indeed aggregate nicely, but you also find clusters in the bottom half, where I used only random feedback. While I do find a significant difference in clustering between the two treatments, and accurate feedback does enable the bots to aggregate, the effect is not very pronounced. To improve this, I am currently developing a bot with better proximity sensors; all in all, ePucks can “see” accurately only within an inch. Still, check out the video:


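One simple way to quantify the difference between the two halves of the arena is a mean nearest-neighbour distance over the bots' positions: tighter pairs and flocks give a lower value. This is a generic sketch, not the statistic we actually used, and the positions below are invented for illustration.

```python
import math

def mean_nn_dist(positions):
    """Mean distance from each bot to its nearest neighbour;
    lower values indicate tighter aggregation."""
    total = 0.0
    for i, (x1, y1) in enumerate(positions):
        nearest = min(math.hypot(x1 - x2, y1 - y2)
                      for j, (x2, y2) in enumerate(positions) if j != i)
        total += nearest
    return total / len(positions)

# hypothetical arena snapshots (cm): feedback bots pair up, controls scatter
feedback = [(10, 10), (11, 10), (40, 42), (41, 41)]
control = [(5, 80), (60, 12), (25, 55), (90, 90)]
```

Comparing `mean_nn_dist(feedback)` against `mean_nn_dist(control)` over many snapshots (with an appropriate significance test) is one way to show that the accurate-feedback clusters are not just chance groupings like those in the random-feedback half.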