I have always been fascinated by the idea of artificial intelligence and machine consciousness. While I am convinced that computers are perfectly capable of hosting an algorithm that possesses or instantiates human-level intelligence, I am not convinced that we can sit down at a computer console and code such a program. Instead, I argue that one should use the one process that already gave rise to intelligence: evolution.
This approach comes with a couple of neat advantages and offers a very new perspective on engineering opportunities. Richard Feynman is famous for the quote:
“What I cannot create, I do not understand”
but what evolution offers us is:
“I can evolve a brain/algorithm/network to do something I otherwise don’t understand”
The only drawback of this approach is that we need to understand how to design the world, and with it the fitness landscape, so that it is conducive to evolving our desired abilities. One might also wonder what to evolve, because not every substrate is capable of supporting the algorithms I am interested in. However, I think we solved that problem by using Markov Networks, and I am working on making Markov Networks adaptable and dynamically changing.
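To make the substrate idea concrete, here is a minimal sketch of a Markov Network in the spirit described above. This is my own hypothetical illustration, not the actual research code: the network is a set of probabilistic logic gates, each reading a few binary state nodes, looking up its input pattern in a probability table, and writing sampled bits back to other nodes. Evolution would act on the tables and the wiring.

```python
import random

class ProbabilisticGate:
    """A probabilistic logic gate: one table row per input pattern,
    each row giving P(bit = 1) independently for every output node.
    (Hypothetical sketch, not the author's implementation.)"""

    def __init__(self, inputs, outputs, rng):
        self.inputs = inputs      # indices of state nodes this gate reads
        self.outputs = outputs    # indices of state nodes this gate writes
        self.table = [[rng.random() for _ in outputs]
                      for _ in range(2 ** len(inputs))]

    def update(self, state, next_state, rng):
        row = 0
        for i in self.inputs:               # encode input bits as a row index
            row = (row << 1) | state[i]
        for j, p in zip(self.outputs, self.table[row]):
            if rng.random() < p:            # sample each output bit
                next_state[j] = 1

def step(gates, state, rng):
    """One synchronous update: all gates fire, outputs OR together."""
    next_state = [0] * len(state)
    for g in gates:
        g.update(state, next_state, rng)
    return next_state

rng = random.Random(0)
# Tiny network: 4 state nodes, two gates wired arbitrarily for illustration.
gates = [ProbabilisticGate([0, 1], [2], rng),
         ProbabilisticGate([2, 3], [0, 1], rng)]
state = [1, 0, 0, 1]
state = step(gates, state, rng)
```

Mutating such a network means perturbing table entries or rewiring which nodes a gate reads and writes, which is exactly the kind of variation a genetic algorithm can act on.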
The question that remains is about fitness landscapes and how one can design them appropriately. Unfortunately, nobody has a good working definition of intelligence. Therefore I focused on two major aspects: information integration and representations. In a nutshell, an agent needs to integrate sensor information with past experiences in order to form representations of the world it lives in. In addition, these representations have to be used to make inferences and predictions about that world. While it is a common belief that representations are not necessary for cognition, I showed that they evolve naturally, and that their opaque nature prevents us from building good internal models when designing robots (see a more elaborate explanation here).
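As a toy illustration of designing a fitness landscape that rewards representation (my own hypothetical setup, not a published task), consider scoring agents only for predicting the *next* sensor value of a repeating environmental pattern. A purely reactive mapping from the current input cannot succeed, because the same input can precede different futures; the agent must carry internal state about where it is in the sequence.

```python
import random

PATTERN = [0, 0, 1, 0, 1, 1]    # hidden repeating environmental sequence

def fitness(lookup):
    """lookup maps (current_bit, memory) -> (prediction, new_memory).
    Score = number of correctly predicted next bits over one cycle."""
    memory, score = 0, 0
    for t in range(len(PATTERN)):
        current = PATTERN[t]
        nxt = PATTERN[(t + 1) % len(PATTERN)]
        prediction, memory = lookup[(current, memory)]
        if prediction == nxt:
            score += 1
    return score

def random_agent(rng, n_memory=4):
    return {(b, m): (rng.randint(0, 1), rng.randrange(n_memory))
            for b in (0, 1) for m in range(n_memory)}

def mutate(agent, rng, n_memory=4):
    child = dict(agent)
    key = rng.choice(sorted(child))
    child[key] = (rng.randint(0, 1), rng.randrange(n_memory))
    return child

# A simple hill climber stands in for a full evolutionary run.
rng = random.Random(1)
best = random_agent(rng)
initial_score = fitness(best)
for _ in range(2000):
    child = mutate(best, rng)
    if fitness(child) >= fitness(best):
        best = child
# Perfect score is len(PATTERN) == 6; current input alone cannot reach it.
```

The point of the design is that the landscape itself forces information integration: high fitness is only reachable by agents whose internal state encodes something about the recent past, i.e. a rudimentary representation of the environment.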