Simple Embedded Architecture for Robot Learning and Emotion

I’ve been working on a paper about robot learning for over a year now, more as a place to organize my thoughts than anything else. It outlines some ideas I’ve been having about how to implement a learning system that lets a robot relate state-action sequences to a result. It’s still extremely sketchy, but I thought I’d make it public:
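To give a flavor of what “relating state-action sequences to a result” could mean in code, here is a minimal, hypothetical sketch (not from the paper): it buffers recent (state, action) pairs, tallies which outcome followed each sequence, and recalls the most common outcome for a sequence seen before. The class and method names are my own illustration.

```python
from collections import deque

class SequenceLearner:
    """Hypothetical sketch: associate recent state-action sequences with outcomes."""

    def __init__(self, window=5):
        self.history = deque(maxlen=window)  # recent (state, action) pairs
        self.memory = {}                     # sequence -> {outcome: count}

    def step(self, state, action):
        # Record one moment of experience.
        self.history.append((state, action))

    def record_outcome(self, outcome):
        # Credit the outcome to the whole recent sequence, then reset.
        key = tuple(self.history)
        counts = self.memory.setdefault(key, {})
        counts[outcome] = counts.get(outcome, 0) + 1
        self.history.clear()

    def predict(self, sequence):
        # Recall the most frequently observed outcome for this sequence, if any.
        counts = self.memory.get(tuple(sequence), {})
        return max(counts, key=counts.get) if counts else None

learner = SequenceLearner(window=2)
learner.step("dark", "forward")
learner.step("wall_close", "turn_left")
learner.record_outcome("bump")
print(learner.predict([("dark", "forward"), ("wall_close", "turn_left")]))
```

A real embedded version would of course need to discretize sensor readings and bound memory, but the core association step is this small.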
Simple Embedded Architecture for Robot Learning and Emotion
Sorry for the shoddy formatting; that’s what you get when you publish a Google Doc as HTML.

I also have a few background documents that may be interesting* to some:

Senses/States Matrix
Emotional Effects on Outputs
Robot Emotions versus Movements

My current mental obsession is an ALife simulation running under Linux, so I thought I would try out this learning architecture in a simulated environment on a capable processor first.

*I find this stuff extremely interesting, but then according to my teenage daughters I’m weird. Of course they think weird is a compliment.
