Friday 26 June 2015

Do you have a chaotic brain?

There is some (disputed) biological evidence [1,2,3] that what goes on inside your brain is chaotic (I could find plenty of real evidence for this in my case!). But this is not the common meaning of the term 'chaos', which is normally used to describe something highly disorganised. On the contrary, in this context we are using the term 'chaos' in its specialist mathematical sense, to describe something which is highly organised but difficult to predict.


Early in my research career, I became interested in the possibility that brain activity is chaotic in this mathematical sense. I pondered what this might mean. Does this chaos offer any advantage to the brain in terms of the memories it can store and retrieve, or in terms of how it processes information? One thing that is notable about chaotic systems is that they are dynamic, restless, ceaselessly moving, creating new paths, new possibilities, and yet remaining constrained, bounded in a small sub-region of their potential 'space'. Sounds impossible? Have a look at this:


Here you see a classic chaotic system called the Lorenz attractor [4]. The light blue line illustrates the two-winged sub-region that this chaotic system is constrained within (i.e. its 'attractor'). The red point is a particular state of the system which, as you will see, starts far away from the attractor, but moves over time towards it and then continues to describe a path around it. Although this system is constrained to its two-winged attractor, it will never stop moving, it will never be at exactly the same point twice on its attractor, and it will continuously trace new paths (transients) as it goes along. Pretty cool!
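If you'd like to see this 'bounded but never settling' behaviour for yourself, here is a minimal sketch that integrates the Lorenz equations with a simple Euler scheme. The step size, number of steps and starting point are my own arbitrary choices for illustration, not values from [4]:

```python
def lorenz_step(x, y, z, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One Euler step of the Lorenz equations with the classic chaotic parameters.
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dt * dx, y + dt * dy, z + dt * dz

# Start well away from the attractor, like the red point in the animation.
x, y, z = 20.0, 35.0, 55.0
trajectory = []
for _ in range(10000):
    x, y, z = lorenz_step(x, y, z)
    trajectory.append((x, y, z))

# The state never stops moving, yet it remains bounded in a small
# sub-region of its potential space (the two-winged attractor).
print(max(abs(c) for point in trajectory for c in point))
```

Plotting the (x, z) pairs of `trajectory` (e.g. with matplotlib) reveals the familiar two-winged butterfly shape.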

Although the system will never be at exactly the same point twice (i.e. it will never be in exactly the same state more than once), it will come very close to points that it had previously visited, forming complex loop-like structures that are called 'unstable periodic orbits' (UPOs - see image on the right, taken from [5]).
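One standard way of spotting these near-recurrences in a trajectory is the 'close returns' idea: scan the orbit for times when the state comes back within some tolerance of a point it visited a few steps earlier. Here is a toy sketch of the idea using the one-dimensional logistic map rather than the Lorenz system; the seed, tolerance and range of lags are arbitrary illustrative choices:

```python
def logistic(x):
    # The logistic map at r = 4: a classic one-dimensional chaotic system.
    return 4.0 * x * (1.0 - x)

# Generate a chaotic orbit.
xs = [0.1]
for _ in range(5000):
    xs.append(logistic(xs[-1]))

# 'Close returns': times i where the orbit comes back within `tol` of an
# earlier state after p steps. Each such near-recurrence shadows an
# unstable periodic orbit of approximate period p.
tol = 0.01
returns = [(i, p) for p in range(1, 51) for i in range(len(xs) - p)
           if abs(xs[i] - xs[i + p]) < tol]

# Many near-recurrences, even though the orbit never exactly repeats.
print(len(returns))
```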

Another remarkable feature of a chaotic attractor is that it can embed an infinite number of these UPOs. One question we asked ourselves early on in this research was: what if each of these UPOs represented a memory of the network [5]? If this could be achieved, then the memory capacity of the network (and by implication the brain) would also be theoretically infinite.

This raises lots of questions, most of which we have been unable to answer. However, in my next blog, I will begin to outline some of the work I did with colleagues at Oxford Brookes University on developing chaotic models of neural information processing in the brain.

References

[1] Babloyantz, A., Lourenco, C., 1996. Brain chaos and computation. International Journal of Neural Systems 7, 461–471.

[2] Freeman, W.J., 1987. Simulation of chaotic EEG patterns with a dynamic model of the olfactory system. Biological Cybernetics 56, 139–150.

[3] Freeman, W.J., Barrie, J.M., 1994. Chaotic oscillations and the genesis of meaning in cerebral cortex. In: Buzsaki, G., et al. (Eds.), Temporal Coding in the Brain. Springer-Verlag, Berlin, pp. 13–37.

[4] Lorenz, E.N., 1963. Deterministic nonperiodic flow. Journal of the Atmospheric Sciences 20 (2), 130–141.

[5] Crook, N.T., olde Scheper, T., 2002. Adaptation based on memory dynamics in a chaotic neural network. Cybernetics and Systems 33 (4), 341–378.

Wednesday 17 June 2015

Rudely Interrupted!

Have you ever been rudely interrupted? You're part way through saying something of significance (to you at least) and the person you are speaking to barges in with a comment or a question. How do you react? Ignore it and carry on regardless? Deal with their comment/question and return to what you were saying? This was one of the problems we faced in the Companions Project when we developed an animated avatar called Samuela capable of engaging in social conversation (see this post for a very brief overview).

Companions Dialogue System Interface

Occasionally, Samuela would make long multi-sentence utterances commenting on what the user had said about their day at work. Here's an example of one of Samuela's long utterances:
"I understand exactly your current situation. It's right that you are pleased with your position at the minute. In my opinion having more free time because of the decreased workload is fantastic. Meeting new people is a great way to pass the time outside of work. I'm sure Peter will provide you with excellent assistance. Try not to let Sarah bother you either."
These long utterances provided the opportunity for (and often provoked) the user to interrupt the avatar mid-speech. We realised that Samuela would need to be able to handle these interruptions and respond to them in a human-like way if she was to engage in believable social conversation with the user. A detailed description of how we implemented this barge-in interruption handling facility can be found here (Crook et al, 2012).

In summary, we faced two problems when developing this interruption handling capability. The first was detecting the occurrence of genuine interruptions and distinguishing them from back-channel utterances from the user (e.g. 'Aha', 'Yes' etc). The second was to equip the system with human-like strategies for responding to them in a natural way and continuing with the conversation.

If the user starts speaking whilst Samuela (denoted as ECA in the figure below) is speaking, then the system uses thresholds in both the intensity and sustained duration of the audio signal from the user's microphone to determine if this counts as a genuine interruption. This is illustrated in the schematic below, which shows 4 cases of Samuela speaking, two of which (cases 3 and 4) are designated interruptions by the system:
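To give a flavour of the logic, here is a hypothetical sketch of threshold-based barge-in detection. The threshold values and frame rate below are invented for illustration; they are not the ones used in the Companions system:

```python
# Hypothetical values, not those from the Companions system.
INTENSITY_THRESHOLD = 0.2   # normalised microphone energy
MIN_DURATION_FRAMES = 30    # ~300 ms at an assumed 100 frames/s

def is_interruption(frames, eca_speaking=True):
    """Return True if the user's audio counts as a genuine interruption.

    `frames` is a sequence of per-frame intensity values captured while
    the ECA is speaking. Short bursts above the threshold (back-channels
    like 'Aha', 'Yes') are ignored; only sustained loud speech counts.
    """
    if not eca_speaking:
        return False
    run = 0
    for level in frames:
        if level > INTENSITY_THRESHOLD:
            run += 1
            if run >= MIN_DURATION_FRAMES:
                return True
        else:
            run = 0  # the loud stretch was not sustained; reset
    return False

# A brief back-channel: loud, but too short to count as an interruption.
backchannel = [0.0] * 10 + [0.5] * 10 + [0.0] * 10
# Sustained user speech: loud for long enough to be an interruption.
sustained = [0.0] * 5 + [0.5] * 40

print(is_interruption(backchannel))  # False
print(is_interruption(sustained))    # True
```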


The second challenge, which was to equip Samuela with strategies for responding to user barge-in interruptions, required us to understand more about the strategies that humans use in such situations. To gather information about this we analysed some transcripts of the BBC Radio 4 programme Any Questions. This is a discussion programme consisting of a panel of public figures, including politicians who regularly interrupt each other - so this was a rich source of examples for us!

In brief, our analysis showed that two things were happening when panelists were interrupted: the first was addressing the interruption; the second was the resumption or recovery of speech after the interruption. We found it necessary to classify the types of interruption that we observed, and focussed on implementing the 6 that were found to be most common. We then classified the types of recovery that we observed for each type of interruption and then implemented these in the system controlling Samuela.
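The resulting design is essentially a classify-then-recover lookup. The interruption categories and recovery actions in this sketch are invented placeholders, not the six classes from our analysis (for those, see Crook et al, 2012):

```python
# Placeholder categories and actions, invented for illustration only.
RECOVERY_STRATEGY = {
    "backchannel":   "continue",            # not a real interruption: keep talking
    "clarification": "answer_then_resume",  # answer, then repeat the last clause
    "topic_change":  "drop_and_follow",     # abandon the planned utterance
}

def recover(interruption_type):
    """Pick a recovery action, defaulting to resuming the planned speech."""
    return RECOVERY_STRATEGY.get(interruption_type, "resume")

print(recover("clarification"))  # answer_then_resume
print(recover("something_else"))  # resume
```

The point of the table-driven design is that each observed interruption type can be paired with the recovery behaviour seen in the human transcripts, and new types can be added without touching the dialogue logic.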

Here are a couple of examples of Samuela responding to user interruptions that are taken from the paper. The down arrow in the system turn (S) indicates the point at which the user (U) interrupted (the remainder of what the system had planned to say is shown in italics). The right arrow shows the output of the speech recogniser when the interruption occurred.



We were unable to do a full evaluation of the interruption handling before the end of the project, which is a pity because I believe that this is the most sophisticated user barge-in interruption handling system that has yet been developed.

Friday 12 June 2015

Social Robotics Motivation Part II: Human Identity

In my last post (found here) I began to explain why I found myself increasingly interested in social robotics as a focus for my research. Today, I want to complete this picture by explaining that, at heart, my motivation stems from a desire to understand what it is to be human. I want to start with a quote from one of my all-time favourite movies:
"There have always been ghosts in the machine. Random segments of code that group together to form unexpected protocols. Unanticipated, these free radicals engender questions of free will, creativity and even the nature of what we might call the soul.
When does a perceptual schematic become consciousness? When does the difference-engine become the search for truth? When does a personality simulation become the bitter mote of a soul?" (I, Robot, Alex Proyas).
Although I don't believe in the "random segments of code that group together to form unexpected protocols" part, Proyas's film I, Robot raises some deeply interesting questions.

For those of you who haven't seen it, here is the official trailer. I, Robot tells the thrilling story of a future in which humanoid robots are fully integrated into society. As the sinister plot unfolds, it becomes clear that the central robot character, Sonny, is unique amongst the robot population in that he appears to be more human than robotic. The film raises deep questions about the robot's true identity: Is he a person in his own right, possessing free will, creativity and even a soul?

For me the film also implicitly raises important questions about human identity: If machines are created that successfully simulate personhood to the degree of accuracy portrayed here, does that mean that humans are nothing more than biological machines?

I believe that the study of social robotics has a part to play in answering this question.

Friday 5 June 2015

How I became involved in social robotics

Some say that conversation is an art. When you try to build an artificial agent capable of even a limited form of social conversation, you begin to understand what people are getting at. In 2008 I was employed as an RA/developer at Oxford University to work on the EU funded Companions Project, which sought to develop an animated avatar called Samuela that you could have a 'social' conversation with about your day at work.




Samuela was designed to be emotionally intelligent, recognising the user's emotional state through voice patterns and sentiment analysis, and using her voice, facial expressions and gestures to show empathy towards the user. She was also capable of generating long utterances in which she gave advice to the user about how they were responding emotionally to the events of their day. Here is a video which introduces the prototype system and shows a couple of sample conversations with a user:



If you want to know more of the technical details of the system have a look at the selected references listed below. I will summarise my contributions to the Companions project in future blog posts.

Working on this project was one of the most exciting and challenging periods of my research career. It introduced me to the deeply interesting and challenging area of creating artefacts capable of social interaction with people. Such systems require us to go far beyond the traditional mainstream challenges of AI (e.g. NLP, reasoning, learning, dialog management etc), into a world dominated by social norms and protocols, emotion, ethical patterns of behaviour and much more. I also realised the importance of 'presence' in social interaction, and in particular, bodily presence. An avatar on a screen (just like a human on a screen) involves a certain remoteness and lack of presence. For this reason, I turned to the use of robots to study and develop technologies capable of social interaction with people.

In 2011 I was appointed as Head of Computing and Communication Technologies at Oxford Brookes University. Soon afterwards I opened a new Cognitive Robotics lab there and began work on producing robots capable of social interaction, including our own skeletal head-and-neck robot called Eddie (more about this in later blog posts) which we built to mimic human head movements during conversation. We have also recently completed a study of the effect that upper body pose mirroring has on human-robot interaction. In this series of blog posts I will summarise this and subsequent work and give some insights into the stories behind the publications.