One of the many acts we here at The Phoenix Remix are excited to see what 2017 has in store for is the improv act Albert and A.L.Ex from HumanMachine. I sat down with the creator of the show, Piotr, to talk all things improv and interactivity…
Hello Piotr, so tell our readers a little bit about yourself and your improv act.
Hello, my name is Piotr and I have been passionate about improv and acting for over 17 years. It all started at my engineering school in Toulouse, France, where at the uni improv club we had brilliant improv teachers from a company called C Cédille: they got me hooked (pun intended). As I went places, first to Paris, then to New York (for ten years), and finally to London, improv and performing arts in general have been a key part of my life. Currently, I work as a research scientist in artificial intelligence (my other passion) and build robots by day, and I do improv and study drama at LSDA (London School of Dramatic Art) by night.
My act, “Albert and A.L.Ex from HumanMachine”, accurately reflects these two passions. Albert and A.L.Ex are a duo of sorts. Albert is a lovable nerd who dreams of the big stage. Lonely but resourceful, Albert builds himself a friend, whom he calls A.L.Ex. The name A.L.Ex actually stands for Artificial Language Experiment (and is also an obscure reference to the clever parrot that was capable of communicating and solving basic cognitive tasks). A.L.Ex is a unique computer creation running artificial intelligence. Its stage presence consists of a laptop computer, a portable microphone, amplified sound and video projection, and its software can do speech recognition, improvised dialogue and voice synthesis.
During the show, A.L.Ex and Albert speak in turn, like two human actors in a play, movie, or improv scene. Currently, A.L.Ex’s computer voice is disembodied and the character it plays exists only through Albert, who may place this voice anywhere on the stage. I have also set up a concept, HumanMachine (the name itself is an homage to Kraftwerk’s seminal 1978 electronic music album, The Man-Machine) to explore the possible relationships and similitudes between humans and computers, and to make them work together on art. It is a project that is just as much about building a creative and spontaneous AI as it is about discovering what makes humans unique. I would like to think of HumanMachine as a collective. I have a good friend in Edmonton, Alberta, called Kory Mathewson, who had a similar idea of doing improv with a chatbot. As soon as we learned about each other, we started closely collaborating and we are now relying on my work on A.L.Ex and planning shows together…
“The Showstoppers….their talent is mind-blowing” – Piotr, from Albert and A.L.Ex from HumanMachine
What made you come up with the idea of A.L.Ex?
That idea came progressively… I had been both hacking (video games or computer graphics) and role-playing/improvising since the mid-'90s. I used to see these activities as two different facets of creativity: the introspective one and the social one, and I kept them separate.
Then, when I was living in New York, I got a spark of inspiration through a friend of mine, Jonathan Goldberg. Jonathan mixed two of his passions: improv and volunteering, and thus created Cherub Improv (www.cherubimprov.org), a volunteer improv comedy collective. The company has performed, over the past ten years, in thousands of shows and in places rather different from usual comedy clubs: hospitals, nursing homes, cancer or AIDS support groups, homeless shelters…. As a Cherub myself for nearly seven years, I did a good hundred of these shows: it was an exalting experience.
So, back to combining computer science and theatre: that idea slowly took root in my mind when I went back to school. During my graduate studies in New York, I looked at various domains of artificial intelligence, and did some work in natural language processing – the science behind speech recognition and machine translation.
One of the techniques that I used consisted of learning the statistics of sequences of words. For instance, if I tell you: “they are going to …”, you could perhaps complete the sentence with: “see an improv show”, “school” or “kiss”, but you would probably not answer “table elephant rocket swimming” because it does not make sense. Chances are that if a sentence does not make any sense, nobody will ever have said it or written it down. So, statistically speaking, sentences that make sense are more likely to occur in human language.
Unsurprisingly, a computer program can actually compute these statistics and thus learn to give (mostly) meaningful sentence suggestions and avoid (some) nonsensical ones. First, that program would need to look at all the patterns it can find in written text (for instance, let’s imagine that it “reads” all of Wikipedia, lots of books, theatrical scripts, etc…) and then compute the most likely patterns in language. Assuming that it did that, it should be able to produce meaningful sentences (because someone may have written them in the past) and avoid generating nonsense (I am assuming that people are rational beings who speak and write sensibly, even though we recently had proof of the contrary…). As a matter of fact, this statistical technique is exactly what state-of-the-art speech recognition, or translation from English to French, rely upon!
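To make the idea concrete, here is a minimal sketch of such word statistics, using a toy corpus of made-up sentences (this is an illustration of the principle, not the actual data or code behind A.L.Ex):

```python
from collections import Counter, defaultdict

# A toy corpus standing in for "all of Wikipedia, lots of books, scripts...".
corpus = [
    "they are going to see an improv show",
    "they are going to school",
    "they are going to kiss",
    "we are going to see an improv show",
]

# Count how often each word follows a given word (bigram statistics).
following = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current_word, next_word in zip(words, words[1:]):
        following[current_word][next_word] += 1

def suggest(word, k=3):
    """Return the k most likely words to follow `word` in the corpus."""
    return [w for w, _ in following[word].most_common(k)]

print(suggest("going"))  # ['to']
print(suggest("to"))     # ['see', 'school', 'kiss']
```

Because “table elephant rocket swimming” never occurs in the corpus, those continuations simply get zero counts and are never suggested; real systems use far larger models (and smoothing) over vastly more text, but the statistical intuition is the same.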
A couple of years later, I worked on a web search engine, specifically on query completion: you know, when you start typing “imp”, it might suggest “improv” or “improve” or “impeach the president-elect”. These suggestions are based on the statistics of what people search for, at what time of the day or in which location. What this search engine does is produce the most obvious (most relevant) sequence of words that follow the context of the previous words. That was an epiphany for me: this statistical model of sentences actually illustrated the fundamental principle of improv: always do the most obvious thing, given the context. The huge difference between a search engine and human performers is that we are attentive to all the extra little details of a scene – a smile from our improv partner, a change of intonation, a word that has been reincorporated from a previous improv…
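The completion principle can be sketched in a few lines. The query log and its counts below are invented for illustration; a real engine would also condition on time, location and much more:

```python
from collections import Counter

# Hypothetical search logs: each query mapped to how often it was typed.
query_log = Counter({
    "improve": 300,
    "improv": 120,
    "impeach the president-elect": 60,
    "improv show london": 45,
})

def complete(prefix, k=3):
    """Suggest the k most frequent logged queries starting with `prefix`."""
    matches = {q: n for q, n in query_log.items() if q.startswith(prefix)}
    return sorted(matches, key=matches.get, reverse=True)[:k]

print(complete("imp"))  # ['improve', 'improv', 'impeach the president-elect']
```

In other words, the engine always returns the statistically most obvious continuation of what it has seen so far, which is exactly the “do the obvious thing, given the context” rule of improv.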
So that’s the story from my side. Kory came to build his own chatbot through a slightly different route, and he talks about it on his blog (http://korymathewson.com/building-an-artificial-improvisor/).
“…we have amazing improv teachers in the UK” – Piotr, from Albert and A.L.Ex from HumanMachine
You have been in large improv troupes like Improvable – what’s it like going on stage with just you and a computer? What are the pros and cons about it?
As you point out, my main problem is that 1) I am absolutely alone on the stage, and that 2) my shows depend heavily on the tech, which will invariably break down at one point or another! Let’s start with the solitude on stage, which is pretty scary: I can find energy and inspiration only in myself and from a supportive audience – so far I have been very lucky to have a few friendly allies sitting in at each show. My alter-ego Albert is, in a way, doing a solo show, with one twist: as Albert cannot predict what A.L.Ex will say, there is always an element of surprise.
In some cases, A.L.Ex launches a baffling challenge (how to justify this weird and nonsensical line?). In other cases, it says something absolutely perfect, like that time when, during a dating scene, the female voice of A.L.Ex first said she was seeing somebody else, then that she would call the police. Having a machine give me the next line can create great scenes that would be harder to come by, organically, during a pure solo show. Now with the tech… Any unrecoverable failure means: no show. And pretty much anything could go wrong: no power in the laptop, no sound, no video projection, no remote connection, incorrect image size on the projector screen, the remote keyboard or mouse failing, not to mention software crashes (out of memory, out of disk space) or just a myriad of design bugs on my side.
My only way to solve this is to try to think of all the failure cases. It is a little bit like an actual software release: I need to test things in advance and keep thinking of possible bugs and then debug. Preparation is key. And I may have been a bit hard on myself by making the choice of not even using technical help (other than the person handling the lights) during the performance.
Which means that on the technical side, I can only blame myself if something goes wrong! So now, because I am alone, I am compelled to reach out to my friends, my improv teachers, as well as to other hackers and/or artists, to get help, and this makes this project very social and open. First of all, I chat about it and test things with my girlfriend, who sees me spend nights working on this. Then I regularly brainstorm, share code or ideas, test and debug with Kory Mathewson – it has become a remote collaboration. He and I reach out and get feedback and notes from our improv circles and teachers, on both sides of the Atlantic Ocean. Finally, I get casual notes from the audience of the shows or from friends who are excited by it. Paradoxically, this solo project has become very social!
“…I build robots by day, and I do improv…by night. My act, “Albert and A.L.Ex from HumanMachine”, accurately reflects these two passions. ” – Piotr, from Albert and A.L.Ex from HumanMachine
How long did it take you to make A.L.Ex?
Coding everything up: writing the AI code behind the dialogue system, designing a visual and sound interface, and assembling it with speech recognition and voice synthesis, took a good few hundred hours since the end of 2015 (when I properly started working on it). I also had quite a lot of AI and programming experience that I had developed over the years, which helped me pick the right open-source tools straight away to turn my ideas into code… Once I had coded the dialogue system, which relies on artificial intelligence, there was still a big chunk of work to do: not for me, but for A.L.Ex itself. A.L.Ex’s “brain” is a “recurrent neural network”, i.e., a giant network of small units that (very) vaguely mimic what neurons in the actual brain do, and the network is designed in such a way that it acts like a memory.
Like a baby’s brain (these analogies are very cringeworthy), it needs to learn first. So the way that I trained the neural network in A.L.Ex was to expose it to written dialogue from 100,000 actual movies. That represented about one billion words. For every given movie, A.L.Ex’s task was to read the lines of dialogue and to learn to predict the next line, iteratively, from beginning to end. That training took close to five weeks, non-stop, of number crunching, and it ran on a powerful computer in the cloud. I would check in every few days to make sure that it was learning properly. One day, I decided that A.L.Ex’s answers were good enough, and that it had to go into the big, open world of improv comedy.
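The training objective described above – given a line of dialogue, predict the line that follows – can be stripped down to a toy sketch. The snippet below replaces the recurrent neural network with simple counting over a made-up script (the script and the counting approach are stand-ins for illustration; the real A.L.Ex learns a compressed, generalising version of this from ~100,000 movies):

```python
from collections import Counter, defaultdict

# A toy "movie script": a flat list of consecutive lines of dialogue.
script = [
    "Hello.",
    "Hi there.",
    "Hello.",
    "Hi there.",
    "Hello.",
    "How are you?",
    "Fine, thanks.",
]

# The objective, stripped to its essence: for every line, record what line
# came next. A recurrent network learns a compressed version of this mapping,
# so it can also respond to lines it has never seen verbatim.
responses = defaultdict(Counter)
for line, next_line in zip(script, script[1:]):
    responses[line][next_line] += 1

def reply(line):
    """Answer with the most common next line seen after `line` in training."""
    if line not in responses:
        return "..."
    return responses[line].most_common(1)[0][0]

print(reply("Hello."))  # 'Hi there.'
```

Unlike this lookup table, a trained recurrent network can produce fluent replies to unseen lines; that generalisation is what the five weeks of number crunching buy.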
What’s been your favourite A.L.Ex gig to date?
The best gig, from a technical point of view, was at DuckDuckGoose, but I have no recording of it because my camera broke down at that moment! (I record all my gigs, to share them with Kory and other friends who cannot attend, and to debug A.L.Ex…) So then I would say that my best gig, because it was the most challenging one, was in September, at the Miller Pub, during a Hoopla Launch Pad. Pretty much everything went wrong during that show: the speech recognition failed, the sound volume was down, and the program crashed twice over the course of the show!
I literally had to go backstage and restart A.L.Ex to make it work. And of course, I had to deal with this by myself, in character, as Albert. It was a true learning experience…
You have coded thousands of film quotes into your A.L.Ex, what are your favourite ones and why?
I just counted: there are close to 100 million lines (or quotes) that were used to train A.L.Ex, coming from 100 thousand movies (basically, about a thousand lines of dialogue per movie). It is quite fascinating to pick a chunk of dialogue at random and read it. Some are mysterious: “- I never dreamed before.”, some are quite plain: “- Yes.” or “- No.”, some are philosophical: “- There is no meaning in renouncing things in life which give you pleasure only to keep your life.” When I was training A.L.Ex, I used some famous lines of dialogue to check how it was learning.
For instance, I would start with “- These are not the droids you” and ask it to complete the sentence. At the beginning of its learning process, it would answer something like: “- These are not the droids you speckles billion huffed consensual redoubt woof cordis nonsmoker heyyy licensing”. Later, when it was trained, it could answer “- These are not the droids you have been looking for.” or “- These are not the droids you work for.” Both sentences make sense for A.L.Ex, as Star Wars is only one among the 100,000 movies it has been trained on.
You have been voted Phoenix’s comedy act to watch in 2017 – how does that make you feel?
Thank you! I am really honoured. I think that exploring spontaneity through improvised play with a machine is indeed a neat idea. I am now working on the human aspects of that show, to make sure that this comedy act holds its promise!
What advice would you give to someone who wants to start doing improv?
A.L.Ex would tell you: always do the most obvious thing given the moment, the circumstances and the other characters! If your scene were from a movie, be it a Wong Kar Wai poem about unrequited love, an Ingmar Bergman family drama, an intense action scene by Kathryn Bigelow or a Pedro Almodovar tragicomedy, what would be the (statistically most likely) obvious thing to do? How can you do that? By being sincere and honest, and by listening to the other players. Teacher Jake Lyons, from City Academy, would tell his students not to try to joke, not to try to be funny. If you do that, you will automatically “play at the height of your intelligence” (Sean McCann, from the Showstoppers) and avoid gratuitous vulgarity, always “taking the high road” instead (John Swist, from Quickthinking! Improv).
The improv scene has got bigger and bigger over the years in the U.K. Why do you think that is?
I think that we have amazing improv teachers in the UK, and a privileged access to visiting teachers from Chicago, New York or Vancouver. Being part of a vibrant community, learning from the best, and slowly infecting our friends, family and colleagues with the improv virus can only result in a growing improv scene!
It is also important to remember the privilege of speaking English as a native tongue. I know many brilliant acts in continental Europe or elsewhere that rarely cross borders, precisely because of the language barrier… Luckily, based on my London experience, the UK improv scene is very open to non-native speakers. This cultural and linguistic diversity is an (inter-)national treasure, and the UK needs to keep taking advantage of it, as it brings different perspectives and types of humour.
Who are your comedy inspirations?
The Showstoppers: they literally make West-End- and Broadway-quality improvised musicals, and their talent is mind-blowing. Austentatious: their incredible character work projects you in Jane Austen’s universe. The Maydays and Project2: in addition to being nurturing teachers, they are extremely daring and inventive.
“with the tech… Any unrecoverable failure means: no show” – Piotr, from Albert and A.L.Ex from HumanMachine
What’s 2017 got in store for HumanMachine?
I am waiting to hear back from several festivals around London and am working on Fringe applications. There is an extremely exciting pair of shows to be announced in London at the end of March: it will be on a major improv stage, and in collaboration with Kory Mathewson – but it is still a secret!
I am also starting to do more public engagement, in both directions: talk about improv and theatre to computer geeks and talk about artificial intelligence to the art crowd. I would love to help build bridges. So, on 3 February, and thanks to my friend Stuart Moses, I will have a couple of shows at the British Academy, as part of their Late Event on “Love, sex and marriage… with a robot?” (http://www.britac.ac.uk/events/love-sex-marriage). I have a Meetup on Creative AI in preparation… And on 7 April, I will do a show and talk about theatre at a major software developer conference in Paris. These events make me uncomfortably excited…