
Trans-Actor • A UX for AI
Conversational Interface Design
Team Work
Xing Lu, Lee Cody, Sche-I Wang
Current conversational interfaces imitate humans, but why? Computers have different strengths and weaknesses from ours, they have different decision-making processes, and they literally speak a different language. Computers need their own unique identity.
We believe that this new identity can help users better understand how their devices work, letting them perform more customized tasks, run more productive searches, and have a more collaborative interaction. But in order to collaborate better with our computers, we need a sort of creole, a lingua franca.
What we are proposing here is a designed language for conversing with AI-based computational actors like conversational interfaces. This new language can mediate communication between human and computational actors, making the human-computer relationship resemble talking with someone culturally different from yourself. It is just like when I talk with Lee: I always prefer using body language instead of speaking English.
We’re arguing that developing a user experience for transparency will create this new, computational identity. What we mean by transparency is that the user should be able to understand what the computer is doing and why. In this way, conversations can become more nuanced and meaningful.
To create this new, transparent language, we looked at how conversational interfaces could communicate emotions by developing a functional CUI, how they might respond nonverbally with a chatbot that only responds with GIFs, and what a natively computational form might look like through form studies.
Based on our research, we then developed four working prototypes that use sound, light, motion and behavior to communicate nonverbally.
We transformed the current notion of a conversational interface as a disembodied voice in a black box into an embodied set of networked objects. These objects communicate with the user through distinct modalities which facilitate the conversation and help the user understand subjectively what is happening with the device. We isolated each method of nonverbal communication into its own module in order to better understand its effect.
The light module shows how the computer is listening to the user. It uses the color and brightness of the LEDs to give a general impression of how the computer is receiving the input. This is analogous to how we use facial expressions in communication.
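The prototype firmware is not included in this write-up, but a minimal Arduino-style sketch can suggest how the mapping might work. The pin numbers and the 0-100 "confidence" score arriving over serial are assumptions made for illustration, not the actual prototype code.

// Minimal sketch: map a 0-100 "listening confidence" value to the color
// and brightness of a common-cathode RGB LED. Pins and the confidence
// source are placeholders.
const int RED_PIN = 9;
const int GREEN_PIN = 10;
const int BLUE_PIN = 11;

void setup() {
  pinMode(RED_PIN, OUTPUT);
  pinMode(GREEN_PIN, OUTPUT);
  pinMode(BLUE_PIN, OUTPUT);
  Serial.begin(9600);            // confidence arrives over serial in this sketch
}

void showListening(int confidence) {
  confidence = constrain(confidence, 0, 100);
  // Low confidence reads warm/red, high confidence reads cool/green,
  // and the LED gets brighter the more certain the machine is.
  int brightness = map(confidence, 0, 100, 40, 255);
  int red   = map(100 - confidence, 0, 100, 0, brightness);
  int green = map(confidence, 0, 100, 0, brightness);
  analogWrite(RED_PIN, red);
  analogWrite(GREEN_PIN, green);
  analogWrite(BLUE_PIN, 0);
}

void loop() {
  if (Serial.available() > 0) {
    int confidence = Serial.parseInt();   // e.g. a speech-recognition score
    showListening(confidence);
  }
}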
The audio module reveals the computer’s next actions. We’ve used tones to indicate that the computer is thinking, and to communicate some of the details about the task it is about to start. This is modeled after filler words like “um”, “uh” and “whaaa?”. These words help to indicate the speaker has more to say.
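As a rough illustration of such a "filler word" cue, here is a minimal Arduino-style sketch; the piezo speaker pin, the note choices, and the idea of one blip per upcoming task step are assumptions for this sketch, not the project's actual sound design.

// Minimal sketch: a short rising pair of tones plays the role of "um",
// signaling that the machine has heard the request and has more to say.
const int SPEAKER_PIN = 8;       // piezo speaker (assumed wiring)

void playThinkingCue() {
  tone(SPEAKER_PIN, 440, 120);   // A4, 120 ms: "I heard you"
  delay(160);
  tone(SPEAKER_PIN, 660, 120);   // E5, 120 ms: "I'm working on it"
  delay(160);
  noTone(SPEAKER_PIN);
}

void playTaskPreview(int steps) {
  // One blip per step of the upcoming task, hinting at its size.
  for (int i = 0; i < steps; i++) {
    tone(SPEAKER_PIN, 880, 60);
    delay(120);
  }
  noTone(SPEAKER_PIN);
}

void setup() {
  playThinkingCue();
  playTaskPreview(3);            // e.g. a three-step task
}

void loop() {}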
The behavior module shows the general status of the computer. Just as you can tell when someone is stressed or tired by their actions or the bags under their eyes, the user can tell how taxed the computer is by the speed of the fan cooling the CPU.
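The write-up describes the behavior rather than the wiring, so the following is only a sketch of one possible implementation: a small PWM fan whose speed mirrors a 0-100 load value sent from the host machine. The pin choice, the serial protocol, and the load source are assumptions.

// Minimal sketch: drive a fan so its speed mirrors how "taxed" the machine is.
const int FAN_PIN = 5;           // PWM-capable pin driving the fan transistor

void setup() {
  pinMode(FAN_PIN, OUTPUT);
  Serial.begin(9600);
}

void loop() {
  if (Serial.available() > 0) {
    int load = constrain(Serial.parseInt(), 0, 100);
    // Idle machine = slow, quiet fan; heavily loaded machine = fast, audible fan,
    // the mechanical equivalent of looking stressed.
    analogWrite(FAN_PIN, map(load, 0, 100, 30, 255));
  }
}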
The motion module indicates the status of the task the computer is currently working on. This functions as gestures and postures do in human-to-human communication, revealing the response to current actions.
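One plausible way to drive such a gesture, assuming a hobby servo and a 0-100 progress value arriving over serial (neither of which is specified in the project), is sketched below.

// Minimal sketch: a hobby servo acts as the "gesture" of the motion module,
// sweeping from 0 to 180 degrees as the current task progresses.
#include <Servo.h>

Servo gestureServo;
const int SERVO_PIN = 6;

void setup() {
  gestureServo.attach(SERVO_PIN);
  Serial.begin(9600);
}

void loop() {
  if (Serial.available() > 0) {
    int progress = constrain(Serial.parseInt(), 0, 100);
    // 0% = arm at rest, 100% = task complete; in between, the posture
    // shows how far along the machine is, like a nod mid-sentence.
    gestureServo.write(map(progress, 0, 100, 0, 180));
  }
}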

A User Story About the Conversational Interface




Interactive Prototype
As a team member, I was in charge of the physical prototyping, focusing on the visual aspect of the interaction that happens between human and machine. I used different kinds of LEDs, driving them with different code to explore how light effects communicate emotions.
The light effects act as the machine's facial expressions, enabling it to respond to the user's input in diverse ways.
Beyond visual communication, we also designed specific scenarios for specific physical interactions.
Hinting at the output-generating process helps to create a natural conversation between human and machine. We not only designed the visual language for the machines but also prototyped scenarios for how people would use a conversational user interface.






We decided to break the CUI into several parts, hoping that users could interact with the objects directly, in a more intuitive way. Similar to what we do in human-human conversation, users should be able to use body language such as tapping, caressing, and holding to communicate with the machines. We also wanted to use geometric forms to emphasize the machines' identity.
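As a sketch of how such touch gestures might be sensed, the following Arduino-style code distinguishes a tap from a hold by press duration. The touch sensor pin, the 800 ms threshold, and the digital-output sensor are all assumptions, and caressing (which would need an analog capacitive reading) is left out of this sketch.

// Minimal sketch: tell a "tap" apart from a "hold" on a touch sensor module
// that is assumed to output a digital HIGH while touched.
const int TOUCH_PIN = 2;
const unsigned long HOLD_THRESHOLD_MS = 800;

void setup() {
  pinMode(TOUCH_PIN, INPUT);
  Serial.begin(9600);
}

void loop() {
  if (digitalRead(TOUCH_PIN) == HIGH) {
    unsigned long start = millis();
    while (digitalRead(TOUCH_PIN) == HIGH) {
      // wait for the finger to lift
    }
    unsigned long duration = millis() - start;
    if (duration >= HOLD_THRESHOLD_MS) {
      Serial.println("hold");    // e.g. "stay with me", keep listening
    } else {
      Serial.println("tap");     // e.g. get the machine's attention
    }
  }
}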