Lauren Golembiewski: Voice Interaction can help you learn a new instrument

Lauren Golembiewski in Taking Turns

Imagine that you have a hobby of playing around with technology in your house, and then you find a way to turn your enthusiasm into a business – that’s exactly what Lauren Golembiewski and her partner in life and business, Matt Buck, did. They combined their love for voice interaction and for music into creating Voxable, a Texas-based agency that helps other companies create their own voice and chat bots. Why did she think that her bot “formed a mind of its own”? And how can playing the banjo, or any other instrument, be easier with voice tech? That’s among the things we talked about.


My background is in typical UX/UI design for web and mobile applications, and my partner in business and life, Matt Buck, is in engineering and software development. Before voice really hit and got big, we were just playing around in our home with different home automation devices, turning on the TV and controlling the lights.

We got hold of an Ubi smart speaker, which was a predecessor to the Echo, and we were able to program our own voice interactions in our home – we just found it really magical. It was so fun to enable our environment with these home automation devices. When the Alexa Skills Kit came out, we saw that big companies were starting to really get behind this technology, so we quit our jobs and started Voxable because we wanted to build more and more of these devices and interactions. We got into it as hobbyists and turned it into a business.

• What parallels do you see between your previous occupation – UX/Product design – and conversation design?

There are a lot of parallels. I use a ton of my UX and UI design background in conversation design. There’s a lot you can extract from the processes of UX design and software development and bring into conversation design as you learn all this new technology. But really, the underlying process of understanding users and trying to create a piece of software that benefits them is the same.

The affordances and the tools you’re working with to build the software are different, and learning them – especially having an engineer and software developer next to me – was much easier. But I was able to take the underlying process and knowledge I already had and apply it to the conversation design process, which is similar to creating any other web or mobile application. The mode of input users are interacting with is voice, and that changes a lot of things, but it doesn’t change the fact that you’re still building a software application.



I’m really proud of our work with South by Southwest and their intelligent assistant, Abby. It was a really big project with a lot of different data points. When you’re thinking about a big tech and music conference, the data you’re working with includes names of bands and names of places and venues, and all of those can be linguistically similar, because a band’s name can be whatever the band wants it to be. Often there are names of speakers, but those people are also attached to bands or other projects.
So disambiguating all of the underlying entities – the important data points in that system – was a big challenge. And although the event didn’t take place this year and we’re not sure when it will take place again, I’m really proud of the work we’ve done with them over the past few years.

Aside from that client work, I’m also really proud of our introduction to conversation design course. A lot of the work we do with companies is helping them learn how to implement these conversational design and development processes, and we distilled that down and made it accessible to anyone who wants to get into the field through an online course. I’m really proud of that work because it was a lot more work than we anticipated – but we got it done and it’s out there.

Previously on Taking Turns 
💬 Michelle Zhou: “Humans talking to machines are brutally honest”
💬 Mary Tomasso: “Don’t just write a conversation – speak it”
💬 Michelle Parayil: “Bad copy can ruin a customer’s day”
💬 Henry Ginsburg: “Want to get in? Grab a pen and start writing”
💬 Kent Morita: “In the right context, humor can be very effective”
💬 Breakup, Pokemon and YASS!: Greg Bennett talks convo design
💬 Hillary Black: “Chatbots are like Social Media on its early days”
💬 Every Word Matters: Language lessons with Maaike Coppens
💬 Thorben Stemann: “Users asked my bot for her picture”
💬 Emiel Langeberg: “Voice Tech can be also a research tool”
💬 Rebecca Evanhoe: “Context is the most important thing for voice”

• It’s hard to miss that one of your passions is music. How can conversational AI help in the process of, for example, learning a new instrument?

I started learning banjo about a year ago; it was my very first instrument ever. I love music and I love listening to it, but I never really felt like I had what it takes to learn it – it just wasn’t something I was around as a youngster. My partner Matt has been a musician since he was very young, and he’s constantly been trying to pull me into music because it’s one of his passions. It wasn’t until he bought a banjo and I picked it up that I thought: I want to learn this.

So I’ve been taking online lessons and watching lots of YouTube – and I discovered something very important, which is: you can learn music. It is not an innate talent. It is a skill you can pick up, even in your 30s like me. As I started learning music, and especially the banjo, I realized there’s so much opportunity for voice interaction in these learning platforms. At least in banjo – and I know in guitar too, because I started picking up electric guitar a little this past year – a lot of the time you’re learning chord shapes and trying to figure out the shape your hands need to be in, and you forget what fret you’re supposed to be fretting, or what string, or what the exact position is.

There’s a lot that you can extract from UX design and software development into conversational design. The underlying processes are the same.


And I just keep thinking: I have an instrument in my hands. I don’t want to have to go pick up my phone with the chord diagram on it – I want to just be able to ask my smart speaker, which is also sitting right in front of me. So I think there’s a lot of opportunity in music education for this kind of voice interaction, because it felt really natural when I was sitting there learning an instrument. The Ultimate Guitar Tabs application does have some voice control built into the mobile app, but it’s not quite a learning experience.
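To make the idea concrete, a voice-driven chord lookup could, in its simplest form, be a lookup table behind an intent handler. This is only a sketch: the chord data, the `chord_diagram` function, and the regex-based parsing are all hypothetical stand-ins for what a real skill would get from the platform’s slot-filling.

```python
# Minimal sketch of a voice-driven chord lookup (hypothetical data and names).
import re

# Open-G banjo chord shapes as (string -> fret) maps; 5th string omitted.
BANJO_CHORDS = {
    "G": {4: 0, 3: 0, 2: 0, 1: 0},   # all open strings
    "C": {4: 2, 3: 0, 2: 1, 1: 2},
    "D7": {4: 0, 3: 2, 2: 1, 1: 0},
}

def chord_diagram(utterance: str) -> str:
    """Answer questions like 'How do I play a C chord?'"""
    match = re.search(r"\b([A-G](?:#|b)?(?:m|7|maj7)?)\b\s+chord", utterance)
    if not match or match.group(1) not in BANJO_CHORDS:
        return "Sorry, I don't know that chord yet."
    name = match.group(1)
    spoken = ", ".join(f"string {s} at fret {f}"
                       for s, f in BANJO_CHORDS[name].items())
    return f"For {name}: {spoken}."

print(chord_diagram("How do I play a C chord?"))
```

The spoken answer is deliberately phrased for ears rather than eyes – one reason chord *diagrams* don’t translate directly to voice and a learning-oriented response has to be designed, not just converted.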

So Matt and I keep devising all these side projects – which are, of course, the best kind of side projects, on top of our regular job – because we really just want to make something for ourselves: tinkering around, being excited about the things we’re engaging in, whether it’s music or home automation. We’re always interested in making our own environments voice-enabled, and music learning is just the latest thing I’m really excited about enabling a voice interaction around, because I know it would help me a lot. So, well – I want to build it.


The most important step – and I’ve seen this with the clients we’ve worked with and other people we’ve talked to – is having solid user research. User research is just the activity of understanding what your users want and need. In voice interfaces it’s especially important, because users can say anything to a voice interface: they’re given a kind of open slot to speak in their own words.

Of course, we know that users will have certain goals and will express those goals in certain ways, and technology only has so many things it can do – especially if you’re building an application on top of an existing product, where you have an underlying API that does certain things. But you really need to understand the way your customers or users will verbalize those problems and goals, as well as all of the ancillary questions they’re going to have along the way. In a music interaction, they may ask for a chord diagram, but they may also say: okay, what scale is that? They may have follow-on questions you won’t anticipate if you’re just thinking about what your application can provide. So user research is extremely vital.

If companies don’t invest in it upfront, then the first version of their voice interface ends up being a mechanism to gather that research – they’re essentially creating their own experiment. That can be tough both for the company and its expectations about performance, and for users and their expectations about what they’re supposed to get out of it. So I always caution companies: if you’re not going to find this out at the beginning, you’re going to find it out along the way. That could be your strategy, but be aware that you do need to uncover the exact words your users will bring to your voice interface.
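One hedged sketch of what “creating your own experiment” looks like in practice: route every utterance the bot fails to understand into a counter, so the most frequent real phrasings surface as candidates for new intents or training phrases. The function and data here are illustrative, not any particular platform’s API.

```python
# Sketch: treating a live bot's fallback intent as a research instrument.
from collections import Counter

unmatched = Counter()

def on_fallback(utterance: str) -> str:
    """Called whenever the NLU model matches no intent; logs the miss."""
    unmatched[utterance.lower().strip(" ?!.")] += 1
    return "Sorry, I didn't catch that."

# Simulated traffic from a first release:
for u in ["what scale is that", "What scale is that?",
          "show me the tab", "what scale is that"]:
    on_fallback(u)

# The most frequent misses show what real users actually say.
print(unmatched.most_common(2))
```

The point the sketch makes is Lauren’s: you will get this data eventually – the only question is whether you gather it deliberately upfront or let disappointed users generate it for you.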


This happens when I’m really deep in a project and I’ve been testing it, going through and trying to make sure all the different functionality is there. What ends up happening is that I’m interacting with the bot and I get something I don’t expect, or wasn’t necessarily aware was going to happen, and I feel like – oh, did this thing form a mind of its own?

It’s that uncanny valley of ‘oh, wow, I didn’t expect that’. I’m the one who wrote it and built it, but I still fool myself in the midst of interacting with these chatbots, because you do lose the sense that you’re interacting with a flat, stale piece of technology – bots and voice interfaces really feel alive.

It even fools me in the midst of testing – I’m like, ‘did we program that in, or where was that variation of the response…’ because I wasn’t expecting it. Inevitably it is there somewhere; it wasn’t the bot having a mind of its own. But it does happen to me frequently when I’m interacting with the bots I’ve built, and it’s funny – and I see it with users as well. We’re all creating this very alive-feeling technology, which has great power and great responsibility attached to it, so as not to exploit the type of connection that I, and users, find ourselves in. But it does happen to me, and I’m always surprised.

I interact with the bot and I get something that I don’t expect or wasn’t necessarily aware was going to happen, and I feel like – oh, did this thing form a mind of its own?



If you have roots in typical web design or mobile app design, start there as your foundation – don’t try to undo everything you’ve learned; it can provide a really strong base. Like I said, user research is really important: understanding your users and forming use cases in a way that makes sense to the developers and the rest of the team. And applying a design process – really carefully considering the way interactions will play out inside the voice or chat interface – maps closely onto a typical design process.

If you don’t come from a design background, that’s okay. I would say start to understand and learn as much as you can about the existing technology. A lot of the platforms like Alexa or Google Assistant have documentation that tells you what those platforms can and can’t do, and while that documentation is really extensive and you can get lost in it, many of them have created design-focused areas of their docs that help designers, specifically, understand the technology. And if you can work with, become friends with, or marry a developer in this space, that close collaboration with someone who can level-set you on what is possible – and not only what is possible, but how easy it is to achieve – is really important.

And if you don’t have a developer, if you don’t have that kind of connection to someone who’s building, you can use tools on the market to help you understand what is available on those platforms. Voiceflow has a really good integration with Alexa, and a lot of the tools inside Voiceflow are oriented around creating an Alexa skill; Botsociety has a lot of similar tools, and I think they’re a little more geared towards Google Assistant. Those tools can help you create prototypes and start to work with the underlying technology – they give you a kind of GUI over top of all the things Alexa or Google can do. Just try to get your feet wet: get in there, play around, and create something.

And then, like I do, I would say to everyone: try to find the use cases that really speak to you, that you really want to see happen in a voice interface. Even if it’s as simple as controlling some device that you have, really try to make something like that work. It seems simple – it’s one command – but then you realize: oh, well, what if I say this? What if my internet connection is poor? What if the device doesn’t respond? There are all these different situations that can happen, even in one interaction.
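The branching hidden inside that “one simple command” can be sketched directly. Everything here is hypothetical – the device API, the function names, the response copy – the point is only that a single voice command needs a distinct spoken answer for each way it can fail:

```python
# Sketch: one voice command ("turn on the light") and its failure modes.
def handle_turn_on_light(device, connected: bool) -> str:
    if not connected:
        return "I can't reach your home network right now."
    if device is None:
        return "I couldn't find a light by that name."
    try:
        ok = device.turn_on(timeout_s=2.0)
    except TimeoutError:
        return "The light isn't responding. Is it plugged in?"
    return "Okay, light is on." if ok else "Something went wrong turning on the light."

class FakeLight:
    """Stand-in for a real smart-home device client."""
    def turn_on(self, timeout_s):
        return True

print(handle_turn_on_light(FakeLight(), connected=True))
print(handle_turn_on_light(None, connected=True))
print(handle_turn_on_light(FakeLight(), connected=False))
```

Each branch is a separate piece of conversation design: the words a user hears when the network is down are as much part of the interface as the happy path.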

And really, breaking that down, getting something to work, and then showing it to other people will exhibit all of the nuance within a conversational interaction. So my advice, generally: if you have underlying knowledge, use that as your foundation; if you don’t, go out there and try to get it, and try to understand the technology as best you can – what is possible and what will work as a voice interaction.

• Tell us more about Voxable.

Voxable is a conversational design and development agency. We help companies build their conversational interfaces – chatbots and voice interfaces. We also do a lot of workshops teaching companies the underlying process for conversational design and development. Whether they have a design team and a development team, copywriters and a development team, or just a development team, we help them understand the things they need to have in place: the documentation, and how to create a natural language understanding model, which is probably the core piece of technology that is most difficult for teams to assess and understand as they add conversational design and development to their wheelhouse.
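For readers new to the term: a natural language understanding (NLU) model maps a free-form utterance to an intent (and, in real systems, entities). Production models are trained classifiers; the toy keyword-overlap scorer below only illustrates the shape of the problem, and every intent name in it is made up:

```python
# Deliberately tiny illustration of what an NLU model does:
# map an utterance to the best-scoring intent, or to a fallback.
INTENTS = {
    "get_chord": {"chord", "play", "finger", "fret"},
    "get_scale": {"scale", "notes", "key"},
    "get_schedule": {"when", "schedule", "showtime"},
}

def classify(utterance: str) -> str:
    words = set(utterance.lower().replace("?", "").split())
    # Score each intent by how many of its keywords appear in the utterance.
    scores = {name: len(words & kws) for name, kws in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "fallback"

print(classify("What scale is that?"))
print(classify("How do I play a G chord"))
print(classify("tell me a joke"))
```

What makes real NLU hard is exactly what the interview describes: linguistically similar entities (band names, speaker names, venues) and the endless variety of phrasings users bring – which is why teams need training data and evaluation, not keyword lists.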

Then we built an online course so anyone can learn the same skills we teach internally to companies, but self-paced, as a solo learner. And we’re working on some really exciting things as well: Voxable is building a conversation design tool. We think there’s a big gap in the marketplace for designers to have a tool that supports the conversation design process specifically – we’re creating, like, the Figma for conversation design. We’ll be talking more and more about that this fall and winter, but it’s something I’m really excited about because, again, it’s a gap I feel myself.

I don’t really have any tool that helps me write sample scripts and build conversation flows, other than smart whiteboards. So we’re going beyond the whiteboard and the Google Docs to something that really aids a designer in the conversational design process – I think tooling is a really important piece of the puzzle for any designer, and especially for developers. It’s a tool designers can actually use to support their process, focused on the exploration of a conversational design and unencumbered by implementation, so it’s agnostic of any platform. That’s what we’re building, and it’s called Voxable Studio. We’ll be talking about it more and more as the year goes on.

Next week, we’ll have episode 3 of Coming To Terms with AI! In the meantime – subscribe to our YouTube channel | Join our Discord community | Sign up for our newsletter | Follow us on Facebook, LinkedIn, Instagram or Twitter
