Mike Kuniavsky

INTERVIEWER

In this podcast, Mike Kuniavsky will tell us about probe methodologies. Mike’s on the line in San Francisco. Hi Mike, can you tell us a bit about yourself?

MIKE KUNIAVSKY

Sure. So, my name is Mike Kuniavsky, I am a principal in PARC’s Innovation Services Group, and I consider myself a user-experience designer, although I’ve worn several different hats in the past. At PARC, as a principal in Innovation Services, roughly speaking, I lead the user-experience design practice, and my role is to use user-centered methods to help our clients reduce the risk of adopting novel technologies. And we have a small team of people who are essentially forward-thinking user-experience designers whose specialty is envisioning not just the interfaces, but the entire environment that a device might be used in, or an app or any kind of artifact might be used in. And then creating or simulating that environment so that it has some degree of realistic fidelity, so that we can identify whether it’s a good idea or not.

INTERVIEWER

So your specific take on user-centric design is actually thinking of products that don’t yet exist, and you’re working out whether there is a value to them?

MIKE KUNIAVSKY

I believe that user-centered design is a risk reduction strategy. That, essentially, it’s not about, kind of, identifying new greenfield markets in order to capitalize on them, it’s actually about protecting the core value chains of the company or the organization in the first place. Really understanding people’s perspectives, really understanding what the core value is that people experience from a technology, from a product, from an experience, from a brand, are all critically important.

INTERVIEWER

This seems like it breaks with the traditional waterfall model for development. What are the problems with that traditional waterfall model?

MIKE KUNIAVSKY

The problem is a brittle set of assumptions: that at the beginning of a process, you can know enough about what the world is going to be like at the end to be able to define the parameters that the technology has to meet. That is incredibly brittle, it’s incredibly risky. Sure, in the moment it seems like it’s a really good way of going forward because everybody feels happy in that room, because they feel like they’ve been heard and, you know, their idea is up there on a post-it.

But in the end, it’s actually incredibly risky. And what it does, from my perspective as someone who works in innovation, is waste an enormous amount of resources that could actually have been spent on focusing on what people really want, what they could really make use of, what they would really be willing to pay for, even if they’ve never thought of it before.

INTERVIEWER

So it sounds like the hardware world is trying to catch up with the more agile and lean world of software. Is that fair?

MIKE KUNIAVSKY

I think the models that were developed in software over the last twenty years, when it went from being a packaged good to being, essentially, a cloud-based service, those models for the development of those kinds of products have started to bleed into the development of other kinds of things. The notion that-, you know, for example, Tesla, which is really, you know, I think a great example of a brick-and-mortar company that is structured like a software company. Tesla collects a huge amount of data about how people are using their cars, and they not only adjust their production in response to that, they also adjust the experience of the cars themselves while they’re out in the field, the ones that have already been sold. And that’s very much like a software perspective.

INTERVIEWER

And if we can get specifically into probe methodology, can you give a brief description of what that means?

MIKE KUNIAVSKY

So, what we call “probes” are essentially a way to explore the value that a particular technological artifact -- whether, again, that thing is a physical thing, you know, it’s a fan in your house or, you know, it’s a tram, or whatever -- what kind of value people find in that physical thing, without building it. So, the idea is that what we want is we want to get as much signal as we possibly can, and what I mean by signal is, kind of, “understanding.” But as much signal as we possibly can about where people find value, and how to deliver that value for as little effort as possible.

INTERVIEWER

Can you take us through the process of a probe methodology? So, from the starting point, a business says, “We want you to explore this area,” what do you then do, what’s the next step, and what is the process of discarding certain ideas and keeping others? How do you know the difference?

MIKE KUNIAVSKY

So, our process for conducting probes essentially starts out by trying to step back as much as we can, and as much as our clients can, to try to understand, kind of, the core business value that they’re trying to create, and to try to understand the core capabilities of the technology they’re trying to use, if they’re trying to use a specific technology. So, specifically, what we do is we essentially start out with some research, some desk research about, kind of, “What else is out there that’s like this?” We do some workshops with our clients to try to generate a bunch of ideas, and these are very classic ideation-style workshops. But what we then do is we have a process that we borrowed from other groups, where we diverge and then converge.

What we do is we have a bunch of ideas in that free space, then we cluster them, which is a form of convergence, we try to identify what’s common among the things that we cluster. We try to discard some things, we try to prioritize them based on what we understand the business model is that the client is, kind of, willing to entertain. Then from there, we generate some more ideas. We essentially say, “Okay, what else? Now that we’ve picked this domain, let’s really, kind of, broaden out, again, from this smaller set of ideas, but within this new set of constraints that we’ve created for ourselves.” And then we converge again, and what we’re doing in these things is we’re trying to do these things very rapidly. So, these workshops are, you know-, we typically meet with our clients once a week. So, once a week for three to six months, we’re either all in a room together or we’re all having a conversation together. And in each one of those iterations, we try to have this kind of convergence-divergence quality to it.

And so once we get to a core set of things that we think are interesting, you know, a combination of business model plus user value plus technology, what we do is we then start drawing hypotheses about these, and extracting assumptions. So, for all these things-, and we try to bring some scientific rigour to this, because essentially, we’re trying to apply a small amount of the scientific method here, by essentially saying, “What is the explicit hypothesis about the customer value that is embedded in this, kind of, sketch of a product or service that we’ve put together? Now, let’s try to identify all the hypotheses and assumptions.” And then what we’ll try to do is we’ll try to create a probe for each one of those, or a combination of those hypotheses. And the probe may look nothing like what the end product might look like, but it helps us answer questions about whether our hypothesis is accurate or not.

INTERVIEWER

Can you give us an idea of what the probe might look like? So, what kind of tools would you use?

MIKE KUNIAVSKY

So, for example, we worked on a project that was around an automated recommender system for consumers in China who were shopping in a supermarket, and it would help them identify which products might fit their lives better versus other ones. So, that’s potentially an incredibly complex technological problem, but before even writing a single line of code, we were like, “Does anyone even want that?” And so what we did is we hired an ethnographer that we regularly work with in China, and he walked beside people as they went through supermarkets, and gave them suggestions. You know, we had primed him, we had given him a set of things that we thought were technologically possible so that, essentially, he’s simulating a recommender system. But the way that he’s simulating it is he’s literally walking, you know, having a conversation with someone and walking with them through a supermarket, giving them suggestions.

And what we did at the end of that is that we interviewed both them, to find out, “Okay, what if there was this thing that gave you suggestions out in the world? You know, okay, so it wasn’t this guy, but what if it was a piece of software? You know, how did that feel? Did you find that valuable? Would you have acted on any of these suggestions? You know, how would you like to have it different?” And we interviewed him, and we said, “Okay, what was people’s response to your suggestions and recommendations? What was it that they did when you said these things? You know, how did you find yourself phrasing them as this project went along?” And so we were able to do this in a week, so we were able to recruit the people, get the guy, write the discussion guide for him, and have him walk alongside people and give us feedback about people’s reactions to recommendations in a supermarket, in a week. And we did it, essentially for the cost of recruiting a dozen people and having a specialist for really, like, literally like three days. So, we were able to do this very inexpensive thing, and we got very good value out of that, because we then really understood much more about, like, what kinds of recommendations, and how recommendations in a specific environment, would be valuable. And so then we were able to iterate on that idea. So, that’s an example of a probe.

INTERVIEWER

When you talk about the value that you get from a probe, and the value extracted from the cheapness of just hiring twelve participants and an expert, that value, presumably, is very qualitative. Is interpreting the value of that the next stage for you, or do you hand that over to someone else?

MIKE KUNIAVSKY

So, our projects don’t always go like this, but conceptually, the idea is that we go through this probe process to understand, in a very inexpensive way, what, kind of, the envelope is for value for a technology, for this limited period of time. After that, we start what we call the “prototyping process”. And so essentially what we do is we take all the work that we’ve done with all these probes, and the probes might be high-fidelity, it might really literally look like an app that you have on your phone, or work like an app that you have on your phone. Or it might look like a thing that you have in your house, like an appliance that you have in your house. Like, we take all those probes at the end of the probe process and we throw them all away-, well, or at least conceptually throw them all away. Because now, we have learned a lot more about what the value is, and how to deliver it. The probe process is all about not being tied to a specific idea that we’re going to create a specific product, and really just understanding what the potential value is.

So, typically what we do is, you know, we hand over the process of creating the prototype, either to our clients or we do it ourselves. So, we’ve developed fairly high-fidelity prototypes, like I said. Literally apps that use computer vision systems and machine learning algorithms to create analyses of certain kinds of images, for example. And, you know, it’s literally an app that runs on your phone and, you know, runs up in the cloud. So, you know, sometimes we go all the way through to that. And again, you know, my, kind of, vision is that at the end of that prototyping process, we’ll know a lot more about how to deliver this very specific idea, and then we’ll throw it all away, to start again, to design the real product. Because that prototype is not going to be scalable or stable or flexible in the way that a real product is going to have to be.

INTERVIEWER

Is there any way the probe methodology can fail, or is there always something useful that comes out of it?

MIKE KUNIAVSKY

I think that we always learn something from it. The thing that is possibly the least valuable is when we get feedback that nothing that we do is really that interesting or valuable. Like, people are often very polite, and they’ll say, “Yeah, I can imagine someone using this,” but really what that means is, “No.” That means, “I would never use it, no-one I know would ever use it.” And so when we’re essentially trying a bunch of different things, and we get that as feedback, I have to admit, it’s kind of frustrating, because it tells us where to not go, which is, in itself, valuable because, you know, it certainly saves a lot of potential development effort in something that no-one is going to want, but it’s not as satisfying as having some information about where to go. And so I think that that’s the biggest kind of frustration. I mean, another one is that, you know, when our clients are dead set on a specific technology that they would really want to use, and we can’t find any way that people are actually going to want it. And that’s a slightly difficult conversation to have with our clients, to say, like, “Yeah, that sounds great on paper, but no, it’s not going to fly.”

INTERVIEWER

And if you had any advice on running a really good probe, to make sure you get the most value out of it, what would be your top tips for that?

MIKE KUNIAVSKY

I’d say the most important thing when conducting a probe is to really work on stepping back from your assumptions about what the final thing’s going to look like. And to really try to make your hypotheses clear about why you think this is going to be valuable. Either, you know, valuable to somebody as an experience, or how the value of that experience is going to be delivered. Like, “Why do you think that that’s going to be the case?” And then really, when you have those hypotheses, really try to think of good ways to test those hypotheses, to really explore them in a lightweight way. And I have to say that that’s the hardest part of the whole thing.

To me, like, making a cardboard box, and sticking a sticker on it and pretending, you know, it’s a magic sensor package, that’s easy and it’s also fun. You know, coming up with the situation where that box actually answers a question, that’s the hard part. But it’s also, I think, the most important part, and I think that would be the single piece of advice that I would give someone. It’s, like, really work on, kind of, the initial hypotheses, and the initial probe ideas. You know, it’s like when you see a badly written but very expensive film, and you’re like, “You know, the writing is the cheapest part of this, couldn’t you just have, like, spent a little more time on writing it better?” And it’s like probes are exactly that. Like, you know, spend a little more time on, like, really thinking about what it is that you’re trying to do, here.

INTERVIEWER

Thank you so much for that.