
    Deep Dive

    Roundtable: The Impact of AI

    An article in Vogue Business by Maghan McDowell catapults your hosts into an analysis of current and future impacts of AI in shopping.

    SHOW NOTES

    Links mentioned in the podcast: 

    Register for our D2C series: https://www.salsify.com/content/webinar-3p-online-marketplaces

    Vogue Business taxonomy article: https://www.voguebusiness.com/technology/taxonomy-is-the-new-fashion-tech-essential-the-yes

    Ben Evans article: https://www.ben-evans.com/benedictevans/2019/4/15/notes-on-ai-bias

    TRANSCRIPT

    Peter:

    Hi everyone, Peter Crosby coming to you from the Digital Shelf Institute's Cape Cod office. And as always, Rob's in the Berkshires. So, Rob, I want to get right to it. Artificial intelligence. What's up with that? You had sent an article my way, in Vogue Business, talking about The Yes, the shopping app from Julie Bornstein, who used to be at Nordstrom. The article was largely about taxonomy, but it also talked about the way artificial intelligence is being used to build their product taxonomy. And that sent me down a rabbit hole: how is artificial intelligence being used today, in what ways, what is it good for, and also what is bad for us? So let's get to it.

    Rob:

    This is just an awesome topic. I want to start with a little bit of history. I studied AI back in college, before AI became a big thing again. There was a period of time in the 1970s and 1980s which people in the space called the AI winter. There was a lot of buzz around AI when people first figured out the statistical and computational techniques that could make it possible to solve all kinds of problems, but it turned out that the computing power and the data storage capacity just weren't good enough back then. They were too expensive. And so AI took a backseat for a long, long period of time. Then in the 2000s, computing capacity got cheap enough and data storage got cheap enough, and new techniques such as deep learning came to the front, and that has allowed a Cambrian explosion of AI applications. So I think we've only very recently entered a period where people can take a broadly applicable base technology and play around with it in specific use cases, and in the last three years or so in retail, we've seen a bunch of really interesting progress, right? It's moving out of the lab and out of science fiction and into real applications that are quite interesting. And this week we've got a roundup of about a half dozen of them that I think are pretty fascinating things to pay attention to over the next couple of years. And you tell me, but I feel like fashion was one of the earliest places to get good at this, in a way.

    Peter:

    I mean, it seems fashion is a really early place where AI is being used, mostly for identifying images and relating them. Pinterest has been doing it for fashion for a while, right? Amazon introduced StyleSnap, which is basically Shazam for clothes, built into their app. It uses machine learning to match the look in a photo and find similar items for sale, that kind of stuff.

    Rob:

    Yeah, people have been doing image-based search and recognition for the past 10 years in all kinds of ways. In the consumer space, you've got Google Photos. If you use an Android phone and you've got Google Photos, Google will automatically tag every photo with the correct person's name based on face recognition and things like that. But for whatever reason, it never really took off in terms of taking a picture of somebody's shoe and then using that picture for search. It never really exploded. So the use cases that we're seeing now, I think, are more pragmatic. They fit more in line with current user behavior, and they act to optimize a shopping path that already exists rather than trying to create a new shopping path. So, you know,

    Peter:

    Introduce a new habit.

    Rob:

    Yeah. I like what you were saying: by asking the consumer to take the photo for search, you're asking them to do something different than they normally do. So in this case, the article that piqued my interest is by the great Maghan McDowell. I just absolutely love what she's doing with the technology section. There's a technology platform called Farfetch that uses image recognition and a deep knowledge graph of fashion attributes in order to deeply tag products. And the reason that's useful is, look, I'm going to screw up a bunch of these words because I am the least fashionable person I know, but take a sweater: does it have a cowl neck? Does it have a v-neck? Whatever all the different neck varieties are, and however they change over time, it's very difficult to make sure every single sweater from thousands and thousands of sweater manufacturers has the correct tagging according to the way you want to label the data for your site search algorithm and your site taxonomy. So what the Farfetch team has done is build a large knowledge graph of all these potential attributes, train the system to recognize products that exhibit those attributes, and then tag products based on what it sees in the images. If you've got a product and a bunch of image shots of it from different angles and you feed it into Farfetch, Farfetch will tag the hell out of it, and quite accurately. What that means is that when someone searches or browses on an eCommerce website, the tagging ensures they find every last product that might fit their search criteria. Search for a v-neck sweater, and every v-neck sweater will show up, not just the ones that some human has tagged. What I like about that is it's a usage of AI that plugs a gap and makes an existing consumer shopping pathway more effective.
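    The tagging flow Rob describes can be sketched roughly like this. The attribute graph, the scores, and the threshold are all invented for illustration; in a real system like the one described, the per-attribute scores would come from trained image models rather than being hard-coded.

```python
# Illustrative sketch (not the actual Farfetch system): a product image is
# scored against an attribute "knowledge graph", and every attribute value
# whose score clears a threshold becomes a tag on the product.

ATTRIBUTE_GRAPH = {
    "neckline": ["v-neck", "cowl-neck", "crew-neck"],
    "sleeve": ["long-sleeve", "short-sleeve"],
}

def tag_product(attribute_scores, threshold=0.5):
    """Multi-label tagging: keep every attribute value scoring above threshold."""
    tags = {}
    for facet, values in ATTRIBUTE_GRAPH.items():
        matched = [v for v in values if attribute_scores.get(v, 0.0) >= threshold]
        if matched:
            tags[facet] = matched
    return tags

# In a real system these scores would come from an image model; here they
# are hard-coded just to show the tagging step.
scores = {"v-neck": 0.92, "cowl-neck": 0.03, "long-sleeve": 0.81}
print(tag_product(scores))  # {'neckline': ['v-neck'], 'sleeve': ['long-sleeve']}
```

    The payoff of this structure is exactly what Rob points out: site search can filter on `neckline == "v-neck"` and surface every matching product, not just the ones a human happened to tag.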

    Peter:

    Yeah, I love that. Amit Aggarwal, The Yes's co-founder and CTO, called it mapping the DNA of a product, which I thought was really cool.

    Rob:

    Yeah. It's a good line. It's a good line.

    Peter:

    One of the things they talked about is a new eCommerce platform called Psykhe, spelled P-S-Y-K-H-E, of course. In addition to gathering all the things you would expect about products, they also assign products a personality profile, using the Big Five personality traits model. One of the traits is neuroticism, which I love; being a neurotic person, I love that somebody could match my neuroticism to a product. At the other end of that scale is stable, which means, yeah, thank you. The other four are agreeableness, extroversion, conscientiousness, and openness, and it seems like products, as well as people, can be mapped to that scale of personality traits. I don't know whether that really works, but that's what they're trying to do at Psykhe.
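    A hypothetical sketch of how trait-based matching like this might work under the hood, assuming shoppers and products are each scored on the Big Five and products are ranked by cosine similarity to the shopper. The trait numbers and product names are invented; nothing here reflects Psykhe's actual model.

```python
import math

TRAITS = ["openness", "conscientiousness", "extroversion", "agreeableness", "neuroticism"]

def cosine(a, b):
    """Cosine similarity between two trait vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def rank_products(shopper, products):
    """Return product names ordered by similarity to the shopper's trait vector."""
    return sorted(products, key=lambda name: cosine(shopper, products[name]), reverse=True)

# Invented example: a shopper high on openness and neuroticism.
shopper = [0.9, 0.4, 0.2, 0.5, 0.8]
products = {
    "bold-print shirt": [0.9, 0.3, 0.8, 0.4, 0.2],
    "classic oxford":   [0.2, 0.9, 0.3, 0.6, 0.1],
}
print(rank_products(shopper, products))  # bold-print shirt ranks first
```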

    Rob:

    Yeah, that's one of those that feels a little more like a parlor trick to me. In particular, it feels like a parlor trick because you're asking somebody to do something different. When I go and search for a shirt, I'm not searching for a shirt like, "Rob is stubborn and OCD, give me shirts for a stubborn OCD guy," you know?

    Peter:

    Oh my God, now I understand your wardrobe.

    Rob:

    You know? So right now I'm wearing a shirt with Metallica's ...And Justice for All album cover, which is one of my very favorite albums of all time. And there are a lot of different people who like Metallica, and a lot of different people who like different Metallica albums. I don't know that you can take agreeableness and come up with this shirt. Or take shopping for a phone. If you're shopping for an iPhone, you just buy whatever the iPhone is, but if you're shopping for an Android, there are a lot of different Android phones out there. What are you going to do? Say, "I need an Android phone for someone who's got OCD tendencies"? Nobody thinks to search like that. So again, this is one of those areas where it's pretty unclear to me what the application is, and any time you ask consumers to behave differently than they already behave, it's a big lift. On the flip side, if people do adopt it and it takes off en masse, then they'll have proprietary technology and they'll be worth a billion dollars. Right.

    Peter:

    Right.

    Rob:

    But it feels like a big, big swing.

    Peter:

    One of the things I saw is that Facebook recently unveiled, with one of these fun names, GrokNet. G-R-O-K-N-E-T; some geek at Facebook named that. It's essentially doing what Pinterest has already done, right? Automatically identifying and tagging items to help people sell items on the marketplace. But what I thought was particularly cool, and not surprising because it's Facebook: their database has on the order of a hundred million images, with a majority taken from Marketplace. So again, it's that kind of crowdsourced database that can really start to drive AI towards interesting places. Just in terms of scope, I think this actually is a good use.

    Rob:

    So first of all, a little bit of an aside on GrokNet. The word grok, G-R-O-K, was coined by Robert Heinlein in the book Stranger in a Strange Land, and grok basically means to really deeply understand and comprehend a thing. So the name is a reference in the lineage of that word, and you'll see nerds in computer science with that background use it. Anyway, Stranger in a Strange Land; we definitely are all strangers in a strange land today.

    Peter:

    These days, yeah

    Rob:

    On GrokNet: the reason I like this is, on Facebook Marketplace, if you're a person selling something, like you would on Craigslist, you've got to take a photo of the thing, right? And one of the pain-in-the-butt things about Craigslist is taking the photo and then writing the product title and the description and all that sort of stuff. Let's say you've got an old toaster you're going to put on there. What's actually the make? What's the model? What am I selling? So you just use generic search phrases. We know that the more specific you can be about the product you're selling, the better the product will perform from a search and discovery perspective on the marketplace, and also from a conversion perspective, because people have confidence that they know what the thing is. Minimally, what they're going to do, whether it's Craigslist or Facebook Marketplace, is take the make and model, go to Amazon, and look at reviews before they transact on Facebook Marketplace, right? So getting that specificity is key. If you're going to take the photo of the thing anyway in order to upload it and sell it on the marketplace, and Facebook can then identify the exact make and model and what the thing is, and start auto-populating that data, it's just a better shopping experience. It helps people do what they're already going to do, but do it better. And for me, again, this is the theme of this stuff: it makes a process that's already happening more efficient. Good use of AI.
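    The auto-populating step Rob describes might look roughly like this. The catalog entries, make, and model are made-up placeholders, and the image recognition itself (the part GrokNet actually does from photos) is assumed to have already produced a (make, model) guess.

```python
# Toy sketch of listing auto-population: if the recognized (make, model)
# is in a known catalog, pre-fill the listing; otherwise fall back to the
# manual flow that exists today. All catalog data here is invented.

CATALOG = {
    ("Breville", "BTA840"): {
        "category": "Toasters",
        "title": "Breville BTA840 4-Slice Toaster",
    },
}

def draft_listing(recognized_make_model, seller_photo_path):
    """Build a draft marketplace listing from an image-recognition result."""
    entry = CATALOG.get(recognized_make_model)
    if entry is None:
        # Unknown product: the seller types everything by hand, as today.
        return {"title": "", "category": "", "photo": seller_photo_path,
                "auto_filled": False}
    return {"title": entry["title"], "category": entry["category"],
            "photo": seller_photo_path, "auto_filled": True}

listing = draft_listing(("Breville", "BTA840"), "photos/old-toaster.jpg")
print(listing["title"], listing["auto_filled"])
```

    The design point is the one Rob makes: the seller was going to take the photo anyway, so recognition that pre-fills title and category removes work from a path that already exists instead of asking for a new behavior.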

    Peter:

    Yeah, and every social network is trying to make every moment shoppable; that's what they're about. So the more they can link any of the content on their site to something people could buy, they're going to try and do it, right? Yeah.

    Rob:

    I mean, the next step here: Facebook obviously uses a ton of AI for the feed, for every individual. It's going to be interesting whether the marketplace's shoppable items start showing up there as part of the regular newsfeed; that kind of mix will be interesting. Maybe Facebook will use Psykhe. Maybe Facebook will say, Rob has OCD, he needs this toaster. Exactly. I wouldn't put it past them. And particularly, you know, we've had a week of big tech in front of Congress, talking about the risks and potential overreach of these platforms. And I think that whenever you're talking about AI, you need to talk about the downside. It has a bunch of downsides. The whole facial recognition thread has received a lot of media attention over the last year; we don't need to go into it. But Benedict Evans wrote an article last year called "Notes on AI Bias," and one of the things that's interesting about AI is that what the machine does is biased by how you train it. AI at its base is a statistical technique that takes patterns and allows you to recognize patterns. For example, given enough examples of spam emails, it can guess with a pretty high degree of accuracy whether a new email it's never seen before is spam or not spam. Given enough examples of v-neck sweaters, it can identify whether a sweater coming in is v-neck or not v-neck. Given enough examples of sheep versus dogs, it can say with a high degree of confidence whether an animal is a sheep or a dog, and so on and so forth. And this is true of almost every single thing. Take self-driving cars, and this is what makes a fully self-driving car so difficult: given enough examples of a car pulling up to an intersection, it's going to make the right decision as to what to do in that intersection.
    Now, the issue with self-driving cars is that there are a billion different variations on the theme. Is there a dog in the intersection? Is there a cat in the intersection? Is there an old lady with a walker in the intersection? Did somebody just blow a red light? Where's the stop sign located? Is it night or is it day? Is it raining? Is it snowing? There's just so much variation; it's hard to get right. So the number of examples you need for all the different variations makes it hard to completely train a car for every single scenario that might possibly happen. Right?
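    The spam example Rob gives above can be shown with a toy word-count classifier. This is only the pattern-matching core of what he describes; a real filter (e.g. naive Bayes) adds probabilities, smoothing, and priors, and the training emails here are invented.

```python
from collections import Counter

# "Given enough examples of spam emails, it can guess whether a new email
# is spam": count which class a new email's words appeared in more often.
TRAIN = [
    ("spam", "win a free prize now"),
    ("spam", "free money click now"),
    ("ham",  "meeting agenda for tomorrow"),
    ("ham",  "lunch tomorrow at noon"),
]

counts = {"spam": Counter(), "ham": Counter()}
for label, text in TRAIN:
    counts[label].update(text.split())

def classify(text):
    """Label new text by which class its words were seen in more often."""
    scores = {label: sum(c[w] for w in text.split()) for label, c in counts.items()}
    return max(scores, key=scores.get)

print(classify("free prize now"))    # spam
print(classify("agenda for lunch"))  # ham
```

    The same structure carries Rob's point about variation: the classifier only knows words it has seen, just as a car model only handles intersection scenarios its training data covered.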

    Peter:

    Yeah. What Ben Evans said is, machine learning doesn't understand anything; it just looks for patterns in numbers. And if the sample data isn't representative, the output won't be either.

    Rob:

    Exactly. Garbage in, garbage out. That was great, I love that quote. Let me read another one, which is my favorite: What does this mean in practice? My favorite example is the tendency of image recognition systems to look at a photo of a grassy hill and say "sheep." Most of the pictures that are examples of sheep were taken on grassy hills, because that's where sheep tend to live, and the grass is a lot more prominent in the images than the little white fluffy things, so that's where the systems placed most of the weight. Crafting example datasets and training datasets that don't fall into this trap is really difficult. Take the infamous facial recognition example: if you build a system trying to pattern-match criminal behavior and you give it the wrong dataset (say, a dataset of people currently in jail, where the jail population happens to be more heavily weighted towards African Americans than whites), the system by accident might be racist. The system might see pictures of African Americans and just say criminal, criminal, criminal. We've actually seen this happen in police systems. And it's not because people set out to create an AI-based recognition system that's racist; it's that getting the training set correct, making sure you're accounting for all the different examples and not biasing it, is really, really difficult. So my tendency here, again, is that if you're looking at a problem that's reasonably well-defined, reasonably well-contained, and part of what somebody is already doing, that can be a really good usage of AI. Tagging products: good use of AI. Predicting when you might go out of stock, for supply chain optimization: really good use of AI. Stuff like that, that's really contained: those are where you're going to see the use cases shine.
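    The grassy-hill failure mode from the Ben Evans quote can be reproduced with a tiny toy classifier. The two "image features" and all the training numbers are invented, but they show the mechanism: when every sheep example has a grassy background, the background ends up carrying the weight.

```python
# Toy nearest-centroid classifier. Each "image" is two features:
# (grass_prominence, white_fluffy_blobs). The sheep training photos are
# all grassy hills, so grass dominates the learned sheep centroid.

def centroid(rows):
    n = len(rows)
    return tuple(sum(r[i] for r in rows) / n for i in range(len(rows[0])))

TRAIN = {
    "sheep": [(0.9, 0.3), (0.8, 0.2), (0.95, 0.25)],  # sheep shot on grass
    "dog":   [(0.1, 0.1), (0.2, 0.0), (0.15, 0.05)],  # dogs shot indoors
}
CENTROIDS = {label: centroid(rows) for label, rows in TRAIN.items()}

def classify(image):
    """Assign the label whose centroid is closest (squared distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(image, c))
    return min(CENTROIDS, key=lambda label: dist(CENTROIDS[label]))

# An empty grassy hill: lots of grass, zero fluffy blobs. Still "sheep".
print(classify((0.9, 0.0)))  # sheep
```

    Fixing this means fixing the data, not the algorithm: add sheep photos without grass (and grass photos without sheep) and the centroids separate on the feature that actually matters.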

    Peter:

    Yeah. Ben Evans said the answer is to build tools and processes to check on bias and to educate the users, to make sure people don't just do what the AI says. I love this: machine learning is much better at doing certain things than people, just as a dog is much better at finding drugs than people, but you wouldn't convict someone on a dog's evidence, and dogs are much more intelligent than any machine learning.

    Rob:

    That's a really good point. And there's another aspect of this that's also worth taking into account, which is that the AI is not going to design your category structure. It's not going to design your attribution model. It doesn't design your knowledge graph. The AI is good at tagging things. It's good at maybe identifying, "I'm not sure, this might be a new tag; where does this fit?" It's good at doing those things, but it's not good at defining the model, and defining the model is an art; it's got to evolve. That means you've got to constantly be playing a mix of where the human comes in and does the creative design, and where the AI comes in and makes something repeatable and automatable. My favorite illustration, and maybe we can end with this: in the nineties, Garry Kasparov, considered by many the greatest chess player who ever lived, lost to Deep Blue, the IBM AI-based chess machine. That was a turning point in AI, where people said, oh my God, this AI is real. Kasparov, and this is actually interesting, went on to create a new international chess competition where the player on each side could be anything. Instead of human versus human, it could be a human against a team of humans, a human versus a machine, a human and a machine as a team versus just a human, and so on and so forth. And for many years now, in that tournament structure, the human-machine teams have been utterly dominant, meaning a human and a machine working together beats just a machine, or just a human, or a team of humans, or a team of machines. I think that's really interesting, and a good illustration of where we are with AI, and it'll help guide where the good usages of it are. It's not a magic button.

    Peter:

    That makes a ton of sense, and I think we should close on that. I do need to point out one thing that involves both human and machine interaction: the folks at The Yes discovered that a new attribute had come up very recently, the Zoom top. I presume, because they call it a top, that it's women searching for a top they can wear on a Zoom call and look good in. So they've now added that to their attributes. There you have the machine finding the pattern and then the humans deciding to add it. That is the essence of the collaboration of machines and humans.

    Rob:

    A Zoom top.

    Peter:

    That's what you're wearing right now, with your album cover. All right, that is it for today; enough artificial or real intelligence for the day. We do want to give you a heads up on the next session in the strategy playbook series for D2C, with Jamie Dooley, a former eCommerce and merchandising leader for brands like Keurig Dr Pepper, Dorel Juvenile, Wayfair, and Target. He'll be doing a deep dive on how to drive sales on marketplaces, and also on how to use marketplaces as a way to test and learn your way into a D2C strategy. It's going to be a really great continuation of our series. Annie will put the link to register in our show notes in your podcast app. Thanks for joining us, and thanks as always for being part of our community.

     
