    Podcast

    Boosting the Search and Content Flywheel - An AI Case Study, with Bob Bowman, Search and Digital Shelf Expert at Win the Shelf and NEEM

    While Global Head of Digital Commerce Search at Unilever, Bob Bowman, now Search and Digital Shelf Expert at Win the Shelf and NEEM, was at the forefront of introducing AI into his global search and content operations to scale their business impact. He joined the podcast to share the under-the-covers essential components and strategies required to test and learn his way to better content reaching more product pages at a lower cost.

    Transcript

    Our transcripts are generated by AI. Please excuse any typos and if you have any specific questions please email info@digitalshelfinstitute.org.

    Speaker 1 (00:04):
    Welcome, Bob to the podcast. We are so happy to have you on. Thank you.
    Speaker 2 (00:10):
    Thanks for having me.
    Speaker 1 (00:11):
    Well, you bring so much search and digital shelf expertise here, so zooming in on that: these are times when AI is really changing the game in terms of how search is going to work and how shopping conversations are going to work in the future, and you've actually built out an AI tool to help with content creation and efficiency in the organizations that you work with. So picking your brain is really what this is all about. I'd love to start by having you share what your role was at Unilever, the problems you were trying to solve, and sort of what you're up to now.
    Speaker 2 (00:54):
    Sure. So yeah, so definitely pick away. I'm really excited to share some of what we had been working on and some of the lessons learned. But in terms of my background, I was the head of search at Unilever for the global digital commerce team, so overseeing search in digital commerce across the globe. And what that meant was I particularly looked after the organic side of things for search within digital commerce, which meant then I also did most of our content guidelines as well. So creating content guidelines, and then really the goal was to use search and content to win the digital shelf. So I also oversaw that side of things, which was actual measurement. So what do we need to do to win on the digital shelf with search and content, and then how do we measure and know whether or not we won? So I kind of ran the gamut on that.
    (02:04):
    And then in order to deliver that, we had a service called Content as a Service that a teammate of mine ran, and that was so we could drive content creation within the markets across the globe. So my role in that was to create the infrastructure that the whole program and all of our content and search ran on. So we had something called the Content Suite, which was a few different tools, but one in particular I had created, kind of designed and developed, called Query. And Query was a proprietary tool, and it was where we orchestrated everything in terms of developing the content for our products. So we'd have our keywords in there, we'd write our content in there, we'd create visual assets elsewhere, but they'd still be approved in Query. Once everything flowed out of Query and was delivered to the digital shelf, all of our measurement of the digital shelf then flowed into Query.
    (03:09):
    So we had that kind of end to end view of how we were doing. And then, to make all of this run, we had infrastructure, we had agencies, but we also had a process we called the search and content flywheel. And that process allowed us to be successful. But what we did is we took every step of that process and we built it over time into Query. And what that allowed us to do is to kind of start moving towards AI and automation. So once you have something in a system, you can do AI and automation. And why we needed it was we were trying to do this all at scale. So we had at any given time about a hundred thousand products in our system that we were managing and trying to create content for. We had about 10,000 products coming in every year.
    (04:03):
    We were overseeing over 30 markets. So this is why we built the process and we built the infrastructure, but we still had more to go in terms of reaching our goals. So the AI and automation is kind of what we turned to. And then for today's topic, in terms of what we did in AI, we created something called the AI Content Generator. And what that was was, in Query, where we used to manually type in our names, our bullets, our descriptions, we were now able to create that using AI, and we were able to do it with SEO and brand voice to our global standards, incorporating topics and claims. And then we had a whole feedback and approval system. So that is in a nutshell what I did and then what we created.
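    To make that concrete, here is a minimal sketch of what a content generator along those lines could look like. Query's internals are proprietary and are not described in detail in the episode, so every name, field, and function below is an illustrative assumption rather than the actual implementation; the point is simply that the prompt carries the keywords, brand voice, and approved claims, and that every draft still lands in a human approval queue.

```python
# Hypothetical sketch only -- none of these names come from Query itself.
from dataclasses import dataclass

@dataclass
class ContentDraft:
    product_id: str
    name: str
    bullets: list[str]
    description: str
    status: str = "pending_approval"  # human feedback/approval still gates publication

def build_generation_prompt(product: dict, keywords: list[str], brand_voice: str, claims: list[str]) -> str:
    """Bake the global standards into the request itself."""
    return (
        f"Write a product name, six bullets, and a description for {product['title']}.\n"
        f"Use these SEO keywords where natural: {', '.join(keywords)}.\n"
        f"Follow this brand voice: {brand_voice}.\n"
        f"Only make these approved claims: {'; '.join(claims)}.\n"
        "Follow the global content guidelines for structure and length."
    )

def generate_draft(product, keywords, brand_voice, claims, call_llm) -> ContentDraft:
    """call_llm is whatever model endpoint you wire in; assume it returns a dict
    with 'name', 'bullets', and 'description' keys."""
    raw = call_llm(build_generation_prompt(product, keywords, brand_voice, claims))
    return ContentDraft(product_id=product["id"], name=raw["name"],
                        bullets=raw["bullets"], description=raw["description"])
```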
    Speaker 3 (04:57):
    So you really did AI before AI became a big craze, which is awesome. And that's why we want to pick your brain, because you learned from your experience. As people are building out these AI tools and strategies right now, I'm sure there are a lot of lessons learned that you got out of the entire project that you went through. So let's start with that. How did you know AI was the right solution before you even jumped in to build Query? And how would you recommend brands assess that now, with so many different AI technologies out there?
    Speaker 2 (05:30):
    Yeah, yeah. So that's tough. So first, I didn't mention that, but it's good that you mentioned that it was a little bit before it was a thing. So the timeline on this is, when we went live with what I just described, actually the first parts of it went live before ChatGPT went live. So we actually beat OpenAI to market. Obviously we were using their tools, and
    Speaker 1 (05:57):
    You're going to be a trillion-dollar company just by yourself too, I'm sure.
    Speaker 2 (06:00):
    Yeah. Hey, why not? So questions like this, I don't know that they're getting easier, but back then we were kind of making decisions around this in a vacuum. We didn't really have anything to look at in order to know what to do. So in terms of figuring out whether or not it was the best solution, I mean, the lesson we learned came from starting out not knowing the answer of whether or not it was the best solution. So when we started off, we had this idea that we wanted to use AI, and I think a lot of people have that idea now: oh, we've got to do AI somehow, somewhere, for something. I think we had a pretty good use case of, hey, we have this content that we're writing, let's use AI. I think it was logical, but we started off with the solution that I just mentioned, and that was really too much to start with, to be able to do all of our written content with AI.
    (07:01):
    So what we found out was, we went through an RFP process and tried to find a partner that could help us create this. And what we found was that there were a lot of people willing to help us create this solution, but the formula was it was going to cost a lot of money and it might or might not work, and that was just a risk that we couldn't take. So that wasn't really a proposition from our perspective. So that's the thing that made us then start to think, well, is AI the best solution? What are we trying to do? So I think that's the lesson: figure out what you are trying to do, and then determine whether or not AI is right. So in our case, what we were trying to do was scale our content production. So we wanted to reach more products and we wanted to do it for less money, and less money could mean the same amount of money, but more products.
    (08:01):
    So that was what the goal was. So what we thought about was first, well, what could we do instead that could get us there? Is there something besides AI? And what we came up with was content reuse. And we'd been reusing content, but we said, how can we use AI or automation to help us reuse content in a really easy way? We have all this content, can we reuse it? So we thought that we could do that with certainty, and we could do that at a price we could afford. So that was our alternative to AI, while we then worked on the sidelines on AI to see if we could regroup and come up with a better solution. So we worked towards content reuse. And then we also came up with this approach: there was so much risk that we heard in terms of going big with AI that we started to break down, well, what's causing the risk?
    (09:03):
    And much of it was not having the right data in the right place. So we thought, well, could we get the data in the right place in the meantime? And what's the most efficient way of doing that? And I'll talk about that a little bit more later, but the basic approach was, could we build other things that we needed to build anyways and that would get the data where it needs to go? And then the third thing that we did in trying to assess whether or not AI was right was we went forward with an AI solution, but for the easiest thing, which was what we call our perfect names. So for our product names, our perfect names, we had eight-part names, and really three of those parts were creative. So we just focused on: could we automatically create three of the eight parts with AI?
    (09:53):
    And that's kind of what we went forward with. So we took that big thing that we were aiming for, which was basically shooting for the moon, but not knowing where the moon was and how much it was going to cost to get there, and we broke it down. We followed three different paths, but they all had very little risk and the costs were known. So that's kind of how we assessed things and how we were able to go forward. And I think for anybody listening, it is really important to get over that first part, which is: why are you trying to use AI, and does it make sense? And even if it makes sense conceptually, does it make sense realistically, and what's your fastest path to success?
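    As a rough illustration of how narrow that first AI step was, here is a sketch of an eight-part name where only the creative slots go to a model. The actual eight parts of Unilever's perfect names are not spelled out in the episode, so the slot names below are invented purely for illustration.

```python
# Illustrative only: the real eight name parts are not described in the episode,
# so these slot names are made up. The point is that only the "creative" slots
# are handed to AI; the rest come straight from product data.
FIXED_SLOTS = ["brand", "product_form", "variant", "size", "count"]   # deterministic, from product data
CREATIVE_SLOTS = ["benefit", "key_feature", "use_occasion"]           # the three parts generated by AI

def assemble_perfect_name(product_data: dict, creative_parts: dict) -> str:
    parts = [product_data[slot] for slot in FIXED_SLOTS if product_data.get(slot)]
    parts += [creative_parts[slot] for slot in CREATIVE_SLOTS if creative_parts.get(slot)]
    return " ".join(parts)

# Only generate_creative_parts(product_data) would need to be AI-backed,
# which keeps the scope (and the risk) of the first step small.
```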
    Speaker 1 (10:43):
    I'm interested to hear from you who the we is, because so often these projects, you can't just do them in a silo because they have, as you were talking about, they have risk implications, they certainly have technology implications. So who were your partners? Who were the people in the room that needed to gather to move this forward across this process of consideration and implementation and measurement?
    Speaker 2 (11:10):
    So it's a really long list. Of course, it depends on,
    Speaker 1 (11:17):
    Maybe AI can shorten it for you. Give it a sec.
    Speaker 2 (11:21):
    Yeah, it depends on how you slice it and dice it. But in terms of doing this, really, who was in the room? I'm trying to think of the shortest version of the list. It would be, so my team, so we have search experts, content experts managing this, the business owners managing all this,
    Speaker 1 (11:43):
    And you reported up into where? E-commerce, or up into
    Speaker 2 (11:50):
    E-commerce, which ultimately goes up into sales.
    Speaker 1 (11:54):
    Okay.
    Speaker 2 (11:55):
    Organization.
    Speaker 1 (11:56):
    Thanks.
    Speaker 2 (11:59):
    So we, as business owners, had internal technology leads to help us manage the technology side of things. And then we had an external kind of developer partner. So that was kind of the core of it, but we were working obviously with the markets and the end user. So at the end of the day, I was a product owner and the product was Query, and we had a lot of constituents using it. So we had anywhere from agencies to marketers themselves to DCOM teams. So in terms of who's in the room, it really depends on what we were building, but we were getting feedback in terms of what the markets needed, what the users needed as well. But the core team around the AI piece was kind of the business owners, the technology leads, the external developer, and then we also included the agency that was creating the content.
    Speaker 1 (13:13):
    Is this me? Oh, we're under, oh my gosh, I'm so sorry. Okay. I was fascinated by the answer, so
    Speaker 2 (13:20):
    No problem.
    Speaker 1 (13:21):
    So you were talking a little bit earlier about the importance of being able to create content that sticks with your brand voice, that is compliant as it's made, hopefully, so that it's really adopting your tone and your voice, and it's using all of that rich source data that AI can use in order to be able to do that. So there's the importance of standards in this, and of there being as little editing work on the backend as possible. What did you learn about having the right standards in place, which then probably, I would assume, creates the right prompts so that AI can have less hallucination, more accuracy? And I'm just wondering about your process to do that.
    Speaker 2 (14:14):
    Yeah, so if you think about it, I mean, it was kind of a long process, and the process wasn't all related to AI. So we were lucky enough that we developed a process that then kind of fit seamlessly with our needs for AI once they came around. So these are things I would recommend to anybody, whether or not you're pursuing AI. So the first thing we did is we created global content standards and search standards. We had those for years. And that's why we got into the business of creating tools: to help ensure that those standards were met. When we built Query, Query was able to measure those standards and make sure that we were keeping them. We had a keyword health score, we had a content completeness score, and we had a content quality score.
    (15:16):
    So it was all baked into the system. So the original need was, how do we as a global team, and how does Unilever as a business, know whether or not we're adhering to our own standard? So do we have a standard, and are we adhering to it? And we were able to see that from a distance, and then we were able to keep agencies accountable, et cetera. We were able to instantly see, at a minimum, whether the content was hitting the minimum threshold that we were able to measure for our standard. So that was really important. So the keyword health is looking at things like volume and relevance. Completeness is looking for just what it sounds like: are there six bullets, are all of our name parts filled out, things of that nature. And then what the quality score is looking at is the obvious spelling, grammar, readability, but it's also applying our SEO standards as well.
    (16:14):
    So it's looking at those keywords: are they good, and are they in the right place for SEO? So we had that established and we had that measurement in place, and that, it turns out, was actually critical for AI. If you think about it, well, actually there's two things. First, there's the input part. So one of the risks that we found in our initial RFP was that, as I mentioned, there were gaps in content. So what we figured out was it's really important for the AI to know what good looks like, and to be able to do it in a scaled way. So to be able to do it in a scaled way, it has to be able to see other content that may be similar to what we're asking it to create and know that it's good. And that's what our content standards and our measured content were able to do.
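    To picture what "measurable" means here, a toy version of those three scores might look like the sketch below. The thresholds, weights, and field names are invented for illustration, not Unilever's actual formulas; the idea is simply that completeness, keyword health, and quality each reduce to a number the system can check automatically.

```python
# Invented thresholds and weights -- a sketch of the idea, not the real scoring.
def completeness_score(content: dict) -> float:
    """Share of required fields that are filled (e.g. six bullets, all name parts)."""
    checks = [
        len(content.get("bullets", [])) >= 6,
        bool(content.get("name_parts")) and all(content["name_parts"].values()),
        bool(content.get("description")),
    ]
    return sum(checks) / len(checks)

def keyword_health_score(keywords: list[dict]) -> float:
    """Blend volume and relevance so high-volume but irrelevant terms can't game the score."""
    if not keywords:
        return 0.0
    return sum(min(k["volume_score"], k["relevance_score"]) for k in keywords) / len(keywords)

def quality_score(content: dict, keywords: list[dict]) -> float:
    """Stand-in for spelling/grammar/readability checks plus 'keywords in the right place'."""
    title = content.get("name", "").lower()
    placement = (sum(1 for k in keywords if k["term"].lower() in title) / len(keywords)) if keywords else 0.0
    readability = content.get("readability", 0.0)   # assume an upstream 0-1 readability check
    return 0.5 * readability + 0.5 * placement
```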
    (17:12):
    So first, the standards could be directly programmed into the prompt suite, but then also the measurement part helps the AI, or helps the system, find things to give the AI as good examples. Then from an output perspective, I mean, if you think about it, you can go, especially now, to any LLM and you can probably create a pretty cool-looking set of bullets or description, et cetera, and I think at first pass it would look great. But the challenge is it always looks great unless you have a standard that you're trying to achieve, and then it starts to not look as good. So having that standard and having it measurable is really critical for the output from the AI. So being able to see instantly whether it's at least meeting the minimum standard of what we're trying to achieve. And we know that through the data.
    (18:14):
    And then the other thing that we were able to do, and again, it's because we had it in place to begin with, it was part of our process, it was part of our tooling set, was we had a feedback and approval process. So again, we're using AI, but we're still getting human feedback and approval. And in fact, it's the same human feedback and approval that we were getting before. And in fact, that person doesn't even have to know, not that we were being sneaky with it, but they don't necessarily have to know that the content is coming from AI, because they are approving it on the same standard that they've always approved it against and through the same process. So having that in place was critical as well. And again, it keeps you kind of safe from the obvious risks of just sending something out the door that was created by AI.
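    Putting those two ideas together, a sketch of the loop could look something like this: use already-scored content from the catalog as few-shot examples of what good looks like, and only route AI output into the normal human approval queue once it clears the minimum standard. The threshold and function names are assumptions for illustration.

```python
# Hypothetical sketch: scored examples in, score-gated approval out.
MIN_QUALITY = 0.8   # invented threshold

def pick_examples(related_products: list[dict], scores: dict, n: int = 3) -> list[dict]:
    """Best-scoring content from related products, to show the model what good looks like."""
    ranked = sorted(related_products, key=lambda p: scores.get(p["id"], 0.0), reverse=True)
    return [p["content"] for p in ranked[:n] if scores.get(p["id"], 0.0) >= MIN_QUALITY]

def route_for_approval(draft: dict, score: float, approval_queue: list) -> bool:
    """AI output only enters the existing human approval flow once it clears the bar."""
    if score < MIN_QUALITY:
        return False          # regenerate or flag instead of spending a reviewer's time
    approval_queue.append(draft)
    return True
```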
    Speaker 3 (19:05):
    Speaking of risks, there is a level of risk when working with AI, and there's also a risk of doing too much with AI and automating too much. So how did you work through slash convince your organization, get legal and regulatory and everybody on board to accept that risk?
    Speaker 2 (19:25):
    Well, from a legal and regulatory standpoint, the company as a whole had standards. So we simply had to go through the kind of processes that were in place to say that we were AI certified. So that part was kind of easy, because we didn't have to come up with a standard or convince somebody of the standard. We just had to show that what we were building met that standard. And then in terms of not biting off more than you can chew, I mean, I talked about that a little bit in the beginning: the original idea was to do it all, and we ended up doing it all, but it just wasn't possible from the get-go. So we had to come up with something else. I think the other thing, in terms of how this applies to people that are listening, is a lot of people are being asked to do it all, and I think the risk comes more from what you're being asked than maybe what you're proposing to do.
    (20:34):
    So it's really hard to kind of push back on that. So one of the ways that we approached it is, and it was difficult even for us, even though it was a number of years ago, it was still difficult to try to navigate that: hey, we have to do something with AI, and then, what are you doing, is this really AI? And it definitely took some pushback, but we got to where everybody wanted to be, and we had to do it in a kind of responsible way. So the way that we did it, and it's a way of thinking about it, is: number one, go back to the first lesson of thinking about what is the part that you absolutely need AI for, and let's focus on that. But then, I've kind of hinted at this along the way, but what we were able to do in terms of mitigating risk is we were able to build a number of tools that we needed anyway, and those tools individually we needed, but collectively they allowed for AI.
    (21:36):
    So we were able to go through the process of reaching AI without the huge risk of saying, we're going to do all these things and then we're going to have AI and let's see if we can do it. So what we did is we had that content scoring that I mentioned. We were using it because it had a standalone benefit. It was something we had always used; it gave us our gap identification. So we were able to look in a market: what are the gaps? We were able to create briefs and go fill those gaps. So it always had a use, it's totally worth building, but as I mentioned, for AI it then becomes your measurable inputs and your measurable outputs. Then I mentioned that we took a little bit of a left turn and we said, you know what? Let's do content reuse.
    (22:25):
    So content reuse is going to help us get towards our goal of scaled content for less, but in order to do that, we had to create a product-to-product network. And again, there wasn't a lot of risk. We were confident we could do it and we could afford it. But the product-to-product network, basically, with that said, is which product is related to which other product within Unilever. And believe it or not, the system didn't actually know that on its own. So if you had an eight-count of Dove white bar and a four-count of Dove white bar, the system didn't know those were the exact same thing, just different counts. So we had to create this network, and by having that network, now, from an AI perspective, AI knows what products are related to others. So it can start making comparisons, and as you're trying to create content, it knows what to look for that's maybe similar, and it knows if it's good or not.
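    A toy version of that product-to-product network is just an explicit link between variants that the catalog otherwise treats as unrelated. The product IDs below are made up; the point is that relatedness has to be recorded somewhere before AI can use it.

```python
# Made-up product IDs; the idea is simply that variant relationships are stored explicitly,
# because the system didn't know an 8-count and a 4-count were the same product on its own.
from collections import defaultdict

product_links = defaultdict(set)

def link_products(a: str, b: str) -> None:
    product_links[a].add(b)
    product_links[b].add(a)

link_products("dove-white-bar-8ct", "dove-white-bar-4ct")
link_products("dove-white-bar-8ct", "dove-white-bar-2ct")

def related_products(product_id: str) -> set:
    """What the AI can now look up when it needs similar products to draw on."""
    return product_links.get(product_id, set())

print(related_products("dove-white-bar-4ct"))   # {'dove-white-bar-8ct'}
```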
    (23:23):
    We have the content scoring. The next thing we had to do: I mentioned we had a keyword score, a keyword health score, and our original keyword health score was just based on volume, and as you can imagine, you could game that system by putting in keywords of high volume but low relevance, completely irrelevant keywords. So we were trying to find a way to close that gap. And the way we did that is we created a tool for keyword relevance, but that created a product-to-keyword network. So now we have a product-to-product network and a product-to-keyword network, and this now allows the AI to know not only which products are alike and which content looks good, but also which keywords to use on a given product when it's creating the content. Then finally, on the input and output side: on the input side, we created a digital content brief.
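    Before getting to the brief, a quick sketch of that product-to-keyword side: each product carries keywords with a relevance value alongside volume, so high-volume but irrelevant terms can be filtered out and the generator knows which terms actually belong on a product. The data and the relevance floor below are invented for illustration.

```python
# Invented data and threshold -- a sketch of a product-to-keyword network.
product_keywords = {
    "dove-white-bar-8ct": {"moisturizing bar soap": 0.95, "beauty bar": 0.90, "shampoo": 0.10},
}

def keywords_for(product_id: str, min_relevance: float = 0.5) -> list[str]:
    """Only keywords above a relevance floor survive, so volume alone can't game the health score."""
    kws = product_keywords.get(product_id, {})
    return [term for term, rel in sorted(kws.items(), key=lambda kv: -kv[1]) if rel >= min_relevance]

print(keywords_for("dove-white-bar-8ct"))   # ['moisturizing bar soap', 'beauty bar']
```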
    (24:21):
    So again, we'd always used the brief, we just made it digital, and it made the whole process easier, but now AI has the information it needs for innovation through the brief. And then we had the feedback and approval. Again, we had all of it before, but it was really critical. And again, to your risk mitigation, I mean, that was a critical thing in terms of selling it to the business and getting that AI certification: that we had a feedback and approval process and tool in place. And then one other thing I'll add to this, and again, I'm thinking through the lens of trying to manage expectations: it's really important, whether you're using an off-the-shelf tool or you're creating your own. If it works, it's pretty magical, but getting it to work is not magical. There's a lot of work involved.
    (25:21):
    So you had asked earlier who were the people that were in the room. Well, once this worked, anybody could look at it and be like, oh, wow, this is amazing. But to get it there, not only did we have to do all these things that we've been talking about, but every week we had a call with a search expert, a content expert, and a technology and infrastructure expert. The developer was Capgemini, I don't think I mentioned that, but Capgemini was our developer, and then we had an agency on the call. So we had the process people, the subject matter experts, and then we had the AI experts and technologists on a call every week, going through, trying to make this thing meet a standard. So the output is magical, but the input to it was a lot of expertise. And I think it's really important to paint that picture when you're getting asked or pressured into creating these results. They are magical, don't get me wrong, but there's a whole lot of work, and I don't know if it's always understood how much expertise and forethought, et cetera, go into making the magic happen.
    Speaker 1 (26:47):
    It really needs that training data, which you had to create, to give it the parameters of standards and quality, et cetera, because AI will try to give you the answer you're looking for based on whatever it has at its disposal. And so it seems like if you just left it to its own devices and you didn't have those descriptions, it would've gone somewhere to try and figure out what to do, and then you get content that doesn't meet standards. But it sounds like in some ways AI was, I dunno if it was a forcing function, because it sounds like you were down those paths already for your non-AI scaled work, but putting it in a format and in a way that AI could access it as part of its training was the critical piece to achieving trustworthy scale. Is that,
    Speaker 2 (27:44):
    Yeah. Yeah, 100%. I guess this is a specific use case, but if you're going to do what we did, doing it the way we did it is probably, I mean, at any place on that path, if you stop, you're in a really good place, if that makes sense, even if you don't achieve AI. So if you have content guidelines, you're in a good place, and a better place than a lot of other CPGs. If you then have a structure to scale that, now you're in an even better place. If you have a way to measure all that from end to end, oh my gosh, you're in a fantastic place. You're an
    Speaker 1 (28:26):
    Elite. Yeah.
    Speaker 2 (28:27):
    And then even within that, if you can just add some automation to it, you're in an amazing place. And then of course there's the AI at the end. So that's the way we tried to do it. We tried to do it in a way that was the best way at any given moment, so that even if we stopped there, we'd be in a really good position. So I think that was kind of the trick. Now, it wasn't always planned. We didn't know, we weren't on our way to AI for years, but we were always trying to set and keep our standards and scale them; that was basically the overarching direction we were taking.
    Speaker 1 (29:07):
    And at the end of the day, the magic, I would imagine, is, wow, look at those results. When you sit back now and look at it and what you were trying to achieve, which is scale at the required level of quality and accuracy, et cetera, like, we did 2x more content, what was the moment, once you stood back and you'd been running it for a bit, where you were like, this is actually magic? Do you have a stat that really leaps out at you that gave you joy at the time?
    Speaker 2 (29:49):
    Basically, the way that we measured it is we felt that we were getting to about a 40% reduction in time, and therefore cost, and about a 50% increase in quality. So that's kind of how we measured things. And then the idea behind that, as I mentioned before, is not necessarily to pocket that 40%. There are really two things that we were always trying to achieve. One, can we reach more products, right? Because oftentimes you have to prioritize, and we're prioritizing our best sellers or whatever the case is. So can we expand that? And then the other thing is that search and content flywheel that I've mentioned: the final step on that flywheel before you go back to step one is continuous improvement. And we had a lot of ways of measuring, and we conceptually had all the ways that we could continuously improve, but frankly, it was hard enough to get around the flywheel once. So to then go around a second time on the same products was really challenging for us. So that was the other thing that we were aiming for with this: could we get to more optimization?
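    As a back-of-the-envelope illustration of why the point is reach rather than pocketed savings: if the binding constraint is team time and each product now takes 40% less of it, the same hours cover roughly two-thirds more products, or free up capacity for a second pass around the flywheel. Only the 40% figure comes from the episode; the framing around it is an assumption.

```python
# Same team hours, 40% less time per product -> roughly 1.67x product coverage.
time_reduction = 0.40
coverage_multiplier = 1 / (1 - time_reduction)
print(f"{coverage_multiplier:.2f}x products for the same hours")   # ~1.67x
```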
    Speaker 3 (31:13):
    And Bob, just a question, because we're doing a lot of research on org structures of the future and such, and a big piece of it is that a lot of this is going to get automated. Do you see a world where this eliminates certain types of roles, because the automation is able to do the briefing and the content creation, and you have more of an orchestrator versus having a fully built-out content team and a retail media team and a search team? How do you see this evolving, knowing that today most organizations are not there, this is advanced, but let's look 10 years into the future. What do you see that looking like?
    Speaker 2 (31:52):
    Oh, geez, I'm not quite sure, but you
    Speaker 3 (31:58):
    Can't be
    Speaker 2 (31:58):
    Wrong, Bob, so whatever you think. No,
    Speaker 1 (32:02):
    I'm not going to be in 10 years to check on what you said, so you just go for it.
    Speaker 2 (32:06):
    Well, I think I am self-limited by the 'AI is not magic' thing that I've already said. So I understand that there is a time and a place where it does become magic, but right now, I mean, it's magical, but everything is a tool and everything has to be turned into a product, for lack of a better word. An open chat bar is not necessarily a product if you're trying to achieve a specific thing. And here, with what I just described, we were trying to achieve a specific thing, and it took years of work in order to achieve that specific thing with AI. So I think, assuming that there's not this magical development, there's going to be a lot of this kind of creation of, I mean, to my knowledge, these tools don't really exist, or there are only a few players that have tools that can do what we created.
    (33:18):
    So I think we're a long way off before this becomes scaled in the sense of a lot of people having access to it for this kind of role. That's number one. And then in terms of what you would want to do with that, I mean, our vision for it, and again, could this vision change over many years? I'm sure it can, and maybe it has to, but our vision again was to expand reach. So we had a lot that we weren't achieving with the people that we already had. So it wasn't about getting rid of people, but it was about eliminating the gap between what we were able to achieve with the people we had and what we really wanted to achieve. So I think there's a long runway for that to play out: for these tools to actually be in people's hands and to be useful and scaled, and then to reach the objectives that people have, before it starts changing structure. Now, with that said, I know from a corporate perspective there'll be a lot of pressure there, but from a realistic perspective, again, unless some magic happens someday, even if you had a magical solution right now and you dropped it in any CPG's lap, and I say magical meaning it works, not magical like godlike, but how do you get
    Speaker 3 (34:57):
    Not Harry Potter status? Got it,
    Speaker 2 (34:59):
    Got it. Right. How do you get their content into it, or not the content, the data into it? Where is the data? Who has the data? What is the data that you need, and have you been collecting it? Where have you been collecting it? There's so much that goes into it that I think this is totally achievable, and I think anybody listening can achieve everything I just described. But you have to begin a process and you have to go through all the steps. I think in the near, well, you're asking me maybe further into the future, but I think for the foreseeable future, there is plenty of runway to make this useful within the construct of what we currently have in terms of org structure, but I know there'll be pressure against that as well.
    Speaker 1 (35:50):
    Yes, of course. And so to close out, I think I'd love to bring it really back to the humans because you're creating this tool, this platform, this solution, and there's a lot of human feelings that come about when AI begins to enter somebody's work stream and it feels maybe risky. It feels maybe this is threatening my job. Don't make me figure all this out. How was the tool adopted that you created? Do you have any advice for folks that are trying to bring humans along in this adventure together?
    Speaker 2 (36:36):
    Yeah, I'm trying to think. I think maybe there's two things. So first, in terms of the AI, honestly, we didn't see anybody take it really as a threat, I guess, the way that we were creating it or the way that we were deploying it. So we didn't have that challenge, although obviously that's completely reasonable that somebody would see it that way. But I think that we had two challenges: one we were overcoming, and one we had to kind of rethink in order to overcome. So the first thing is, I did a lot of change management and a lot of adoption of tools and standards and all this stuff at Unilever, and I'd like to think we were quite successful, but this I thought would be the easiest ever, because, well, it's AI and everybody's talking about AI and everybody wants AI and, what are you doing in AI?
    (37:39):
    But it actually wasn't, because it still required you to go into a system and to do a few things, approve a few things, and click a button and make a choice. So there were still some steps, even though you didn't have to write anything and you didn't have to really know how to do any of those things. So on the one hand, we were lucky enough that we fed it through a system. We had created the flywheel, we had created Content as a Service, we had an agency partner, so we were able to get it adopted that way. That was like no problem, because we had a whole process to run it through and we had a partnership. But for the everyday user who just wanted to go in and maybe could unlock the potential of AI, we had some markets that were desperate for help and therefore it was easy.
    (38:30):
    But then we had others that, well, they weren't really creating it before, so it was a luxury for them to be able to create it on their own. So that means even though it's so much less effort, it's a little bit more effort actually for them to do it on their own. So that's where we kind of ran into an adoption issue that we just thought everybody would be so excited to be able to use it. And it was so evident how useful it was, but that wasn't the case. So there we had to pivot and we had to really work backward from, well, what step is too much to take? What click is too many clicks? So we were working on that basically looking for ways to do things in bulk. So the answer just came out as opposed to maybe going through a process to get to the answer.
    (39:26):
    So that was something we were working on. So on the agency side, we were in pretty good shape on the user adoption. Again, if somebody was creating the content already, great adoption; if we wanted somebody to get their feet wet with it, it was a little bit harder and we had to rethink the user journey. But the other thing we were doing, that was successful and that I would recommend, is something that we were always doing: because we had Query, because we had all these processes in place, we already had a whole system in place for creating adoption. So what we did is we did a number of things, but we basically tried to give adopters or high achievers as much sunshine as possible. So we had different awards. So people using Query every single month, they would receive just the tiniest token of an Amazon gift card; for the top users, we had a lot of different outlets where we would show off people's work or case studies or whatever the case may be.
    (40:42):
    We had an entire education series that was really well attended; every two weeks we'd have, I mean, it wasn't always the same topic, but we would have training around search and content. So we made education as accessible as possible. Query was filled with little short videos for how to do everything. And then we worked with our users as much as possible to create a feedback loop, to keep on bringing them things that would excite 'em. So we did all these things, and so we had a really great community, and overall we had really, really great adoption. And I would recommend that to anybody trying to deliver anything, whether it's AI or any kind of process or system: just think about how you can reward people. And we never did it through punishing or calling out folks. We always did it through highlighting the people that were the highest achievers or reaching our goals. And having those metrics, like I mentioned, being able to show which markets were green in completeness and quality and all these different things, was really helpful. And again, we didn't have to show off the red ones, but we could show off the green ones. And that usually got,
    Speaker 1 (42:10):
    And the reds know who they are.
    Speaker 2 (42:12):
    Exactly.
    Speaker 1 (42:13):
    Exactly.
    Speaker 2 (42:13):
    That's the
    Speaker 1 (42:13):
    Important part. Well, Bob, thank you so much. I mean, I think, as always, any discussion of AI goes along with a discussion of the humanity involved and how all of that works together towards very clear objectives and goals. And your case study here today has been incredibly helpful, and I think it will help our listeners start to imagine, or continue imagining, how to make it come to life in their organization. So it's really generous of you to share with us. Thank you so much.
    Speaker 2 (42:44):
    Excellent. Thank you so much for having me.
    Speaker 3 (42:46):
    Thanks, Bob.