
    Podcast

    Calming the Roiling Waters of AI Hype, with Bjorn Rosander, Founder and CEO at Pyyne

    As Generative AI plunges into the Trough of Disillusionment filled with the roiling waves of AI hype, we wanted to bring on a guest to inject some wisdom and expertise to steer your ship of AI on to calmer waters. Holy tortured metaphors, Batman, we found him! Bjorn Rosander is Founder and CEO at software & applied AI consultancy Pyyne, and he brings to the podcast sage advice and a framework that puts measurable business value, strong use cases, and change management at the core. 

    Transcript

    Our transcripts are generated by AI. Please excuse any typos and if you have any specific questions please email info@digitalshelfinstitute.org.

    Lauren Livak Gilbert (00:00):

    Welcome to Unpacking the Digital Shelf, where industry leaders share insights, strategies, and stories to help brands win in the ever-changing world of commerce.

    Peter Crosby (00:22):

    Hey everyone, Peter Crosby here from the Digital Shelf Institute. As generative AI plunges into the trough of disillusionment, filled with the roiling waves of AI hype, we wanted to bring on a guest to inject some wisdom and experience to steer your ship of AI onto calmer waters. Holy tortured metaphors, Batman, we found him! Bjorn Rosander is founder and CEO at software and applied AI consultancy Pyyne, spelled P-Y-Y-N-E, and he brings to the podcast sage advice and a framework that puts measurable business value, strong use cases, and change management at the core. Welcome to the podcast, Bjorn. We are so happy to have you on. Thank you.

    Bjorn Rosander (01:04):

    Thanks for having me. I'm excited to be here.

    Peter Crosby (01:07):

    The past year has been a roller coaster of AI rollouts and experiments and hype, and now here we are, certainly more than a year, but with the intensity of it, believe it or not, generative AI has already sunk into Gartner's trough of disillusionment on their hype cycle. And I think everyone's a little frustrated with how do we make this actually part of the way we work, and how do we get business value out of this stuff. And so they know it's important, they know it's going to change the way they work and the way their customers shop, certainly, but they're trying to navigate around all of that and how to get to use cases that can really make a difference. And you spend your day at Pyyne talking about AI and researching what's coming next. So that's why you're here. Can you solve all that for me, or are you seeing that with your customers?

    Bjorn Rosander (02:04):

    Yes. How much time do we have?

    Peter Crosby (02:06):

    Yes, ready? Well, 34 minutes. Go.

    Bjorn Rosander (02:10):

    I mean, obviously this is a space that is moving incredibly fast, and you're absolutely right. I think companies are scrambling and working really hard to figure out where to use it, how to use it. There is definitely a notion of fear of missing out, and there are of course incredibly prominent people out there who hype this up for various reasons. And I do think there is a degree of overhype, from my perspective. There are also a lot of opportunities out there, and I think one of the key questions that we are working with our clients on is exactly that. Where should we use it? And if we go and start experimenting, how do we do that? What kind of skills are needed? I think one topic or one area that we are touching upon a lot right now is also this complexity threshold. What's the level of complexity these models can solve where we can actually allow them to run a little bit freely, if you will? So finding that, tuning that in, having reasonable expectations, and actually engineering these from scratch. I think there is definitely a wish that we'll just let AI do this, that you slap an AI on top of something, and we've seen that many times, and it simply does not work.

    Peter Crosby (03:54):

    And are you finding that your customers or clients are coming to you having already tried a bit and failed or are they coming to you from the beginning or is it just a mix of both and what do you see coming out of that?

    Bjorn Rosander (04:11):

    It's definitely a mix of both. I think there are people who are very humble in admitting that we don't know too much about this, everyone is talking about it, and everyone is telling us that we're going to run out of business if we don't do something with AI. In many cases, I think that fear is a little bit exaggerated. I do think companies should always be looking at evolving, automating, and innovating to keep up. But I think with any trend, you need to understand your own business really well, what problem you are trying to solve, and then slowly start working and building up those capabilities.

    Lauren Livak Gilbert (04:58):

    And you even said it yourself. I mean right now you can't just slap an AI on something and it only has a certain level of intelligence that it can work through, but it will learn and continue to grow. I think I hear from a lot of brands and just people who are experimenting with AI that they're concerned that they're going to try something and then it's going to change. But I think that's just the reality of where we are, right, because the AI has already advanced from when it was launched, it's going to continue to get smarter and advance, but the key there is just to be able to adapt and have the core principles. Would you agree with that as you're seeing some of these AI models and some of the changes in AI rapidly increase?

    Bjorn Rosander (05:40):

    Absolutely. I think one of the key areas to focus on is that continuous experimentation, and also the models change. A lot has happened over the last couple of years. When we started playing around with the first models, nothing really worked, and then they became better and better and better, even though I do see sort of a diminishing improvement over time. And this is why I very often come back to building up the capabilities and experimenting. And then you also need to have the guardrails. You need to start with having a human in the loop so that you know, again, the complexity level, what is the level of complexity that you can actually solve, and when do you feel confident enough that you can actually let the agent do its own thing with perhaps not non-existent oversight, but minimal oversight. There still needs to be governance, privacy concerns still need to be regarded, and things like that.

    Peter Crosby (06:56):

    I'm super interested in who in the organization you are connecting with. Who's the person who's often reaching out, and is it business, is it IT, is it data, is it C-suite? And then how does that sort of buying team, or the team that you work with, evolve over time to get the right people in the room?

    Bjorn Rosander (07:22):

    That's a very good question, and I think what we see varies a lot. We speak to everyone from the head of HR to marketing teams to technology teams. I think it depends a little bit on what kind of product or agent or automation they're trying to achieve. We do work a lot with startups, and then usually we talk to the founders, and they often have a very, very interesting idea and problem that they want to solve. With more mature companies, it can be everything from the analyst team, the marketing team, the product team, business owners. So we see a lot of variation in that. Sometimes they have defined something very, very clearly, where they actually do have a good understanding of the problem that they're trying to solve, and then you can focus on that. Or it's the more open question, which we have worked with several companies to investigate: we want to use it, we just don't know where and how. And then you can start identifying those use cases. And you can partly also educate people or companies on the strengths and the weaknesses of these models, and you can showcase examples from a similar industry to sort of exemplify what we have been able to do thus far and what we think we will be able to do.

    Lauren Livak Gilbert (09:10):

    And let's talk about some specific examples, because examples are always great and bring everything to life. So what are some use cases that you've seen with brands that have worked really well around AI?

    Bjorn Rosander (09:23):

    That's a very good question. We very often still come back to customer service, customer support, chat assistants, navigating different customer support cases, and call center reduction is one of them. I think marketing and content generation is another area where we see a lot of focus, where it's a lot around, again, content generation, blog posts, social media. And personalization, I would say, is also a very big area. And I think AI, and actually the strength of these large language models, can take personalization to an entirely new level. Knowledge management and search, guiding people into learning more about their products, learning more about their services, searching and getting that kind of support. We've seen some very interesting cases in checkout support, where customers are in the final stages of their acquisition or procurement of a specific item, where they can receive guidance and answers, interacting with a chatbot, text or more like a multimodal agent, where they can actually ask questions about return policies, payment terms, whatever it might be, and things of that nature.

    (11:24):

    Then of course there are also a lot of those kinds of productivity enhancements, like internal back office. So we do talk a lot about the fancy consumer-facing things that we can do, but I think a lot of the value is also in that less sexy stuff in terms of back office and making your team more productive on the internal operations side of things. And there, I would say, it's still a little bit of a wild west. I think a lot of people have their own personal ChatGPT, and they might be pasting in sensitive information, not really knowing, and you don't really have that kind of governance. So I think that's more for CIOs or CISOs: to actually define policies, perhaps subscribe more to enterprise models, limit that usage, and make sure that personally identifiable information isn't being shared, going into the unknown.

    Lauren Livak Gilbert (12:40):

    And Bjorn, you mentioned personalization. I think that's a really hot topic right now, and I would love to hear from you how you define personalization and how you're seeing it work with AI. Because I think sometimes when people think personalization, they think, Hey Lauren, this is what we suggest, but it's more of like, we've noticed that you purchased gluten-free things, so we're only going to recommend gluten-free things. So can you talk a bit about how you're seeing AI shape personalization and what's possible there?

    Bjorn Rosander (13:11):

    Yeah, I think I would break it down into two major components. One is how well do you actually know your customer? What kind of multi-touchpoint identifiers do you have? So what we see when we approach these challenges is that there is a lot of the traditional engineering work that needs to be done. You need to have a well-built and well-configured CRM system. You need to start picking up these signals and cues. And again, a lot of the builds that we do, a lot of the actual AI implementations, are somewhere around 80% more traditional software and data engineering, and then it's like 20% prompt engineering and agent automation. And this, again, is why slapping an AI on top of something isn't usually the solution, because you need to think about it very holistically. You still need to zoom out and think about: what kind of customer journeys do we have? What are they looking at? What are they clicking on? How are they interacting with our social media? What kind of questions did they ask our customer support two years ago when they had a complaint, as an example? And how do you find the right level of frequency? So again, it's a deep understanding of existing business processes, CRM data architecture, how you collect that, and what kind of changes you need to make.

    (15:00):

    How have you labeled your data? And how do you then get started in customizing that?

    Peter Crosby (15:09):

    That's so often true with technology implementations: they are the great revealer of the mess that lies underneath, and the AI, or whatever technology, won't solve for that. Generally it needs to be a full change management and research project to figure out what is the state of things, where do they live, how do we make it work? And I'd love to go over to the chatbot side of the house, simply because what we're starting to see with conversational commerce and things like that is that the AI wants everything that's true across the full breadth of the content. So it wants manuals, it wants reviews, it wants the full funnel of content to be able to actually power a conversation correctly. And is that the kind of thing that you are working on for people? Not only how does the chatbot do its thing, but where is the information going to come from that makes it reliable and personalized, et cetera?

    Bjorn Rosander (16:26):

    Yes, a hundred percent. And I think it usually comes down to how we define that use case and user scenario. And then, again, as opposed to just doing a little bit of prompt engineering to facilitate that interaction, you also need to think through what kind of data these chatbots have access to. And I think this can be architected in many different ways. And again, what we're seeing is there are a lot of ideas out there. Data is scattered, as always, and I think to a degree, you need to get started and you need to start testing this out. And again, finding this level of how much you can trust them and what the expectations from the consumers are.

    Peter Crosby (17:33):

    So you were just mentioning trust, and I think that is so important. And certainly for a number of companies that we've talked to, there can be a level of mistrust, enough so that they don't want to engage with AI. Have you found that that's changed over the last year? Do you feel like some of that's starting to go away? Or, when you have conversations with customers that are in that spot, what does it take to make those barriers go away? Because it's so important to start getting into this game.

    Bjorn Rosander (18:14):

    The tail end of that question, I think, is very difficult to answer. I mean, there is a degree of mistrust, and thankfully so, rightfully so, I would say. I mean, we have a field where the companies building these models, or people or stakeholders with big stakes in them, right, they're talking them up: that's the solution that will solve everything tomorrow, and if you're not jumping on the AI train, you are approaching a fast and painful death. I don't think that's going to happen. I do think that, again, you need to find out what works for you and your company. And we do see skepticism. And I think that's because people and companies have tried this out, and maybe it worked for a couple of things where they were doing their own sort of prompt engineering. But going from that tiny experiment to actually scaling something up into production, that's where a lot of people and companies have failed. And there is a degree of skepticism in the adoption of tools that are being built internally. And again, there are very clear weaknesses with these models still, and you need to be careful how you navigate that. So skepticism is healthy, but it shouldn't stop you from experimenting and finding what works for you as a company.

    Lauren Livak Gilbert (20:02):

    And you had said that a lot of them fail on the implementation side of things. What are some of the reasons why? Change management is probably one of them, but are there any other kinds of watch-outs for anyone that's implementing or testing and learning with AI, where they should be like, Ooh, let me watch out for this?

    Bjorn Rosander (20:19):

    Yeah, I'm going to try to choose my words wisely now. Every time there is a big change coming in tech, there is a new trend and everyone is jumping on that train. And there are fantastic people and companies out there, but there are also very opportunistic companies out there who might not have that deep foundational understanding of data and machine learning. We come into this wave of AI, and I mean, AI and machine learning have been around forever. The mathematical models haven't really changed; what has changed is the way we're able to build them out, the data processing that we have access to now at a reasonably affordable rate. But again, there are a lot of opportunistic companies out there who are, I believe, over-promising things. And they start with some kind of implementation, which is a little bit slapping a model on something, without that very deep and foundational understanding.

    (21:29):

    And we've seen that happen time and time again. It's like, oh, we tried this, and oh, it didn't work right off the bat. So it's just like, okay, can we take a look at it? I mean, I'll give you one example. A small e-commerce company, they tried to build a business intelligence assistant. They basically wanted a chat agent, a business intelligence assistant, for asking what products are trending, who are our most valuable customers, what margins do we have, which states do we have the highest revenue from, et cetera, et cetera, et cetera. And they had tried this with a competing company, and they basically just took a model and pointed it at a big data store, nothing else, and it failed miserably. And we took a look at that, and we immediately saw, okay, that's not how you do it. And we asked for a chance to re-architect it, and we actually came up with a product that worked surprisingly well. So that's actually another, I think, important use case that I didn't mention before: data analysis and research. I think every time we think about these models, when we are thinking about going through large amounts of information, summarizing information, combining information from different sources, that's where LLMs can be incredibly valuable.

    (23:12):

    So yeah, again, coming back to: even though you are using an LLM, one of the well-known models out there, you still need to think through the foundational engineering of how you go about that.

    Lauren Livak Gilbert (23:29):

    And speaking of foundational, so one of the things that your company has done is built out an AI adoption framework, which I think is really important specifically just to even know what questions to ask and how to think about it, which I think you've been touching on a bit. So we can't go through the whole thing because we don't have time. But can you talk a bit about some of the critical elements of that AI adoption framework where if anyone's trying something out, they can be like, oh, let me think about these couple of things, and then we can kind of point them in the direction to read more.

    Bjorn Rosander (23:59):

    Absolutely. And we saw this very early on, when these LLMs started to gain more and more popularity, and we tried to put together a guide, a framework, that we could basically walk through with our clients to make them understand: what are the strengths and weaknesses of these models? What do they have today? What are they trying to solve? And how do you figure out a reasonable sort of ROI and prioritization of those? So usually the way we approach it is that we have a couple of workshops with key stakeholders to gauge where they stand today, what level of understanding of these models they have today, and what they have done historically, and try, step by step, to build up an approach where you can, again, start small and think big.

    Peter Crosby (25:12):

    I love that. And I think what we've been finding as we talk to our brands about it is just really anchoring on the business value that you're trying to get out of that use case. And from the way you're talking about it, it feels like, and please tell me if I'm misunderstanding this, there's some value in narrowing the use case, maybe to start, just to be super clear on what phase one really looks like. And maybe, is it true that if that phase one goes well, that engenders a lot of courage and belief to then do the big heavy lifting, which is then rolling it out at scale and really making it part of the day-to-day? Does any of that make sense?

    Bjorn Rosander (26:01):

    A hundred percent. And I think there are definitely companies who are trying more of a big bang approach, and that is more of a go-big-or-go-home approach. I think we normally recommend a little bit more of a lean approach, where you start with something small, and when you roll it out, you roll it out to a limited number of customers, perhaps your most loyal customers who are progressive and more open to experimentation. But you also need to be careful; we can't use our customers as guinea pigs, right? But I think there's definitely a degree of A/B testing, multivariate testing, that should be applied when you roll out new things, because just because you have a customer chatbot doesn't mean you're actually solving the right problem. I think, from my perspective, a lot of the times when I interact with these chatbots, they do not solve the right problem. I still want to get to a human as fast as humanly possible. And also, again, there is no silver bullet. A lot of the times you think, oh, we need a customer support chatbot, and that might be so, but maybe your membership pages just simply don't have the right features, where customers could have taken a much more self-served approach.

    Lauren Livak Gilbert (27:41):

    And Bjorn, what do you think about the future of more of an agent to agent type of interaction where you have maybe a brand who has an agent that's trained on their products and their style guide, and then that's interacting with another agent who is either shopping or coming from a retailer and you're removing the human and the consumer from that loop and just having those agent to agent conversations. How do you think that brands, retailers, companies should prepare for that kind of world? And what do you think that looks like?

    Bjorn Rosander (28:19):

    Another very good question, and I can't say I have a very straightforward answer to that. I think all of those scenarios are always way more complicated than they sound. And when you start to have this multi-agent setup and remove people completely, I think you also need to ask yourself what kind of brand and what kind of brand experience you want to offer your customers. And I think for some brands, a more automated agent approach might work really well; for some more upscale luxury brands, maybe there's still a need for a human touch. I think that's a very difficult question. I haven't really seen that multi-agent vision fully materialize yet in a sort of value-adding sense.

    Lauren Livak Gilbert (29:25):

    Yeah, same, not yet. We're

    Bjorn Rosander (29:26):

    Probably going to get there. We're probably going to get there. But again, you need to crawl before you start walking and you need to walk before you start running.

    Peter Crosby (29:37):

    And I think that's a good place to close this out. I think there's so much pressure to move and figure it out. And I know companies all over are really looking at every area of the business and going, where could this actually drive our efficiency, or raise our productivity, or hopefully drive growth, and trying to figure out prioritization and all that. And it's creating a lot of, I think, anxiety and excitement. And it feels like the way to move through that, as has been true with every era of technology innovation, is: get a plan, have a framework, start with the people that can really dig in and are going to really groove with you on making this come to life. And then you see how it goes from there. And there's time for that. And I think that's part of why it's been great having you on, as reminding everyone, it's okay. Everything's going to be

    Lauren Livak Gilbert (30:40):

    Okay.

    Peter Crosby (30:40):

    Yeah, everything's going to

    Bjorn Rosander (30:41):

    Be okay. I think everything is going to be okay. And I think there are a lot of exciting opportunities, but I also encourage people to think about the opportunity costs of this AI. I don't want to call it a distraction, but there is a core business, for a lot of people or for a lot of companies, that is working. So if you all of a sudden start thinking, oh, LLMs are going to replace everyone and change everything, I am afraid, on companies' behalf, that that can lead to distraction from their core offering. I think, again, you need to combine, you need to integrate, you need to experiment, but also not lose focus of what's actually core for that brand

    Peter Crosby (31:42):

    And what's driving your business. Yeah. Well, Bjorn, thank you so much for joining us. I presume on LinkedIn folks can reach out if they're interested. You can

    Bjorn Rosander (31:52):

    Reach out on LinkedIn if you're interested in a conversation.

    Peter Crosby (31:56):

    Great. And I also wanted to point people towards that blog post that you mentioned. If you just go to Google and search for Pyyne "into the unknown," or "into the unknown with LLMs," that'll find you a great blog post that kind of lays out some of the work that you folks did around trying to articulate what's going on in the world and what to think about. So Bjorn, thank you so much for sharing this knowledge and experience with our community. We're really grateful.

    Bjorn Rosander (32:25):

    Thank you for having me, and I enjoyed the conversation very much.

    Peter Crosby (32:29):

    Thanks again to Bjorn for calming all our AI waters. If you're near London, you could be further calmed and buoyed by joining the community at the DSS Summit Europe in October, an action-packed half day of thought leadership and networking for industry leaders. Request your ticket at digitalshelfinstitute.org/DSSEurope. Thanks for being part of our community.