Transcript
Our transcripts are generated by AI. Please excuse any typos and if you have any specific questions please email info@digitalshelfinstitute.org.
Lauren Livak Gilbert (00:00):
Welcome to Unpacking the Digital Shelf, where industry leaders share insights, strategies, and stories to help brands win in the ever-changing world of commerce.
Peter Crosby (00:22):
Hey everyone. Peter Crosby here from The Digital Shelf Institute. Your future incremental growth in an era of conversational commerce, machine shopping, and agentic AI will, at its foundation, require winning a whole new search competition. What are the elements of victory here, and what do you need to do to get ready? John Andrews, co-founder and CEO at Cimulate, with a C, comes to the podcast with a lot of answers, some guesses, and excitement about the questions and answers to come in this fast-moving space. Rob Gonzalez joins as co-host for a conversation on the new generation of search you won't want to miss. John, welcome to Unpacking the Digital Shelf. Rob and I are so excited to have you on the show. Thank you so much for joining us.
John Andrews (01:08):
Super excited to be here. Good to see you guys.
Peter Crosby (01:11):
So as you know, our listeners are already doing the hard work of product experiences: the content that meets retailer demands, drives performance across the digital shelf, et cetera, et cetera. But now they've got to deal with AI-powered shopping journeys, where bots are curating the shortlist, and you, John, at Cimulate with a C, have been thinking deeply about what this next generation of the commerce stack needs to look like to win in that arena. So I thought I'd start just with: what was the big idea that sparked you and your co-founder Vivek to build Cimulate?
John Andrews (01:47):
It's a good question, and where we started from, it evolved a little bit, and honestly, I fought it at first a little bit. I'm glad I didn't fight it too hard, right? We find ourselves in a pretty good position right now based on how things are moving. But let me give you a little bit of the background. Vivek is a professor at MIT and is one of the two co-chairs of the Gen AI consortium at MIT. So he's super deep within the space, very early on reading the 2017 paper from Google on transformers, and through his lab at MIT he's been deep within the space. He and I started working together about a decade ago. He and another MIT professor founded a company called Celect, also with a C, that I joined super early as CEO. We sold that company to Nike, and I then worked for Nike for about three and a half years.
(02:40):
So with the big brand with the swoosh out in Portland, Oregon. And he and I were trying to figure out, hey, what is the next big thing? And he was educating me on these transformer models and how they were evolving. And our original idea was: think about transformer models and generative AI in general as a sequence model. Think of ChatGPT. Most people have used it. If you haven't, you should. It's basically next best word: it takes a sentence and it does autocomplete, but it does it in a probabilistic manner that should blow your mind. We took that same idea and that same concept, but we looked at the customer journey and how every customer is interacting with the brand, everything that they're doing: what are they clicking on, what are they buying, what are they searching for? And taking all that information and, through that customer journey, looking not just at the next best word of a sentence, but rather the next best action, in a way that will blow your mind.
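For readers who want to make the "next best action" idea concrete: it is next-token prediction over journey events instead of words. The sketch below is a deliberately tiny illustration, not Cimulate's actual model; the event names are invented, and a simple bigram frequency count stands in for the transformer John describes.

```python
from collections import Counter, defaultdict

# Each customer journey is a sequence of event tokens, just as a sentence
# is a sequence of words. A real system would train a transformer on these
# sequences; here a bigram count stands in for it.
journeys = [
    ["search:backpack", "click:pack_a", "click:pack_b", "buy:pack_b"],
    ["search:backpack", "click:pack_a", "buy:pack_a"],
    ["search:laptop_bag", "click:pack_b", "buy:pack_b"],
    ["search:backpack", "click:pack_b", "buy:pack_b"],
]

# Count which event tends to follow which.
transitions = defaultdict(Counter)
for journey in journeys:
    for current, nxt in zip(journey, journey[1:]):
        transitions[current][nxt] += 1

def next_best_action(event: str) -> str:
    """Return the most frequently observed event after `event`."""
    options = transitions[event]
    return options.most_common(1)[0][0] if options else "no_data"

print(next_best_action("search:backpack"))  # "click:pack_a"
print(next_best_action("click:pack_b"))     # "buy:pack_b"
```

The probabilistic flavor John mentions lives in the counts: the model ranks all candidate next events by likelihood, and a production version would condition on the whole journey so far, not just the last event.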
(03:43):
And that was kind of the original start: taking that transformer model, that sequence model, and using it to optimize effectively every pixel on the screen towards that next best action. As we kept going and building out kind of a core model, the second thing we realized is that these LLMs do an amazing job of taking in a lot of data, a lot of information, and being able to understand effectively everything the internet knows about a particular product: how people talk about it, how people search for it, all the ins and outs, how people describe these products. So those two things, when you think about that customer journey and then you think about understanding everything that the internet knows about these products in a way that you can then make conversational with a customer, give us a platform now to have a much more engaging experience with the customers.
(04:47):
They're doing product discovery and commerce. And so we were very much focused initially on commerce, and where we focused the company initially is search. This is what I fought at first with Vivek, because, Rob knows, he and I spent a good deal of time at a company called Endeca, the OG in commerce search. I had spent 10 years working there, was the person left holding the bag when we were acquired into Oracle, led the Oracle Commerce product, and thought that search was a bit of a solved problem. One, it's not; it's still really challenging, very hard, and not great. And two, we're just at a point in time now where there is a much better technology stack to be able to solve this problem in a much different, much more unique way than has been possible before.
Rob Gonzalez (05:39):
Yeah, I love that you guys picked up search. I mean, we've all had this experience, especially when you're not an expert in a space and you don't really know what the space is that you're searching. The canonical example I always use is a backpack. There are so many backpack brands out there, and if you go to Google and type commuter backpack, or you go to Amazon and type commuter backpack, there are hundreds of possibilities, and you want to be able to narrow down those possibilities somehow, but where do you even begin? And so if you're using search to try to understand the space, you end up opening a hundred Chrome tabs and you're lost, and it takes hours to sift through all the information. Whereas with LLMs, you can give it a lot more context about who you are, what you're trying to accomplish, what types of things you put in the backpack, how much weight you want to carry, all that type of stuff, and it can help you narrow it down. I want to talk first about limitations on that, so I can understand it better, and then I want to go into, if you're a manufacturer, what do you do about that? So for limitations: the characteristic you gave of LLMs is that they're good at autocomplete, effectively,
(06:53):
But then you also said they're good at understanding. And for me, the understanding of it is they don't actually understand anything. They're just giving a good simulacrum of understanding, and it's basically by using autocomplete really effectively: they're saying, what people tend to say in aggregate on the internet about this thing is what I'm going to say about this thing. So it's not really an understanding of the topic, it's more just better autocomplete using the context of everything that's on the internet. Is that a distinction that's even worth making from a technical perspective, or is there a level of understanding in these systems for search that is deeper than I'm giving it credit for?
John Andrews (07:38):
Great question. And one thing I'll just say: you mentioned context, right? These LLMs can now take in more context. So rather than commuter backpack, you can say, I want a backpack, I carry a laptop, I want it to be lightweight, I want it to be snug to my body, I am five-ten, whatever. You can add a lot more context. And to bring it back to your point on the limitations, that, hey, it's just autocomplete: this is one man's perspective, but what we are seeing is how capable these models can get when tuned appropriately for a particular kind of data set. Now, some, like ChatGPT, Gemini, Claude, they're spending tens, hundreds of billions of dollars to suck up the entire internet, so they have a lot of context. We are building a model that is very specific to a certain customer, pulling in all of their data and everything the internet knows about their particular products as context.
(08:46):
I do feel like we are at a point where it's not just an average of what a few people were saying that doesn't necessarily apply. These models have gotten so good at having very distinct answers for the context that's put in, answers you can then converse with and go back and forth on to further refine what you're looking for, that I don't have a lot of concern that the model is just going to send back gibberish, nor that it is going to give you a very basic, generic answer. It can get very specific if you need it to. It requires good data to be brought into it, which I think is one of the things we've seen with the customers we're working with: making sure that they have good data about their content, about their products, that we can pull in.
Rob Gonzalez (09:50):
So, speaking of the specificity of the data and the quality of the data making a big difference in terms of how much understanding it can simulate, simulate with an S in this case, there's a funny book, actually a short story, by Andy Weir, the guy who wrote The Martian and Project Hail Mary, about an AI. And the AI in Andy Weir's story was trained on YouTube comments.
John Andrews (10:18):
Oh gosh, that's actually
Rob Gonzalez (10:19):
Funny. It's really, really quite good. And not safe for work to read. Sure.
(10:30):
So it's interesting in the search space, thinking about what AI is going to be good for, so much better than traditional keyword search, and what it might be bad for. My mental model of what it's bad for is looking up facts; it's not a database. But looking up patterns it's really good at. So if you're looking for a bunch of examples of things, it can be really, really good at that. If you're looking for the exact population of Paris, it tends to be pretty bad at those specific questions. In the case of shopping, given that, I mean, presumably when you were talking about the next best action for shopping, you're training it on the buyer's journey, and that would lean into the LLM's strengths. I would think it's a pattern-based solution. Typically when people are shopping, they're not looking for a specific, known, individual answer. They're looking for a guided journey through options. Is that model correct, if I'm thinking about how the strength of it can be brought to bear?
John Andrews (11:35):
Yeah, I think that's a perfect way of thinking about it. Now keep in mind there are still some points where you are looking for something very, very specific, and we've built capabilities into our platform that allow for matching a SKU number. If you're searching a manufacturer's or a brand's website and you just know the SKU number, you want to put in that SKU number; we can recognize that, we can understand that. But more generally, people are doing product discovery. People are putting in their particular context at that point in time for what they're looking for, and they're describing their needs generally. These LLMs do an amazing job of bringing back an assortment of products and components that actually meet that context. Listen, I mentioned I fought it at first. I am now, and I'm biased, obviously, this is the business that I'm building right now, but everybody who powers a digital commerce experience, in an app or on their website, is going to need to upgrade their infrastructure to take advantage of an LLM-based operating system.
(13:00):
We have crossed the point where, frankly, this should even be open for debate. Maybe not for all elements of the experience, but for a big part of it, these models just do a significantly better job, when built appropriately, of understanding that context and helping the customer. Think of having your best sales associate, somebody who knows everything about your products, who can empathize and relate with the customer they're talking to, having a back and forth to give them the perfect products or view that meet their particular needs. That's what these LLMs can mimic when trained appropriately.
Peter Crosby (13:44):
Yeah. So John, I remember, because Rob and I have these day jobs at a company called Salsify. I'm not sure if our listeners are aware of that, but we do.
John Andrews (13:55):
Salsify with an S, right?
Peter Crosby (13:57):
Yes. Although we're thinking of changing it now, because I love what you're doing with the C. So I remember back when I first met Rob, this is going to be 10 years ago, God, next month when we air this, and one of the things he was talking about that led to the founding of Salsify out of Endeca was just the discovery that all the data that was needed to be able to sell these things just didn't exist in any usable form. And it feels like, I was wondering, as you start engaging with these clients, you were talking about how you need the data: what kind of shape are you finding people are in compared to your Endeca days? We would hope that things have gotten better, but I'd love to know what kind of data the LLM needs, and then how are you finding the state of the world out there?
John Andrews (14:50):
Yeah, so when Rob and I started at Endeca, what people were dealing with was they had implemented, and this was the early two thousands, ERP systems where they put in a product description that could only be 24 characters long, something ridiculous. And that's how they were describing BLK 14 TPI. So it's like, okay, so it's black, it's 14 threads per inch, whatever, and you had to try to understand this. And what we wanted to do is have people then refine by attribute, this guided navigation idea. And so it was a huge issue getting data to where it needed to be to really leverage our platform. We figured that out; it's now industry standard that you can do that. Where we're getting to now is you want to have a lot of data, but the core data that describes your particular products needs to be good data.
(15:58):
And I don't know how better to describe this. You can't just have a ton of generic data, because then it just kind of gets, what's the word I'm looking for? Diluted, useless, right? When people are searching through it, you want to have your product data sheets, your FAQs, you want to have all this information that's got the right data, and this is where Salsify is the industry leader in helping brands structure all of that core data, and brands that have invested in that have a head start in terms of now being able to use these models to bring that data in. Now, the second part of this is that these models thrive off an enormous amount of data. As for what that additional data is, the way we've approached this at Cimulate, and this is where our name comes from, is we actually simulate a bunch of customer journeys to create even more synthetic customer data to add into the model-building process, so we have a very robust and rich model that can handle the probabilistic nature Rob was talking about earlier and make good decisions through that product discovery.
(17:24):
So those are the two parts: one, the customer's internal data, there's going to be some clickstream data that they have, and behavioral data, but then very good product catalog information, different content, data sheets, et cetera, all the descriptions, all the things that Salsify does extremely well. And then, adding to that, the synthetic data that gives us the amount of data we then need to build a really robust model that can handle the customer experience on their website. Does that make sense?
Rob Gonzalez (17:55):
Yeah, it does. I want to dig into the data element a little bit, because I think for our brands, the folks that we work with, a lot of what Salsify does for them is help them optimize their data for the search models of the different retailers. Walmart has a search taxonomy, and a lot of that taxonomy is attributes you don't even see on the product detail page. They're columns of data that you fill out and submit to Walmart, and behind the scenes they might drive the left-hand navigation on the Walmart site, or they might just be used in the background for keyword matching so that they've got better relevance. And so a good percentage of the data is these attributes that are used for search. And when you say LLMs are data hungry, there's an element of it which is that Walmart has chosen some subset of attributes that they've figured out for search. Amazon's has some overlap but is actually pretty different in most product categories.
(18:57):
If you're Home Depot, Home Depot has way different attributes than either Amazon or Walmart for even the same products. And so I've got to imagine, from an LLM perspective, LLMs want all of it, any possible attribute you could possibly say about a product, and then they also might want a lot of descriptive information about the product that typically you wouldn't even send to a retailer. I've got to imagine, I mean, I'm just making this up, but I'd imagine that Cimulate might benefit from having the instruction manual for a dishwasher. So what are the bounds of the data that help your system, or systems like yours, do what you do: help with product discovery and help with the search process?
John Andrews (19:43):
Yeah, so I think there are two ways to think about it. One is the building of the model, the base model itself, the core model; and then there's the data you might pull from for the conversational side of things, where you might want some additional data to incorporate into a conversational back and forth with the customer if they're asking very detailed questions about the dishwasher: how to service it, how to run it, what it means when it's blinking three times rapidly, whatever the case may be. In terms of building the model, let me take 60 seconds and explain the simulation thing that we do. I think it's informative in terms of what these LLMs thrive off of.
(20:46):
I mentioned in our conversation everything that the internet knows about a dishwasher. Let's just keep on that example: how people ask about them, talk about them, the capabilities, why they might choose different dishwashers. And these massive LLMs, right, think of ChatGPT, DeepSeek, Claude, Gemini, Llama, all these different models. These companies have spent billions upon billions upon billions of dollars sucking up the entire internet. So they have gotten all this data, and then they've created all these embeddings and built this massive 750-billion-parameter model, coming up on a trillion-parameter model, just massive, massive models that only a few companies have the resources to build. They now know a lot about everything. For the simulation that we do, we'll take dishwashers. So let's just take Home Depot there. We would take their catalog, and we would take that dishwasher, and we go to ChatGPT or Claude.
(21:52):
We don't really care which, and we say, hey, generate a persona for me of somebody who would buy this dishwasher. And it does an amazing job of coming up with a really detailed persona; these are very helpful marketing personas. We do this tens of thousands of times. What we then do is take those personas and make them shop. We use ChatGPT or some of these massive models and say, hey, here's a persona, here are five dishwashers, which one will this persona buy and why? And the why part is important. It generates a response, and we do this millions of times, so we can create an enormous amount of synthetic data. What this is doing is effectively distilling what these massive models know about how people talk about products and shop, based on everything they've sucked up, pulling out just the little piece of the world around dishwashers, and using that synthetic data as the basis to build a core model.
(23:04):
The why they chose it can come back with a bunch of information, like, hey, we want something that's quiet, we want something that's compact, we want something that doesn't melt the plastic lids that we put in but still has heated drying. Who knows? But it's going to give us a lot of this information, and we use that to build out this model. Then, when you're conversing live, some of the very rich data around the FAQ documents and the data sheets you may have about that dishwasher comes in, super important when a customer asks a particular question about that product, super valuable in that live interaction with the customer.
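The simulation loop John just walked through can be sketched in a few lines. Everything below is hypothetical: `ask_llm` is a stub (a real pipeline would call ChatGPT, Claude, or a similar API), and the product names and canned replies are invented purely to make the sketch runnable.

```python
import json
import random

def ask_llm(prompt: str) -> str:
    """Stub for a call to a large model (ChatGPT, Claude, etc.).
    The canned replies are invented; a real pipeline would hit an LLM API."""
    if prompt.startswith("Generate a persona"):
        return random.choice([
            "Busy parent, small kitchen, cares most about noise and capacity",
            "Home chef, runs the dishwasher daily, wants heavy-duty cycles",
        ])
    # Otherwise it's a "which one will this persona buy, and why?" prompt.
    return json.dumps({"choice": "QuietWash 500",
                       "why": "quietest model that fits a compact kitchen"})

catalog = ["QuietWash 500", "TurboClean Pro", "EcoCycle Slim"]

synthetic_data = []
for _ in range(3):  # a real run repeats this tens of thousands to millions of times
    persona = ask_llm("Generate a persona of somebody who would buy this dishwasher.")
    answer = json.loads(ask_llm(
        f"Here is a persona: {persona}. Here are the dishwashers: {catalog}. "
        "Which one will this persona buy, and why? Answer as JSON."
    ))
    # Each (persona, choice, why) triple becomes synthetic training data
    # for the core model.
    synthetic_data.append({"persona": persona, **answer})

print(len(synthetic_data), "synthetic shopping decisions generated")
```

The "why" field is the key output: aggregated over many simulated shoppers, it captures what the big models know about how people choose in this one category.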
Rob Gonzalez (23:51):
If you're a brand right now, just for the sake of argument, let's say that Walmart selects Cimulate to be their new AI search provider for walmart.com, and I'm a dishwasher manufacturer, a leader like Bosch, and today I've got to prepare data to send to Walmart to optimize the experience of Bosch products on walmart.com. And I don't want to use the phrase game the system, but the way that keyword search works is everyone games keyword search: you're doing keyword stuffing, you're doing all this stuff. In the LLM world, keyword stuffing and keyword search,
John Andrews (24:41):
It's not going to be as helpful.
Rob Gonzalez (24:42):
So if I'm Bosch, what am I providing to Walmart that is going to feed into Cimulate and help my products show up when they're supposed to, in places where the relevancy matters? Is it more selling information? Is it more technical details about my products? Is it more good reviews? Is there anything I can do about it? What's in my control?
John Andrews (25:06):
Yeah, great question. And I think we're learning a lot about this right now. People are talking about agentic commerce, people are talking about agent-to-agent communication, and you can think of what you're asking about here as potentially an agent-to-agent type conversation, where the context coming in isn't just quiet dishwasher, or home-use dishwasher, or commercial dishwasher, whatever. It's a much broader prompt, because Walmart is allowing their customers to ask much broader questions and provide more context within that prompt. And on the backend, let's just use the example where Walmart is asking Cimulate to come in and drive the backend, we are able to take that in and come back with a much richer response. Exactly what the structure of the data that's being fed in needs to look like?
(26:12):
We're figuring that out as we go. It is not like SEO, where we have 20 years' worth of understanding of PageRank or the Google algorithm. There's a burgeoning market around GEO or AEO, generative engine optimization or answer engine optimization. What we are seeing, and what we want to bring in, is very rich, detailed, descriptive data. It can be long-form paragraphs, it can be user review information. All of that is going to matter, but it needs to be good, not just a bunch of crap. It actually needs to be data that describes the product, describes it well, and has all the detail and information in there. Being obsessive about making everything perfectly attributed is still going to be important, but it's less important now than adding in some of this additional content and data that you want associated with your product catalog.
Peter Crosby (27:31):
So John, believe it or not, Gartner has already put generative AI in the trough of disillusionment of their hype cycle. And the reason, I think, is that there's so much hype and noise out there, and things that are not tied to actual business value, so people can't understand: is this worth the investment? What do I do with that? So I kind of want to center on that, because as I'm listening to you, I'm hearing that the opportunity you're talking about, when you can simulate at that scale, is that you can drive incremental growth for your business. And I'm saying this as a statement, but I would love for you to tell me if it's true: if you can respond to all those contexts and all those personas at scale, then you will be able to gain the customer set that's inside of that persona structure, as opposed to the more generic stuff you can only offer today because you don't have the resources and the horsepower to meet that moment. And so I'm wondering, does that resonate with you? Is that what your clients are trying to achieve with their engagements with you? Is it growth, in addition to, I'm sure, efficiency and cost savings? And then secondly, is this unlocking that future world of actual personalization because of these capabilities? How do you think about all that?
John Andrews (28:56):
Yeah. So when we started out in 2023, and you talk about the trough of disillusionment, let me just explain what we've seen over the last couple of years. In 2023, people were just doing pilots. People had a budget to do a $25K pilot on something, and they were doing a bunch of them, but it was all just testing things out. And there were a lot of companies that were like, oh yeah, we have 50 customers. Yeah, 50 people dipping their toe in the water with five different vendors; they're not going to be able to use all of them for this particular problem. In 2024, we saw people actually starting to pick a few to work with, putting budget where they saw ROI, and getting some projects going with particular vendors to start to see particular value.
(29:50):
And it was our first year of really selling. We had a great 2024. 2025, I feel like, is starting to get really interesting, because, whether we stumbled upon it or whether we saw the tea leaves early enough to make some good decisions, who knows, probably a lot of stumbling in a directionally correct direction, we have found ourselves using generative AI and large language models to solve a problem much better today. And it's search. And to your question, Peter, on the ROI and the business benefit: we are basically replacing keyword search and vector-based search on our customers' websites and providing a more robust experience, showing anywhere from 8 to 14% improvement in revenue per user within sessions where they engage with search. So really, really meaningful numbers. And so the chief digital officer loves it.
(31:02):
The chief technology officer loves it. The CFO loves it. The CEO loves it, right? So we're seeing that. The second thing we're seeing in 2025, though, is that while we're solving that traditional problem, we're also basically future-proofing the customer for what they see coming down the pike, which is this idea of agentic commerce, where you actually go to Perplexity and describe that backpack Rob was looking for, or you go to ChatGPT and describe that backpack you're looking for. And that's where the real question comes in on GEO and SEO: how can the brands get their data and their content in a way that works perfectly? And I think a big part of that is these brands need to have an agent on the backend, not just a keyword searcher, not just a website that somebody is going to search, but an MCP server or something similar.
(31:58):
MCP is the standard that seems to be winning out. And our platform is the perfect tool to sit within this MCP server, so that when Perplexity and OpenAI are coming and asking, hey, my human Rob is looking for a backpack for his commute, blah, blah, blah, we're able to bring the right result. Perplexity takes all of that in, but it's now described in a way that's not just them traditionally searching your website, so that particular backpack manufacturer can become one of the top recommendations coming from Perplexity or ChatGPT. So in 2025, you're really future-proofing towards that agentic commerce we see coming down the pike.
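For readers curious what "an agent on the backend" might look like, here is a shape sketch only. It does not use the real MCP SDK or protocol, and the catalog, tag sets, and set-overlap scoring are invented stand-ins; a production system would use an LLM or embeddings to interpret the shopper's context.

```python
# Hypothetical catalog for a backpack brand; names and tags are invented.
CATALOG = [
    {"name": "City Commuter 20L", "tags": {"commute", "laptop", "lightweight"}},
    {"name": "Trail Hauler 60L", "tags": {"hiking", "heavy_load", "frame"}},
    {"name": "Slim Daily 15L", "tags": {"commute", "slim", "laptop"}},
]

def search_products(query_context: set, top_k: int = 2) -> list:
    """The kind of tool a brand's agent might expose so that an assistant
    like Perplexity or ChatGPT can pass rich shopper context instead of
    bare keywords. Set overlap stands in for real LLM/embedding relevance."""
    scored = sorted(CATALOG,
                    key=lambda p: len(p["tags"] & query_context),
                    reverse=True)
    return [p["name"] for p in scored[:top_k]]

# An assistant relays "my human is looking for a lightweight laptop
# backpack for his commute" as a bag of context signals.
print(search_products({"commute", "laptop", "lightweight"}))  # best matches first
```

The design point is the interface, not the scoring: the brand answers a context-rich question with a structured, ranked response that the calling agent can fold into its own recommendation.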
Rob Gonzalez (32:42):
Yeah, I think there are a lot of open questions as to how that web of agents is going to be talking to each other, because, I mean, you can imagine I go to Perplexity and I start searching for my backpack; if that thing has to then federate agent queries to however many other systems, it's not going to work, right? It's going to be too slow. Yeah. I have so many questions as to how that future works out, but I guess the meta point is that if you assume there are going to be agents, and you're going to have agentic interfaces, and you're going to have to be pulling data from them, then you guys are building for that. And if you're classical keyword search or classical vector search, you're not attached to this world at all anyway. You're screwed. Yeah, totally get it. Yeah.
Peter Crosby (33:30):
And I think, John, that's why, to close out, I'd love to just have your point of view, because really what you're saying is that over the next year and a half, two years, our listeners, and also the retailers, whether they're retailers themselves or they need to interact with retailers or agents, will kind of need a foundational upgrade in their tech stack, like ripping out their current way of driving search and perhaps other things. And so I'm wondering, if our folks want to start that conversation, or add urgency to that conversation at their companies, who are the people they need to get on board to be ready for that conversation with you or whoever else they talk to
John Andrews (34:18):
about? Yeah, great question. I think where we have seen the most success is where we have a champion on the tech side and on the business side. And sometimes a chief digital officer, for example, on the business side might also own the supporting tech team and stack to drive that, but it really is a CTO or CIO and a chief digital officer, or kind of a VP of commerce, as the key people that need to be leaning in and making the decision on what the right stack is for them, to lay that foundation now to be able to provide a better experience. And you mentioned personalization a minute ago. We actually believe, when you have more context, and I keep coming back to this word context, when you have more context through these prompts that are much richer, in terms of Rob describing his backpack as opposed to commuter backpack, you can actually personalize more.
(35:31):
And we know a little bit more about Rob, but more importantly, in the session we're picking up every sense of what he's asking for. And using all of that information, not just trying to do keyword matching, to bring back the right products, you have a much better chance of being able to answer that. So the CTO and the chief digital officer are important, but I will say, I think you need to bring your chief merchant along. For these retailers, you need to bring your chief merchant, or whoever is in charge of the visual merchandising and the merchandising experience on the website, along on this journey as well. Because these models work differently than keyword search, you have more flexibility in terms of the experience you can provide. And having somebody who can embrace that, I think, is really important.
Peter Crosby (36:26):
Yeah, we recently had a podcast with the head of retail media at 84.51°, at Kroger, who's best pals now with their head of digital merchandising. And that sort of odd friendship has driven the collaboration that's needed to make all of that work as an experience for the consumer. I think that's exactly what you're talking about here: it's going to require an alliance, the old breaking down of silos, to take full advantage. And John, I lied, one more question. What's the first-mover advantage window here, in your mind? If you really want to get the benefit of this, and you missed it when search came out or when retail media came out, what is that urgency in
John Andrews (37:20):
your mind? Yeah, it's funny. I don't think about it in terms of what advantage you have as the first mover. I really do feel like things are moving so fast that, and I hate saying this, it's more like the fear factor if you don't do it. Things are moving so fast that if your infrastructure is still keyword-based, you're kind of screwed. So it's not like, hey, do it now and you're going to have a two-year advantage over your competitors. No, if you're not embracing this now, not embracing an LLM-based operating system to drive your customer experience, it's going to be painful for you over the next 18 to 24 months to then try to play catch-up and put something in place that can live and interact within this new world. So
Peter Crosby (38:25):
I guess I'd say, just to close out, the only thing that's really, really certain here is that this will continue to change rapidly, and that's change with a C. And so I would recommend that our listeners go to cimulate.ai, with a C, and just make sure you keep in touch. I'm assuming you folks have a blog where you're going to be telling a lot of these stories, so I think it's a great place to stay in touch. John, thank you so much for joining us and opening our eyes to what's going on out there, because it's happening fast, and what you're doing is super exciting.
John Andrews (39:04):
Always a pleasure. Great to chat with you guys, and it is a really exciting time right now. It's pretty invigorating, a lot of the different conversations we've been having, and people leaning into it and seeing what is possible now. So I appreciate the opportunity to talk to you guys about it.
Peter Crosby (39:22):
Thanks to John for putting the commerce search fear of God into us. We are going to be keeping on top of these strategies for the next era of commerce. So become a member at digitalshelfinstitute.org to keep up. Thanks for being part of our community.