Lanny Roytburg: Hey, everybody. Thanks for joining the webinar today. We'll give everybody another 30 seconds or so as people are dialing in on this Monday morning, and then we'll get started.
All right, well, let's go ahead and get started. Thank you all for joining the webinar today. The topic is bridging the data-to-decision divide. With me, I have Oleg Royz from DataArt and Jan Mehmet from retail. We'll do some introductions here in a little bit.
But today's discussion is really going to be around the deployment of AI agents in the enterprise. We'll set the agenda here and talk about why we feel that decisions are where AI can create and drive value for organizations. But we'll also talk about the role of getting the right data ecosystem in place. What does it actually take to set up your data assets and infrastructure to enable this future in the enterprise and drive those decisions?
If you have any questions during the webinar, please feel free to drop them in the actual Q&A box. It should be on your screen here. I will review those questions throughout the presentation and pause at the appropriate times to answer any questions that you might have. We're also going to save about 5 to 10 minutes towards the end to be able to answer any questions and have a little bit more of an open Q&A process, as well.
We've got kind of a broad range of different attendees today. And so we're not going to get into too much detail around different data environments, infrastructure, etc. It's a bit more of a higher-level discussion, but I’m happy to jump on any calls afterwards and answer any specific questions that you might have.
So, let's get to some of the introductions here. I'll be the main moderator for today. My name is Lanny Roytburg. I'm one of the co-founders of Cloverpop, a leading decision intelligence company. I've spent the last 15 years on the growth strategy side within management consulting and driving data analytics for commercial areas. With me as well, Oleg, I think we'll have you do a little bit of an intro.
Oleg Royz: Thank you, Lanny. Glad to be here. I'm here in Chicago, where the summer is at its peak. For the past 25 years, I've been working on delivering data- and AI-driven platforms and capabilities, primarily in the retail and CPG industries. Today, I lead the manufacturing, distribution, and retail vertical at DataArt, a global software and engineering firm with a 5,000-plus global workforce. And we are delivering breakthrough data, analytics, and AI platforms to the world's most demanding organizations.
Jan Mehmet: Yeah. Hi, everyone. Thanks, Lanny. Nice to be here. I've been in retail for 25 years, and digital for just about the same amount of time, so I've seen all things retail over the course of time. Most recently, I'm working with a company called End, a quarter-of-a-billion-dollar luxury streetwear multi-brand retailer. But historically, I've worked with numerous brands. Until last year, I was with the Capri organization, which owns Michael Kors, Versace, and Jimmy Choo, a business I'd worked with for around ten years. So looking forward to our discussion today.
Lanny Roytburg: Great. So let's get started. And I want to start at a place here where we'll say we're going to be saying the word decisions quite often. The reason for that is that organizations have to make about 10 million decisions per year. Everything from the very strategic (where should we open up new retail stores), to the more operational (how do we actually change pricing), to the very tactical (how do we set the various markdowns and those types of elements). And these millions of decisions have to be made in a very volatile and uncertain environment. Again, we'll talk a little bit more about that.
Jan Mehmet: Yeah. I mean, the reality is that retail is undergoing a pretty profound structural transformation, driven by macro and industry-specific shifts. Geopolitical uncertainty is clearly impacting businesses, disrupting supply chains and markets. Then there's margin compression and cost volatility, which is forcing businesses to do more with less. But it's not really just about efficiency; that's no longer optional. There's more to it than that. You also have things like sustainability, ESG pressures, ethical sourcing, carbon footprint reduction, and circular economy models that are becoming strategic imperatives. So it's not so much about just responding to compliance measures; it's about responding to the needs of the organization.
Of course, modern retail customers are connected, empowered, and increasingly unforgiving. Value is no longer just about price but about relevance, speed, convenience, and trust. And then you've got this hyper-consciousness that's developing, where Gen Z and millennials are increasingly selective about what they buy. So there's a lot going on.
Within the business itself, a number of factors need to be taken into account. The speed of innovation, data, and workforce dynamics are increasingly significant topics in businesses, and they're often compromised. So agility really is an essential factor for all organizations now. Cross-functional working is often quite limited, and sadly, there are still many more silos than I think we'd like. And then there are workforce dynamics: the retail knowledge drain that businesses experience is really prevalent now. That institutional knowledge and learning that disappears out of the door when you're not able to retain people effectively really compromises businesses.
With so much going on, there's a real dependency on leveraging solutions that can support businesses' needs through this moment and beyond. And I think that's where our discussion today really plays out, because AI and the automation maturity that we're going to be talking about are really going to be the foundation for supporting businesses in the future.
Lanny Roytburg: And if we go back just a couple of years and just set ourselves up in November of '22, when OpenAI launched ChatGPT publicly, all of a sudden, there was a massive amount of interest in the topic. Every CEO, every CXO was going on their investor calls, trying to embed the word AI as much as they possibly could. Large companies started announcing big partnerships to really drive generative AI pilots within their organization, starting to rethink what AI can do for them.
Now, the expectations for the technology are absolutely massive: automating up to one-third of our work, increasing GDP by approximately 7% (the equivalent of adding an entire G7 nation to the globe), and really improving productivity by about 40%. And there's no doubt that generative AI will have a transformative impact on working methods. Less time spent on retrieving data, connecting different data sources, doing all that manual labor, and hopefully getting us to a point where we're focused more on the strategic elements, value capture, and those types of things.
But if you look at those initial deployments of AI, things like virtual assistants, customer chatbots, HR support, it was a "let a thousand flowers bloom" type of strategy. What executives found, however, was that the first wave of these deployments created insufficient value, lacked a strategic focus, and came without much time spent building out the skill sets or changing the processes to incorporate these capabilities.
And so that's okay. It's a new technology people are playing around with. They're trying to test it. However, as we look to the future, organizations are really starting to rethink that overall strategy. Playtime is over. It's time to get practical. We see many organizations starting to pull back on their overall disparate use cases. They're trying to be much more strategic about where and how they deploy it.
And if we take a couple of lessons from those first two or three years, that first wave, we can think about three key lessons learned. Number one was hype over value. Organizations launched many scattered AI pilots, creating lots of really cool demos that did not serve a clear business focus or business priority. Number two: models without a mindset shift. Lots of sophisticated models and capabilities have been coming out, but sufficient focus on upskilling the talent and adjusting the processes did not happen. And number three: information without insight. We can generate endless information, but what value does it create unless we're actually deriving insight and acting upon it?
So, as we look to the future, we need to understand those three key lessons: AI creates real value when it's anchored to business goals, designed for how people work, and built to turn information into action. And what better place to deploy this type of thinking than the foundational building block of an organization, which is the decision?
So, if we step back, let's think about what actually happens and what needs to occur for a decision to be made. We need to be able to integrate the organization's information, data analytics, human experience and expertise, and business context in the process. Those three elements have to come together for a decision to be made and action to be taken for value to be created.
In fact, Bain did a great study a few years ago, and they found that the quality of decision-making within an organization has a 95% correlation with its financial performance. So, if we were trying to deploy AI somewhere, why not focus on the actual decision-making process of the organization and do it very explicitly?
And that's what we describe as a decision-back approach, which we'll talk about today, rather than starting from a very tactical play and saying, "Okay, what kind of data analytics do we have? How do we then bring AI on top of that to search for information better and hopefully get some recommendations?" It's orienting ourselves to the actual value that we need to capture. Understanding and mapping the decisions to be made, then starting to understand, "Okay, how do I actually deploy the different technologies that align very well with that piece?"
And so, if we look at what's in this decision, let's take an example: how do I optimize my media spend ROI? Well, there are lots of different considerations and questions that we need to ask ourselves. Most of the time, these are cross-functional in nature. The best people suited for those answers are in different parts of the organization. They're going to pull different data sources. They're going to pull different analyses. But once that happens, hopefully, some discussion, synthesis, and debate will happen. A recommendation is made, and a decision is made. And that then serves some type of business goal.
And that is the perfect place to take those lessons learned and apply those different business decisions. Every decision needs to be connected to this goal. It has to align with the process and the ways of working and be built to turn information into action.
And so, if you look at what happens in an end-to-end decision-making process, we have a decision trigger. The situation has changed. An event has occurred that will spur us to make a decision. Also, there are just natural cadences for decisions. Once that is done, we need to model the decision to be made. What is the decision that we need to make? What are the key questions we need to understand, or the drivers we need to consider? And who in the organization is best suited or has the authority actually to provide that input or make that decision? We call that the decision model or orchestration.
And then we have to make the decision. We have to synthesize multiple different data inputs. Some of them come from clear data sources; some of them come from experience or expertise within the organization itself. We then need to synthesize that information to create the recommendation that is then executed. And finally, many organizations fail to actually track those decisions, learn from them, and improve from them.
So this is what we describe as a five-step, end-to-end decision-making process, and we can consider applying AI to each of these different areas. The first step is the decision trigger: how do we monitor both internal and external data sources to identify critical signals such as demand shifts, supply disruptions, and cost-of-goods changes? That will then prompt a decision that needs to be made and signal it to the owner of that decision.
And then AI could support the actual framing of those decisions. So, what questions should we ask so that we have a comprehensive decision logic and can make that decision? We should also have organizational graphs that tell us, "Okay, who in the organization should be in that overall decision-making process?"
Once we have that model and the decisions are orchestrated, we can make that decision. So, how do we connect structured and unstructured data sources, synthesize those various inputs, and generate insights and recommendations about what actions need to be taken?
From there, we want to execute against that decision. We could trigger operational steps or various workflows across enterprise systems. For example, we could decide whether to update the pricing, identify shifts in media budgets, and record them in our ERP systems.
Once those decisions are made, we have to treat them as a new type of data. From there, we're now able to track them, measure their impact, and understand what is going well and what is not so that we can integrate that learning into the actual decision model and orchestration.
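The five steps described here can be sketched as a simple loop. This is purely an illustrative sketch, not any product's implementation; every function name, trigger, threshold, and owner below is an assumption made up for the example:

```python
# Hypothetical sketch of the five-step decision loop:
# trigger -> model/orchestrate -> decide -> execute -> learn.

def detect_trigger(signals):
    """Step 1: watch internal/external signals for a critical change."""
    # Assumption: a trigger fires when a signal drifts past its threshold.
    return [name for name, (value, threshold) in signals.items()
            if value > threshold]

def model_decision(trigger):
    """Step 2: frame the decision - the questions to answer and the owner."""
    # Assumption: a static map from trigger to decision frame.
    frames = {
        "cogs_increase": {
            "decision": "How should we respond to commodity cost increases?",
            "questions": ["Re-source?", "Reprice?", "Absorb margin?"],
            "owner": "VP Supply Chain",
        }
    }
    return frames.get(trigger)

def make_decision(frame, scored_options):
    """Step 3: synthesize inputs into a recommendation (highest score wins)."""
    return max(scored_options, key=scored_options.get)

def execute(recommendation):
    """Step 4: push the chosen action into downstream systems."""
    return {"action": recommendation, "status": "executed"}

def learn(decision_log, outcome):
    """Step 5: treat the decision itself as data for future improvement."""
    decision_log.append(outcome)
    return decision_log

# Made-up signal values: (observed change, alert threshold).
signals = {"cogs_increase": (0.12, 0.05), "demand_shift": (0.01, 0.10)}
triggers = detect_trigger(signals)
frame = model_decision(triggers[0])
choice = make_decision(frame, {"re-source": 0.7, "reprice": 0.4})
log = learn([], execute(choice))
```

The point of the sketch is that each step has a distinct input and output, which is what makes the process a natural place to slot in purpose-built agents.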
We will share some of these examples today during both presentations, and I will bring this to life in a demo shortly. But let's look at three examples where this type of AI decision-making works very well, at three different stages. You can think of the first as what we'll call human-led, AI-assisted. The second we'll describe as AI-augmented, where humans and AI each answer different questions. And the third is a fully autonomous version.
So in the first example, this is a pharmaceutical company that makes an oncology drug. That oncology drug has about 130 unique raw material inputs from about 100 different suppliers. Anytime one of the suppliers is late on delivery, manufacturing of that batch of oncology drugs is delayed. It's about $10 million per batch, and that's not mentioning the patients' health that is at risk. In this case, the organization uses a human-led, AI-assisted agent that helps them really understand, "Okay, what are the key questions we need to ask about why we are late? Which vendors can we communicate with so that they can get us those materials on time and keep the oncology drugs going?"
The second example is what actions we can take to improve brand performance. So, in this case, the CPG company spent $1 million plus a year on brand equity studies. They were doing the surveys, and they had analysts look through a 300-page document to say, "This is what you should be doing." So in this case, we created a decision-making model that says, "Okay, should we adjust our brand performance? What are the key metrics that we need to be looking at? What are the key KPIs we need to look at and then automate that insight recommendation process?" This allowed us actually to deliver those insights about two weeks faster. But most importantly, the marketers no longer had to sit through presentations of 300 pages. They were able to get to the specific recommendations and act on those.
The third example is a consumer health organization that spends about a billion dollars plus on media allocation every single year. Imagine deciding, "Okay, how much do we invest on the brand side, and then in the specific channels," and so on. Based on that decision model, we were able to automate the process of making about 1,000 recommendations every couple of weeks on where to allocate the media budget. The result is about an 80% acceleration in time to insight. And Jan, you've got some retail examples as well.
Jan Mehmet: Yeah. I mean, it's always interesting, isn't it? You see a lot of companies that have resisted moving forward with AI in a deep way. Of course, predictive analytics and certain analytics have been available for a long time. However, some organizations have pressed ahead. I'll point to Walmart as one huge global story, with over 10,000 stores in 24 countries. They recognized the need to harness a lot of data, and so they went through a strategic implementation leveraging AI in their inventory management, and they were really able to reduce their weeks of supply and their stockholding costs. So, of course, it has a huge impact on their business.
From a supply chain optimization standpoint, they were able to monitor performance, transportation, logistics, etc., reduce their lead times, and improve their product availability. So, it's going to be really valuable across a business of that scale. It would be valuable for any business, but certainly for their size.
Just thinking about it from a brand standpoint, a great example is Levi's. And you would think brands that have existed for so long would have a really good view of their business, but they recognized that they needed to understand some of their data and insights better. And so wanting to respond to consumer fashion trends, they looked at how to detect and capitalize on emerging trends. They created a unified data platform using machine learning and insights that they could derive from various KPI points of interest. So, point of sale, e-commerce purchases, online browsing, loyalty program, all the data points they could pull in.
And of course, they've got an enormous distribution footprint. I think it's around 50,000 points of distribution. So, bringing in all of that information and then looking at what it would tell them about trends, they were able to see that within one category, there was a much wider denim demographic for a product than they'd assumed, which enabled them to grow that category by about 15%. So again, sometimes things that appear quite obvious are unmasked when you have the data to really support it.
Lanny Roytburg: Great. So, we want to talk about how you orchestrate some of this agentic AI in the space of decision intelligence. And so, Oleg, I want you to take it from here.
Oleg Royz: Thank you, Lanny. I will not uncover anything new, but as a backdrop: we produce a lot of data as human beings. And if you just think about an enterprise, there's a lot of data: your CRM, ERP, manufacturing IoT devices, demand signals, and marketing signals. Everything in the enterprise is producing massive amounts of data.
But data without action is meaningless. In Jan's and Lanny's examples earlier, agility plays a big part. Whether we like it or not, we rely heavily on our subject matter data experts, and sometimes it takes weeks to collect the relevant information, synthesize it, and make decisions. Whether you're looking at the effectiveness of your marketing campaign and deciding whether to reinvest or move your money somewhere else, or whether it's inventory management or anything else, agility, latency, and accuracy play a big part right now.
When we talk about AI, we've had AI capabilities for many, many years, from natural language processing to computer vision to optimization. There's been a big boost in reading invoices and entering them into your accounts payable systems, dealing with unstructured text or pictures, and synthesizing or enriching product descriptions. But now, in the past 12 to 18 months, a new term has entered our vocabulary: agentic AI.
And I think the key difference here is that it can now add context to decision intelligence, thanks to the growth of large language models. You hear names like Claude, OpenAI, DeepSeek, and many others, and there's a lot of industry competition happening out there over which model is more optimized, but I'm not going to advocate for any one of them. The key element that differentiates everything is how you observe, understand the signal, take actions, learn, and make decisions. Obviously, with humans involved.
I just wanted to highlight four key characteristics that define agentic AI. First is autonomy and self-optimization: how do I analyze data, rate options, and execute decisions? How do I use reinforcement learning with dynamic reward functions, without any hard-coded logic?
Second, and this is very important: AI needs a playbook. They need to understand how I am making a decision. What is the best decision? How do they interpret the signals? What's my playbook regarding optimizing, reinforcing, and recommending a specific outcome? I think Lanny, when he goes deeper into his examples, will talk a lot about those types of KPIs and how you create that playbook for making the decisions. Not just humans, but AI agents rely on those, too.
Now, the next one is adaptation. Many systems kind of freak out when data changes a lot, but agentic AI thrives on it; it operates in a very dynamic environment. You've heard examples, maybe in finance, where high-frequency trading interprets a signal in milliseconds and makes decisions on the fly. And the rest of the industries are trying to catch up and apply a similar type of strategy, bringing it to the supply chain world, the marketing world, and other areas.
The final and most important one is continual learning. It's not just humans who provide the reinforcement anymore. Technology is moving to where you can have large language models that act as referees, looking at the different outputs and feeding that back into the loop, while humans make those assessments and improve performance.
Now, adoption is growing. And as you can see on the right, there are definitive metrics regarding agentic AI accuracy, response scale, and the ability to operate reliably. Now, "autonomous" is still a tricky word, because there must still be a human in the loop to avoid bias and make sure that AI is ethical; I'll talk a little bit more about that later. But agentic AI is pivoting and taking the best out of each capability. As I said, natural language processing, computer vision, and the other capabilities provide context and orchestration and drive enablement across organizations.
Lanny Roytburg: Thanks. So let's talk a little bit more about these agents. So we'll talk about decision agents from our definition. And this can be debated a bit. But when we talk about decision agents, we mean purpose-built agents that either assist, augment, and/or automate different steps of a decision-making process.
And really, not all decisions are created equal. Some are better suited for more automation, and some are better suited for less automation, depending on the level of data homogeneity, the number of stakeholders involved, the actual risk of the decision, and other factors. But generally, as we go through the example here, we'll think about three different types of agents.
One is the insight agent, helping to understand what's actually happening. These agents will help detect those decision triggers. They'll understand the questions within the decision context. They'll be able to pull and connect data sources, synthesize that data, and provide insights that fit those purposes. So, what is that information actually telling us in the context of those questions?
Then there are recommendation agents. What should we do? They help in terms of framing the decision logic, understanding the interrelationship between the insights, contextualizing and automating those recommendations, and helping you act on those recommendations.
Then there are the optimization agents. So the decision gets made. How do we improve it over time? We recommend the next business question or driver to ask, monitor and evaluate those outcomes, and then improve those recommendations through those feedback loops.
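One way to picture how the three agent types divide the work is as three small components chained together. This is an illustrative sketch only; the class names, playbook entries, and metrics are assumptions, not any specific product's API:

```python
class InsightAgent:
    """Answers 'what is happening?' by surfacing the key driver."""
    def analyze(self, metrics):
        # Assumption: the metric with the biggest decline is the key driver.
        return min(metrics, key=lambda m: metrics[m]["yoy_change"])

class RecommendationAgent:
    """Answers 'what should we do?' given the driver found above."""
    PLAYBOOK = {"cogs": "Renegotiate supplier contracts or adjust price.",
                "promo_spend": "Rebalance promotional investment."}
    def recommend(self, driver):
        return self.PLAYBOOK.get(driver, "Escalate to the decision owner.")

class OptimizationAgent:
    """Answers 'how do we improve?' by scoring past decisions."""
    def __init__(self):
        self.history = []
    def record(self, driver, recommendation, outcome_score):
        self.history.append((driver, recommendation, outcome_score))
    def best_practice(self):
        # The highest-scoring past decision feeds back into the playbook.
        return max(self.history, key=lambda h: h[2]) if self.history else None

# Made-up year-over-year changes for two drivers.
metrics = {"cogs": {"yoy_change": -0.08}, "promo_spend": {"yoy_change": -0.02}}
driver = InsightAgent().analyze(metrics)
rec = RecommendationAgent().recommend(driver)
opt = OptimizationAgent()
opt.record(driver, rec, outcome_score=0.8)
```

The feedback loop in `OptimizationAgent` is the piece most organizations skip: without recording decisions and outcomes, there is nothing for the system (or the humans) to learn from.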
So, we'll discuss those three levels here in this example. But keep in mind that little flow that I had talked about in the past: the decision trigger, decision modeling or orchestration, decision making, execution, and then learning and optimization as I go through this example.
So let's take an example. The first step is essentially analytically connecting the various leading and lagging indicators. If market share declines, we consider that a lagging indicator; many different things had to happen for the market share to decline. And so what we need to understand, when we get to leading indicators, is which metrics are connected to market share and how they impact that final KPI.
And so we could see here, looking at a retail example, maybe market share is driven by volume, and volume is driven potentially by the price gap. What drives that price gap? Cost of goods sold, promotional investments, consumer sentiment, and other elements like that. What an insight agent will then do is understand, "Okay, what is actually driving that change?" So we can say, "Okay, cost of goods sold is the main driver of gross margin decline." And that tells us, "Okay, we actually need to make a decision around that: how should we respond to commodity cost increases?" Who is in charge of that decision? It then gets routed to them to start making it.
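That driver tree (market share driven by volume, volume by price gap, price gap by cost of goods, promotion, and sentiment) can be represented as a simple graph that an insight agent walks to find a root cause. The structure and the change figures below are hypothetical, made up for illustration:

```python
# Hypothetical KPI driver tree: each KPI maps to the metrics that drive it.
DRIVER_TREE = {
    "market_share": ["volume"],
    "volume": ["price_gap"],
    "price_gap": ["cogs", "promo_investment", "consumer_sentiment"],
}

# Observed period-over-period changes for the leaf metrics (made-up numbers).
CHANGES = {"cogs": 0.09, "promo_investment": -0.01, "consumer_sentiment": 0.02}

def root_cause(kpi):
    """Walk from a lagging KPI down to the leading indicator
    with the largest absolute change."""
    drivers = DRIVER_TREE.get(kpi)
    if not drivers:            # leaf metric: this is the candidate cause
        return kpi
    leaves = [root_cause(d) for d in drivers]
    return max(leaves, key=lambda m: abs(CHANGES.get(m, 0.0)))

cause = root_cause("market_share")   # "cogs" moved the most in this example
```

A real implementation would weight each edge by its estimated impact rather than just comparing raw changes, but the shape of the traversal is the same.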
The next step is to understand, "Okay, how do we structure this decision logic?" What questions do we need to answer? Which role, team, or people within the organization should be responsible for both the decision and the impact inputs? Do we have the data to answer the questions, or do we need to rely on some type of human experience, expertise, or some other source to get to that information?
And then finally, once we have that decision logic together and connected to the actual data sources, all of a sudden these decision agents start working together: the insight agents pull the right information; the recommendation agents read that data, synthesize it, interpret it against the questions, and then provide the recommendation. And that decision is made. We're tracking those decisions over time, and from there we can identify, "Okay, what are the decision insights? What works and what doesn't when we're making these decisions? How do we improve decision quality? How do we improve speed? What questions should we ask in the future to optimize that decision model?"
And then, how do we actually help people within that process? By giving them those prompts throughout the actual decision-making process, can they make better decisions in the future? We're going to do a demo, and we'll show this live here in a second.
This is really the knowledge graph, and it's a critical component. When we design these various decision agents, we need to understand the following: "Okay, what is the business decision that we want to make?" Say we need to optimize our media spend. What are the key business issues or questions that we need to ask? What are the insights needed? What are the variables that will help us answer those questions? And then what data sources need to be connected for us to actually do that?
And the data piece is incredibly important, something we see so often in the agentic AI context. But even outside that context, so much money has been spent on building that infrastructure, acquiring data assets, and democratizing all that information via business intelligence dashboards and more advanced capabilities on top of them. But too often, business stakeholders say, "I've got all this information, I've got all these technology assets, I've got all this data. How do I make a decision on what I should innovate or how we should grow the business?"
And what we see in that data-decision disconnect is that about 60% of that investment is wasted, meaning the actual decisions made do not use the inputs from that overall process; it's much more human-driven. And so Oleg's going to talk a little bit here about how you actually get the right data infrastructure and data products in place to really start optimizing those decisions and realizing those decision agents.
Oleg Royz: Thank you, Lanny. Yeah. As technology evolves and there are many investments and innovations happening across the hyperscalers, Googles, Microsofts, Amazons of the world, and others, organizations still face multiple barriers. I just want to talk and highlight some of the challenges and approaches to solving them. So, let's unpack this.
As I mentioned, organizations are drowning in data and struggling with decisions. And the gap between raw data and actionable intelligence isn't just a technology challenge; it's a strategic roadblock. The first and most common barrier is data quality, access, and connectivity. Bad data means bad decisions, and it's driven by incomplete, siloed, or unreliable data. A lot of team members, including data scientists, very highly paid professionals, waste time cleaning up that data.
Just to give you an example: I was doing a remodeling project, and I went online to Home Depot to make sure that certain products I needed to finish my project were available at a store. The site showed ten items in stock on the shelf. But I drive over, and they're not there. So obviously, I'm dissatisfied, my project is delayed, and now brand loyalty takes a hit. And it's all due to the very simple fact that the data didn't sync up in time or wasn't rendered correctly.
So, there are some ways to address this, and AI plays a big part in it too. One is driving data contracts with your data suppliers and building tools around schema validation and freshness checks. For example, I'm expecting a million rows every day from this system; today, I'm getting 700,000. Maybe I should flag it or trigger somebody to take immediate action. How do I make sure that there are no blanks, or catch other anomalies through my automated data pipelines? Again, there are a lot of new technologies. I'm not going to go into depth, but tools are available, and we, as a solution technology company, bring a lot of expertise to address those points. And finally, the data needs to be findable. You need to understand how it evolves and how the KPIs are constructed, to make sure that you're not reinventing the wheel and that the same formula is applied consistently across your different brands or global markets.
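Oleg's example of expecting a million rows and receiving 700,000 is exactly the kind of rule a data contract encodes. A minimal sketch of such checks, where the thresholds and the contract fields are assumptions chosen for illustration:

```python
# Minimal data-contract checks: row volume, freshness, and null rate.
from datetime import datetime, timedelta, timezone

CONTRACT = {
    "expected_rows": 1_000_000,   # assumed daily volume for this feed
    "min_row_ratio": 0.9,         # flag if we receive < 90% of expected
    "max_age_hours": 24,          # flag if the latest record is stale
    "max_null_rate": 0.01,        # flag if > 1% of key fields are blank
}

def validate_feed(row_count, latest_ts, null_rate, contract=CONTRACT):
    """Return a list of violations to alert on (empty list = healthy feed)."""
    violations = []
    if row_count < contract["expected_rows"] * contract["min_row_ratio"]:
        violations.append(f"volume: got {row_count:,} rows")
    age = datetime.now(timezone.utc) - latest_ts
    if age > timedelta(hours=contract["max_age_hours"]):
        violations.append(f"freshness: data is {age} old")
    if null_rate > contract["max_null_rate"]:
        violations.append(f"nulls: {null_rate:.1%} of key fields blank")
    return violations

# 700,000 rows against an expected million should trigger a volume alert.
issues = validate_feed(
    row_count=700_000,
    latest_ts=datetime.now(timezone.utc) - timedelta(hours=2),
    null_rate=0.002,
)
```

In practice these checks run inside the ingestion pipeline and route alerts to the data steward for that domain, rather than returning a list.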
The second emphasis is really, again, how do I connect multiple data sources to drive holistic recommendations? As I said, each organization has a ton of different systems: websites, CRM, call centers, ERP, spreadsheets. Without unification, I only see part of the picture. For example, in our marketing case, if I need to optimize a campaign based on clicks but I don't have the context from my CRM data about which clicks actually convert to revenue, I may be missing the full picture.
So the way to address this is to apply unified knowledge graphs, provide a very robust context and semantic layer, bring that into your data catalogs, and create data stewardship across domains that provides context, both business and technical. And have a bronze, silver, and gold medallion architecture, where applicable, to drive efficient data products, which I'm going to talk a little bit about later.
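The bronze/silver/gold medallion flow mentioned here can be illustrated with a toy pipeline. The record fields, cleaning rules, and aggregation grain below are assumptions invented for the example:

```python
# Toy medallion pipeline: bronze = raw as landed, silver = cleaned and
# conformed, gold = business-level aggregate ready for decision agents.

bronze = [  # raw records, exactly as ingested (duplicates, bad rows included)
    {"sku": "A1", "units": "3", "store": "chi-01"},
    {"sku": "A1", "units": "3", "store": "chi-01"},   # duplicate
    {"sku": "B2", "units": "",  "store": "chi-02"},   # missing units
    {"sku": "B2", "units": "5", "store": "chi-02"},
]

def to_silver(rows):
    """Deduplicate, drop unusable rows, and cast types."""
    seen, clean = set(), []
    for r in rows:
        key = (r["sku"], r["units"], r["store"])
        if r["units"] and key not in seen:
            seen.add(key)
            clean.append({"sku": r["sku"], "units": int(r["units"]),
                          "store": r["store"]})
    return clean

def to_gold(rows):
    """Aggregate to the grain the business decides at: units per SKU."""
    totals = {}
    for r in rows:
        totals[r["sku"]] = totals.get(r["sku"], 0) + r["units"]
    return totals

gold = to_gold(to_silver(bronze))
```

The layering matters because each consumer reads the layer appropriate to its trust requirements: data scientists may explore silver, while decision agents and dashboards should only ever see gold.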
And the third one is really: I have the data, but now what? For the past 20-plus years, we've invested heavily in Tableau, Power BI, and other visualization tools. But what does my observation tell me? Dashboards just tell you what happened, not what to do next. So now the emphasis is on driving unbiased, transparent, AI-enabled data factories focused on closed-loop intelligence, where insight signals trigger actions under human supervision.
Those are some of the key barriers to data quality, data harmonization, and providing context to decision intelligence. The ecosystem has tools to address those. Let's move on to the next slide.
It is a little bit of a busy slide, but I'm going to try to provide the context for a modern data ecosystem. First, it needs to have a mission. The mission should be to act as an enabling force that drives cross-functional impact on business decision-making, taking advantage of all available data assets and making the data AI-ready.
From a design perspective, not many organizations are starting from a green field, so you have a lot of legacy, potentially technical debt, and other issues. But the conceptual design of the modern data platform must be a scalable solution where each component, whether it's ingestion, storage, or compute, provides a level of isolation, because technology moves so quickly that a new player or a new innovation comes up every 3 to 6 months. So what you want to do is build that into your design so that you can extend, replace, and optimize your components over time, keeping the platform scalable and cost-efficient.
I've seen implementations where customers made a big bet, and all of a sudden their cloud costs skyrocketed from a $20,000 monthly bill to $80,000 or $100,000. So all of those things need to be taken into consideration.
However, some key pillars need to be considered. Your platform needs to drive towards self-service capabilities, where technology is not the provider but an enabler. How do you drive that type of data democratization and transparency, leveraging the modern data stack and modern data catalogs? How do you capture a metadata layer and a business semantic layer to give your data assets context? How do you create a marketplace of data products, so people can see what's available and what knowledge they can derive? And finally, with AI-enabled decision intelligence capabilities, how do you take that knowledge and build actions around it?
And again, here are some examples. Whether you're in inventory or marketing, the persona in the organization doesn't matter; there's always a use case where you understand what value you're after, what decisions need to be made, what data you need, and which source systems support it. And hopefully, in the background, through these purpose-built products and capabilities, you have a very agile way to launch those use cases and drive value.
The final piece I'll just say is that it's never about technology. It's always in the context of people, processes, and organizations. Technology is the easiest part, but we all have to make sure that we understand the personas, the purpose, and the values that they are trying to generate out of the system.
I've used the term data product, and I think it may eventually evolve into AI-ready data, or maybe some other term. But what it means is that, just like any other digital product we purchase or use in our everyday lives, a data product has raw materials, and it's meant to be reused and extended to cover a broad range of different use cases.
Let me give you an example: a data product that looks at our consumer. Over time, it can evolve to include our transactional data and our demographic data, and it can help connect the signals and see who might be loyal. I may have interacted with this brand or product since I was in college, but I'm evolving now: I have a family, I have kids. So it's about how you track someone from the first click on the website as an unknown visitor, to a registered user, to the point where I'm a student and then a father, and how my tastes, preferences, and channels change over time.
So, from a product perspective: how do you collect the relevant information? How do you expose it to your internal and external stakeholders, via dashboards with AI-enabled capabilities, or even through APIs if you're trying to monetize your data? But always have this in mind: who is your target persona, what value are you trying to bring, and what raw materials are needed to make this data product work?
And again, just like everything else, just like the mobile applications on your phone, a data product will have incremental releases. Maybe today it has two data sources, roughly harmonized; with the next release, you bring in additional data sources and build out the capability of this data product in service to the market. And according to McKinsey, regardless of the organization's size, whether it's a small mom-and-pop or a global organization, it ultimately comes down to somewhere between 5 and 15 data products to cover all aspects of your organization. And if you start applying that type of mindset vertically across your data assets, you can accelerate your innovation and use cases, whether they're AI-enabled or anything else, by 40%, just by thinking and applying this data-as-a-product strategy.
So with that in mind, earlier this year DataArt and Cloverpop entered into a strategic partnership agreement, where we saw this amazing opportunity to close the data-to-decision divide and address some of the challenges I spoke about earlier: bringing Cloverpop's playbook and decision intelligence capability across different domains together with DataArt's technology, solutions, frameworks, and accelerators, to build this pyramid of connecting raw sources, transforming them into information, turning information into knowledge, and ultimately building wisdom by evaluating how effective our decisions are and how we can improve over time. So, again, as Lanny mentioned, he will go into demo mode right now. But if you have technical questions or need help evaluating different technologies, please let us know, and we can connect after this webinar.
Lanny Roytburg: Thanks. So let's do a quick demo to show you a little bit of what a decision agent looks like. We need a bit of context before we get into this. This is an example where the system is connected to sources around brand equity, POS, and some other commercial data sources. Here, we can see the decision trigger: it's understood that perhaps the brand funnel is waning, and something within the creative needs to be improved. So we can quickly click on this decision area, and all of a sudden the playbook shows up. We can see here what decision is to be made: do we need to adjust our creative?
And we've listed a set of different business questions that need to be answered to make that type of decision. Within this scope, each is handled by a different type of decision agent that is set up to pull the right data, interpret it, analyze it, and then answer these questions. And then you have a parent, we'll call it the recommendation agent, that understands how these KPIs and questions interact, and what the actual recommendation should be.
So let's get that process started. Let's say we need to adjust our creative, and I'll say for the good line brand. I'll go ahead and get that process going. Now, in this case, we could make any adjustments we want to the actual decision logic. So that's the first place we're thinking about the human in the loop. However, the system is also trained on many decisions that have been made in the past, so it can also provide us with different types of recommendations we might want to add to that decision tree to ensure the decision is holistic.
Now, let's say I want to add my own question. I want to add a driver around how my brand is performing among Hispanic consumers. The system will actually start looking through those data sources and pre-filtering them, to see: can we actually answer this question with the data on hand? So it's looking through it. At first it looks like nothing is there. Oh, it looks like we do have brand funnel KPIs popping up, and we can actually add that to the tree itself.
So we've got our decision logic, and we want to go ahead and start the decision. A person can review the actual data sources within each of these nodes. And it's important to note that we can now see the appropriate data: the right data in the right context at the right time. Everything is fully visible for us right here. So for any changes to the content or media presence, we can find that information, which has also been pulled in for us. Additionally, we can interact with the data in natural language. We can ask questions, it will provide us with the information, and it will also pre-filter the appropriate data sources for us to work with.
Now we've got all the data connected, and we've got all these questions that we need to answer. But I don't want to go through and do all that work myself; I'd like the insight agent to answer all these questions for me. So we'll go ahead and complete the drivers. What the system is doing right now is live-pulling the appropriate data in the right context, synthesizing that information, and answering each one of those business questions.
And just like that, the system has now answered every single one of those questions, and it's done so in a way that's both traceable and transparent. Traceable in the sense that we know exactly what business logic it's using, and transparent in the sense that we can go back and see all the data informing it. So it's fully auditable in that sense. Once we have all this information, remember, we don't have the recommendation yet; the insight agent pulls the information and answers the questions for us. Now, I can go ahead and actually create the recommendation.
The system is looking through it. It understands how those various questions relate to one another: what happens when, say, our innovativeness scores are going down while our top-of-mind awareness is going up, and all those different elements. It will provide us with that overall recommendation. Now, as a decision maker, I can focus more on the strategic elements.
Once these decisions are made, I can just go ahead and, imagine, make this decision. This now creates a decision record. I can request approval, I have the full list of all these elements here, and we can share this externally with a decision memo. Most importantly, all of this creates a new type of data: the decision as a data source.
All of that information is now accessible via the decision bank, which is essentially what the system uses to train, learn, and improve. You'll see a lot of the same decisions here from the demo environment. But as the system collects more and more decisions, it's able to provide better and better recommendations for the end user as well. So that is what a decision agent looks like.
This would fall into the second level of decision agents. Keep in mind the levels we talked about: human-led, AI-assisted; the second was AI-augmented; and the third was AI-automated. This is the AI-augmented case: AI is actually helping us answer these questions, but the human is still driving the decision-making process itself.
We have about five minutes left. I want to open the discussion to any questions that you might have, and we'll take it from there.
Yeah, I'll maybe kick it off, thinking about that data-to-decision connection and those data products. How do you typically start from a data product standpoint? We have to make these decisions, and the first step is creating some of these data products. How do you typically approach that type of problem, and how long does it typically take?
Oleg Royz: Yeah, great question, Lanny. Like everything else, data products are not just about having good data; they're about the value they need to generate. And it's impossible to boil the ocean, because, like I said, even in my consumer example, you may end up, over the life of the product's evolution, pulling from up to maybe a dozen different systems. So the important thing is to know what value-based strategies and use cases we want to support, and how we correlate value, effort, and impact. That starts with understanding who your personas are, whether they're people or maybe even processes, and thinking: okay, if I were to start with my most valuable product for the first release of my data product, what answers, what capabilities, what guardrails do I need to have built in?
So it all starts with what my business needs and capabilities are. I follow a similar approach to your value tree: from the KPIs and decisions that need to be made, down to the source systems that need to support them, applying a value matrix of impact and effort, and going from there to build the first release of your data product.
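The impact-versus-effort prioritization Oleg describes can be sketched as a simple scoring pass. This is purely illustrative: the use-case names and scores are invented, and a real exercise would score candidates with stakeholders rather than hard-code them.

```python
# Hypothetical impact/effort prioritization for first-release
# data-product use cases. Names and scores are made up.
use_cases = [
    {"name": "campaign ROI view",     "impact": 8, "effort": 3},
    {"name": "360 consumer profile",  "impact": 9, "effort": 8},
    {"name": "call-center summaries", "impact": 5, "effort": 2},
]

# Rank by impact-to-effort ratio: quick wins float to the top.
ranked = sorted(use_cases,
                key=lambda u: u["impact"] / u["effort"],
                reverse=True)

for u in ranked:
    print(u["name"], round(u["impact"] / u["effort"], 2))
```

With these illustrative numbers, the high-impact but high-effort consumer profile lands last, which matches the "don't boil the ocean" advice: start where the value-to-effort ratio is best.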
Lanny Roytburg: Thanks. So we have another question, maybe a bit better suited to driving organizational change and really thinking about the processes. What do you see as success in terms of bringing AI to the enterprise and driving real adoption?
Jan Mehmet: Well, it is a great point, because I think fundamentally it's about trust. And what's been interesting thus far is that you find organizations have used data for a long time, sometimes not necessarily well. Sometimes, as we've already discussed, the data is flawed in itself, leading to issues. But if you get past that, I think you get to a point where it becomes about confidence in the decision being made.
Initially, it's about ensuring there's foundational data that everybody can sign up to. Second, it's about moving from that point of human interaction to a model that is agentic in its full capacity. And I think that's a process most organizations need to work through to build a level of confidence, so that in certain areas where they're comfortable testing, they can move to agentic decision-making without compromising anything.
That testing opportunity, I think, is really critical for businesses, because it allows them to understand that it's very viable to wrap business rules around the process, monitor how those business rules are adhered to, and then get to the outcomes they'll achieve by using the process itself. So I think it's really about testing and learning, funnily enough, and being confident enough to take steps towards the proposition.
And I think the initial phases are really important: first, identifying and setting key objectives; second, applying the testing structure from what we might call a human observation point of view. So you go from human interaction, to the human observing the process, to really seeing what those outcomes look like. And then I think you'll find it difficult not to move forward with it. I just think it's going to take a little bit of time.
Lanny Roytburg: Great. Thanks. So everybody, thanks so much for joining the webinar. As a follow-up, we'll send a copy of this presentation to everybody who has attended. If you have any questions, please feel free to reach out. All of our emails are on the right-hand side here. If you'd like a demo, please also reach out. You can either contact us directly or book a demo at cloverpop.com. But the recording and presentation will be sent out to everybody. Feel free to share it. Thank you so much for joining us this Monday morning. Thank you all.
Oleg Royz: Thank you.
Jan Mehmet: Thanks very much.