Webinar
September 03, 2025 11:00 AM ET

Your Data Wants to Talk to You: The Age of Conversational Intelligence

Watch our webinar on how conversational intelligence and AI-driven data are transforming organizational decision-making and strategic capabilities. Alexey Utkin and Oleg Royz demonstrate practical implementation strategies for governing enterprise assets while accelerating trusted AI adoption that enhances agentic reasoning skills. Essential viewing for data leaders, executives, and solution architects ready to deploy conversational AI for competitive advantage.

 

Key Takeaways

  • Enterprise Data Is Growing About 30% a Year, Fueling Conversational Intelligence Adoption: Organizations unlock competitive advantage through AI-powered analytics that transform silent data into business insights.
  • Semantic Models Eliminate SQL Dependencies in Modern Business Analytics: Companies enable natural language queries without technical expertise, replacing traditional database barriers.
  • Conversational AI Reduces Data Analysis Time from Days to Minutes: Chat interfaces replace multi-dashboard workflows, accelerating decision-making through real-time business insights.
  • Data Governance Framework Critical for Trusted Enterprise AI Analytics: Robust lineage and quality controls prevent hallucinations while ensuring accurate conversational intelligence results.
  • Three-Pillar Strategy Enables 2-3 Week Conversational Analytics Pilot Programs: Data bedrock, semantic models, and trust engines deliver measurable ROI through structured implementation.

Speakers

Alexey Utkin
Oleg Royz

Transcript

Oleg Royz: Good morning, good afternoon, and welcome everybody. Thank you for joining us today. It's a pleasure to have you here for our webinar: "Your Data Wants to Talk to You: The Age of Conversational Intelligence." In this webinar, we'll explore how conversational intelligence gives a voice to your enterprise data, allowing you to gain deep, actionable insights at scale.

We will show you what it takes to build the operational and technical ecosystem necessary to leverage these insights, from improving sales and customer data to unlocking new opportunities for your entire organization. We welcome your questions. Please enter them in the comment section, and we'll cover them within the last 10 to 15 minutes of this webinar.

My name is Oleg Royz, and I'm Vice President of Solution Consulting in Data Analytics and AI at DataArt, a global technology services engineering company. I have 25 years of experience driving large data, digital, and AI transformations, always with an emphasis on business outcomes and ROI. I'm going to be wearing a business hat for today's webinar, and I'm going to introduce my colleague, Alexey Utkin.


Alexey Utkin: Hello, everyone. Thank you for joining. I'm Alexey, based in London. My role at DataArt is to lead our Data Analytics Lab, which is our horizontal consulting service related to data. I have more than 20 years of experience supporting our clients in various transformations – specific transformations related to technology, digital, and data.

Industry-wise, I have most of my experience in finance – capital markets, financial services, and fintech. But today I will be wearing the technical hat and speaking about all these great conversational intelligence capabilities.


Oleg Royz: Let's get into it. As the title of this webinar suggests, "Your Data Wants to Talk to You" is both a promise and a reflection of our current reality. The business world, as we know, is changing at an unprecedented pace. The challenges are more complex than ever before across all industries: supply chain disruptions going back to the COVID years, geopolitical uncertainty, tariffs, ups and downs in the market – all while customer expectations are higher than they've ever been.

At the same time, we are experiencing a technological revolution and breakneck innovation in AI, especially generative AI. We all remember – I think it was 2022, and it seems like ages ago – when ChatGPT popped up and changed perceptions of how we, as individuals and enterprises, think about adoption, use, and application in every single industry.

It's a lot to navigate. There's the velocity and complexity of business on one hand, and the force of technological innovation on the other. This dynamic creates a critical question for every business leader: How do we harness it? How can we generate a competitive advantage in this new environment?

The answer lies in our data – your data. For years, since the internet and digital revolutions, we've collected an enormous amount of data from customer calls and chat logs to emails and social media interactions, and the list goes on. But this data has been largely unstructured, siloed, and silent. It's full of hidden signals and insights just waiting to be heard. Dashboards were the only windows to business intelligence and our storytelling capabilities.

The premise of today's webinar is that unlocking this conversational capability, with deep reasoning on top of your data, is going to be the next great competitive advantage for enterprises.

Just to set the perspective, Alexey, can you give us a sense of by how much data has grown at an enterprise level over the past several years?


Alexey Utkin: I think we've all seen these numbers. Data generally is growing about 30% a year on a global scale. But your question is more about what happens in the enterprise world. For the last five to ten years, a lot of organizations bought into the idea of data-driven transformation and the use of a wide range of third-party data to drive their business and work with their suppliers, partners, and clients in the data ecosystem. The amount of data they hold has simply exploded. The question of deriving value from it still stands, and the capabilities we're talking about today are highly relevant to utilizing all this data.


Oleg Royz: Thank you. Let's go to the next slide. As you can see on the right-hand side of this slide, there are some things that we are very familiar with – things that we actually got to do a lot for our clients in the past five to ten years. That is really the transition of what's inside our enterprise and our data into visual storytelling capabilities.

Sometimes, it becomes too much. Sometimes, we create something at the core of an organization that is used once and never again. Sometimes, people have to browse through dashboards. Let me illustrate with an example: Let's take Sarah, a director of marketing at organization X. Usually, we're trying to answer questions, and in this case, she's trying to answer, "What Q3 campaigns drove the most revenue in the Western region for my company?"

Usually, it takes quite a bit of time. There's not one single answer. You have to pull digital data together. Maybe you have to browse through multiple dashboards. You have to export the data. You have to align it, maybe match it, and then possibly one dashboard tells you one KPI, another dashboard tells you another one.

In reality, in today's world, when businesses ask a question, they usually go to an analyst, who goes through all of these details. There's nothing wrong with these dashboards because, as humans, we are natural storytellers. We make sense of the world by creating narratives, and these visualizations help us do this. BI has always been about leveraging data to tell the story of a company's performance.

But looking back at that generation of technology, it created a bit of a gated community. Organizations relied on a handful of skilled analysts to be the gatekeepers of data creation, and this created significant bottlenecks. One is the language barrier – you had to know SQL and translate business questions into a language the database could understand. SQL, by the way, stands for Structured Query Language.
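To make the language barrier concrete, here is the kind of translation an analyst performs for Sarah's question. This is a minimal runnable sketch: the schema, table, and column names are entirely hypothetical, and sqlite stands in for the enterprise warehouse.

```python
import sqlite3

# Toy schema and data; table and column names are hypothetical,
# purely to illustrate the translation step an analyst performs.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE campaigns (id INTEGER PRIMARY KEY, name TEXT, quarter TEXT, region TEXT);
    CREATE TABLE revenue (campaign_id INTEGER, amount REAL);
    INSERT INTO campaigns VALUES (1, 'Summer Promo', 'Q3', 'West'),
                                 (2, 'Back to School', 'Q3', 'West'),
                                 (3, 'Spring Sale', 'Q2', 'West');
    INSERT INTO revenue VALUES (1, 120000), (2, 45000), (3, 80000);
""")

# "What Q3 campaigns drove the most revenue in the Western region?"
# expressed in SQL -- the language barrier in a nutshell.
rows = conn.execute("""
    SELECT c.name, SUM(r.amount) AS total
    FROM campaigns c JOIN revenue r ON r.campaign_id = c.id
    WHERE c.quarter = 'Q3' AND c.region = 'West'
    GROUP BY c.name
    ORDER BY total DESC
""").fetchall()
print(rows)  # [('Summer Promo', 120000.0), ('Back to School', 45000.0)]
```

Conversational intelligence automates exactly this translation, so the business question never has to be restated in SQL by a human.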

The second one is transparency. There's a black box of KPIs because it was not clear how the data was derived, which tables were used, how it was calculated, etc. We talked a lot throughout the years about the emphasis on data governance. But at the same time, governance was a challenge in managing these types of ecosystems.

Alexey, any thoughts on why it was a challenge?


Alexey Utkin: In general, there are many facets of why governance can be challenging. Some of them are the roles of business and governance. Some of them are whether it's to address regulatory pressure or derive business value. But in this context, it's more about the picture you've drawn—all this diversity of tools, dashboards, Excel on the side. The technology foundations and plumbing around it weren't built to actually govern data. All these calculations, analytics, and models were spread throughout the data ecosystem.

I think historically, in this context, that was one of the biggest challenges, which is increasingly solved through the platforms we'll be covering today.


Oleg Royz: Thank you. Now, let's jump ahead and see the promise of conversational intelligence. As you can see on the right-hand side, it's a much cleaner interface. There's no dashboard, no details. There's simply a prompt to enter your questions, just like we love our Google search engine, Gemini, or ChatGPT.

In this case, we're going to use Databricks Genie to demonstrate this capability; Alexey will mention other technologies as well. Here, the angle is flipped upside down. Instead of searching for and stitching together information, Sarah can now put her questions into the chat. We'll talk about what it takes to provide that answer, but let's just assume everything is set up. The chat will say, "Your Q3 campaigns compared to Q2 were down 10%. Would you like to see more channels?" It prompts you and anticipates your questions, and you can follow up or change the question.

In this case, it actually gives you a breakdown by channel. It seems like your email engagement was a weak spot due to the timing of the day or something like that. What I simply wanted to show – and we'll get into the demo later – is how the opportunity to drive intelligence, trigger action, and insight can be done much faster.

This promise can become truly a strategic partner and company-wide capability because it turns data from a difficult-to-access asset into a living, breathing part of your business conversations. Eventually, it empowers everybody, from the CEO to the front-line managers, because it adapts to different personas. The CEO can ask more high-level strategic questions like "What's our potential market share for our new product?" while a marketing director like Sarah can ask maybe more granular questions.

Again, we'll get into it. This is the magic and the promise, but we'll get to what it takes from an operational perspective and what it takes from a technology perspective. Let's sum up some of the benefits and what it means operationally.

As you can see, the gaps that I've described and the delays in traditional BI can easily be addressed by the new technology and capability. It's about democratizing data access. We are moving from a model where data access was a privilege – you needed to know SQL – to simple conversational intelligence like you would have when talking to another subject matter expert. Every employee can become a citizen data analyst.

It opens up a cluster of opportunities we've never seen before. Transparency and auditability – for example, you can ask how this measure was calculated. What is the source table? How did you come up with this answer? You can interrogate your agent to give you that detail.

Ultimately, we'll discuss other features that help build trust and how feedback loops and everything else improve your capabilities. This is the framework, but I'm going to pose a question to Alexey. With all of this power, what really happens to your traditional data team?


Alexey Utkin: It's a good question. All these capabilities – conversational intelligence and what we see on this slide – are certainly very powerful. It's kind of a new world. A question we always get is: "Do I still need my data team? Can the business just do it by themselves?"

On the right side of this slide, you see more of the operating model, which you still need to define and solve for, because these are organizational decisions that need to be embedded. When you're rolling out conversational intelligence capabilities, you need to figure out what happens governance-wise, who will be supporting it, and who will be tuning the model. We'll spend quite a bit of time today talking about how exactly that can be done. In other words, you still need humans in the loop – and you probably need the loop to be bigger, with more capable, more skilled humans, because AI doesn't solve the problem of you not having the skill or knowledge. Once you have the skills and understand what you're doing, it can accelerate, enhance, and amplify what you're doing for a much bigger benefit.

The people who were looking after your data, modeling, and doing the AI before will find new roles and new purposes in this new conversational intelligence reality. We will describe how exactly we see that happening.


Oleg Royz: Let's get into what is actually behind this. I heard the term "LLM." Maybe you can add some color to it.


Alexey Utkin: Sure. Let's maybe dive a little bit into all this magic Oleg was describing. How does it work? Of course, ever since the gen AI explosion started a little bit less than three years ago, no doubt there were many people who tried to bring LLM models and gen AI models into the context of their organization and their data.

You can think of them as a junior colleague who just joined the company. They might know many things about the outside world and have theoretical knowledge, but they don't know anything about your organization, your rules, what you call things, and so on. Many people learned from their first attempts to use LLMs with internal data that, outside of trivial use cases, there are many challenges and problems to overcome.

One is accuracy – hallucinations. Whenever the model doesn't find the right context or the right data, it will give you an answer that looks plausible but isn't actually correct based on the real information. There were, of course, challenges to do with access to data. There are stories where a copilot would find a dataset of payroll data and satisfy the curiosity of colleagues in the organization about compensation, which, of course, you don't want.

Other clusters of challenges were related to data quality and data lineage. If you bring the model to your data platform, it will find datasets that arguably are not fit for purpose – not high quality, maybe raw data – and it will happily give answers based on that data.

These early challenges largely belong to the past. But think conceptually about what is needed to make this work. On the left side here, you see the typical contents of your databases or data platforms: tables. Depending on your business, you have some data – users, products, orders. Then you may have some schemas, some descriptions, and so on, but that's not enough.

On the right side, you see all the different things that must be provided as context to the conversational functionality – to generate AI and language models – to make them work reliably, accurately, and trustworthy in the context of your business. You need things like data models, glossaries, and definitions. You need indications of the quality of data.

You may need more sophisticated semantic models and knowledge graphs to provide the context – we'll discuss those. You also need access controls, quality controls, and feedback loops. That's part of the discussion, just to give you an initial idea of what can be required.
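As a minimal sketch of that context idea – with entirely hypothetical names – a glossary or lightweight semantic model can resolve business terms to governed definitions before any model sees the question:

```python
# A minimal sketch of the "context" idea: a glossary/semantic model that
# maps business terms to governed definitions before any model sees a
# question. All table, column, and term names here are hypothetical.
GLOSSARY = {
    "revenue":     {"table": "sales", "column": "net_amount",
                    "definition": "Net of refunds and discounts"},
    "active user": {"table": "users", "column": "last_seen_at",
                    "definition": "Logged in within the last 30 days"},
}

def build_context(question: str) -> list[str]:
    """Collect glossary entries mentioned in the question, to be passed
    to the LLM alongside the raw schema."""
    return [
        f"{term}: {entry['table']}.{entry['column']} ({entry['definition']})"
        for term, entry in GLOSSARY.items()
        if term in question.lower()
    ]

ctx = build_context("What was our revenue per active user last quarter?")
print(ctx)
```

With this context attached, "revenue" stops being ambiguous: the model is told which table, which column, and which business definition to use.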


Oleg Royz: One of the questions that pops up in my mind is: if there's such a large proliferation of different models – open source, proprietary – how do leaders need to determine one from another? Can you share some thoughts on that?


Alexey Utkin: It's a good question. Indeed, it's really hard to follow the race and evolution of the models. If you only did that, it would probably be a full-time job in itself, because a new model gets released every few days, and they all have great capabilities.

But the truth is, when you bring this to the context of building a conversational intelligence capability, it's not so much about the model – you need to be able to plug in different models and versions, update models, and maybe bring in new providers. It is more about all this context, all the surrounding ecosystem, and this is what we'll be speaking about: how to enable it and bring it to the context of any model you actually choose to utilize.


Oleg Royz: That's great.


Alexey Utkin: All right. Let's make it a little bit visual and real for you. As promised, let's have a quick demo of what we are talking about here. For the demo, we chose to show it to you on Databricks Genie, one of the prominent platforms with these conversational intelligence capabilities.

In the interest of time, we'll focus on one platform, but we also do these things on Snowflake Cortex, AWS, GCP, and Azure, among other technologies. Of course, there are many bespoke providers that focus exactly on conversational use cases, like ThoughtSpot and others. We're not covering all of that in today's session, but we want to give you the general business idea.

Here, you will see the Databricks workspace. Let's first assume I am our data analyst, data ops, data engineer – this kind of data specialist person who first needs to set things up. The crucial part of this experience is the data catalog. The data catalog is where your data is described, and we have a use case of a grocery store. The types of data you see are customers, orders, sales, stock, inventory, etc. These are our datasets.

What's important is that you see, for example, descriptions of the tables. You see descriptions of the fields, which are pretty important because Genie or conversational intelligence needs this context to sustain the conversations with business, because they may be referring to these terms. There is functionality where you can actually use AI to help you generate these descriptions to speed things up.

For the tables, for example, there are things like lineage. You can see where data is coming from and from what sources. You can visualize these dependencies, including upstream and downstream dependencies. That plays a role in explaining how certain answers were derived. The indications of data quality, as we said, are pretty important – whether it's high-quality data and so on.

The version of Databricks we're showing you today is the free version. The paid version has more sophisticated governance features, like the ability to mask and filter data. Typically, a governance person would apply data masking to certain columns, whether it's personal information, etc., and use governance tags for that.

In general, this is how you set up your data and governance. When you switch to actual conversational intelligence – the Genie experience – you control what you see and don't see.

Here, you have what's called Genie spaces. You segregate each space for a certain type of conversation. Typically, organizations organize those around business domains. Let's first cover the configuration side, and then I'll show you the user experience.

In terms of configuration, you put a few things in here explicitly. You need to bring it in here from your data catalog – the data you want to be part of the conversation. That speaks to the fact that these types of capabilities are grounded in governance; they are tuned to produce accurate, trustworthy results. You explicitly choose the tables.

There are other things, like instructions for the model – for example, what role it should assume. Maybe you want to cover some terminology, business terminology, or part of your semantic model. You can cover certain formats, how to take information in, how to output information, and so on. Basically, similar to your ChatGPT or any other experience, you bring a lot of context and instruction, system prompts here, which alter the way the model will behave.

Certain other bits exist – for example, you can describe how different tables are related to one another. Genie will understand that these relations exist and use them in its analysis, and it can also run some of these joins more optimally.

Then there is the capability to define the queries that will be run for specific questions – if you want, for example, an optimal query for a certain question, or where Genie doesn't do what you want it to do, you can override what it's doing for certain queries. Then you have monitoring functionality, which I'll cover in a little bit.
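The configuration ideas just described – system instructions plus curated question-to-SQL pairs that override generation for known questions – can be sketched like this. The structure and names are illustrative, not the actual Genie API:

```python
# Illustrative sketch of a conversational-space configuration: system
# instructions plus curated question->SQL pairs that take precedence
# over generated SQL. Not the actual Genie API; names are hypothetical.
SPACE_CONFIG = {
    "instructions": ("You are a retail analytics assistant. "
                     "The fiscal year starts in February. Amounts are in GBP."),
    "example_queries": {
        "sales per quarter last year":
            "SELECT quarter, SUM(amount) AS total FROM sales GROUP BY quarter",
    },
}

def resolve(question: str) -> str:
    """Return a curated query when one exists; otherwise fall back to
    LLM-generated SQL (stubbed out here as a comment)."""
    curated = SPACE_CONFIG["example_queries"].get(question.strip().lower())
    return curated if curated else f"-- generated by LLM for: {question}"
```

The design point is the precedence: human-reviewed queries answer known questions deterministically, and the model only generates SQL for the long tail.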

Let's now see the user experience. What does the user see? Right away, you have the conversational interface, which is similar to what you normally have in the context of ChatGPT and so on. But it's of course plugged into the data you have. Right away, you see some of the suggested questions to get the users going with ideas of what they can ask. You can explicitly ask, "Hey, explain to me what data is there and what questions can I ask?" But let's start with the first question: "What are our sales during the last month?"

I'll cover a little bit of what the user interface shows you. A few important bits to pay attention to: First, the model actually plays back how it understood the question you're asking, so you can validate that it got it correct. That's one point of validation.

Then it runs the actual query. Depending on your technical abilities, you can see the query generated from the question you asked, which is helpful for data-savvy people and data ops people. There's a bit of a mix here because I'm an admin, so I see functionality, like instructions, that is normally seen by data specialists rather than end users.

That's the format of the output. You see the data output, you see the charts, you can download it, etc. But what's more important is to then have the feedback control. You can say, "Okay, it was correct, it's fine." The system will learn from this experience, and the ops team will see that it works fine.

You see follow-up questions. Let's now have another, slightly more elaborate example. I'm asking, "Can you show sales for the last year per quarter?" The model thinks and gives me my sales, but I don't quite like the chart. For those of you paying attention, it actually broke the data down by month rather than by quarter. The underlying data is quarterly, but the chart labels are monthly, so I didn't like that.

I can say right away, "Please fix it." I prompted it with, "I want to see this data with year and quarter labels." Right away, the model regenerates the answer, puts the labels in, and now I see those in the chart. I quite like that.

In general, this is the user experience you get on this side. Now, I promised to show you the monitoring. Coming back to my data specialist role, looking after it, I can go and see what's actually happening. I can see users asking these questions, having the conversations, and giving me feedback, whether something works or not. That gives you an idea of what needs to happen.

As a specialist, you can see what conversations were happening. I can see that a user asked for this format, which I quite like. I can look at the query and validate that it's correct. But what's more important, I can add this as an instruction. Once I add that, it becomes part of my rules, my semantic model. Next time, if someone asks the same question again, it will use this query and give them everything broken down by quarters.

Let's see. That happened. In terms of the user experience, I think that's really what I wanted to show. But in the next few minutes, I'll bring it back to the context of BI and dashboards, because I see conversational intelligence and traditional dashboards complementing each other.

This Genie experience actually now exists as part of the BI experience. Once you have your dashboards published, you can talk to Genie about the dashboards. You see, on the screen, you can click on a specific dashboard and ask your "why" question.

The user looks at it – here we have regions and revenue trends by region. I chose this chart, which focuses on the revenue trend, and I can ask a question: "Which region is the most important revenue-wise?" Now, it will answer and reason about this specific piece of data on the screen, supporting the analysis workflow. It figures out that the United Kingdom is the most important region based on the data you see here.

I hope it gives you an idea of the functionality. Maybe the last bit to mention functionality-wise is that a lot of users find this capability to find and discover the data particularly useful because unless you're a data specialist, you wouldn't know about all the 50 dashboards or 500 dashboards you have in an organization if you're not using those day to day.

You can do what's called a reverse lookup. If you want to find everything about a particular company, for example, you can ask about that client, and it'll give you data from different contexts around that particular company. We find this feature particularly useful for many business users.

With that, I want to conclude the demo.


Oleg Royz: Alexey, thank you. I mean, this is wonderful. I think you wore several hats: a governance resource, a data analyst, and a business user. Thank you for demonstrating the trends. Obviously, the functionality is amazing, and I'm glad to see that dashboards are not dead. But once you create a Genie domain, can it be connected – not as a standalone, but into a business or operational process, whether it's a chatbot or maybe a customer service agent? What are your thoughts?


Alexey Utkin: That's actually a great question, because I think this is one of the most interesting aspects of these conversational capabilities – they're completely embeddable. All these pieces can be broken out. You can embed them into your Teams or Slack chats. You can bring them into operational systems and applications for your customers. You can take individual features – for example, here you can copy a link to a widget and embed it in different places – and the conversational interface itself is accessible via API. Via the API, it's even more flexible: you can choose models and so on. It's built to be embedded.
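As an illustration of embedding via the API, the sketch below builds (but does not send) the HTTP request that opens a conversation. The endpoint path mirrors the shape of the Databricks Genie Conversation API, but treat the exact path and payload as assumptions and check the current API reference before relying on them.

```python
# Sketch of embedding the conversational interface via REST. The endpoint
# path follows the shape of the Databricks Genie Conversation API; treat
# it as an assumption and verify against the current API reference.
def start_conversation_request(host: str, space_id: str,
                               question: str, token: str) -> dict:
    """Build (but do not send) the HTTP request that opens a Genie
    conversation with an initial question."""
    return {
        "method": "POST",
        "url": f"{host}/api/2.0/genie/spaces/{space_id}/start-conversation",
        "headers": {"Authorization": f"Bearer {token}"},
        "json": {"content": question},
    }

req = start_conversation_request("https://example.cloud.databricks.com",
                                 "1234", "What were sales last month?",
                                 "MY_TOKEN")
```

From here, any chatbot, Slack app, or customer-facing application can hand the dict to an HTTP client, which is what makes the capability embeddable rather than tied to one UI.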

This is why I think the whole BI space will change. Previous generations of BI were one tool that was supposed to cater to everyone, but it never really did – it was good for IT specialists, not for everyone. I think we are at the forefront of the BI space being broken down into smaller parts: the base, which becomes the data ops bits, and conversational interfaces, for which you don't actually need to build dashboards. There will still be space for dashboards for the most high-value, visually rich types of analytics in organizations.


Oleg Royz: Thank you. I think this is a good demonstration of the features. But let's maybe transition to the next topic of what it actually takes to enable it. We covered a lot of operational angles, but let's maybe look at the technology aspect of it.


Alexey Utkin: Of course. After working on all these conversational intelligence and other data analytics projects, we came to this breakdown of the foundation – the things you need to take care of to make it work for your business. We formulated these three pillars.

The first is the data bedrock. This is your unified and governed data ecosystem, the platform, and solid foundations. The second pillar is how to actually get to the semantic models – basically, how to get to the MVP of your conversational intelligence, how to get started. The third pillar is the trust engine: how to make it work going forward, how to increase accuracy, how to build trust with business users, and how to scale it across the organization.

Let's examine these pillars one by one. The first one is data bedrock. It's largely your data platform and data infrastructure foundations, but with a few interesting aspects. We've broken it down into layers.

The first is the data layer. It's basically what we've been discussing with clients around modern data platforms. Typically, depending on your organizational needs, you have data warehouses, lakehouses, data mesh – and whatever platform choice you've made, it's still relevant. You need it. Of course, you need to provide data to the conversational intelligence. You need to be able to work with structured data and real-time data. You need to be able to bring in data from different systems. All of that is still important.

Then we have the governance layer, which takes on a new level of importance in all these conversational intelligence capabilities. That's where you see providers like Snowflake, Databricks, etc. shifting their focus. This is where they start to compete more and more.

As we said before, for LLMs to work in the context of your data, you need data lineage, you need end-to-end data quality validation. Otherwise it will use data which is not supposed to be used. You need things like data privacy so as not to leak private data in all these chatbots and copilots. You need, of course, access controls. If you're using data from many systems, you need these access controls to work in the entirety of your ecosystem. That's the increased importance of governance.

Then you have the yellow block, which is your traditional BI analytics capabilities for organizations. But more important is the red block, which is relatively new. These are all the things that are important to enable conversational AI or gen AI applications—things like vector stores, which enable you to index your data for semantic search.
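To give a feel for what a vector store contributes – retrieving by meaning rather than exact keyword match – here is a toy sketch. A bag-of-words cosine similarity stands in for learned embeddings, and the dataset descriptions are hypothetical:

```python
from collections import Counter
from math import sqrt

# Toy illustration of semantic search over dataset descriptions. Real
# vector stores use learned embeddings; bag-of-words keeps this runnable.
def vec(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

DOCS = {  # hypothetical catalog entries
    "orders":  "customer orders with order date and total amount",
    "stock":   "inventory stock levels per store and product",
    "payroll": "employee salary and compensation records",
}
INDEX = {name: vec(desc) for name, desc in DOCS.items()}

def search(query: str) -> str:
    """Return the dataset whose description is most similar to the query."""
    return max(INDEX, key=lambda n: cosine(vec(query), INDEX[n]))

print(search("how much inventory do we have per store"))  # stock
```

Note that the question never mentions "stock", yet the right dataset is found – which is exactly the discovery behavior the vector store enables for the conversational layer.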

There are things to support the existence of semantic models – or, for more sophisticated setups and needs, organizations may choose to develop knowledge graphs, which offer more flexibility and depth in describing the terms and relations that exist in their business domains.

The last bit, which is super important, is this ability to swap the models themselves. You're not solving for one model, which is a pretty essential part. This ecosystem basically wraps the model and bridges the data. You have governance. You have tools for interacting with the model.


Oleg Royz: There's a renaissance in data governance, because it seems to underpin a lot of these capabilities. But I have some questions, and I think a couple of them are coming from our participants and are relevant here. Thank you for posting those. Please keep them coming, and we'll address them towards the end.

But the first one is that, as we've talked about, there's such a huge velocity and volume of data. Do enterprises need to think about centralizing it or keeping it federated? How does it impact their AI capabilities?


Alexey Utkin: It's a good question. The ability to work with federated data is very nice to have for this setup, but it doesn't have to be there. In theory, you can bring the conversational intelligence experience to centralized data. But going forward – and we'll discuss some of this under advanced capabilities – you increasingly want to work with data that includes new sources and external data that doesn't have to sit inside your data platform. The ability to federate and govern data in a federated setup can be pretty important, or at least very nice to have, for your business.


Oleg Royz: The second aspect – and one of the questions that got posted – is about data integrity. It's important. How do you ensure that your data does not leak into the open, including your inputs, such as your prompts? How do you make sure people outside your enterprise can't see what you've been asking, and control who within the enterprise can see what? Can you talk a little bit about those different layers of data integrity?


Alexey Utkin: Super important questions. Data integrity has different facets. One is relating data to each other, which is super important and can be done at different levels; data relations matter a lot in the context of conversational intelligence. But regarding access to the data, this is where I think the platforms in the data bedrock come into play.

The providers, again, such as Databricks, Snowflake, etc., have a very strong value-add here. If you have your own on-prem setup, whatever the technology, and you run it yourself and bring models like Claude into it, you still need to solve for governance, security, and so on. This is where all these providers focus on solving it for you.

For example, let's take Snowflake. As part of their data catalog, they have access permissions on the internal and external data you expose to conversational intelligence. Every conversation is subject to all these controls. Even if you bring in external data, if a user cannot see that data, the model will not get it in its context and will not use it in the conversation. That's how you control against leaks.
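The principle can be sketched in a few lines, assuming entirely hypothetical users, permissions, and tables (this is not Snowflake's actual API): data a user is not entitled to see is excluded before the model's context is ever built.

```python
# Hypothetical per-user permissions and tables, for illustration only.
PERMISSIONS = {
    "analyst_eu": {"sales_eu"},
    "cfo":        {"sales_eu", "sales_us", "payroll"},
}

TABLES = {
    "sales_eu": [{"region": "EU", "revenue": 120}],
    "sales_us": [{"region": "US", "revenue": 340}],
    "payroll":  [{"dept": "R&D", "cost": 80}],
}

def build_context(user, requested_tables):
    """Return only the data this user is allowed to see; everything
    else is silently excluded from the model's context."""
    allowed = PERMISSIONS.get(user, set())
    return {t: TABLES[t] for t in requested_tables if t in allowed}

# The analyst asks a question touching payroll, but payroll never
# reaches the model because the catalog denies this user access.
ctx = build_context("analyst_eu", ["sales_eu", "payroll"])
print(sorted(ctx))
```

The key design choice is that access control sits in front of context assembly, not inside the prompt: the model cannot leak what it was never given.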

I just want to emphasize that this capability may matter more these days than the actual model you use, because you want to make it safe and secure for the business.


Oleg Royz: Thank you, Alexey.


Alexey Utkin: Right. Let's keep the pace going. I want to cover the second pillar, and the second pillar is really how you do the MVP—how you get to the place where you start having conversational intelligence. Here we break it into two parts.

One is that you need to build this semantic model. It may be named slightly differently by different providers and platforms, but basically you need to build a bridge between what is discussed in conversational intelligence – terms, definitions, metrics, KPIs, relations between data, etc. – and the data you have on your data platform. Historically, a lot of this happened in the BI world; BI tools had semantic layers.

One thing that has happened in the last five to seven years is that semantic layers have been pushed to the left – closer to data platforms – as organizations realized they had multiple BI tools and didn't want to duplicate these semantic layers.

There was always a push to externalize it and bring it closer to the platform. That's super important in the context of conversational intelligence: you want to set it up once, have consistent terminology, and so on. The exercise of doing this is very similar to what has long been done in data analytics and data engineering. You need to model the data specifically for LLMs to work with.

There are tweaks and caveats, but essentially it requires the same sort of skills and knowledge – data modeling, talking to the business, etc. For more sophisticated setups, you may need knowledge graphs. As I said before, they give you more flexibility and can have an edge with complex domains. But that's a bigger investment in terms of modeling.
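As an illustration of what such a semantic model captures, here is a minimal sketch. Every table, metric, and term name is hypothetical, and each platform has its own concrete format; the point is the mapping from business language to the physical data model.

```python
# Illustrative semantic model: business terms, metrics, and relations
# mapped to a (hypothetical) physical data model, so an LLM can resolve
# a phrase like "platinum customer" into governed SQL.
SEMANTIC_MODEL = {
    "entities": {
        "customer": {"table": "dim_customer", "key": "customer_id"},
        "order":    {"table": "fct_orders",   "key": "order_id"},
    },
    "metrics": {
        "revenue": {"sql": "SUM(fct_orders.amount)", "format": "currency"},
    },
    "terms": {
        "financial year":    "Starts 1 April and ends 31 March.",
        "platinum customer": "Customer with lifetime revenue above 100k.",
    },
    "relations": [
        {"from": "fct_orders.customer_id", "to": "dim_customer.customer_id"},
    ],
}

def lookup_term(term):
    """Resolve a business term to its definition for the model's context."""
    return SEMANTIC_MODEL["terms"].get(term.lower(), "undefined term")

print(lookup_term("Platinum Customer"))
```

Building this is exactly the "talking to the business" exercise described above: the definitions come from the business unit, and the mapping to tables and columns comes from data engineering.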

The second category of what you need to think about is adoption and best practices. First, you need to form a partnership with a business unit that has the right mindset, one that agrees to pioneer these capabilities together with you. One goal is to model the semantic layer properly: make sure you capture all the terminology, that the model knows what your financial year is, what your platinum customer segment is, what your cash-flow runway is. All these terms the business uses, you need to work with them to model.

The second is that you need to set expectations with this business unit. People who will use it should know it won't be a production-ready, 100% accurate system from day one. They're expected to work with you through feedback cycles and reviews to fine-tune, augment, and enhance these capabilities.

Then, the last bit: as you go further in building more sophisticated semantic models and knowledge graphs, there are tools and AI that can help. You also need to decide, once you've built the semantic models, who will look after them. What should the governance around them be? If you allow everyone in your business to change your semantic models, that becomes a quality-control issue: your results will be affected, and you don't want that. You need to figure out a governance model for the semantic layers.


Oleg Royz: Thank you. One of my questions – though I think you've answered it – was whether AI can be used to derive that contextual information; I think it can actually accelerate building the semantic layer. But what about trust? I think that's your third pillar.


Alexey Utkin: Yeah. Trust is, of course, at the core of it. This is actually why all these providers focus on the trust and groundedness of these models: it's easy to break trust, and then the business won't look at it again.

As I said before, the functionality to support feedback cycles and trust is critical – at the very top of importance for conversational intelligence capabilities – and I demonstrated some of it. There are other features in all these platforms. There are things like benchmarking and evaluations, where you build a library of model questions and answers, or queries, like unit tests. You run those continuously or periodically to make sure the system doesn't degrade in answer quality. Maybe the model changes, or quality degrades with some change to the semantic model or context. You need to monitor it continuously.
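Such an evaluation library can be sketched as follows. The golden question/answer pairs and the answering function are stand-ins for whatever the real system produces; the pattern is the unit-test style check run after every change.

```python
# Golden set of question/expected-answer pairs, maintained like tests.
GOLDEN_SET = [
    {"question": "total revenue 2024", "expected": "4.2M"},
    {"question": "active customers",   "expected": "1,870"},
]

def evaluate(answer_fn, golden=GOLDEN_SET):
    """Return the pass rate. Run periodically to catch quality
    regressions after model, semantic-model, or context changes."""
    passed = sum(1 for case in golden
                 if answer_fn(case["question"]) == case["expected"])
    return passed / len(golden)

# Stand-in system: a fixed lookup, as if answers came from the
# platform. The second answer has drifted, simulating a regression.
canned = {"total revenue 2024": "4.2M", "active customers": "1,900"}
rate = evaluate(lambda q: canned.get(q))
print(f"pass rate: {rate:.0%}")
```

In practice the golden set grows out of the verified queries the business signs off on, and a drop in the pass rate is the signal for data ops to investigate before users lose trust.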

Again, we already talked about the governance model for semantics. You increase the number of verified queries, which you define. With all this, you build trust.

The guardrails can come in different forms, but in the context of the demo I showed, a lot of them can be part of the instructions. For example, you specifically instruct the model: if it's unsure, if it cannot directly answer the question with the data available, tell the user, "Look, I don't have this term defined," or something like that. Stay grounded instead of trying to hallucinate and be creative. A lot of that is configured upfront, but you also learn from the real experience of business users working with the data ops people.

The last bit here is that you need to monitor and optimize the system going forward. A few things. One, of course, is that the LLM can generate queries that are not optimal. You need to monitor your compute costs for certain queries, and you might need to rewrite those as more cost-effective queries.

There is also another bit, which is token usage. LLMs consume tokens, and everything you provide as context – all of your queries and your semantic model – counts against that. You need to monitor token usage. If it explodes, your data ops people need to look at reducing the context: maybe bringing down how many tables you have in the semantic model, how many columns you submit to the model on each call, etc. There are optimization techniques you need to apply so that it doesn't cost you a fortune.
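A rough sketch of token-budget trimming, with hypothetical table schemas: the four-characters-per-token estimate is a common rule of thumb, not an exact tokenizer, and real systems would rank tables by relevance rather than keep them in order.

```python
def estimate_tokens(text):
    """Rough token estimate: about four characters per token."""
    return max(1, len(text) // 4)

def trim_context(tables, budget):
    """Greedily keep table schema strings until the token budget is
    reached; data ops would review what gets dropped."""
    kept, used = [], 0
    for schema in tables:
        cost = estimate_tokens(schema)
        if used + cost > budget:
            break
        kept.append(schema)
        used += cost
    return kept, used

tables = [
    "orders(order_id, customer_id, amount, placed_at)",
    "customers(customer_id, name, segment, region, signup_date)",
    "web_events(event_id, session_id, url, referrer, ts, user_agent)",
]
kept, used = trim_context(tables, budget=25)
print(len(kept), used)
```

The same accounting applied per conversation is what surfaces exploding token usage early, before it turns into a cost problem.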


Oleg Royz: Let's look into the future and see how we go beyond conversational intelligence. What are the different flavors?


Alexey Utkin: That's a great point. We really see three clusters of capabilities there. The first one – "talk to data" – which we described at length and demonstrated today, is designed to be grounded, trusted, and to work with specific data. It's very controlled.

Then there are other emergent capabilities, more powerful at research, reasoning, and multi-hop answers, which can potentially look at different sources, including external data, combine them with internal sources, and so on. One way to frame it: "talk to data," like BI, largely works with the data you have on your data platforms, which is mostly historical or current data. It's not future data, not predictions, not projections. The second category, reasoning, can make projections and be more creative and more powerful. But on the other side, it's much more difficult to govern and to ground. The user is expected to check all the inputs, the reasoning, and the plans, and make sure they're actually valid for the research they're getting.

In the context of Databricks, this capability comes in the form of Genie Deep Research. In Snowflake, it's what they announced at the Snowflake Summit in June—Snowflake Intelligence. We see evolution going this way.

The third cluster is agents, and agents are really twofold. One is how agents can be applied to conversational intelligence. For example, for our second and third pillars – building the semantic model and then building trust – I think in the future some of these tasks, activities, and motions will be picked up by autonomous agents. Maybe semantic models will be largely defined by agents, but specialists will still need to review them. This is where the evolution goes.

I think agents are exciting. Although the industry hasn't really solved all the challenges with them yet, this is the direction in which it will evolve.

The other aspect of agent work is that agents are the thing of the year, or maybe the decade, depending on your time horizon. Conversational intelligence supports these agents in two ways. One, you can expose your conversational intelligence capabilities, for example Genie, as a tool for an agent you build externally. You might have a Claude desktop agent and expose Genie spaces as a tool via the Model Context Protocol (MCP) or an agent communication protocol. In this way, an external agent you're building suddenly becomes able to talk to your data and reason about it as well.
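The tool-exposure idea can be sketched generically as follows. This is not the actual MCP SDK; the registry, tool shape, and endpoint are hypothetical stand-ins for however your agent framework registers and invokes tools.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    """A tool an external agent can discover and invoke."""
    name: str
    description: str
    run: Callable[[str], str]

def governed_data_query(question: str) -> str:
    # Stand-in for a call to a governed conversational-analytics
    # endpoint (e.g., a Genie space behind access controls).
    return f"[governed answer to: {question!r}]"

REGISTRY: dict[str, Tool] = {}

def register(tool: Tool) -> None:
    REGISTRY[tool.name] = tool

register(Tool(
    name="talk_to_sales_data",
    description="Answer natural-language questions over governed sales data.",
    run=governed_data_query,
))

# An agent framework would list the registry, pick a tool by its
# description, and invoke it with the user's question:
print(REGISTRY["talk_to_sales_data"].run("top region by revenue"))
```

The point of the pattern is that the agent never touches the data directly; every question flows through the same governed, access-controlled endpoint the chat interface uses.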

We see this as pretty powerful. The other bit that supports this is the first pillar, the data bedrock – the red blocks I mentioned: vector stores, semantic models, knowledge graphs, etc. All these capabilities are also super important for building agents, because you solve the same challenges. You need to build trust, you need to evaluate, and you need to monitor.

In this way, it's all foundational. If you invest in conversational intelligence today, you will certainly use it to build agents, maybe tomorrow.


Oleg Royz: Thank you. I think we are ready for Q&A. Let me just remind you that you can post your questions in the comments. Again, if we run out of time, please feel free to connect with Alexey and me. Our emails are at the front, or you can scan the QR code and connect with one of our experts.

As you guys post the questions, let me recap some of the key messages. Hopefully, you see the opportunity of conversational intelligence. But as we've highlighted, it's not magic. It's the outcome of solid data discipline. It starts with your data foundations and then the importance and emphasis on curation, building trust, and then, through conversational intelligence, unlocking additional agent capabilities.

If you're thinking about how to start – and again, we do this for a lot of companies – it starts with your data readiness. It's always important to start small. If you're building an F1 car, you don't go straight to racing: you put it in a wind tunnel, you take it for test laps. Likewise, you pilot this with a small group and then scale it out.

Ultimately, we see this as an opportunity for transformative competitive advantage for enterprises that can harness it. Your data already knows the answers. The question is, are you ready to listen?

With that, let me go into our queue of questions. Since I mentioned the word pilot, one of the questions asks: how long does it actually take to set up a pilot for conversational capability within one domain? Let's say it's finance.


Alexey Utkin: Let me take a stab at it. The honest answer is that it depends – on whether you have the data foundations and the platform in place, and whether you use Snowflake, Databricks, or neither.

But if you are in a good place in terms of your data foundations, what we see with our clients is that you can get a good working pilot – where teams start talking to the business about it and running feedback cycles – sometimes in a matter of one to three weeks.

Of course, rolling it out beyond the first working group can take one to three months, depending on the size of the domain. You still want to capture the topics and use cases, and you still need objectives and targets for what you want to see and the types of questions you want to answer.

But yeah, I think it can add value relatively quickly, at least compared to some of the traditional BI activities.


Oleg Royz: Thank you, Alexey. The second question probably goes to you as well. Can semantic models be generated via a fast-track process – for example, by absorbing a traditional BI dimensional model from a trusted platform?


Alexey Utkin: That's a great question, and it's the right type of thinking, because in a world where AI is ruling the conversation, you want to apply automation to everything. The short answer is yes. It depends a little on which BI tool and which semantic model, but in Snowflake, for example – again, an announcement from this summer – you can plug in your Tableau, extract the semantic layers and metrics, and bring those into Snowflake's semantic models.

So it's the right type of thinking, even if it's not yet available for your combination of data platform and BI tool. We actually have quite a lot of initiatives where we apply AI models as they are in bespoke contexts. It takes a bit more effort, but we get quite an amplification of effort on such tasks.


Oleg Royz: The next question is maybe a little provocative. It touches on data engineering and the future of data integration. Is data integration still important?


Alexey Utkin: Yeah. To me, if anything, it's more important. Of course, AI helps with all of this – modeling, data engineering – and we use it in our own data work. But these capabilities only grow the demand. We're not talking about removing dashboards; we're talking about the importance of having all the right foundations for AI – data governance, integrations, lineage, etc. – which is only growing.

As engineers, we solve for efficiency in delivering this to the business, but the need is there, and you still need specialists. One perspective I take on that: if you look broadly at AI adoption in the world, the hot spot is AI for engineering.

Why is it so? Because engineers are well-suited to deal with imperfections of AI today. It can give inconsistent answers, etc. Engineers have dealt with it for many years. In this way, engineers are more empowered with AI tools. But that's a topic for another conversation. But you still need all the skills and knowledge, and knowing what to do.


Oleg Royz: Thank you. From my perspective, which is maybe less hands-on technical, I do see platforms such as Databricks and Snowflake building capabilities that blend the roles of the traditional data analyst and data engineer, so you can define data transformation logic without being too technical. That's where AI comes together: the translation of business rules into technical scripts is being supplemented by the latest AI capabilities.


Alexey Utkin: Just to add to that: the platforms you mentioned – Databricks and Snowflake – both released declarative data engineering pipelines inside their platforms this summer. I think that in the future, data-savvy business users will be able to source and bring some data into their analysis using conversational interfaces.

Again, I think that will just add to what's already there and what they need. It's about building, enhancing, and extending the use of data. And I welcome it, because the whole data industry needs to be aligned with the business, responsive, and able to bring things to business users quickly.


Oleg Royz: Thank you, Alexey. As we're approaching time, I want to thank all of our audience members. Thank you for your questions and for your time. We look forward to continuing the conversation offline. Please reach out to us or contact our lab at dataanalytics.coordination@dataart.com.

Just to wrap up: we think this capability can be transformational, and we look forward to collaborating with our clients and partners on delivering these insights and innovations to you.

Thank you for your time. Have a great day.


Alexey Utkin: Thanks, everyone. Thank you for joining.
