Episode 16: We Need to Talk About Data
In this episode, Anni sits down with Alexey Utkin, Head of Data and Analytics Lab at DataArt, for a candid conversation about what's really going on in the world of data. They talk about the myths companies cling to, the real value behind the buzzwords, and what it takes to move from dashboards and demos to actual business outcomes. GenAI makes an appearance, but so do the unglamorous truths about governance, context, and why your "data strategy" might just be wishful thinking. And! Alexey shares his unpopular opinions on the topic and wants to hear yours!
Key Takeaways
✓ Historical Context: Legacy systems, regulatory compliance, and data silos have long challenged organizations, making data governance, data quality, and foundational infrastructure top priorities in the early days of digital transformation.
✓ Current Landscape: AI and generative AI are reshaping data analytics, with businesses focused on open data formats, data contracts, semantic layers, and knowledge graphs. Key challenges include data quality, adoption gaps, and aligning technology with business outcomes, while concerns around data privacy and energy consumption are rising.
✓ Anticipated Trends: Expect broader AI integration for data engineering and business use, but adoption will be gradual due to ongoing governance and validation challenges. Data literacy, AI-powered education, and sustainable AI will become more important, with a likely period of disillusionment before AI delivers reliable, scalable data solutions.
Transcript
Anni Tabagua: Welcome to BizTech Forward, your go-to podcast for cutting-edge insights at the intersection of business and technology. Let's move forward together. Welcome back to BizTech Forward. I'm Anni, your host. And today, finally, a topic that is nearly impossible to avoid: data. Data has to do with everything now. How we make decisions, how products get built, how companies compete.
But with all this data flying around, it feels like we're still just scratching the surface. So today, I'm joined by someone who spends his days right in the middle of it all: Alexey Utkin, Head of the Data and Analytics Lab at DataArt. Alexey, hi. Welcome to the show.
Alexey Utkin: Hello, and thank you for having me.
Anni Tabagua: I want to introduce our guest briefly. And I must say, Alexey, I thought your LinkedIn bio was outstanding. If it were up to me, I would just read it out loud. But here is the gist: Alexey got into the world of fintech and data over 15 years ago, the combo that, as he himself says, most would call boring.
Throw in things like mandatory regulatory transaction reporting for derivatives, and you get the picture. But with a party animal alter ego, he's made it his mission to shake things up: fighting legacy systems, unlocking data, and moving fast with smart tech and sharp teams. I'm so happy to have you here, Alexey. Again, welcome to the show.
Alexey Utkin: Thank you for bringing up this LinkedIn bio. And by the end, I think some of the angles of it will get into our conversation for sure.
Anni Tabagua: Perfect. Alexey, I want to start with breaking the ice a little bit, and I want to ask you: day to day, are you personally tired of all the data?
Alexey Utkin: No, actually, I'm tired of some aspects of it. So, as you see, it's in my bio, which was written some time ago. Data on one side is super exciting and enables great outcomes, understanding of clients, optimizations, automations, and great decisions. All this kind of great stuff.
But internally, on the back end, it is tiring. There is a lot of effort and work to be done, a lot of complexity, and a lot of inaccuracies. All these things are tiring, and very few people are excited about them. So, in a way, it is kind of tiring, but there is hope.
Anni Tabagua: Okay, there is hope. I also have high hopes for my ability to ask the questions. So the first would be: Alexey, in your world, where not a lot of people are excited about data, what are people actually talking about right now? What is keeping them up at night, if anything? Or what is something that still gets them excited?
Alexey Utkin: Yeah, and that really largely depends on where these people sit and what they do because, for outsiders, data is this kind of space where data people exist, but inside they do lots of different things. There are data architects and engineers who sort out all the data preparation to make it available to be used.
There are also data analysts, data scientists, and data consumers who get value out of it. There are also operations and governance people who keep it safe, compliant, and so on. So it largely depends on where people sit.
In the world of data architecture, technology, and engineering, some exciting things are happening. Many people discuss open table formats, which let you work with data across different platforms and don't lock you into each individual platform's proprietary format. This allows you to share and access data across different tools and technologies.
Some technologists are excited by that. There are also things like data contracts. Five years ago, the term "data mesh" was the big story in the industry. It's all about having data distributed and decentralized, with teams working with data across the company rather than within one central function. And that concept carried over into many things.
One of those is data contracts, which allow you to explicitly define your data products and how to work with them, and capture that in a common format. So again, they're easily interchangeable: you can push these data products into different platforms and ecosystems, and people can use them.
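To make the idea concrete, here is a minimal sketch of what a data contract might look like if you wrote one down in plain Python. The product name, columns, and freshness SLA are invented for illustration; real contracts are usually richer and are often expressed in formats like YAML or managed in a catalog tool.

```python
from datetime import datetime, timedelta, timezone

# Illustrative data contract for a hypothetical "orders" data product.
# Field names and the SLA are assumptions, not a specific industry standard.
ORDERS_CONTRACT = {
    "product": "orders",
    "owner": "commerce-data-team",
    "schema": {
        "order_id": str,
        "customer_id": str,
        "amount_usd": float,
        "ordered_at": datetime,
    },
    "freshness_sla": timedelta(hours=24),  # data must be at most a day old
}


def violates_contract(row: dict, contract: dict) -> list[str]:
    """Return a list of human-readable contract violations for one record."""
    problems = []
    for column, expected_type in contract["schema"].items():
        if column not in row:
            problems.append(f"missing column: {column}")
        elif not isinstance(row[column], expected_type):
            problems.append(f"{column} should be {expected_type.__name__}")
    ordered_at = row.get("ordered_at")
    if isinstance(ordered_at, datetime):
        if datetime.now(timezone.utc) - ordered_at > contract["freshness_sla"]:
            problems.append("record is older than the freshness SLA")
    return problems


if __name__ == "__main__":
    stale_row = {
        "order_id": "A-1001",
        "customer_id": "C-42",
        "amount_usd": "19.99",  # wrong type: string instead of float
        "ordered_at": datetime(2024, 1, 1, tzinfo=timezone.utc),
    }
    print(violates_contract(stale_row, ORDERS_CONTRACT))
```

The point of writing the contract down explicitly is that both the producing team and any consuming platform can check the same definition, which is what makes the data product portable.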
So this is the technology aspect. But then, on a more commonplace business side of things, a lot of people discuss AI, of course. There are different facets of how AI touches data. And of course, you need a lot of quality data for AI. We will talk about this, but also there is a big potential of applying AI to the space of data on its own, and what it can or cannot do. It's being discussed every single day. Some things are already there. I see people discussing semantic models, semantic layers, and knowledge graphs every day. These are ways to shape your data so that your AI works better.
So this is another topic. And then, ultimately, there is the eternal topic, which is, okay, you built all this technology, built all these dashboards and programs, but does it actually work in the business? Does it change people's lives? Is there adoption? Is there any impediment to adoption for people to use all these data analytics products, to make decisions, to have automation really applied to appropriate business outcomes?
And oftentimes, there is still a gap between the technology, the delivery of dashboards and insights, and people actually starting to use them. That, again, is an eternal topic of conversation in this space.
Anni Tabagua: From your experience working with clients, what data-related topic comes up most these days? Would you say it's AI or gen AI, or is it more data privacy and compliance stuff like that?
Alexey Utkin: It's interesting. Yeah, it goes in waves, I think, year to year. Before generative AI, we certainly saw a spike in security, compliance, and governance, which is still there because, if anything, AI only raises the importance of governance: now you also need to govern the AI, and you need to govern all the data going into AI, including customer-facing AI.
This makes the challenge even more complex. But these days, we certainly have a strong stream of conversations around AI in its different aspects. It ranges from putting AI in front of the end user, the data consumer: things like talking to your data, or giving users the ability to build their own visualizations and dashboards, using AI to make use of data and get insights.
That's often talked about. But increasingly, we also see conversations around applying AI to the data work itself, what people call AI for data: the things that typically need to be dealt with in the data space, like data integration, data validation, monitoring, operations, data modeling, data transformation, and building data pipelines.
So this is what I call AI for data. We see an increasing number of conversations around it, at least at this stage, although I keep seeing a big gap, a misalignment of expectations versus reality. That said, that might be prevalent in any application of AI to engineering.
Anni Tabagua: I want to stay on this topic of clients for a moment. I wonder, so when you have your first meeting or when clients first describe what they want and when they come to us for help with data, specifically, what is the most common misconception that they have?
Alexey Utkin: There are certainly a few. Among the clients I see, I would name a couple. One: we often get into these technology conversations, one reason perhaps being that DataArt is largely a technology company. So we often have this conversation about, okay, let's modernize, migrate, build a new data analytics platform, move to the cloud, and so on.
For many clients, these programs do have very specific benefits, especially depending on where they're moving from. So if they're paying top dollar for their legacy on-premise infrastructure, which is not only limiting them but also costing them a fortune, that may be a good thing to explore anyway.
But sometimes, this technology effort really hides the more fundamental need to examine how the organization should use data. What's our opportunity in two, three, or five years' time? Where should we be with data, in terms of what we should be using it for, and for what purpose?
To make our decisions and automate processes, we need to know where we are in terms of organizational readiness. Do we have people who know how to use it? Is data leadership buying into making those happen? Is there a training program, adoption program, etc., etc.?
Sometimes, in some client organizations, there might be strong technology leadership but not very strong data leadership on the business side. And that can lead to programs, say a data transformation or a data platform modernization, that are successful from a technology perspective but struggle to deliver the ultimate benefits to the business. So that's certainly one of the pitfalls: underappreciating the organizational and business side of things.
Another one, and there are a few more, has to do with AI: let's start doing AI without having the proper data foundations. Sometimes you can still argue that's a reasonable way to start, but I think that if you're serious about going down the data, analytics, and AI track, you really need to start thinking about the foundations, because otherwise the demands in this space can be explosive.
Tomorrow the demand can be ten times what it is today, and the day after tomorrow, a hundred times. So you need to be a bit ready for what comes, because your appetite grows quickly.
And then sometimes we might see a client come and say, okay, we sit on top of piles of data, and we want to monetize it. There are wonderful stories in business magazines about companies monetizing their data, so they should also have a new revenue stream; what could be better? And it's all great. But again, it's a similar thing: there are a number of foundational pieces, for example a data-as-a-product paradigm, that must be in place for any monetization, internal or external, or for any sufficiently valuable use of data.
And yeah, I think these misconceptions usually involve trying to jump too many maturity stages of data, analytics, and AI capability in one leap. That's my take on it.
Anni Tabagua: Right. And specifically, when clients struggle with data quality or data governance, what is usually at the root of it? Is it a lack of ownership? Is it, like you said, wanting AI before they fix basic data hygiene? Is it unrealistic ambitions, I wonder?
Alexey Utkin: Yeah. No, that's a hard one because it is one of the hardest things on the path to adoption. There are a number of typical problems related to data quality and governance. One is that no one really wants to do it. There's an asymmetry of effort versus benefit. Some people have to deal with it, and other people benefit from it. That's commonplace in any data governance, data quality-related thing.
Anni Tabagua: Can I ask you for an example of this? I really like that you said it so directly, that nobody wants to do it. Can you draw a parallel to an example so that I can link it to something from real life?
Alexey Utkin: Yeah. Like, look, I assume you do some online shopping, right? If you're like me, you get frustrated every week, several times, when you get something that's not what you expected. So, with all this product catalog data, etc., and inventory data, I ordered something. I expected it tomorrow, but it will come in three weeks because the data is inaccurate.
So someone has to deal with it, clean it up, look after it, etc. Typically, it's not the person who's actually selling it to you, not the commercial leads, or those who know how much they sold of the stuff. It's someone deep in the organization who has to assume responsibility for actually looking into this and making sure the data is accurate and correct. So that's just one example, but it's prevalent in this space.
Anni Tabagua: That actually does sound very unexciting. Now I get it, now I get it. Okay. So that's the first data quality and governance struggle: nobody wants to do it. What else?
Alexey Utkin: Yeah, nobody wants to do it. And then there's the asymmetry of the benefit. Who benefits from accurate data? You, as a client, and maybe someone who attributes your purchase to their success. But the person who has to do the work typically sits somewhere else in the business, looking after some internal system.
Their performance is not directly linked to how much you buy, so they are not directly connected to the benefit. And that's part of the problem. And again, there are a lot of other complexities, such as systems changing, data changing, data providers changing, and so on, which add to it.
Many organizations are also not ready to get on the path of data governance and quality because they lack some basic definitions. Data quality has many dimensions. People think it's just about whether the numbers they look at are right or wrong, but there are many other things: the number I'm looking at is a week old, or it shouldn't even be a number. There are so many aspects that need to be captured to deal with data quality issues properly.
But again, there is... I want to take a positive spin on it. I think this is one of the areas where I actually get excited about AI. All these issues with data quality, data governance, etc., have existed for quite a long time now, if not forever. But I think this is one of the areas where AI will actually make a big difference because it really will shift what is possible at the right level of effort for organizations to achieve in terms of governance and data quality, etc. So, to have it in place without spending many millions to achieve this goal.
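As an aside, for readers who want to see what "many dimensions" means in practice, here is a tiny sketch of a few independent checks (completeness, validity, uniqueness, freshness) over a toy inventory snapshot. The records and rules are invented purely for illustration.

```python
from datetime import date

# A toy inventory snapshot; the values are invented purely for illustration.
records = [
    {"sku": "TSHIRT-M", "stock": 12, "price": 19.99, "as_of": date(2025, 6, 1)},
    {"sku": "TSHIRT-M", "stock": 12, "price": 19.99, "as_of": date(2025, 6, 1)},  # duplicate
    {"sku": "MUG-BLUE", "stock": -3, "price": 8.50,  "as_of": date(2025, 6, 1)},  # invalid stock
    {"sku": "POSTER",   "stock": 40, "price": None,  "as_of": date(2025, 5, 1)},  # missing price, stale
]

report = {
    # Completeness: how many rows have every field populated?
    "complete_rows": sum(all(v is not None for v in r.values()) for r in records),
    # Validity: stock levels should never be negative.
    "invalid_stock": sum(r["stock"] < 0 for r in records),
    # Uniqueness: the same SKU/date pair should not appear twice.
    "duplicates": len(records) - len({(r["sku"], r["as_of"]) for r in records}),
    # Freshness: rows older than the latest snapshot date are suspect.
    "stale_rows": sum(r["as_of"] < max(x["as_of"] for x in records) for r in records),
    "total_rows": len(records),
}

print(report)  # each dimension can fail independently of the others
```

A record can be perfectly "correct" on one dimension and still be useless on another, which is exactly why "the numbers look fine" is not the same as good data quality.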
Anni Tabagua: Right. We did touch on gen AI briefly already, but I want to spend a bit more time here. I mostly monitor media requests, so I see what journalists are interested in, and generative AI is constantly changing the headlines; you can hardly find a headline that doesn't have generative AI in it.
And I wonder, is it changing everything, or is it just changing the headlines? With data, is it just adding another layer to an already very complex area? Could you comment on that?
Alexey Utkin: There are many facets to this question, really. I think in the long run, generative AI, and AI in general, has huge potential, especially in the data area; I have no doubt about it. But at the same time, of course, as everyone sees, there is a lot of hype and there are misaligned expectations about what it can realistically do today.
And about what to expect. I think if we try to be realistic about gen AI today, we see emerging potential for supporting specialists: engineers, data analysts, those who know what they're doing. For them, the tools are emerging and already helpful. According to the estimates, the productivity gain can be between 10% and 20% overall.
Again, it depends on how these tools and models are used. There is also a cost, not only the dollars paid for model licenses and compute usage, but also the cost of choosing and monitoring all these models and making sure you're using the right one for the right type of task.
I see this potential developing. One problem here is that sometimes, at conferences or in the media, you hear that AI does everything five times faster, ten times faster, a tenfold productivity gain, and so on. Most of what I look at might prove that only for a very small part of the overall workflow and task.
And you still have to deal with everything else. So, for example, gen AI works out of the box most of the time in the sense that it gives you something. If you go to ChatGPT and give it a prompt, it responds, so it seems it doesn't need data. And I think many people initially thought, okay, with gen AI, we don't need data.
Soon after, they learn they still need data, because they want it to behave in their organization's specific context and deal with the documents they possess. So part of it is how you bring that data in. But then, instead of using a big volume of data to train the AI, you still need a substantial volume of data to validate it, because otherwise you cannot really use it in production or in any meaningful use case.
So, I think this validation, making sure the AI is doing what you're comfortable with, is still very early. To be honest, a lot of what we see is great use cases that seem to work in demos or prototypes. But if you're really thinking about letting AI run a certain business process, a big part of a critical workflow, it is very challenging to make sure it all works all the time.
I think this will be a big part of the coming years, too. So, yes, I think the potential today is largely in supporting specialists, because they can handle the validation part and make sure what the AI does is safe and right. In terms of AI facing an end user, I think it will remain quite limited for some years yet.
Critical use cases, high-risk use cases, and so on will be difficult to push to production, at least until this validation part is solved at some new level. So that's what I see. But yeah, I think the potential is huge in the long run.
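To picture the validation gap Alexey describes, here is a bare-bones sketch of the kind of harness a team might run before trusting a generative feature: score the model's answers against a held-out set of questions whose answers humans have verified. `ask_model` is a hypothetical stand-in for whatever model call you actually make, and the questions, answers, and threshold are invented for illustration.

```python
# Minimal sketch of validating a generative feature against known answers.
# ask_model is a hypothetical placeholder for a real model call (API, local model, etc.).

def ask_model(question: str) -> str:
    # Placeholder: in reality this would call your LLM of choice.
    canned = {
        "What was Q4 revenue?": "4.2M USD",
        "How many active customers do we have?": "approximately 13,000",
    }
    return canned.get(question, "I don't know")


# A held-out validation set: questions paired with answers verified by humans.
validation_set = [
    {"question": "What was Q4 revenue?", "expected": "4.2M USD"},
    {"question": "How many active customers do we have?", "expected": "12,870"},
    {"question": "Which region grew fastest last year?", "expected": "APAC"},
]

passed = 0
for case in validation_set:
    answer = ask_model(case["question"])
    # Real checks are usually fuzzier (numeric tolerance, judge models, etc.);
    # exact string matching keeps the sketch simple.
    if answer.strip().lower() == case["expected"].strip().lower():
        passed += 1

accuracy = passed / len(validation_set)
print(f"{passed}/{len(validation_set)} answers matched ({accuracy:.0%})")
# A team would typically set a threshold before letting this near a critical workflow.
```

Building and maintaining a validation set like this is itself a data problem, which is part of why "we don't need data for gen AI" rarely survives contact with production.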
Anni Tabagua: Right? Alexey, is there a lot of talk among clients about code written by AI these days? And if so, do you have strong opinions on this?
Alexey Utkin: I have some. Yeah, I think this is part of the conversation, and what I said in my last remark applies heavily here. Many clients expect AI to do a lot of the engineering, the coding, and so on. As I said, what I see now is, realistically, a 10-20% productivity gain, and that's comparable to earlier shifts in tooling.
One of my colleagues compares it to ten or fifteen years ago, when there was a big adoption of visual IDEs, the development tools that engineers use to code. They often have completion of your statements and so on. That was a clear productivity gain, maybe even more than 20%, for developers who used these tools compared to those who didn't. So it's not that nothing like this has happened before in coding and engineering; the advancement of tools has always been a big source of productivity gains.
I think of AI along these lines, from what I see today. I have some hands-on experience, and some of my colleagues do as well. I think the demand for strong specialists and experts is arguably even higher, because coding agents, copilots, and so on give you code, but they don't really think for you in terms of architecture, design, all these things. You still need to know very precisely what you're going for and be able to validate their outputs. That's certainly what I think, although there are some niche cases where AI really does give engineers more productivity than before. For example, working with documentation: if you're using a library or framework with 500 pages of documentation, especially for the first time, for sure.
Having an AI that can talk to this documentation is rather helpful. Yeah. So we see these things. But I'm also a little bit worried here about job satisfaction for engineers, because this might bring a 10 to 20% productivity gain at the cost of a huge reduction in satisfaction for the experts. Instead of doing the creative work, where you build something and it works, etc., etc.
You end up cleaning up the mess after what the AI has generated, and you're doing it all the time because it generates the mess on steroids. So that's something to think about as well. To a certain degree, I think that's something we need to deal with in the long run as well.
Anni Tabagua: Oh, Alexey, the more you talk, the more follow-up questions I have. And it's really interesting, really important. And I have a million more questions. But I'm going to try to speed it up a bit, a bit of a detour, but I really want to ask, we might edit it out because I'm not even sure why I'm asking, but I'm very curious.
Remember that blackout in Spain at the end of April? And in Portugal, so a blackout across two countries. I wonder, I never even thought about AI and electricity in the same sentence and context. But can AI actually mess with power grids? Could a blackout like that have anything to do with how much we rely on AI and data?
Alexey Utkin: I think it's connected in a few ways, really. One, the risks related to AI are pretty big. Using it for unethical things like hacking or damaging democracy and public opinion, those are very, very real concerns. I'm not a hacker, but if you ask me whether AI can help hackers hack power grid systems, I think it potentially can. Potentially, yes. So that's one scary thing.
But people are also realizing that AI, in its current form, is very power-hungry. Some say that if the current bottleneck of chips like Nvidia's is removed, the next one will be actual power, and even now, a non-trivial percentage of the world's energy is spent on AI.
I think very few people realize this, and it's a long-term challenge. Coming back to your previous question about AI for engineering and coding, maybe this energy question is another facet of it. Sometimes you see client expectations that look like: we source this data from this data provider, we shape it in this way, and we deliver it to our data consumer.
And they think, okay, the shape of this data that comes in can be complex, so I have two options: I either get a data engineer to deal with this complexity and write code around it, or I just put some magical AI there that will deal with it for me. And it makes a big difference whether you're using AI to help that engineer, which results in code that still runs on normal computers and machines,
or whether you actually have AI in production there, churning this data as it comes in and shaping it. In that case, at the very least, you're paying orders of magnitude higher prices for compute, energy, and everything else. So, if we end up replacing the software that runs in production now with AI agents, the electricity bill for the planet will go through the roof.
So that's something to keep in mind as well.
Anni Tabagua: Oh, okay. I see. So it wasn't a silly question after all. It makes sense. AI is very power hungry.
Alexey Utkin: It is, it is. Again, we're at the very beginning of this. Countries are trying to figure out their energy strategy, because it's no longer based only on how many calories we need and what temperature of heating we want in the winter. It's also about what your AI will be going for, which may play a role.
And that's very much beyond my area of expertise. But sometimes I think about which countries will be in the AI game in the next-generation global world, and possibly those with cheaper energy will be in a better position for it.
Anni Tabagua: And speaking of next generations and all things next, I'm jumping a bit here, but where is this all headed, do you think? What is the next thing in data for the coming one or two years? Is it new data, better tools, new skill sets, or something else entirely?
Alexey Utkin: So, let's start from the data consumers, right? My prediction is that they probably won't be adopting AI-powered copilots, the things that promise to give them insights at scale, very aggressively in the next couple of years. There are a few reasons why I don't think it will happen.
One is that I don't think having simple dashboards is the problem today. To be honest, for most companies, I think the problem lies further along: in trust and in understanding where the data is coming from, how it was generated, and so on, which the current adoption of AI probably doesn't solve on its own.
But at the same time, I think for this category of users, there is a very strong way AI can be used for education and learning, for enhancing and enriching what they see in terms of data, giving them context and a sense of what it can and cannot be used for, where it's coming from, and so on. This rich context allows them to trust it more, to trust what comes from the data engineers, and hence adopt it further.
When I think about this AI play, I keep thinking about how the distribution of expertise will play out in the future. One of the things you see today, even in data engineering, is that all these AI tools are very strong in the hands of those who know what they're doing, and they probably limit the need for very low-skilled junior people.
It's a little bit harder to enter the industry now, because the things newcomers would do ten years ago, a strong senior engineer with AI can now do themselves much faster or cheaper. So it affects the distribution of skill and expertise in this way. And I think to compensate for it, we need to see far more adoption of AI for learning and education, which can help a lot as well.
And we should not end up in a scenario where the gap between experts and non-experts keeps growing, because I think this is one of the big problems in data: the gap between the data specialist and the data consumer. That's why so many organizations talk about data literacy and culture; the dashboard may be there, but the problem is adoption, and adoption requires literacy.
You need to have skills among those who use it. So, if you use AI to build skills or to bring consumers to the right point of getting value out of all these dashboards and consumption, I think that's great potential. One of my positive, optimistic predictions for the future is that we will see more AI for enrichment, education, and data literacy.
Yeah, data literacy. There will certainly be a big evolution on the engineering side, with AI applied to engineering, and we will probably see more of these semantic models, semantic layers, and knowledge graphs. These are ways of shaping the data an organization has so that AI can perform better and answer questions like, "What were my sales in the last quarter?" So we will probably see more of that happening to take care of the simple use cases.
But ultimately, it does not replace your existing data analytics capabilities. It augments them for different audiences at different levels of effort. So, these are some of my top predictions.
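For a concrete sense of what a semantic layer does, here is a minimal sketch: business terms such as "sales" map to one governed SQL definition, so a question like "what were my sales in the last quarter?" resolves the same way every time, whether a person or an AI asks it. The table names, columns, and filters below are invented for illustration; real semantic layers are far richer.

```python
# Minimal semantic-layer sketch: business terms mapped to a governed SQL definition.
# Table and column names are invented for illustration.

SEMANTIC_LAYER = {
    "sales": {
        "table": "analytics.fct_orders",
        "measure": "SUM(amount_usd)",
        "filters": ["status = 'completed'"],  # one agreed definition of "sales"
        "time_column": "ordered_at",
    },
    "active customers": {
        "table": "analytics.dim_customers",
        "measure": "COUNT(DISTINCT customer_id)",
        "filters": ["is_active = TRUE"],
        "time_column": "last_seen_at",
    },
}


def to_sql(metric: str, start: str, end: str) -> str:
    """Translate a business metric plus a date range into governed SQL."""
    spec = SEMANTIC_LAYER[metric]
    conditions = spec["filters"] + [
        f"{spec['time_column']} >= '{start}'",
        f"{spec['time_column']} < '{end}'",
    ]
    return (
        f"SELECT {spec['measure']} AS {metric.replace(' ', '_')}\n"
        f"FROM {spec['table']}\n"
        f"WHERE {' AND '.join(conditions)}"
    )


# "What were my sales in the last quarter?" becomes a lookup plus a date range,
# rather than a free-form guess at which table and column to use.
print(to_sql("sales", "2025-01-01", "2025-04-01"))
```

The value here is less about the SQL itself and more about the shared definitions: the AI (or the analyst) is constrained to the organization's agreed meaning of each term, which is exactly the trust-and-context problem discussed above.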
Anni Tabagua: Yeah, that's great. I will just ask you two more quick questions. I don't know if there's a way of answering them quickly, but what advice would you give a CTO or head of data who is trying to modernize their stack in 2025?
Alexey Utkin: Very good question. It's a little bit linked to the previous one about looking into the future, because part of the answer relies on having a view of the future. Looking ahead, I always ask myself this question: who will win in the next generation of capabilities in data analytics and AI for data?
Will it be the big platforms like Snowflake and Databricks, which integrate everything around your data, the AI, and the tools to work with data? Or, as has happened over the past decades, will open source and the diverse toolset ecosystem have another round of innovation, bringing capabilities the platforms don't possess at the moment? Or maybe a combination of both.
So my advice to a CTO would be: in your modernization program, yes, remove the very painful things, the outdated legacy limits and the inflexible, costly infrastructure, but also think, at the right level, about where you want to be in a year or two in terms of supporting the organization with AI capabilities, and make sure they come at the right level of complexity, cost, and effort.
Those would be the considerations for making the right choice. And then another one is my eternal theme. Although I must moderate it for some types of companies, because some are more technology-centric and technology-driven, I think all data efforts and data programs really need to be structured around adoption, business outcomes, delivery of use cases, and so on.
So even if you are dealing with legacy infrastructure and platforms that you really need to modernize, make sure you go the extra mile to engage with the business, to understand what they want to get from data, maybe not today but tomorrow, and solve for that as part of your modernization effort. Those would be my main points.
And then, as well, you need to strike the right balance between platform investment, use cases, what you deliver to the business, and innovation. As I've said before, some companies jump on the AI bandwagon prematurely. But I want to make sure I get this across: I believe they still need to be running experiments, doing pieces of it, and adopting some of these capabilities.
Maybe for internal engineering, maybe for the teams, maybe for end users. But I don't think it is wise to ignore it completely and hope that when the dust settles, when AI tools mature, you will choose the right one and make the best use of it without going on the journey of trying it out for years.
I think you need to have your own view on when it's right for your business to adopt it and an opinion based on your real, hands-on experience of what works and what doesn't work for your business today.
Anni Tabagua: Well, that's great, that's great. I love that you just gave three clear pieces of advice for CTOs; I wrote them down right away, that's really valuable. And the last one, my favorite one: Alexey, tell me something bold. What widely accepted belief in the data world do you completely disagree with? Or in other words, what's your unpopular opinion?
Alexey Utkin: Yeah, there are so many opinions in the data world, and so many people argue. Let me pick this one. Since we heavily discussed Gen AI and, of course, its relation to data, let me connect this unpopular opinion to what we discussed before.
So, I think that before we see the next cycle where AI really works for data, data engineering, and operations, where it really supports data quality, governance, and all the data-driven analytical use cases, we will have at least another phase, maybe a long one, of disillusionment, struggling with the new issues and challenges that AI itself brings to the data world.
So things like, I don't know if you've heard, ChatGPT and other large language models struggle a little bit with training because they're now exposed to a lot of data generated by the models themselves. In the last two or three years, content on the internet has largely been produced with the help of AI, and this new content interferes with model training. So that's a challenge on its own.
Similarly, among the risks and problems associated with AI, there is a piece of the pie that is AI-generated code, which comes into play here: it sometimes gives us these anomalies and errors, and we will see more and more of that. So I think that with all this great potential, the entropy, the chaos, will also grow until we make better use of AI to validate and deal with all these errors and anomalies.
So that's my prediction: It will get a little bit even harder before AI adoption for data gets better.
Anni Tabagua: Oh, and that's unpopular. Not a lot of people share the sentiment.
Alexey Utkin: I don't think it's often discussed this way. I think it's unpopular in the sense that, more often, you hear the opinion that AI is just around the corner and will solve all our issues today or tomorrow; that is the popular opinion. Mine is a little more reserved: yes, I think it has great potential, but there is still a lot of work to be done before it can be reliable for the data on which you run your business.
Also, as I mentioned, it falls on engineers, who now basically have to deal with all these validation issues, which drive some of them crazy. I think the next round is about properly dealing with this validation and governance.
And I don't think an equal amount of progress has come in validation and governance compared with how much has come on the positive side of AI, in terms of what it can do. As of now, I think this gap is growing, and people will probably be speaking more about governance and validation.
Anni Tabagua: Perfect. Perfectly clear now. Thank you. Alexey, I realize we need a whole podcast series just on data because there's too much to discuss. But thank you so much for giving us this little sneak peek, at least, and for helping us separate the signal from the noise. Really, thank you for being here. It was a pleasure to talk to you.
Alexey Utkin: Thank you for having me. And yeah, there are so many things we haven't discussed. They will probably be in some future podcasts.
Anni Tabagua: Yeah. And to our listeners, thank you so much for tuning in. And if you enjoyed this episode, please subscribe, rate, and share, and we always want to hear from you. What is your unpopular opinion about data? Reach out to us at biztech.forward@dataart.com. That's all for today. We'll see you next time. Thanks for tuning in to BizTech Forward.
If you enjoyed the podcast, please subscribe and share to help us reach more listeners. Thank you for being part of the conversation. See you next time.
About the Guest
Alexey Utkin joined DataArt as Systems Architect and Team Leader in 2004, and has been in charge of leading major finance enterprise accounts since. With over 14 years in the IT industry, eight of them in the financial services sector, Alexey brings a wealth of industry expertise to DataArt and has become a core member of its Finance practice. With a dedicated focus on solution, technology, regulation, and process consulting, he now leads DataArt's most seasoned industry practice from its London office.
A sought-after speaker and media contributor, Alexey regularly attends key industry events and is often quoted in the press. Alexey is a regular media commentator on banking digitalisation and financial technology issues and has been quoted in The Financial Times, Spears, IDG Connect, BobsGuide, ITV News, The Guardian, Waters Technology, and numerous other news outlets.