Data Quality Management – Gain The Competitive Edge (Mon, 25 Apr 2022)
https://www.datactics.com/blog/data-quality-management-gain-the-competitive-edge/


Managing data assets has become a way to gain competitive advantage for businesses and is a keen focus area for regulatory bodies around the globe. With the emergence of data standards and a plethora of technology businesses, the data management landscape has never been so competitive. 

The following blog will look at how to maximise your data assets and gain that competitive edge by developing a data quality management framework, making use of best-of-breed technologies.

Tech solutions for data quality management

Back in the 16th century, Sir Francis Bacon wrote that “knowledge itself is power”, conveying the idea that knowledge is the foundation of influence, reputation, and decision-making. In modern times, the information we use to acquire knowledge is highly digitalised, typically held as collections of organised data points. Every individual and every organisation in the world is faced with the challenge of managing data, and the bigger the organisation, the more complex the challenge of managing the data behind business decisions typically becomes.

How do you gain the competitive edge in such a fast-evolving industry? Arguably, through a mix of technologies and process accelerators aimed at solving the right problems.

The goal of best-of-breed technology should be to automate the manual, time-consuming tasks required to govern enterprise data (such as data profiling and data cleansing), whilst streamlining anything that is not possible or desirable to automate. The automation challenge can be solved by deploying supervised machine learning solutions, which should be geared towards solving the problems identified in the data governance framework and should adhere to data stewardship and data security principles. Explainable AI with person-in-the-loop feedback and decisions is a sensible way to achieve this, as users can understand the rationale behind a model’s prediction.
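
To make the person-in-the-loop idea concrete, here is a minimal sketch in Python of how confidence-based routing might work. The threshold, field names and the stubbed scoring function are purely illustrative stand-ins for a trained, explainable model, not the Datactics implementation.

```python
# Minimal person-in-the-loop sketch: auto-apply confident predictions,
# queue uncertain ones for a data steward to review.
# The scoring function is a stand-in for a trained, explainable model.

REVIEW_THRESHOLD = 0.90  # hypothetical confidence cut-off

def score_candidate_fix(record):
    """Stand-in for a supervised model: returns (proposed_fix, confidence, rationale)."""
    if record.get("country") == "UK":
        return ({"country": "GB"}, 0.97, "UK is not an ISO 3166-1 code; GB is")
    return (None, 0.40, "no confident suggestion")

def route(records):
    auto_applied, review_queue = [], []
    for record in records:
        fix, confidence, rationale = score_candidate_fix(record)
        if fix and confidence >= REVIEW_THRESHOLD:
            record.update(fix)
            auto_applied.append((record, rationale))
        else:
            review_queue.append((record, rationale))  # a person decides
    return auto_applied, review_queue

if __name__ == "__main__":
    applied, queued = route([{"id": 1, "country": "UK"}, {"id": 2, "country": "Fr"}])
    print(len(applied), "auto-applied;", len(queued), "sent for human review")
```

The point of surfacing the rationale alongside each prediction is that the reviewing steward can see why a fix was proposed, which is what keeps the automation explainable.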

In terms of Data Quality, the aim is to provide a self-driving solution that can cope with large volumes of sensitive data at scale. To deliver a competitive edge, it should be interconnected with the other players in the data management ecosystem, such as governance, lineage and catalogue platforms. And to add value to an organisation’s data assets, data stewards from all data domains should feel empowered to get on top of quality issues and to use powerful tools to perform remediation.

Let’s explore some of the conceptual synergies that exist between actors in the data management ecosystem. A data governance framework is the theoretical view of how data should be managed. Integration with data quality and lineage provides the practical evidence of guidelines being followed or overlooked.

Data ownership is a fundamental aspect driven by governance; defining it is a must-have success criterion for any data quality initiative. Governance and data catalogues also set the glossary and terminology standards to be used across the data management tools.

Looking at data quality and lineage, these two players are complementary in their ability to track data records in motion, from data source to output. Quality of the data ultimately means acknowledging that it is ‘fit for purpose’, and lineage tools are powerful in visualising where data originally comes from and what it is being used for. Data quality in motion is compelling because it looks at how data flows through your organisation, identifying where problems originate (root cause analysis) and giving a clear picture of the ultimate impact of DQ issues.

Why we do data quality

We’re now going to shift the focus from “how we do data quality” on to “why we do data quality”. In today’s marketplace, technology vendors are always faced with the challenge of quantifying their value proposition. In other words, quantifying the “cost of data quality”. 

The cost is a complex function made up of multiple variables, but here is an attempt at summarising the key concepts. Whilst good quality data has the potential to improve business operations and outcomes, poor data quality is capable of hindering both (or worse). Below are a number of potential outcomes if bad data is allowed to persist within an organisation:

  • Regulatory risks. Depending on the industry, there will often be considerable risk in terms of monetary fines and/or operational restrictions, depending on the severity of the data quality issues. Importantly, the risks are associated not only with the actual data problems, but also with the lack of processes. For example, having a data governance programme in place to identify and rectify potential data problems is invaluable to a financial institution dealing with regulation and compliance, helping it avoid costly fines (e.g. under GDPR).
  • Opportunity cost. This refers to the ability (or inability) to use information to gain competitive advantage. For example, it could mean a reduced ability to cross-sell products to the customer base because of missing connections between customer and product data. Sub-optimal data assets can therefore have costly repercussions for a business losing out on available opportunities.
  • Operational risk. If a business or organisation continues to work with poor quality data, there is a chance it could fall victim to operational risks. Failure to identify a risk factor such as fraudulent behaviour and/or conflicts of interest can be due to a lack of data integration, missing links or unclear hierarchies in the data. This is a typical focus area for Anti-Money Laundering and KYC projects.
  • Severity of data quality issues. This refers to the capability of a data owner to understand the business impact of a data quality issue on an operation. This is in itself a difficult variable to estimate, as it needs to take into consideration the context around the faulty information and the criticality (size) of the problem for the operation. Put plainly, a data quality issue associated with a large investment is more severe than the very same issue associated with a smaller investment, as sketched below.
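
As a rough illustration of that last point, a severity score might simply weight each issue by the exposure it sits on. The rule weights and fields below are made up for illustration, not a prescribed formula.

```python
# Hypothetical severity scoring: the same rule breach matters more
# when it sits on a larger exposure. Weights and fields are illustrative.

RULE_WEIGHTS = {"missing_lei": 0.9, "stale_address": 0.3}

def severity(rule_id, exposure_gbp):
    """Severity grows with both the criticality of the rule and the size of the position."""
    return RULE_WEIGHTS.get(rule_id, 0.5) * exposure_gbp

issues = [("missing_lei", 5_000_000), ("missing_lei", 50_000), ("stale_address", 5_000_000)]
for rule_id, exposure in sorted(issues, key=lambda i: -severity(*i)):
    print(f"{rule_id:>15}  exposure £{exposure:>10,}  severity {severity(rule_id, exposure):>12,.0f}")
```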

Overall, the ability to gain a competitive edge will hinge on a mix of technological and process-driven innovation (how we do it), correctly focused on addressing the key problems in data management (why we do it!). There are plenty of business benefits to creating a data governance framework and mastering data quality, from reduced risk to greater business opportunities and improved operations.

If you are developing a data management framework or are seeking to understand our data quality solution, you can read more about it in our datablog. Similarly, if you would like to speak directly to us at Datactics, you can reach out to Luca Rovesti.

Data Health: Why does it matter? | Luca Rovesti (Fri, 16 Jul 2021)
https://www.datactics.com/blog/good-data-culture/data-health-why-does-it-matter/


Introduction 

Getting into shape is a big focus of summer advertising campaigns, where we’re all encouraged to eat, drink and do the right things to keep our bodies and minds on top form. You’ll be relieved to read that at Datactics we’re far more concerned about the health of your data, so open that packet of cookies, pour yourself another coffee, and let Luca Rovesti, Head of Product at Datactics, focus on your data wellness! 

Why data health matters 

With data being relied on more heavily than ever before, a clearer difference is emerging between being surrounded by data and using data to make business decisions. It follows that this reliance drives a correspondingly greater need to keep that data healthy. It may be well known that healthy data means data that is complete, cleansed and compliant, but the health of data is often not fully understood, and that lack of understanding fuels a lack of trust. Trust in data is everything as more and more of the business is built on it.

To fully grasp the healthiness of your data, you must be able to prove its validity and completeness. Once you have achieved this, analytics can be provided to key decision-makers who can trust the data behind critical business plans. In this blog, we will dig deeper into what’s required to achieve data health, how you can measure it, and the importance of understanding what you are measuring.
 

Willpower needed! 

Firstly, what do we mean by data wellbeing? Personal wellbeing has social, emotional, and psychological dimensions; in much the same way, in the data space you have tooling and strategy. And just as in personal wellbeing you need to focus on these factors and commit to specific courses of action, you need to commit to and execute your data health strategy if you’re going to see a difference.

For instance, if the commitment is missing, tooling is typically brought in without the right people being committed to making the data quality initiative work. What usually happens then is that you fail to hit the target on the value-adds – the real-world business results that are ultimately relevant to the people you need to convince to pay for it and invest in it. You could have a set of data quality rules that you think make sense, but nobody really values the outcome of those rules; or you could have an implementation that does not follow the practices or guidelines that the end-users of the data are willing to commit to. If it is difficult to get buy-in because you are trying to change their way of working, we would align the DQ initiative with how the business users are already working, with the aim of giving valuable insight that makes sense in their existing context.

As with everything then, commitment is vital. One of the major differentiators that Datactics brings to the table is that we are not just about measurement, but also about remediation – doing something material and meaningful to the business with the metrics and the exceptions we have identified from the system. We are very often playing the role of a trusted partner; working alongside the business to enable them to make the necessary changes to data and to how they view it.  

Ultimately, it’s about our client taking that step, armed with our support, technology, and coaching, to commit and fix the broken data. Some of our clients have taken the ‘carrot’ approach, for example publishing on an internal site the office with the best data quality compliance score across the whole of the UK. They hold metrics on how many exceptions have been remediated and give an honourable mention to the teams that are doing something about them. Other firms we work with are going down the ‘stick’ route, publishing metrics to point out those who are lagging behind on these types of initiatives. Either way, the point is that accountability for the data drives the commitment to do something about it.

Knowing what to measure: it’s a business problem  

One of the biggest risks – if not the biggest risk – in a data quality initiative is a disconnect emerging between the parties involved. On one side are the people tasked with the project, with creating metrics, and with the technological solution that monitors the data. On the other side are the people who use that data and the purpose they have for it. This means it is critical to make the measurements relevant to why the data is being used, and the only way to do that is to understand what the data is and its ultimate purpose, so that the whole initiative can be aligned with that purpose.

So how can you measure data health? 

It is important to remember that data quality is only one aspect of a data capability framework, and there are some early steps required around overall governance and infrastructure. Governance is the more conceptual part: you need the logical framework for what you want to do with the data nailed down. Then there is the practical aspect of doing something about it, and that starts with having the infrastructure in place. We are, of course, a technology solution, so we need hardware to execute on, and the correct connectivity from that hardware to the various points we need to measure. From there we can start to measure data health – in motion and at the various required points of the data journey.

Our solution takes a highly configurable, rules-based approach, so the rules can be as granular or as business-focused as the client wants, with metadata around each rule describing what it is designed to measure. Almost inevitably, there then needs to be a reporting layer that displays metrics understandable to somebody at executive or business level. The Datactics platform does just that, converting rule results into business-driven areas such as critical data elements, regulatory compliance and source system scores. Combined, these give a strong insight into how data quality relates to business activities across the lines of business, and make it possible to prioritise the resolution of the most critical issues. This makes data quality truly relevant to every business user, from CEO to entry level.
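
To make the rules-based idea more concrete, here is a minimal sketch of how granular rule results might roll up into dimension-level scores of the kind described above. The records, rules and dimensions are hypothetical illustrations, not the Datactics rule engine.

```python
# Minimal rules-based data quality sketch: each rule returns pass/fail per record,
# and results roll up into dimension scores a business user can read.
# Records, rules and dimensions are illustrative only.
from collections import defaultdict

records = [
    {"lei": "5493001KJTIIGC8Y1R12", "country": "GB", "notional": 1_000_000},
    {"lei": "", "country": "UK", "notional": 250_000},
]

rules = [
    ("lei_populated", "Completeness", lambda r: bool(r["lei"])),
    ("country_is_iso", "Validity", lambda r: r["country"] in {"GB", "FR", "DE", "IE"}),
    ("notional_positive", "Validity", lambda r: r["notional"] > 0),
]

passes, totals = defaultdict(int), defaultdict(int)
for record in records:
    for rule_id, dimension, check in rules:
        totals[dimension] += 1
        passes[dimension] += check(record)

for dimension in totals:
    print(f"{dimension}: {100 * passes[dimension] / totals[dimension]:.0f}% passing")
```

In practice the per-rule results, not just the aggregates, are what feed exception remediation; the aggregates are the executive-facing layer.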

Changing attitudes towards data health 

As Datactics has been in this business for years, it’s reasonable to say we have developed a solid formula for the continuous measurement of data quality and data health. Clients now want to take the next step, and usually that next step is data health ‘in motion’ – wherever the data flows throughout the enterprise. We now have all these metrics gathered by running the process over time, so we must help the business establish what they can do with them – for instance, gaining insight into patterns showing how the data is being fixed. Clients are also talking about moving averages and measuring the time and effort required to correct an error as key indicators of a culture of healthy data.

The important thing to remember is that perfection doesn’t exist. Instead, there needs to be a concept of priority, asking things like: why are you measuring this data? Will a key rule impact a downstream operation, or will it ensure the data is in good shape for the future? What will make an operational difference, and how should that be factored into prioritising actions taken on the metrics? As clients are increasingly keen to take this next step, we are equally keen to help them achieve data health.

To continue this conversation, you can reach out to Luca, Head of Product at Datactics.

About Datactics 

Our next gen, no-code toolbox is built with the business user in mind, allowing business subject matter experts to easily measure data to regulatory & industry standards, fix breaches in bulk and push into reporting tools, with full visibility and audit trail for Chief Risk and Data Officers.  

If you want to find out more about how the Datactics solution can help you to automate the highly manual issue of data matching for customer onboarding, then please reach out to us! Or find us on LinkedIn, Twitter or Facebook.

Tackling Practical Challenges of a Data Management Programme (Mon, 03 Aug 2020)
https://www.datactics.com/blog/good-data-culture/good-data-culture-facing-down-practical-challenges/

“Nobody said it was easy” sang Chris Martin, in Coldplay’s love song from a scientist to the girl he was neglecting. The same could be said of data scientists embarking on a data management programme!


In his previous blog on Good Data Culture, our Head of Client Services, Luca Rovesti, discussed taking first steps on the road to data maturity and how to build a data culture. This time he’s taking a look at some of the biggest challenges of Data Management that arise once those first steps have been made – and how to overcome them. Want to see more on this topic? Head here.

One benefit of being part of a fast-growing company is the sheer volume and type of projects that we get to be involved in, and the wide range of experiences – successful and less so – that we can witness in a short amount of time.

Without a doubt, the most important challenge that rears its head on the data management journey is around complexity. There are so many systems, business processes and requirements of enterprise data that it can be hard to make sense of it all.

Those who get out of the woods fastest are the ones who recognise that there is no magic shortcut around the things that must be done.

A good example would be the creation of data quality rule dictionaries to play a part in your data governance journey.


Firstly, there is no way you will know what needs to be done as part of your data-driven culture efforts unless you go through what you have got.

Although technology can give us a helpful hand in the heavy lifting of raw data, from discovery to categorisation of data sets (data catalogues), the definition of domain-specific rules always requires a degree of human expertise and understanding of the exception management framework.
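
To illustrate what an entry in such a rule dictionary might contain (picking up the data quality rule dictionaries mentioned above), here is a hypothetical sketch in Python. The field names, rule logic and ownership values are assumptions for illustration, not a prescribed Datactics format.

```python
# Hypothetical rule dictionary entries: the metadata a domain expert supplies
# so that a generic DQ engine knows what to check, on what, and who owns the result.
rule_dictionary = [
    {
        "rule_id": "CPTY-001",
        "name": "LEI must be populated for active counterparties",
        "domain": "Counterparty",
        "dimension": "Completeness",
        "critical_data_element": "lei",
        "logic": "lei IS NOT NULL AND lei != ''",   # expressed here as pseudo-SQL
        "owner": "Counterparty Data Steward",
        "severity": "High",
    },
    {
        "rule_id": "CPTY-002",
        "name": "Country code must be a valid ISO 3166-1 alpha-2 value",
        "domain": "Counterparty",
        "dimension": "Validity",
        "critical_data_element": "country",
        "logic": "country IN reference('iso_3166_alpha2')",
        "owner": "Counterparty Data Steward",
        "severity": "Medium",
    },
]
```

The technology can profile and categorise the data, but filling in entries like these is exactly where the human expertise comes in.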

Subsequently, getting data owners and technical people to contribute to a shared plan – one that takes into account the uses of the data and how the technology will fit in – is a crucial step in detailing the tasks, problems and activities that will deliver the programme.

Clients we have been talking to are experts in their subject areas. However, they don’t know what “best of breed” software and data management systems can deliver. Sometimes, clients find it hard to express what they want to achieve beyond a light-touch digitalisation of a human or semi-automated machine learning process.


The most important thing that we’ve learned along the way is that the best chance of success in delivering a data management programme involves using a technology framework that is both proven in its resilience and flexible in how it can fit into a complex deployment.

From the early days of ‘RegMetrics’ – a version of our data quality software that was configured for regulatory rules and pushing breaks into a regulatory reporting platform – we could see how a repeatable, modularised framework provided huge advantages in speed of deployment and positive outcomes in terms of making business decisions.

Using our clients’ experiences and demands of technology, we’ve developed a deployment framework that enables rapid delivery of data quality measurement and remediation processes, providing results to senior management that can answer the most significant question in data quality management: what is the return on investing in my big data?

This framework has left us perfectly equipped to provide expertise on the technology that complements our clients’ business knowledge:

  • Business user-focused low-code tooling connecting data subject matter experts with powerful tooling to build rules and deploy projects
  • Customisable automation that integrates with any type of data source, internal or external
  • Remediation clinic so that those who know the data can fix the data efficiently
  • “Chief Data Officer” dashboards provided by integration into off-the-shelf visualisation tools such as Qlik, Tableau, and PowerBI.

Being so close to our clients also means that they have a great deal of exposure and involvement in our development journey.

We have them ‘at the table’ when it comes to feature enhancements, partnering with them rather than selling and moving on, and involving them in our regular Guest Summit events to foster a sense of the wider Datactics community.

It’s a good point to leave this blog, actually, as next time I’ll go into some of those developments and integrations of our “self-service data quality” platform with our data discovery and matching capabilities.

Click here for the latest news from Datactics, or find us on LinkedIn, Twitter or Facebook.

The Road to Data Maturity (Thu, 23 Apr 2020)
https://www.datactics.com/blog/good-data-culture/good-data-culture-the-road-to-data-maturity/

The Road to Data Maturity – Many data leaders know that the utopia of having all their data perfect and ready to use is frequently like the next high peak on a never-ending hike, always just that little bit out of reach.


Luca Rovesti, Head of Client Services for Datactics, hears this all the time on calls and at data management events, and has taken some time to tie a few common threads together that might just make that hike more bearable, and the peak a little closer, from the work of data stewards to key software features.

Without further ado: Luca Rovesti’s Healthy Data Management series, episode 1:

Lots of the people we’re speaking with have spent the last eighteen months working out how to reliably measure their business data, usually against a backdrop of one or more pressures coming from compliance, risk, analytics teams or board-level disquiet about the general state of their data. All of this can impact the way business decisions are made, so it’s critical that data is looked after properly. 

For those who have been progressing well with their ambition to build a data-driven culture, they’re usually people with a plan and the high-level buy-in to get it done, with a good level of data stewardship already existing in the organisation. They’ve moved their data maturity further into the light and can see what’s right and what’s wrong when they analyse their data. In order to build a data culture, they’ve managed to get people with access to data to stand up and be counted as data owners or chief data officers. This enables business teams to take ownership of large amounts of data and encourages data-driven decision making. Now, they are looking at how they can push the broken data back to be fixed by data analysts and data scientists in a fully traceable, auditable way, in order to improve the quality of the data. The role of the data scientist is paramount here, as they have the power to own and improve the organisation’s critical data sets.

The big push we’re getting from our clients is to help them federate the effort to resolve exceptions. Lots of big data quality improvement programmes, whether undertaken on their own or as part of a broader data governance plan, are throwing up a high number of data errors. The best way to make the exercise worthwhile is to create an environment where end-users can solve problems around broken data – those who possess strong data literacy skills and know what good looks like. 


As a result, we’ve been able to accelerate the development of features for our clients around federated exception management, integrating our Data Quality Clinic with dashboarding layers for data visuals such as PowerBI, Qlik and Tableau. We’re starting to show how firms can use the decisions being made on data remediation as a vast set of training data for machine learning models, which can power predictions on how to fix data and cut the decision-making time that manual reviews need to take.
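
As a simple illustration of that last idea, past remediation decisions can be treated as labelled training data. The sketch below learns which fix a steward tends to choose for a given kind of exception and suggests it for new ones; the fields are hypothetical, scikit-learn is assumed to be available, and this is not the Datactics model itself.

```python
# Sketch: use historical remediation decisions as training data so a model
# can suggest the most likely fix for new exceptions. Fields are hypothetical.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# Each past exception: which rule failed, on which field, in which source system,
# and the fix the data steward actually applied.
history = [
    ({"rule": "country_is_iso", "field": "country", "source": "CRM"}, "map_UK_to_GB"),
    ({"rule": "country_is_iso", "field": "country", "source": "CRM"}, "map_UK_to_GB"),
    ({"rule": "lei_populated", "field": "lei", "source": "CRM"}, "lookup_lei_from_gleif"),
    ({"rule": "lei_populated", "field": "lei", "source": "Trading"}, "lookup_lei_from_gleif"),
]

features, labels = zip(*history)
vectoriser = DictVectorizer()
X = vectoriser.fit_transform(features)
model = LogisticRegression(max_iter=1000).fit(X, labels)

new_exception = {"rule": "country_is_iso", "field": "country", "source": "Trading"}
suggestion = model.predict(vectoriser.transform([new_exception]))[0]
print("Suggested fix for review:", suggestion)
```

The suggested fix would still land in the review queue rather than being applied blindly, which keeps the person in the loop while cutting the time each decision takes.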

It’s a million light-years away from compiling lists of data breaks into Excel files and emailing them around department heads, and it’s understandably in high demand. That said, a small number of firms we speak to are still coming to us because the demand is to do data analytics and make their data work for their money. However, they simply can’t get senior buy-in for programmes to improve the data quality. I feel for them because a firm that can’t build a business case for data quality improvement is losing the opportunity to make optimal use of its data assets and is adopting an approach prone to inefficiency and non-compliance risk. 


Alongside these requests are enquiries about how data teams can get from their current position – where they can’t access or use programming language-focused tools in IT and so have built rules themselves in SQL (relying on those with computer science expertise) – to the place where they have a framework for data quality improvement and an automated process to implement it. I’ll go into that in more detail in the next blog.

For more on Data Maturity and Information Governance, click here for the latest news from Datactics, or find us on LinkedIn, Twitter or Facebook.

Who are you really doing business with? (Fri, 04 Jan 2019)
https://www.datactics.com/blog/good-data-culture/who-are-you-really-doing-business-with/


Entity Match Engine for Customer Onboarding. Challenges and facts from EU banking implementations. 

Customer onboarding is one of the most critical entry points of new data for a bank’s counterparty information. References and interactions with internal systems, externally sourced information, KYC processes, regulatory reporting and risk aggregation are all impacted by the quality of information that is fed into the organisation from the onboarding stage.

In times of automation and growing FinTech applications, the onboarding process remains largely manual. Organisations typically aim to minimise errors by means of human validation, often tasking an offshore team to manually check and sign off on the information for the new candidate counterparty. Professionals with experience in data management can relate to this equation: “manual data entry = mistakes”.

There are a variety of things that can go wrong when trying to add a new counterparty to an internal system. The first step is typically to ensure that a counterparty is not already present: onboarding officers rely on a mix of name & address information, vendor codes and open industry codes (e.g. the Legal Entity Identifier) to verify this. However, inaccurate search criteria, outdated or missing information in the internal systems and the lack of advanced search tools create the potential for problems in the process – an existing counterparty can easily get duplicated, when it should have been updated.

Datactics’ Entity Match Engine provides onboarding officers with the tools to avoid this scenario, both on Legal Entities’ and Individuals’ data. With advanced fuzzy logic and clustering of data from multiple internal and external sources, Match Engine avoids the build-up of duplication caused by mistakes, mismatches or constraints of existing search technology in the onboarding process.
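
To illustrate the underlying idea (not the Match Engine’s actual algorithms), the sketch below uses Python’s standard-library difflib to flag candidate duplicates between an incoming counterparty and existing records before a new entry is created. The names and threshold are arbitrary.

```python
# Illustrative duplicate check at onboarding: compare an incoming counterparty name
# against existing records with simple fuzzy matching (stdlib difflib), so a likely
# duplicate is flagged for update rather than re-created. Threshold is arbitrary.
from difflib import SequenceMatcher

def normalise(name: str) -> str:
    return " ".join(name.lower().replace(",", " ").replace(".", " ").split())

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, normalise(a), normalise(b)).ratio()

existing = ["Acme Holdings Ltd.", "Acme Holdings (UK) Limited", "Zenith Capital LLP"]
incoming = "ACME Holdings Limited"

candidates = sorted(((similarity(incoming, e), e) for e in existing), reverse=True)
for score, name in candidates:
    flag = "possible duplicate" if score >= 0.75 else "no match"
    print(f"{score:.2f}  {name:<30} {flag}")
```

A production match engine would of course go further, combining name, address and identifier evidence and clustering across sources, but even this simple check shows why exact-match search alone lets duplicates slip through.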

Another common issue caused by manual onboarding processes is the lack of standardisation in the entry data. This creates problems downstream, reducing the value that the data can bring to core banking activities, decision making and the capacity to aggregate data for regulatory reporting in a cost-effective way.

Entity Match Engine has pre-built connectivity to the most comprehensive open and proprietary sources of counterparty information, such as Bloomberg, Thomson Reuters, GLEIF, OpenCorporates, Companies House and others. These sources are pre-consolidated by the engine and used to provide the onboarding officer with a standardised suggestion of what the counterparty information should look like, complete with the most up-to-date industry and vendor identifiers.
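
A crude illustration of that pre-consolidation step: given candidate records from several sources, a survivorship rule picks the preferred value per attribute. The source ranking, fields and sample data below are purely hypothetical.

```python
# Hypothetical survivorship sketch: consolidate candidate records from several
# sources into a single suggested counterparty record, preferring sources in a
# fixed order per attribute. Source names, ranking and data are illustrative only.
SOURCE_PRIORITY = ["GLEIF", "Companies House", "Vendor feed", "Internal CRM"]

candidates = [
    {"source": "Internal CRM", "name": "Acme Holdings", "lei": "", "country": "UK"},
    {"source": "GLEIF", "name": "ACME HOLDINGS LIMITED", "lei": "2138001ABCDEFGH12345", "country": "GB"},
    {"source": "Companies House", "name": "Acme Holdings Limited", "lei": "", "country": "GB"},
]

def consolidate(records):
    ranked = sorted(records, key=lambda r: SOURCE_PRIORITY.index(r["source"]))
    golden = {}
    for field in ("name", "lei", "country"):
        # take the first non-empty value from the highest-priority source
        golden[field] = next((r[field] for r in ranked if r[field]), "")
    return golden

print(consolidate(candidates))
```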

“Measure and ensure data quality at source” is a best practice and increasingly a data management mantra. The use of additional technology in the onboarding phase is precisely intended as a control mechanism for one of the most error-prone sources of information for financial institutions.

Luca Rovesti is Presales R&D Manager at Datactics
