Good Data Culture – Archives with Luca Rovesti – Datactics
https://www.datactics.com/category/blog/good-data-culture/

Datactics Awards 2024: Celebrating Customer Innovation
https://www.datactics.com/blog/datactics-awards-2024-celebrating-customer-innovation/ (24 September 2024)

In 2024, our customers have been busy delivering data-driven return on investment for their respective organisations. We wanted to recognise and praise their efforts in our first-ever Datactics Customer Awards!

Datactics Customer Awards winners 2024 gather for a group photo.
(From L to R: Erikas Rimkus, RBC Brewin Dolphin; Rachel Irving, Daryoush Mohammadi-Zaniani, Nick Jones and Tony Cole, NHS BSA; Lyndsay Shields, Danske Bank UK; Bobby McClung, Renfrewshire Health and Social Care Partnership). Not pictured: Solidatus.

The oak-panelled setting of historic Toynbee Hall provided the venue for the 2024 Datactics Summit, which this year carried a theme of ‘Data-Driven Return on Investment.’

Attendees gathered for guest speaker slots covering:

  • Danske Bank UK’s Lyndsay Shields presenting a ‘Data Management Playbook’ covering the experiences of beginning with a regulatory-driven change for FSCS compliance, through to broader internal evangelisation on the benefits of better data;
  • Datactics’ own data engineer, Eugene Coakley, in a lively discussion on the data driving sport, drawing from his past career as a professional athlete and Olympic rower with Team Ireland;
  • and Renfrewshire HSCP’s Bobby McClung explaining how automation and the saving of person-hours or even days in data remediation was having a material impact on the level of care the organisation is now able to deliver to citizens making use of its critical services.

The Datactics Customer Awards in full

In recent months, the team at Datactics has worked to identify notable achievements in data in the past year. Matt Flenley, Head of Marketing at Datactics, presented each award with a specific citation, quoted below.

Data Culture Champion of the Year – Lyndsay Shields, Danske Bank UK

“We’re delighted to be presenting Lyndsay with this award. As one of our longest-standing customers, Lyndsay has worked tirelessly to embed a positive data culture at Danske Bank UK. Her work in driving the data team has helped inform and guide data policy at group level, bringing up the standard of data management across Danske Bank.

“Today’s launch of the Playbook serves to showcase the work Lyndsay and her team have put into driving the culture at Danske Bank UK, and the wider culture across Danske Bank.”

Data-Driven Social Impact Award – Renfrewshire Health and Social Care Partnership

“Through targeted use of automation, Renfrewshire Health and Social Care Partnership has been able to make a material difference to the operational costs of local government care provision.

“Joe Deary’s early vision and enthusiasm for the programme, and the drive of the team under and alongside Bobby, has effectively connected data automation to societally-beneficial outcomes.”

Data Strategy Leader of the Year – RBC Brewin Dolphin

“RBC Brewin Dolphin undertook a holistic data review towards the end of 2023, culminating in a set of proposals to create a rationalised data quality estate. The firm twinned this data strategy with technology innovation, including being early adopters of ADQ from Datactics. They overcame some sizeable hurdles, notably supporting Datactics in our early stages of deployment. Their commitment to being an ambitious, creative partner makes them stand out.

“At Datactics we’re delighted to be giving the team this award and would also like to thank them for being exemplars of patience in the way they have worked with us this year in particular.”

Datactics Award for Partner of the Year – Solidatus

“Solidatus and Datactics have been partnered for the last two years but it’s really in 2023-2024 that this partnership took off.

“Ever since we jointly supported Maybank, in Malaysia, in their data quality and data lineage programme, we have worked together on joint bids and supported one another in helping customers choose the ‘best of breed’ option in procuring data management technology. We look forward to our next engagements!”

Datactics Data Champion of the Year – NHSBSA

“For all the efforts Tony, Nick and team have made to spread the word about doing more with data, we’d like to recognize NHS Business Services Authority with our Datactics Data Champion of the Year award.

“As well as their advocacy for our platform, applying it to identify opportunities for cost savings and efficiencies across the NHS, the team has regularly presented their work to other Government departments and acted as a reference client on multiple occasions. Their continued commitment to the centrality of data as a business resource is why they’re our final champions this year, the Datactics Data Champion 2024.”

Pictured celebrating their awards: Lyndsay Shields (Danske Bank UK); Bobby McClung (Renfrewshire HSCP); Erikas Rimkus and Clive Mawdesley (RBC Brewin Dolphin); and Tony, Rachel, Nick and Daryoush (NHS BSA).

Toasting success at Potter & Reid

The event closed with its traditional visit to Shoreditch hot spot Potter & Reid. Over hand-picked canapés and sparkling drinks, attendees networked and mingled to share in the award winners’ achievements in demonstrating what data-driven culture and return on investment looks like in practice. Keep an eye out for a taster video from this year’s event!

Shaping the Future of Insurance: Insights from Tia Cheang
https://www.datactics.com/blog/shaping-the-future-of-insurance-with-tia-cheang/ (2 April 2024)


Tia Cheang, Director of IT Data and Information Services at Gallagher, recently gave an interview to Tech-Exec magazine, drawing on her knowledge and experience in shaping the future of the insurance industry at one of the world’s largest insurance brokers. You can read the article here.

Tia is also one of DataIQ’s Most Influential People In Data for 2024 (congratulations, Tia!). We took the opportunity to ask Tia a few questions of our own, building on some of the themes from the Tech-Exec interview.

In the article with Tech-Exec, you touched on your background, your drive and ambition, and what led you to your current role at Gallagher. What are you most passionate about in this new role?

In 2023, I started working at Gallagher after having an extensive career in data in both public and private sectors. This job was a logical next step for me, as it resonates with my longstanding interest in utilising data in creative ways to bring about beneficial outcomes. I was eager to manage a comprehensive data transformation at Gallagher to prepare for the future, aligning with my interests and expertise.

I am responsible for leading our data strategy and developing a strong data culture. We wish to capitalise on data as a route to innovation and strategic decision-making. Our organisation is therefore creating an environment where data plays a crucial role in our business operations, to allow us to acquire new clients and accomplish significant results rapidly. The role offers an exciting opportunity to combine my skills and lead positive changes in our thinking towards data and its role in the future of insurance.

The transition to making data an integral part of business operations is often challenging. How have you found the experience? 

At Gallagher, our current data infrastructure faces the typical challenges that arise when a firm is expanding. Our data warehouses collect data from many sources, which mirrors the diverse aspects of our brokerage activities. These encompass internal systems, such as customer relationship management (CRM), brokerage systems, and other business applications. We handle multiple data types in our data estate, ranging from structured numerical data to unstructured text. The vast majority of our estate is currently hosted on-premises using Microsoft SQL Server technology; however, we also manage various other departmental data platforms such as QlikView.

“…we want data capabilities that provide flexibility and agility, to enable us to quickly react to new market opportunities.”

A key challenge we face is quickly incorporating new data sources obtained through our mergers and acquisitions activity. These problems affect our data management efforts in terms of migration, seamless data integration, maintaining data quality, and providing data accessibility. 
To overcome this, we want data capabilities that provide flexibility and agility, to enable us to quickly react to new market opportunities. Consequently, we are implementing a worldwide data transformation to update our data technology, processes, and skills to provide support for this initiative. This transformation will move Gallagher data to the cloud, using Snowflake to leverage the scalability and elasticity of the platform for advanced analytics. Having this flexibility gives us a major advantage, offering computational resources where and when they are required.

How does this technology strategy align with your data strategy, and how do you plan to ensure data governance and compliance while implementing these solutions, especially in a highly-regulated industry like insurance?

Gallagher’s data strategy aims to position us as the leader in the insurance sector. By integrating our chosen solutions within the Snowflake platform, we strive to establish a higher standard in data-driven decision-making. 

This strategy involves incorporating data management tools such as Collibra, CluedIn, and Datactics into our re-platforming efforts, with a focus on ensuring the compatibility and interoperability of each component. We are aligning each tool’s capabilities with Snowflake’s powerful data lake functionality with the support of our consulting partners to ensure that our set of tools function seamlessly within Snowflake’s environment.

“…we are contemplating upcoming AI and automation regulations and considering how to futureproof our products and approaches…”

We are meticulously navigating the waters of data governance and compliance. We carefully plan each stage to ensure that all components of our data governance comply with the industry regulations and legislation of the specific region. For example, we are contemplating upcoming AI and automation regulations and considering how to futureproof our products and approaches to comply with them.

The success of our programme requires cooperation across our different global regions, stakeholders, and partners. We are rethinking our data governance using a bottom-up approach tailored to the specific features of our global insurance industry. We review our documentation and test the methods we use to ensure they comply with regulations and maintain proper checks and balances. We seek to understand the operational aspects of a process in real-world scenarios and evaluate its feasibility and scalability.

Could you expand on your choice of multiple solutions for data management technology? What made you go this route over a one-stop shop for all technologies?

We have selected “best of breed” solutions for data quality, data lineage, and Master Data Management (MDM), based on a requirement for specialised, high-performance tools. We concentrated on high-quality enterprise solutions for easy integration with our current technologies. Our main priorities were security, scalability, usability, and compatibility with our infrastructure. 

By adopting this approach, we achieve enhanced specialisation and capabilities in each area, providing high-level performance. This strategy offers the necessary flexibility within the organisation to establish a unified data management ecosystem. This aligns with our strategic objectives, ensuring that our data management capability is scalable, secure, and adaptable.

Regarding the technologies we have selected, Collibra increases data transparency through efficient cataloguing and clear lineage; CluedIn ensures consistent and reliable data across systems; and Datactics is critical for maintaining high-quality data. 

“As we venture into advanced analytics, the importance of our data quality increases.”

In Datactics’ case, it provides data cleansing tools that ensure the reliability and accuracy of our data, underpinning effective decision-making and strategic planning. The benefits of this are immense, enhancing operating efficiency, reducing errors, and enabling well-informed decisions. As we venture into advanced analytics, the importance of our data quality increases. Therefore, Datactics was one of the first technologies we started using.

We anticipate gaining substantial competitive advantages from our strategic investment, such as improved decision-making capabilities, operational efficiency, and greater customer insights for personalisation. Our ability to swiftly adapt to market changes is also boosted. Gallagher’s adoption of automation and AI technologies will also strengthen our position, ensuring we remain at the forefront of technological progress.

On Master Data Management (MDM), you referred to the importance of having dedicated technology for this purpose. How do you see MDM making a difference at Gallagher, and what approach are you taking?

Gallagher is deploying Master Data Management to provide a single customer view. We expect substantial improvements in operational efficiency and customer service when it is completed. This will improve processing efficiency by removing duplicate data and offering more comprehensive, actionable customer insights. These improvements will benefit the insurance brokerage business and will enable improved data monetisation and stronger compliance, eventually enhancing client experience and increasing operational efficiency.

Implementing MDM at Gallagher is foundational to our ability to enable global analytics and automation. To facilitate it, we need to create a unified, accurate, and accessible data environment. We plan to integrate MDM seamlessly with our existing data systems, leveraging tools like CluedIn to manage reference data efficiently. This approach ensures that our MDM solution supports our broader data strategy, enhancing our overall data architecture.

“By including data quality activities in our approach, we anticipate significant benefits from the MDM initiative.”

Data quality is crucial in Gallagher’s journey to achieve this, particularly in establishing a unified consumer view via MDM. Accurate and consistent data is essential for consolidating several client data sources into a master profile; we see it as essential, as without good data quality the benefits of our transformation will be reduced. By including data quality activities in our approach, we anticipate significant benefits from the MDM initiative. We foresee a marked improvement in data accuracy and consistency throughout all business units. We want to empower users across the organisation to make more informed, data-driven decisions to facilitate growth. Furthermore, a single source of truth enables us to streamline our operations, leading to greater efficiencies by removing manual processes. Essentially, this strategic MDM implementation transforms data into a valuable asset that drives innovation and growth for Gallagher.

Looking to the future of insurance, what challenges do you foresee in technology, data and the insurance market?

Keeping up with the fast pace of technological change can be challenging. We are conducting horizon scanning on new technologies to detect emerging trends. We wish to adopt new tools and processes that will complement and improve our current systems as they become ready.

“We prioritise the security of our data assets and our clients’ privacy because it is essential for our reputation and confidence in the market.”

Next is ensuring robust data security and compliance, particularly when considering legislation changes about AI and data protection. Our approach is to continuously strengthen our data policies as we grow and proactively manage our data. We prioritise the security of our data assets and our clients’ privacy because it is essential for our reputation and confidence in the market.

Finally, we work closely with our technology partners to leverage their expertise. This collaborative approach ensures that we take advantage of new technologies to their maximum capacity while preserving the integrity and effectiveness of our current systems. 

Are there any other technologies or methodologies you are considering for improving data management in the future beyond what you have mentioned?

Beyond the technologies and strategies already mentioned, at Gallagher, we plan to align our data management practices with the principles outlined in DAMA/DMBOK (Data Management Body of Knowledge). This framework will ensure that our data management capabilities are not just technologically advanced but also adhere to the best practices and standards in the industry.

In addition to this, we are always on the lookout for emerging technologies and methodologies that could further enhance our data management. Whether it’s advancements in AI, machine learning, or new data governance frameworks, we are committed to exploring and adopting methodologies that can add value to our data management practices.

For more from Tia, you can find her on LinkedIn.



FSCS compliance: The Future of Depositor Protection
https://www.datactics.com/blog/fscs-compliance-the-future-of-depositor-protection/ (27 March 2024)


 

Why does FSCS compliance matter?

HSBC Bank plc (HBEU) and HSBC UK Bank plc (HBUK)’s January 2024 fine, imposed by the Prudential Regulation Authority (PRA) for historic failures in deposit protection identification and notification, alongside the 2023 United States banking crisis, jointly serve as stark reminders of the importance of depositor protection regulation.

Both events, emblematic of the broader challenges faced by the banking sector, underscore the necessity of rigorous data governance and quality for FSCS compliance and depositor protection.

HSBC’s penalty, the second largest imposed by the PRA, highlights the consequences of inadequate data management practices, while the 2023 US banking crisis, characterised by the failure of three small-to-midsize banks, reveals the systemic risks posed by liquidity concerns and market instability.

These incidents draw attention not only to the pressing issues of today, but also to the enduring mechanisms put in place to safeguard financial stability. The Financial Services Compensation Scheme (FSCS), established in the United Kingdom, embodies such a mechanism, created to instil consumer confidence and prevent the domino effect of bank runs.

What is Single Customer View (SCV)?

The FSCS’s role becomes especially pivotal in times of uncertainty: if a bank collapses, the FSCS’s compensation mechanism needs to activate almost instantaneously to maintain this confidence.

According to the Prudential Regulation Authority (PRA) Rulebook (Section 12), firms are required to produce a Single Customer View (SCV) — a comprehensive record of eligible guaranteed deposits — within 24 hours of a bank’s failure or whenever the PRA or FSCS requests it.

This response, underscored by the accuracy and rapidity of depositor information, is a bulwark designed to avert a banking crisis by ensuring timely compensation for affected customers. Over time, as the FSCS has amplified depositor protection to cover up to £85,000 per individual, the 24-hour SCV mandate has marked a significant stride towards a more secure and robust financial sector, solidifying the foundation where depositor trust is paramount.

What data challenges does SCV pose?

When it comes to implementing the SCV regulation, the devil lies in the details. The demand for accuracy and consistency in depositor records translates into specific, often arduous, data quality challenges. Financial institutions must ensure that each depositor’s record is not only accurate but also aligned with SCV’s granular requirements.

Below are five data challenges associated with SCV, followed by a short illustrative sketch:
  • Identification and rectification of duplicated records – duplication can occur due to disparate data entry points or legacy systems not communicating effectively.
  • Lack of consistency across records – customer details may have slight variations across different systems, such as misspelt names or outdated addresses, which can impede the quick identification of accounts under SCV mandates.
  • Data timeliness – SCV necessitates that data be updated within a 24-hour window, requiring real-time (or near-real-time) processing capabilities. Legacy systems, often built on batch processing, may struggle to adapt to this requirement.
  • Discrepancies in account status – determining whether an account is active, dormant, or closed must be resolved to prevent compensation delays or errors.
  • Aggregating siloed data – the comprehensive nature of depositor information mandated by SCV involves aggregating data across multiple product lines, account types, and even geographical locations for international banks, a task that can be formidable given legacy data structures and the diversity of regulatory environments.
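To make the first two challenges concrete, here is a minimal, hypothetical sketch of pairwise fuzzy matching used to flag likely duplicate or inconsistent depositor records. The field names, sample values and similarity thresholds are illustrative assumptions only, not a description of the SCV rules or of any particular tool.

```python
# Hypothetical sketch: flag probable duplicate depositor records by
# fuzzy-matching names and addresses. Thresholds are illustrative only.
from difflib import SequenceMatcher

records = [
    {"id": 1, "name": "Jane Smyth",   "address": "12 High St, Belfast"},
    {"id": 2, "name": "Jane Smith",   "address": "12 High Street, Belfast"},
    {"id": 3, "name": "Liam O'Neill", "address": "4 Mill Road, Derry"},
]

def similarity(a: str, b: str) -> float:
    """Return a 0-1 similarity ratio between two normalised strings."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

# Compare every pair of records and flag probable duplicates for review.
for i in range(len(records)):
    for j in range(i + 1, len(records)):
        name_score = similarity(records[i]["name"], records[j]["name"])
        addr_score = similarity(records[i]["address"], records[j]["address"])
        if name_score > 0.85 and addr_score > 0.8:
            print(f"Possible duplicate: ids {records[i]['id']} and {records[j]['id']} "
                  f"(name {name_score:.2f}, address {addr_score:.2f})")
```

A production SCV process would match on many more attributes and at far greater scale, but the principle of scoring candidate pairs and routing borderline cases for human review is the same.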

The HSBC fine, in particular, underscores the ramifications of inaccurate risk categorisation under the depositor protection rules and the insufficiency of stress testing scenarios tailored to depositor data. Without robust data quality controls, banks risk misclassifying depositor coverage, which could potentially lead to regulatory sanctions and reputational damage.

Why integrate SCV with wider data strategies?

By incorporating meticulous data standards and validation processes as part of an enterprise strategy, banks can transform data management from a regulatory burden into a strategic asset.

The crux of effective depositor protection lies not just in adhering to SCV requirements, but in embracing a broader perspective on data governance and quality. This means positioning SCV not in isolation but as a critical component of a comprehensive account and customer-level data strategy.

To overcome these challenges, financial institutions must not only deploy advanced data governance and quality tooling but also foster a culture of data stewardship where data quality is an enterprise-wide responsibility and not one that is siloed within IT departments. By incorporating meticulous data standards and validation processes as part of an enterprise strategy, banks can transform data management from a regulatory burden into a strategic asset.

An enterprise approach involves:
  • Unified Data Governance Frameworks: Establishing unified data governance frameworks that ensure data accuracy, consistency, and accessibility across the enterprise.
  • Advanced Data Quality Measures: Implementing advanced data quality measures that address inaccuracies and inconsistencies head-on, ensuring that all customer data is up-to-date and reliable.
  • Integration with Broader Business Objectives: Aligning SCV and other regulatory data requirements with broader business objectives, including risk management, customer experience enhancement, and operational efficiency.
  • Leveraging Technology and Analytics: Employing cutting-edge technology and analytics to streamline data management processes, from data collection and integration to analysis and reporting.

How does Datactics support FSCS compliance?

 

The recent HSBC fine and the 2023 US banking crisis serve as critical catalysts for reflection on the role of depositor protection regulation and the imperative of a holistic data strategy.

FSCS regulatory reporting compliance underscores the evolution of depositor protection in response to financial crises, whilst the challenges presented by these regulations highlight the need for advanced data governance and quality measures.

At Datactics, we understand that the challenges posed by regulations like SCV indicate broader issues within data management and governance.

Our approach transcends the piecemeal addressing of regulatory requirements; instead, we advocate for and implement a comprehensive data strategy that integrates SCV within the wider context of account and customer-level data management.

Our solutions are designed to support regulatory compliance but also to bolster the overall data governance and quality framework of financial institutions.


 

We work closely with our clients to:
  • Identify and Address Data Quality Issues: Through advanced analytics and machine learning, we pinpoint and rectify data quality issues, ensuring compliance and enhancing overall data integrity.
  • Implement Robust Data Governance Practices: We help institutions establish and maintain robust data governance practices that align with both regulatory requirements and business goals.
  • Foster a Culture of Data Excellence: Beyond technical solutions, we emphasise the importance of fostering a culture that values data accuracy, consistency, and transparency.

We are committed to helping our customers navigate FSCS compliance, not by addressing regulations in isolation but by integrating them into a broader, strategic and more sustainable framework of account and customer-level data management. By doing so, we ensure compliance and protection for depositors whilst paving the way for a more resilient, trustworthy, and efficient banking sector.

 


Four Essential Tips to Build a Data Governance Business Case
https://www.datactics.com/blog/4-tips-on-how-to-build-a-data-governance-business-case/ (26 February 2024)

How to build a business case for data governance

In an era where data drives strategic decision-making, data governance and the quality of that data become increasingly vital.

Building a business case for data governance can bring a number of enterprise-wide benefits. This is especially true in banking and financial services, where the risk-focused mindset can sometimes overshadow the potential to become data-driven.

However, it is often a challenge to communicate the value of investing in a data governance and analytics programme.

Successful data governance programmes are influenced by more than the deployment of advanced technologies or methodologies. They are also determined by fostering an organisational culture that fundamentally prioritises data governance. Often referred to as ‘data literacy’, this focus on encouraging a data-driven culture helps ensure that better data management efforts are adopted and sustained over time. Consequently, this can lead to improved data quality, better adherence to rules, and smarter decision-making across the company.

In a recent roundtable in London with some of our customers, we gained first-hand insight into how they are tackling the challenge of fostering a company culture that values data governance. Since they are thought leaders in their fields, we thought we’d share some of their insights. We’ve broken these tips down into simple summaries below.

The Four Essential Tips

Here are four ways that our customers cultivate a company culture that prioritises data governance:

  1. Start with Data Quality
  2. Highlight Success Stories
  3. Use Positive Language
  4. Tap into the Human Side of Data Governance

1. Start with Data Quality

Our customers agreed that this is one of the most impactful steps. Data quality is the foundation, ensuring consistency and accuracy across the organisation’s data landscape. This is essential for any governance and analytics programme to succeed. This step helps make the benefits of data governance more apparent and relatable to all employees, as stakeholders see how data quality can enhance decision-making, reduce errors, and streamline processes. Equally, better data powers better decisions, more of which to follow…

2. Highlight Success Stories

When trying to gain buy-in internally, it’s important to be able to create a compelling story that your key stakeholders can relate to. Success can look different for every organisation, and it’s particularly important to shout about the wins, big or small. For one organisation, having proper data governance can drive efficiencies and profits. For another, it could result in more lives saved. Real-life examples of how improved data governance has led to better outcomes can be an excellent motivator for change.

3. Use Positive Language

The way data analytics and governance are talked about has the power to significantly influence key stakeholders. This can be as simple as talking about the opportunities and benefits of having a robust data governance programme, instead of framing it as something that’s necessary to comply with regulations. Compliance is critical, but so is growing your business; consequently, demonstrate the value your improved data quality is bringing in clear dashboards.

4. Tap into the Human Side of Data Governance

While it may be true that people will frequently resist change, it doesn’t have to derail your ambitions. To deal with this effectively, try to identify some of the areas of frustration felt by other teams across the organisation. To begin with, ask them about their daily work challenges. Oftentimes, these challenges are caused by underlying problems with data quality. Understanding this helps convince them of the value of investing more in data governance to make their day-to-day jobs easier. Our customers also commented on the value of having good interpersonal skills to work effectively with stakeholders and deal with push-back.

        Maintaining a Successful Data Governance Programme

        Once these initial steps have been taken, continue the conversation through ongoing education and training. Offering workshops, seminars, and online courses can help demystify data governance and analytics, making it more accessible across the business.

        Another way to sustain an enterprise data governance programme is by leveraging technology. User-friendly, no-code tools and platforms are a great way of democratising data governance, making it more accessible across the business. With AI, these tools can automate mundane tasks, extract valuable insights from the data, and ensure data accuracy. Accordingly, this makes it easier to encourage a company-wide culture that values data governance.

        Conclusion

        Fostering a company culture that values data governance is a multifaceted process. With this in mind, it’s worth seeing how our customers have gone about it. In general, they achieve buy-in by starting with data quality; leveraging the power of storytelling; providing continuous education; and embracing data management technologies. By focusing on these areas, organisations can ensure that their data governance efforts move beyond compliance requirements to become strategic advantages driving better decision-making and operational efficiency.

        How Datactics can help

        Looking for advice on how to build a business case for data governance within your organisation? This is something we’ve done for our clients.

        We have developed Datactics Catalyst, our professional services offering, to deliver practical support in your data strategy. 

        From augmenting your data team to working on specific data projects, delivering training or providing a short-term specialist to solve a specific data quality problem, let Datactics Catalyst accelerate your ambitions, help you increase data literacy and foster a data-driven culture.

        Have a look at our Catalyst page to find out more: www.datactics.com/

        5 Steps to Build the Case for Data & Analytics Governance
        https://www.datactics.com/blog/good-data-culture/5-steps-to-build-the-case-for-data-analytics-governance/ (6 October 2023)


        5 Steps to Build a Business Case for Data and Analytics Governance

        The task of creating a compelling business case for data and analytics governance is often complex and daunting. In this blog, we break down the big picture into five simple steps to follow and turbocharge your data management strategy.

         

        Why is data and analytics governance important?

        Data and analytics governance is crucial for organisations in a number of ways.

        • It ensures that the quality of information is maintained to internally agreed standards for accuracy, consistency, and reliability and that these measures (or ‘dimensions’) are consistent across the business;
        • It means that the business knows which data can be relied upon for making informed decisions, building trust in data-driven insights, and helping the business to grow;
        • It helps organisations comply with regulations and industry standards, ensuring proper data handling practices and minimising potential legal and reputational risks;
        • It enables better data access and sharing practices, fostering collaboration across departments and promoting a data-driven culture within the organisation.

        Overall, it plays a vital role in maximising the value of data assets, mitigating risks, and driving better business outcomes.

         

        What does a successful business case for data and analytics governance look like?

         

        “Data and analytics leaders often struggle to develop a compelling business case for governance. Those who succeed directly connect data and analytics governance with specific business outcomes — and then sell it.”

         

        – Gartner®, '5 Steps to Build a Business Case for Data and Analytics Governance That Even Humans Will Understand', by Saul Judah, 3 January 2023.

         

        GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.

         

        Making a business case simple and relatable should be the top priority of data leaders in developing their data strategy. Keeping it up-to-date and relevant should also be top of mind.

        This blog explores the five key steps set out by Gartner® in their research, “5 Steps to Build a Business Case for Data and Analytics Governance That Even Humans Will Understand”, looking at the areas that data leaders need to consider and providing suggestions on how to implement them.

         

        Step 1: Identify Which Business Outcomes Cannot Be Delivered Because of Poor or No Data and Analytics Governance

         

        Business and data leaders will find their tasks much easier if they can demonstrate the pain that the business will encounter if data and analytics governance is not implemented. Quantifying this can be both persuasive and informative.

        While stories that focus on the negative impacts can be considered a high-risk strategy, they are greatly effective in highlighting the urgency of needing to act. Identifying these pain points and challenges associated with unmanaged data and analytics is a key part of building a successful business case.

         

        Step 2: Connect Business Performance With Information Value Through Key Metrics

         

        Data-driven organisations require metrics to demonstrate the value of data and analytics governance. These organisations heavily depend on metrics, key performance indicators (KPIs), and data to make informed business decisions. Budgets held by senior executives are under constant pressure; data and analytics governance leaders will need business-critical metrics that drive actions to demonstrate why data and analytics governance deserve a bigger slice of the pie.

        Starting with metrics that matter to the business will help to ensure that the data is speaking the same language.

         

        Step 3: Outline the Scope and Footprint for Governance Through the People, Process and Technology Needed to Deliver the Prioritized Outcomes

         

        Nobody likes ‘scope creep’, and data and analytics governance is no different. A clear scope will help drive the budget to the areas that will make the biggest difference, in prioritising programs that address clear business outcomes.

        This will help identify the roles needed to deliver these outcomes, including specialists, engineers, owners of processes, and those responsible for the metrics that have already been identified.

        It follows that after identifying the scope, roles, and people, you are now in a position to start specifying the technology needed to help deliver it.

         

        Step 4: Define the Approach, Outputs, Timescales and Outcomes

         

        Many IT projects and programmes have a reputation for being over-large and taking too long. Creating a business case at this stage is the perfect opportunity to challenge that myth.

        There are multiple methodologies for implementing data governance, and assessing a variety of models will help immensely. Equally, seeking guidance from specialist consultancies (there are many!) will be of great use. At Datactics, we have hand-picked a few consultancies in the UK and USA who we trust to offer great recommendations. You can find more details on our ‘Partners’ pages.

        From a timing perspective, it’s important to balance your desire for quick results with the discipline to deliver; a healthy bias for action will serve you well at this early stage.

         

        Step 5: Complete the Financials for Your Proposal

         

        When it comes down to it, senior executives will expect a price to be paid for the work you are proposing. You can create a compelling business case for data and analytics governance by incorporating:

        • Demonstrations of cost saving, risk mitigation, and new business opportunities arising from better quality data;
        • Total cost of ownership for the proposed technological solution(s);
        • Return on investment;
        • A “Do-Nothing” option – with financial implications included.
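As a minimal illustration of the arithmetic involved, the sketch below combines invented placeholder figures into a total cost of ownership, a return-on-investment percentage and a 'do-nothing' exposure; every number is hypothetical and would be replaced by your own estimates.

```python
# Hypothetical financial summary for a governance proposal.
# All figures are invented placeholders purely to show the arithmetic.
tooling_and_services = 250_000   # year-one licences, implementation and support
internal_effort      = 100_000   # stewards, engineers and training
annual_benefit       = 550_000   # cost savings + risk reduction + new opportunities
do_nothing_exposure  = 400_000   # estimated annual cost of leaving data unmanaged

total_cost = tooling_and_services + internal_effort
roi = (annual_benefit - total_cost) / total_cost

print(f"Total cost of ownership: £{total_cost:,}")
print(f"Return on investment:    {roi:.0%}")
print(f"'Do nothing' exposure:   £{do_nothing_exposure:,} per year")
```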

         

        Uncovering the Root Causes of Data Quality Issues
        https://www.datactics.com/blog/uncovering-the-root-causes-of-data-quality-issues/ (25 September 2023)


         

        We all know data quality issues when we see them. They can often impair the ability of an organization to work efficiently and comply with regulations, and they make it harder to generate any real business value from messy data.

        Rather than simply measuring and patching up issues, we help our clients understand why issues are surfacing by identifying the root cause and fixing it at the source.

        To some, this concept may seem like a pipedream. But many of our clients are recognizing the true value that this brings. 

        Recently we have been exploring industry opinion on this, with Kieran Seaward taking to the stage at FIMA US in Boston earlier this year to host two roundtables on the topic: “Uncovering the Root Causes of Data Quality Issues and Moving Beyond Measurement to Action”.

         During these roundtable discussions, data management practitioners from diverse backgrounds and industries (and with considerable experience in the field) shared their insights on dealing with poor data quality. Participants had the opportunity to learn from each other’s experiences and explore the actions they have taken to address this challenge. 

        We were grateful for such candid and open conversation around what is a challenging topic. We wanted to share some of the key themes and insights that resonated with us during the sessions to help you get started:

        1. Proactive (not reactive) Data Quality Management

        Historically, data quality management has been viewed as a reactive measure for fixing bad data that has negatively impacted a report or a decision. Now, with advancements in capabilities and technology, firms should look to become proactive and try to prevent issues from occurring in the first place; this will help restrict downstream impact on critical data elements.

        In other words, prevent the fire from starting rather than stopping the spread. 

        But how can you achieve this? There are a number of key steps. 

        • Firstly, define data quality metrics and establish baseline measurements by setting targets for each metric and implementing a monitoring process. 
        • Then, you can conduct regular assessments to measure progress and develop improvement processes. 
        • Finally, implement reporting and visualization mechanisms to communicate data quality measurement; this is important for highlighting the impact (and ROI) to business teams and senior leadership, and can be continuously iterated and refined as necessary.
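As a minimal sketch of the first step, and assuming a pandas DataFrame of customer records, the snippet below measures a simple completeness metric per column and compares it against an agreed target; the column names, sample data and the 95% target are invented for illustration.

```python
# Hypothetical sketch: measure per-column completeness and compare it
# against an agreed baseline target.
import pandas as pd

df = pd.DataFrame({
    "customer_id": ["C1", "C2", "C3", "C4"],
    "email":       ["a@x.com", None, "c@x.com", "d@x.com"],
    "postcode":    ["BT1 1AA", "BT2 2BB", None, None],
})

target_completeness = 0.95  # baseline agreed with the business

completeness = df.notna().mean()   # share of populated values per column
for column, score in completeness.items():
    status = "OK" if score >= target_completeness else "BELOW TARGET"
    print(f"{column}: {score:.0%} complete ({status})")
```

The same scores, captured on a schedule, become the baseline and trend data used in regular assessments and management reporting.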
        2. Automation of manual processes

        Automation plays a vital role in modernizing approaches to data quality management. Gone are the days when data scientists had to spend 80% of their time wrangling data to ensure it is fit for purpose. By using advanced techniques such as artificial intelligence, machine learning, and statistical modeling, practitioners can reduce the manual effort of boring, repetitive tasks and become more proactive in how they manage data quality.

        Some technologies in the market offer automated profiling and recommended data quality rules for validation, cleansing, and deduplicating based on the column headers (metadata) as well as the underlying values. These tasks are often performed by writing complicated programming scripts, which do not scale and can take considerable time. By automating this process, technical resources can be reallocated to more value-adding activities.
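As a rough, hypothetical illustration of that idea (not a description of how any specific product works), the sketch below suggests a candidate validation rule from a column's name and a handful of sample values:

```python
# Hypothetical sketch: suggest a validation rule from column metadata and values.
import re

def suggest_rule(column_name: str, sample_values: list) -> str:
    name = column_name.lower()
    if "email" in name:
        return "validate against an email pattern such as r'^[^@\\s]+@[^@\\s]+\\.[^@\\s]+$'"
    if "date" in name or "dob" in name:
        return "validate as a date in an agreed format (e.g. ISO 8601)"
    if all(re.fullmatch(r"-?\d+(\.\d+)?", str(v)) for v in sample_values if v is not None):
        return "validate as numeric and check an expected range"
    return "check completeness and flag unexpected characters"

print(suggest_rule("customer_email", ["a@x.com", "b@y.org"]))   # email rule
print(suggest_rule("account_balance", ["100.50", "2500"]))      # numeric rule
```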

        3. Root cause analysis of Data Quality issues

        With an effective data quality measurement and monitoring process in place – which is by no means a trivial exercise to implement – you can start to identify trends of data quality breaks and act upon them. 

        As a reference point, it’s helpful to consider the Five Ws:

        What data quality break has occurred?

        Where has the data quality issue occurred or surfaced? Has the DQ issue occurred at multiple points in the journey or propagated through other systems?

        When is the break occurring?

        Who is responsible for this element of information? Who is the data steward or data owner?

        Why is it occurring? Hopefully, the previous four questions have shed some light on the reasons for the issue.

         If you can accurately know the answer to each of these, you are in a good position to resolve, or fix, that data quality issue. 

        AI can also help users to continuously monitor data quality breaks over time. By doing so, you can generate a rich set of statistics that enables analysis of data quality breaks and identify relationships between issues. This helps users predict future breaks, predict break resolution times, and understand the downstream impact of breaks.
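As a simple, hypothetical illustration of monitoring breaks over time, the sketch below compares recent daily break counts against an earlier baseline and flags rules that are trending upwards; the rule names, counts and 1.5x threshold are invented.

```python
# Hypothetical sketch: flag data quality rules whose break volume is trending up.
daily_breaks = {
    "missing_postcode":   [3, 4, 2, 5, 9, 12, 15],
    "duplicate_customer": [7, 6, 8, 7, 6, 7, 8],
}

for rule, counts in daily_breaks.items():
    baseline = sum(counts[:-3]) / len(counts[:-3])   # average of earlier days
    recent   = sum(counts[-3:]) / 3                  # average of the last 3 days
    if recent > 1.5 * baseline:
        print(f"{rule}: breaks trending up ({baseline:.1f} -> {recent:.1f} per day)")
```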

        4. Remediation

        Remediation is uniquely important in the data management process because it does something about the problems being reported. With a comprehensive understanding of where and why breaks are occurring, you have the opportunity to put out that fire and fix your broken data. 

        Some people are understandably hesitant about fixing data, but without acting, the rest of the process remains passive. 

        We do not believe in handing off the responsibility to another team or system – but instead taking action to deal with and fix the breaks that have surfaced. 

        We are currently working with a customer to fix those breaks at source, using a federated approach to solving data quality issues in the business by utilizing SME knowledge of what good looks like. 

        This part of the story, where you are doing something proactive about making the data better, is the element that is often missing from solutions or processes that spend all their time noticing breaks or passively monitoring systems. 

         

        Our recent engagement with industry experts at FIMA US in Boston reinforced the significance of proactive data quality management. With advancements in capabilities and technology, firms can now take a proactive approach. By defining data quality metrics, automating manual processes, conducting root cause analysis, and implementing remediation strategies, businesses can enhance the quality of their data and maximize its impact. 

        We believe that taking ownership of data quality and embracing a proactive approach is the key to harnessing the full potential of your data for business success. In a world where data is a critical asset, it’s critical to move beyond merely noticing data quality breaks and to start actively working towards making data better. 

         

        Kieran will be discussing data remediation at the Data Management Summit in New York on 28th September, speaking on a panel entitled ‘How to scale enterprise data quality with AI and ML’. You can also contact us to learn more here. 

         

        Data Quality Management – Gain The Competitive Edge
        https://www.datactics.com/blog/data-quality-management-gain-the-competitive-edge/ (25 April 2022)


        Managing data assets has become a way to gain competitive advantage for businesses and is a keen focus area for regulatory bodies around the globe. With the emergence of data standards and a plethora of technology businesses, the data management landscape has never been so competitive. 

        The following blog will look at how to maximise your data assets and gain that competitive edge by developing a data quality management framework, making use of best-of-breed technologies.

        Tech solutions for data quality management

        Back in the 16th century Sir Francis Bacon wrote that “knowledge itself is power”, conveying the idea that having knowledge is the foundation of influence, reputation, and decision-making.  In modern times, the information we use to acquire knowledge is highly digitalised, often in a collection of organised data points. Every individual and every organisation in the world is faced with the challenge of managing data. The bigger the organisation, typically the more complex the challenges are in managing data used to make business decisions. 

        How do you gain the competitive edge in such a fast-evolving industry? Arguably, through a mix of technologies and process accelerators aimed at solving the right problems.

        The goal of best-of-breed technology should be to automate the manual, time-consuming tasks required to govern enterprise data (such as data profiling and data cleansing), whilst streamlining anything that is not possible or desirable to automate. The automation challenge can be solved by deploying supervised machine learning solutions, which should be focused on solving problems identified in the data governance framework and should adhere to data stewardship and data security principles. Explainable AI with person-in-the-loop feedback and decisions is a sensible way to achieve this, as users can understand the rationale behind a model’s prediction.

        In terms of Data Quality, the aim is to provide a self-driving solution that can cope with sensitive data in big volumes and at scale. In order to gain a competitive edge, it should be interconnected with other players in the data management ecosystem, such as governance, lineage and catalogue platforms. In order to add value to an organisation’s data assets, data stewards from all data domains should feel empowered to get on top of quality issues and make use of powerful tools to perform remediation.

        Let’s explore some of the conceptual synergies that exist between actors in the data management ecosystem. A data governance framework is the theoretical view of how data should be managed. Integration with data quality and lineage provides the practical evidence of guidelines being followed or overlooked.

        Data ownership is a fundamental aspect driven by governance, the definition of which is a must-have success criterion for any data quality initiative. Governance and data catalogues also set the glossary and terminology standards to be used across the data management tools. 

        Looking at data quality and lineage, these two players are complementary in their ability to track data records in motion, from data source to output. Quality of the data ultimately means acknowledging that it is ‘fit-for-purpose’, and lineage tools are powerful in visualising the ramifications of where data originally comes from and what it is being used for. Data quality in motion is compelling as it looks at how the data flows through your organisation, identifying where problems originate (root cause analysis) and giving a clear picture of the ultimate impact of DQ issues.

        Why we do data quality

        We’re now going to shift the focus from “how we do data quality” on to “why we do data quality”. In today’s marketplace, technology vendors are always faced with the challenge of quantifying their value proposition. In other words, quantifying the “cost of data quality”. 

        It is a complex function made up of multiple variables, but here is an attempt at summarising the key concept. Whilst good quality data has the potential to improve business operations and outcomes, poor data quality is capable of hindering this (or worse). Below are a number of potential outcomes if bad data is allowed to manifest within an organisation:

        • Regulatory risks. Depending on the industry, there will often be considerable risk in terms of monetary fines and/or operational restrictions, according to the severity of the data quality issues. Importantly, the risks are not only associated with the actual data problems, but also with the lack of processes. For example, having a data governance program in place to identify and rectify potential data problems is invaluable to a financial institution battling with regulation and compliance, in order to avoid costly fines (e.g. GDPR).
        • Opportunity cost. This refers to the ability (or inability) to make use of information in order to gain competitive advantage. For example, this could refer to the reduced ability to cross-sell products to the customer base because of missing connections between customer and product data. Sub-optimal data assets can therefore have costly repercussions for a business losing out on available opportunities.
        • Operational risk. If a business or organisation continues to work with poor quality data, there is a chance it could fall victim to operational risks. The failure to identify a risk factor such as fraudulent behaviour and/or conflicts of interest could be due to a lack of data integration, missing links or unclear hierarchies in the data. This is a typical focus area for Anti-Money-Laundering (AML) and KYC projects.
        • Severity of data quality issues. This simply refers to the capability of a data owner to understand the business impact of a data quality issue on an operation. This is in itself a difficult variable to estimate, for it needs to take into consideration the context around the faulty information and the criticality (size) of the problem for the operation. To give a practical example: a data quality issue associated with a large investment is more severe than the very same issue associated with a smaller investment, as the short sketch below illustrates.
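The sketch below is a tiny, hypothetical way of expressing that idea: severity is approximated by weighting the type of issue by the monetary exposure it touches. The weights and figures are invented purely for illustration.

```python
# Hypothetical sketch: score severity as issue weight multiplied by exposure.
issue_weights = {"missing_identifier": 0.9, "stale_address": 0.3}

def severity(issue_type: str, exposure: float) -> float:
    return issue_weights[issue_type] * exposure

# The same issue type scores as more severe on the larger investment.
print(severity("missing_identifier", 10_000_000))  # 9000000.0
print(severity("missing_identifier", 250_000))     # 225000.0
```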

        Overall, the ability to gain a competitive edge will hinge on the existence of a mix of technological and process-driven innovation (=how we do it), correctly focused on addressing the key problems in data management (=why we do it!). There are plenty of business benefits from creating a data governance framework and mastering data quality, from reduced risk to greater business opportunities and improved operations.

        If you are developing a data management framework or are seeking to understand our data quality solution, you can read more about it in our datablog. Similarly, if you would like to speak directly to us at Datactics, you can reach out to Luca Rovesti.

        The post Data Quality Management – Gain The Competitive Edge appeared first on Datactics.

        Data Health: Why does it matter? | Luca Rovesti https://www.datactics.com/blog/good-data-culture/data-health-why-does-it-matter/ Fri, 16 Jul 2021 09:03:01 +0000 https://www.datactics.com/?p=15148 Introduction  Getting into shape is a big focus of summer advertising campaigns, where we’re all encouraged to eat, drink and do the right things to keep our bodies and minds on top form. You’ll be relieved to read that at Datactics we’re far more concerned about the health of your data, so open that packet of cookies, pour yourself […]



        Introduction 

        Getting into shape is a big focus of summer advertising campaigns, where we’re all encouraged to eat, drink and do the right things to keep our bodies and minds on top form. You’ll be relieved to read that at Datactics we’re far more concerned about the health of your data, so open that packet of cookies, pour yourself another coffee, and let Luca Rovesti, Head of Product at Datactics, focus on your data wellness! 

        Why data health matters 

With data being relied upon more heavily than ever before, a clearer distinction is emerging between simply being surrounded by data and actually using data to make business decisions. It follows that these data needs drive a correspondingly greater need to keep that data healthy. It may be well known that healthy data is complete, cleansed and compliant, but the problem is that the health of data is often not fully understood, and this lack of understanding fuels a lack of trust. Trust in data is everything as more and more of the business is built on it.

To fully grasp the health of your data, you must be able to prove its validity and completeness. Once you can, analytics can be provided to key decision-makers who can trust the data behind their critical business plans. In this blog, we will dig deeper into what's required to achieve data health, how you can measure it, and the importance of understanding what you are measuring.
         

        Willpower needed! 

Firstly, what do we mean by data wellbeing? Personal wellbeing has social, emotional and psychological dimensions; in much the same way, data wellbeing has tooling and strategy. And just as personal wellbeing requires you to focus on those factors and commit to specific courses of action, you need to commit to and execute on your data health strategy if you're going to see a difference.

For instance, if the commitment is missing, that typically means tooling is brought in but the right people are not committed to making the data quality initiative work. What tends to happen is that you fail to hit the target on the value-adds: the real-world business results that matter to the people you need to convince to pay for and invest in the initiative. You could have a set of data quality rules that you think make sense, but nobody really values their outcome; or you could have an implementation that does not follow the practices or guidelines that the end-users of the data are willing to commit to. If it is difficult to get buy-in because you are trying to change their way of working, we would align the DQ initiative with how the business users are already working, with the aim of giving valuable insight that makes sense in their existing context.

As with everything, then, commitment is vital. One of the major differentiators Datactics brings to the table is that we are not just about measurement but also about remediation: doing something material and meaningful for the business with the metrics and exceptions the system has identified. We very often play the role of a trusted partner, working alongside the business to enable them to make the necessary changes to the data and to how they view it.

Ultimately, it's about our client taking that step, armed with our support, technology and coaching, to commit to fixing the broken data. Some of our clients have taken the 'carrot' approach, for example publishing on an internal site the office with the best data quality compliance score across the whole of the UK. They hold metrics on how many exceptions have been remediated and give an honourable mention to those doing something about them. Other firms we work with are going down the 'stick' route, publishing metrics to point out those lagging behind on these initiatives. Either way, the point is that accountability for who has to do something with the data drives the commitment to actually do it.

        Knowing what to measure: it’s a business problem  

One of the biggest risks – if not the biggest risk – in a data quality initiative is a disconnect emerging between the parties involved. On one side are the people tasked with the project: creating the metrics and the technological solution that monitors the data. On the other side are the people who use that data and the purpose they have for it. It is therefore critical to make the measurements relevant to why the data is being used. The only way to do that is to gain an understanding of what the data is and its ultimate purpose, so that the whole initiative can be aligned with that purpose.

        So how can you measure data health? 

It is important to remember that data quality is only one aspect of a data capability framework; there are some early steps required around overall governance and infrastructure. Governance is the more conceptual part, where the logical framework for what you want to do with the data needs to be nailed down; then there is the practical aspect of doing something about it, and that starts with having the infrastructure in place. We are, of course, a technology solution, so we need hardware to execute on, and the correct connectivity from that hardware to the various points we need to measure. From there we can start to measure data health – in motion, and at the various required points of the data journey.

Our solution takes a highly configurable, rules-based approach, so the rules can be as granular or as business-focused as the client wants, and the metadata around each rule describes what it is designed to measure. Almost inevitably, there then needs to be a reporting layer that displays metrics understandable to somebody at executive or business level. The Datactics platform does just that, converting rule results into business-driven areas such as critical data elements, regulatory compliance and source-system scores. Combined, these give strong insight into how data quality relates to business activities across the lines of business, and they make it possible to prioritise the resolution of the most critical issues. This makes data quality truly relevant to every business user, from the CEO to entry level.
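As an illustration only – not the Datactics rule syntax – a handful of rules carrying business metadata could be aggregated into dimension scores along these lines. The rules, metadata tags and records are invented for the example.

```python
from collections import defaultdict

# Hypothetical rule definitions: the metadata says which business view each rule
# rolls up to (critical data element, regulation, source system).
RULES = [
    {"name": "lei_populated", "check": lambda r: bool(r.get("lei")),
     "metadata": {"cde": "Legal Entity Identifier", "regulation": "MiFID II", "source": "CRM"}},
    {"name": "country_iso2", "check": lambda r: len(r.get("country", "")) == 2,
     "metadata": {"cde": "Country of Risk", "regulation": "BCBS 239", "source": "CRM"}},
]

records = [
    {"lei": "LEI-EXAMPLE-0001", "country": "GB"},
    {"lei": "", "country": "FR"},
]

# Score each business dimension as the pass rate of all rules tagged with it.
totals, passes = defaultdict(int), defaultdict(int)
for record in records:
    for rule in RULES:
        ok = rule["check"](record)
        for dimension in rule["metadata"].values():
            totals[dimension] += 1
            passes[dimension] += ok

for dimension in totals:
    print(f"{dimension}: {passes[dimension] / totals[dimension]:.0%}")
```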

        Changing attitudes towards data health 

As Datactics has been in this business for years, it's reasonable to say we have developed a solid approach to the continuous measurement of data quality and data health. Clients now want to take the next step, and usually that next step is data health 'in motion': wherever the data flows throughout the enterprise. We have all these metrics gathered by running the process over time, so now we must help the business establish what they can do with them, for instance getting insight into patterns showing how the data is being fixed. Clients are also talking about moving averages, and the time and effort required to correct an error, as key indicators of a culture of healthy data.

The important thing to remember is that perfection doesn't exist. Instead, there needs to be a concept of priority, asking things like: why are you measuring this data? Will a key rule impact a downstream operation, or will it ensure the data is in good shape for the future? What will make an operational difference and needs to be factored into prioritising the actions taken on the metrics? As clients are increasingly keen to take this next step, we are equally keen to help them achieve data health.

        To continue this conversation, you can reach out to Luca, Head of Product at Datactics.

        About Datactics 

        Our next gen, no-code toolbox is built with the business user in mind, allowing business subject matter experts to easily measure data to regulatory & industry standards, fix breaches in bulk and push into reporting tools, with full visibility and audit trail for Chief Risk and Data Officers.  

If you want to find out more about how the Datactics solution can help you to automate the highly manual issue of data matching for customer onboarding, then please reach out to us! Or find us on LinkedIn, Twitter or Facebook.

        The post Data Health: Why does it matter? | Luca Rovesti appeared first on Datactics.

        Tackling Practical Challenges of a Data Management Programme https://www.datactics.com/blog/good-data-culture/good-data-culture-facing-down-practical-challenges/ Mon, 03 Aug 2020 13:58:40 +0000 https://www.datactics.com/?p=5916 “Nobody said it was easy” sang Chris Martin, in Coldplay’s love song from a scientist to the girl he was neglecting. The same could be said of data scientists embarking on a data management programme! In his previous blog on Good Data Culture, our Head of Client Services, Luca Rovesti, discussed taking first steps on […]


“Nobody said it was easy” sang Chris Martin, in Coldplay’s love song from a scientist to the girl he was neglecting. The same could be said of data scientists embarking on a data management programme!


        In his previous blog on Good Data Culture, our Head of Client Services, Luca Rovesti, discussed taking first steps on the road to data maturity and how to build a data culture. This time he’s taking a look at some of the biggest challenges of Data Management that arise once those first steps have been made – and how to overcome them. Want to see more on this topic? Head here.

        One benefit of being part of a fast-growing company is the sheer volume and type of projects that we get to be involved in, and the wide range of experiences – successful and less so – that we can witness in a short amount of time.

        Without a doubt, the most important challenge that rears its head on the data management journey is around complexity. There are so many systems, business processes and requirements of enterprise data that it can be hard to make sense of it all.

Those who get out of the woods fastest are the ones who recognise that there is no magical way around the things that must be done.

        A good example would be the creation of data quality rule dictionaries to play a part in your data governance journey.


Firstly, there is no way you will know what you need to do as part of your data-driven culture efforts unless you go through what you have already got.

Although technology can lend a helpful hand with the heavy lifting of raw data, from discovery through to the categorisation of data sets (data catalogues), the definition of domain-specific rules always requires a degree of human expertise and an understanding of the exception management framework.

Subsequently, getting data owners and technical people to contribute to a shared plan, one that takes into account how the data is used and how the technology will fit in, is a crucial step in detailing the tasks, problems and activities that will deliver the programme.

The clients we have been talking to are experts in their subject areas. However, they don’t know what “best of breed” software and data management systems can deliver, and sometimes they find it hard to express what they want to achieve beyond a light-touch digitalisation of a human or semi-automated machine-learning process.


        The most important thing that we’ve learned along the way is that the best chance of success in delivering a data management programme involves using a technology framework that is both proven in its resilience and flexible in how it can fit into a complex deployment.

        From the early days of ‘RegMetrics’ – a version of our data quality software that was configured for regulatory rules and pushing breaks into a regulatory reporting platform – we could see how a repeatable, modularised framework provided huge advantages in speed of deployment and positive outcomes in terms of making business decisions.

        Using our clients’ experiences and demands of technology, we’ve developed a deployment framework that enables rapid delivery of data quality measurement and remediation processes, providing results to senior management that can answer the most significant question in data quality management: what is the return on investing in my big data?

This framework has left us perfectly equipped to provide expertise on the technology side that marries with our clients’ business knowledge (a minimal sketch of how these pieces might fit together follows the list below):

• Business user-focused, low-code tooling that connects data subject matter experts with the power to build rules and deploy projects
        • Customisable automation that integrates with any type of data source, internal or external
        • Remediation clinic so that those who know the data can fix the data efficiently
        • “Chief Data Officer” dashboards provided by integration into off-the-shelf visualisation tools such as Qlik, Tableau, and PowerBI.
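How these building blocks might hang together can be pictured with a minimal, hypothetical sketch; the function names, source and rule below are invented for illustration and are not the Datactics API.

```python
def read_source(connection_string: str) -> list[dict]:
    """Stand-in for a connector to any internal or external data source."""
    return [{"customer_id": "C001", "postcode": "BT1 1AA"},
            {"customer_id": "C002", "postcode": ""}]


def apply_rules(records: list[dict], rules: dict) -> list[dict]:
    """Run every rule over every record and collect the exceptions."""
    return [{"record": r, "failed_rule": name}
            for r in records for name, rule in rules.items() if not rule(r)]


def send_to_remediation(exceptions: list[dict]) -> None:
    """Stand-in for routing exceptions to the people who can fix the data."""
    for e in exceptions:
        print("Remediation queue:", e["record"]["customer_id"], "-", e["failed_rule"])


def publish_metrics(records: list[dict], exceptions: list[dict]) -> None:
    """Stand-in for pushing a summary score to a visualisation tool."""
    score = 1 - len(exceptions) / len(records)  # simple score: assumes one rule per record
    print(f"Dashboard: {score:.0%} of records passed all rules")


records = read_source("internal://crm")  # hypothetical connection string
exceptions = apply_rules(records, {"postcode_populated": lambda r: bool(r["postcode"])})
send_to_remediation(exceptions)
publish_metrics(records, exceptions)
```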

        Being so close to our clients also means that they have a great deal of exposure and involvement in our development journey.

We have them ‘at the table’ when it comes to feature enhancements, partnering with them rather than selling and moving on, and involving them in our regular Guest Summit events to foster a sense of the wider Datactics community.

        It’s a good point to leave this blog, actually, as next time I’ll go into some of those developments and integrations of our “self-service data quality” platform with our data discovery and matching capabilities.

Click here for the latest news from Datactics, or find us on LinkedIn, Twitter or Facebook.

        The post Tackling Practical Challenges of a Data Management Programme appeared first on Datactics.

        The Road to Data Maturity https://www.datactics.com/blog/good-data-culture/good-data-culture-the-road-to-data-maturity/ Thu, 23 Apr 2020 12:31:56 +0000 https://www.datactics.com/good-data-culture-the-road-to-data-maturity/ The Road to Data Maturity – Many data leaders know that the utopia of having all their data perfect and ready to use is frequently like the next high peak on a never-ending hike, always just that little bit out of reach. Luca Rovesti, Head of Client Services for Datactics, hears this all the time […]


        The Road to Data Maturity – Many data leaders know that the utopia of having all their data perfect and ready to use is frequently like the next high peak on a never-ending hike, always just that little bit out of reach.


        Luca Rovesti, Head of Client Services for Datactics, hears this all the time on calls and at data management events, and has taken some time to tie a few common threads together that might just make that hike more bearable, and the peak a little closer, from the work of data stewards to key software features.

        Without further ado: Luca Rovesti’s Healthy Data Management series, episode 1:

        Lots of the people we’re speaking with have spent the last eighteen months working out how to reliably measure their business data, usually against a backdrop of one or more pressures coming from compliance, risk, analytics teams or board-level disquiet about the general state of their data. All of this can impact the way business decisions are made, so it’s critical that data is looked after properly. 

Those who have been progressing well with their ambition to build a data-driven culture are usually people with a plan, the high-level buy-in to get it done, and a good level of data stewardship already existing in the organisation. They've now moved their data maturity further into the light and can see what's right and what's wrong when they analyse their data. In order to build a data culture, they've managed to get people with access to data to stand up and be counted as data owners or chief data officers. This enables business teams to take ownership of large amounts of data and encourages data-driven decision making. Now, they are looking at how they can push the broken data back to be fixed by data analysts and data scientists in a fully traceable, auditable way, in order to improve the quality of the data. The role of the data scientist is paramount here, as they have the power to own and improve the organisation's critical data sets.

        The big push we’re getting from our clients is to help them federate the effort to resolve exceptions. Lots of big data quality improvement programmes, whether undertaken on their own or as part of a broader data governance plan, are throwing up a high number of data errors. The best way to make the exercise worthwhile is to create an environment where end-users can solve problems around broken data – those who possess strong data literacy skills and know what good looks like. 


As a result, we've been able to accelerate the development of features for our clients around federated exception management, integrating our Data Quality Clinic with dashboarding layers for data visuals such as Power BI, Qlik and Tableau. We're starting to show how firms can use the decisions being made on data remediation as a vast set of training data for machine learning models, which can power predictions on how to fix data and cut the decision-making time that manual reviews require.
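The "remediation decisions as training data" idea can be illustrated with a deliberately simple, hypothetical sketch: here just a frequency model that suggests the fix stewards most often applied to a given field and error type. The field names, error codes and fixes are invented, and a real deployment would use a proper machine learning model with far richer features.

```python
from collections import Counter, defaultdict

# Hypothetical log of past remediation decisions made by data stewards.
history = [
    {"field": "country", "error": "full_name_instead_of_iso", "fix": "map_to_iso2"},
    {"field": "country", "error": "full_name_instead_of_iso", "fix": "map_to_iso2"},
    {"field": "country", "error": "full_name_instead_of_iso", "fix": "manual_review"},
    {"field": "lei", "error": "missing", "fix": "lookup_gleif"},
]

# "Train": count which fix stewards chose for each (field, error) combination.
model = defaultdict(Counter)
for decision in history:
    model[(decision["field"], decision["error"])][decision["fix"]] += 1


def suggest_fix(field: str, error: str):
    """Suggest the historically most common fix, or None if the case is unseen."""
    counts = model.get((field, error))
    return counts.most_common(1)[0][0] if counts else None


print(suggest_fix("country", "full_name_instead_of_iso"))  # -> "map_to_iso2"
```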

It's a million light-years away from compiling lists of data breaks into Excel files and emailing them around department heads, and it's understandably in high demand. That said, a small number of firms we speak to are still coming to us because the demand is to do data analytics and make their data work harder for them, yet they simply can't get senior buy-in for programmes to improve the data quality. I feel for them, because a firm that can't build a business case for data quality improvement is losing the opportunity to make optimal use of its data assets and is adopting an approach prone to inefficiency and non-compliance risk.


Alongside these requests are enquiries about how data teams can get from their current position – where they can't access or use the programming-language-focused tools in IT and so have built rules themselves in SQL, relying on those with computer science expertise – to a place where they have a framework for data quality improvement and an automated process to implement it. I'll go into that in more detail in the next blog.

For more on Data Maturity and Information Governance, click here for the latest news from Datactics, or find us on LinkedIn, Twitter or Facebook.

        The post The Road to Data Maturity appeared first on Datactics.

        Run (E)DMC – Where Data Quality & Data Governance Collides https://www.datactics.com/blog/good-data-culture/run-edmc-data-governance/ Thu, 21 Feb 2019 12:39:26 +0000 https://www.datactics.com/run-edmc/ Towards the end of 2018, Head of Client Services, Luca Rovesti was invited to speak at the Enterprise Data Management Council (EDMC)’s expanded Member Briefing sessions on Data Governance. We took some time to catch up over an espresso in a caffè around the corner from his swanky new Presales Office in Milan (ok, it […]


        Towards the end of 2018, Head of Client Services, Luca Rovesti was invited to speak at the Enterprise Data Management Council (EDMC)’s expanded Member Briefing sessions on Data Governance.

We took some time to catch up over an espresso in a caffè around the corner from his swanky new Presales Office in Milan (ok, it was over email) and asked him to share what he spoke about at the events: how banks and financial firms can truly make a standard like the Data Management Capability Assessment Model (DCAM) work in practice.

[Ed] Ciao Luca! Quindi cosa hai fatto negli eventi EDMC? (So, what did you get up to at the EDMC events?)

        Ciao! But for the benefit of our readers perhaps we’ll do this in English?

        Actually that’s easier for us too.

Great! At Datactics I have been the technical lead on a number of data management projects, which puts me in a good position to give the "practitioner's view": how does the DCAM framework play out in practice? There is a characteristic of Datactics' business model which I believe makes our story interesting: we are a technology provider AND a consulting firm. This combination means we are not only able to advise on data programmes and overall strategy, but also to implement our own advice. We are not just talking about "how it should be"; we are playing an active role in the implementation phase.

        I was able to live through very successful projects…and less successful ones! Looking back at these experiences to determine what made certain interactions more successful than others will be the basis of this blog post.

        Ok, so what is the DCAM framework anyway?
        The EDMC describes the Data Management Capability Assessment Model (DCAM) as “…the industry-standard, best practice framework designed to assist today’s information professional in developing and sustaining a comprehensive data management program.” You can find more out about it on their website, here.

        Can you tell us a bit more about the data management frameworks you’ve encountered?
I'd like to borrow some of DCAM's terminology to describe how the building blocks of this framework typically interact with each other, as became evident during our conversations.
When Datactics comes into the game, the high-level group data strategy is already laid out, and we are brought into the picture precisely because someone within the organisation is facing challenges in operationalising that strategy. This has been the case for every large organisation we have worked with: in all my engagements with Tier 1 banks, I have yet to see one without a theoretical framework for a data strategy. Clearly the BCBS 239 principles, published in 2013, had a prominent role in shaping these (you could even spot a bit of copy/pasting from the principles if you read the frameworks carefully!), but the idea is there to accurately manage the data lifecycle and capture the meaning of the information and the relationships between different data points.

        So how does this compare with practice?
Translating theory into practice – the implementation phase – is where things get challenging. There are real business cases to be solved, and data programmes are set up for this purpose. Solving the business case is critical to proving the return on investment of data management activities: this is where they can bring measurable efficiency and avoid regulatory fines, operational losses and dreaded capital lockdowns. Data programmes have people responsible for making things happen; how interconnected those people are within the organisation, and how acquainted they are with the real business pain points, can make a material difference to the success of the projects.

What we see is that there are two complementary approaches to servicing the data programme:

• Data Governance is all about creating a data dictionary and understanding its lineage, ownership and entitlement. This is the top-down approach, and it defines “how things should be”.
        • Data Quality is all about measurement, accountability and remediation of data, according to test conditions and rules. This is the bottom-up approach and it defines “how the data actually is”.

        Do these two approaches, Data Quality and Data Governance, intersect neatly then?
        We often get asked about the relationship between Data Quality and Data Governance. Can one exist without the other? Which one is more important? Which one should start first?
        I mentioned I was going to give the “practitioner view” so I’ll answer from past experience: in the most successful projects I have seen, they were both there. Two parallel activities, with a certain degree of overlap, complementing one another. Governance with policies and definitions, data quality with real metrics on how the data is evolving.

I like to think that governance is like a map, and data quality is the position marker that tells us where we are on the map, making it a lot more useful. The technology architecture is, of course, where all of this logic plugs in, connecting it with real information in the client's systems or from external, open or proprietary sources.

        Can you give us an example of a live deployment?
        Sure! As promised, let’s see how things played out in a successful interaction. We are typically engaging with organisations to enable the Data Quality part of the data management strategy. This means being instrumental in measuring, understanding and improving the information at the basis of data programmes, and I cannot stress enough the importance of the connection with the underlying business case.

I have seen different models work: sometimes there are separate programmes, each supporting a particular business case, and sometimes a single programme services them all.
        In our most successful interactions at Tier 1 investment banks or major European banks, we could leverage the data quality initiative to support a number of key business activities, such as regulatory reporting, KYC and AML, because all of these activities rely on complete, consistent and accurate data.

        Is there anything else you’d add, something that connects the most successful projects perhaps?
        Yes, definitely. The single most important thing that can drive a successful project is buy-in from the technical side. The best implementations we have worked on have depended on a close connection between technical (i.e. IT) teams and the owners of the business case from an implementation point of view. It is extremely rare that a financial institution would go through a level of technical restructuring that would allow a complete change of core technology just to support a business case for regulatory reporting, for example.

        In the vast majority of the cases the implementation is “plug-and-play” with the existing technology architecture, which suits our open architecture and deployment approach completely; and let’s face it: banks’ IT, DBAs and infrastructure teams are always swamped! More than once it happened to me that it was all ready to go in a project except…for that one server…

But you roll with these punches, and you work on good relationships with the internal teams to help smooth out these wrinkles, because the downstream benefits of perfected, cleansed data are almost limitless in their use cases. I mean, these guys know their stuff and care hugely about the quality of their systems and data. So yes, I'd say the perfect grounds for getting projects off the ground and into production in rapid time are financial institutions where there is a close connection between the senior teams owning the business case, those responsible for the technology architecture, and the downstream users of the data who can convey its business meaning and measurable benefits.

        Thanks, Luca! Where can the kindly folk of the internet find out more about you, or maybe meet you if they’re so inclined?
        Well, they can always email me and I’ll gladly meet up. I’m going to Amsterdam in March as part of the Department for International Trade’s Fintech Mission to the Netherlands, so I’d be more than happy to catch up in-person there if that suited. I’m pretty easy-going, especially if coffee’s involved…

        The post Run (E)DMC – Where Data Quality & Data Governance Collides appeared first on Datactics.

        Who are you really doing business with? https://www.datactics.com/blog/good-data-culture/who-are-you-really-doing-business-with/ Fri, 04 Jan 2019 15:51:45 +0000 https://www.datactics.com/who-are-you-really-doing-business-with/ Entity Match Engine for Customer Onboarding. Challenges and facts from EU banking implementations.  Customer onboarding is one of the most critical entry points of new data for a bank’s counterparty information. References and interactions with internal systems, externally sourced information, KYC processes, regulatory reporting and risk aggregation are all impacted by the quality of information […]



        Entity Match Engine for Customer Onboarding. Challenges and facts from EU banking implementations. 

        Customer onboarding is one of the most critical entry points of new data for a bank’s counterparty information. References and interactions with internal systems, externally sourced information, KYC processes, regulatory reporting and risk aggregation are all impacted by the quality of information that is fed into the organisation from the onboarding stage.

In times of automation and the development of FinTech applications, the onboarding process remains largely manual. Organisations typically aim to minimise errors by means of human validation, often tasking an offshore team with manually checking and signing off the information for a new candidate counterparty. Professionals with experience in data management can relate to this equation: "manual data entry = mistakes".

There are a variety of things that can go wrong when trying to add a new counterparty to an internal system. The first step is typically to ensure that the counterparty is not already present: onboarding officers rely on a mix of name and address information, vendor codes and open industry codes (e.g. the Legal Entity Identifier) to verify this. However, inaccurate search criteria, outdated or missing information in the internal systems, and the lack of advanced search tools create the potential for problems: an existing counterparty can easily be duplicated when it should have been updated.

Datactics' Entity Match Engine provides onboarding officers with the tools to avoid this scenario, for both Legal Entity and Individual data. With advanced fuzzy logic and the clustering of data from multiple internal and external sources, Match Engine avoids the build-up of duplication caused by mistakes, mismatches or the constraints of existing search technology in the onboarding process.
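As an illustration of the "search before you create" principle – a toy sketch, not the Entity Match Engine itself – a candidate counterparty can be screened against existing records with an exact identifier match plus a fuzzy comparison of normalised names. All names, identifiers and the threshold below are invented for the example.

```python
from difflib import SequenceMatcher

existing_counterparties = [
    {"id": 101, "name": "Acme Holdings Limited", "lei": "LEI-EXAMPLE-0001"},
    {"id": 102, "name": "Globex Industries PLC", "lei": "LEI-EXAMPLE-0002"},
]


def normalise(name: str) -> str:
    """Crude normalisation: lower-case and strip common legal suffixes."""
    name = name.lower()
    for suffix in (" limited", " ltd", " plc", " inc"):
        name = name.removesuffix(suffix)
    return name.strip()


def find_candidates(new_entity: dict, threshold: float = 0.85) -> list[dict]:
    """Return likely duplicates by LEI equality or fuzzy name similarity."""
    matches = []
    for existing in existing_counterparties:
        if new_entity.get("lei") and new_entity["lei"] == existing["lei"]:
            matches.append(existing)
            continue
        score = SequenceMatcher(None, normalise(new_entity["name"]),
                                normalise(existing["name"])).ratio()
        if score >= threshold:
            matches.append(existing)
    return matches


# An onboarding officer keys in a slightly different spelling of an existing client.
print(find_candidates({"name": "ACME Holding Ltd", "lei": ""}))  # -> counterparty 101
```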

        Another common issue caused by manual onboarding processes is the lack of standardisation in the entry data. This creates problems downstream, reducing the value that the data can bring to core banking activities, decision making and the capacity to aggregate data for regulatory reporting in a cost-effective way.

Entity Match Engine has pre-built connectivity into the most comprehensive open and proprietary sources of counterparty information, such as Bloomberg, Thomson Reuters, GLEIF, Open Corporates, Companies House and others. These sources are pre-consolidated by the engine and used to provide the onboarding officer with a standardised suggestion of what the counterparty information should look like, complete with the most up-to-date industry and vendor identifiers.
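That consolidation step can be pictured as a simple precedence merge: which source wins for which field is a policy decision. The sources, fields and ordering below are invented for illustration only.

```python
# Hypothetical per-field source precedence: the first source holding a value wins.
PRECEDENCE = {
    "legal_name": ["companies_house", "gleif", "manual_entry"],
    "lei": ["gleif", "manual_entry"],
    "address": ["companies_house", "manual_entry"],
}

source_records = {
    "manual_entry": {"legal_name": "acme holdings", "lei": "", "address": "Belfast"},
    "gleif": {"legal_name": "ACME HOLDINGS LIMITED", "lei": "LEI-EXAMPLE-0001"},
    "companies_house": {"legal_name": "ACME HOLDINGS LIMITED",
                        "address": "1 Example Street, Belfast"},
}


def standardised_suggestion() -> dict:
    """Build the record the onboarding officer is shown, field by field."""
    suggestion = {}
    for field, sources in PRECEDENCE.items():
        for source in sources:
            value = source_records.get(source, {}).get(field)
            if value:
                suggestion[field] = value
                break
    return suggestion


print(standardised_suggestion())
```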

        “Measure and ensure data quality at source” is a best practice and increasingly a data management mantra. The use of additional technology in the onboarding phase is precisely intended as a control mechanism for one of the most error-prone sources of information for financial institutions.

        Luca Rovesti is Presales R&D Manager at Datactics

        The post Who are you really doing business with? appeared first on Datactics.
