Banking Archives - Datactics
https://www.datactics.com/tag/banking/

“How to conquer data and drive automation in banking and financial services” | An interview with Matt Flenley
https://www.datactics.com/blog/marketing-insights/how-to-conquer-data-and-drive-automation-in-banking-and-financial-services-an-interview-with-matt-flenley/ | Tue, 04 May 2021

As we know, FinTech is changing the way people bank and, of course, the way banks wish to operate. Our very own Marketing and Partnerships Manager, Matt Flenley, contributed to a webinar discussing how to conquer data and drive automation in banking and financial services. The session sought to unpack best practices for using and optimising technology to break new ground in banking.

Matt was joined by Karen Bradbury of Invest NI, Brendan McCarthy of Analytics Engines and Andy Wallace, The Robot Exchange.  

We sat down with Matt to explore the rise of FinTech employment in the UK, why FinTech is rapidly changing people’s approach to banking, how organisations can combat messy data, and what makes Datactics different.

To kickstart, why do you think FinTech is changing people’s approach to banking? 

For an awfully long time, strength lay in established institutions having all the customers, which provided economies of scale for transactions. Back in the day, transactions made using your cash card would go over the LINK network, and fees would be paid between the bank that issued your card and the bank that operates the ATM, creating a massive amount of interchange. All of this money moving between banks and card issuers created a big market for the banks. What has happened recently with Open Banking and PSD2 is that those services can now be offered by the likes of Revolut, TransferWise and Starling, who challenge the need for cash. They believe digital payments are the way forward: previously they were slow, but now, with internet technology and improved privacy standards, all digital payment activity can be managed through an application. These customer-led innovations are in effect disintermediating what only the banks used to be able to offer, which has driven a big move towards FinTech solutions. For example, Loyalbe has created a means to pull together all of your loyalty cards and accrue loyalty benefits at local shops in a way that would previously have been the sole preserve of the bank or the card issuer.

Why NI? Why has NI become such a prevalent FinTech hub? 

I think NI has the benefit of being a small, concentrated community. There is a much lower cost of employment and cost of living, but an extremely high standard of education. It is a perfect combination of an educated population and a low cost of living, and all the work that Invest Northern Ireland and others have done by way of FDI, bringing the likes of Citi, Allstate and Liberty to the area, has allowed people to believe they can train to be a software engineer and have the option to work for a huge multinational corporation right on their doorstep. A sizeable amount of government money has been invested to secure those overseas firms, but they haven’t just invested and run away; they have invested and stayed. This has given rise to a highly literate and highly capable financial services technology workforce based here.

Just to bring Datactics into the mix. Messy data is everywhere – how can organisations combat this? 

Messy data is everywhere, and being able to fix it is a challenge facing every institution. We focus on financial services because they typically have the most data literacy: they understand the criticality of data and how better quality data can help them outflank the competition. In many institutions, errors of capture and out-of-date information create negative customer experiences and regulatory fines.

Where can they start and where can they fix it?  

We have led the way with business-unit-led data quality software. This doesn’t involve amassing all of your coders and programmers and asking them to hard-code rules into central IT systems. Your business teams know what the data should be, and they should be able to interact with that data, fix it as necessary, and then report on, use and analyse it downstream. We would suggest working with a company like us to deploy that on a specific business problem. We work with commercial organisations that have business goals to hit; it’s now time to put that capability in the business teams too.

Data formats, entries and elements often present different versions of the truth, some of these are important distinctions but others are issues of quality, including typos and outdated records – tell me more about this… 

There is a valid distinction. For example, it’s important to understand that different records may need to reflect different time periods. I may want to write to one Jamie Gordon, but I may have five on my systems because of multiple different banking systems. You don’t want to be constantly pestering Jamie, and it’s also massively inefficient and expensive to send out a high volume of duplicated communications automatically. At a corporate level, it’s hard to understand who the beneficial owner of similarly named institutions is, which is really important for anti-money laundering (AML) and Know Your Customer (KYC). Being able to interpret those differences and understand common elements is critical to eradicating the manual effort and ambiguity around KYC and AML.
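Finding the five "Jamie Gordon" records across systems is typically done with normalisation plus fuzzy comparison. Here is a minimal, stdlib-only sketch of that idea; the helper names and the 0.85 threshold are illustrative assumptions, not Datactics' actual implementation:

```python
from difflib import SequenceMatcher

def normalise(name: str) -> str:
    # Lower-case, strip punctuation and sort tokens so that
    # "GORDON, Jamie" and "Jamie Gordon" compare as equal.
    cleaned = "".join(ch for ch in name.lower() if ch.isalnum() or ch.isspace())
    return " ".join(sorted(cleaned.split()))

def likely_same_customer(a: str, b: str, threshold: float = 0.85) -> bool:
    # Fuzzy similarity on the normalised names; above the threshold
    # the pair is flagged as a probable duplicate for review.
    return SequenceMatcher(None, normalise(a), normalise(b)).ratio() >= threshold

records = ["Jamie Gordon", "GORDON, Jamie", "Jamie  Gordon.", "James Gorman"]
duplicates = [(a, b) for i, a in enumerate(records)
              for b in records[i + 1:] if likely_same_customer(a, b)]
# The three "Jamie Gordon" variants pair up; "James Gorman" stays distinct.
```

In practice the comparison would run over many fields (name, address, date of birth) with per-field weights, but the normalise-then-score shape is the same.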

What would you say is Datactics’ biggest differentiator in the market? 

Business user focus is imperative: we seek to serve the business teams, allowing them to self-serve for analytics and quality all at once. Our clients don’t want a year-long process hanging over their heads before any results are visible. We are keen to make our clients as literate in the platform as possible, because we want them to be able to use it fully as soon as possible. We also support them initially and offer 24/7 assistance if they need it. We are all about self-service.

We thoroughly enjoyed being involved with this webinar. We love the fact that Invest NI has sought to connect the dots between vendors and markets, giving us many opportunities to pitch ourselves to new markets and work with huge brands. They have been a strong advocate for native Northern Irish companies and have built on that legacy as we move into an increasingly technology-driven future.

If you would like to discuss the FinTech market in NI, how Datactics can help with messy data, or how the world of banking is changing, please reach out to Matt Flenley today. You can also book a one-to-one consultation with him if you’d like to explore our solution further.

For other news about Datactics please visit this page. Alternatively, you can contact us by clicking here.  

Or find us on LinkedIn, Twitter or Facebook.

The post “How to conquer data and drive automation in banking and financial services” | An interview with Matt Flenley appeared first on Datactics.

InvestNI with American Banker | FinTech on Main Street: Trading Legacy Systems for Digital Transformation | 25/03
https://www.datactics.com/events/ini-ab-webinar-fintech/ | Fri, 19 Feb 2021

FinTech is rapidly changing the way people bank and the way banks wish to operate.

But for the majority of financial institutions, their desire to deploy modern digital, AI, machine learning, and data solutions (either enterprise-wide or for specific functions) is usually quashed by a familiar culprit: the antiquated and often disparate core processing systems that lie at the back end of most organizations.

In this webinar, we will discuss real case studies with other leading FinTech/RegTech companies hivera and Vox Financial Partners. This group will explain how they help banks accelerate digital transformation, overcoming the significant challenges presented by ancient legacy systems. Together they will provide CIOs, CISOs, CDOs, and CROs valuable insight and best practices for using technology and optimizing data to break new ground in banking.

The post InvestNI with American Banker | FinTech on Main Street: Trading Legacy Systems for Digital Transformation | 25/03 appeared first on Datactics.

Datactics is selected by the UK Department for International Trade to showcase best in British Innovation
https://www.datactics.com/press-releases/datactics-is-selected-by-the-uk-department-for-international-trade-to-showcase-best-in-british-innovation/ | Mon, 08 Feb 2021
Belfast, London, New York, 8th February 2021

Belfast-based Regtech company Datactics, a leading data quality software provider to global financial services firms, has been selected by the UK Government’s Department for International Trade (DIT) to showcase the best in British innovation during a US roadshow.


The DIT’s RegTech Roadshow takes place virtually 22-26 February 2021.

It will provide ten innovative UK-based regulatory technology companies with the opportunity to meet industry stakeholders, regulators and potential partners. They will also receive guidance and training from DIT partners on how to set up and grow a physical presence and hear from experts on the key challenges facing the US market and how their technology can help solve them.

Datactics was selected following a highly competitive recruitment process. To qualify for the roadshow, each company had to meet the following criteria set by DIT: 10 or more employees, a minimum annual turnover of £1 million, an enterprise-ready solution, and an existing base of clients in the UK and US markets. The companies selected have all achieved great success both domestically and globally, cultivating a strong international client base and a reputation for excellence in helping revolutionize the way the financial services industry manages risk, compliance, and regulatory change. The cohort represents diversity and female leadership within the UK industry, with each firm offering a unique and innovative solution that is revolutionizing regulatory compliance; combined, the companies serve nearly every Tier 1 and Tier 2 financial institution.

Kunal Khatri, Director for DIT North America, said:

The UK has been at the heart of the global financial services innovation for decades, and in 2021 we will continue to lead the global FinTech and RegTech revolution. We’re excited to showcase the talent and expertise that UK companies have to offer. This roadshow is a great opportunity to deepen our bilateral engagement on financial services with the US and encourage private sector collaboration to make the world a safer, easier, and more equitable place to do business.

Commenting on the selection, Stuart Harvey, Datactics CEO, said:

This is a great opportunity for Datactics to showcase our best-in-breed data quality and matching solutions to the US financial market. We have proven success in the UK and Europe and already work with clients in the US. We are looking forward to helping financial institutions transform their data-driven business, manage risk and regulatory compliance.

The UK leads in the field of regulatory innovation thanks to its high concentration of financial services firms and the institution of the Financial Conduct Authority’s regulatory sandbox. British RegTechs are well-positioned to add value and help revolutionise the way US companies approach compliance.

In addition to the 10 companies, DIT is excited to welcome media partners and advisers who helped to develop the program, including the A-Team Group’s RegTech Insights, and the COMPLY Conference, host of one of the largest RegTech industry events in the US.

For other news and awards about Datactics please read this page. Alternatively, you can contact us by clicking here.

The post Datactics is selected by the UK Department for International Trade to showcase best in British Innovation appeared first on Datactics.

Comply Virtual Summit | 20/05
https://www.datactics.com/events/comply-virtual-summit-20-05/ | Mon, 01 Feb 2021

COMPLY brings together Local, Federal and International Regulators, Compliance Leaders, Marketing Executives, Legal Experts, Leading Consultants, RegTech and FinTech Innovators, Operations Professionals, RegTech and FinTech Investors from around the globe for groundbreaking learnings and connections.

An event for anyone interested in best practices in compliance, regulations and consumer protection. Register your details here and the agenda will be released soon. It’s free and qualifies for CRCM or CCB credits too!

The post Comply Virtual Summit | 20/05 appeared first on Datactics.

Comply Virtual Summit | 25/03
https://www.datactics.com/events/comply-virtual-summit-25-03/ | Mon, 01 Feb 2021

COMPLY brings together Local, Federal and International Regulators, Compliance Leaders, Marketing Executives, Legal Experts, Leading Consultants, RegTech and FinTech Innovators, Operations Professionals, RegTech and FinTech Investors from around the globe for groundbreaking learnings and connections.

An event for anyone interested in best practices in compliance, regulations and consumer protection. Register your details here and the agenda will be released soon. It’s free and qualifies for CRCM or CCB credits too!

The post Comply Virtual Summit | 25/03 appeared first on Datactics.

Comply Virtual Summit | 04/02
https://www.datactics.com/events/comply-virtual-summit-04-02/ | Mon, 01 Feb 2021

COMPLY brings together Local, Federal and International Regulators, Compliance Leaders, Marketing Executives, Legal Experts, Leading Consultants, RegTech and FinTech Innovators, Operations Professionals, RegTech and FinTech Investors from around the globe for groundbreaking learnings and connections.

An event for anyone interested in best practices in compliance, regulations and consumer protection. Check out the agenda and register your details here. It’s free and qualifies for CRCM or CCB credits too!

The post Comply Virtual Summit | 04/02 appeared first on Datactics.

UK FinTech Mission to Austria & Switzerland
https://www.datactics.com/events/uk-fintech-mission-to-austria-switzerland/ | Thu, 28 Jan 2021
In late January, the UK Department for International Trade (DIT) and Scottish Development International (SDI) organised a unique setting for exchange and discussion on the latest trends, demands and solutions in finance and banking, insurance and investment between UK FinTechs and Swiss and Austrian organisations. 

Datactics is proud to be one of 37 UK tech firms, selected across a wide range of sectors and verticals, showcasing our solution at the UK FinTech Mission.

The virtual event aimed to offer participants, attendees, and partners an inspiring networking and learning opportunity.

For attendees from Austria and Switzerland, profiles of the UK companies were made available, along with the opportunity to learn more about the products and solutions they offer, or to extend the conversation via 1:1 meetings.

Our representative, Jordan Wray, described his experience at this virtual event here. To keep up to date with future events that Datactics is attending, visit our events page here.

The post UK FinTech Mission to Austria & Switzerland appeared first on Datactics.

How can banks arm themselves against increasing regulatory and technological complexity? – FinTech Finance
https://www.datactics.com/blog/ai-ml/2020-the-year-of-aml-crisis/ | Tue, 03 Nov 2020

Datactics Head of Artificial Intelligence, Dr. Fiona Browne, recently contributed to an episode of FinTech Finance: Virtual Arena. Steered by Douglas MacKenzie, the interview covered the extent of the Anti-Money Laundering (AML) fines faced by banks over the last number of years and began to unpack what we do at Datactics in relation to this topic: helping banks address their data quality, with solutions designed to combat fraudsters and money launderers.

How can banks arm themselves against increasing regulatory and technological complexity?

Fiona began by highlighting that financial institutions face significant challenges when managing their data. With the increase in financial regulation since the 2008/2009 financial crisis, ensuring data quality has grown in importance, obliging institutions to have a handle on their data and make sure it is up to date. Modern data quality platforms mean that the timeliness of data can now be checked via a ‘pulse check’, ensuring it can be used in further downstream processes and meets regulations.
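A timeliness "pulse check" can be as simple as comparing each record's last-verified date against a maximum allowed age. A minimal sketch of the idea, using only the standard library (the field names and the 90-day window are illustrative assumptions, not a description of any particular platform):

```python
from datetime import datetime, timedelta

def pulse_check(records, max_age_days=90, now=None):
    # Return the records whose last verification is older than the allowed
    # window and therefore need re-checking before downstream use.
    now = now or datetime.now()
    cutoff = timedelta(days=max_age_days)
    return [r for r in records if now - r["last_verified"] > cutoff]

fixed_now = datetime(2020, 11, 3)
clients = [
    {"id": "C001", "last_verified": datetime(2020, 10, 1)},  # recent enough
    {"id": "C002", "last_verified": datetime(2019, 6, 15)},  # stale
]
stale = pulse_check(clients, max_age_days=90, now=fixed_now)
```

The stale records would then be routed for re-verification rather than fed into downstream regulatory processes.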

Where does Datactics fit in to the AML arena? 

A financial institution needs to be able to verify the client it is working with when going through AML checks. The AML process itself is vast, but at Datactics we focus on the area of profiling, data quality and matching – it is our bread and butter. Fiona stressed the importance of internal checks as well as public entity data, such as sanctions and watch lists.

In a nutshell, there is a significant amount of data to check and compare, and with a lack of quality data this becomes a difficult and costly task, so at Datactics we focus on data quality cleansing and matching at scale.
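At that scale, comparing every client against every sanctions entry is infeasible, so matching engines typically "block" candidates first and fuzzy-match only within each block. A toy sketch of that pattern (blocking on the first letter is a deliberate oversimplification; real engines use much richer blocking keys and similarity measures):

```python
from collections import defaultdict
from difflib import SequenceMatcher

def screen(clients, sanctions, threshold=0.9):
    # Block sanctions entries by first letter so each client is compared
    # only against a small candidate set, then fuzzy-match within the block.
    blocks = defaultdict(list)
    for entry in sanctions:
        blocks[entry[0].lower()].append(entry)
    hits = []
    for client in clients:
        for candidate in blocks.get(client[0].lower(), []):
            score = SequenceMatcher(None, client.lower(), candidate.lower()).ratio()
            if score >= threshold:
                hits.append((client, candidate, round(score, 2)))
    return hits

sanctions = ["Ivan Petrov", "Acme Holdings Ltd"]
hits = screen(["Ivan Petrof", "Jane Doe"], sanctions)
# "Ivan Petrof" is close enough to the listed "Ivan Petrov" to be flagged.
```

The names and threshold here are invented for illustration; the point is the shape: cheap candidate generation first, expensive comparison second.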

Why should banks look to partner, rather than building it in house? 

One of the key issues with doing this in house is not having the necessary resources to perform the required checks and adhere to the different processes in the AML pipeline. According to the Financial Conduct Authority (FCA), inadequate in-house checks and a lack of data are causing leading financial institutions to receive hefty fines. Fiona reiterated that when banks bring it back to fundamentals and get their processes right and their data in order, they can then use a partner’s technology to automate and streamline these processes, which in turn speeds up onboarding and ensures legislation is being met.

Why did the period of 2018/2019 have such a high number of AML breaches?

Fiona explained that many transactions go back over a decade, and it takes time to identify them. AML compliance is difficult to achieve, and regulators know it is challenging. Regulators are doing a better job of providing guidelines to financial institutions, enabling them to address these regulations. Fiona suggested that 2018/2019 was perhaps a much-needed wake-up call on this issue.

And with AML fines already at $5.6 billion this year, more than the whole of 2019, what can banks do? 

Looking at the US, where the fines for non-compliant AML processes are not as high as in 2019 but a substantial number are still being issued, Fiona said it is paramount that financial institutions have the right data and the right processes in place. Although it can be seen as an administrative burden, there is real criminal activity behind the scenes, which is why AML is so important. It is vital that financial institutions get a handle on this, enabling them also to improve the experience for their clients.

The fines will continue to be issued. Why should firms look to clean data when they just want to get to the bottom line? 

It is essential to have the building blocks in place. Data quality is key for the onboarding process, but it is also essential downstream, particularly if you want to do more trend analysis. Getting the fundamentals right at the start will pay dividends.

Are there any other influences that Artificial Intelligence (AI) and Machine Learning (ML) can have on banks’ onboarding processes?

According to Fiona, there is no silver bullet: one AI/ML technique will not solve all AML issues. It is about deploying these techniques to approach the issues in different ways. A large part of the onboarding process is gathering data and extracting relevant information from the dataset, and Fiona has seen a lot of Natural Language Processing (NLP) techniques employed to extract data from documents. At Datactics, we use machine learning in the data matching process to reduce manual review time, and ML techniques are employed in both supervised and unsupervised approaches geared to pinpoint fraudulent transactions. We think graph databases and the network analysis side of machine learning are an interesting area, and we are currently exploring how they can be deployed in AML and fraud detection.
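The idea of using ML to cut manual review in matching can be illustrated with a tiny classifier trained on similarity features of record pairs. Everything below (the features, labels, and thresholds) is invented for illustration; it is not the model described in the interview:

```python
import math

# Each record pair is reduced to two features in [0, 1]:
# (name similarity, address similarity), labelled 1 for a confirmed
# match and 0 for a confirmed non-match by past human reviewers.
training = [
    ((0.95, 0.90), 1), ((0.88, 0.75), 1), ((0.91, 0.60), 1),
    ((0.40, 0.30), 0), ((0.55, 0.20), 0), ((0.30, 0.65), 0),
]

def predict(w, x):
    # Logistic regression: probability that the pair is a true match.
    z = w[0] + w[1] * x[0] + w[2] * x[1]
    return 1 / (1 + math.exp(-z))

def train(data, lr=0.5, epochs=2000):
    # Plain gradient ascent on the log-likelihood.
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for x, y in data:
            err = y - predict(w, x)
            w[0] += lr * err
            w[1] += lr * err * x[0]
            w[2] += lr * err * x[1]
    return w

w = train(training)
# Confident predictions can skip the review queue entirely;
# only the uncertain middle band goes to a human.
```

Because the weights are inspectable, the decision remains explainable to an auditor, which matters in this setting more than raw accuracy.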

Bonus content: in the US and Canada, one way to potentially identify fraud was to look at transactions over $10,000. Criminals, however, have become increasingly savvy and utilise machine learning to muddy their tracks: by dividing transactions into randomised amounts, they can make them appear less pertinent. As Fiona put it, it is a ‘cat and mouse game’.
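That cat-and-mouse game around the $10,000 reporting threshold is often countered by looking for structuring: several sub-threshold transactions that together exceed the limit within a short window. A simplified, stdlib-only sketch (the seven-day window and the sample amounts are illustrative assumptions):

```python
from collections import defaultdict

REPORTING_THRESHOLD = 10_000

def flag_structuring(transactions, window_days=7):
    # transactions: iterable of (account, day_number, amount).
    # Collect sub-threshold transactions per account, then flag any account
    # whose sub-threshold amounts within a sliding window still sum past the limit.
    per_account = defaultdict(list)
    for account, day, amount in transactions:
        if amount < REPORTING_THRESHOLD:
            per_account[account].append((day, amount))
    flagged = set()
    for account, txns in per_account.items():
        txns.sort()
        for start_day, _ in txns:
            window_total = sum(amt for day, amt in txns
                               if start_day <= day < start_day + window_days)
            if window_total >= REPORTING_THRESHOLD:
                flagged.add(account)
                break
    return flagged

txns = [
    ("A", 1, 4_000), ("A", 2, 3_500), ("A", 4, 3_200),  # 10,700 in four days
    ("B", 1, 9_500), ("B", 40, 9_000),                  # far apart in time
    ("C", 1, 12_000),                                   # over threshold: reported anyway
]
suspicious = flag_structuring(txns)
```

Randomised split amounts defeat naive per-transaction rules, which is exactly why aggregation over time, as sketched here, is the standard counter-move.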

If you are employed in the banking sector, or if you must deal with large and messy datasets, you will probably face challenges arising from poor data quality, standardisation issues, and siloed information.

Datactics provides the tools to tackle these issues with minimum IT overhead, in a powerful and agile way. Get in touch with the self-service data quality experts today to find out how we can help.

The post How can banks arm themselves against increasing regulatory and technological complexity? – FinTech Finance appeared first on Datactics.

All things AML and FinTech Finance: Virtual Arena – weekly round-up
https://www.datactics.com/blog/marketing-insights/weekly-round-up-aml-ff-arena/ | Fri, 30 Oct 2020

We started by looking at why data matching is a key part of any AML & KYC process. It’s made more complex by the different standards, languages, and levels of quality in the data sources on which firms typically rely. It’s expensive too: a recent Refinitiv article states that some firms are spending up to $670m each year on KYC.

As the week went on, we looked at some of the key areas where Datactics makes a real difference in helping firms to reduce manual effort, reduce risk, and bring down the extremely high cost of client onboarding. 

We then looked at the impact of the EU’s fifth AML directive and how firms are able to automate their sanctions screening with the sanctions match engine.  

We also explored how we support efforts to reduce risk and financial crime involving the clever tech we’ve used to transliterate between character sets and perform multi-language matching. 
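Transliteration plus diacritic folding is the usual first step in multi-language matching: map non-Latin characters to a Latin approximation, then strip accents via Unicode decomposition. A toy sketch with a deliberately tiny Cyrillic table (a real system would use a full scheme such as ISO 9 or ICU transliterators, not this hand-rolled mapping):

```python
import unicodedata

# Illustrative, incomplete Cyrillic-to-Latin table: just enough letters
# for the example below; not a complete or standard transliteration scheme.
CYRILLIC_TO_LATIN = {
    "а": "a", "в": "v", "е": "e", "и": "i", "н": "n",
    "о": "o", "п": "p", "р": "r", "т": "t",
}

def transliterate(text: str) -> str:
    lowered = text.lower()
    mapped = "".join(CYRILLIC_TO_LATIN.get(ch, ch) for ch in lowered)
    # NFKD splits accented letters into base + combining mark,
    # so dropping combining marks folds é -> e, ü -> u, etc.
    decomposed = unicodedata.normalize("NFKD", mapped)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))
```

Once both sides of a comparison are folded into the same Latin representation, ordinary fuzzy matching can score "Иван Петров" against "Ivan Petrov" directly.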

Finishing up, we shared our talk with the EDM Council exploring how AI can make a real difference to the story. Bringing more predictive capability to human effort means that finding those edge cases doesn’t have to wait until all the obvious ones have been ruled out. We also composed a piece entitled ‘Lifting the lid on the problems that Datactics solves’; if you missed it, you can check it out here.


If you missed any of the pieces we shared this week, feel free to read them on our DataBlog or on our social media platforms.  

In other news this week, our very own Head of AI, Dr Fiona Browne contributed to the FinTech Finance: Virtual Arena. This session discussed the huge AML fines faced by the banks over the last number of years.


At Datactics we help banks gain quality data – a tool equipped to fight fraudsters and money launderers. Fiona shared her experience as Head of AI at Datactics to shed light on how banks can arm themselves to stand up to increasing regulatory and technological complexity.

Datactics provides the tools to tackle these issues with minimum IT overhead, in a powerful and agile way.  If you missed the session, you can watch it back on LinkedIn by following this link.  

Have a great weekend! Hope you enjoyed this week’s round-up.    

Click here for more by the author, or find us on LinkedIn, Twitter or Facebook for the latest news. You can also read the last round-up here or keep an eye out for our next one!

The post All things AML and FinTech Finance: Virtual Arena – weekly round-up appeared first on Datactics.

EDM Talks: Lifting the lid on the problems that Datactics solves
https://www.datactics.com/blog/marketing-insights/lifting-the-lid-edm/ | Fri, 30 Oct 2020
Recently we partnered with the EDM Council on a video that investigates the application of AI to data quality and matching.

In this EDM Talk, we lift the lid on how our AI team is developing solutions to help our clients, especially in the area of entity matching and resolution. This plays an important role in on-boarding, KYC and obtaining a single customer view.


What is the data challenge?

Institutions such as banks often have large sets of very messy data, which may be siloed and subject to duplication. When onboarding a new client or building a legal entity master, institutions may need to match clients to both internal datasets and external sources. These include vendors such as Dun & Bradstreet and Bloomberg, or data from a local company registration authority, such as Companies House in the UK. This data needs to be cleaned, normalised and matched to create a single golden record in order to verify identity and adhere to regulatory compliance. For many institutions, this can be a heavily manual and time-consuming process.
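Building the golden record usually comes down to a survivorship rule: once duplicates are matched, pick a winning value per field. A minimal sketch using "most recent non-empty value wins" (the field names, the LEI value and the rule choice are all illustrative assumptions):

```python
def build_golden_record(records):
    # Simple survivorship rule: iterate oldest-to-newest so that the most
    # recently updated non-empty value wins for every field.
    golden = {}
    for record in sorted(records, key=lambda r: r["updated"]):
        for field, value in record.items():
            if field != "updated" and value:
                golden[field] = value
    return golden

sources = [
    {"name": "ACME LTD", "lei": "", "address": "1 Main St", "updated": "2019-04-01"},
    {"name": "Acme Limited", "lei": "549300EXAMPLE0000001", "address": "", "updated": "2020-06-15"},
]
golden = build_golden_record(sources)
# Name comes from the newer record; the address survives from the older one.
```

Real survivorship logic also weighs source trustworthiness per field (a registry beats a CRM for legal names, say), but the merge shape is the same.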

What needs to be done to improve entity matching? 

In entity resolution, there are two main challenges to address: the data matching side, and the manual remediation side required to resolve instances of low-confidence, mismatched or unmatched entities.

Datactics undertook a recent use case in which we explored matching entities between two open global entity datasets, Refinitiv ID and Global LEI. We augmented our rule-based fuzzy matching approach with ML to improve efficiency around the manual remediation of low-confidence matches. We performed entity matching between these datasets using deterministic rules, as many firms do today, and followed the standard approach in place for many onboarding teams, whereby low-confidence entity matches go into manual review. Within Datactics, data engineers were timed to measure the average time taken to remediate a low-confidence match, which could take up to a minute and a half per entity pair. That might be fine with just a few entities to check, but with hundreds, thousands or hundreds of thousands, it highlights how challenging the task becomes and the time and resource required.
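The low-confidence review queue described above follows from bucketing match scores into three bands. A hedged sketch of that routing logic (the score function and the 0.92/0.60 cut-offs are placeholders, not the thresholds used in the study):

```python
from difflib import SequenceMatcher

def match_score(a: str, b: str) -> float:
    # Deterministic similarity score between two entity names.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def route(a: str, b: str, accept=0.92, reject=0.60) -> str:
    # Auto-accept confident matches and auto-reject clear non-matches;
    # only the uncertain middle band is sent for manual remediation.
    score = match_score(a, b)
    if score >= accept:
        return "auto-accept"
    if score < reject:
        return "auto-reject"
    return "manual-review"
```

At a minute and a half per reviewed pair, anything an ML model can safely move out of the middle band translates directly into recovered analyst time.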

At Datactics we thought this was an interesting problem to explore. We were keen to understand whether AI-enabled data quality and matching would bring efficiency and data quality benefits to our clients who undertake such tasks. 

What did Datactics want to achieve? 

We were particularly interested to understand how we could reduce manual effort and increase the accuracy of data matching. We wanted to understand what benefits machine learning would bring to the process, using an approach that was transparent and which would make decision-making open and obvious to an auditor or regulator. 

What benefit is there from applying Machine Learning to this problem? 

Machine learning is a broad domain, covering application areas from speech recognition and language understanding to process automation and decision-making. Machine learning approaches are built on mathematical algorithms and statistical models. Their advantage is the ability to learn from data, uncover patterns, and use this learning to make predictions on new, unseen cases. We see machine learning deployed in everyday life, from email filters to personal assistant devices such as Amazon Echo and Apple Siri. 

Within the financial sector, Machine Learning techniques are being applied to tasks including profiling behaviour for fraud detection; the use of natural language processing to extract information from unstructured text to enrich the Know Your Customer onboarding process; through to the use of chatbots to automatically address customer queries and customise product offerings.  

At Datactics we view machine learning both as a tool to automate manual tasks and as a decision-making aid, augmenting processes such as matching, error detection and data quality rule suggestion for our clients. This frees up time and resources, enabling clients to do more in their roles.  

How can machine learning be applied to the process of matching? 

Within Datactics we have augmented our rules-based matching process with machine learning. Our solution focuses on explainability and transparency, enabling us to trace why and how predictions have been made. This transparency matters to financial clients, both for adhering to regulations and for building trust in the system providing the predictions. Using high-confidence predictions, we can automate a large volume of manual review: in the matching use case, we reduced the manual review burden by 45%, freeing up clients' time so their expertise can be focused on the difficult edge cases. 

At Datactics we train machine learning models using examples of matches and non-matches. Over time, patterns within that data are detected, and this learning is used to make predictions on new, unseen cases. A reviewer can validate the predictions and feed the results back into the algorithm, an approach known as human-in-the-loop machine learning. The algorithm's predictions become progressively more accurate, and high-quality predictions reduce the volume of records that need to be reviewed manually. 
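A minimal sketch of that human-in-the-loop idea, assuming pair-level features such as name similarity and country agreement. The features, weights, learning rate and training data here are invented for illustration and are not taken from the Datactics platform; the point is the loop of predict, review, and feed back.

```python
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

class MatchModel:
    """Tiny logistic-regression matcher, trained online from reviewer feedback."""

    def __init__(self, n_features: int, lr: float = 0.5):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict_proba(self, x):
        """Probability that a candidate pair is a true match."""
        z = self.b + sum(wi * xi for wi, xi in zip(self.w, x))
        return sigmoid(z)

    def update(self, x, label):
        """One SGD step from a human-validated label (1 = match, 0 = non-match)."""
        error = self.predict_proba(x) - label
        self.w = [wi - self.lr * error * xi for wi, xi in zip(self.w, x)]
        self.b -= self.lr * error

# Invented features per candidate pair: [name_similarity, same_country, same_postcode]
reviewed = [
    ([0.98, 1.0, 1.0], 1), ([0.95, 1.0, 0.0], 1),
    ([0.40, 0.0, 0.0], 0), ([0.55, 1.0, 0.0], 0),
]
model = MatchModel(n_features=3)
for _ in range(200):              # repeated passes stand in for accumulating feedback
    for x, label in reviewed:
        model.update(x, label)

# High-probability pairs can be auto-accepted; mid-range ones stay in manual review.
print(round(model.predict_proba([0.97, 1.0, 1.0]), 3))
```

Each reviewer decision becomes another labelled example, so the model's confidence bands tighten over time and the auto-accept share grows.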

The models we have built need good quality data. We used the Datactics self-service data quality platform to create good quality datasets and apply labels to that data. Moving forward at Datactics, we are seeking to augment our AI capability with graph linkage analysis, as well as further enhancing our feature engineering and dataset capabilities.  

To learn more about the work we are doing with machine learning and how we are applying it in the Datactics platform, all content is available on the Datactics website. We also have a whitepaper on AI-enabled data quality. 

For a demo of the system in action please fill out the contact form. 

To find out more about what we do at Datactics, check out the full EDM talks video below

We will soon be publishing Part 2 of this blog series that will look at the application of AI and ML in the Fintech sector in more detail as well as an entity resolution use case.  

Click here for the latest news from Datactics, or find us on LinkedIn, Twitter or Facebook 

The post EDM Talks: Lifting the lid on the problems that Datactics solves appeared first on Datactics.

]]>
Introducing SSDQ: Centralise and standardise data quality processes without programming or coding https://www.datactics.com/blog/marketing-insights/introducing-ssdq/ Fri, 23 Oct 2020 09:35:42 +0000 https://www.datactics.com/?p=12757 What does the self-service data quality (SSDQ) platform do?   It empowers data owners and SMEs to measure and maintain data quality themselves in line with governance policies.  SSDQ is already adding value at multiple investment and retail banks, wealth managers, and data vendors helping them to:  Achieve end-to-end holistic data quality management   Measure data against industry-standard dimensions and regulations   Empower data stewards / […]

The post Introducing SSDQ: Centralise and standardise data quality processes without programming or coding appeared first on Datactics.

]]>
What does the self-service data quality (SSDQ) platform do?  

It empowers data owners and SMEs to measure and maintain data quality themselves in line with governance policies.  SSDQ is already adding value at multiple investment and retail banks, wealth managers, and data vendors helping them to: 

  • Achieve end-to-end holistic data quality management  
  • Measure data against industry-standard dimensions and regulations  
  • Empower data stewards / subject matter experts to remediate data  

How can SSDQ help?  

SSDQ connects to source systems and data repositories, measures the quality of the information, and automatically reports on the health of underlying data to data owners via interactive data quality dashboards. The platform’s Data Quality Clinic function offers business users the opportunity to explore the root cause of data quality breaks, make decisions, or fix records themselves. The system is powerful enough to enable business users and SMEs to perform complex data operations without highly skilled technical assistance from IT. It is also flexible and open, designed to integrate easily with existing data infrastructure, and agile enough to quickly onboard new data sets and meet changing data quality demands.  
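As a rough illustration of the measure-and-report step described above, the sketch below scores records against a handful of rules and produces per-rule pass rates of the kind a dashboard layer would consume. The rule names, record fields and regex are assumptions made up for this example, not the SSDQ rule library.

```python
import re

# Illustrative data quality rules: each maps a rule name to a pass/fail predicate.
RULES = {
    "email_valid": lambda r: bool(re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", r.get("email", ""))),
    "lei_present": lambda r: len(r.get("lei", "")) == 20,   # LEI codes are 20 characters
    "country_populated": lambda r: bool(r.get("country")),
}

def measure(records):
    """Return the pass rate per rule, plus the failing records for remediation."""
    report = {}
    for name, check in RULES.items():
        failures = [r for r in records if not check(r)]
        report[name] = {
            "pass_rate": 1 - len(failures) / len(records),
            "failures": failures,   # drill-down material for a remediation clinic
        }
    return report

records = [
    {"email": "a@b.com", "lei": "5493001KJTIIGC8Y1R12", "country": "GB"},
    {"email": "not-an-email", "lei": "", "country": "GB"},
]
report = measure(records)
print({name: stats["pass_rate"] for name, stats in report.items()})
```

The two outputs mirror the platform's split of responsibilities: aggregate pass rates feed the dashboards, while the failing records themselves are handed to the business user who knows how to fix them.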

Is the platform hard to learn how to use? 

SSDQ has been built with the non-technical business user in mind. We provide full training alongside our consultancy services so that, within a matter of weeks, we are confident that we can help you to become fully self-sufficient in rule building, creating workflows and automations and fixing your broken data.  

What are the features?  

  • Out of the box rules & configuration logic  
  • Built-in connectivity to multiple data sources 
  • Interactive data quality dashboards built in off-the-shelf tools such as PowerBI, Tableau, and Qlik 
  • Integrated data remediation 
  • Machine learning augmented recommendations on data matches  
  • Data access controls so that the right people see the data they are supposed to see  
  • Audit trails, assigning and tracking performance of rule break remediation over time 

What are the benefits of the SSDQ platform?  

SSDQ provides users with large numbers of data domain-specific rules that have been proven at many firms.  

  • Plug & Play options (e.g. REST API, ODBC/JDBC, File system, Cloud…) means you can rapidly add new data sources  
  • Easily drill-down from high-level statistics to actual failing data points in off-the-shelf visualisation tools, and publish statistics to governance systems  
  • Empower business users to fix identified data quality breaks themselves – so that those who know the data can fix the data 
  • Reduce and streamline the process of manually remediating data 
  • Audit trail of changes to data provides the ability to understand who did what, when, and why.  
  • Data owners no longer have to wait for IT to prioritise their data quality rule request alongside countless other IT tickets.  
  • Chief Data Officers, Heads of Data and other senior data leaders can gain rapid insights into the underlying health of their data, as well as a track record over time, improving the quality of information they report and can use for business purposes 

Who is the platform for?  

The self-service data quality platform is particularly helpful for Chief Data Officers, Head of Data roles, Data Governance leads, and Data Stewards. The platform is designed to empower those who know the data to manage and fix the data.  

To have a conversation about how the self-service data quality platform can help you to manage your data, contact our Head of Sales, Kieran Seaward today.   

Connect with Kieran on LinkedIn 

The post Introducing SSDQ: Centralise and standardise data quality processes without programming or coding appeared first on Datactics.

]]>
Datactics contributes to Bank of England and FCA’s AI Public-Private Forum https://www.datactics.com/press-releases/datactics-contributes-to-bank-of-england-and-fcas-ai-public-private-forum/ Mon, 12 Oct 2020 07:27:00 +0000 https://www.datactics.com/?p=12644 Belfast, London, New York, 12th October 2020 Datactics is pleased to announce that its Head of AI, Dr Fiona Browne, has been invited to participate in the Artificial Intelligence Public-Private Forum, joining 20 other experts from across the financial technology sectors as well as academia, along with the observers from the Information Commissioner’s Office and […]

The post Datactics contributes to Bank of England and FCA’s AI Public-Private Forum appeared first on Datactics.

]]>
Belfast, London, New York, 12th October 2020

Datactics is pleased to announce that its Head of AI, Dr Fiona Browne, has been invited to participate in the Artificial Intelligence Public-Private Forum, joining 20 other experts from across the financial technology sectors as well as academia, along with the observers from the Information Commissioner’s Office and the Centre for Data Ethics and Innovation.

The purpose of the Forum, launched by the Bank of England and the Financial Conduct Authority, is to facilitate dialogue between the public and private sectors to better understand the use and impact of AI in financial services, which will help further the Bank’s objective of promoting the safe adoption of this technology.

The AI Public-Private Forum, with an intended duration of one year, will consist of a series of quarterly meetings and workshops structured around three topics: data, model risk management, and governance.

Commenting on the initiative's launch, the Bank of England's Deputy Governor for Markets and Banking, David Ramsden, said:

The existing regulatory landscape is somewhat fragmented when it comes to AI, with different pieces of regulation applying to different aspects of the AI pipeline, from data through model risk to governance. The policy must strike a balance between high-level principles and a more rules-based approach. We also need to future-proof our policy initiatives in a fast-changing field.

The specific aims of the Forum are: firstly, to share information and understand the practical challenges of using AI in financial services, identify existing or potential barriers to deployment, and consider any potential risks or trade-offs; secondly, to gather views on areas where principles, guidance, or regulation could support safe adoption of these technologies; and finally, to consider whether once the forum has completed its work ongoing industry input could be useful and if so, what form this could take.

The knowledge, experience, and expertise of the Forum’s members and observers will be invaluable in helping us to contextualise and frame the Bank’s thinking on AI, its benefits, its risk and challenges, and any possible future policy initiatives.

Fiona Browne, Head of AI at Datactics, said:

I’m really excited and honoured to be part of such a timely forum. AI/ML services touch our everyday lives from recommending what we watch to groceries that we buy.

Within financial services, ML can offer efficiency benefits reducing manual time-consuming tasks, to saving customers money in suggesting best financial products to bespoke customer service solutions and fraud detection. These solutions need to sit within a legal and regulatory environment in the financial sector and are not without their risks and challenges.

I hope to offer the forum insights and experience of the practical implementation of ML, from data quality and fairness, through transparency and explainability in processes and model predictions, to the monitoring of models in production. I'm excited to focus on and tease out potential guidance and best practice on how to safely adopt and deploy such solutions.

What is the AI Public-Private Forum?

The Bank of England, working with the FCA, has established the AI Public-Private Forum (AIPPF). The forum launched in October 2020 and consists of members who applied to join, reflecting a variety of views and bringing expertise in the area of AI/ML. The AIPPF will:

  • Share information and understand the practical challenges of using AI/ML within financial services, as well as the barriers to deployment and potential risks. 
  • Gather views on potential areas where principles, guidance or good practice examples could be useful in supporting safe adoption of these technologies. 
  • Consider whether ongoing industry input could be useful and what form this could take (e.g. considering an FMSB-type structure or industry codes of conduct). 

More information about the Forum can be found here.

The post Datactics contributes to Bank of England and FCA’s AI Public-Private Forum appeared first on Datactics.

]]>
What is the cost of bad data? https://www.datactics.com/blog/marketing-insights/what-is-the-cost-of-bad-data/ Wed, 16 Sep 2020 21:00:00 +0000 https://www.datactics.com/?p=11393 With an increased demand for data, and with technology such as 5G making more information available than ever before, the risk that bad data presents is becoming increasingly common and dangerously frequent. Issues such as data duplication, missing data and ultimately the misuse of data cause a range of technology-driven problems; on the human side, […]

The post What is the cost of bad data? appeared first on Datactics.

]]>
With an increased demand for data, and with technology such as 5G making more information available than ever before, the risk that bad data presents is becoming increasingly common and dangerously frequent.

Issues such as data duplication, missing data and ultimately the misuse of data cause a range of technology-driven problems; on the human side, poor or low data awareness leads to the wrong data being prioritised, data being poorly maintained or fundamentally misunderstood.

Alongside these risks, in the market’s drive to achieve more personalised, on-demand and enhanced data statistics and analytics, it has also been seen that the same passion doesn’t necessarily exist for data quality.

At Datactics we believe every piece of business data gained or held, whether it is received directly from a customer, sourced internally or externally is extremely valuable – and should be treated as such.

It should be treated as an asset, not a liability, because clean data has the power to be hugely beneficial to a business, in the same way that poor quality data, if unchecked, will hinder that same business.

In this post we delve into the cost of bad data in three specific areas;  across finance and risk, productivity and customer satisfaction, and probably the most important of all in today’s tough economic climate, reputational risk.

Financial Implications

Firstly, we’re not the first to say that bad data is extremely expensive when it comes to breaching regulations, like GDPR, MIFID, BCBS 239 and more from the slew of acronyms across the globe.  As recently as 2017, notable blogger Chris Skinner reported that financial regulations were changing as rapidly as every 12 minutes! Maintaining data quality for regulatory compliance, therefore, is not something that’s simply going to stop being important. 

Additionally, poor data quality can lead to bad analysis and even more serious, poor decisions. These decisions can have a negative impact on how a business performs which, in turn, can lead to financial losses. According to research conducted by Gartner, ‘the average financial impact of poor data quality on organisations is $9.7 million per year’. This statistic highlights the weight that poor data quality can be on your organisation if not maintained well. It’s not oversimplifying it to say that failing to maintain data quality can undermine your business, undoing all your hard work to get it to that point.

On this point, Harvard Business Review reported the cost of poor data quality as over $3tn annually to the US economy, as of 2016. For a problem that can be solved, this seems an extraordinarily high price to pay.

Productivity

"Good quality data empowers business insights and starts new business models in every industry." – Gartner, 2018. 

The reverse is also true: bad data quality hampers business insights and thwarts business models across every industry. Put simply, if your people can't access the data they need, they're not going to get much done, and anything they do manage to achieve will be subject to the risk of poor data quality. Did your bank send you two identical letters? Is there one account where, for some reason, they don't have your most recent address? These are simple customer data errors that are easy to fix with a proper Single Customer View – something that's within reach of every firm with the right mindset.

A sales team, for example, might become frustrated by out-of-date email addresses, old phone number lists and outdated customer details. If data is not cleaned, matched and deduplicated, it can lead to unsuccessful, misinformed campaigns. A marketer could spend significant time curating pieces that are not read by the right people at the right time because the data is inaccurate, inappropriate or irrelevant. Campaigns can be duplicated, overrun on spend, and require corrections and fixes that steer budgets into operational losses. Errors in data also steer us back to the regulatory risk of data breaches – and the associated cost.

It stands to reason, therefore, that the cleansing of data will help communication specialists in their pursuit to create accurate marketing campaigns, targeting the audience directly at the right time and in the right way.
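As a toy illustration of the cleaning-and-deduplication step behind a Single Customer View, the sketch below collapses duplicates on a normalised name-and-postcode key, keeping the most recently updated record. The field names and "newest wins" merge policy are assumptions made for the example, not a prescribed survivorship rule.

```python
def key(record):
    """Normalised blocking key: the same person at the same address collapses together."""
    return (
        record["name"].strip().lower(),
        record["postcode"].replace(" ", "").upper(),
    )

def deduplicate(records):
    """Keep one golden record per key, preferring the most recently updated."""
    golden = {}
    for r in records:
        k = key(r)
        if k not in golden or r["updated"] > golden[k]["updated"]:
            golden[k] = r
    return list(golden.values())

customers = [
    {"name": "Jo Bloggs", "postcode": "BT1 1AA", "updated": "2020-01-01"},
    {"name": "jo bloggs ", "postcode": "bt11aa", "updated": "2020-06-01"},
]
print(deduplicate(customers))   # one record survives, carrying the newer update date
```

With one golden record per customer, the duplicate-letter and stale-address problems above simply have nowhere to come from.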

Reputational Damage

Reputational damage is arguably far worse than receiving a fine, because it is very hard, if not impossible, to recover from. Your reputation is the biggest way to instil trust and customer loyalty: if people don't trust you, they won't buy from you. It follows from the first two points that if you've been fined for a data quality-driven problem, people will start to view you with mistrust. If the data is incorrect and unclean, then time, money and reputations can be severely damaged and, in many cases, lost forever.

Reputation can be damaged long-term if a customer chooses to publicly express disappointment. Sharing negative experiences can impact the chances that any client would return to the service provider. Recently, online vendors using Amazon have claimed that a flood of fake one-star reviews has cost them genuine business because people tended not to trust products with anything lower than 4.5 stars out of 5. This is both a poor data issue – the apparent inability to detect fake reviews – and one showing how reputational damage costs actual revenue.

It's not just external reputational risk, either. Poor data can cause internal issues to arise, as team members could lose trust in their employer if there is scepticism over the accuracy and validity of the underlying data. Glassdoor, the employee review site, gives people an opportunity to report honestly on their experiences as current or former employees. If those experiences are bad, it doesn't take long for reports to appear in a public forum that could permanently damage the employer's reputation.

Conclusion

Data quality can no longer be seen as a niche specialism for people who “do data.” It’s in everything, right from financial reporting through to employee satisfaction and reputational risk. It’s the main reason we’ve majored on self-service solutions that seek to empower all people at an organisation to take responsibility for their data – measuring, improving, fixing and reporting it.

If you would like to see how self-service can transform your organisation’s approach to data, please get in touch.

Authored by Matt Flenley and Jamie Gordon

The post What is the cost of bad data? appeared first on Datactics.

]]>
Tackling Practical Challenges of a Data Management Programme https://www.datactics.com/blog/good-data-culture/good-data-culture-facing-down-practical-challenges/ Mon, 03 Aug 2020 13:58:40 +0000 https://www.datactics.com/?p=5916 “Nobody said it was easy” sang Chris Martin, in Coldplay’s love song from a scientist to the girl he was neglecting. The same could be said of data scientists embarking on a data management programme! In his previous blog on Good Data Culture, our Head of Client Services, Luca Rovesti, discussed taking first steps on […]

The post Tackling Practical Challenges of a Data Management Programme appeared first on Datactics.

]]>
"Nobody said it was easy," sang Chris Martin, in Coldplay's love song from a scientist to the girl he was neglecting. The same could be said of data scientists embarking on a data management programme!

In his previous blog on Good Data Culture, our Head of Client Services, Luca Rovesti, discussed taking first steps on the road to data maturity and how to build a data culture. This time he’s taking a look at some of the biggest challenges of Data Management that arise once those first steps have been made – and how to overcome them. Want to see more on this topic? Head here.

One benefit of being part of a fast-growing company is the sheer volume and type of projects that we get to be involved in, and the wide range of experiences – successful and less so – that we can witness in a short amount of time.

Without a doubt, the most important challenge that rears its head on the data management journey is around complexity. There are so many systems, business processes and requirements of enterprise data that it can be hard to make sense of it all.

Those who get out of the woods fastest are the ones who recognise that there is no magical way of solving things that must be done.

A good example would be the creation of data quality rule dictionaries to play a part in your data governance journey.

Firstly, there is no way that you will know what you need to do as part of your data driven culture efforts unless you go through what you have got.

Although technology can give us a helpful hand in the heavy lifting of raw data, from discovery to categorisation of data sets (data catalogues), the definition of domain-specific rules always requires a degree of human expertise and understanding of the exception management framework.
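One lightweight way to capture that human expertise is a machine-readable rule dictionary, where each entry records a rule's definition, quality dimension and owner alongside the data domain it belongs to. The entries below are hypothetical examples invented for illustration, not Datactics rules:

```python
# A minimal sketch of a data quality rule dictionary: structured entries that
# preserve the human context (definition, owner, severity) behind each rule.
RULE_DICTIONARY = [
    {
        "id": "DQ-001",
        "domain": "counterparty",
        "field": "lei",
        "dimension": "validity",
        "definition": "LEI must be a 20-character ISO 17442 code",
        "owner": "Counterparty Data Steward",
        "severity": "high",
    },
    {
        "id": "DQ-002",
        "domain": "counterparty",
        "field": "country",
        "dimension": "completeness",
        "definition": "Country of incorporation must be populated",
        "owner": "Onboarding Team",
        "severity": "medium",
    },
]

def rules_for(domain):
    """Look up the agreed rules for a data domain."""
    return [r for r in RULE_DICTIONARY if r["domain"] == domain]

print([r["id"] for r in rules_for("counterparty")])
```

Keeping the dictionary in a structured form like this means the same entries can drive automated checks while remaining readable by the data owners who agreed them.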

Subsequently, getting data owners and technical people to contribute to a shared plan that takes the uses of the data and how the technology will fit in is a crucial step in detailing the tasks, problems and activities that will deliver the programme.

Clients we have been talking to are experts in their subject areas. However, they don’t know what “best of breed” software and data management systems can deliver. Sometimes, clients find it hard to express what they want to achieve beyond a light-touch digitalisation of a human or semi-automated machine learning process.

The most important thing that we’ve learned along the way is that the best chance of success in delivering a data management programme involves using a technology framework that is both proven in its resilience and flexible in how it can fit into a complex deployment.

From the early days of ‘RegMetrics’ – a version of our data quality software that was configured for regulatory rules and pushing breaks into a regulatory reporting platform – we could see how a repeatable, modularised framework provided huge advantages in speed of deployment and positive outcomes in terms of making business decisions.

Using our clients’ experiences and demands of technology, we’ve developed a deployment framework that enables rapid delivery of data quality measurement and remediation processes, providing results to senior management that can answer the most significant question in data quality management: what is the return on investing in my big data?

This framework has enabled us to be perfectly equipped to provide expertise on the technology that marries our clients’ business knowledge:

  • Business user-focused low-code tooling connecting data subject matter experts with powerful tooling to build rules and deploy projects
  • Customisable automation that integrates with any type of data source, internal or external
  • Remediation clinic so that those who know the data can fix the data efficiently
  • “Chief Data Officer” dashboards provided by integration into off-the-shelf visualisation tools such as Qlik, Tableau, and PowerBI.

Being so close to our clients also means that they have a great deal of exposure and involvement in our development journey.

We have them 'at the table' when it comes to feature enhancements, partnering with them rather than selling and moving on, and involving them in our regular Guest Summit events to foster a sense of the wider Datactics community.

It’s a good point to leave this blog, actually, as next time I’ll go into some of those developments and integrations of our “self-service data quality” platform with our data discovery and matching capabilities.

Click here for the latest news from Datactics, or find us on LinkedIn, Twitter or Facebook

The post Tackling Practical Challenges of a Data Management Programme appeared first on Datactics.

]]>
How to overcome barriers to RegTech adoption https://www.datactics.com/blog/marketing-insights/notes-on-a-webinar-overcoming-barriers-to-regtech-adoption/ Thu, 02 Jul 2020 14:00:33 +0000 https://www.datactics.com/notes-on-a-webinar-overcoming-barriers-to-regtech-adoption/ At the start of June, Marketing & Partnerships Manager, Matt Flenley was a panellist on an A-Team webinar on overcoming barriers to RegTech adoption alongside Sophia Bantanidis of Citi, Kayvan Alikhani of Compliance.ai, and Patrick Boscher of Pabora.io. With questions posed by Sarah Underwood, the webinar delved deep into views from different sides of the fence – buyers, sellers and investors – seeking to provide […]

The post How to overcome barriers to RegTech adoption appeared first on Datactics.

]]>
At the start of June, Marketing & Partnerships Manager Matt Flenley was a panellist on an A-Team webinar on overcoming barriers to RegTech adoption, alongside Sophia Bantanidis of Citi, Kayvan Alikhani of Compliance.ai, and Patrick Boscher of Pabora.io. With questions posed by Sarah Underwood, the webinar delved deep into views from different sides of the fence – buyers, sellers and investors – seeking to provide a broad understanding of the barriers each side faces with regulatory compliance and adopting RegTech solutions. Here, he focuses on a few of the biggest talking points raised by the webinar. 

From the moment the panellists began introducing themselves, I knew it was going to be pretty wide-ranging. Patrick Boscher is ex-Allianz Group, now RegTech advisor and Startup Angel; Sophia Bantanidis is a former regulator, now leading RegTech & Fintech innovation at Citi’s Innovation Lab; Kayvan Alikhani of Compliance.ai, a technologist who’s worked with hundreds of compliance officers; and then, of course, me – now at a Fintech/RegTech, but formerly in a bank and dealing with the sorts of problems that technology is rapidly solving.  

A-Team Group usually gets a good panel together, full of opinions from across the sphere of a debate, and this was no different. 

Sophia Bantanidis’s views reflected the need of a massive financial organisation to have regtech companies think big – how can my solution be deployed at an enterprise level to solve multiple problems around regulatory requirements and risk management for thousands of people, in hundreds of different countries? – and the counterpoint RegTech adoption view from Kayvan Alikhani was that banks should think in terms of specific problems they have right now that must be solved, even on a small scale. 

Fundamentally, the most important thing emerging was that both sides – all sides, including investors – have a strong desire to make it work and improve their regulatory processes with the help of RegTech. 

Innovation centres at banks are a superb way of getting a foot-in-the-door, and as Ms. Bantanidis said – they can act as a sponsor throughout the organization, a champion for the firm to gain a foothold and start to grow.  

This willingness to support innovative tech firms flourishing in organisations is something the independent commentator Chris Skinner expanded upon in a session held last year by the FS Club on digitisation (and the focus of his most recent book). He asserted that much of the evolution necessary to fully benefit from the opportunities RegTech promises – in mitigating financial crime and improving the financial services industry – requires a change of mindset within banking and finance, to become data-and-digital-first, and not simply to parrot those statements without changing the underlying structure of banking and financial regulations. 

Now, it would be too easy – not to mention wrong – for me to suggest that all we need for a banking and finance revolution is for the banks to evolve culturally! 

Like partners in a relationship where perhaps different views on a point are held, we need a willingness to work things out together and some intelligent mediation along the way. The webinar’s conclusion really hit home on this, with some sage points of advice and compromise:

  • RegTechs shouldn’t over-inflate their involvement with a financial institution, because they’ll figure out that the contract with a big bank that’s on the sales deck was actually just a one-off piece of work which didn’t go anywhere;
  • Banks mustn’t just put RegTechs into the same IT procurement process as used for giant blue-chip tech firms; there’s a need to be creative and flexible, as difficult as that might sound for banks (and I do genuinely understand!)
  • RegTechs should tell the innovation department at a bank if they’re already working on a project elsewhere in the organisation to avoid miscommunication – something Ms. Bantanidis was really very clear about
  • Lastly, banks should view a paid project with a RegTech – at the very worst – as a valuable way of “failing fast.” It might be a big cultural shock to think this way, but the saying is true: if we always do what we’ve always done, we’ll always get what we’ve always got. And in 2020, with no let-up in regulatory reporting scrutiny and COVID-19 pressuring margins, it’s clear that backing the right RegTech adoption could turbocharge compliance while delivering better efficiencies. Regulatory technology offers the opportunity to complement compliance teams and make their regulatory obligations easier to deliver.

Next up for Datactics on the webinar front, we’ve Head of Sales Kieran Seaward in a Wealth Management special on “Future Proof Operating Models”, followed a week later by Head of AI, Dr Fiona Browne, exploring uses of AI in AML & Know Your Customer. I’ll share details of these in my LinkedIn feed, or follow the company one here. 

Click here for the latest news from Datactics, or find us on LinkedIn, Twitter or Facebook.

The post How to overcome barriers to RegTech adoption appeared first on Datactics.

Dataset Labelling For Entity Resolution & Beyond with Dr Fiona Browne https://www.datactics.com/blog/ai-ml/blog-ai-dataset-labeling/ Fri, 05 Jun 2020 10:40:43 +0000 https://www.datactics.com/blog-ai-dataset/ In late 2019 our Head of AI, Dr Fiona Browne, delivered a series of talks to the Enterprise Data Management Council on AI-Enabled Data Quality in the context of AML operations, specifically for resolving differences in dataset labelling for legal entity data. In this blog post, Fiona goes under the hood to explain some of […]

The post Dataset Labelling For Entity Resolution & Beyond with Dr Fiona Browne appeared first on Datactics.


In late 2019 our Head of AI, Dr Fiona Browne, delivered a series of talks to the Enterprise Data Management Council on AI-Enabled Data Quality in the context of AML operations, specifically for resolving differences in dataset labelling for legal entity data.

In this blog post, Fiona goes under the hood to explain some of the techniques that underpin Datactics’ extensible AI Framework.

Across the financial sector, Artificial Intelligence (AI) and Machine Learning (ML) have been applied to a number of areas, from the profiling of behaviour for fraud detection and Anti-Money Laundering (AML) through to the use of natural language processing to enrich data in Know Your Customer (KYC) processes.

An important part of the KYC/AML process is entity resolution: identifying records that refer to the same entity across multiple data sources and resolving them. This is the space in which high-performance matching engines have traditionally been deployed, with associated fuzzy-match capabilities used to account for trivial or significant differences (indeed, this is part of Datactics' existing self-service platform).
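As an illustrative sketch of the fuzzy-matching idea, using Python's standard-library difflib rather than a high-performance matching engine (the entity names and the 0.8 threshold are invented for this example):

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Similarity in [0, 1] between two normalised strings."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

# Two records for the same legal entity, written differently in two sources
source_a = "Acme Holdings Ltd"
source_b = "ACME Holdings Limited"

score = similarity(source_a, source_b)
# Pairs scoring above a tuned threshold become candidate matches
is_candidate_match = score > 0.8
```

A production engine would use far richer comparisons (phonetics, token reordering, reference data), but the thresholding principle is the same.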

In this arena, Machine Learning (ML) techniques have been applied to address the task of entity resolution using different approaches from graphs and network analysis to probabilistic matching.

Although ML is a sophisticated approach for democratising entity resolution, a limitation of supervised ML in particular is that it requires large volumes of labelled data for the model to learn from.

What is Supervised ML? 

For supervised ML, a classifier is trained using a labelled dataset: a dataset that contains example inputs paired with their correct output labels. In the case of entity resolution, this includes examples of input matches and non-matches which are correctly labelled. The Machine Learning algorithm learns from these examples and identifies patterns that link to specific outcomes. The trained classifier then uses this learning to make predictions on new, unseen cases based on their input values.
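As a toy, hedged sketch of that idea in standard-library Python, with a single string-similarity feature and a simple learned threshold standing in for a real classifier (all pairs and labels are invented):

```python
from difflib import SequenceMatcher

def feature(pair) -> float:
    """Single input feature: string similarity of the two entity names."""
    a, b = pair
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Labelled dataset: example input pairs with their correct output labels
training = [
    (("Acme Holdings Ltd", "ACME Holdings Limited"), 1),  # match
    (("Acme Holdings Ltd", "Apex Capital LLC"), 0),       # non-match
    (("Baker & Co", "Baker and Company"), 1),             # match
    (("Baker & Co", "Carter Industries"), 0),             # non-match
]

# "Training": place a decision threshold midway between the two classes
match_scores = [feature(p) for p, label in training if label == 1]
non_match_scores = [feature(p) for p, label in training if label == 0]
threshold = (min(match_scores) + max(non_match_scores)) / 2

def predict(pair) -> int:
    """Label a new, unseen pair: 1 for match, 0 for non-match."""
    return 1 if feature(pair) > threshold else 0
```

A real model would learn from many features and far more examples, which is exactly why labelled data becomes the bottleneck.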

Dataset Labelling

As we see from above, for supervised ML we need high-quality labelled examples for the classifier to learn from; unlabelled or poorly labelled data only makes the task harder. The process of labelling raw data from scratch can be time-consuming and labour-intensive, especially if experts are required to provide labels for, in this example, entity resolution outputs. The data labelling process is repetitive in nature, and there is a need for consistency to ensure high-quality, correct labels are applied. It is also costly in monetary terms: those involved in processing the entity data require a high level of understanding of the nature of entities and ultimate beneficial owners, and the stakes are high, since failure can result in regulatory sanctions and fines.

Approaches for Dataset Labelling

As AI/ML progresses across all sectors, we have seen the rise of industrial-scale dataset labelling, where companies and individuals are able to outsource their labelling tasks to annotation tools and labelling services. One example is the Amazon Mechanical Turk service, which enables the crowdsourcing of data labelling and can reduce labelling work from months to hours. Machine Learning models can also be harnessed for data annotation tasks using approaches such as weak and semi-supervised learning, along with Human-In-The-Loop Learning (HITL). HITL enables the improvement of ML models through the incorporation of human feedback at stages such as training, testing and evaluation.

ML approaches for Budgeted Learning

We can think of budgeted learning as a balancing act between the expense (in terms of cost, effort and time) of acquiring training data and the predictive performance of the model that you are building. For example, can we label a few hundred examples instead of hundreds of thousands? There are a number of ML approaches that can help with this question and reduce the burden of manually labelling large volumes of training data. These include transfer learning, where you reuse previously gained knowledge, for instance by leveraging existing labelled data from a related sector or similar task. The recent open-source system Snorkel uses a form of weak supervision to label datasets via programmable labelling functions.
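The labelling-function idea can be illustrated in a few lines. This is a toy majority vote in plain Python, not Snorkel's actual API; the rules and record fields are invented:

```python
# Each labelling function votes match (1), non-match (0), or abstains (None)
def lf_same_postcode(a, b):
    return 1 if a["postcode"] == b["postcode"] else None

def lf_shared_surname(a, b):
    return 1 if a["name"].split()[-1] == b["name"].split()[-1] else 0

def lf_different_country(a, b):
    return 0 if a["country"] != b["country"] else None

LABELLING_FUNCTIONS = [lf_same_postcode, lf_shared_surname, lf_different_country]

def weak_label(a, b):
    """Combine the noisy votes by simple majority."""
    votes = [lf(a, b) for lf in LABELLING_FUNCTIONS]
    votes = [v for v in votes if v is not None]
    if not votes:
        return None  # no function fired; leave the pair unlabelled
    return 1 if sum(votes) > len(votes) / 2 else 0

pair_label = weak_label(
    {"name": "Ellie Smith", "postcode": "BT1 1AA", "country": "UK"},
    {"name": "E Smith", "postcode": "BT1 1AA", "country": "UK"},
)
```

Snorkel goes further than this flat vote, modelling the accuracy and correlations of the labelling functions, but the principle of turning cheap heuristics into training labels is the same.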

Active learning is a semi-supervised ML approach which can also reduce the burden of manually labelling datasets. The 'active learner' proactively selects the training data it needs to learn from, based on the idea that an ML model can achieve good predictive performance with fewer training instances by prioritising the examples it learns from. During the training process, an active learner poses queries, typically a selection of unlabelled instances from a dataset, and these ML-selected instances are then presented to an expert to label manually.
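The selection step can be sketched as uncertainty sampling, the simplest query strategy (the pair identifiers and model scores below are hypothetical):

```python
def most_uncertain(scores: dict) -> str:
    """Pick the unlabelled instance whose predicted match probability is
    closest to 0.5, i.e. the one the model is least sure about."""
    return min(scores, key=lambda item: abs(scores[item] - 0.5))

# Hypothetical model scores for unlabelled candidate pairs
scores = {
    "pair_001": 0.97,  # confident match: little to learn here
    "pair_002": 0.52,  # model is unsure, so this is worth an expert's time
    "pair_003": 0.08,  # confident non-match
}
query = most_uncertain(scores)  # the instance presented to the expert
```

The expert labels only the queried instances, the model is retrained, and the loop repeats until the labelling budget is spent.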

As seen above, there are wide and varied approaches to the task of dataset labelling. Which approach to select depends on a number of factors, from the prediction task through to expense and budgeted learning. The connecting tenet is ensuring high-quality labelled datasets for classifiers to learn from.

Click here for more from Datactics, or find us on LinkedIn, Twitter or Facebook for the latest news.


Why you should read the European Banking Authority report on AI and Big Data https://www.datactics.com/blog/cto-vision/why-you-should-read-the-european-banking-authority-report-on-ai-and-big-data/ Thu, 13 Feb 2020 15:05:43 +0000 https://www.datactics.com/why-you-should-read-the-european-banking-authority-report-on-ai-and-big-data/ You might have missed this highly informative report from the European Banking Authority (EBA)  – because the title didn’t contain the popular buzzwords of Artificial Intelligence – AI or Machine Learning – ML (nor does the front cover have a picture of a robot!). But for anyone who is trying to understand the challenges ahead for AI and […]

The post Why you should read the European Banking Authority report on AI and Big Data appeared first on Datactics.


You might have missed this highly informative report from the European Banking Authority (EBA), because the title didn't contain the popular buzzwords Artificial Intelligence (AI) or Machine Learning (ML) – nor does the front cover have a picture of a robot!

But for anyone who is trying to understand the challenges ahead for AI and broader data management in banking, I think this report provides a rare unbiased, concise and highly educational deep dive into pretty much all of the key topics involved. I won't give a synopsis here, just some reasons why I think you should read it:

It’s really all about AI in Banking!

‘Advanced Analytics’ is the term the authors use for AI, ML tech.

BS Free

It provides most of the background you need to see through the smoke, mirrors and hype surrounding AI and Advanced Analytics.

It’s a great introduction

But it's not dumbed down. It's great for business people who need a better understanding of the challenges their data scientists and AI professionals face, and great for data scientists who need to understand the broader applications and implications of this rapidly emerging technology in banking. If you don't know what kind of algorithm might be used for a particular business case, this is for you. If you are trying to understand what a data scientist means by accuracy and a confusion matrix, this is for you too.

Technologically Neutral

The report maintains technological neutrality. With so much information these days coming from vendors of proprietary tech, in a world where there are few common open standards, it's hard to find information that doesn't in some way imply vendor lock-in.

Holistic

This report covers pretty much everything, including Data Quality, different types of ML, explainability and interpretability, and ethics. So many reports are very narrow, focusing on one use case or technology, but this one takes the whole horizon into account.

Pragmatic

It describes practical use cases for AI and the technology involved. I was particularly impressed with the technical content: accurate, concise and easy to understand. More importantly, it also describes the potential problems – things like how automated credit scoring could be 'gamed' by an institution's sales staff, who could coach uncreditworthy customers on how to be granted a loan!

Forward-thinking

The European Banking Authority covers the topics of ethics in AI and even security in AI. Ethics has obviously been talked about a lot in recent months (sometimes with slightly fanciful references to Asimov's laws of robotics!), but this report lays out some really good practical steps that need to be implemented to ensure ML solutions are fair. It's also refreshing to see serious consideration given to security (data poisoning, adversarial attacks, model stealing), something I blogged about a couple of years ago. It's a bit like the old days of software development, when people didn't really take things like SQL injection or cross-site scripting seriously, resulting in security breaches in many applications and websites. If AI solutions aren't built with security from the ground up, the next few years could see echoes of those past security breaches played out in the AI domain.

You can get the report here.

Click here for more from Datactics, or find us on LinkedIn, Twitter or Facebook for the latest news.


How Datactics helps Santa with his Data Quality issues https://www.datactics.com/blog/cto-vision/how-datactics-helps-santa-with-his-data-quality-issues/ Thu, 19 Dec 2019 16:00:45 +0000 https://www.datactics.com/how-datactics-can-help-santa-with-his-data-quality-issues/ Yes of course Santa has data quality issues! Everyone has data quality issues. In this article, we outline how Datactics software can help Santa improve the efficiency of his pre-Christmas operations and have a stress free Christmas Eve delivery and a relaxing Christmas Day. Data Quality Firewall, REST API, Data Quality Remediation Datactics provides a […]

The post How Datactics helps Santa with his Data Quality issues appeared first on Datactics.

Yes of course Santa has data quality issues! Everyone has data quality issues.

In this article, we outline how Datactics software can help Santa improve the efficiency of his pre-Christmas operations and have a stress free Christmas Eve delivery and a relaxing Christmas Day.

Data Quality Firewall, REST API, Data Quality Remediation

Datactics provides a REST API interface and "Data Quality Firewall" to allow the import of data from Optical Character Recognition (OCR) software that has scanned the children's letters, guaranteeing the quality of data entering the data store. Records passing DQ criteria are automatically allowed through to Santa, while records failing DQ checks are quarantined, where they can be reviewed interactively by Santa's Elves in the Data Quality Clinic.

Oh dear! Did Ellie ask for a Barbie House or a Barbie Horse? Not to worry – the record is in quarantine and will be reviewed by an Elf who perhaps knows Ellie and can find out what she wanted, checking against additional data sources like the latest online toy catalogues to discover what the possible matches might be. This saves the elves significant time, as they only have to review a smaller set of records, making the busiest time of the year far less stressful for all at the North Pole!
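The routing logic of such a firewall can be sketched in a few lines of Python. This is a minimal illustration, not the product's API; the field names and the DQ rule are invented:

```python
def passes_dq(record: dict) -> bool:
    """A hypothetical DQ rule: every required field is present and non-empty."""
    required = ("child_name", "address", "present")
    return all(record.get(field) for field in required)

def firewall(records):
    """Route records: those passing DQ go through, failures are quarantined."""
    accepted, quarantined = [], []
    for record in records:
        (accepted if passes_dq(record) else quarantined).append(record)
    return accepted, quarantined

accepted, quarantined = firewall([
    {"child_name": "Ellie", "address": "1 Snow Lane", "present": "Barbie House"},
    {"child_name": "Tom", "address": "", "present": "Train Set"},  # fails DQ
])
```

Only the quarantined records need an Elf's attention, which is where the time saving comes from.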

SVC – Single View of Child

Managing vast quantities of historical Personally Identifiable Information (PII) on his data servers in Lapland is a difficult task, but Datactics can help create a Single View of Child from the disparate data silos, normalising the data and creating a golden record for each child. This ensures that presents aren't duplicated and, more importantly, keeps Santa compliant with GDPR.
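A minimal sketch of the golden-record idea, assuming a simple "most complete value wins" survivorship rule (real survivorship rules are configurable and far richer; the records are invented):

```python
def golden_record(duplicates):
    """Merge duplicate records for one child, keeping the most complete
    (here: longest non-empty) value seen for each field."""
    merged = {}
    for record in duplicates:
        for field, value in record.items():
            if value and len(str(value)) > len(str(merged.get(field, ""))):
                merged[field] = value
    return merged

# The same child, spread across two data silos
silo_records = [
    {"name": "Ellie Smith", "address": "", "dob": "2015-03-01"},
    {"name": "E. Smith", "address": "1 Snow Lane, Lapland", "dob": ""},
]
single_view = golden_record(silo_records)
```

Each silo contributes the fields it knows best, and the merged golden record is the single view used downstream.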

Address Validation

The last thing Santa wants on Christmas Eve, when he's delivering to a few billion houses, is to go to the wrong address: it wastes time and risks a potential present mix-up. Fortunately, Datactics makes it easy to validate the children's addresses against databases such as the Postcode Address File (PAF) and Capscan, so Santa knows he's going to the right place before he sets out.
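As a hedged illustration, a first-pass format check on UK postcodes might look like the following. Note this is only a simplified pattern test; validating against a reference file like the PAF is what confirms an address actually exists:

```python
import re

# Simplified UK postcode pattern -- a syntax check only, not a PAF lookup
UK_POSTCODE = re.compile(r"^[A-Z]{1,2}\d[A-Z\d]? ?\d[A-Z]{2}$", re.IGNORECASE)

def plausible_postcode(postcode: str) -> bool:
    """True if the string is shaped like a UK postcode."""
    return bool(UK_POSTCODE.match(postcode.strip()))
```

For example, "BT1 1AA" passes the shape check while "Lapland" does not; records failing even this cheap test can be quarantined before any expensive reference-data lookup.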

Screening Against the Naughty List

This is not as simple as it may sound, because you have to get it right or someone is going to be very upset. But using the established techniques Datactics has developed for KYC and AML screening against Politically Exposed Persons and similar lists, Santa can screen against the Naughty List with confidence.

Excitingly, it’s not only Santa who can screen against this list: everyone can try the naughty list screening for free at https://aml-screening.datactics.com/

Merry Christmas from everyone at Datactics!

Click here for more from Datactics, or find us on LinkedIn, Twitter or Facebook for the latest news.

