Writing Data Deduplication Engine
Paweł Rychlik
September 3, 2013


Duplitector

Duplitector is a duplicate-data detection engine based on Elasticsearch. It has been successfully used as a proof of concept, piloting a full-blown enterprise solution.

Background

In certain systems we have to deal with lots of low-quality data containing typos, duplicate entries, malformed or missing fields, and erroneous or inaccurate bits of information.

The reason may lie in the specifics of the data source:

  • Careless humans using touch-screens.
  • Keyboard cats and internet trolls.
  • Cheap and/or faulty sensors.
  • Underpaid or lazy staff.
  • Primitive web crawlers, etc.

In some cases the data comes from multiple external providers, each using a slightly different schema, and ends up dumped in a common database. Chances are that these providers share a common subset of entries, but for various reasons it’s difficult to uniquely identify an entry and tell which records are likely to be duplicates.

Datasets like these often contain vast numbers of similar entries. If not filtered out properly, they degrade the quality of service delivered by the system. Better to get rid of them early!

This project is …

… meant to be a playground for developing a deduplication algorithm, and is currently aimed at the domain of various sorts of organizations (e.g. NPO databases). Still, it’s small and generic enough to be easily adjusted to handle other data schemas or data sources. The repository contains a set of crafted organizations and their duplicates (partially fetched from the IRS, partially intentionally modified, partially made up), which makes it convenient to test individual pieces of the algorithm.

The basic idea

The most basic concept is as follows.

  1. Take a new entry.
  2. Ask Elasticsearch whether it recognizes something like the given entry within the index (e.g. via a fuzzy search):
     • No: it’s a unique entry. Add it to the Elasticsearch index and go to step 1.
     • Yes, but the similarity is way too low (see note 1): it’s a unique entry. Add it to the Elasticsearch index under a new group id and go to step 1.
     • Yes, and the similarity is rather fair: it’s likely to be a duplicate. Take the closest match, combine it with the new entry (see note 2), and put the result back into the Elasticsearch index under the same group id. Go to step 1.

Note 1: How do I tell whether the similarity is sufficient? In the simplest solution it’s a matter of setting a fixed score threshold, based on a series of test results. More advanced use cases might require more sophisticated techniques.

Note 2: Combine entries in the easiest way possible: join their properties as they are. Here we benefit from the fact that groups containing multiple occurrences of the keywords we search for are ranked higher in the overall search results.
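
To make the loop concrete, below is a minimal sketch in Python using the official Elasticsearch client. The organizations index, the field names, the id and group_id properties, the fixed SCORE_THRESHOLD value, and the naive property merge are all illustrative assumptions rather than the actual Duplitector code; the exact query DSL and client API also differ between Elasticsearch versions.

```python
# Minimal sketch of the dedupe loop described above. The index name, field
# names, threshold value, and naive merge are illustrative assumptions only.
from elasticsearch import Elasticsearch

es = Elasticsearch()
SCORE_THRESHOLD = 1.5  # fixed threshold, tuned against a labelled test set


def deduplicate(entry):
    """Index `entry` under a new group id, or merge it into its closest match."""
    result = es.search(
        index="organizations",
        body={
            "query": {
                # Fuzzy matching on the organization name.
                "match": {"name": {"query": entry["name"], "fuzziness": "AUTO"}}
            },
            "size": 1,
        },
    )
    hits = result["hits"]["hits"]

    if not hits or hits[0]["_score"] < SCORE_THRESHOLD:
        # No match, or the similarity is way too low: treat as a unique entry.
        entry["group_id"] = entry["id"]
        es.index(index="organizations", id=entry["id"], body=entry)
    else:
        # Likely a duplicate: combine with the closest match (naive property
        # join) and put it back under the same group id.
        closest = hits[0]["_source"]
        combined = {**closest, **entry, "group_id": closest["group_id"]}
        es.index(index="organizations", id=entry["id"], body=combined)
```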

Which query types may come in handy when building the most effective dedupe engine? All of them are thoroughly described in the Elasticsearch reference, but most likely you’ll want to try these out (a sample query combining a few of them follows the list):

  • fuzzy-like-this,
  • more-like-this,
  • fuzzy,
  • wildcard,
  • bool,
  • geo-distance.
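
As a rough illustration of how a few of these can be combined into a single candidate lookup, here is an example of a bool query with a fuzzy clause, a more_like_this clause, and a geo_distance filter. The field names and values are made up, and the exact syntax varies between Elasticsearch versions (fuzzy_like_this, for instance, was removed in later releases).

```python
# Illustrative bool query combining several of the query types listed above.
# Field names and values are made up; the exact DSL differs between versions.
candidate_query = {
    "query": {
        "bool": {
            "should": [
                # Tolerates typos in the organization name.
                {"fuzzy": {"name": {"value": "acme fundation", "fuzziness": 2}}},
                # Ranks entries sharing rare terms with the new record higher.
                {
                    "more_like_this": {
                        "fields": ["name", "street"],
                        "like": "Acme Foundation, 5th Avenue",
                        "min_term_freq": 1,
                        "min_doc_freq": 1,
                    }
                },
            ],
            # Only consider organizations located nearby.
            "filter": [
                {
                    "geo_distance": {
                        "distance": "10km",
                        "location": {"lat": 40.75, "lon": -73.98},
                    }
                }
            ],
        }
    }
}
```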

Ideas for further improvements:

  • If possible, normalize the entries up front, before processing (phone numbers, zip codes, etc.); see the sketch after this list.
  • If applicable, make use of geocoding services. External services like Google Maps might prove helpful.
  • Deduplicating large datasets with this kind of engine may turn out to be inefficient. To address this, apart from scaling Elasticsearch and fine-tuning queries, one can think of parallelizing the process.
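
As an example of the first point, a tiny normalization step might look like the sketch below; the field names and rules are purely illustrative and not taken from the Duplitector repository.

```python
import re


def normalize(entry):
    """Normalize noisy fields before indexing; field names and rules are illustrative."""
    normalized = dict(entry)
    if "phone" in normalized:
        # Keep digits only, e.g. "(212) 555-0199" -> "2125550199".
        normalized["phone"] = re.sub(r"\D", "", normalized["phone"])
    if "zip" in normalized:
        # Keep the 5-digit prefix of ZIP+4 codes, e.g. "10001-4356" -> "10001".
        normalized["zip"] = normalized["zip"].strip()[:5]
    if "name" in normalized:
        # Lowercase and collapse whitespace to reduce trivial mismatches.
        normalized["name"] = " ".join(normalized["name"].lower().split())
    return normalized
```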

Summary

Although it looks pretty naive in this form, it definitely can do the job. With very little effort, one can achieve really encouraging results. And what exactly do encouraging results mean? Well, it’s all relative and depends on your specific requirements, e.g. whether some chance of getting a false negative is acceptable. Just to give a reference point: 95+% accuracy (with barely any false negatives) is feasible.

For the sake of this PoC, I’ve prepared a set of 200 entries: 100 unique and 100 duplicates, along with extra information about the duplicate groups, so that the quality of the experiment results can be measured. It’s obviously a modest and artificial dataset, but it serves as a model for predicting the deduplication quality on much larger target datasets. And that’s pretty much the only practical way to work with it: try it out on a small test dataset and project the results onto large production data.
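
One way to score such an experiment against the known duplicate groups (an illustrative evaluation helper, not part of the original code) is to compare the detected grouping pairwise with the expected one; recall then directly reflects how many duplicates were missed, i.e. the false negatives.

```python
from itertools import combinations


def pairwise_scores(expected, detected):
    """Pairwise precision/recall of a detected grouping vs. known duplicate groups.

    Both arguments map entry id -> group id; only the induced "same group"
    relation matters, so the labels themselves need not match.
    """
    ids = sorted(expected)
    true_pairs = {(a, b) for a, b in combinations(ids, 2) if expected[a] == expected[b]}
    found_pairs = {
        (a, b)
        for a, b in combinations(ids, 2)
        if a in detected and b in detected and detected[a] == detected[b]
    }
    matched = len(true_pairs & found_pairs)
    precision = matched / len(found_pairs) if found_pairs else 1.0
    recall = matched / len(true_pairs) if true_pairs else 1.0  # low recall = many false negatives
    return precision, recall
```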

Sources on GitHub: Duplitector
