Gains Made in Fingerprint Analysis: NIST
An algorithm developed by NIST and Michigan State University researchers may help to reduce the chance of human error in the first step of fingerprint analysis.
Scientists from the National Institute of Standards and Technology (NIST) and Michigan State University have developed an algorithm that automates an important step in fingerprint analysis; their research is published in IEEE Transactions on Information Forensics and Security. The accomplishment matters because research has shown fingerprint examination can produce erroneous results. The NIST news release cites a 2009 report from the National Academy of Sciences, which found that results "are not necessarily repeatable from examiner to examiner," and that even experienced examiners may disagree with their own past conclusions when they re-examine the same prints later.
The algorithm may help to reduce the chance of human error, according to the release.
"We know that when humans analyze a crime scene fingerprint, the process is inherently subjective," said Elham Tabassi, a computer engineer at NIST and a co-author of the study. "By reducing the human subjectivity, we can make fingerprint analysis more reliable and more efficient."
"At a crime scene, there's no one directing the perpetrator on how to leave good prints," explained Anil Jain, a computer scientist at MSU and co-author of the study. When an examiner receives latent prints from a crime scene, the first step is to judge how much useful information they contain. This first step is standard practice in the forensic community, and thus is the step automated by the researchers, Jain said.
After that initial step, if the print contains sufficient usable information, it can be submitted to an Automated Fingerprint Identification System (AFIS), which searches its database and returns a list of potential matches to be examined for a conclusive match.
"If you submit a print to AFIS that does not have sufficient information, you're more likely to get erroneous matches. If you don't submit a print that actually does have sufficient information, the perpetrator gets off the hook," Tabassi said.
The researchers used machine learning to build their algorithm, according to the release. "To get training examples, the researchers had 31 fingerprint experts analyze 100 latent prints each, scoring the quality of each on a scale of 1 to 5. Those prints and their scores were used to train the algorithm to determine how much information a latent print contains," it says. "After training was complete, researchers tested the performance of the algorithm by having it score a new series of latent prints. They then submitted those scored prints to AFIS software connected to a database of over 250,000 rolled prints. All the latent prints had a match in that database, and they asked AFIS to find it.
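The training setup described in the release can be sketched in outline. The following is a minimal illustration, not the researchers' actual code: the feature names (minutiae count, ridge clarity) and the use of a simple least-squares regressor are assumptions made for the sake of example, standing in for whatever features and model the study actually used.

```python
import numpy as np

# Hypothetical features for each latent print (assumed for illustration):
# [minutiae count, average ridge clarity]. Expert quality scores run
# from 1 (worst) to 5 (best), as in the study.
features = np.array([
    [12, 0.30],
    [25, 0.55],
    [40, 0.70],
    [55, 0.85],
    [70, 0.95],
], dtype=float)
expert_scores = np.array([1, 2, 3, 4, 5], dtype=float)

# Fit a least-squares linear model mapping features to quality scores,
# a stand-in for the model trained on the 31 experts' scores of
# 100 latent prints each.
X = np.hstack([features, np.ones((len(features), 1))])  # add bias column
weights, *_ = np.linalg.lstsq(X, expert_scores, rcond=None)

def predict_quality(minutiae, clarity):
    """Score a new latent print on the 1-to-5 quality scale."""
    score = np.array([minutiae, clarity, 1.0]) @ weights
    return float(np.clip(score, 1.0, 5.0))

print(predict_quality(50, 0.8))  # a fairly clear print scores near the top
```

In real use, a print scoring near the bottom of the scale would be held back rather than submitted to AFIS, which is the decision the algorithm is meant to automate.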
"This testing scenario was different from real casework, because in this test, the researchers knew the correct match for each latent print. If the scoring algorithm worked correctly, then the ability of AFIS to find that correct match should correlate with the quality score. In other words, prints scored as low-quality should be more likely to produce erroneous results—that's why it's so important to not inadvertently submit low-quality prints to AFIS in real casework—and prints scored as high-quality should be more likely to produce the correct match. Based on this metric, the scoring algorithm performed slightly better than the average of the human examiners involved in the study."
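The evaluation logic described above can be illustrated with a short sketch: for each scored latent print, record whether AFIS returned the known true match, then check whether the hit rate rises with the algorithm's quality score. The data below are invented for illustration only and do not reflect the study's actual results.

```python
from collections import defaultdict

# (algorithm quality score 1-5, did AFIS return the known true match?)
# Invented example data, not the study's measurements.
results = [
    (1, False), (1, False), (2, False), (2, True),
    (3, True), (3, False), (4, True), (4, True),
    (5, True), (5, True),
]

# Group the match outcomes by quality score.
hits = defaultdict(list)
for score, matched in results:
    hits[score].append(matched)

# Fraction of prints at each score level for which AFIS found the true match.
hit_rate = {s: sum(m) / len(m) for s, m in sorted(hits.items())}
print(hit_rate)

# If the scoring works, hit rate should not decrease as quality score rises.
rates = [hit_rate[s] for s in sorted(hit_rate)]
assert all(a <= b for a, b in zip(rates, rates[1:])), "hit rate not monotone"
```

A correlation of this kind between predicted quality and AFIS success is the metric by which, per the release, the algorithm performed slightly better than the average human examiner.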
Next, the researchers will use a larger dataset to improve the algorithm's performance and more accurately measure its error rate.