
Research within Biometrics

I don't think this lecture is assessed, per se, but I wanted to take notes regardless.

Sasoon's research group works in both biometrics and medical image analysis.

Biometrics

Face Recognition using Attractiveness as a Metric

This paper took the idea of soft biometrics, such as an eyewitness being asked whether a person was taller or shorter than \(x\), and added a person's attractiveness as a feature that could be included in the feature vector.

This is done based on the faces only. To measure attractiveness, they used eigenfaces to extract what in the image made a person attractive, and to see which parts of the face the eigenfaces suggest make the biggest difference.
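As a rough illustration of the eigenface idea (not the paper's code; the image size, dataset, and number of components are made up here), eigenfaces are just the principal components of flattened face images, and each face gets a vector of weights over them:

```python
# Minimal eigenfaces sketch: PCA over flattened grayscale face images.
# The data, image size, and number of components are illustrative only.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
faces = rng.random((100, 64 * 64))   # stand-in for 100 flattened 64x64 face images

pca = PCA(n_components=20)           # keep the 20 strongest eigenfaces
weights = pca.fit_transform(faces)   # each face as a vector of eigenface weights

# Each eigenface highlights the regions of the face that vary most across the set;
# the weights could then be related to an attractiveness rating.
eigenfaces = pca.components_.reshape(-1, 64, 64)
print(weights.shape, eigenfaces.shape)
```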

Recognition was then done with the original face recognition features combined with the soft biometric, and the CMC plot shows that the model with attractiveness as a metric gives a slight improvement.
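For reference, a CMC (cumulative match characteristic) curve is just the fraction of probes whose correct identity appears within the top \(k\) ranked matches. A minimal sketch, with a random score matrix standing in for real match scores:

```python
# Sketch of computing a CMC curve from a probe-vs-gallery score matrix.
# scores[i, j] = similarity between probe i and gallery identity j (random here).
import numpy as np

rng = np.random.default_rng(1)
n_probes, n_gallery = 50, 50
scores = rng.random((n_probes, n_gallery))
true_ids = np.arange(n_probes)        # assume probe i matches gallery identity i

# Rank of the correct identity for each probe (1 = best match)
order = np.argsort(-scores, axis=1)
ranks = np.array([np.where(order[i] == true_ids[i])[0][0] + 1
                  for i in range(n_probes)])

# CMC: fraction of probes whose correct match appears within the top k
cmc = [(ranks <= k).mean() for k in range(1, n_gallery + 1)]
print(cmc[:5])
```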

Face Profiles Recognition

If we don't have a frontal view of the face and only have profiles from the side, the paper asks whether the side profiles alone give enough information about a face to be used as a biometric.

We can do this in two ways: first with the traditional computer vision biometric, then again with soft biometrics for the side of the face, e.g., the size of the eye, nose, and ear, and the size, location, and shape of the lips. These attributes were rated by a set of people to generate a feature vector for the soft biometrics.

They then examined the correspondence between the soft biometric and the computer vision biometric. When presenting the results, they show ROC curves for both the traditional approach and their approach. The EER is 2.72% with computer vision alone, 0.95% with the soft biometric, and 0.86% for a fusion of the two.
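A minimal sketch of what that evaluation looks like, assuming simple weighted-sum score fusion (the scores, distributions, and fusion weight below are made up, not the paper's): the EER is the operating point where the false positive and false negative rates are equal.

```python
# Sketch of score-level fusion and EER computation; the scores and the 0.5/0.5
# fusion weights are illustrative only.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(2)
labels = np.r_[np.ones(200), np.zeros(200)]                          # genuine vs impostor trials
cv_scores = np.r_[rng.normal(2.0, 1, 200), rng.normal(0, 1, 200)]    # computer vision scores
soft_scores = np.r_[rng.normal(2.5, 1, 200), rng.normal(0, 1, 200)]  # soft-biometric scores
fused = 0.5 * cv_scores + 0.5 * soft_scores                          # weighted-sum fusion

def eer(y, s):
    fpr, tpr, _ = roc_curve(y, s)
    fnr = 1 - tpr
    i = np.argmin(np.abs(fpr - fnr))      # operating point where FPR ~= FNR
    return (fpr[i] + fnr[i]) / 2

for name, s in [("cv", cv_scores), ("soft", soft_scores), ("fused", fused)]:
    print(name, eer(labels, s))
```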

Ear Recognition

Ears were used as a biometric, using a deep-learning ResNet pre-trained on a much larger dataset. The paper did some transfer learning with the ear dataset and extracted features from the last layer of the ResNet, which is the vectorised version of the channels of the feature maps. From these, they calculated inter- and intra-class variations, which showed good separation between classes.
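A sketch of that pipeline, assuming a stock torchvision ResNet-18 as the backbone and dummy images and labels (none of which are from the paper):

```python
# Sketch: pretrained ResNet as a feature extractor, then intra- vs inter-class
# distances over the extracted feature vectors. Data and labels are dummies.
import torch
import torchvision.models as models

resnet = models.resnet18(weights="IMAGENET1K_V1")                 # torchvision >= 0.13 API
backbone = torch.nn.Sequential(*list(resnet.children())[:-1])     # drop the classifier head
backbone.eval()

images = torch.rand(8, 3, 224, 224)                # stand-in batch of ear images
labels = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])    # two dummy identities

with torch.no_grad():
    feats = backbone(images).flatten(1)            # vectorised last-layer feature maps

dists = torch.cdist(feats, feats)
same = labels[:, None] == labels[None, :]
intra = dists[same & ~torch.eye(len(labels), dtype=torch.bool)].mean()
inter = dists[~same].mean()
print(f"intra-class {intra:.3f}  inter-class {inter:.3f}")
```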

The last layer of the ResNet can be plotted as a heatmap, with the more important features in red and the less important features in blue. We see that the centre of the ear is the most important part for recognition.

Medical Image Analysis

Histopathology Segmentation Using ADS_UNet

This was a specific form of neural network, with the objective of segmenting histopathological images so that the image is divided into different tissue types (e.g., cancerous tissue).
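ADS_UNet itself isn't reproduced here; as a rough sketch of the shape of the task (an RGB histopathology tile in, a per-pixel class map out), here is a tiny PyTorch encoder-decoder, without the skip connections a real UNet has. The tile size and number of classes are arbitrary.

```python
# Minimal encoder-decoder sketch of semantic segmentation (not ADS_UNet).
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                 nn.MaxPool2d(2))                      # downsample
        self.dec = nn.Sequential(nn.ConvTranspose2d(16, 16, 2, stride=2), nn.ReLU(),
                                 nn.Conv2d(16, n_classes, 1))          # upsample + classify

    def forward(self, x):
        return self.dec(self.enc(x))

tile = torch.rand(1, 3, 256, 256)      # one histopathology tile
logits = TinySegNet()(tile)            # (1, n_classes, 256, 256)
print(logits.argmax(1).shape)          # per-pixel tissue-type labels
```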

Fast R-CNN for Osteoarthritis Detection

This detects osteoarthritis using a CNN. The important point is that the results from the research are similar to, or better than, the ground-truth values from the hospital.

Vein Image Analysis for Brain Damage Detection in Infants

There is a condition in which babies suffer from a lack of oxygen at birth. If this is the case, the blood vessels in the brain will look distinctive and different from those of a healthy baby.

The problem this paper addresses is that when a baby is born having suffered a lack of oxygen, clinicians traditionally have to wait two years to find out whether the baby has been affected: the traditional diagnosis is based on observing the baby's behaviour and personality at two years of age.

To avoid having to wait two years, we can instead look at an MRI of the brain taken at birth and let the model predict whether the baby is suffering from the illness or not.

The first stage is to detect the blood vessels in the 3D image of the brain; these are regions in space. From the regions, we can make measurements (e.g., width, intensity, length, eigenvalues of the Hessian matrix, and combinations of all of these).
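As a sketch of the vessel-detection stage (the paper's exact method isn't given here; this just illustrates Hessian-based vessel enhancement with the Frangi filter and some simple per-region measurements, with random data standing in for the MRI and an arbitrary threshold):

```python
# Sketch: enhance tube-like structures in a 3D volume with a Hessian-based
# (Frangi) filter, then measure simple region properties as features.
import numpy as np
from skimage.filters import frangi
from skimage.measure import label, regionprops

volume = np.random.rand(32, 32, 32)        # stand-in for the 3D brain MRI

vesselness = frangi(volume, sigmas=(1, 2, 3), black_ridges=False)
mask = vesselness > vesselness.mean() + 2 * vesselness.std()   # illustrative threshold

regions = regionprops(label(mask), intensity_image=volume)
for r in regions[:5]:
    # per-vessel-segment measurements (size, mean intensity) usable as features
    print(r.area, r.mean_intensity)
```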

The best accuracy achieved was 78%, with a very small dataset. Due to restrictions on patient data, datasets in this area are typically much smaller.

We were then shown the ROC curves; the best ROC curve is the one with the highest area under it.

Osteoporosis and Bone Fracture Prediction

Given a 3D scan of a bone, can we find out whether the patient is likely to have a fracture in the future, or whether they have had a fracture in that bone in the past?

If a person is suffering from osteoporosis, the pattern of the bone in the scan changes, and we can then find out whether the person is at greater risk of a bone fracture.

This makes use of 3D image data, then extracts texture information with local binary patterns (LBP) to form features. The performance of the system is quantified by plotting false positives against true positives (an ROC curve), with normally around a 50% success rate. The ROC plot includes a curve based on the measurements taken in the hospital (VMD), a second curve that adds extra clinical data about the patient as further features, and finally the best-performing curve, which combines the features from the scans with the clinical data.
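A minimal sketch of that feature pipeline, assuming a 2D slice for the LBP step and entirely made-up data, clinical features, and classifier (only the overall shape, LBP histogram plus clinical features into a classifier scored with ROC AUC, reflects the notes):

```python
# Sketch: LBP texture histogram from a bone-scan slice, concatenated with
# clinical features, fed to a classifier and scored with ROC AUC. Random data.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)

def lbp_histogram(slice_2d, P=8, R=1):
    lbp = local_binary_pattern(slice_2d, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

# 40 "patients": one scan slice + 3 clinical measurements each, with fracture labels
X = np.array([np.r_[lbp_histogram(rng.integers(0, 256, (64, 64), dtype=np.uint8)),
                    rng.random(3)]
              for _ in range(40)])
y = rng.integers(0, 2, 40)

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("AUC:", roc_auc_score(y, clf.predict_proba(X)[:, 1]))
```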

The most important thing is that the performance of the new algorithm is better than that of the current classifier.

Texture Feature Extraction for 3D Texture Classification

If we have 3D data from a patient who is suffering from emphysema(?), we can look at the texture of the lungs using the LBP of the imagery from the scan.

If the lung rotates, the features will change. By adapting the LBP, using spherical harmonics, we can make it invariant to rotation. A Gaussian Markov random field (similar in role to LBP) can also be used, and the resulting feature vectors can then segment or classify the lungs.
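As a 2D stand-in for that idea (the spherical-harmonics 3D version isn't shown here), skimage's rotation-invariant LBP variant gives very similar histograms for an image and its rotated copy:

```python
# Sketch of rotation-invariant texture features: compare LBP histograms of an
# image and a rotated copy using the rotation-invariant 'ror' LBP variant.
import numpy as np
from skimage.data import camera
from skimage.transform import rotate
from skimage.feature import local_binary_pattern

def ri_lbp_hist(img, P=8, R=1):
    lbp = local_binary_pattern(img, P, R, method="ror")   # rotation-invariant codes
    hist, _ = np.histogram(lbp, bins=2 ** P, range=(0, 2 ** P), density=True)
    return hist

img = camera()
h0 = ri_lbp_hist(img)
h90 = ri_lbp_hist(rotate(img, 90, preserve_range=True).astype(np.uint8))

# The histograms should be close despite the rotation
print(np.abs(h0 - h90).sum())
```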

The results show good performance on a large dataset of synthetic textures. The method can then be applied to the lungs to segment them into healthy regions and regions affected by COPD.