Multimodal Image Fusion for Feature Extraction based on Biometric Person Identification

Namdeo D. Kapale, Agarkar Balasaheb S.

Abstract

Biometric person identification is the process of identifying a person from physiological attributes such as fingerprints, eye structure, or facial characteristics. The procedure uses algorithms to analyze captured biometric data and compare it against recorded templates. The purpose of this research is to develop a novel multimodal image fusion model for feature extraction in biometric person identification. To build an intelligent biometric person identification framework, we propose a new Versatile Convolutional Neural Network (VCNN) technique for extracting salient features from multimodal data. The features extracted from each modality are fused and passed to a Multi-Kernel Support Vector Machine (MKSVM) classifier; by combining deep learning for feature extraction with ensemble learning for classification, this method provides precise biometric person identification. To train the proposed model, we assembled a dataset containing two biometric modalities: face images from the Faces94 database and fingerprint images from FVC2002. The raw data were pre-processed with histogram equalization, which improves image quality. In the evaluation phase, the proposed model's performance is assessed using various metrics, and the proposed technique is compared against several existing methods to gauge its effectiveness. The experimental findings demonstrate that the proposed model outperforms conventional approaches.
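To make the described pipeline concrete, the following is a minimal sketch of its stages (histogram-equalization pre-processing, per-modality feature extraction, feature-level fusion, and multi-kernel SVM classification). It is not the authors' implementation: the `extract_features` function is a placeholder standing in for the VCNN, whose architecture is not given in the abstract, and the MKSVM step is approximated by summing an RBF and a linear Gram matrix and fitting a precomputed-kernel SVC.

```python
# Hedged sketch of the abstract's pipeline, not the paper's actual code.
# Assumes face/fingerprint images are already loaded as grayscale NumPy arrays.
import numpy as np
import cv2
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel, linear_kernel


def preprocess(img):
    """Histogram equalization, the pre-processing step named in the abstract."""
    return cv2.equalizeHist(img.astype(np.uint8))


def extract_features(img, size=(32, 32)):
    """Placeholder feature extractor (resized pixel intensities) standing in
    for the Versatile CNN, which is not specified in the abstract."""
    return cv2.resize(img, size).flatten() / 255.0


def fuse(face_img, finger_img):
    """Feature-level fusion by concatenating the two modality vectors."""
    return np.concatenate([
        extract_features(preprocess(face_img)),
        extract_features(preprocess(finger_img)),
    ])


def train_mksvm(X, y, gamma=0.01):
    """Fit an SVM on a summed RBF + linear Gram matrix -- a simple
    stand-in for the MKSVM classifier described in the abstract."""
    K = rbf_kernel(X, X, gamma=gamma) + linear_kernel(X, X)
    clf = SVC(kernel="precomputed").fit(K, y)
    return clf


def predict_mksvm(clf, X_train, X_test, gamma=0.01):
    """Build the test-vs-train combined kernel and predict identities."""
    K_test = rbf_kernel(X_test, X_train, gamma=gamma) + linear_kernel(X_test, X_train)
    return clf.predict(K_test)
```

A usage pass would stack the fused vectors of all enrolled subjects into `X`, with their identity labels in `y`, call `train_mksvm(X, y)`, and then identify a probe pair via `predict_mksvm` on its fused vector.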
