Multi-Level Feature Abstraction from Convolutional Neural Networks for Multimodal Biometric Identification
In this paper, we propose a deep multimodal fusion network that fuses multiple modalities (face, iris, and fingerprint) for person identification. The proposed deep multimodal fusion algorithm consists of multiple streams of modality-specific Convolutional Neural Networks (CNNs), which are jointly optimized at multiple feature abstraction levels. Multiple features are extracted at several different convolutional layers of each modality-specific CNN for joint feature fusion, optimization, and classification. Features extracted at different convolutional layers of a modality-specific CNN represent the input at several different levels of abstraction. We demonstrate that efficient multimodal classification can be accomplished with a significant reduction in the number of network parameters by exploiting these multi-level abstract representations extracted from all the modality-specific CNNs. We demonstrate an increase in multimodal person identification performance by utilizing the proposed multi-level feature abstractions in our multimodal fusion, rather than using only the features from the last layer of each modality-specific CNN. We show that our deep multimodal CNNs, with fusion at several different feature abstraction levels, significantly outperform the unimodal representation accuracy. We also demonstrate that the joint optimization of all the modality-specific CNNs outperforms score-level and decision-level fusion of independently optimized CNNs.
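The abstract describes the core design: one CNN stream per modality, feature taps at more than one convolutional depth within each stream, and a single jointly trained classifier over the concatenated features. The sketch below illustrates that structure in PyTorch; the layer sizes, input resolutions, class names (ModalityStream, MultiLevelFusionNet), and the choice of two abstraction levels per stream are illustrative assumptions, not the paper's actual architecture or hyperparameters.

```python
# Minimal sketch (PyTorch assumed) of multi-stream, multi-level feature fusion:
# each modality has its own small CNN, features are tapped at two convolutional
# depths per stream, concatenated across modalities and depths, and classified
# by a joint head so that all streams are optimized together.
import torch
import torch.nn as nn


class ModalityStream(nn.Module):
    """One modality-specific CNN; returns features from two abstraction levels."""

    def __init__(self, in_channels: int):
        super().__init__()
        self.block1 = nn.Sequential(                      # early, low-level features
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.block2 = nn.Sequential(                      # later, high-level features
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.pool1 = nn.AdaptiveAvgPool2d(1)              # summarize mid-level maps

    def forward(self, x):
        f1 = self.block1(x)
        f2 = self.block2(f1)
        # Flatten both abstraction levels to vectors and keep them both.
        return torch.cat([self.pool1(f1).flatten(1), f2.flatten(1)], dim=1)


class MultiLevelFusionNet(nn.Module):
    """Joint fusion of face, iris, and fingerprint streams at multiple levels."""

    def __init__(self, num_classes: int = 100):
        super().__init__()
        self.face = ModalityStream(3)
        self.iris = ModalityStream(1)
        self.finger = ModalityStream(1)
        fused_dim = 3 * (32 + 64)                         # per stream: 32 + 64 features
        self.classifier = nn.Sequential(
            nn.Linear(fused_dim, 256), nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, face, iris, finger):
        fused = torch.cat(
            [self.face(face), self.iris(iris), self.finger(finger)], dim=1
        )
        return self.classifier(fused)


if __name__ == "__main__":
    net = MultiLevelFusionNet(num_classes=10)
    logits = net(
        torch.randn(2, 3, 64, 64),   # face crops
        torch.randn(2, 1, 64, 64),   # iris images
        torch.randn(2, 1, 64, 64),   # fingerprint images
    )
    print(logits.shape)              # torch.Size([2, 10])
```

Because all three streams feed a single loss, one backward pass updates every modality-specific CNN together; this is the joint optimization the abstract contrasts with score-level and decision-level fusion of independently trained networks.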
Authors

Sobhan Soleymani
Ali Dabouei
Hadi Kazemi
Jeremy Dawson
Nasser M. Nasrabadi