Unconstrained Ear Recognition Challenge

! SUBMIT YOUR RESULTS HERE !

Welcome to the Unconstrained Ear Recognition Challenge!

We are proud to announce the 1st Unconstrained Ear Recognition Challenge (UERC), organized in the scope of the IEEE/IAPR International Joint Conference on Biometrics (IJCB) 2017. The goal of the challenge is to advance the state of technology in the field of automatic ear recognition, to provide participants with a challenging research problem, and to introduce a benchmark dataset and protocol for assessing the latest techniques, models, and algorithms related to automatic ear recognition.

The results of UERC will be published in an IJCB conference paper co-authored jointly by all challenge participants. Additionally, a special issue on Unconstrained Ear Recognition will be organized in IET Biometrics, and the teams with the most innovative approaches in UERC will be invited to submit papers describing their work to the special issue.

If you are interested in participating in UERC, please register below; you will be added to our mailing list and will receive detailed instructions regarding the challenge after the UERC kick-off.

REGISTRATION FORM

Motivation

Despite numerous application possibilities in security, surveillance, forensics, criminal investigations and border control, existing research in ear recognition has seldom gone beyond laboratory settings. This can mostly be attributed to the enormous appearance variability of ear images captured in unconstrained settings. However, due to recent advances in computer vision, machine learning and artificial intelligence (e.g., deep learning), many recognition problems are now solvable (at least to some extent) in unconstrained settings, and many modalities that were once too complex to use in real-life situations are becoming a viable source of data for person recognition.

The Unconstrained Ear Recognition Challenge (UERC) will build on the advances outlined above and address the problem of ear recognition “in the wild”. While many competitions and challenges have been organized in the scope of past biometrics-oriented conferences (ICB, BTAS, FG, etc.) for various biometric modalities and numerous problems, ear recognition has not yet been the subject of a group evaluation, making UERC a unique opportunity with high potential impact.

The Dataset, Protocol and Performance Metrics

The challenge will be held on an extended version of the Annotated Web Ears (AWE) dataset, containing a total of 9,500 ear images. The images were collected with a semi-automatic procedure that involved web crawlers and a subsequent manual inspection. Because the AWE images were not gathered in controlled, laboratory-like conditions, they represent the variability of ear appearance better than existing ear datasets; by the same token, they make the problem of automatic ear recognition significantly harder. A few example images from the extended AWE dataset are shown below.

A more in-depth description of the images, the acquisition procedure, the dataset characteristics and other information on the AWE dataset is available in the Neurocomputing paper.

UERC will use three image datasets:

  • part A: the main dataset of 3,300 ear images belonging to 330 distinct identities (10 images per subject) that will be used for the recognition experiments (training and testing),
  • part B: a set of 804 ear images of 16 subjects (with a variable number of images per subject) that will be used for the recognition experiments (training only),
  • part C: an additional set of 7,700 ear images of around 3,360 identities that will be used to test the scalability of the submitted algorithms.

The 3,300 images of the main dataset come with various annotations, such as the level of occlusion, rotation (yaw, roll and pitch angles), presence of accessories, gender, and so on. This information will also be available during training and can be exploited to build specialized recognition techniques.

The 3,300 images of the main part of the dataset will be split into a training set of 1,500 images (belonging to 150 subjects) and a test set of 1,800 images (belonging to 180 subjects). The identities in the training and test sets will be disjoint. The purpose of the training set is to train recognition models and to set any open hyper-parameters, while the test set is reserved for the final evaluation. The test set MUST NOT be used to learn or fine-tune any parameters of the recognition model. The organizers reserve the right to exclude a team from the competition (and consequently from the jointly authored IJCB conference paper) if the final result analysis suggests that the test images were also used for training.

The train and test sets will be composed as follows:

  • Train (2,304 images of 166 subjects): the 1,500 training images (150 subjects) from part A and all 804 images (16 subjects) from part B.
  • Test (9,500 images of 3,540 subjects): the 1,800 test images (180 subjects) from part A and all 7,700 impostor images (around 3,360 subjects) from part C.

UERC will test the recognition performance of all submitted algorithms through identification experiments. Participants will have to compare each image listed in the probe.txt file to each image listed in the gallery.txt file and return the resulting similarity matrix to the organizers for scoring. Each participant is thus required to generate a 7,442 x 9,500 similarity-score matrix for each submitted system.
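To make the expected output concrete, below is a minimal Python sketch that builds such a probe-by-gallery similarity matrix. The toy histogram descriptor, the image paths and the CSV output format are illustrative assumptions only; the starter kit defines the exact file conventions.

    # Minimal sketch: score every probe image against every gallery image.
    # Assumes probe.txt and gallery.txt list one image path per line and
    # that a plain CSV dump is an acceptable matrix format (assumptions).
    import numpy as np
    from PIL import Image

    def extract_features(path):
        # Toy descriptor: an L2-normalized grayscale histogram. A real
        # submission would use a stronger descriptor or a CNN embedding.
        img = np.asarray(Image.open(path).convert("L"))
        hist, _ = np.histogram(img, bins=64, range=(0, 255))
        hist = hist.astype(np.float64)
        return hist / (np.linalg.norm(hist) + 1e-12)

    probes = [l.strip() for l in open("probe.txt") if l.strip()]
    gallery = [l.strip() for l in open("gallery.txt") if l.strip()]
    P = np.stack([extract_features(p) for p in probes])
    G = np.stack([extract_features(g) for g in gallery])
    # Rows correspond to probes, columns to gallery images (7,442 x 9,500);
    # with unit-length features the dot product is the cosine similarity.
    np.savetxt("similarity_matrix.csv", P @ G.T, delimiter=",")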

The number of approaches that each participant will be allowed to submit is not limited. However, only approaches accompanied by at least a short description (written by the participants) will be included in the IJCB summary paper of UERC 2017. Submissions are possible via a simple web interface accessible HERE.

The submitted similarity matrices will be scored by the organizers. Rank-1 recognition rates, complete CMC curves and the Area Under the CMC Curve (AUC) will be computed and reported for each submitted algorithm. The AUC will be used to rank the participating algorithms. Based on the annotations available with the extended AWE dataset, we will also compute results for subsets of the data, e.g., focusing only on certain ranges of head rotations, presence/absence of accessories, etc.
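For reference, the sketch below shows how the rank-1 rate and the (normalized) area under the CMC curve can be computed from such a matrix. The label files are hypothetical, and the code assumes a closed-set scenario in which every probe identity appears in the gallery; the organizers' own scoring code remains authoritative.

    # Sketch of CMC/AUC scoring from a saved similarity matrix.
    import numpy as np

    def cmc_curve(sim, probe_labels, gallery_labels):
        # sim[i, j]: similarity of probe i to gallery image j.
        order = np.argsort(-sim, axis=1)        # best match first
        ranked = gallery_labels[order]          # gallery labels sorted by score
        hits = ranked == probe_labels[:, None]  # correct-identity mask
        first_hit = hits.argmax(axis=1)         # rank (0-based) of first correct match
        counts = np.bincount(first_hit, minlength=sim.shape[1])
        return np.cumsum(counts) / sim.shape[0]

    sim = np.loadtxt("similarity_matrix.csv", delimiter=",")
    probe_labels = np.loadtxt("probe_labels.txt", dtype=int)      # hypothetical file
    gallery_labels = np.loadtxt("gallery_labels.txt", dtype=int)  # hypothetical file
    cmc = cmc_curve(sim, probe_labels, gallery_labels)
    print("Rank-1 recognition rate:", cmc[0])
    print("Normalized AUC of the CMC curve:", cmc.mean())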

Starter Kit

We will provide all participants with a Matlab starter kit that generates a matrix of comparisons for each sample in the dataset and computes all relevant performance metrics. The starter kit will help participants to quickly start with the research work and to generate score matrices compliant with our scoring procedures. As baselines, we will make a number of descriptor-based approaches (using SIFTs, POEMs, LBPs, etc.) available to the participants, as well as a deep-learning approach.
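To give a flavor of such a descriptor-based baseline, here is a minimal LBP sketch (in Python rather than Matlab, purely for illustration; the image size, grid and LBP parameters are arbitrary choices, not those of the starter kit):

    # Uniform LBP histograms pooled over a spatial grid, a common
    # descriptor-based baseline for ear recognition.
    import numpy as np
    from PIL import Image
    from skimage.feature import local_binary_pattern

    def lbp_descriptor(path, radius=1, n_points=8, grid=(4, 4)):
        img = np.asarray(Image.open(path).convert("L").resize((96, 96)))
        codes = local_binary_pattern(img, n_points, radius, method="uniform")
        n_bins = n_points + 2                   # uniform patterns + "non-uniform" bin
        blocks = []
        for rows in np.array_split(codes, grid[0], axis=0):
            for cell in np.array_split(rows, grid[1], axis=1):
                h, _ = np.histogram(cell, bins=n_bins, range=(0, n_bins))
                blocks.append(h)
        feat = np.concatenate(blocks).astype(np.float64)
        return feat / (np.linalg.norm(feat) + 1e-12)

    # Two ear images can then be compared with, e.g., the cosine similarity:
    # score = lbp_descriptor("ear_a.png") @ lbp_descriptor("ear_b.png")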

Provisional Timeline

  • 2017/02/01: UERC 2017 kick-off: starter kit and datasets made available
  • 2017/04/17: Initial results and a two-sentence description of the approach (for better planning of the paper structure)
  • 2017/04/24: Final results submission
  • 2017/05/01: Final submission of the algorithm descriptions
  • 2017/05/20: Submission of UERC summary paper to IJCB 2017

The Call for Participants PDF is available here.

The IET Call for Papers PDF is available here.

If you have any questions, suggestions or would like to participate in the competition, feel free to contact ziga.emersic@fri.uni-lj.si.