Welcome to the Unconstrained Ear Recognition Challenge 2019!
The 2nd Unconstrained Ear Recognition Challenge (UERC) will be organized in the scope of the IAPR International Conference on Biometrics (ICB) 2019. The goal of the challenge is to further advance the state of the art in the field of automatic ear recognition, to provide participants with a challenging research problem, and to introduce a benchmark dataset and protocol for assessing the latest techniques, models, and algorithms related to automatic ear recognition.
The results of UERC 2019 will be published in an ICB conference paper authored jointly by all participants of the challenge.
The submission will be open until 2019/01/23. If you have any questions or suggestions, feel free to contact firstname.lastname@example.org.
Ear recognition is an active area of research within the biometrics community. While work in this field has long focused on constrained, laboratory-like settings, recent approaches increasingly target data acquired in unconstrained conditions, and many techniques focusing on such so-called “in-the-wild” data have been presented in recent years. To promote research in these “in-the-wild” settings, the Unconstrained Ear Recognition Challenge (UERC) 2019 will bring together researchers working in the field of ear recognition and benchmark existing algorithms on a common dataset and under a predefined experimental protocol. UERC 2019 builds on the previous challenge, UERC 2017 (available on IEEE Xplore and on arXiv.org), organized in the scope of the 2017 International Joint Conference on Biometrics (IJCB), and uses the same dataset and protocol, making it possible to examine and directly compare the progress made in the field since 2017.
The results of the challenge will be published in a summary paper authored jointly by all participants of the challenge.
The challenge will be held on an extended version of the Annotated Web Ears (AWE) dataset, containing a total of 9,500 ear images - see the UERC 2017 summary paper for details (available on IEEE Xplore and on arXiv.org). The images were collected with a semi-automatic procedure involving web-crawlers and a subsequent manual inspection. Because the AWE images were not gathered in controlled, laboratory-like conditions, they better represent the variability of ear appearance than existing datasets of ear images. However, this also makes the problem of automatic ear recognition significantly harder. A few example images from the extended AWE dataset are shown below.
A more in-depth description of the images, the acquisition procedure, the dataset characteristics, and other information on the AWE dataset is available in the Neurocomputing paper.
UERC 2019 will use three image datasets:
The 3,300 images of the main dataset come with various annotations, such as the level of occlusion, rotation (yaw, roll and pitch angles), presence of accessories, gender, and side. This information is also made available during training and can be exploited to build specialized recognition techniques.
The 3,300 images of the main part of the dataset were split into a training set of 1,500 images (belonging to 150 subjects) and a test set of 1,800 images (belonging to 180 subjects). The identities in the training and test sets are disjoint. The purpose of the training set is to train recognition models and set any open hyper-parameters, while the test set is reserved for the final evaluation. The test set MUST NOT be used to learn or fine-tune any parameters of the recognition model. The organizers reserve the right to exclude a team from the competition (and consequently from the jointly authored ICB conference paper) if the final result analysis suggests that the test images were also used for training.
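The subject-disjoint split described above can be sketched as follows (a minimal Python sketch under assumed data structures; the function name, the `(image, subject_id)` pair format, and the toy labels are illustrative, not part of the official protocol or starter kit):

```python
import random

def subject_disjoint_split(image_labels, n_train_subjects, seed=0):
    """Split images so that no subject appears in both the training
    and the test set. image_labels: list of (image_name, subject_id)."""
    subjects = sorted({sid for _, sid in image_labels})
    rng = random.Random(seed)
    rng.shuffle(subjects)
    train_subjects = set(subjects[:n_train_subjects])
    train = [img for img, sid in image_labels if sid in train_subjects]
    test = [img for img, sid in image_labels if sid not in train_subjects]
    return train, test

# Toy example: 6 images of 3 subjects, 2 subjects used for training.
labels = [("a.png", 1), ("b.png", 1), ("c.png", 2),
          ("d.png", 2), ("e.png", 3), ("f.png", 3)]
train, test = subject_disjoint_split(labels, n_train_subjects=2)
```

Splitting by subject rather than by image is what guarantees disjoint identities: every image of a given subject lands entirely in one of the two sets.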
The training and test sets were split as follows:
UERC 2019 will assess the recognition performance of all submitted algorithms through identification experiments. Participants will have to compare each image listed in the probe.txt file to each image listed in the gallery.txt file and return the resulting similarity score matrix to the organizers for scoring. Thus, each participant will be required to generate a 7,442 x 9,500 matrix for each submitted system.
The number of approaches that each participant is allowed to submit is not limited. However, only approaches with at least a short description (written by the participants) and some sort of original contribution will be included in the ICB summary paper of UERC 2019.
The submitted similarity matrices will be scored by the organizers. Rank-1 recognition rates, complete CMC curves, and the area under the CMC curve (AUC) will be computed and reported for each submitted algorithm. The AUC will be used to rank the participating algorithms.
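The reported metrics can be computed from a similarity matrix as in the following sketch (a minimal Python/NumPy illustration, not the organizers' scoring code; the toy scores and identities are made up, and the sketch assumes each probe's true identity appears in the gallery):

```python
import numpy as np

def cmc_curve(similarity, probe_ids, gallery_ids):
    """Cumulative Match Characteristic from a probe-by-gallery score
    matrix. Higher scores mean more similar; assumes every probe's
    true identity is present among the gallery identities."""
    n_probes, n_gallery = similarity.shape
    cmc = np.zeros(n_gallery)
    for i in range(n_probes):
        order = np.argsort(-similarity[i])            # best match first
        ranked_ids = np.asarray(gallery_ids)[order]
        first_hit = np.argmax(ranked_ids == probe_ids[i])  # rank of true ID
        cmc[first_hit:] += 1                          # hit at this rank or later
    return cmc / n_probes

# Toy example: 3 probes scored against 4 gallery images.
sim = np.array([[0.9, 0.1, 0.3, 0.2],
                [0.2, 0.8, 0.1, 0.4],
                [0.1, 0.2, 0.3, 0.7]])
probe_ids = [1, 2, 3]
gallery_ids = [1, 2, 4, 3]

cmc = cmc_curve(sim, probe_ids, gallery_ids)
rank1 = cmc[0]                                        # rank-1 recognition rate
# Trapezoidal area under the CMC curve, normalized to [0, 1].
auc = (cmc[:-1] + cmc[1:]).sum() / (2 * (len(gallery_ids) - 1))
```

The rank-1 rate is simply the first point of the CMC curve, and the AUC summarizes the whole curve in a single number, which is why it is used for ranking.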
Sequestered Data: The participants behind the three winning recognition models on the main UERC data (described above) will be asked to provide their source code, so that the organizers can run independent experiments on a sequestered dataset. The goal of this part of the challenge is to test the generalization abilities of the best submitted algorithms on data that may differ slightly in characteristics from the data used in the main part of the competition.
We provide all participants with a Matlab starter kit that generates a matrix of comparisons for each sample in the dataset and computes all relevant performance metrics. The starter kit will help participants quickly get started with the research work and generate score matrices compliant with our scoring procedures. As baselines, we make a number of descriptor-based approaches (using SIFTs, POEMs, LBPs, etc.) available to the participants, as well as a deep-learning approach.
All scripts that were used to generate results for the UERC 2017 summary paper will also be included in the starter kit.
If you have any questions or suggestions, or would like to participate in the competition, feel free to contact email@example.com.