Welcome to the Annotated Web Ears website!

We organized the first Unconstrained Ear Recognition Challenge (UERC), held as part of IJCB 2017, around the AWE dataset. For more information, visit the UERC website.

Annotated Web Ears (AWE) is a dataset of ear images gathered from the web; in its current form it contains 1,000 ear images of 100 distinct subjects. The dataset was collected to support research on unconstrained ear recognition and is made publicly available to the community (for research purposes only). All images in the AWE dataset are labeled with yaw, roll, and pitch angles, ear occlusion, presence of accessories, ethnicity, gender, and identity.
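The per-image annotations listed above can be thought of as a simple record per image. The sketch below is a hypothetical Python representation; the field names, value types, and ranges are illustrative assumptions, not the dataset's actual on-disk format (the toolbox itself is Matlab-based):

```python
from dataclasses import dataclass

# Hypothetical record mirroring the AWE annotation attributes listed above.
# Field names and value conventions are assumptions for illustration only.
@dataclass
class AWEAnnotation:
    subject_id: int      # identity label, e.g. 1..100
    yaw: float           # head yaw angle
    roll: float          # head roll angle
    pitch: float         # head pitch angle
    occlusion: bool      # is the ear partially occluded?
    accessories: bool    # earrings, headphones, etc.
    ethnicity: str
    gender: str

def is_clean_sample(a: AWEAnnotation) -> bool:
    """True for samples with neither occlusion nor accessories."""
    return not a.occlusion and not a.accessories

sample = AWEAnnotation(subject_id=1, yaw=10.0, roll=0.0, pitch=-5.0,
                       occlusion=False, accessories=False,
                       ethnicity="caucasian", gender="m")
print(is_clean_sample(sample))  # → True
```

A filter like `is_clean_sample` is one way such labels are typically used, e.g. to restrict experiments to unoccluded ears.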

The dataset comes with a Matlab toolbox dedicated to research in ear recognition. The AWE toolbox implements several state-of-the-art ear recognition techniques and allows for rapid experimentation with the AWE dataset as well as other available datasets.

Both the dataset and the toolbox are free for non-commercial use and can be downloaded from our download page. Please cite the following paper when using either the toolbox or the dataset in your work:

Ž. Emeršič, V. Struc, and P. Peer: "Ear Recognition: More than a Survey", Neurocomputing, 2017
Download the paper here and download the BibTeX here.

If you have any questions, feel free to contact the corresponding author.

Annotated Web Ears Dataset – AWE Dataset

The dataset contains 1,000 images of 100 subjects. The images were collected from the web using a semi-automatic procedure.

Get a copy of the AWE dataset from our download page.
Sample images from the Annotated Web Ears Dataset.

Annotated Web Ears Toolbox – AWE Toolbox

The toolbox is written in Matlab and requires a working MEX setup to run. It already includes the Annotated Web Ears Dataset, so it should work out of the box. If you encounter problems, please have a look at the help section.
Get a copy of the AWE toolbox from our download page.
Diagram showing the workflow of the AWE toolbox.