Emteq Activity Recognition Challenge

 

Emteq aims to improve lives by providing people with actionable feedback about their behaviours. To do this, we need to improve our activity detection methods, which underpin our disease severity monitoring for Parkinson’s disease and depression. Previous activity recognition challenges have focused on smartphones or body-worn sensors. The aim of our Machine Learning Challenge is to predict activities of daily living from inertial measurements as would be derived from a head-mounted device such as AR glasses.

The goal is to recognise the user’s activities from inertial data. Daily activities include walking, watching a movie, sitting at a desk, using a computer, using a smartphone, and sitting on a sofa.

 

Background

For this dataset, four volunteers performed activities of daily living in a simulated home environment, with camera-based annotation of their activities.

The dataset comprises 3 hours of labelled training data from 3 volunteers and 3 hours of unlabelled test data from a different volunteer. The datasets will be available online as below.

To be eligible for the cash prize, participants must be registered for UbiComp 2019 or ISWC 2019. Emteq are offering a further prize of a pair of Emteq OCOsense activity recognition glasses (value £1500), which is available to both registered and non-registered entrants.

Submission of predictions on the test dataset

Competition entrants (the participants) will develop an algorithm pipeline that processes the sensor data, trains models, and outputs the recognised activities. The F1 score will be used as the metric to select the winner (specifically the macro-averaged F1 score; please see here for clarification).
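
For clarity, the macro-averaged F1 score is the unweighted mean of the per-class F1 scores, so every activity class counts equally regardless of how many samples it has. A minimal sketch (the label names are illustrative, not from the dataset):

```python
def f1_macro(y_true, y_pred):
    """Unweighted mean of per-class F1 scores over all observed labels."""
    labels = set(y_true) | set(y_pred)
    scores = []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        # F1 is the harmonic mean of precision and recall
        scores.append(2 * precision * recall / (precision + recall)
                      if (precision + recall) else 0.0)
    return sum(scores) / len(scores)

print(round(f1_macro(["walking", "walking", "sitting"],
                     ["walking", "sitting", "sitting"]), 3))  # → 0.667
```

The same value can be obtained with scikit-learn via `f1_score(y_true, y_pred, average="macro")`.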

Results should be submitted as a CSV file containing two columns: (1) timestamp and (2) activity. Each row should contain a timestamp and the predicted activity. Participants should also include a detailed description of the proposed classification system (6 to 8 pages) in IEEE format.
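
The two-column layout described above can be produced with Python’s standard `csv` module; the timestamp values and activity labels below are illustrative placeholders, not real dataset values:

```python
import csv

# Hypothetical (timestamp, predicted activity) pairs; only the two-column
# "timestamp,activity" layout is specified by the challenge.
predictions = [
    (1563264000.00, "walking"),
    (1563264000.02, "walking"),
    (1563264000.04, "sitting_at_desk"),
]

with open("predictions.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "activity"])  # required header: two columns
    writer.writerows(predictions)               # one prediction per row
```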

The participants’ predictions should be submitted by emailing claire.baert@emteq.net a link to the predictions file, hosted on a service such as Dropbox or Google Drive.

To be eligible for the prize, participants will need to share their code to enable formal evaluation by the prize committee.

License

This dataset is publicly available solely for non-profit and research purposes. For all other usage, please contact claire.baert@emteq.net to discuss licensing.

Citation request:

If you use this dataset, please cite:

Dataset for the emteq 2019 activity recognition challenge (www.emteq.net)

 

Dataset

Prize

  • 1st Prize: £2000

  • 2nd Prize: OCOsense smart glasses (value £1500)

  • 3rd Prize: Oculus Go VR headset

Deadlines

  • Registration via email by 16th August 2019

  • Challenge duration: 16th July – 30th August 2019

  • Submission deadline: 2nd September 2019

 

Results

Thank you all for entering the challenge. We will be making available another, more extensive activity recognition dataset in a few months, so stay tuned!


Four teams did not have predictions for all of the timestamps in the Test (Subject 4) file. It was expected that each team would provide a prediction for each timestamp, i.e., 437 013 rows/predictions, or 536 575 if the transitions were included.

The following 4 teams were missing some timestamps/predictions:

• TheHARNets - Gautham Krishna and Venkatesh Umaashankar

• Joanna Sendorek

• Evan Carter (missing 1 timestamp at the beginning of each activity segment)

• Hyeok Kwon (missing the first 1 second (50 timestamps) of each activity segment; presumably a sliding window was used, with 1 second of data buffered before predictions could start)

For these teams, as is commonly done in machine learning challenges, we included an “OTHER” class and assigned the missing predictions to it; since this class never matches the ground truth, it scores 0 and is counted as such in the final F1 score.
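
The fallback described above can be sketched as a simple lookup: any expected timestamp with no submitted prediction is filled with "OTHER", which can never match the ground truth and therefore contributes 0 to the macro F1. The timestamps and labels here are illustrative, not actual dataset values:

```python
# Hypothetical expected timestamps and a submission missing one of them
expected_timestamps = [0.00, 0.02, 0.04, 0.06]
submitted = {0.00: "walking", 0.02: "walking", 0.06: "sitting"}

# Fill every gap with the "OTHER" class before scoring
completed = [submitted.get(ts, "OTHER") for ts in expected_timestamps]
print(completed)  # → ['walking', 'walking', 'OTHER', 'sitting']
```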

 

Registration

The competition has now closed. We will announce the winner on 10th September. To receive updates about future machine learning challenges, sign up using the form below.
