PAKDD 2019 Challenge

Overview

The winners of the competition are:

First Place   deepsmart (DeepBlueAI)
Second Place   baomengjiao (ML Intelligence)
Third Place   automan (Meta_Learners)

The competition is hosted on CodaLab; please follow the link to participate:

https://competitions.codalab.org/competitions/20675

Data

Data Format

For each instance, we have the following 4 types of features, separated by spaces in our instance files:
Categorical Feature: an integer describing which category the instance belongs to.
Numerical Feature: a real value.
Multi-value Categorical Feature: a set of integers, separated by commas. The size of the set is not fixed and can differ across instances. Examples: topics of an article, words in a title, items bought by a user, and so on.
Time Feature: an integer describing time information.
Note: Categorical/Multi-value Categorical features with a large number of distinct values, whose frequencies follow a power law, might be included.
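The format above can be read with a short parser. The sketch below is a hypothetical illustration, assuming the counts of each feature type are known from the dataset's metadata; the function name and argument layout are not part of the official starter kit.

```python
# Hypothetical parser for one instance line; assumes feature-type counts
# (n_cat, n_num, n_mvc, n_time) are known from the dataset description.
def parse_instance(line, n_cat, n_num, n_mvc, n_time):
    """Split a space-separated instance into typed feature groups."""
    fields = line.rstrip("\n").split(" ")
    assert len(fields) == n_cat + n_num + n_mvc + n_time
    i = 0
    cat = [int(v) for v in fields[i:i + n_cat]]; i += n_cat
    num = [float(v) for v in fields[i:i + n_num]]; i += n_num
    # Each multi-value field is a comma-separated set of integers.
    mvc = [set(int(t) for t in v.split(",")) for v in fields[i:i + n_mvc]]; i += n_mvc
    time = [int(v) for v in fields[i:]]
    return cat, num, mvc, time
```

For example, a line with one feature of each type, such as "3 1.5 7,8 100", parses into the categorical value 3, the numerical value 1.5, the multi-value set {7, 8}, and the time value 100.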

Feedback Phase Public Datasets

5 public datasets are released. For each, the first 5 batches are public; the remaining 5 batches are kept private.

DATASET   BUDGET   #CAT   #NUM   #MVC   #TIME   #FEATURE   #INSTANCE
A         NaN      51     23     6      2       82         ~10 Million
B         NaN      17     7      1      0       25         ~1.9 Million
C         NaN      44     20     9      6       79         ~2 Million
D         NaN      17     54     1      4       76         ~1.5 Million
E         NaN      25     6      1      2       34         ~17 Million

Budget = time budget (seconds). #Cat = number of categorical features. #Num = number of numerical features. #MVC = number of multi-value categorical features. #Time = number of time features. #Feature = total number of features. #Instance = total number of instances over all 10 batches.

AutoML Phase Private Datasets

5 private datasets to be released.

Basic Tips

Here, we provide some basic tips for dealing with large datasets:
• Subsampling/multi-fidelity AutoML approaches might be needed for these datasets.
• Incremental learning might be needed for these datasets.
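As a concrete illustration of the subsampling tip, here is a minimal reservoir-sampling sketch that draws a fixed-size uniform sample from a large batch in one pass, without holding the batch in memory. The sample size k and the item stream are assumptions for the example, not part of the competition API.

```python
import random

def reservoir_sample(stream, k, seed=0):
    """One-pass uniform sample of k items from an iterable of unknown length."""
    rng = random.Random(seed)  # seeded for reproducibility
    sample = []
    for n, item in enumerate(stream):
        if n < k:
            sample.append(item)          # fill the reservoir first
        else:
            j = rng.randint(0, n)        # replace with decreasing probability
            if j < k:
                sample[j] = item
    return sample
```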
Some basic tips for handling difficult features:
• One-hot encoding for Categorical features.
• Hashing tricks might be used for Categorical and Multi-value Categorical features.
• Normalization tricks for Numerical features.
• If a Categorical/Multi-value Categorical feature has too many distinct values, then instead of hashing tricks, one might maintain moving frequencies of the values and keep only the most frequent ones.
• For Time features, one might subtract a fixed value from the original feature. When multiple Time features are present, one might construct new features from the differences between them.
• For missing features, i.e., NaN, one might set them to a default value or replace them with another valid value indicating that they are missing.
Note: The competition focuses on automatically coping with concept drift. The processing of features might need to be adaptive over time. Automatic feature generation and selection methods, or deep learning approaches, might be important for these datasets.
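Two of the tips above can be sketched in a few lines. The bucket count, the multiplicative hash constant, and the time origin below are illustrative assumptions; in practice one would tune the bucket count per dataset and use a stronger hash such as MurmurHash.

```python
N_BUCKETS = 1 << 20  # assumed hashed feature space for (multi-value) categoricals

def hash_bucket(value, seed=0):
    # Simple multiplicative hash; Python's built-in hash(int) is the int
    # itself, so an explicit mixing constant is used instead.
    return (value * 2654435761 + seed) % N_BUCKETS

def encode_mvc(values):
    """Bag-of-buckets (sparse bucket -> count) for one multi-value set."""
    counts = {}
    for v in values:
        b = hash_bucket(v)
        counts[b] = counts.get(b, 0) + 1
    return counts

def time_diffs(times, origin):
    """Shift Time features by a fixed origin and add pairwise differences."""
    shifted = [t - origin for t in times]
    pairwise = [times[j] - times[i]
                for i in range(len(times)) for j in range(i + 1, len(times))]
    return shifted + pairwise
```

For instance, three time stamps [100, 130, 160] with origin 100 yield the shifted values [0, 30, 60] plus the pairwise gaps [30, 60, 30].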

AutoML Phase Prizes
1st Place: Prize + Travel Grant
2nd Place: Prize + Travel Grant
3rd Place: Prize + Travel Grant

Sponsored by 4Paradigm, ChaLearn, and Amazon.

Rules

Evaluation

The evaluation scheme is depicted in the image below. Recall that this is a code-submission competition: participants must prepare an AutoML program and upload it to the challenge platform. The code will be executed autonomously on the compute workers and allowed to run for a maximum amount of time; code exceeding this time will be penalized by setting the dataset's AUC to 0. Unlike previous challenges, this competition evaluates the lifelong-learning capabilities of AutoML solutions, so an appropriate protocol has been designed.

The datasets are chronologically split into 10 batches; each batch represents a stage of the lifelong evaluation scenario. Code submitted by participants will use the first batch to generate a model, which will then be used to predict labels for the first test batch (i.e., the second batch). The performance on this test batch will be recorded. After this, the labels of the first test batch will be made available to the program, which may use them to improve its initial model and make predictions for the subsequent test batch. The process continues until all of the test batches have been evaluated. We call this the 1/9 split evaluation: the first batch is used for initial training, and the 9 successive batches are used for evaluation.

Each dataset will be split into 10 batches and the data will be progressively presented to the participants’ AutoML programs:

STEP#  TRAINING DATA                           TEST DATA
1      LABELED BATCH_0                         UNLABELED BATCH_1
2      LABELED (BATCH_0 + BATCH_1)             UNLABELED BATCH_2
3      LABELED (BATCH_0 + BATCH_1 + BATCH_2)   UNLABELED BATCH_3
...    ...                                     ...
9      EVERYTHING LABELED UP TO BATCH_8        UNLABELED BATCH_9

• For each dataset, the first batch will be released as an initial training set, and the evaluation will be performed on the remaining 9 batches.
• For each batch of each dataset, the evaluation will consist of computing the area under the ROC curve (AUC).
• For each dataset, we will take the average of the AUC ranks over all the successive 9 batches of the dataset. A ranking will be performed according to this metric.
• For the final score, we will use the average rank over all datasets.
• There will be a time budget for each dataset. Code exceeding the maximum execution time will be aborted and assigned an AUC of 0.
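The protocol above can be summarized as a loop. The sketch below is a hedged illustration of the 1/9 split evaluation; the Model interface (fit/predict/update), the batch containers, and auc_score are placeholders, not the official challenge API.

```python
# Hedged sketch of the 1/9 lifelong evaluation protocol described above.
# `model` is any object with fit/predict/update; `auc_score(y, p)` is a
# placeholder for the AUC computation (e.g. sklearn.metrics.roc_auc_score).
def lifelong_evaluate(model, batches, labels, auc_score):
    """batches: list of 10 feature batches; labels: matching label arrays."""
    aucs = []
    model.fit(batches[0], labels[0])          # initial training on batch 0
    for t in range(1, len(batches)):
        preds = model.predict(batches[t])     # predict before labels are seen
        aucs.append(auc_score(labels[t], preds))
        model.update(batches[t], labels[t])   # labels revealed afterwards
    return aucs                               # 9 per-batch AUC scores
```

Per dataset, the final metric averages the AUC ranks over these 9 test batches, and the overall score averages the per-dataset ranks.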

Note:
• The 1 / 9 split evaluation will be used in both Feedback and AutoML phases.
• However, all datasets in the Feedback phase consist of 5 released batches plus 5 private batches. Although only 5 batches are released in the Feedback phase, the evaluation is computed over all 9 batches after the first one, exactly as in the AutoML phase.
The rationale:
• During the Feedback phase, participants are provided with 5 released batches so that they can develop their methods at home; when they submit their code, it is evaluated identically in both phases.
• During the AutoML phase, we need as many test batches as possible to assess the significance of the adaptation.
• In the Feedback phase, the test on the first 4 batches will be biased, since their labels have been released to participants. We will show detailed results on all batches, so any bias will be evident.

Terms & Conditions

• The competition will be run in the CodaLab competition platform
• The competition is open to all interested researchers, specialists, and students. Members of the Contest Organizing Committee cannot participate.
• Participants may submit solutions as teams made up of one or more persons.
• Each team needs to designate a leader responsible for communication with the Organizers.
• One person can only be part of one team.
• Registration will close two weeks before the start of the blind phase.
• The winner of the competition is chosen on the basis of the final evaluation results. In the case of ties in the evaluation scores, the time of submission will be taken into account.
• Each team is obligated to provide a short report (fact sheet) describing their final solution.
• By enrolling in this competition you grant the organizers the right to process your submissions for the purposes of evaluation and post-competition research.

Dissemination

• Top-ranked participants will be invited to attend a workshop co-located with PAKDD 2019 to describe their methods and findings. Winners of prizes are expected to attend.
• The challenge is part of the competition program of the PAKDD 2019 conference. Organizers are making arrangements for the possible publication of a book chapter or article written jointly by organizers and the participants with the best solutions.

Timeline

25 Dec 2018: Beginning of the competition, release of development data.

7 Mar 2019: End of the feedback phase.

15 Mar 2019: End of the testing phase.

20 Mar 2019: End of the AutoML phase.

14 Apr 2019: Beginning of PAKDD conference.

About

Sponsors

Committee

In case of any questions please send an email to hugo.jair@gmail.com

Isabelle Guyon, UPSud/INRIA Univ. Paris-Saclay, France & ChaLearn, USA (Coordinator, Platform Administrator, Advisor), guyon@clopinet.com

Quanming Yao, 4Paradigm Inc., Beijing, China (Coordinator, Baseline Provider, Data Provider), yaoquanming@4paradigm.com

Ling Yue, 4Paradigm Inc., Beijing, China (Coordinator, Baseline Provider, Platform Administrator), yueling@4paradigm.com

Mengshuo Wang, 4Paradigm Inc., Beijing, China (Baseline Provider, Data Provider), wangmengshuo@4paradigm.com

Wei-Wei Tu, 4Paradigm Inc., Beijing, China (Coordinator, Baseline Provider, Data Provider), tuww.cn@gmail.com

Hugo Jair Escalante, INAOE (Mexico), ChaLearn (USA), (Platform Administrator, Coordinator), hugo.jair@gmail.com

Evelyne Viegas, Microsoft Research, (Coordinator, Advisor), evelynev@microsoft.com

Organization Institutes

About AutoML Challenge

Previous AutoML Challenges: The First AutoML Challenge and The Second AutoML Challenge.

AutoML workshops can be found here.

Microsoft research blog post on AutoML Challenge can be found here.

KDD Nuggets post on AutoML Challenge can be found here.

I. Guyon et al. A Brief Review of the ChaLearn AutoML Challenge: Any-time Any-dataset Learning Without Human Intervention. ICML W 2016. link

I. Guyon et al. Design of the 2015 ChaLearn AutoML challenge. IJCNN 2015. link

Springer Series on Challenges in Machine Learning. link

Q. Yao et al. Taking Human out of Learning Applications: A Survey on Automated Machine Learning. (a comprehensive survey on AutoML). link

About 4Paradigm Inc. (Main Sponsor, Baseline Provider & Data Provider, Coordinator)

Founded in early 2015, 4Paradigm (https://www.4paradigm.com/) is one of the world's leading AI technology and service providers for industrial applications. 4Paradigm's flagship product, the AI Prophet, is an AI development platform that enables enterprises to effortlessly build their own AI applications and thereby significantly increase their operational efficiency. Using the AI Prophet, a company can develop a data-driven "AI Core System", which can largely be regarded as a second core system next to the traditional transaction-oriented Core Banking System (IBM Mainframe) often found in banks. Beyond this, 4Paradigm has successfully developed more than 100 AI solutions for use in settings such as finance, telecommunication and Internet applications. These solutions include, but are not limited to, smart pricing, real-time anti-fraud systems, precision marketing, personalized recommendation and more. While 4Paradigm can set up an entirely new paradigm for how an organization uses its data, its scope of services does not stop there. 4Paradigm combines state-of-the-art machine learning technologies with practical experience to bring together a team of experts ranging from scientists to architects. This team has successfully built China's largest machine learning system and the world's first commercial deep learning system. With its core team pioneering research on transfer learning, 4Paradigm takes the lead in this area and, as a result, has drawn great attention from worldwide tech giants.

About ChaLearn & CodaLab (Platform Provider, Coordinator)

ChaLearn (http://chalearn.org) is a non-profit organization with vast experience in the organization of academic challenges. ChaLearn is interested in all aspects of challenge organization, including data-gathering procedures, evaluation protocols, novel challenge scenarios, training for challenge organizers, challenge analytics, results dissemination and, ultimately, advancing the state of the art through challenges. ChaLearn is collaborating in the organization of the PAKDD 2019 data competition (AutoML Challenge).

The competition will be run on the CodaLab platform (https://competitions.codalab.org/). CodaLab is an open-source, web-based platform that enables researchers, developers, and data scientists to collaborate, with the goal of advancing research fields where machine learning and advanced computation are used. CodaLab offers several features targeting reproducible research. In the context of the AutoML Challenge, CodaLab is the platform that will allow the evaluation of participants' solutions. CodaLab is administered by Université Paris-Saclay and maintained by CKcollab, LLC. This is made possible by funding from 4Paradigm and a Microsoft Azure for Research grant.