Program (tentative)

(All listed times are EDT)

8:55-9:00 Opening Remarks
9:00-9:30 Planting Undetectable Backdoors in Machine Learning Models
Or Zamir (Invited Speaker)
9:30-10:00 An Algorithmic Framework for Bias Bounties
Aaron Roth (Invited Speaker)
10:00-10:30 Break
10:30-11:30 Spotlight talks 1

From Adaptive Query Release to Machine Unlearning
Enayat Ullah, Raman Arora

Geometric Alignment Improves Fully Test Time Adaptation
Kowshik Thopalli, Pavan K. Turaga, Jayaraman J. Thiagarajan

Modeling the Right to Be Forgotten
Aloni Cohen, Adam Smith, Marika Swanberg, Prashant Nalini Vasudevan

Revisiting the Updates of a Pre-trained Model for Few-shot Learning
Yujin Kim, Jaehoon Oh, Sungnyun Kim, Se-Young Yun

Super Seeds: extreme model compression by trading off storage with computation
Nayoung Lee*, Shashank Rajput*, Jy-yong Sohn, Hongyi Wang, Alliot Nagle, Eric Xing, Kangwook Lee, Dimitris Papailiopoulos (*: equal contribution)

Beyond Tabula Rasa: Reincarnating Reinforcement Learning
Rishabh Agarwal, Max Schwarzer, Pablo Samuel Castro, Aaron Courville, Marc G. Bellemare
11:30-12:00 Adapting Deep Networks to Distribution Shift with Minimal Assumptions
Chelsea Finn (Invited Speaker)
12:00-1:30 Lunch break
1:30-2:00 What does it mean to unlearn?
Nicolas Papernot (Invited Speaker)
2:00-2:30 Test-time adaptation via the convex conjugate
Zico Kolter (Invited Speaker)
2:30-3:00 Spotlight talks 2

Simulating Bandit Learning from User Feedback for Extractive Question Answering
Ge Gao, Eunsol Choi, Yoav Artzi

How Adaptive are Adaptive Test-time Defenses?
Francesco Croce, Sven Gowal, Thomas Brunner, Evan Shelhamer, Matthias Hein, Ali Taylan Cemgil

Comparing Model and Input Updates for Test-Time Adaptation to Corruption
Jin Gao, Jialing Zhang, Xihui Liu, Trevor Darrell, Evan Shelhamer, Dequan Wang
3:00-3:30 Break
3:30-5:30 Poster Session

Accepted Papers

Accepted papers will be presented as in-person posters or pre-recorded lightning talks. Additionally, nine papers were selected as spotlight talks (all of which will be in person).

Poster Session:

Virtual poster presentations (2-minute videos):

Context

In modern ML domains, state-of-the-art performance is attained by highly overparameterized models that are expensive to train, costing weeks of time and millions of dollars. After deploying such a model, the learner may discover issues such as leakage of private data or vulnerability to adversarial examples, or may wish to impose additional constraints post-deployment, for example, to ensure fairness for different subgroups. Retraining the model from scratch to incorporate these additional desiderata would be expensive; one would instead prefer to update the trained model directly, which can yield significant savings in time, computation, and memory. Instances of this principle in action include the emerging field of machine unlearning and the celebrated paradigm of fine-tuning pretrained models. The goal of our workshop is to provide a platform to stimulate discussion about both the state of the art in updatable ML and future challenges in the field.
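
As a concrete illustration of the fine-tuning instance above, here is a minimal sketch in Python (assuming PyTorch and torchvision; the ResNet-18 backbone, the 10-class head, and the learning rate are illustrative assumptions, not anything prescribed by the workshop):

import torch
import torch.nn as nn
from torchvision import models

# Reuse an expensively pretrained backbone instead of training from scratch.
model = models.resnet18(pretrained=True)

# Freeze the backbone so updates touch only the new head.
for param in model.parameters():
    param.requires_grad = False

# Replace the classifier head for the new, post-deployment task
# (10 classes here is an arbitrary illustrative choice).
model.fc = nn.Linear(model.fc.in_features, 10)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)

def update_step(x, y):
    """One cheap update step over the head parameters only."""
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()

Because only the small head is trainable, each update step costs a tiny fraction of full retraining, which is the source of the time, computation, and memory savings described above.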

This workshop will bring together researchers from various ML communities to discuss recent theoretical and empirical developments in updatable machine learning.

Specific topics of interest for the workshop include (but are not limited to) theoretical and empirical works in:

Submission

The goal of UpML is to bring together researchers from various theoretical and applied ML communities working on topics related to post-deployment modification of trained models. We seek contributions from different research areas of computer science and statistics.

Authors are invited to submit a short abstract (4 pages of main content + unlimited pages for references) of their work. Submissions are single-blind (non-anonymized), and there is no prescribed style file (though authors should be considerate of reviewers in their selection). Supplementary material (proofs, additional experiments, etc.) can be included after the references, but reviewers are not expected to read beyond the first 4 pages. Authors may also provide a link to the full version of the paper.

Submissions will undergo a lightweight review process and will be judged on originality, relevance, interest, and clarity. Submissions should describe novel work, or work that has already appeared elsewhere but that can stimulate discussion between the different communities at the workshop. Accepted abstracts will be presented at the workshop as either a talk or a poster.

The workshop will not have formal proceedings and is not intended to preclude later publication at another venue. Links to accepted papers will be posted publicly on the workshop website, with the authors' consent.

Call for Papers (PDF)

Invited Speakers

Important Dates

Abstract Submission
Monday, May 16, 2022 (11:59 PM)
Notification
Monday, June 13, 2022
Workshop
July 23, 2022

Organizing and Program Committee

Contact

upml2022workshop@gmail.com

Submission website

OpenReview