Google AI Blog: Google at ICLR 2022


The tenth International Conference on Learning Representations (ICLR 2022) kicks off this week, bringing together researchers, entrepreneurs, engineers and students alike to discuss and explore the rapidly advancing field of deep learning. Entirely virtual this year, ICLR 2022 offers conference and workshop tracks that present some of the latest research in deep learning and its applications to areas ranging from computer vision, speech recognition and text understanding to robotics, computational biology, and more.

As a Platinum Sponsor of ICLR 2022 and Champion DEI Action Fund contributor, Google will have a strong presence with nearly 100 accepted publications and extensive participation on organizing committees and in workshops. If you have registered for ICLR 2022, we hope you'll watch our talks and learn about the work done at Google to address complex problems that affect billions of people. Here you can learn more about the research we will be presenting as well as our general involvement at ICLR 2022 (those with Google affiliations in bold).

Senior Area Chairs:

Includes: Been Kim, Dale Schuurmans, Sergey Levine

Area Chairs:

Includes: Adam White, Aditya Menon, Aleksandra Faust, Amin Karbasi, Amir Globerson, Andrew Dai, Balaji Lakshminarayanan, Behnam Neyshabur, Ben Poole, Bhuwan Dhingra, Bo Dai, Boqing Gong, Cristian Sminchisescu, David Ha, David Woodruff, Denny Zhou, Dipanjan Das, Dumitru Erhan, Dustin Tran, Emma Strubell, Eunsol Choi, George Dahl, George Tucker, Hanie Sedghi, Heinrich Jiang, Hossein Mobahi, Hugo Larochelle, Izhak Shafran, Jasper Snoek, Jean-Philippe Vert, Jeffrey Pennington, Justin Gilmer, Karol Hausman, Kevin Swersky, Krzysztof Choromanski, Mario Lučić, Mathieu Blondel, Matt Kusner, Michael Ryoo, Ming-Hsuan Yang, Minmin Chen, Mirella Lapata, Mohammad Ghavamzadeh, Mohammad Norouzi, Naman Agarwal, Nicholas Carlini, Olivier Bachem, Piyush Rai, Prateek Jain, Quentin Berthet, Richard Nock, Rose Yu, Sewoong Oh, Silvio Lattanzi, Slav Petrov, Srinadh Bhojanapalli, Tim Salimans, Ting Chen, Tong Zhang, Vikas Sindhwani, Weiran Wang, William Cohen, Xiaoming Liu

Workflow Chairs:

Includes: Yaguang Li

Diversity Equity & Inclusion Chairs:

Includes: Rosanne Liu

Invited Talks

Beyond Interpretability: Developing a Language to Shape Our Relationships with AI

Google Speaker: Been Kim

Do You See What I See? Large-Scale Learning from Multimodal Videos

Google Speaker: Cordelia Schmid


Publications

Hyperparameter Tuning with Renyi Differential Privacy – 2022 Outstanding Paper Award

Nicolas Papernot, Thomas Steinke

MIDI-DDSP: Detailed Control of Musical Performance via Hierarchical Modeling

Yusong Wu, Ethan Manilow, Yi Deng, Rigel Swavely, Kyle Kastner, Tim Cooijmans, Aaron Courville, Cheng-Zhi Anna Huang, Jesse Engel

The Information Geometry of Unsupervised Reinforcement Learning

Benjamin Eysenbach, Ruslan Salakhutdinov, Sergey Levine

Learning Strides in Convolutional Neural Networks – 2022 Outstanding Paper Award

Rachid Riad*, Olivier Teboul, David Grangier, Neil Zeghidour

Poisoning and Backdooring Contrastive Learning

Nicholas Carlini, Andreas Terzis

Coordination Among Neural Modules Through a Shared Global Workspace

Anirudh Goyal, Aniket Didolkar, Alex Lamb, Kartikeya Badola, Nan Rosemary Ke, Nasim Rahaman, Jonathan Binas, Charles Blundell, Michael Mozer, Yoshua Bengio

Fine-Tuned Language Models Are Zero-Shot Learners (see the blog post)

Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, Quoc V. Le

Large Language Models Can Be Strong Differentially Private Learners

Xuechen Li, Florian Tramèr, Percy Liang, Tatsunori Hashimoto

Progressive Distillation for Fast Sampling of Diffusion Models

Tim Salimans, Jonathan Ho

Exploring the Limits of Large Scale Pre-training

Samira Abnar, Mostafa Dehghani, Behnam Neyshabur, Hanie Sedghi

Scarf: Self-Supervised Contrastive Learning Using Random Feature Corruption

Dara Bahri, Heinrich Jiang, Yi Tay, Donald Metzler

Scalable Sampling for Nonsymmetric Determinantal Point Processes

Insu Han, Mike Gartrell, Jennifer Gillenwater, Elvis Dohmatob, Amin Karbasi

When Vision Transformers Outperform ResNets without Pre-training or Strong Data Augmentations

Xiangning Chen, Cho-Jui Hsieh, Boqing Gong

ViTGAN: Training GANs with Vision Transformers

Kwonjoon Lee, Huiwen Chang, Lu Jiang, Han Zhang, Zhuowen Tu, Ce Liu

Generalized Decision Transformer for Offline Hindsight Information Matching

Hiroki Furuta, Yutaka Matsuo, Shixiang Shane Gu

The MultiBERTs: BERT Reproductions for Robustness Analysis

Thibault Sellam, Steve Yadlowsky, Ian Tenney, Jason Wei, Naomi Saphra, Alexander D’Amour, Tal Linzen, Jasmijn Bastings, Iulia Turc, Jacob Eisenstein, Dipanjan Das, Ellie Pavlick

Scaling Laws for Neural Machine Translation

Behrooz Ghorbani, Orhan Firat, Markus Freitag, Ankur Bapna, Maxim Krikun, Xavier Garcia, Ciprian Chelba, Colin Cherry

Interpretable Unsupervised Diversity Denoising and Artefact Removal

Mangal Prakash, Mauricio Delbracio, Peyman Milanfar, Florian Jug

Understanding Latent Correlation-Based Multiview Learning and Self-Supervision: An Identifiability Perspective

Qi Lyu, Xiao Fu, Weiran Wang, Songtao Lu

Memorizing Transformers

Yuhuai Wu, Markus N. Rabe, DeLesley Hutchins, Christian Szegedy

Churn Reduction via Distillation

Heinrich Jiang, Harikrishna Narasimhan, Dara Bahri, Andrew Cotter, Afshin Rostamizadeh

DR3: Value-Based Deep Reinforcement Learning Requires Explicit Regularization

Aviral Kumar, Rishabh Agarwal, Tengyu Ma, Aaron Courville, George Tucker, Sergey Levine

Path Auxiliary Proposal for MCMC in Discrete Space

Haoran Sun, Hanjun Dai, Wei Xia, Arun Ramamurthy

On the Relation Between Statistical Learning and Perceptual Distances

Alexander Hepburn, Valero Laparra, Raul Santos-Rodriguez, Johannes Ballé, Jesús Malo

Possibility Before Utility: Learning And Using Hierarchical Affordances

Robby Costales, Shariq Iqbal, Fei Sha

MT3: Multi-Task Multitrack Music Transcription

Josh Gardner*, Ian Simon, Ethan Manilow*, Curtis Hawthorne, Jesse Engel

Bayesian Neural Network Priors Revisited

Vincent Fortuin, Adrià Garriga-Alonso, Sebastian W. Ober, Florian Wenzel, Gunnar Rätsch, Richard E. Turner, Mark van der Wilk, Laurence Aitchison

GradMax: Growing Neural Networks using Gradient Information

Utku Evci, Bart van Merrienboer, Thomas Unterthiner, Fabian Pedregosa, Max Vladymyrov

Scene Transformer: A Unified Architecture for Predicting Future Trajectories of Multiple Agents

Jiquan Ngiam, Benjamin Caine, Vijay Vasudevan, Zhengdong Zhang, Hao-Tien Lewis Chiang, Jeffrey Ling, Rebecca Roelofs, Alex Bewley, Chenxi Liu, Ashish Venugopal, David Weiss, Ben Sapp, Zhifeng Chen, Jonathon Shlens

The Role of Pretrained Representations for the OOD Generalization of RL Agents

Frederik Träuble, Andrea Dittadi, Manuel Wüthrich, Felix Widmaier, Peter Gehler, Ole Winther, Francesco Locatello, Olivier Bachem, Bernhard Schölkopf, Stefan Bauer

Autoregressive Diffusion Models

Emiel Hoogeboom, Alexey A. Gritsenko, Jasmijn Bastings, Ben Poole, Rianne van den Berg, Tim Salimans

The Role of Permutation Invariance in Linear Mode Connectivity of Neural Networks

Rahim Entezari, Hanie Sedghi, Olga Saukh, Behnam Neyshabur

DISSECT: Disentangled Simultaneous Explanations via Concept Traversals

Asma Ghandeharioun, Been Kim, Chun-Liang Li, Brendan Jou, Brian Eoff, Rosalind W. Picard

Anisotropic Random Feature Regression in High Dimensions

Gabriel C. Mel, Jeffrey Pennington

Open-Vocabulary Object Detection via Vision and Language Knowledge Distillation

Xiuye Gu, Tsung-Yi Lin*, Weicheng Kuo, Yin Cui

MCMC Should Mix: Learning Energy-Based Model with Flow-Based Backbone

Erik Nijkamp*, Ruiqi Gao, Pavel Sountsov, Srinivas Vasudevan, Bo Pang, Song-Chun Zhu, Ying Nian Wu

Effect of Scale on Catastrophic Forgetting in Neural Networks

Vinay Ramasesh, Aitor Lewkowycz, Ethan Dyer

Incremental False Negative Detection for Contrastive Learning

Tsai-Shien Chen, Wei-Chih Hung, Hung-Yu Tseng, Shao-Yi Chien, Ming-Hsuan Yang

Towards Evaluating the Robustness of Neural Networks Learned by Transduction

Jiefeng Chen, Xi Wu, Yang Guo, Yingyu Liang, Somesh Jha

What Do We Mean by Generalization in Federated Learning?

Honglin Yuan*, Warren Morningstar, Lin Ning, Karan Singhal

ViDT: An Efficient and Effective Fully Transformer-Based Object Detector

Hwanjun Song, Deqing Sun, Sanghyuk Chun, Varun Jampani, Dongyoon Han, Byeongho Heo, Wonjae Kim, Ming-Hsuan Yang

Measuring CLEVRness: Black-Box Testing of Visual Reasoning Models

Spyridon Mouselinos, Henryk Michalewski, Mateusz Malinowski

Wisdom of Committees: An Overlooked Approach To Faster and More Accurate Models (see the blog post)

Xiaofang Wang, Dan Kondratyuk, Eric Christiansen, Kris M. Kitani, Yair Alon (prev. Movshovitz-Attias), Elad Eban

Leveraging Unlabeled Data to Predict Out-of-Distribution Performance

Saurabh Garg*, Sivaraman Balakrishnan, Zachary C. Lipton, Behnam Neyshabur, Hanie Sedghi

Data-Driven Offline Optimization for Architecting Hardware Accelerators (see the blog post)

Aviral Kumar, Amir Yazdanbakhsh, Milad Hashemi, Kevin Swersky, Sergey Levine

Diurnal or Nocturnal? Federated Learning of Multi-branch Networks from Periodically Shifting Distributions

Chen Zhu*, Zheng Xu, Mingqing Chen, Jakub Konecny, Andrew Hard, Tom Goldstein

Policy Gradients Incorporating the Future

David Venuto, Elaine Lau, Doina Precup, Ofir Nachum

Discrete Representations Strengthen Vision Transformer Robustness

Chengzhi Mao*, Lu Jiang, Mostafa Dehghani, Carl Vondrick, Rahul Sukthankar, Irfan Essa

SimVLM: Simple Visual Language Model Pretraining with Weak Supervision (see the blog post)

Zirui Wang, Jiahui Yu, Adams Wei Yu, Zihang Dai, Yulia Tsvetkov, Yuan Cao

Neural Stochastic Dual Dynamic Programming

Hanjun Dai, Yuan Xue, Zia Syed, Dale Schuurmans, Bo Dai

PolyLoss: A Polynomial Expansion Perspective of Classification Loss Functions

Zhaoqi Leng, Mingxing Tan, Chenxi Liu, Ekin Dogus Cubuk, Xiaojie Shi, Shuyang Cheng, Dragomir Anguelov

Information Prioritization Through Empowerment in Visual Model-Based RL

Homanga Bharadhwaj*, Mohammad Babaeizadeh, Dumitru Erhan, Sergey Levine

Value Function Spaces: Skill-Centric State Abstractions for Long-Horizon Reasoning

Dhruv Shah, Peng Xu, Yao Lu, Ted Xiao, Alexander Toshev, Sergey Levine, Brian Ichter

Understanding and Leveraging Overparameterization in Recursive Value Estimation

Chenjun Xiao, Bo Dai, Jincheng Mei, Oscar Ramirez, Ramki Gummadi, Chris Harris, Dale Schuurmans

The Efficiency Misnomer

Mostafa Dehghani, Anurag Arnab, Lucas Beyer, Ashish Vaswani, Yi Tay

On the Role of Population Heterogeneity in Emergent Communication

Mathieu Rita, Florian Strub, Jean-Bastien Grill, Olivier Pietquin, Emmanuel Dupoux

No One Representation to Rule Them All: Overlapping Features of Training Methods

Raphael Gontijo-Lopes, Yann Dauphin, Ekin D. Cubuk

Data Poisoning Won't Save You From Facial Recognition

Evani Radiya-Dixit, Sanghyun Hong, Nicholas Carlini, Florian Tramèr

AdaMatch: A Unified Approach to Semi-Supervised Learning and Domain Adaptation

David Berthelot, Rebecca Roelofs, Kihyuk Sohn, Nicholas Carlini, Alex Kurakin

Maximum Entropy RL (Provably) Solves Some Robust RL Problems

Benjamin Eysenbach, Sergey Levine

Auto-scaling Vision Transformers Without Training

Wuyang Chen, Wei Huang, Xianzhi Du, Xiaodan Song, Zhangyang Wang, Denny Zhou

Optimizing Few-Step Diffusion Samplers by Gradient Descent

Daniel Watson, William Chan, Jonathan Ho, Mohammad Norouzi

ExT5: Towards Extreme Multi-Task Scaling for Transfer Learning

Vamsi Aribandi, Yi Tay, Tal Schuster, Jinfeng Rao, Huaixiu Steven Zheng, Sanket Vaibhav Mehta, Honglei Zhuang, Vinh Q. Tran, Dara Bahri, Jianmo Ni, Jai Gupta, Kai Hui, Sebastian Ruder, Donald Metzler

Fortuitous Forgetting in Connectionist Networks

Hattie Zhou, Ankit Vani, Hugo Larochelle, Aaron Courville

Evading Adversarial Example Detection Defenses with Orthogonal Projected Gradient Descent

Oliver Bryniarski, Nabeel Hingun, Pedro Pachuca, Vincent Wang, Nicholas Carlini

Benchmarking the Spectrum of Agent Capabilities

Danijar Hafner

Charformer: Fast Character Transformers via Gradient-Based Subword Tokenization

Yi Tay, Vinh Q. Tran, Sebastian Ruder, Jai Gupta, Hyung Won Chung, Dara Bahri, Zhen Qin, Simon Baumgartner, Cong Yu, Donald Metzler

Mention Memory: Incorporating Textual Knowledge into Transformers Through Entity Mention Attention

Michiel de Jong, Yury Zemlyanskiy, Nicholas FitzGerald, Fei Sha, William Cohen

Eigencurve: Optimal Learning Rate Schedule for SGD on Quadratic Objectives with Skewed Hessian Spectrums

Rui Pan, Haishan Ye, Tong Zhang

Scale Efficiently: Insights from Pre-training and Fine-Tuning Transformers

Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler

Omni-Scale CNNs: A Simple and Effective Kernel Size Configuration for Time Series Classification

Wensi Tang, Guodong Long, Lu Liu, Tianyi Zhou, Michael Blumenstein, Jing Jiang

Embedded-Model Flows: Combining the Inductive Biases of Model-Free Deep Learning and Explicit Probabilistic Modeling

Gianluigi Silvestri, Emily Fertig, Dave Moore, Luca Ambrogioni

Post Hoc Explanations May Be Ineffective for Detecting Unknown Spurious Correlation

Julius Adebayo, Michael Muelly, Hal Abelson, Been Kim

Axiomatic Explanations for Visual Search, Retrieval, and Similarity Learning

Mark Hamilton, Scott Lundberg, Stephanie Fu, Lei Zhang, William T. Freeman

Pix2seq: A Language Modeling Framework for Object Detection (see the blog post)

Ting Chen, Saurabh Saxena, Lala Li, David J. Fleet, Geoffrey Hinton

Mirror Descent Policy Optimization

Manan Tomar, Lior Shani, Yonathan Efroni, Mohammad Ghavamzadeh

CodeTrek: Flexible Modeling of Code Using an Extensible Relational Representation

Pardis Pashakhanloo, Aaditya Naik, Yuepeng Wang, Hanjun Dai, Petros Maniatis, Mayur Naik

Conditional Object-Centric Learning From Video

Thomas Kipf, Gamaleldin F. Elsayed, Aravindh Mahendran, Austin Stone, Sara Sabour, Georg Heigold, Rico Jonschkowski, Alexey Dosovitskiy, Klaus Greff

A Loss Curvature Perspective on Training Instabilities of Deep Learning Models

Justin Gilmer, Behrooz Ghorbani, Ankush Garg, Sneha Kudugunta, Behnam Neyshabur, David Cardoze, George E. Dahl, Zack Nado, Orhan Firat

Autonomous Reinforcement Learning: Formalism and Benchmarking

Archit Sharma, Kelvin Xu, Nikhil Sardana, Abhishek Gupta, Karol Hausman, Sergey Levine, Chelsea Finn

TRAIL: Near-Optimal Imitation Learning with Suboptimal Data

Mengjiao Yang, Sergey Levine, Ofir Nachum

Minimax Optimization With Smooth Algorithmic Adversaries

Tanner Fiez, Lillian J. Ratliff, Chi Jin, Praneeth Netrapalli

Unsupervised Semantic Segmentation by Distilling Feature Correspondences

Mark Hamilton, Zhoutong Zhang, Bharath Hariharan, Noah Snavely, William T. Freeman

InfinityGAN: Towards Infinite-Pixel Image Synthesis

Chieh Hubert Lin, Hsin-Ying Lee, Yen-Chi Cheng, Sergey Tulyakov, Ming-Hsuan Yang

Shuffle Private Stochastic Convex Optimization

Albert Cheu, Matthew Joseph, Jieming Mao, Binghui Peng

Hybrid Random Features

Krzysztof Choromanski, Haoxian Chen, Han Lin, Yuanzhe Ma, Arijit Sehanobish, Deepali Jain, Michael S Ryoo, Jake Varley, Andy Zeng, Valerii Likhosherstov, Dmitry Kalashnikov, Vikas Sindhwani, Adrian Weller

Vector-Quantized Image Modeling With Improved VQGAN

Jiahui Yu, Xin Li, Jing Yu Koh, Han Zhang, Ruoming Pang, James Qin, Alexander Ku, Yuanzhong Xu, Jason Baldridge, Yonghui Wu

On the Benefits of Maximum Likelihood Estimation for Regression and Forecasting

Pranjal Awasthi, Abhimanyu Das, Rajat Sen, Ananda Theertha Suresh

Surrogate Gap Minimization Improves Sharpness-Aware Training

Juntang Zhuang*, Boqing Gong, Liangzhe Yuan, Yin Cui, Hartwig Adam, Nicha C. Dvornek, Sekhar Tatikonda, James S. Duncan, Ting Liu

Online Target Q-learning With Reverse Experience Replay: Efficiently Finding the Optimal Policy for Linear MDPs

Naman Agarwal, Prateek Jain, Dheeraj Nagaraj, Praneeth Netrapalli, Syomantak Chaudhuri

CrossBeam: Learning to Search in Bottom-Up Program Synthesis

Kensen Shi, Hanjun Dai, Kevin Ellis, Charles Sutton


Workshops

Workshop on the Elements of Reasoning: Objects, Structure, and Causality (OSC)

Organizers include: Klaus Greff, Thomas Kipf

Workshop on Agent Learning in Open-Endedness

Organizers include: Krishna Srinivasan

Speakers include: Natasha Jaques, Danijar Hafner

Wiki-M3L: Wikipedia and Multi-modal & Multi-lingual Research

Organizers include: Klaus Greff, Thomas Kipf

Speakers include: Jason Baldridge, Tom Duerig

Setting Up ML Evaluation Standards to Accelerate Progress

Organizers include: Rishabh Agarwal

Speakers and Panelists include: Katherine Heller, Sara Hooker, Corinna Cortes

From Cells to Societies: Collective Learning Across Scales

Organizers include: Mark Sandler, Max Vladymyrov

Speakers include: Blaise Aguera y Arcas, Alexander Mordvintsev, Michael Mozer

Emergent Communication: New Frontiers

Speakers include: Natasha Jaques

Deep Learning for Code

Organizers include: Jonathan Herzig

GroundedML: Anchoring Machine Learning in Classical Algorithmic Theory

Speakers include: Gintare Karolina Dziugaite

Generalizable Policy Learning in the Physical World

Speakers and Panelists include: Mrinal Kalakrishnan

CoSubmitting Summer (CSS) Workshop

Organizers include: Rosanne Liu

*Work done while at Google.

