Registration Desk Sat 16 Dec 07:30 a.m.
NeurIPS 2023 Workshop on Machine Learning for Creativity and Design Sat 16 Dec 08:15 a.m.
Machine co-creativity continues to grow rapidly alongside machine learning, especially with the recent surge of generative models across multiple domains. This workshop, as a continuation of a long series, explores these topics, including state-of-the-art algorithms for creation, the accessibility of these models for artists, social and cultural impact, as well as actual artistic applications. The workshop consists of presentations by invited speakers, presentations of selected papers and artworks, two panels, and an art showcase (organized in collaboration with the chairs of the NeurIPS Creative AI track). The goal of this workshop is to bring together researchers and artists interested in exploring the intersection of human creativity and machine learning, and to look beyond technical issues to better understand the needs of artists and creators.
NeurIPS 2023 Workshop on Tackling Climate Change with Machine Learning: Blending New and Existing Knowledge Systems Sat 16 Dec 08:15 a.m.
Climate change is a complex, multifaceted, and far-reaching challenge with increasingly severe consequences for humanity as natural disasters multiply, sea levels rise, and ecosystems falter. Actions to address climate change take many forms, from designing smart electric grids to tracking greenhouse gas emissions through satellite imagery. Machine learning is emerging as a necessary component of mitigating and adapting to climate change, via a wide array of techniques. Using machine learning to address climate change, a subset of the "AI for society" research area, requires close interdisciplinary collaboration among various fields with diverse practitioners. This workshop is intended to form connections and foster cross-pollination between researchers in machine learning and experts in complementary climate-relevant fields, in addition to providing a forum for those in the machine learning community who wish to tackle climate change.
Workshop: Adaptive Experimental Design and Active Learning in the Real World Sat 16 Dec 08:15 a.m.
Join us for an insightful workshop on adaptive experimental design and active learning. Dive into their use in fields like computational biology, materials discovery, chip design, and more.
Workshop: Generalization in Planning (GenPlan '23) Sat 16 Dec 08:15 a.m.
This workshop aims to bridge highly active but largely parallel research communities, addressing the problem of generalizable and transferable learning for all forms of sequential decision making (SDM), including reinforcement learning and AI planning. We expect that this workshop will play a key role in accelerating foundational innovation in SDM by synthesizing the best ideas for learning generalizable representations of learned knowledge and for reliably utilizing that knowledge across different sequential decision-making problems. NeurIPS presents an ideal, inclusive venue for dialog and technical interaction among researchers spanning the vast range of research communities that focus on these topics.
Workshop: Gaze Meets ML Sat 16 Dec 08:15 a.m.
Eye gaze has proven to be a cost-efficient way to collect large-scale physiological data that can reveal the underlying human attentional patterns in real-life workflows, and thus has long been explored as a signal to directly measure human-related cognition in various domains. Physiological data (including but not limited to eye gaze) offer new perception capabilities, which could be used in several ML domains, e.g., egocentric perception, embodied AI, NLP, etc. They can help infer human perception, intentions, beliefs, goals, and other cognitive properties that are much needed for human-AI interactions and agent coordination. In addition, large collections of eye-tracking data have enabled data-driven modeling of human visual attention mechanisms, for both saliency and scan-path prediction, with twofold advantages: from the neuroscientific perspective, to better understand biological mechanisms, and from the AI perspective, to equip agents with the ability to mimic or predict human behavior and improve interpretability and interactions.
The Gaze meets ML workshop aims to bring together an active research community to collectively drive progress in defining and addressing core problems in gaze-assisted machine learning. This year the workshop returns to NeurIPS for its second edition, attracting a diverse group of researchers from academia and industry presenting novel work in this area of research.
6th Robot Learning Workshop: Pretraining, Fine-Tuning, and Generalization with Large Scale Models Sat 16 Dec 08:15 a.m.
The proposed workshop focuses on the intersection of machine learning (ML) and robotics, under this year’s focus topic: “Pretraining, Fine-Tuning, and Generalization with Large Scale Models.” Embodied AI and robotics pose unique challenges and opportunities for utilizing large pre-trained models. We seek to host a diverse set of views and approaches from across the robotics domain and dive deep into questions such as: What sources of data can be used for training large models in robotics? What role should pre-training play in robotics pipelines? How far can pre-trained models generalize when faced with novel tasks and environments? What is currently missing from the pre-training paradigm for embodied systems?
Intrinsically Motivated Open-ended Learning (IMOL) Workshop Sat 16 Dec 08:15 a.m.
How do humans develop broad and flexible repertoires of knowledge and skills? How can we design autonomous lifelong learning machines with the same abilities? The field of IMOL explores these questions through integrating research on the motivational forces, learning architectures, and developmental and environmental constraints supporting the acquisition of open-ended repertoires of skill and knowledge.
At this full-day in-person NeurIPS workshop, we will gather speakers from a wide diversity of scientific traditions, showcase on-going research via contributed talks and poster sessions, and provide networking opportunities for research and mentorship discussions.
Temporal Graph Learning Workshop @ NeurIPS 2023 Sat 16 Dec 08:15 a.m.
Temporal graph learning is an emerging area of research in graph representation learning, motivated by the prevalence of evolving and dynamic interconnected data in different domains and applications. In this workshop, which will be the second workshop on temporal graph learning, we plan to bring together researchers working on relevant areas to exchange ideas on different aspects of temporal graph learning including datasets for discrete and continuous time graphs, evaluation strategies, theoretical foundations, as well as using temporal graph learning paradigms in real-world applications.
Workshop: AI for Science: from Theory to Practice Sat 16 Dec 08:15 a.m.
AI is being increasingly integrated into scientific discovery to augment and accelerate research, helping scientists to generate hypotheses, design experiments, collect and interpret large datasets, and gain new insights that might not have been possible using traditional scientific methods alone. It has solved scientific challenges that were unimaginable before, e.g., predicting 3D protein structures, simulating molecular systems, forecasting global climate, and discovering new scientific laws. Despite this promise, several critical gaps stifle algorithmic and scientific innovation in "AI for Science," and the overarching goal of this workshop is to grow AI for Science by closing these gaps:
* Gap 1: Science of science. The principles of scientific methods have remained unchanged since the 17th century. How AI can facilitate the practice of scientific discovery itself often remains undiscussed. For example, instead of the numerous hypothesis-experiment cycles needed to make sense of a scientific phenomenon, can AI reason and output natural laws directly?
* Gap 2: Limited exploration at the intersections of multiple disciplines. Solutions to grand challenges stretch across various disciplines. For example, protein structure prediction requires collaboration across physics, chemistry, and biology, and single-cell imaging of whole tumors can be approached with cosmology algorithms that connect cells as stars.
* Gap 3: Unified ecosystems of datasets, models, and scientific hypotheses. Comprehensive ecosystems and engagement of the research community, e.g., accumulation of datasets, open-source platforms, and benchmarks, are needed to reliably evaluate AI tools and integrate them into scientific workflows and instruments so that they can contribute to scientific understanding or acquire it autonomously. The workshop will emphasize this indispensable ingredient to the success of AI for Science and engage in discussions around it.
* Gap 4: Responsible use and development of AI for science. Interest in AI across scientific disciplines has grown, but very few AI models have progressed to routine use in practice. We plan to present a roadmap and guidelines for accelerating the translation of AI in science. To be successful, translation will require a team of engaged stakeholders and a systematic process from beginning (problem formulation) to end (widespread deployment).
* Gap 5: Lack of educational resources. A critical element in increasing the adoption of AI for scientific discovery across disciplines is creating accessible educational materials and AI-lab protocols for both AI researchers and scientists with different areas of expertise, seniority, and levels of interest.
* Gap 6: Unrealistic methodological assumptions or directions. While AI researchers strive for methodological advances, they can make unrealistic assumptions that limit the applicability of new algorithms, their adoption in real-world settings, and their transition into implementation (e.g., at a particle accelerator, genome sequencing lab, or quantum chemistry lab). For example, while state-of-the-art molecule generation models perform well on benchmarks, they often generate molecules that can't be synthesized in a lab.
Workshop on Advancing Neural Network Training (WANT): Computational Efficiency, Scalability, and Resource Optimization Sat 16 Dec 08:15 a.m.
Unlock neural network training's potential for good and science! Enhance computational efficiency, scalability, and resource optimization. Join HPC and AI experts to tackle challenges in theory and applications.
Third Workshop on Efficient Natural Language and Speech Processing (ENLSP-III): Towards the Future of Large Language Models and their Emerging Descendants Sat 16 Dec 08:15 a.m.
The third edition of the Efficient Natural Language and Speech Processing (ENLSP-III) workshop will focus on the future of large language and speech foundation models, and on how to make them more efficient in terms of data, model, training, and inference for real-world applications as well as academic research. The workshop program offers an interactive platform for gathering experts and talent from academia and industry through invited talks, a panel discussion, paper submissions, reviews, interactive posters, oral presentations, and a mentorship program. This will be a unique opportunity to discuss and share challenging problems, build connections, exchange ideas, brainstorm solutions, and foster future collaborations. The topics of this workshop will be of interest to people working on general machine learning, deep learning, optimization, theory, and NLP & Speech applications.
Workshop: Generative AI and Biology (GenBio@NeurIPS2023) Sat 16 Dec 08:15 a.m.
Advancing biological discovery, therapeutic design, and pharma development through generative AI.
Workshop: Machine Learning for Audio Sat 16 Dec 08:20 a.m.
The Machine Learning for Audio Workshop at NeurIPS 2023 will bring together audio practitioners and machine learning researchers at a venue focused on various problems in audio, including music information retrieval, acoustic event detection, computational paralinguistics, speech transcription, multimodal modeling, and generative modeling of speech and other sounds. Our team has previously held multiple audio-related workshops at top machine learning venues, and both the organizing team and invited speakers represent broad diversity in terms of gender identity, affiliation, seniority, and geography. We also plan to solicit workshop papers on the topic.
Workshop on Federated Learning in the Age of Foundation Models in Conjunction with NeurIPS 2023 (FL@FM-NeurIPS'23) Sat 16 Dec 08:25 a.m.
An exciting forum for researchers to exchange the recent developments in federated learning in the modern age of foundation models.
Please visit our workshop webpage for full details: https://federated-learning.org/fl@fm-neurips-2023/
Workshop: Optimal Transport and Machine Learning Sat 16 Dec 08:30 a.m.
Over the last decade, optimal transport (OT) has evolved from a prize-winning research area in pure mathematics to a recurring theme bursting across many areas of machine learning (ML). Advancements in OT theory, computation, and statistics have fueled breakthroughs in a wide range of applications, from single-cell genomics (Schiebinger et al., 2019) to generative modeling (Arjovsky et al., 2017) and optimization of over-parametrized neural nets (Chizat et al., 2018; De Bortoli et al., 2021), among many others. The OTML workshop series (in '14, '17, '19, and '21) has been instrumental in shaping this influential research thread. For this new OTML installment, we aim even higher by hosting two exceptional plenary speakers: Luis Caffarelli, who received the 2023 Abel Prize for his seminal contributions to regularity theory for the Monge–Ampère equation and OT, and Felix Otto, the 2006 Leibniz Prize awardee and 2017 Blaise Pascal medalist, who made profound contributions to the theory of Wasserstein gradient flows. The OTML workshop will provide a unique platform to federate, disseminate, and advance current knowledge in this rapidly growing field. This, in turn, will facilitate cross-field fertilization and drive the community towards future groundbreaking discoveries.
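As a toy illustration of the kind of quantity OT research computes: in one dimension, the Wasserstein-1 (earth mover's) distance between two equal-size, uniformly weighted empirical samples reduces to matching sorted values. The sketch below uses made-up numbers and is not a general OT solver:

```python
# Minimal sketch of 1D optimal transport: for two equal-size samples with
# uniform weights, the optimal coupling matches the i-th smallest value of
# one sample to the i-th smallest value of the other.
def wasserstein_1d(u, v):
    """Wasserstein-1 distance between two equal-size empirical samples."""
    assert len(u) == len(v), "this shortcut assumes equal-size samples"
    return sum(abs(a - b) for a, b in zip(sorted(u), sorted(v))) / len(u)

# Each matched pair is 5 apart, so the transport cost is 5.0.
print(wasserstein_1d([0.0, 1.0, 3.0], [5.0, 6.0, 8.0]))  # → 5.0
```

General OT problems (unequal weights, higher dimensions) require linear programming or entropic-regularization solvers; this closed form is special to the 1D case.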
Workshop: Socially Responsible Language Modelling Research (SoLaR) Sat 16 Dec 08:30 a.m.
The inaugural Socially Responsible Language Modelling Research (SoLaR) workshop at NeurIPS 2023 is an interdisciplinary gathering that aims to foster responsible and ethical research in the field of language modeling. Recognizing the significant risks and harms [33-37] associated with the development, deployment, and use of language models, the workshop emphasizes the need for researchers to focus on addressing these risks starting from the early stages of development. The workshop brings together experts and practitioners from various domains and academic fields with a shared commitment to promoting fairness, equity, accountability, transparency, and safety in language modeling research. In addition to technical works on socially responsible language modeling research, we also encourage sociotechnical submissions from other disciplines such as philosophy, law, and policy, in order to foster an interdisciplinary dialogue on the societal impacts of LMs.
Workshop: The Symbiosis of Deep Learning and Differential Equations -- III Sat 16 Dec 08:30 a.m.
In the deep learning community, a remarkable trend is emerging, where powerful architectures are created by leveraging classical mathematical modeling tools from diverse fields like differential equations, signal processing, and dynamical systems. Differential equations are a prime example: research on neural differential equations has expanded to include a large zoo of related models with applications ranging from time series analysis to robotics control. Score-based diffusion models are among the state-of-the-art tools for generative modelling and draw direct connections to neural differential equations. Other examples of deep architectures with important ties to classical fields of mathematical modelling include normalizing flows, graph neural diffusion models, Fourier neural operators, architectures exhibiting domain-specific equivariances, and latent dynamical models (e.g., latent NDEs, H3, S4, Hyena). The previous two editions of the Workshop on the Symbiosis of Deep Learning and Differential Equations have promoted the bidirectional exchange of ideas at the intersection of classical mathematical modelling and modern deep learning. On the one hand, this includes the use of differential equations and similar tools to create neural architectures, accelerate deep learning optimization problems, or study theoretical problems in deep learning. On the other hand, the Workshop also explores the use of deep learning methods to improve the speed, flexibility, or realism of computer simulations. Last year, we noted a particularly keen interest from the audience in neural architectures that leverage classical mathematical models, such as those listed above. We therefore propose that the third edition of this Workshop focus on this theme.
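The core neural-differential-equation idea can be sketched in a few lines: a network f defines the vector field dx/dt = f(x), and the "forward pass" is a numerical ODE solve. The sketch below uses an arbitrary fixed scalar tanh "network" (not a learned one) and a plain explicit Euler integrator; real implementations use adaptive solvers and train f by backpropagating through (or around) the solve:

```python
# Minimal sketch of a neural ODE forward pass with explicit Euler.
# The "network" here is a fixed one-neuron map; its weights are arbitrary
# illustration values, not learned parameters.
import math

def f(x, w=0.5, b=0.1):
    # toy vector field dx/dt = tanh(w*x + b)
    return math.tanh(w * x + b)

def odeint_euler(x0, t1, steps=100):
    """Integrate dx/dt = f(x) from t=0 to t=t1 with fixed Euler steps."""
    x, dt = x0, t1 / steps
    for _ in range(steps):
        x = x + dt * f(x)  # Euler update: x_{k+1} = x_k + dt * f(x_k)
    return x

print(odeint_euler(0.0, 1.0))  # the "output" of the neural ODE at t = 1
```

Because |tanh| < 1, the state can move at most distance t1 from its start, which is one reason ODE-based layers have well-controlled dynamics.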
Workshop: I Can’t Believe It’s Not Better (ICBINB): Failure Modes in the Age of Foundation Models Sat 16 Dec 08:45 a.m.
In the past year, tools such as ChatGPT, Stable Diffusion and Segment Anything have had an immediate impact on our everyday lives. Many of these tools have been built using foundation models, that is, very large models (having billions or trillions of parameters) trained on vast amounts of data (Bommasani et al., 2021). The excitement around these foundation models and their capabilities might suggest that all the interesting problems have been solved and artificial general intelligence is just around the corner (Wei et al., 2022; Bubeck et al., 2023).
At this year’s I Can’t Believe It’s Not Better workshop we invite papers to coolly reflect on this optimism and to demonstrate that there are in fact many difficult and interesting open questions. The workshop will specifically focus on failure modes of foundation models, especially unexpected negative results. In addition, we invite contributions that will help us understand current and future disruptions of machine learning subfields as well as instances where these powerful methods merely remain complementary to another subfield of machine learning.
Contributions on the failure modes of foundation models might consider:
- Domain-specific areas where the application of foundation models did not work as expected.
- Failures in the safety and explainability of foundation models.
- The limits of current foundation model methodologies.
Besides failure modes of foundation models, this workshop also considers their impact on the ML ecosystem and potential problems that remain to be solved by these new systems. In this context, relevant questions include:
- Where do foundation models leave researchers in other areas (e.g., AI for science, recommender systems, Bayesian methods, bioinformatics)?
- Which important problems are not solved by training large models with large amounts of data?
- What unexpected negative results were encountered when applying foundation models to a specific domain?
Workshop: XAI in Action: Past, Present, and Future Applications Sat 16 Dec 08:50 a.m.
Transparency is vital for AI’s growth. This led to the design of new methods in explainable AI. We aim to explore the current state of applied XAI and identify future directions.
Workshop: Mathematics of Modern Machine Learning (M3L) Sat 16 Dec 08:50 a.m.
This workshop explores theory for understanding and advancing modern ML practices: optimization, generalization, and foundation models.
Workshop: Regulatable ML: Towards Bridging the Gaps between Machine Learning Research and Regulations Sat 16 Dec 08:55 a.m.
This workshop brings together ML and policy experts to identify and address various technical and policy challenges that arise when regulating ML models.
Workshop: Machine Learning with New Compute Paradigms Sat 16 Dec 09:00 a.m.
As GPU computing comes closer to a plateau in terms of efficiency and cost due to Moore’s law reaching its limit, there is a growing need to explore alternative computing paradigms, such as (opto-)analog, neuromorphic, and low-power computing. This NeurIPS workshop aims to unite researchers from machine learning and alternative computation fields to establish a new hardware-ML feedback loop. By co-designing models with specialized accelerators, we can leverage the benefits of increased throughput or lower per-flop power consumption. Novel devices hold the potential to further accelerate standard deep learning or even enable efficient inference and training of hitherto compute-constrained model classes. However, new compute paradigms typically present challenges such as intrinsic noise, restricted sets of compute operations, or limited bit-depth, and thus require model-hardware co-design. This workshop’s goal is to foster cross-disciplinary collaboration to capitalize on the opportunities offered by emerging AI accelerators.
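The "limited bit-depth" constraint mentioned above can be illustrated with a toy uniform weight quantizer; the 4-bit width and [-1, 1] clipping range below are arbitrary choices for illustration, not a reference for any particular accelerator:

```python
# Toy sketch of limited bit-depth: snap a real-valued weight onto a uniform
# grid of 2**bits - 1 levels in [-w_max, w_max], clipping out-of-range values.
# Model-hardware co-design must keep models accurate under such constraints.
def quantize(w, bits=4, w_max=1.0):
    levels = 2 ** bits - 1          # 15 representable values at 4 bits
    step = 2 * w_max / levels       # spacing between adjacent levels
    q = round((w + w_max) / step) * step - w_max  # snap to nearest level
    return max(-w_max, min(w_max, q))             # clip to the range

print(quantize(0.33, bits=4))  # lands on the nearest 4-bit grid point
```

Training under such a constraint typically keeps full-precision "shadow" weights and quantizes only in the forward pass (straight-through estimation), which is one instance of the co-design the workshop targets.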
Workshop: Synthetic Data Generation with Generative AI Sat 16 Dec 09:00 a.m.
Synthetic data (SD) is data that has been generated by a mathematical model to solve downstream data science tasks. SD can be used to address three key problems: 1/ private data release, 2/ data de-biasing and fairness, 3/ data augmentation for boosting the performance of ML models. While SD offers great opportunities for these problems, SD generation is still a developing area of research. Systematic frameworks for SD deployment and evaluation are also still missing. Additionally, despite the substantial advances in Generative AI, the scientific community still lacks a unified understanding of how generative AI can be utilized to generate SD for different modalities. The goal of this workshop is to provide a platform for vigorous discussion from all these different perspectives with research communities in the hope of progressing the ideal of using SD for better and trustworthy ML training. Through submissions and facilitated discussions, we aim to characterize and mitigate the common challenges of SD generation that span numerous application domains. The workshop is jointly organized by academic researchers (University of Cambridge) and industry partners from tech (Amazon AI).
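A deliberately simple sketch of the SD idea: fit a generative model to real data, then release samples from the model instead of the data itself. Here the "model" is a univariate Gaussian and the measurements are made up; generative-AI approaches replace this with learned deep models:

```python
# Toy sketch of synthetic data generation: fit a univariate Gaussian to
# "real" data, then sample synthetic records from the fitted model.
import random
import statistics

def fit_and_sample(real_data, n_synthetic, seed=0):
    """Fit mean/stdev to real_data and draw n_synthetic Gaussian samples."""
    mu = statistics.mean(real_data)
    sigma = statistics.stdev(real_data)
    rng = random.Random(seed)           # seeded for reproducibility
    return [rng.gauss(mu, sigma) for _ in range(n_synthetic)]

real = [4.9, 5.1, 5.0, 4.8, 5.2]        # hypothetical "private" measurements
synthetic = fit_and_sample(real, 1000)
print(round(statistics.mean(synthetic), 1))  # close to the real mean
```

Even this toy version exposes the workshop's core questions: does the synthetic sample preserve the statistics a downstream task needs, and how much about the original records does it leak?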
Workshop: Medical Imaging meets NeurIPS Sat 16 Dec 09:00 a.m.
“Medical Imaging meets NeurIPS” aims to bring researchers together from the medical imaging and machine learning communities to create a cutting-edge venue for discussing the major challenges in the field and opportunities for research and novel applications. The proposed event will be the continuation of a successful workshop organized for the past 6 years. It will feature a series of invited speakers (all confirmed) from academia, medical sciences, and industry to present their latest work, and to present reviews of recent technological advances and remaining major challenges. This year we aim to have all keynotes presented in person (to facilitate speaker interaction and discourse), an extended number of submitted talks (approximately double from previous years), and an updated call that highlights changes occurring in our interdisciplinary field.
Workshop: Symmetry and Geometry in Neural Representations Sat 16 Dec 09:00 a.m.
In recent years, there has been a growing appreciation for the importance of respecting the topological, algebraic, or geometric structure of data in machine learning models. In parallel, an emerging set of findings in computational neuroscience suggests that the preservation of this kind of mathematical structure may be a fundamental principle of neural coding in biology. The goal of this workshop is to bring together researchers from applied mathematics and deep learning with neuroscientists whose work reveals the elegant implementation of mathematical structure in biological neural circuitry. Group theory and differential geometry were instrumental in unifying the models of 20th-century physics. Likewise, they have the potential to unify our understanding of how neural systems form useful representations of the world.
Workshop: Machine Learning for Systems Sat 16 Dec 09:00 a.m.
Machine Learning (ML) for Systems describes the application of machine learning techniques to problems related to computer systems. By leveraging supervised learning and reinforcement learning (RL) approaches, machine learning can replace longstanding heuristics that currently drive many of these systems. This includes a wide range of topics, including multi-objective tasks such as designing new data structures, integrated circuits, or design verification, as well as implementing control algorithms for applications such as compilers, databases, memory management, or ML frameworks. While the systems community increasingly recognizes the importance of ML in solving a variety of different systems problems, ML for Systems remains an emerging area without widely established best practices, methods and strategies for the application of state-of-the-art machine learning techniques. The goal of this workshop is to provide an interdisciplinary venue for ML and Systems experts to push this boundary and start new directions within the ML for Systems area.
4th Workshop on Self-Supervised Learning: Theory and Practice Sat 16 Dec 09:00 a.m.
The 4th Workshop on "Self-Supervised Learning: Theory and Practice" aims to discuss the theory and practice of self-supervised learning across multiple research areas like vision, NLP & robotics.
Workshop: Multi-Agent Security: Security as Key to AI Safety Sat 16 Dec 09:00 a.m.
This workshop proposal builds on the observation that the AI and cyber security communities are currently not sufficiently interconnected to navigate risks and opportunities in our multi-agent world. Through a series of discussions involving experts and audiences, provocation and intervention keynotes, and contributed content, we aim to compare, contrast, and synthesize near- and long-term perspectives of AI deployment across society. The fundamental goal of this workshop is to bring together researchers, practitioners, and activists across AI and cyber security in order to create a blueprint for the future of AI security in a multi-agent world, and to define, explore, and challenge the nascent field of multi-agent security (MASEC).
Submission deadline: September 25, 2023
Acceptance Notification: October 27, 2023
Workshop date: December 16, 2023
Competition: Practical Vector Search (Big ANN) Challenge 2023 Sat 16 Dec 09:00 a.m.
We propose a competition to encourage the development of indexing data structures and search algorithms for the Approximate Nearest Neighbor (ANN) or Vector search problem in real-world scenarios. Rather than evaluating the classical uniform indexing of dense vectors, this competition proposes to focus on difficult variants of the task. Optimizing these variants is increasingly relevant as vector search becomes commonplace and the "simple" case is sufficiently well addressed. Specifically, we propose the sparse, filtered, out-of-distribution, and streaming variants of ANNS. These variants require adapted search algorithms and strategies with different tradeoffs. This competition aims to be accessible to participants with modest compute resources by limiting the scale of the datasets, normalizing on limited evaluation hardware, and accepting open-source submissions to only a subset of the datasets. This competition will build on the evaluation framework https://github.com/harsha-simhadri/big-ann-benchmarks that we set up for the billion-scale ANNS challenge https://big-ann-benchmarks.com of NeurIPS 2021.
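As a toy illustration of the filtered variant (hypothetical data and a brute-force scan; real submissions use optimized index structures, not exhaustive search): each database vector carries a tag, and a query only ranks vectors whose tag matches its filter:

```python
# Toy sketch of filtered nearest-neighbor search: candidates are pruned by a
# tag predicate before ranking by Euclidean distance. Real ANN indexes (graph-
# or cluster-based) must integrate the filter into the index traversal itself.
import math

def filtered_nn(database, query, tag):
    """Return the index of the closest vector whose tag matches `tag`."""
    best_i, best_d = None, math.inf
    for i, (vec, t) in enumerate(database):
        if t != tag:
            continue  # filter: skip non-matching candidates entirely
        d = math.dist(vec, query)
        if d < best_d:
            best_i, best_d = i, d
    return best_i  # None if nothing satisfies the filter

db = [([0.0, 0.0], "red"), ([1.0, 1.0], "blue"), ([0.1, 0.1], "blue")]
print(filtered_nn(db, [0.0, 0.0], "blue"))  # → 2 (nearest "blue" vector)
```

The tension the competition probes is visible even here: a highly selective filter makes most of a pre-built index useless, so naive "search then filter" strategies waste work and lose recall.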
Workshop: Learning-Based Solutions for Inverse Problems Sat 16 Dec 09:00 a.m.
Inverse problems are ubiquitous in science, medicine, and engineering, and research in this area has produced real-world impact in medical tomography, seismic imaging, computational photography, and other domains. The recent rapid progress in learning-based image generation raises exciting opportunities in inverse problems, and this workshop seeks to gather a diverse set of participants who apply machine learning to inverse problems, from mathematicians and computer scientists to physicists and biologists. This gathering will facilitate new collaborations and will help develop more effective, reliable, and trustworthy learning-based solutions to inverse problems.
Competition: MyoChallenge 2023: Towards Human-Level Dexterity and Agility Sat 16 Dec 09:00 a.m.
Humans effortlessly grasp objects of diverse shapes and properties and execute agile locomotion without overwhelming their cognitive capacities. This ability was acquired through millions of years of evolution, which honed the symbiotic relationship between the central and peripheral nervous systems and the musculoskeletal structure. Consequently, it is not surprising that uncovering the intricacies of these complex, evolved systems underlying human movement remains a formidable challenge. Advancements in neuromechanical simulations and data-driven methods offer promising avenues to overcome these obstacles. To this end, we propose to organize MyoChallenge 2023, where we will provide a highly detailed neuromechanical simulation environment and invite experts to develop any type of controller, including state-of-the-art reinforcement learning. Building on the success of NeurIPS 2022: MyoChallenge, which focused on manipulating single objects with a highly articulated musculoskeletal hand, this year's competition will feature two tracks: the manipulation track and the locomotion track. The manipulation track will utilize a substantially extended musculoskeletal model of the hand with added elbow and shoulder, MyoArm, which has 27 DOFs controlled by 63 muscles, and aims to realize generalizable manipulation for unseen objects. The new locomotion track will feature the newly developed MyoLeg, which represents the full body with articulated legs featuring 16 DOFs controlled by 80 muscles. This track aims to push the boundaries of agile locomotion and benchmark the World Chase Tag match. The competition is not only suitable for testing state-of-the-art reinforcement learning techniques but will also advance our understanding of human movement toward improved rehabilitation and assistive technologies.
NeurIPS 2023 Competition Proposal: Open Catalyst Challenge Sat 16 Dec 11:00 a.m.
The Open Catalyst Challenge is aimed at encouraging the community to make progress on the consequential problem of catalyst materials discovery. An important proxy for catalyst performance is the adsorption energy, i.e., how strongly the adsorbate molecule binds to the catalyst’s surface. This year’s challenge will consist of one primary task: find the adsorption energy (global minimum) given an adsorbate and a catalyst surface. Adsorption energies can be used for screening catalysts, and as a result this task will directly support the acceleration of computational discovery of novel catalysts for sustainable energy applications.
This year's results presentation will be part of the AI for Science Workshop held on December 16th.
Competition: Train Offline, Test Online: A Democratized Robotics Benchmark Sat 16 Dec 01:30 p.m.
The Train Offline, Test Online (TOTO) competition provides a shared, remote robot setup paired with an open-source dataset. Participants can train offline agents (e.g. via behavior cloning or offline reinforcement learning) and evaluate them on two common manipulation tasks (pouring and scooping), which require challenging generalization across objects, locations, and lighting conditions. TOTO has an additional track for evaluating vision representations, which are combined with a standard behavior cloning method for evaluation. The competition begins with a simulation phase to qualify for the real-robot phase. We hope that TOTO will recruit newcomers to robotics by giving them a chance to compete and win on real hardware and the resources needed to get started.
Competition: Single-cell perturbation prediction: generalizing experimental interventions to unseen contexts Sat 16 Dec 01:30 p.m.
Single-cell sequencing technologies have revolutionized our understanding of the heterogeneity and dynamics of cells and tissues. However, single-cell data analysis faces challenges such as high dimensionality, sparsity, noise, and limited ground truth. In this 3rd installment in the Open Problems in Single-Cell Analysis competitions at NeurIPS, we challenge competitors to develop algorithms capable of predicting single-cell perturbation response across experimental conditions and cell types. We will provide a new benchmark dataset of human peripheral blood cells under chemical perturbations, which simulate drug discovery experiments. The objective is to develop methods that can generalize to unseen perturbations and cell types to enable scientists to overcome the practical and economic limitations of single-cell perturbation studies. The goal of this competition is to leverage advances in representation learning (in particular, self-supervised, multi-view, and transfer learning) to unlock new capabilities bridging data science, machine learning, and computational biology. We hope this effort will continue to foster collaboration between the computational biology and machine learning communities to advance the development of algorithms for biomedical data.
Competition: TDC 2023 (LLM Edition): The Trojan Detection Challenge Sat 16 Dec 01:30 p.m.
The Trojan Detection Challenge (LLM Edition) aims to advance the understanding and development of methods for detecting hidden functionality in large language models (LLMs). The competition features two main tracks: the Trojan Detection Track and the Red Teaming Track. In the Trojan Detection Track, participants are given a large language model containing thousands of trojans and tasked with discovering the triggers for these trojans. In the Red Teaming Track, participants are challenged to elicit specific undesirable behaviors from a large language model fine-tuned to avoid those behaviors. TDC 2023 will include Base Model and Large Model subtracks to enable broader participation, and established trojan detection and red teaming baselines will be provided as a starting point. By uniting trojan detection and red teaming, TDC 2023 aims to foster collaboration between these communities to promote research on hidden functionality in LLMs and enhance the robustness and security of AI systems.