Call for Participation
HCAI@NeurIPS2022
Human Centered AI workshop at NeurIPS 2022, the 36th Conference on Neural Information Processing Systems
A VIRTUAL workshop
December 9, 2022
Workshop: https://hcai-at-neurips.github.io/site/
NeurIPS 2022: https://neurips.cc/Conferences/2022
Submission deadline: September 22, 2022
Many institutions, researchers, and thought leaders have recently promoted the notion that AI systems should be human-centered. Although there is no consensus definition of human-centered AI, it commonly refers to a set of principles holding that AI systems ought to:
(a) be reliable, safe, and trustworthy;
(b) be relatively free from bias, neither promoting nor reinforcing existing structural inequalities;
(c) empower people by supporting their creativity; and
(d) be fit for purpose, usable by specific humans or groups of humans to meet human needs and support their self-efficacy.
Following a successful workshop in 2021, our 2022 virtual workshop aims to bring together people across the different communities that have a stake in HCAI. Broadly, these communities encompass researchers and practitioners working across AI, machine learning, and human-computer interaction (HCI). However, many disconnected sub-communities work across various important topics in HCAI, including:
- human-centered explainable AI (XAI),
- AI fairness,
- human-centered data science (HCDS),
- human-centered machine learning,
- computational creativity,
- human-AI co-creation.
Building on the 2021 workshop, we will explore topics such as:
- Processes, principles, and technologies to make AI systems more human-centered.
- Experimental design and data collection to strengthen HCAI studies.
- Explanations (XAI) that serve the needs of diverse end-users.
- Human-AI frameworks for analyzing, designing, and evaluating HCAI systems.
- Collaboration and (co-)creativity in HCAI systems.
- Emergent questions in ethics and fairness.
THEMES
Submissions to the workshop may address one or more of the following themes – or other relevant themes of interest:
- Theoretical frameworks, disciplines and disciplinarity.
How we approach AI and data science depends on the “lenses” that we bring, grounded in theory and in practice. Through which lenses do you approach this complex domain?
- Experiences and cases with AI systems.
Theories suggest studies and experience reports. Studies and experience reports inform theories. What cases or experiences of human-AI interactions can you contribute to our inter-disciplinary knowledge and discussion?
- Design frameworks for human initiative and AI initiative.
Scholars have debated the question of who should have initiative or control – the human or the AI – for over 70 years. What forms of discrete or shared initiative are possible now, and how can we include these possibilities in our systems?
- Experiences and cases with human-AI collaboration.
Design frameworks can inform applications. Experiences with applications can challenge frameworks, or lead to new frameworks. What cases or experiences of human-AI collaborations can you contribute to our inter-disciplinary knowledge and discussion?
- Fairness and bias.
Machine learning-based decision-making systems have the potential to replicate or even exacerbate social inequities and discrimination. As a result, there has been a surge of recent work on developing machine learning algorithms with fairness constraints or guarantees. However, for these tools to have positive real-world impact, their design and implementation should be informed by a clear understanding of human behavior and real needs. What is the interplay between algorithmic fairness and HCI?
- Privacy.
In many important machine learning tasks – e.g. those related to healthcare – there is much to be gained from training on personal information, but we must take care to respect individuals’ privacy appropriately. In this workshop, we are particularly interested in understanding specific use cases and considering costs and benefits to individuals and society of making use of private data.
- Transparency, explainability, interpretability, and trust.
We are interested in understanding what specific types of explainability or interpretability are helpful to whom in concrete settings, and in exploring the tradeoffs that are inevitably faced.
- User research.
What do we need to know in order to create or enhance an AI-based system? Our engineering heritage suggests that we seek user needs and resolve user pain points. How does our user research for these concepts change with AI systems? Are there other user research goals that are now possible with more sophisticated AI resources and implementations?
- Accountability.
When people engineer (or create) an AI system and its data, how do we hold them and ourselves accountable for design decisions and outcomes?
- Automation of AI.
It is tempting to apply AI to the creation of AI itself, in the form of automated AI. Is this a credible approach? Does human discernment play a role in creating AI systems? Is this a necessary role?
- Evaluation.
What are the appropriate measurement concepts and resulting metrics to assess our AI systems? How do we balance among efficiency, explainability, understandability, user satisfaction, and user hedonics?
- Governance.
Consequential machine learning systems impact the lives of millions of people in areas such as criminal justice, healthcare, education, credit scoring, and hiring. Key concepts in the governance of such systems include algorithmic discrimination, transparency, veracity, explainability, and the preservation of privacy. What is the role of HCI in relation to the governance of such systems?
- Problematizing data.
Data initially seem to be simple and “objective.” However, a growing body of evidence shows the often-hidden role of humans in shaping the data in AI. Should we design our systems to strengthen human engagement with data, or to reduce human impact on data?
- Qualitative data in data science.
Quantitative data analyses may be powerful, but they are often decontextualized and potentially shallow. Qualitative data analyses may be insightful, but they are often limited to a narrow sample. How can we combine the strengths of these two approaches?
- Values and ethics of AI.
Values and ethics are necessarily entangled with localized, situated, and culturally-informed human perspectives. What are useful frameworks for a comparative analysis of values and ethics in AI?
CODE OF CONDUCT
The workshop will be governed by the NeurIPS Code of Conduct, https://neurips.cc/FAQ/EthicsFairnessInclusivityandCodeofConduct.
Submissions should also be consistent with the Code of Conduct.
SUBMISSIONS
Submissions may address one or more of the themes – or other relevant themes of interest – in 1-2 page position papers. All submissions will be reviewed by the workshop’s Program Committee. Authors of the highest-rated submissions will be invited to participate in one of the panel discussions with our invited speakers.
NeurIPS does not publish workshop papers. We plan to create a public website for the workshop, where you will be able to list your accepted contribution in one of the following ways:
(1) title+abstract+authors;
(2) the contents of item #1 plus PDF of your submission, hosted locally;
(3) the contents of item #1 plus a link to your PDF at a website of your choosing.
NeurIPS provides resources for formatting full papers at https://neurips.cc/Conferences/2022/PaperInformation/StyleFiles. In practice, we will consider any reasonable approximation of those formats.
Where to submit:
https://openreview.net/group?id=NeurIPS.cc/2022/Workshop/HCAI
Last year’s workshop:
https://sites.google.com/view/hcai-human-centered-ai-neurips/home
IMPORTANT DATES
Submission: 2022-09-22 AoE (Anywhere on Earth)
Notification: 2022-10-15
CONTACT
Questions and Comments: michael_muller@us.ibm.com
Michael Muller, PhD, Senior Research Scientist, IBM Research, Cambridge MA USA