Research overview
The Humanising Machine Intelligence (HMI) Grand Challenge project is developing the foundational research needed to build democratically and culturally legitimate AI systems, knitting together insights from computer science, law, philosophy, political science, and sociology. HMI research centres on four themes: automating governance, personalisation, algorithmic ethics, and human-AI interaction.
Automating Governance
As data and AI are increasingly used by states and digital platforms to exercise power over us, what does it mean for that power to be exercised justly? How can we design socio-technical systems that enable legitimate AI, in which power is exercised only by those with standing to do so and is subject to standards of due process and accountability?
Personalisation
The most sophisticated AI systems in the world ensure that your every moment online is tailored to you: personalised media, news, ads, and prices. What are the consequences for democratic societies? Can we achieve serendipitous recommendations without manipulating users or unduly invading their privacy? Can we ensure that social media surfaces content that informs, edifies, and educates, rather than undermining public discourse?
Algorithmic Ethics
AI systems can increasingly make significant changes to the state of the world without intervening human influence. We need to design these systems to take our values into account. But which values? And how can we translate them into algorithmic form? What are the fundamental complexity constraints on algorithmic ethics? Can we design robotic systems that emulate compassion?
Human-AI Interaction
We fall into predictable errors when we interact with AI, and over time those interactions change us. What cognitive and other biases should designers of AI systems account for? How does the use of automated systems lead us to make faulty attributions of responsibility? And how do we avoid the risks associated with outsourcing morally significant decisions to AI systems?
Impact
Creating long-term social benefit is central to HMI's mission. The project collaborates with State and Federal Government, industry, and civil society organisations to provide policy input and to design and implement cutting-edge AI techniques. Partners include Services Australia, the US Defense Innovation Board, and a study committee of the National Academies of Sciences, Engineering, and Medicine on responsible computing research.
Intellectual Community
HMI is leading interdisciplinary collaboration in Australia and internationally, for example by co-chairing the 2021 AAAI/ACM Conference on AI, Ethics, and Society, and by launching AI: Law, Ethics, Algorithms, Politics, a new annual conference held in Australia with collaborators across the country. HMI is also spearheading a Philosophy, AI and Society consortium of complementary research groups at Oxford's Institute for Ethics in AI, Stanford's Institute for Human-Centered AI, Toronto's Schwartz Reisman Institute, Harvard's Safra Centre, and Princeton's Centre for Human Values.