At the Canberra Cyber Hub, we operate at the intersection of emerging technologies and our everyday reality, exploring how we can best support organisations as the future unfolds. Especially now, as space exploration and our tech landscape accelerate, looking to the future means preparing for the ramping up of autonomous systems.
But how will that affect us as humans?
We recently sat down with Psychology Researcher and Deputy Director of the UC Collaborative Robotics Lab, Associate Professor Janie Busby Grant, to discuss the friction at the interface of robot and human systems. No matter your field, the evolution of robots and autonomous systems in the next few decades will have profound impacts on human psychology, even if we don’t often think about it yet.
So, the question is: how do we fit into the equation?
How do Zero Trust frameworks apply?
In the world of advanced tech, the general consensus leans towards an increasing adoption of Zero Trust Architecture. When it comes to robotics, however, a seemingly paradoxical situation arises: how can effective robotic systems be built if these autonomous machines can't be truly trusted? Janie offers a reframed perspective on this paradox: before anything else, we must understand what 'trusting a system' actually means.
"I would say we're focused on understanding trust... how do we know when to trust a system? When you have an embodied system like a robot, how do you know whether that system is secure enough? Currently, there’s no indicators of that, no way for a user to tell whether it is a trustworthy system, and it’s something we’re working on."
The "Coffee Problem" and the Engineering Hurdle
We often see sleek marketing videos of robots performing complex tasks, but the reality is humbler. Janie describes the current state as solving the "A to B" problem:
“Can the robot receive an instruction like ‘Get me a cup of coffee from the café’ and competently achieve all that is implied by that question – from understanding the implied information in the query, to navigating stairs, lifts and weather, waiting in a line with humans, and answering seemingly simple questions - such as coffee type, cup size and milk type. Then it still needs to organise payment, and that’s not to mention successful return in a reasonable time without spilling the drink!”
The challenge is twofold:
- The Engineering Question: Perceiving the environment, and physically navigating and manipulating objects, in messy, unconstrained contexts, as well as selecting behaviour under uncertainty.
- The Behavioural Question: How can you train a system to interact with chaotic, unpredictable humans?
“What's interesting to think about, is how will the people around a robot react (will they see it as a ‘being’ and give it place to stand in line? Will they ask it questions and deliberate with it as a person?). Do people actually want to interact with these machines and if so, will they interact with them like they do with a person? A pet? A machine? How do we ensure the robots successfully complete their roles when those roles inherently rely on interacting with humans?"
This is especially relevant to areas like the space industry, where how autonomous systems work alongside humans matters enormously for the ongoing development of the sector and for human endeavour more broadly. Building autonomous systems in which robots can handle the context of their environments presents some interesting engineering challenges.
The Need for Interdisciplinary Teams
One of the most striking insights from the work being undertaken at the Collaborative Robotics Lab is the "Valley of Death" between ideation and successful deployment. Crossing it isn't just an engineering challenge. Psychologists, social scientists, user advocates and 'people who understand people' are all integral to building systems that meet market needs without unforeseen complications arising at the interface of interaction.
One recent analysis of 5.1 million job ads by Associate Professor Busby Grant and Associate Professor Amanda George found that psychology skills actually map uniquely well to the needs of modern technical, IT and engineering workforce environments.
"For example, many jobs need a really good understanding of how human behaviour interacts with systems ... as well as a really good understanding of ethics, research, and statistics."
To build a robot that works in a hospital, or alongside humans travelling to distant moons and planets, you need someone who understands what is meaningful to people and how a piece of technology can enhance a specific user group’s life.
Where To Next?
The next 10 to 20 years will see a shift from token user groups to deep co-design. We are moving toward a world where robots aren't just tools, but integrated team members. As Janie puts it:
"You're seeing a lot of really good design now actually incorporating users all the way along... talking to people who understand people. If you've got a system that is adopted by people and you see that ongoing use over a long period, that is the success. That's what you're aiming for."
For Canberra, this brings immense opportunities. The Canberra ecosystem has a unique intersection of cyber and space expertise; bringing human cognition into the equation when exploring human-robot systems provides additional leverage. By contributing to ongoing research, we can ensure that in a couple of decades' time we are already prepared to work with these autonomous systems, and at the forefront of this transformation.