Machine learning is improving quickly, and computers will soon take over many everyday tasks. Yet, according to a panel of experts, machine learning may only work well in particular circumstances, such as an airline autopilot, where the environment is well understood.
Have you ever wondered about the future interaction between humans and machines? Can you answer these 4 questions?
- Where do humans and computers think alike and where are they different?
- What are their strengths and weaknesses?
- How can computers help humans — and humans help computers?
- How can we make computers more human-like? Should we?
During a recent University of Melbourne public lecture held in Lab 14, Carlton Connect, these and other thought-provoking questions were posed as part of a one-off symposium featuring six expert panellists from Australia and overseas, representing an array of specialist fields including computer science, robotics, psychology, economics and neurobiology.
With just 7 minutes to present their perspective on what the future holds for humans and artificial intelligence machines, the panel brought a wide range of ideas to the table, under the watchful eye of moderator Professor Peter Bossaerts from the Department of Finance, Faculty of Business & Economics.
Here are some of the compelling takeaways from the night:
1st speaker: Baroness Susan Greenfield
- What is special about human beings that does not exist in artificial intelligence devices? Neural connections are carefully cultivated through an individual's experiences, and highly personalised meaning accrues across many different experiences. These become a 'brain-soul connection'.
- Humans have a consciousness that computers do not have. In addition, emotions are an integral and crucial part of human decision-making, in contrast to computers.
2nd speaker: Ben Rubinstein
- Should we trust machines? How do we address privacy issues with machines?
- Machine learning is based on predicting the future from the past. But does algorithmic accuracy equate to fairness?
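The accuracy-versus-fairness tension can be made concrete with a toy sketch (my own illustration with made-up numbers, not an example from the talk): a classifier can score identically on overall accuracy for two groups while making very different kinds of errors for each.

```python
# Hypothetical (prediction, actual) outcome pairs for two groups.
# The numbers are invented to make the point: both groups see the
# same overall accuracy, but group B suffers far more false positives.
group_a = [(1, 1)] * 40 + [(0, 0)] * 40 + [(1, 0)] * 5 + [(0, 1)] * 15
group_b = [(1, 1)] * 40 + [(0, 0)] * 40 + [(1, 0)] * 15 + [(0, 1)] * 5

def accuracy(pairs):
    # Fraction of cases where the prediction matched the actual outcome.
    return sum(pred == actual for pred, actual in pairs) / len(pairs)

def false_positive_rate(pairs):
    # Among true negatives, how often did the model wrongly predict 1?
    negatives = [(p, a) for p, a in pairs if a == 0]
    return sum(p == 1 for p, _ in negatives) / len(negatives)

print(accuracy(group_a), accuracy(group_b))      # 0.8 and 0.8
print(false_positive_rate(group_a))              # 5/45  ~ 0.11
print(false_positive_rate(group_b))              # 15/55 ~ 0.27
```

Judged by accuracy alone the model treats both groups the same; judged by false-positive rate it does not, which is one way accuracy can fail to equate to fairness.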
3rd speaker: Prof Chris Manzie
- In the case of automated transportation, research shows that taking some responsibility away from a driver can be detrimental. Are we still paying the same level of attention when we engage cruise control and/or lane keeping intelligence?
- There are no fully fail-safe AI systems yet. Scientists are trying to prevent misuse of systems.
- Perhaps we need to think of AI as a cooperative integration, not a replacement system.
"Humans are not good at calculations, but they are great at identifying (diagnosing) problems and possess the creativity to come up with imaginative solutions. For machine learning, the answer has to be somewhere in the data; humans can extrapolate beyond past data." – Professor Peter Bossaerts
4th speaker: Dr Carsten Murawski
- Humans compute too (even when making simple choices), so the theory of computation applies to humans as well.
- Fully rational decision-making is computationally demanding: even an everyday task such as grocery shopping could take a computer thousands of hours to evaluate exhaustively before reaching a decision.
- We need to enhance the dialogue between the two disciplines of computer science and neuroscience.
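Why everyday choices explode computationally can be sketched with a toy knapsack-style example (my own illustration, not from the talk): picking the best basket of groceries under a budget by brute force means examining every subset of items, and the number of subsets doubles with each item added (2**n for n items).

```python
from itertools import combinations

def best_basket(prices, values, budget):
    """Brute-force search over all 2**n possible baskets.

    Returns the highest total value achievable within the budget,
    together with the indices of the chosen items.
    """
    items = range(len(prices))
    best_value, best_subset = 0, ()
    for r in range(len(prices) + 1):
        for subset in combinations(items, r):  # enumerates all 2**n subsets
            cost = sum(prices[i] for i in subset)
            value = sum(values[i] for i in subset)
            if cost <= budget and value > best_value:
                best_value, best_subset = value, subset
    return best_value, best_subset

# Four hypothetical items: (price, value) pairs chosen for illustration.
prices = [4, 3, 2, 5]
values = [7, 4, 3, 8]
print(best_basket(prices, values, budget=7))  # → (11, (0, 1))

# With 4 items there are only 2**4 = 16 baskets; with a 50-item
# shopping list there are 2**50 ≈ 10**15, which is why naive
# "fully rational" choice quickly becomes infeasible.
```

Dynamic-programming and heuristic methods tame specific cases, but the underlying point stands: the search space of even mundane decisions grows exponentially.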
5th speaker: Associate Prof John Thangarajah
- Today's hardware can handle a lot. The computing power of a modern smartphone would once have filled a whole room.
- Before we aim for an Einstein of AI systems, we should focus on building an AI machine that processes correctly and develops to the level of a 15-month-old child.
- AI is extending and enhancing human capabilities.
6th speaker: Associate Prof Denny Oetomo
- You can't replicate experiential learning.
- We should consider human/robot collaboration. The combination will then be better than the sum of its parts.
- In the health sector, robots are giving critical mobility to patients. Robots are not taking jobs; they are helping improve the quality of jobs and reduce inefficiencies.