I am a Ph.D. candidate at the School of Computing and Information Systems, University of Melbourne, supervised by A/Prof. Wafa Johal and Prof. Vassilis Kostakos. I am part of the Human-Computer Interaction Group and the Human-Robot Interaction Lab.


My Research

My research challenges the assumption that robots or AI systems are perfect. I study how mistakes and failures that occur during interaction or collaboration between humans and robots affect user perceptions—particularly how much trust users lose after such failures.

Several characteristics of a failure shape how it affects users, including:

  • The type and timing of the failure
  • Whether the failure occurs as part of a sequence of failures
  • The number and severity of the failures

It is crucial for robots to recover effectively from their failures. My research aims to design and evaluate interaction strategies that help robots maintain user trust and improve recovery after failure.


Research 1 — Detecting Robot Failures via Human Gaze

For robots, detecting and predicting failures as early as possible is vital to prevent potential damage or negative user experiences.
To achieve this, I studied users' non-verbal behaviours, especially gaze patterns, to identify cues that indicate when a failure is about to occur.

We found that:

  • User gaze behaviour can signal the onset of a robot failure.
  • Gaze patterns are related to the type of failure the robot makes.
  • A random forest classifier showed strong potential for detecting failures within a few seconds after they occur.

This study highlights how human gaze can serve as a real-time indicator for robot performance monitoring.
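To make the detection idea concrete, here is a minimal, self-contained sketch of ensemble-based failure detection from gaze features. Everything in it is assumed for illustration: the two features (fixation time on the robot, gaze-shift rate), the one-second windows, the synthetic data, and the use of decision stumps in place of the full random forest from the study.

```python
import random

random.seed(0)

# Hypothetical gaze features per one-second window:
#   [fixation_time_on_robot, gaze_shifts_per_second]
# Label 1 = window around a robot failure, 0 = normal operation.
# Purely synthetic data -- not the study's actual features or labels.
def make_window(failure):
    if failure:
        return [random.gauss(0.8, 0.1), random.gauss(3.0, 0.5)], 1
    return [random.gauss(0.3, 0.1), random.gauss(1.0, 0.5)], 0

data = [make_window(i % 2 == 0) for i in range(200)]

def train_stump(sample):
    # Pick a random feature, then choose the threshold that best splits
    # the bootstrap sample. For simplicity the stump always predicts
    # "failure" when the feature is ABOVE the threshold (both synthetic
    # features rise during failures).
    f = random.randrange(2)
    best = None
    for x, _ in sample:
        t = x[f]
        acc = sum((1 if xi[f] >= t else 0) == yi for xi, yi in sample) / len(sample)
        if best is None or acc > best[0]:
            best = (acc, f, t)
    return best[1], best[2]

def train_forest(data, n_trees=25):
    forest = []
    for _ in range(n_trees):
        sample = [random.choice(data) for _ in data]  # bootstrap resample
        forest.append(train_stump(sample))
    return forest

def predict(forest, x):
    # Majority vote across the ensemble.
    votes = sum(1 if x[f] >= t else 0 for f, t in forest)
    return 1 if votes * 2 >= len(forest) else 0

forest = train_forest(data)
acc = sum(predict(forest, x) == y for x, y in data) / len(data)
print(f"training accuracy: {acc:.2f}")
```

A real pipeline would use a full random forest (e.g. scikit-learn's `RandomForestClassifier`) over many more gaze features per window; the stump ensemble above only illustrates the bootstrap-and-vote structure.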


Research 2 — Trust Dynamics Across Multiple Robot Failures

Robot failures can occur multiple times during an interaction, and their effects on user trust may accumulate over time. These failures may differ in severity, or share the same severity while taking unfamiliar forms. After a failure, the robot must regain the user's trust, which requires some level of failure awareness. In this research, I also examined how much failure awareness a robot needs for different types of failures.

We found that:

  • User trust is influenced not only by the current failure but also by previous failures the user has experienced.
  • When different types of failures with similar severity occur, user trust and perceived robot intelligence change differently across the sequence of failures.
  • For less severe or barely noticeable failures, displaying awareness of the failure can reduce user trust; however, for more severe failures, showing awareness helps the robot regain user trust.

Research 3 — (Coming Soon)


Teaching Experience

  • Elements of Data Processing, Semester 2, 2025 — University of Melbourne

Education

  • Ph.D. (Ongoing) — School of Computing and Information Systems, University of Melbourne
  • M.Sc. in Applied Design — Sharif University of Technology, 2021-2023
  • B.Sc. in Mechanical Engineering — University of Tehran, 2017-2021