I am an assistant professor in the Department of
Information Systems at the University of Maryland, Baltimore County (UMBC)
and the director of the Bodies in Motion Lab.
My research contributes to the areas of human-computer interaction (HCI), computer-supported
cooperative work (CSCW), and health informatics. I investigate how images and visualizations
figure in medical collaboration and care, particularly with regard to how both professionals
and laypeople perceive imaging information through interaction and manipulation. In turn, I develop image interaction systems in
order to investigate the effects of new mechanisms for sensing, presenting, and interacting with
images. Most recently I have been addressing these questions in two fundamental streams of research: (1) imaging
interaction in surgery and (2) movement analysis in healthcare. I typically leverage new commercially available
interaction devices such as the Microsoft Kinect, Leap Motion, or Myo and new image capture and display
technologies such as Google Glass, HoloLens, or immersive CAVEs. I have employed a variety of methods in my work, but primarily
I perform participant observations and interviews throughout a design research process.
Prior to my position at UMBC, I was an ERCIM
postdoctoral researcher at Mobile Life in Sweden (2009-2010),
held a joint postdoctoral fellowship at
Microsoft Research Cambridge
and Corpus Christi College
at the University of Cambridge (2010-2012),
and then went on to serve as a research fellow at
Harvard Medical School and
the Cambridge Health Alliance (2012-2013).
I received a PhD in Information Sciences and Technology
from Penn State, an MS in Communication
from Cornell, and a BS in Psychology.
Current Areas of Research
Imaging and Interaction in Surgery
This work has shown that collaborating surgeons prefer to manipulate images dynamically together,
such as advancing through CT slices while concurrently discussing them, and that surgeons will
construct a view of the work area by manipulating camera angles, zooming in on an image, or narrating
video captured with a head-mounted camera in order to guide their collaborator to an area of
interest for surgical decision making. To date, the predominant paradigm for touchless (gesture
or voice) image interaction has supported the practices of a single surgeon manipulating images
at the patient tableside during surgical procedures. With my NSF funding, I am developing and testing
a collaborative touchless imaging interaction system for collocated laparoscopic surgical teams.
The aim of that project is to investigate how collocated communication practices change with the
ability to point to and annotate live laparoscopic video during surgery.
This is work I began with my colleagues at Microsoft Research and continued with my surgeon
collaborators at Anne Arundel Medical Center, University of California San Francisco, and SUNY Buffalo.
Video explaining how our first Kinect system worked (2013)
Video from BBC Coverage (2013)
Feng, Y., Wong, C., Park, A., & Mentis, H.
(2016). Taxonomy of instructions given to residents in laparoscopic cholecystectomy. Surgical Endoscopy, 30
Mentis, H.M., Rahim, A., & Theodore, P. (2016). Crafting the Image in Surgical Telemedicine. Proceedings of the Conference on Computer Supported Cooperative Work (CSCW)
, San Francisco, CA (pp. 744-755), New York: ACM.
Mentis, H.M., O’Hara, K., Gonzalez, G., Sellen, A., Corish, R., Criminisi, A., Trivedi, R., & Theodore, P. (2015). Voice or Gesture in the Operating Room.
Extended Abstracts of the Conference on Human Factors in Computing Systems (CHI), Seoul, South Korea, (pp. 773-780), New York: ACM.
Mentis, H.M., Chellali, A., & Schwaitzberg, S. (2014). Learning to See the Body: Supporting Instructional Practices in Laparoscopic Surgical Procedures. Proceedings of the Conference on Human Factors in Computing Systems (CHI)
, Toronto, ON, Canada (pp. 2113-2122), New York: ACM.
O’Hara, K., Gonzalez, G., Penney, G., Sellen, A., Corish, R., Mentis, H.
, ... & Carrell, T. (2014). Interactional Order and Constructed Ways of Seeing with Touchless Imaging Systems in Surgery. Computer Supported Cooperative Work (CSCW), 23
O'Hara, K., Harper, R., Mentis, H.M.
, Sellen, A., & Taylor, A. (2013). On the naturalness of touchless: Putting the "interaction" back into NUI. Embodied Interaction. Spec. issue of ACM Transactions on Computer-Human Interaction (TOCHI), 20
Mentis, H.M., & Taylor, A. (2013). Imaging the body: Embodied vision in minimally invasive surgery. Proceedings of the Conference on Human Factors in Computing Systems (CHI)
, Paris, France (pp. 1479-1488), New York: ACM.
Mentis, H.M., O’Hara, K., Sellen, A., & Trivedi, R. (2012). Interaction proxemics and image use in neurosurgery
. Proceedings of the Conference on Human Factors in Computing Systems (CHI)
, Austin, Texas (pp. 927-936), New York: ACM.
Reflection on Movement in Health and Well-Being
Investigating the benefits of presenting people who have movement impairments or disorders with
sensor-based reflections of their movement. This area of study emphasizes the need to integrate a
patient’s subjective assessment of motor impairment with objective motor sensing data in order
to provide a more complete view of the patient’s illness. My analyses focus on how this changes
the discussion of the person's body and health, with an eye toward quality of life and empowerment.
Mentis, H.M., Shewbridge, R., Powell, S., Armstrong, M., Fishman, P., & Shulman, L. (2016). Co-Interpreting Movement With Sensors: Assessing Parkinson’s Patients’ Deep Brain Stimulation Programming. Human-Computer Interaction, 31
Carrington, P., Chang, K., Mentis, H.
, & Hurst, A. (2015). "But, I don't take steps": Examining the Inaccessibility of Fitness Trackers for Wheelchair Athletes. Proceedings of the 17th International ACM SIGACCESS Conference on Computers & Accessibility (ASSETS)
, Lisbon, Portugal (pp. 193-201), New York: ACM.
Mentis, H.M., Shewbridge, R., Powell, S., Fishman, P., & Shulman, L. (2015). Being seen: Co-Interpreting Parkinson’s Patient’s Movement Ability in Deep Brain Stimulation Programming. Proceedings of the Conference on Human Factors in Computing Systems (CHI)
, Seoul, South Korea (pp. 511-520), New York: ACM.
Shewbridge, R., Mentis, H.M.
, Pharr, C., Powell, S., Fishman, P., Armstrong, M., & Shulman, L. (2014). Getting in Sync: Health and Digital Literacy in Patient Deep Brain Stimulation Device Use.
Presented at the AMIA Workshop on Interactive Systems in Healthcare (WISH).
Morrison, C., Culmer, P., Mentis, H.
, & Pincus, T. (2014). Vision-based body tracking: turning Kinect into a clinical tool. Disability and Rehabilitation: Assistive Technology,
Seeing, Feeling, and Interpreting Movement
Investigating how movement sensors align with the experience of movement or
with the professional vision of movement assessment. Much of this work has occurred outside of
health contexts, including dance, museums, and gaming in the home. The lessons learned, though,
provide insights on how to use, train, and appropriate movement sensors for specific human-centered contexts.
Mentis, H.M., Laaksolahti, J., & Höök, K. (2014). My Self and You: Tension in Bodily Sharing of Experience. ACM Transactions on Computer-Human Interaction, 21
(4), article 20.
Mentis, H.M., & Johansson, C. (2013). Seeing movement qualities. Proceedings of the Conference on Human Factors in Computing Systems (CHI)
, Paris, France (pp. 3375-3384), New York: ACM.
Harper, R. & Mentis, H.M.
(2013). The mocking gaze: ‘You are a poor controller!’ Proceedings of the Conference on Computer Supported Cooperative Work (CSCW)
, San Antonio, Texas (pp. 167-180), New York: ACM.
Vongsathorn, L., O’Hara, K., & Mentis, H.M.
(2013). Bodily interaction in the dark. Proceedings of the Conference on Human Factors in Computing Systems (CHI)
, Paris, France (pp. 1275-1278), New York: ACM.
Fothergill, S., Mentis, H.M.
, Nowozin, S., & Kohli, P. (2012). Instructing people for training gestural interactive systems
. Proceedings of the Conference on Human Factors in Computing Systems (CHI)
, Austin, Texas (pp. 1737-1746), New York: ACM.