I am an associate professor in the Department of Information Systems at the University of Maryland Baltimore County (UMBC), a faculty member in the Human Centered Computing graduate program, and the director of the Bodies in Motion Lab.
My research contributes to the areas of human-computer interaction (HCI), computer-supported cooperative work (CSCW), and health informatics. I investigate how collaboration and coordination are achieved and how they can be better supported, primarily with regard to information sharing and decision making in healthcare contexts. In turn, I develop interactive systems to investigate the effects of new mechanisms for collaboratively sensing, presenting, and interacting with information. For the past four years, I have been addressing this problem space in two fundamental streams of research: (1) imaging interaction in surgery and (2) patient empowerment.
Typically I leverage new commercially available interaction devices such as the Microsoft Kinect, Leap Motion, or Myo, and new image capture and display technologies such as Google Glass, the HoloLens, or immersive CAVEs. I have employed a variety of methods in my work, but primarily I conduct participant observations and interviews throughout a design research process.
Prior to my position at UMBC, I was an ERCIM postdoctoral researcher at Mobile Life in Sweden (2009-2010), held a joint postdoctoral fellowship at Microsoft Research Cambridge and Corpus Christi College at the University of Cambridge (2010-2012), and then served as a research fellow at Harvard Medical School and the Cambridge Health Alliance (2012-2013). I received a PhD in Information Sciences and Technology from Penn State, an MS in Communication from Cornell, and a BS in Psychology from Virginia Tech.
Much of my recent work investigates how images and visualizations play a part in medical collaboration and care, particularly with regard to how both professionals and laypeople perceive imaging information through interaction and manipulation. In turn, I develop image interaction systems in order to investigate the effects of new mechanisms for sensing, presenting, and interacting with images. My work on imaging interaction in surgery has shown that collaborating surgeons prefer to dynamically manipulate images together, such as advancing through CT slices, while concurrently discussing them, and that surgeons create a view of the work area by manipulating camera angles, zooming in on an image, or narrating video captured with a head-mounted camera in order to guide a collaborator to an area of interest for surgical decision making. To date, the predominant paradigm for touchless (gesture or voice) image interaction has supported the practices of a single surgeon manipulating images at the patient tableside during surgical procedures. With my NSF funding, I am developing and testing a collaborative touchless imaging interaction system for collocated laparoscopic surgical teams. The aim of that project is to investigate how collocated communication practices change with the ability to point to and annotate live laparoscopic video during surgery. This is work I began with my colleagues at Microsoft Research and continued with my surgeon collaborators at Anne Arundel Medical Center, the University of California San Francisco, and SUNY Buffalo.

Video explaining how our first Kinect system worked (2013)
My interest in the perception and use of medical information has also led me to take a critical and alternative stance toward the application of movement tracking systems in healthcare. I have been investigating the benefits of presenting people who have movement impairments with sensor-based reflections of their movement. Our findings highlight how sensors can provide much-needed co-interpreted assessment of movement, but also how sensors can intrude on this process through clinician or sensor authority. This area of study emphasizes the need to integrate a patient's subjective assessment of motor impairment with objective motor sensing data in order to provide a more complete view of the patient's illness. My analyses focus on how this changes the discussion of the person's body and health, with an eye toward quality of life, empowerment, and whole-body decision making.

Mentis, H., Shewbridge, R., Powell, S., Armstrong, M., Fishman, P., & Shulman, L. (2016). Co-Interpreting Movement With Sensors: Assessing Parkinson's Patients' Deep Brain Stimulation Programming. Human-Computer Interaction, 31(3-4), 227-260.
I also investigate whether movement sensors align with the lived experience of movement or with the professional vision of movement assessment. Much of this work has occurred outside of health contexts, including dance, museums, and gaming in the home. The lessons learned, though, provide insights on how to use, train, and appropriate movement sensors for specific human-centered contexts.

Mentis, H., Laaksolahti, J., & Höök, K. (2014). My Self and You: Tension in Bodily Sharing of Experience. ACM Transactions on Computer-Human Interaction, 21(4), article 20.