

The workshop will address themes including, but not limited to, the following:

New Roles, Labor, and Work Created by AI/Automation Technology in Healthcare

  • What are the new forms of invisible, unintended, or unanticipated work when AI/automation is introduced in a healthcare setting (e.g., new work processes required of clinicians as they work with and around the infrastructures necessary to deliver AI-enabled technology)?

  • How do medical cultural norms shift to accommodate new computational forms of expertise?
    How do these shifting norms affect collaborative practices?

  • Who is responsible for evaluating and updating the AI/automated system, as it continues to learn and evolve in a healthcare setting?

  • How should we determine which workflows require human intelligence and should not be replaced, even when replacing certain tasks is technologically feasible?

  • Should AI/automation technology be explicitly identified as a participant in the shared decision-making model (i.e., when patients and clinicians make medical decisions together) of patient-centered healthcare?


Trust in Light of Shifting Healthcare Workflows

  • What enables trust in an AI system?

  • Who is responsible for enabling the trust of patients and clinicians in an AI system, and how is that best accomplished?

  • How do perceived risk and perceived task difficulty play into the formation of trust in AI technologies by patients and clinicians?

  • What new work processes will be required of clinicians as they take practical steps to achieve trust in a system (e.g., develop their own workarounds, determine their own tolerance for acceptability of results, etc.)?

  • System design affects both trust in the system and humans' trust in each other when their work is mediated by the system. What types of research methods can expose these different dimensions of trust and how they interact?

  • How does the potential effect of AI technology on the agency and power of healthcare workers within organizational contexts affect trust in an AI system?

  • Given the importance of making human work visible to foster trust, what types of work by an AI system will be important to surface, how should they be surfaced, and when?

