Member story

"The best way to find out if you can trust someone is to trust them." -Hernest Hemingway. Picture this: you're lying on the operating table. Before you fall asleep, you say a prayer. You're not sure what will happen when you wake up. But you try to focus on the positive. When the anesthesiologist delivers the final dose of anesthesia that will put you to sleep, you have time to think about the worst. When anxiety builds up in you, you pull yourself out of that train of thought. You try to focus on that one thing. In that moment, you will trust the doctors and nurses in the operating room to save you. What if there is a robot in the room? Will you trust that robot as much as you trust a member of your surgical team? There are two kinds of trust in psychology: cognitive trust and affective trust. Cognitive trust is based on our knowledge and evidence of those we trust. Affective trust stems from emotional connections with others. Affective trust can develop between the surgeon and his/her AI System. However, between the patient and the AI System, cognitive trust is at the core of this trust relationship. To develop cognitive trust between the patient, the doctor, and the robot, people must "understand" the workings of the robot. "Valid informed consent is based on disclosure of relevant information to a competent patient who is allowed to make a voluntary choice." - Appelbaum. Informed consent assumes that the patient is capable of: Understanding the "black box" of AI systems such as neural networks. Freedom from fear, prejudice, complacency, and confusion with AI systems. Understanding the uses, mistakes, and potential consequences of using an AI system. In reality, a patient can rarely understand all three at once to make an informed choice. AI systems that implement neural networks can feel like a "black box," even to the technologists who developed it and the surgeons who use it. 
Beyond the inputs and outputs, it is nearly impossible to understand how the system arrives at its decisions, and even the most knowledgeable surgeon would have difficulty explaining this to a patient.

Surgeons are integral to helping patients overcome the emotional barriers that stand in the way of informed consent. A surgeon who understands how the AI system works can explain the role it will play in the procedure, lay out the benefits and risks of using it, and describe the surgeon's own responsibility to review the preoperative plan and guide the system to proper use during surgery. The surgeon can also draw on statistics, such as success rates, to establish trust with the patient.

But the surgeon can only do so much; the hospital also has a role in securing informed consent. Hospitals can put procedures in place governing the AI system's use before, during, and after surgery to limit its risks. They can implement training, testing, and best practices to ensure that the entire medical team understands how the system works, and they can assign responsibility, where appropriate, to each member of the healthcare team so that the system's use is monitored and controlled. Guidelines and procedures must be robust enough that errors can be evaluated and complications mitigated.

This complex problem of informed patient consent currently limits the use of AI systems in medical settings, even though recent advances promise less invasive procedures, improved accuracy, and shorter recovery times. The solution to the ethical dilemma lies with everyone involved: to maximize the benefits of AI systems in surgery while preserving the patient's right to informed consent, the entire healthcare field needs to take part.
The era of artificial intelligence brings many medical innovations. As a society, our biggest challenge to mass adoption is solving this ethical dilemma, which lies at the heart of the deeper issue of "trust" between humans and AI systems. Only with broad understanding and the support of the entire medical community, and of society as a whole, can we begin to implement procedures, set standards, and mitigate the consequences of using these AI systems. In turn, we can unlock their true potential to help us manage our health in ways we never thought possible.