A Field Experiment on AI-Assisted Physicians

AI assistants—software agents that perform tasks or services for individuals—are among the most promising AI applications. However, little is known about the adoption of AI assistants by service providers (e.g., physicians) in real-world healthcare settings. Specifically, we investigate the impact of AI smartness—whether the AI assistant is powered by machine-learning intelligence—and AI transparency—whether physicians are informed of the assistant feature. We collaborate with a leading healthcare platform to run a field experiment in which we compare physicians’ adoption behavior, i.e., adoption rate and adoption timing, of smart versus automated AI assistants under transparent and non-transparent conditions. We find that AI smartness increases the adoption rate and accelerates adoption, whereas AI transparency only accelerates adoption. Moreover, the effect of transparency on the adoption rate is contingent on the assistant’s smartness: AI transparency increases the adoption rate only when the AI assistant is not equipped with smart algorithms, and fails to do so when the assistant is smart. Our study can guide platforms in designing their AI strategies. In particular, platforms should develop and apply smart AI algorithms to aid physicians, and should keep physicians informed of such development and application, especially when the smartness level of the algorithms is low.