
We are looking to appoint an experienced and enthusiastic postdoctoral researcher to join our team. The successful candidate will contribute to the project on AI Ethics.

Professor Julian Savulescu is the Principal Investigator (PI) of the grant. The post is part of a larger project that aims to develop the theoretical basis and content for the procedure of Collective Reflective Equilibrium, and to establish ethical algorithms to address normative issues in medical ethics, advances in biotechnology, and artificial intelligence (AI) in medicine.

Geisinger’s new Department of Bioethics and Decision Sciences is recruiting bioethicists at all faculty ranks. Although faculty in the department pursue more traditional research in their respective fields of bioethics and decision sciences — both broadly construed — the department’s unique vision is to bring these fields together to collaborate on research and other activities at the intersection of their interests, especially on studies of judgment and decision-making in the domains of health, science, and innovation.

The Anesthesia Department at Stanford Healthcare is recruiting a two-year fellow to analyze the ethical and regulatory challenges of evaluating artificial intelligence and machine learning (AI/ML) tools before they are deployed at Stanford Healthcare. The postdoctoral fellow will work closely with Drs. Danton Char, Nigam Shah, and Michelle Mello both to evaluate proposed AI/ML deployments in clinical care and to develop an approach for rapid, systematic evaluation of AI/ML tools for ethical considerations.

Big data and artificial intelligence are growing more pervasive and are creating new, complex links between individuals and the many groups to which they might belong, including groups no one might have thought of as a “group” before. How should we think about questions of group privacy, discrimination, and group identity in this new world? Does it matter whether algorithms used in health care focus on identified groups that have been designated as protected classes, rather than more precisely (or amorphously) defined groups that may or may not align with some protected class boundaries?
