Artificial Intelligence Vision

Since our founding, innovation has been central to Amgen and our mission to serve patients. By further embracing that innovative mindset, we believe that Amgen can safely and ethically harness the exciting potential of emerging artificial intelligence (AI) capabilities to: 

  • Improve patient care by reducing the time it takes to get new therapies to market, reaching more patients with existing therapeutics, and better understanding our patients' evolving needs.
  • Redesign ways of working to increase productivity and equip our people with future-proof skills that will help them grow both professionally and personally.

To fulfill our vision for the adoption of AI systems and tools at Amgen, we will strive to provide AI Assurance and Security Assurance.

AI Assurance is a process applied at all stages of the AI development lifecycle to assess whether an AI tool or system is (1) producing outcomes that are valid, verified, data-driven, trustworthy, and explainable to a layperson, (2) ethical in the context of its deployment, (3) unbiased in its learning, and (4) fair to its users.

Security Assurance is a measure of confidence that the security features, practices, procedures, and architecture of an information system, including AI systems and tools, follow an organization's security policies and best practices.

Amgen has embraced the National Institute of Standards and Technology (NIST) Trustworthy AI framework. We have established an AI Governance Council sponsored by our Chief Compliance Officer and Chief Information Officer, which is composed of individuals from various functions including quality, law, safety, global security, information security, regulatory affairs, privacy, compliance, human resources, sourcing, and corporate audit. This cross-functional leadership forum provides guidance and advice on the development, deployment, and use of AI systems at Amgen, consistent with the NIST Trustworthy AI framework and emerging laws and regulations.

Specifically, Amgen is committed to adhering to the seven principles of the NIST Trustworthy AI framework, as represented below:

Safe

  • Responsible System Development Lifecycle
  • Responsible Decisions for System Use
  • Responsible Use
  • Explanation and Documentation

Secure and Resilient

  • Safeguards to Protect Intellectual Property
  • Safeguards to Protect against Data Exfiltration
  • Safeguards to Protect against Data Poisoning
  • Safeguards to Protect the Confidentiality, Integrity and Availability of Data and Information
  • Policies, Procedures, and Safeguards Designed to Provide Resilience, Respond to Unexpected Events, and Resume Operations within a Reasonable Time

Explainable and Interpretable

  • Ability to Explain How Outputs Are Being Derived
  • Ability to Make Sense of and Contextualize the System Output

Privacy-Enhanced

  • Anonymity, Confidentiality, and Controls to Guide Choices for AI Design
  • Use of Privacy Enhanced Technologies, De-identification and Aggregation Methods to Mitigate Risk

Fair with Harmful Bias Managed

  • Promote Fairness and Reduce Potential for Discrimination or Bias in AI Models and Outputs
  • Policies, Procedures and Safeguards Intended to Address Systemic, Computational, Statistical, and/or Human Cognitive Bias

Accountable and Transparent

  • Accountability for the Outcomes of AI Systems
  • Users Interacting with an AI System Have Access to Sufficient Information about the AI System to Understand It, without Compromising the Security of the AI System

Valid and Reliable

  • Confirmation, through Objective Evidence and Reasonable Measures, that the Intended Use or Application of the AI System has been Met
  • Policies, Procedures, and Safeguards to Measure the Ability of an AI System to Perform as Intended and without Disruption for a Specified Time Interval under Given Conditions

At Amgen, our goal is to Build Trustworthy AI and Use AI Responsibly. We will continue to take a holistic approach that aims to balance the promise of AI systems and tools with their inherent and evolving risks. To accomplish this, our AI Governance exists to support:

  • Use Case Governance – to drive compliance with our NIST Trustworthy AI principles stated above
  • Data and Content Governance – to safeguard the quality of data used within AI systems
  • Platform and Model Governance – to foster appropriate model and technology usage of AI systems
  • Platform Security and Governance – to implement policies, procedures, and tools designed to provide for the security and protection of AI systems

Please email us with any general questions.