
Building Trust with NIST AI Risk Management Framework

April 7, 2025

NIST, the National Institute of Standards and Technology, has long been an invaluable resource for product marketers. From data protection to AI, it has provided definitive technical definitions and processes. But as its role evolves, should we examine its guidance more critically?

The importance of NIST for product marketers

One of the jobs of product marketers is to get the team to agree on consistent terminology. We must use language that resonates with our audiences when marketing our product.

For example, if you have a security product (including backup and recovery), then you’ll need an authoritative guide to explain the best way to set up a secure computing environment. NIST offers that guidance. It’s simply good data center hygiene practice.

The NIST Cybersecurity Framework (CSF) has a dedicated website full of non-prescriptive guidance for companies developing and implementing cybersecurity programs. If you are selling to that market, it makes sense to use the CSF as the arbiter of definitions and procedures.

I have personally used the NIST framework many times, aligning product features to the CSF guidelines. So I was thrilled to learn that NIST would build the same types of documents for AI environments.
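To make that alignment exercise concrete, here is a minimal sketch. The six functions (Govern, Identify, Protect, Detect, Respond, Recover) come from CSF 2.0; the product feature names are purely hypothetical.

```python
# Map hypothetical product features to the six NIST CSF 2.0 functions.
# The function names are from the CSF; the features are illustrative only.
CSF_FUNCTIONS = ["Govern", "Identify", "Protect", "Detect", "Respond", "Recover"]

feature_map = {
    "Policy templates": "Govern",
    "Asset inventory dashboard": "Identify",
    "Immutable backups": "Protect",
    "Anomaly alerts": "Detect",
    "Automated isolation playbooks": "Respond",
    "One-click restore": "Recover",
}

def coverage(features):
    """Return the CSF functions a feature set covers, in CSF order."""
    covered = set(features.values())
    return [fn for fn in CSF_FUNCTIONS if fn in covered]

print(coverage(feature_map))
```

An exercise like this quickly exposes gaps: any CSF function missing from the output is a story your product cannot yet tell.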

NIST Artificial Intelligence Risk Management Framework

The NIST Artificial Intelligence Risk Management Framework was created to provide “in-depth, voluntary guidance primarily intended for developers and users of AI systems” (techpolicy.press).

The framework provides guidance on setting up governance for AI systems. It explains how to assess risks at every level, emphasizes the importance of documentation, and defines seven characteristics of trustworthy AI: valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair, with harmful bias managed.
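As a sketch of how documentation and risk assessment might meet in practice, here is a toy risk register keyed by those seven trustworthiness characteristics. The characteristics are from NIST AI RMF 1.0; the example risk entries are hypothetical.

```python
# A toy risk register keyed by the AI RMF's seven trustworthiness
# characteristics (from NIST AI RMF 1.0). The risk entries are hypothetical.
TRUSTWORTHY = [
    "valid and reliable",
    "safe",
    "secure and resilient",
    "accountable and transparent",
    "explainable and interpretable",
    "privacy-enhanced",
    "fair, with harmful bias managed",
]

risk_register = {c: [] for c in TRUSTWORTHY}

def log_risk(register, characteristic, description):
    """Attach a documented risk to the characteristic it threatens."""
    if characteristic not in register:
        raise ValueError(f"Not an AI RMF characteristic: {characteristic}")
    register[characteristic].append(description)

log_risk(risk_register, "privacy-enhanced",
         "Training data may contain unredacted customer records.")
log_risk(risk_register, "fair, with harmful bias managed",
         "Model underperforms on non-English support tickets.")

# Characteristics with no documented risks still need review.
undocumented = [c for c in TRUSTWORTHY if not risk_register[c]]
print(len(undocumented))
```

Even a trivial structure like this reflects the framework's emphasis: every characteristic gets examined and every risk gets written down.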

The U.S. Artificial Intelligence Safety Institute (US AISI) was established within NIST in November 2023. The Institute is "tasked with developing the testing, evaluations, and guidelines that will help accelerate trustworthy AI innovation in the United States and around the world."

Interestingly, new requirements have emerged for scientists partnering with US AISI. According to a Wired report, these instructions require the removal of references to "AI safety," "responsible AI," and "AI fairness" from the expected skills of members.

But as AI continues to shape our world, it is crucial to ensure that the stories it tells are accurate, fair, and reflective of our diverse societies. NIST's frameworks provide a foundation, but it is up to us to build on them responsibly and to demand AI safety and fairness from the frameworks we rely on to tell the story of our products and services.

Those who tell the stories rule society

Plato is credited with saying, “those who tell the stories rule society.” If he was right, we must ensure that the stories told by technology are accurate, fair, and inclusive.  

At its root, AI simply processes data to tell a story. For example:

• Sequencing genomes  

• Detecting hackers before they can encrypt your data

• Finding new cancer treatments

• Animating your great-grandmother’s pictures

There have already been warnings from data scientists and linguists about AI safety and fairness. In the paper "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" by Emily M. Bender et al., the authors raised concerns about how large language models can harm the environment, narrow diversity and social views, and encode bias.

AI tells the story of our world, based, of course, on its training data. Responsible AI is how we ensure the entire world is represented as those stories are told.

Building a future with responsible AI

Product marketers and technical writers rely on frameworks like those from NIST to guide our work and ensure clarity for our audiences. With the rise of AI, the stakes are higher than ever.

By embracing responsible AI practices and using tools like the NIST frameworks, we can shape a future where technology tells the true story of our societies, one that promotes fairness, safety, and innovation.

