PBS Standards

Applying Standards to Generative AI Tools


The possibilities for creating content with emerging generative artificial intelligence tools are expanding rapidly. While new image generation tools (such as Dall-E, Midjourney, and Stable Diffusion), voice generators (such as Murf and Listnr), and large language models (such as GPT-4 and Bard) may seem to present unprecedented Standards & Practices issues, longstanding and established standards can continue to guide producers remarkably well.

Many of the issues presented by the latest generative AI services are in fact addressed by the existing PBS Editorial Standards. The standards rest on the core principle of independence, which requires that producers serve the public by never ceding their fundamental editorial role to an outside party, including an artificial intelligence. Producers considering the use of such tools are responsible for discussing with PBS Programming and Standards & Practices how doing so could further the educational public service mission (set forth on page 2 of the standards) and would align with the framework described below. (This assessment will depend in part on the genre of the content. While these tools may advance the educational mission of PBS if used thoughtfully in some genres, they cannot replace journalism and may not be appropriate for use in news programming, except of course to cover the capabilities of such technologies.)

(1) Accuracy

If using a generative AI tool, it is imperative not to assume that its output is truthful. Producers must always fact-check any AI output. The PBS Editorial Standards include accuracy as a core principle in Section III, stating: “Producers must exercise the highest level of care in verifying information” (page 5).

The standards put this principle into practice by requiring in Section IV.A.1 that producers “implement rigorous fact-checking procedures” and that producers “should be able to identify the source for each asserted fact and why that source is reliable” (page 6). Generative AI tools like those referenced above are not reliable sources of fact.

The standards also include a section on application to emerging technologies, which instructs producers in Section IV.B.6 to vet third-party content: “Every effort must be made to prevent the transmission of false information” (page 10). The output of AI tools is simply a new type of third-party content.

So long as we continue to apply our longstanding commitment to accuracy to these new tools, this is not a novel issue. It can be easy when experimenting with Dall-E, Midjourney, or Bard to take the output at face value and mistakenly assume that the tool’s confident assertions or depictions are accurate, so it is essential to remain vigilant and skeptical when using such services.

(2) Transparency

The PBS Editorial Standards include transparency as a core principle, requiring that producers “be open with the audience … about how the work was done” (page 5). The standards put this principle into practice in Section IV.A.11, on using labels and other disclosures to aid the audience’s understanding (page 8), and in Section IV.D.5, on clearly identifying re-creations and simulations (page 12). These longstanding provisions provide useful guidance for working with any content generated by emerging AI tools and services.

PBS is committed to transparency as “the proof, in effect, that the editorial principles outlined in these standards are living principles that inform a professional and ethical editorial process” (page 5). As the standards acknowledge, the specific methods of putting transparency into practice will vary with the circumstances, but the overall commitment must remain steadfast. Depending on the circumstances, disclosure could take the form of lower-thirds, top-of-show language, closing credits, or supplemental online materials. Ultimately, PBS strives to empower the audience to evaluate the credibility of content and “determine for themselves whether it is trustworthy” (page 5).

(3) Inclusiveness

If considering the use of a generative AI tool, it is also essential to prioritize the core principle of inclusiveness and to think critically about ways in which the tool may introduce stereotypes or prejudice into its output. Producers need to evaluate the prompts used and any content generated to guard against propagating or perpetuating bias. This work will benefit from implementing Section IV.A.5, which calls for leveraging a diversity of voices behind the camera to aid in evaluating any AI output (page 7).

Early studies have shown that current AI tools can exacerbate stereotypical and prejudicial depictions. For instance, MIT Technology Review has published research on bias and stereotyping in AI image models (such as depictions of Native Americans). The Centre for International Governance Innovation has shared findings on the ways in which generative AI tools perpetuate harmful gender stereotypes. Researchers at Trinity College Dublin have published findings on the ways in which generative AI models encode biases and negative stereotypes in their users.

The PBS Editorial Standards set forth inclusiveness as a core principle, requiring that content “reflect the views of people from different backgrounds.” Furthermore, content may need “supplemental material,” such as “links to credible, high-quality related resources that provide access to additional information” (page 6). When using a generative AI tool, it may therefore be necessary to add companion resources online that provide additional context or background about the process.

*                           *                           *

The PBS Editorial Standards provide useful and relevant guidance for thinking about the possibility of using emerging AI tools in content development. This guidance may be updated as the technology and media landscape evolve. The current standards – including the core principles of Editorial Independence, Accuracy, Transparency, and Inclusiveness – empower producers to think critically about any generative AI output, to preserve our fundamental commitment to telling human stories, and to vigilantly safeguard the public’s trust in PBS content.
