
AAM Ethical AI Compliance Guide

 An explanation of the AAM certification and framework.

Overview

AI is rapidly reshaping the dynamics of media and advertising. Audiences, advertisers, regulators, and even AI systems are asking the same questions: What content is created by humans? What is machine-generated? What safeguards are in place to ensure transparency, accountability, and trust?

The industry needs a clear, market-recognized signal of responsible AI practices to differentiate authentic, human-governed media and help unlock new revenue opportunities in the AI era.

AAM’s Ethical AI Certification is an independent assurance program that verifies adherence to industry best practices for responsible AI use. The certification provides third-party validation that a publisher has implemented appropriate governance, disclosure, human oversight, privacy protections, bias mitigation, and risk management for AI technologies. The framework aligns with leading industry guidelines from AAM, IAB, ISO, NIST, and others.

This guide explains the certification process, provides guidelines for publishers, and details how AAM’s Ethical AI Certification serves as a durable trust signal for the future of media.

The AI Trust Challenge

As the use of AI proliferates, the industry faces several key challenges:

Stats depicting industry AI challenges.


Certification Process

The AAM Ethical AI Certification is a voluntary program that verifies an organization’s adherence to the eight pillars of AAM’s Ethical AI Certification framework. This is a collaborative process between AAM’s media assurance team and the organization seeking certification. AAM may identify gaps in compliance and work with the organization to resolve them before issuing certification. Only companies that pass certification are publicly identified.

The certification process includes four main steps:

  • Launch Meeting: AAM hosts a kick-off call to explain the certification framework, review the certification checklist, and answer questions about the program and the process.
  • Publisher Checklist Completion: The publisher’s team completes the certification checklist and provides the requested supporting documentation. The AAM team is available to answer questions throughout this process. Upon completion of the certification checklist, the publisher submits the checklist and the supporting documentation to AAM.
  • AAM Review: AAM reviews the checklist and the supporting documentation. If there are gaps in program compliance or compliance documentation, AAM provides the publisher with a list of identified gaps and provides counsel on how best to remedy them. The publisher is given a reasonable amount of time to implement the remediation solution(s).
  • Certification: Upon successful completion of the program, AAM provides a comprehensive management report indicating compliance with the program, including recommendations for improvement, if necessary. An AAM certification letter states that the publisher has met all requirements as verified by AAM. Once certified, companies may use the AAM Ethical AI Certification seal and are featured on AAM's Assurance List of certified companies.

AAM Ethical AI Guidelines for Publishers

These guidelines support organizations seeking alignment with the AAM Ethical AI Certification Framework. They translate certification controls into best practices that promote responsible, transparent use of AI in media.

The guidance below is intended to promote consistency, accountability, and transparency in AI use while allowing flexibility across business models. It reflects the structure of the certification framework and is written for use by senior leaders, compliance teams, editorial and marketing operations, legal, and technology stakeholders. For additional resources and examples, visit AAM’s website.

I. Policies and Governance

Establish Clear AI Use Policies

Organizations develop, maintain, and document internal policies governing the use of AI technologies across the enterprise. These policies should clearly define:

  • Acceptable and prohibited uses of AI
  • Scope of AI applications (e.g., content creation, research, optimization)
  • Authorized AI solutions
  • Roles and responsibilities for oversight and compliance

Policies apply across departments and subsidiaries where AI is used in content creation, advertising, or consumer interaction.

Review and Update Policies Regularly

Organizations formally review AI use policies at least annually, and more frequently when there are material changes in technology, regulation, or business practices. Review processes should be documented and include senior-level oversight to ensure accountability.

 

II. Transparency and Disclosure

Publish General AI Use Disclosures

Organizations publicly disclose their general use of generative AI, including AI-generated text, images, video, audio, and synthetic or conversational interactions. These disclosures should:

  • Be publicly available without access restrictions
  • Describe AI use at a high level using clear language
  • Apply across owned and operated properties

The goal is to enable reasonable consumer understanding of where and how AI is used.

Provide Content-Level AI Disclosures When AI Materially Shapes Content

When AI materially shapes content in ways that could mislead consumers about authenticity, identity, or representation, organizations provide content-level disclosures. This includes, but is not limited to:

  • 100% synthetic copy
  • AI-generated images, video, or voices
  • Digital replicas or “digital twins” of real people
  • AI-powered chatbots or conversational agents used in advertising

Pre- and post-production uses such as research, editing, or technical enhancement that do not alter meaning do not require content-level disclosures.
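The disclosure triggers above can be sketched as a simple decision check. This is illustrative only; the flag names below are assumptions for the example, not framework terminology:

```python
from dataclasses import dataclass

# Hypothetical flags mirroring the content-level disclosure triggers listed
# above; these names are not part of the AAM framework.
@dataclass
class AIUsage:
    synthetic_copy: bool = False    # 100% AI-generated text
    synthetic_media: bool = False   # AI-generated images, video, or voices
    digital_twin: bool = False      # digital replica of a real person
    ad_chatbot: bool = False        # conversational agent used in advertising
    editing_only: bool = False      # research/editing that does not alter meaning

def needs_content_level_disclosure(usage: AIUsage) -> bool:
    """Return True when AI use materially shapes content per the triggers above."""
    material = (usage.synthetic_copy or usage.synthetic_media
                or usage.digital_twin or usage.ad_chatbot)
    # Pre- and post-production work alone (research, editing, enhancement)
    # is exempt from content-level disclosure.
    return material

print(needs_content_level_disclosure(AIUsage(editing_only=True)))  # False
print(needs_content_level_disclosure(AIUsage(digital_twin=True)))  # True
```

Note that a material trigger always wins: editing work performed on top of a digital twin or synthetic asset does not remove the disclosure obligation.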

Implement Disclosures Clearly and Accessibly

Organizations should implement content-level disclosures in ways that are accessible across formats:

  • Visual (text and symbols) disclosures remain visible during content exposure or appear at the first frame or screen
  • Audio disclosures are spoken clearly at a normal pace
  • Accessibility requirements are met, including alt text and visual equivalents where applicable

Disclosures should not be obscured by platform interface elements or design choices.

Support Machine-Readable Disclosure Where Feasible

Organizations are encouraged to embed machine-readable provenance or disclosure metadata (such as C2PA or similar technologies) in AI-generated assets. While not required, metadata supports verification and interoperability across agencies, platforms, and publishers.
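As a hedged illustration of the recommendation above, a minimal machine-readable disclosure record might look like the following. The JSON sidecar format and field names here are hypothetical, invented for demonstration; production deployments would embed a signed C2PA manifest in the asset itself rather than an ad-hoc file:

```python
import json

# Hypothetical sidecar format for illustration only; real machine-readable
# provenance would use a signed C2PA (or similar) manifest.
def build_disclosure_record(asset_id: str, generator: str, ai_generated: bool) -> str:
    record = {
        "asset_id": asset_id,          # assumed identifier for the asset
        "ai_generated": ai_generated,  # machine-readable disclosure flag
        "generator": generator,        # tool or model used (assumed field)
    }
    return json.dumps(record, sort_keys=True)

record = build_disclosure_record("img-001", "example-model", True)
print(record)
```

Even a simple record like this lets agencies, platforms, and auditors verify AI use programmatically without relying on visible labels, which is the interoperability benefit the framework points to.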

 

III. Rights and Permissions

Ensure Authorized Use of Data and Likenesses

Organizations confirm they have the appropriate rights, licenses, or consent to use data, content, voices, or likenesses in AI systems. Authorization does not replace or negate disclosure requirements. Documentation of permissions is maintained and available for review.

 

IV. Accountability and Human Oversight

Maintain Human Oversight of AI Systems

AI-assisted processes are subject to human oversight throughout research, production, review, and publication. Oversight responsibilities are assigned to individuals or teams with sufficient authority to enforce compliance and address issues.

Designate an AI Disclosure Lead

Organizations designate an AI Disclosure Lead (title may vary) to act as the central internal resource for AI policy interpretation, disclosure guidance, and compliance coordination. This role is clearly positioned within the organization and supported by appropriate authority and resources.

 

V. Bias and Fairness

Identify and Mitigate Bias in AI Systems

Organizations that build, train, or modify AI models implement processes to identify and mitigate bias. These processes are proportionate to the organization’s level of control over model development and documented as part of governance and risk management practices.

 

VI. Privacy and Data Protection

Protect Consumer Data in AI Applications

Where AI systems access or process consumer data, organizations comply with applicable privacy and data protection laws. Controls should address data access, retention, security, and third-party risk, consistent with existing privacy programs.

 

VII. Training and Education

Provide Ongoing AI Training

Organizations implement ongoing training programs that may cover:

  • General AI awareness for employees, including acceptable use and risks
  • Role-specific or proficiency training for individuals responsible for developing, deploying, or overseeing AI systems

The AI Disclosure Lead supports targeted training to teams involved in content creation, production, and distribution. Training execution is documented.

 

VIII. Risk Management and Adaptation

Integrate AI Into Enterprise Risk Management

Organizations incorporate AI risks into existing risk management frameworks. This includes:

  • Understanding applicable legal and regulatory requirements
  • Integrating trustworthy AI principles into policies and processes
  • Defining roles, responsibilities, and review cadence
  • Establishing escalation and contingency plans for high-risk AI or third-party failures

Risk management practices should evolve as AI technologies, regulations, and industry expectations change.


Completing the Ethical AI Certification

Publishers receive a comprehensive management report indicating performance against AAM’s Ethical AI framework, including any recommendations for improvement, if applicable. If issues are identified that preclude AAM from issuing an unqualified opinion, AAM provides a description of said issues and a reasonable timeline for the publisher to remedy the issues and meet the Ethical AI certification requirements.

Upon successful completion, AAM provides the Ethical AI Certification Seal, listing on AAM’s public Assurance List, an Ethical AI Certification Report to share with stakeholders, and a Management Report detailing AAM’s work and findings.


Recertification

AAM conducts an annual review for certified companies to maintain their certification.


Appendix: Ethical AI Framework

The framework is organized by section. Each entry below lists the control, the requirement it imposes, and the recommended documentation or deliverable that AAM reviews.

I. Policies and Governance

Development of AI Use Policies

Organizations have policies for AI usage.

AAM will review the organization’s AI Use Policies.

Regular Review and Updates

Organizations review AI policies annually and make necessary updates to address changes in usage, technology and risk. Companies may review the policies more frequently.

AAM will review the organization’s review and update procedures and ensure timely compliance.

II. Transparency and Disclosure

General AI Use Disclosure (1)

Organizations publish General AI Use disclosure(s) describing the use of generative AI for text, images, video, audio, and synthetic human interaction or influencers. The General AI Use disclosures are publicly available, without restricted access.

AAM will review the organization’s General AI Use disclosures and confirm open access to the disclosures.

Content Level AI Use Disclosures (1)

AI disclosures are not required for research, editing, post-production processes, and AI-assisted technical improvements that don't alter content meaning.

When AI use materially shapes content in ways that could mislead a reasonable consumer about authenticity, identity, or representation, organizations provide additional notification in, or adjacent to, content that was created.

Organizations provide Content Level disclosures of AI use in the following circumstances:

  • Synthetic copy: Text that is generated 100% through AI prompts with limited human refinement or editorial oversight.
  • Synthetic images: Images generated from AI prompts (text-to-image or image-to-image generation), regardless of subsequent human refinement, editing, or compositing, excluding obviously non-realistic content.
  • Synthetic video: Video generated from AI prompts (text-to-video, image-to-video, or video-to-video generation), regardless of subsequent human refinement, editing, or compositing, excluding obviously stylized or fantastical content.
  • Synthetic voices: AI-generated voice content that creates new speech or statements from living or deceased individuals (words they never actually spoke), even with authorization.
  • Digital twins: AI replicas of real people, living or deceased, used in any capacity (even with authorization).
  • AI chatbots or conversational agents in ads: When AI-powered personas engage directly with consumers in ways that simulate human interaction.

AAM will review the organization’s Content Level AI notifications and review sample AI generated content for the required disclosures in all applicable contexts.

 

Content Level AI Use Disclosures: Implementation (1)

It is recommended that organizations adhere to the following implementation guidelines for Content Level disclosures:

Visual (Display, Video, Social):

For text labels:

  • Language: Plain language using "AI-generated" terminology
  • Text size: Sufficiently large to be clearly readable

For visual indicators (watermarks, badges, icons):

  • Size: Sufficiently large to be clearly readable

All visual disclosure methods:

  • Persistence: Remain visible throughout content exposure or appear on first frame/screen
  • Placement: Positioned to avoid obstruction by platform UI elements

Audio (Radio, Podcast, Streaming):

  • Clarity: Spoken at normal pace in clear, intelligible voice

Accessibility Requirements:

  • All visual labels and indicators shall include alt text for screen readers. Audio disclosures should have visual equivalents when video is present.

AAM will review the organization’s Content Level AI notifications and review sample AI generated content for the required disclosures in all applicable contexts.
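The implementation guidelines above might be applied as in the following sketch. The function names and markup are illustrative assumptions, not AAM-mandated formats; they show one way to render a plain-language "AI-generated" text label and a visual badge that meets the alt-text accessibility guideline:

```python
# Illustrative sketch only: the markup below is an assumption, not a required
# format. It demonstrates plain "AI-generated" language for text labels and
# alt text for visual indicators, per the guidelines above.
def ai_text_label() -> str:
    # Plain language using "AI-generated" terminology
    return '<span class="ai-disclosure">AI-generated</span>'

def ai_badge_html(badge_url: str) -> str:
    # badge_url is a hypothetical asset path; alt text keeps the visual
    # indicator accessible to screen readers
    return f'<img src="{badge_url}" alt="AI-generated content">'

print(ai_text_label())
print(ai_badge_html("ai-badge.svg"))
```

Persistence and placement (keeping the label visible throughout exposure and clear of platform UI elements) would then be handled by the surrounding page or player layout.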

Content Level AI Use Disclosures: Metadata Recommendations (1)

It is recommended (not required) that machine-readable disclosures (i.e., C2PA or similar technologies) are embedded in metadata and accompany every AI-created asset, whether consumer AI labeling is required or not. This ensures agencies, publishers, platforms, and auditors can verify usage and alignment with policy without relying on visible labels.

AAM will determine whether content provenance disclosures are embedded in metadata to make appropriate recommendations.

If applicable, AAM will review the organization’s use of content provenance metadata through independent validation tools.

III. Rights and Permissions

Authorized Use

Organizations have the applicable rights, permissions, or appropriate level of consent to use the information provided from their AI solutions to create and publish content.

Authorized use, whether the authorization is obtained from an organization, living individual, or an estate, does not affect disclosure requirements.

AAM will review all applicable licensing agreements that authorize the use of the information and data provided by the AI solution, and will confirm the organization’s rights to use that content and information in its published material.

IV. Accountability and Human Oversight

Human in the Loop

Organizations ensure human oversight of AI processes, including research, content production, review and publishing.

Governance of AI systems is assigned to individuals with sufficient authority to enforce compliance (e.g., senior management).

Organizations shall describe the teams and technologies used for overseeing applicable AI processes (editorial, managerial, etc.). AAM will review the processes, controls, and tools used for compliance.

AI Disclosure Lead

Organizations designate an individual as an AI Disclosure Lead (actual titles may vary) to serve as the internal resource for AI policy guidance, questions and compliance.

The AI Disclosure Lead may sit within their organization’s editorial, marketing, brand safety, or compliance teams.

AAM will confirm the identity of the AI Disclosure Lead and will review the Lead’s job description and place within the organizational chart.


V. Bias and Fairness

Bias Mitigation Strategies

Organizations that build and train their own AI models, or train or otherwise modify commercially available AI models, implement strategies to identify and mitigate biases in AI algorithms.

AAM will review whether organizations train or modify AI models, and if so, whether bias identification and mitigation controls are present.

VI. Privacy and Data Protection

Data Privacy Compliance

Organizations that use or otherwise allow AI models to access their consumer data ensure all such AI systems protect consumer data and comply with applicable data privacy regulations.

AAM will review whether the organization allows AI access to consumer data, and if so, AAM will review the organization’s data protection and privacy compliance controls.

VII. Training and Education

Staff Training Programs

Organizations provide ongoing training for staff on AI technologies, risks, and ethical considerations.

General training: the overall employee base has received training on the company’s AI policy, acceptable/unacceptable uses, and risks associated with AI use.

Proficiency training: the individuals responsible for developing, deploying, monitoring the AI systems have received applicable training.

The AI Disclosure Lead conducts applicable training sessions with those responsible for the creation, production, and distribution of content so each function can apply the applicable disclosures according to the organization’s policies.

AAM will review the organization’s training programs and the execution of those training programs throughout the review year.

VIII. Risk Management and Adaptation

Govern, Map, Measure, Manage

Legal and regulatory requirements involving AI are understood, managed, and documented.

The characteristics of trustworthy AI are integrated into organizational policies, processes, procedures, and practices.

Ongoing monitoring and periodic review of the risk management process and its outcomes are planned, and organizational roles and responsibilities are clearly defined, including determining the frequency of periodic review.

Contingency processes are in place to handle failures or incidents in third-party data or AI systems deemed to be high-risk.

AAM will review the organization’s risk management program and controls, and the application of those controls throughout the review year.

Note 1: Aligns with the IAB Transparency and Disclosure Framework, January 2026