Artificial intelligence is reshaping marketing and business operations, yet findings from our AI survey reveal a surprising gap: the majority of respondents using AI in marketing don’t have formal policies. This article digs into who’s actually using AI policies across industries, countries and company sizes, and explores how these trends might reflect broader challenges in AI governance.
As companies fight to stay competitive with tech advancements, the lack of consistent internal frameworks reveals a glaring industry-wide policy gap. This emphasizes the need for more proactive guidelines to manage AI’s potential risks and rewards.
Find out more about this survey on AI:
- AI in Marketing: What Marketers Really Think
- Marketers’ Outlook on AI: From Optimism to Skepticism
- The 4 Biggest Challenges in AI Content Creation (+ How To Solve Them)
Who’s Using AI Policies?
A staggering 73% of survey respondents using AI in marketing processes said their company did not have a policy to govern usage, and this data reflects a broader industry trend. For example, a similar study by Traliant found that only 60% of companies have an AI Acceptable Use Policy (AUP), and 31% of HR professionals have not shared any guidelines around the proper use of AI. Further, Security Magazine reports that just 10% of companies have a comprehensive, formal policy in place, and more than 1 in 4 say no policy exists in their workplace, nor is there any plan for one.

The governance gap raises questions about how brands are managing risk and opportunity within the space. So, how does this AI policy gap play out at specific companies? Let’s break it down by firmographic trends:
By Industry
Industries with the most respondents reporting a functional and active AI policy are:
- Marketing and Media (45%).
- Professional and Business Services (42%).
- Other (36%).
Given that these sectors are also the largest users of AI, we can reasonably conclude one of two things: Either these industries are inherently more proactive about implementing AI policies or, as AI usage becomes more integral to business operations, the need for structured policies naturally increases.
This data suggests a correlation between the pervasiveness of AI adoption and the emphasis on policy development, underscoring the need for standards in high-usage sectors.
By Country
Countries with the highest share of respondents reporting an AI policy at their organization are:
- The United Kingdom (29%).
- Other (38%).*
The largest group of respondents was from the United States, yet its policy adoption rate (26%) sat slightly below the overall average (27%). Here is how other countries measured up:
- Australia: Among 8 respondents, 1 has an AI policy.
- New Zealand: Among 3 respondents, none have an AI policy.
- Canada: Among 2 respondents, none have an AI policy.
- Pakistan: The 1 respondent who reported using AI in marketing does have an AI policy.**
While the industry data implies that higher AI usage drives a greater need for policies, the country-level insights reveal a more nuanced picture. For instance, despite contributing the largest group of respondents, the U.S. lags slightly behind the U.K. in policy adoption, suggesting that factors such as regulatory environment, organizational culture and market maturity may also play a role in shaping AI governance.
*Philippines, Barbados, the Netherlands, Ghana, Indonesia, Germany, Italy, Japan, Israel, Panama, Romania, Sweden, Nigeria and Albania.
**While this technically adds up to a 100% score for Pakistan, our pool of regional respondents was too small to draw firm conclusions. Nonetheless, well done, Pakistan.
By Company Size
Companies with 501 or more employees (38%) and those with 51 to 500 employees (35%) reported the highest rates of established AI policies. These findings are also reflected in Littler’s 2024 AI C-Suite Report, which indicates that among companies with 5,000+ employees, 80% have a generative AI policy either in place (63%) or in process (17%), likely due to large companies’ heightened risk exposure and greater resources. But that doesn’t mean the trend is linear.
Our survey found that while bigger companies are the most likely to have an AI policy, smaller companies (21%) are the second most likely and mid-sized companies are the least likely (8%). This could reflect a tension between resource availability and risk appetite.
By Engagement Model
Organizations with a remote workforce top the list for AI policy implementation (41%). Companies with a hybrid setup come in second (22%), followed by those whose workers show up to the office (12%).
This points to a relationship between cloud-based, distributed operations and a perceived need for AI policy, though it doesn’t mean that office-based businesses have no real need for internal AI governance.
A Glimpse at AI Policies on the Map
Given that so few companies have enacted organization-wide AI policies, it’s unsurprising that government AI policies (existing or in development) also vary significantly at the national level.
Some jurisdictions have developed comprehensive, legally binding frameworks, while others provide more advisory guidelines. The level of detail — whether addressing AI usage at a granular level or through broad policies — also differs, potentially influencing how companies develop and implement AI responsibly. Here’s a taste of what you might find:
- Legally binding vs. advisory policies: Certain regions have enacted legally binding regulations governing AI development and use, while others offer non-binding guidelines with limited enforcement mechanisms.
- Policy scope: Some policies provide detailed instructions on AI implementation and usage within organizations. Others adopt a broader approach, outlining general principles for responsible AI development.
- Focus areas: Policies may concentrate on how companies should use and implement AI technologies, reinforcing ethical considerations, or they might emphasize responsible development practices to prevent adverse social impacts like job displacement.
Here’s a breakdown of regional progress in AI policy development:
- United States: In 2023, the U.S. issued an Executive Order on the Safe, Secure and Trustworthy Development and Use of Artificial Intelligence. While it sets a policy direction, its enforceability depends on subsequent legislative and regulatory actions.
- Canada: The Directive on Automated Decision-Making provides a framework for the responsible use of AI in Canadian federal institutions. This directive is more detailed and binding than broader, principle-based policies.
- Europe: The European Union is advancing the Artificial Intelligence Act to establish a comprehensive legal framework for AI. This act represents one of the most detailed and binding approaches to AI regulation globally.
- Australia: Australia’s Digital Transformation Agency has released policy guidance and training resources to support the responsible adoption of AI in government. These resources build awareness rather than imposing strict legal obligations.
- New Zealand: New Zealand has developed a Public Service Artificial Intelligence (AI) Framework, which provides guidance for the ethical and safe use of AI within the public sector. This framework offers principles and guidelines rather than legally binding regulations.
- OECD: The OECD AI Principles serve as a global reference, encouraging member countries to implement policies ensuring safe and fair AI systems. While influential, the principles are non-binding.
The lack of granular, binding national AI policy in many jurisdictions may help explain why private companies lack their own. However, that doesn’t mean this will always be the case.
Existing and emerging AI-related risks in the market could already jeopardize an organization’s productivity. Developing standards helps you anticipate and prepare for those risks, mitigating potentially dire outcomes. So, what is showing up in marketers’ AI policies?
What Do AI Policies Include?
Companies that have policies in place are setting a strong foundation of governance to mitigate potential risks in their business. Granted, our survey received limited responses here because of the small cohort of companies that have implemented AI policies.

If you’re thinking about creating one for yourself, consider the following components:
| Policy Component | Why It Matters |
| --- | --- |
| Scope and Purpose | Define the policy’s application and limitations, who should use it and how. |
| Principles and Values | Consider how your company’s principles translate to effective AI deployment. |
| Regulatory Compliance | AI usage should align with legal and regulatory frameworks within and outside your organization. |
| Data Security and Privacy | Guidelines should protect your business, employees and clients. Because many AI and machine learning tools train on the data they receive, feeding proprietary information into them may compromise intellectual property. AI systems can also be vulnerable to external, malicious attacks that leak proprietary information and sensitive client or employee data. |
| Bias | Some AI programs can perpetuate biases present in their training data and design. AI usage should not result in discrimination against employees or customers. |
| Transparency | Clarify what types of activities AI systems should undertake and how they collect, use and process data. Define which tasks are acceptable for AI use and which are not, and assess which AI tools are appropriate for the workplace. Provide guidance on handling AI-generated outputs, including the circumstances under which you will add a disclaimer, and for whom. |
| Accountability | Establish who is responsible for ensuring AI usage is safe, ethical and fair. Define a reporting process for misuse, stakeholder issues and complaints, and outline an auditing process to monitor AI usage and update policies regularly. |
| Ethics | AI should have a human-centered role and avoid violating human rights or discriminating against stakeholders. |
Note that each company’s policy will look slightly different depending on the type of tools they use and the scope of work AI is involved in.
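If it helps to see these components in one place, here is a minimal sketch in Python of how a policy owner might turn the checklist above into a starter draft to circulate for input. The component names and guidance mirror the table; the script’s structure, function name and placeholder text are purely illustrative, not a prescribed format or a legally vetted policy.

```python
# A minimal, illustrative sketch: generate a starter AI policy outline from
# the components discussed above. The component names and guidance mirror the
# table in this article; everything else (structure, wording) is hypothetical.

POLICY_COMPONENTS = [
    ("Scope and Purpose",
     "Define the policy's application and limitations, who should use it and how."),
    ("Principles and Values",
     "Describe how company principles translate to effective AI deployment."),
    ("Regulatory Compliance",
     "Align AI usage with legal and regulatory frameworks inside and outside the organization."),
    ("Data Security and Privacy",
     "Protect proprietary, client and employee data fed into AI tools."),
    ("Bias",
     "Guard against discrimination arising from biased training data or design."),
    ("Transparency",
     "Define acceptable AI tasks, approved tools and disclosure rules for AI-generated output."),
    ("Accountability",
     "Assign ownership; define reporting, auditing and policy-update processes."),
    ("Ethics",
     "Keep AI human-centered and respectful of human rights."),
]


def render_policy_skeleton(company: str) -> str:
    """Return a markdown outline a policy owner can fill in section by section."""
    lines = [f"# {company} AI Usage Policy (Draft)", ""]
    for name, guidance in POLICY_COMPONENTS:
        lines += [f"## {name}", f"> Guidance: {guidance}", "", "TODO: draft this section.", ""]
    return "\n".join(lines)


if __name__ == "__main__":
    print(render_policy_skeleton("Example Co."))
```

Running the script prints a fill-in-the-blanks outline, which is often all a team needs to move from “we should have a policy” to a concrete first draft.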
Do AI Policies Help Make Adoption Easier?
Our question on whether AI policies help ease adoption yielded only 14 responses — a reflection of how few organizations have formal frameworks in place. However, half (7/14) noted it positively impacted the ease of AI adoption. Here is what some respondents had to say:
How has the AI policy impacted AI adoption at your company?
- “It added visibility onto what the company provides, and clients know what to expect.”
- “It has helped set expectations with our clients and potential partners.”
- “Reduced production time slightly, but humans are still needed for refining the content.”
- “It didn’t.”
- “It encourages AI usage.”
- “It’s too soon to tell.”
- “Not dramatically, but mostly outlines best practices.”
One respondent also expressed concerns about generative AI’s capacity to produce brand-aligned, legally compliant content:
“It doesn’t comply with brand standards and legal considerations/guidelines.”
While our small sample limits definitive conclusions, responses hint at a broader pattern seen in supplemental research. For example, the Littler report found that nearly 85% of HR leaders are concerned about litigation risks associated with predictive or generative AI, and 73% are scaling back its use amid regulatory uncertainty.
A clear, comprehensive AI policy provides internal guidance and may also serve as a risk mitigation tool by addressing legal and regulatory uncertainties. By establishing defined parameters for AI deployment, organizations can protect themselves from potential litigation while fostering a more confident, proactive approach to adoption.
Tips for Creating AI Policies That Support Responsible AI Adoption
- Involve cross-functional stakeholders: Gather input from IT, legal and HR to ensure the policy addresses diverse needs and risks, creating a balanced framework that encourages organization-wide buy-in.
- Keep it flexible: Write policies in straightforward language with room for adjustments as technology evolves. This helps teams understand expectations without feeling constrained by overly rigid rules.
- Integrate risk management measures: Include guidelines for data privacy, bias mitigation and legal compliance to create a roadmap for safe AI use and shield against potential litigation or regulatory setbacks (see the sketch after this list for one way to make such guidelines operational).
- Invest in training: Roll out AI training programs and regular updates on the policy, ensuring everyone from executives to end users understands how to integrate AI responsibly into workflows.
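To illustrate that third tip, here is a hypothetical sketch of how a marketing-ops team might encode a few risk-management rules as a pre-use checklist, or “policy gate.” Every tool name, rule and field below is invented for illustration; your own checks would come from your policy, not this code.

```python
# Hypothetical sketch only: a pre-use "policy gate" that encodes a few
# risk-management rules as checkable conditions. The allowlist, rules and
# fields are invented for illustration and are not drawn from the survey.

from dataclasses import dataclass, field

APPROVED_TOOLS = {"internal-llm", "vendor-x-chat"}  # hypothetical allowlist


@dataclass
class AIUseCase:
    tool: str
    handles_personal_data: bool
    output_is_customer_facing: bool
    human_review_planned: bool
    issues: list = field(default_factory=list)


def policy_gate(case: AIUseCase) -> bool:
    """Return True if the use case passes every check; record issues otherwise."""
    if case.tool not in APPROVED_TOOLS:
        case.issues.append(f"Tool '{case.tool}' is not on the approved list.")
    if case.handles_personal_data and not case.human_review_planned:
        case.issues.append("Personal data requires a named human reviewer.")
    if case.output_is_customer_facing and not case.human_review_planned:
        case.issues.append("Customer-facing output must be reviewed before release.")
    return not case.issues


# Example: a customer-facing draft made with an unapproved tool fails the gate.
draft = AIUseCase(tool="shiny-new-app", handles_personal_data=False,
                  output_is_customer_facing=True, human_review_planned=False)
if not policy_gate(draft):
    for issue in draft.issues:
        print("Blocked:", issue)
```

The point is not the code itself but the design choice: written guidelines become much easier to follow and audit when they are expressed as explicit, checkable rules.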
When Will AI Policies Become More Common in the Workplace?
While the momentum for AI adoption is undeniable, formal policies are still catching up. Fewer than half of the executives (44%) in the abovementioned Littler report say their organizations have generative AI policies in the works, whether in the drafting (25%) or consideration (19%) phase. Yet this represents a significant increase from Littler’s 2023 Employer Survey, when only 10% said the same. Companies are clearly beginning to recognize the need for structured guidelines.
Many firms may view AI risk management as a future challenge rather than a present necessity, potentially due to the evolving nature of AI technology and the uncertainty of regulatory frameworks. As businesses become more reliant on AI, we can expect a gradual shift toward more comprehensive policy implementation in the workplace.
One of our respondents mentioned that “[AI policy] limits use as AI-generated content is not accepted in some formats,” underscoring the importance of building standards that achieve reliable governance without stifling your marketing team’s creative potential.
Futureproof Your AI Strategy With Clear Policies
Our data highlights that while gaps exist, organizations that adopt intentional AI policies stand to reap significant benefits. A well-planned AI policy can mitigate legal and operational risks, and nurture a culture of transparency and accountability, enabling teams to innovate and deploy AI solutions confidently.
As industries increasingly rely on AI, those with proactive policies position themselves to maintain a competitive advantage and build trust among stakeholders. As AI becomes more integral to business functions, the call for more explicit, comprehensive policies will likely intensify — and now is the time to get ahead.