Five Practical Questions Every DMO Should Ask When Building an AI Policy

Bottom Line:

The use of AI in destination organizations is strategically essential; however, without a clear policy, the risks can outweigh the benefits for your brand. Putting privacy, trust, and governance at the center of AI strategy is one way destination organizations can strike a balance between innovation and responsibility.

As destination organizations increasingly leverage AI to streamline internal workflows, enhance visitor engagement, and connect with stakeholders, the need to establish strategic, forward-looking AI policies has never been more urgent. This need is reinforced in the 2025 DestinationNEXT Futures Study, which highlights AI adoption, workforce readiness, and organizational capacity as core strategic themes for future-ready destination organizations.

These themes were echoed in a recent Destinations International AI Virtual Workshop hosted by Todd Brook, CEO of Educational Collaborator UNCHAINED AI, who led a compelling discussion on designing responsible AI policy. His message was clear: while AI tools may be cutting-edge, real innovation happens through intentional governance, transparent communication, and strategic alignment with your destination's values.

Here are five essential questions and corresponding action steps to help destination organizations develop AI policies that support innovation while maintaining stakeholder trust. 

1. What are our ethical guardrails and who defines them? 

Successful AI policy starts with a shared set of ethical principles. As Todd Brook noted, “If you don’t define your beliefs, someone else will fill in the gaps.” 

Action Step: Develop a short, actionable code of AI ethics aligned with your destination’s core values. Invite input from a cross-section of departments (marketing, operations, HR, IT) to ensure policies reflect both internal priorities and public trust.

Example: One destination organization created an internal AI ethics advisory group that reviews new tools and use cases before implementation. This structure builds shared accountability and fosters open dialogue around emerging risks. 

2. Who is accountable for AI-generated content? 

From chatbots to itinerary planners, AI is already producing content on behalf of destination brands. However, when AI makes an error, accountability still rests with people. 

Action Step: Establish clear “human-in-the-loop” processes that align with your existing approval steps. Require human review for all AI-generated content before public distribution, especially on websites, social media, and visitor-facing communications.

Pro Tip: Use plagiarism and AI-detection tools as a first filter, but emphasize that human judgment remains the final authority.

3. Where do we allow autonomous workflows—and where don’t we? 

AI can automate everything from guest services to internal and stakeholder reporting. But not all workflows carry the same level of risk. 

Action Step: Create a tiered review process: 

  • Front-stage tools (e.g., virtual visitor assistants) should undergo executive and legal review.
  • Back-stage tools (e.g., internal reporting automation) may only need departmental oversight. 

Example: One destination organization deployed an AI-powered itinerary builder and added a mandatory validation step using their CRM and partner content guidelines to ensure that services or destinations are accurately represented. 
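
For teams that maintain a registry of approved AI tools, this tiered approach can be captured in something as simple as a lookup table. Below is a minimal sketch in Python; the tool names, tiers, and review steps are illustrative assumptions, not a prescribed standard.

    # Hypothetical sketch: route each AI tool to the appropriate level of
    # review based on whether it is visitor-facing ("front stage") or
    # internal ("back stage"). All names below are illustrative.
    REVIEW_TIERS = {
        "front_stage": ["executive review", "legal review"],
        "back_stage": ["departmental oversight"],
    }

    TOOL_REGISTRY = {
        "virtual_visitor_assistant": "front_stage",
        "internal_reporting_automation": "back_stage",
    }

    def required_reviews(tool_name: str) -> list[str]:
        """Return the review steps a tool must clear before deployment."""
        # Unregistered tools default to the strictest tier.
        tier = TOOL_REGISTRY.get(tool_name, "front_stage")
        return REVIEW_TIERS[tier]

    print(required_reviews("virtual_visitor_assistant"))
    # -> ['executive review', 'legal review']

The value here is less in the code than in the discipline it encodes: every tool is classified before launch, and anything unclassified is treated as high risk by default.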

4. Are we proactively managing data privacy and compliance? 

Just because an AI tool is SOC 2 or SOC 3 certified doesn’t mean it complies with your vendor and partner agreements or your Data Processing Agreements (DPAs). (SOC 2 or SOC 3 certification means a company has passed an independent audit verifying that it meets recognized security, privacy, and data protection standards set by the AICPA.)

Action Step: Conduct a full data audit to clarify which data types (visitor, partner, financial) can or cannot be used with AI tools. Review your contracts and update DPAs accordingly. 

Tool Tip: If using note-taking AI like Otter.ai or Fathom, opt to retain only meeting summaries—not transcripts—for sensitive conversations. 

5. How are we checking for bias—and who’s responsible for flagging it? 

Bias in AI software tools often goes unnoticed until harm is done. Brook emphasized that the more intentional you are with your design and instructions, the more you can mitigate risk. 

Action Step: Form internal review committees or departmental feedback loops to evaluate AI use cases for fairness and inclusion. Offer anonymous ways for staff to report bias or concerns.

Pro Tip: When prompting AI software tools, include bias checks, specific data sources, and inclusive language goals—especially when designing tools that impact public-facing messaging or partner representation. 
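
To make this concrete, here is a minimal sketch of a reusable prompt preamble with those guardrails built in; the wording is a suggested starting point, not official guidance.

    # Hypothetical guardrail preamble prepended to every AI request so that
    # bias checks, data-source limits, and inclusive-language goals are
    # stated explicitly rather than assumed.
    PROMPT_PREAMBLE = (
        "Use only the approved partner content and visitor data provided "
        "below. Use inclusive, accessible language. Before finalizing, "
        "check the draft for stereotypes or unequal representation of "
        "partners, neighborhoods, or communities, and revise as needed.\n\n"
    )

    def build_prompt(task: str) -> str:
        """Prepend the guardrail preamble to a task-specific request."""
        return PROMPT_PREAMBLE + task

    print(build_prompt("Draft a three-day family itinerary for downtown."))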

Destinations International offers numerous resources to support your AI journey, including the newly launched AI Community, which features an online community library, as well as forum discussions.   

A dedicated AI resources folder has been recently added to the online community library, along with a simple submission form for uploading items such as policies, training materials, and communication strategies. Feel free to share your questions, thoughts, and AI policies via this collaborative AI community. 

Final Thought: Build a flexible framework, not a flawless one 

AI policy shouldn’t be set in stone. Instead, treat it as a living framework: something tested, iterated on, and informed by real-world application in destination organizations. Start with what’s achievable. Require annual training. Build in governance. Most importantly, create space for feedback and improvement.

As the session concluded, one takeaway stood out: Responsible AI isn’t just about risk mitigation. It’s about empowering staff, building institutional trust, and adopting technology in ways that advance your destination organization’s mission and values. 

Key Takeaways:

  • Create cross-departmental AI task forces to guide policy development and feedback.
  • Define ownership and accountability for AI-generated content (“human-in-the-loop”).
  • Establish clear data privacy boundaries before deploying public-facing tools.
  • Address AI bias with inclusive design and ongoing review.
  • Keep the AI policy agile; review and refine it regularly.

Vimal Vyas, CDME

Vice President of Data, Security and Digital Innovation, Greater Raleigh CVB

Vimal Vyas is Vice President of Data, Security and Digital Innovation for the Greater Raleigh Convention and Visitors Bureau. As data and digital innovation VP, he is responsible for leading the Bureau's strategic data and technology innovation programs as well as integrated digital efforts, including digital product (web/mobile) development, data integration, cloud-based software solutions, infrastructure, content/Internet marketing technologies and business analytics/intelligence. In 2023 Vyas earned Destinations International's Certified Destination Management Executive (CDME) credential, which signals the destination marketing industry's highest educational achievement. 
