AI Guidelines for UH System Universities Marketing and Communications
Introduction
Artificial Intelligence (AI) refers to technology that enables computers or robots to mimic higher human capabilities, such as action, reasoning and decision making. AI relies on algorithms that can copy human reactions or, at a higher level, solve problems without further human intervention. Among these tools, generative AI has emerged as a powerful technology capable of producing text, images, music, videos and more. While these tools can drive innovation and significantly boost efficiency and creativity, they are not a substitute for human judgment or ethical considerations. At University of Houston System (System) institutions, these guidelines provide a framework for responsibly integrating AI into official University communications and marketing efforts.
UH System Approach to AI
It is essential that all faculty, staff and students understand acceptable uses for these emerging AI tools. All UH System faculty, staff and students should use AI tools in an ethical, safe and responsible manner that aligns with the System's values and principles in compliance with all applicable laws, rules, and regulations, as well as industry standards governing the use of AI. The Artificial Intelligence Acceptable Use Policy (“Policy”) is in development and will outline the appropriate and responsible use of AI tools within System universities.
Marketing and Communications Approach to AI
Use of AI tools, including generative AI, by the System is intended to augment and complement the creativity and expertise of our communications and marketing professionals, not to function independently or replace human effort. Creativity, emotion and judgment are uniquely human attributes, and human judgment is essential at every stage of AI use. The integrity of our messaging and the trust of our audience depend on the thoughtful, ethical use of these AI tools.
To ensure responsible and effective use of AI tools across the System, these guidelines have been established to promote a unified and consistent approach in alignment with our institutional values and mission.
Guiding Principles
The following principles should guide the use of AI tools in the marketing and communications efforts of each System institution.
Human-Centered
AI is a tool to assist and augment human work — not replace human judgment, creativity or accountability. AI tools should be assistive, not autonomous.
Human Accountability
All decisions and outputs remain the responsibility of people. Any AI-assisted or AI-optimized material must be carefully reviewed, edited and approved by a human before reaching its final audience.
Ethical Use
The System upholds strict ethical standards for the use of AI in content creation to ensure integrity, transparency and accuracy. AI must never be used to deceive, misrepresent or spread misinformation.
Transparency
Transparency in AI usage is essential to maintaining the trust of our audiences and stakeholders.
- Disclosure is required when AI significantly contributes to content, in the form of an acknowledgment or co-author-like credit.
- Any AI use in official marketing and communications materials that falls outside the acceptable use guidelines should be reported to supervisors or creative leads to determine whether public disclosure is required.
Accuracy and Fact-Checking
AI outputs must always be verified by human fact-checkers. Humans are accountable for correcting errors and ensuring reliability.
Respect for Copyright and Intellectual Property
AI-generated material must be reviewed, edited and modified to avoid plagiarism and protect intellectual property rights.
Privacy and Data Security
No sensitive, private, proprietary or confidential information, including data protected by FERPA, should be input into AI tools that have not received prior System approval for such use. AI use must comply with privacy laws and institutional policies.
Acceptable Use of AI (Assistive, Not Autonomous)
The examples provided below illustrate common acceptable uses of AI in official marketing and communications at System institutions, but they are not exhaustive.
Brainstorming & Ideation
- Generate fresh story ideas, new perspectives and constructive feedback.
- Anticipate potential questions or objections from stakeholders.
- Assist with brainstorming art concepts and creative direction.
Content Structuring & Editing
- Create outlines, editorial calendars, headlines, subheads and navigation elements.
- Suggest ways to shorten or restructure existing text.
- Assist with editing, proofreading or style checks (e.g., AP style), while deferring to official System editorial guidelines.
- Serve as a thesaurus or phrasing aid.
Search Engine Optimization
- Conduct keyword research, readability checks and web performance analysis.
Web Editing and Proofing
- Draft, edit and/or troubleshoot markup, styling and programming languages.
Social Media
- Brainstorm ideas for social media posts or engagement.
- Suggest edits to drafts.
- Provide feedback on content for different audiences (subject to human review and approval).
Research & Summaries
- Quickly summarize concepts, transcripts or documents.
- Analyze non-confidential/non-proprietary data.
- Provide background on topics — but all facts must be verified by humans.
Repurposing Existing Content
- Suggest edits for condensing or adjusting content, with close human oversight. For example, AI tools can make suggestions on how to repurpose a press release into digital screen content.
Image Enhancement
- Use tools like Photoshop or Canva for content-aware fill, retouching or light filtering of owned images.
- Edits must preserve authenticity and must not misrepresent context or compromise the integrity or intended message of the image.
Prohibited Use of AI
The examples provided below illustrate common unacceptable uses of AI in official marketing and communications at System institutions, but they are not exhaustive.
Full Content Creation
- Do not generate complete articles, press releases, web pages or official communications.
Unverified Fact-Checking
- Do not rely solely on AI for facts, research or citations. AI “hallucinations” — when a generative AI model produces incorrect, false or completely fabricated information — make human verification mandatory.
Synthetic Media for Official Use
- AI-Generated Images & Sound: Creative output created entirely or substantially by a generative AI tool without relying on an original, human-created asset — such as a photograph, illustration, audio or design file — may not be used in official System institutional communications, including promotional merchandise, until legal and ethical concerns are resolved. Creative outputs include but are not limited to images, voiceovers and music.
- AI-Generated Headshots & Portraits: Generative AI should not be used to create headshots or portraits of individuals, and AI-generated images of known persons should not be used in official System institutional communications.
- Exceptions: In limited circumstances, the use of AI-generated creative assets may be appropriate — for example, when a story is specifically about generative AI research, projects or topics and the creative assets were produced as part of that project. In such limited cases, an exception must be requested by a dean or vice president and approved by the vice president for marketing and communications.
Sensitive or Confidential Data
- Never input proprietary, student, employee, patient or legally protected information (HIPAA, FERPA, etc.) into AI tools unless such tools have been previously approved for such use. See the Artificial Intelligence Acceptable Use Policy for more details.
Personal or Sensitive Messaging
- Do not use AI to generate personal, emotional or sensitive communications (e.g., condolence notes, eulogies).
Deceptive or Manipulative Content
- Never use AI to create false, misleading or unethical communications (including spamming, phishing or fabricated content).