Usage of Artificial Intelligence (AI) at UHS
From day-to-day tasks to research and teaching, Artificial Intelligence (AI) is expected to be the next great change agent in how people and organizations operate. The University of Houston System is committed to ensuring that all constituents can use AI tools such as Microsoft Copilot, ChatGPT, or Google Gemini in a safe and responsible manner. Please note that this security and privacy guidance may evolve as circumstances change and/or the System develops further policy regarding the use of AI.
- General guidelines for the usage of AI
- Guidelines for AI agents
- Guidelines for hosting your own model
- Additional resources
Security & privacy guidelines regarding AI tools
UHS is currently considering the need for an official AI policy. In the meantime, please follow these privacy and security guidelines, and contact security@uh.edu with any questions, comments, or concerns.
Prohibited use
Do not use confidential, sensitive, or mission-critical data (Level 1 data) or protected information (Level 2 data) with AI tools. For more information on what constitutes Level 1, Level 2, or Level 3 data, please see SAM 07.A.08 – Data Classification and Protection.
Allowable use
You may use AI tools freely when working with non-university or public information, or when creating new content with the tools.
Avoid data sharing
Information shared with AI models such as ChatGPT may be retained by the tool and used for future training, or potentially exposed to unauthorized individuals.
Check accuracy
AI models can “make things up” or provide biased information, so it is critical that you verify any answers an AI tool such as ChatGPT provides. Many tools also lack up-to-date information on the topics they are asked about.
Data privacy
Consider how you share data with others; data shared broadly may be used in ways you did not intend.
Academic use of AI tools
Please check with the provost’s office at your System university regarding specific use cases of AI tools in your curriculum.
AI Agents
AI agents are (semi-)autonomous software entities, whether homegrown or customized by you. This technology is still emerging, but please follow these guidelines when developing or deploying AI agents:
- Enforce access control - AI agents should have their own credentials for the systems and resources they access, rather than reusing the credentials you use to access those systems (see the sketch after this list)
- Determine agency - Ensure that agents can autonomously take only the actions they are supposed to, without introducing undue business risk
- Multi-agent systems - If agents will collaborate, carefully consider where they are deployed, how they interact, and whether they could end up with more access or agency than they should have
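To make the first two points concrete, here is a minimal Python sketch of an agent that runs under its own provisioned credential and checks an explicit allowlist before taking any action. The environment variable, tool names, and functions are illustrative assumptions, not a UHS standard or any specific vendor's API:

```python
import os

# Illustrative sketch only: AGENT_SERVICE_TOKEN, the tool names, and these
# functions are assumptions for demonstration, not a real UHS or vendor API.

# Agency: an explicit allowlist of actions the agent may take autonomously.
ALLOWED_TOOLS = {"search_library_catalog", "summarize_public_document"}


def load_agent_credential() -> str:
    """Load a credential issued to the agent itself (e.g., a dedicated
    service-account token), never a human user's reused password."""
    token = os.environ.get("AGENT_SERVICE_TOKEN")
    if not token:
        raise RuntimeError("No agent credential provisioned; refusing to run.")
    return token


def invoke_tool(tool_name: str, payload: dict) -> dict:
    """Gate every autonomous action against the allowlist before acting."""
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"Agent is not authorized to use {tool_name!r}")
    credential = load_agent_credential()  # the agent's own, auditable identity
    # ... call the downstream system with `credential` here ...
    return {"tool": tool_name, "status": "ok"}
```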
Hosting Your Own Model
If you would like to host your own LLM, there are several things to keep in mind:
- Review any implementation with UHS Information Security
- Clearly define the purpose for the use of the model
- Clearly define who has access to the model
- Ensure that you get the correct model (verify checksums/hashes prior to installing it; see the sketch after this list) and that the model is not prohibited via the TX DIR Prohibited Technology & Covered Applications list (e.g., DeepSeek)
- Decide if real data has to be used or if synthetic data would be sufficient
- If reusing the model for a different purpose, reset it to the foundation model, as residual training data may remain and lead to unintended consequences
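As an illustration of the checksum step above, here is a minimal Python sketch that verifies a downloaded model file against a published SHA-256 digest before installation. The file name and expected digest are placeholders for the values your model's official distributor publishes:

```python
import hashlib
from pathlib import Path

# Placeholders: substitute the digest and file name published by the
# model's official distributor.
EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"
MODEL_FILE = Path("model.safetensors")


def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash the file in chunks so large model weights need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


if sha256_of(MODEL_FILE) != EXPECTED_SHA256:
    raise SystemExit("Checksum mismatch: do not install this model file.")
print("Checksum verified: file matches the published hash.")
```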
Additional Resources
Some additional resources that may be helpful as you explore the topic of AI further:
- University of Houston AI Solutions Group
- UHS SAM 07.A.08 Data Classification and Protection
- OWASP Top 10 for Large Language Model Applications
- NIST AI 600-1: Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile
- UNESCO: AI and Education: Guide for policy-makers
- WHO: Ethics and governance of artificial intelligence for health: Guidance on large multi-modal models