Usage of Artificial Intelligence (AI) at UHS

From day-to-day tasks to research and teaching, Artificial Intelligence (AI) is expected to be the next great change agent in how people and companies operate. The University of Houston System is committed to ensuring that all constituents can use AI tools such as Microsoft Copilot, ChatGPT, or Google Gemini in a safe and responsible manner. Please note that this security and privacy guidance may evolve as circumstances change and/or the System develops further policy regarding the use of AI.

Security & privacy guidelines regarding AI tools

UHS is currently considering the need for an official AI policy. In the meantime, please follow these privacy and security guidelines, or contact security@uh.edu should you have any questions, comments, or concerns.

Prohibited use

Do not use confidential, sensitive, or mission-critical data (Level 1 data) or protected information (Level 2 data) with AI tools. For more information on what constitutes Level 1, Level 2, or Level 3 data, please see SAM 07.A.08 – Data Classification and Protection.

Allowable use

You may use AI tools freely when working with non-university or public information, or when creating new content with the tools.

Note: Information shared with AI models such as ChatGPT may be retained by the tool and used for future training, or potentially exposed to unauthorized individuals.

Check accuracy

AI models can “make things up” or provide biased information, so it is critical that you verify any answers an AI tool such as ChatGPT provides. Many tools also lack up-to-date context for the questions they answer.

Data privacy

Consider carefully how you share data with others; once data is shared broadly, it may be used in ways you did not intend.

Academic use of AI tools

Please check with the provost’s office at your System university regarding specific use cases of AI tools in your curriculum.


Hosting Your Own Model

If you would like to host your own LLM, there are several things to keep in mind:

  • Review any implementation with UHS Information Security
  • Clearly define the purpose for the use of the model
  • Clearly define who has access to the model
  • Ensure that you get the correct model (verify checksums/hashes against the publisher's values prior to installing the model; see the sketch after this list)
  • Decide if real data has to be used or if synthetic data would be sufficient
  • If reusing the model for a different purpose, reset it to the foundational model, as residual training data may remain and could lead to unintended consequences
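
To illustrate the checksum step above, here is a minimal sketch in Python. The script name, argument order, and hash algorithm (SHA-256) are assumptions for illustration only; use whichever hash format the model publisher actually provides.

    import hashlib
    import sys

    def sha256_of_file(path, chunk_size=1024 * 1024):
        # Compute the SHA-256 digest of a file, reading it in chunks so
        # that multi-gigabyte model files do not have to fit in memory.
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    if __name__ == "__main__":
        # Usage: python verify_model.py <model_file> <expected_sha256>
        # The expected hash should come from the model publisher's official
        # release page, not from the same mirror the file was downloaded from.
        model_path, expected = sys.argv[1], sys.argv[2].lower()
        actual = sha256_of_file(model_path)
        if actual == expected:
            print("OK: file matches the published SHA-256 hash.")
        else:
            print("MISMATCH: expected %s but got %s -- do not install." % (expected, actual))
            sys.exit(1)

If the hashes do not match, do not install the model; re-download it from the publisher's official source and verify again before use.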