AI Use in the Workplace (Governance)

The growing capabilities of AI also bring risks when it is deployed without oversight. Unauthorized access to or use of AI systems within the company could lead to legal violations, unfair bias, and the loss or misuse of sensitive data. It is critical that all AI use at our company, whether by internal teams or external vendors, be coordinated with the IT Department and follow our AI governance policies. AI solutions must be evaluated for bias, tested for security vulnerabilities, restricted to authorized data sources, and compliant with all applicable regulations. Proactive governance of AI will allow us to realize its benefits while minimizing risk.

AI models are trained on large quantities of data, including data provided in prompts or as source material for analysis. Small, seemingly insignificant pieces of data, when combined with other small pieces of data and user analysis, can lead to company data being compromised.

The use of AI in the workplace must be approved by IT after a risk evaluation. Approved AI resources are listed in the separate knowledge base article 'Approved AI Resources'.

The use of any AI not listed as approved is prohibited. This includes browser-based generative AI chat models, such as ChatGPT, Google Bard, and Anthropic Claude 2, as well as image-generating AI services (AI art).