Seminar Recap: Understanding AI Risks and Establishing Workplace Guidelines
During AAFCPAs’ recent Nonprofit Seminar (April 2024), Ryan K. Wolff, MBA, and Vassilis Kontoglis briefed approximately 400 attendees on AI risks and the importance of establishing workplace guidelines.
The full session was recorded and may be viewed as a webcast at your convenience. >>
The integration of artificial intelligence (AI) into the workforce brings with it significant opportunities and challenges. To effectively navigate its complexities and risks, management teams will need to establish robust guidelines. The following insights outline considerations for incorporating AI responsibly within your organization.
As AI continues to evolve, its applications in data analytics, robotic process automation, and IT security become increasingly relevant. AI tools, such as large language models (LLMs) including ChatGPT, can transform how organizations handle data and perform tasks. But these advancements come with potential risks that should be managed carefully.
Managing Data and Confidentiality
One primary concern with AI is data management. When using AI tools, it is important to understand that information provided to the platform may be retained by, and under some terms of use become the property of, the tool’s owner. For instance, entering confidential information into ChatGPT’s public instance may inadvertently expose that data to future queries from other users. Therefore, it is essential that users keep prompts generic to avoid releasing sensitive information.
Established guidelines help ensure employees understand what constitutes confidential information and how to protect it. For instance, make sure employees know to refrain from entering specific financial data or proprietary information in AI prompts. AI systems are often hosted on cloud servers, which may be vulnerable to unauthorized access if not properly secured. To mitigate risk, AAFCPAs advises that clients implement stringent data security measures, including encryption of sensitive information, secure access protocols, documented policies and procedures with related training, and routine audits of AI systems for potential vulnerabilities.
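As a concrete illustration of keeping specifics out of prompts, the sketch below screens a prompt for a few common sensitive patterns before it leaves the organization’s environment. The patterns and placeholder labels are illustrative assumptions only, not a complete inventory of confidential data; a real deployment would tailor them to the organization’s own policy.

```python
import re

# Illustrative patterns only; a real deployment would tailor these to the
# organization's own definition of confidential data.
REDACTION_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EIN": re.compile(r"\b\d{2}-\d{7}\b"),
    "DOLLAR_AMOUNT": re.compile(r"\$\d[\d,]*(?:\.\d{2})?"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace likely-confidential values with placeholders before the
    prompt is sent to a public AI tool."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Draft a memo: donor jane@example.org pledged $250,000."
    print(redact_prompt(raw))
    # -> Draft a memo: donor [EMAIL REDACTED] pledged [DOLLAR_AMOUNT REDACTED].
```

A filter like this cannot catch every form of confidential data, which is why it complements, rather than replaces, employee training and clear policy definitions.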
In addition, it is important to assess how third-party AI services manage and protect data. Before integrating these services, organizations should evaluate security practices to ensure they align with internal data protection standards.
Establishing AI Policies and Procedures
Creating a comprehensive AI policy is critical to managing AI risk. As you develop your policy, be sure it covers the following key areas.
- Confidential Data Handling: Define what constitutes confidential data and outline rules for its use in AI tools. Specify what information can and cannot be entered into AI systems, and establish an agreed-upon response process for cases where confidential information is accidentally entered into a model. Note that you can create your own private instance of a tool and restrict its connection to the public Internet.
- Task-Specific Guidelines: Determine which tasks are appropriate for AI tools and which should be handled manually. This helps prevent the misuse of AI and ensures critical tasks receive requisite human oversight (a brief sketch of one way to encode such a rule follows this list).
- Verification of AI Output: Employees should verify the accuracy and relevance of AI-generated information before using it in decision-making. AI outputs can sometimes be incorrect or misleading, making verification crucial.
- Monitoring Third-Party AI Use: Establish guidelines for evaluating third-party AI services, focusing on their data security and usage practices.
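One way to make task-specific guidelines operational is to encode the policy as a simple default-deny check that intake forms or internal tooling can consult. This is a minimal sketch assuming the organization maintains explicit approved and human-only task lists; the task names below are hypothetical placeholders, not a recommended taxonomy.

```python
# Hypothetical policy encoding: which task categories are approved for AI
# assistance and which must be handled manually. Names are illustrative.
AI_APPROVED_TASKS = {
    "draft_generic_correspondence",
    "summarize_public_research",
    "brainstorm_campaign_ideas",
}
HUMAN_ONLY_TASKS = {
    "grant_award_decisions",
    "personnel_evaluations",
    "financial_statement_preparation",
}

def ai_use_permitted(task: str) -> bool:
    """Return True only for tasks the policy explicitly approves for AI use."""
    if task in HUMAN_ONLY_TASKS:
        return False
    # Default-deny: a task on neither list needs manager review first.
    return task in AI_APPROVED_TASKS

assert ai_use_permitted("summarize_public_research")
assert not ai_use_permitted("grant_award_decisions")
assert not ai_use_permitted("unlisted_task")  # default deny
```

A default-deny posture means any task the policy has not explicitly approved is routed to a manager rather than silently allowed, which keeps the policy authoritative as new use cases emerge.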
Training employees on the proper use of AI tools is essential to mitigating risk. Training efforts should emphasize the importance of maintaining data confidentiality, verifying AI outputs, and understanding the limitations and potential biases of AI systems. Continuous education is just as important, given that AI technology continues to evolve; organizations should offer regular training sessions to keep employees up to date on AI developments and best practices.
Monitoring Accuracy and Bias
AI systems are built on algorithms that may reflect the biases of their creators. This can lead to biased outputs that may affect decision-making. To address this, organizations should:
- Monitor AI Systems for Bias: Regularly review AI outputs to identify and correct biases (see the sketch after this list for one simple screening check).
- Implement Ethical AI Standards: Develop and adhere to ethical standards for AI use, ensuring AI applications align with the organization’s values and principles.
- Engage Diverse Teams: Involve diverse teams in the deployment of AI systems to minimize the risk of bias.
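By way of illustration, one common screening check is to compare outcome rates across groups in a sample of AI-assisted decisions and flag large gaps for human review. This is a minimal sketch assuming hypothetical group labels and an illustrative threshold; it is a screening signal, not a complete fairness assessment.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, approved: bool) pairs from AI output.
    Returns the per-group approval rate."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Largest difference in approval rates between any two groups."""
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Hypothetical review sample of AI-assisted screening outcomes.
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
gap = parity_gap(sample)
if gap > 0.2:  # illustrative threshold; set by the AI risk committee
    print(f"Flag for review: selection-rate gap of {gap:.0%}")
```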
Given the rapid pace of AI development, having a dedicated AI risk committee may be beneficial. This committee should oversee AI-related activities, address emerging risks, and ensure AI applications comply with established policies and ethical standards. It can also serve as a resource for employees with AI-related questions or concerns.
AI systems can sometimes produce incorrect or nonsensical outputs, known as hallucinations. These occur when a model generates information that appears plausible but is inaccurate. To mitigate this risk:
- Verify AI Outputs: Always cross-check AI-generated information with reliable sources before using it in decision-making (a sketch of one such check follows this list).
- Limit AI’s Role in Critical Tasks: AI should not be relied on to make final decisions. Instead, final decisions should rest on the professional judgment of the individual assessing the situation.
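To make the verification step concrete, the sketch below accepts an AI-reported figure only when it matches the organization’s system of record within a small tolerance, and otherwise routes it to a human reviewer. The record label, figures, and tolerance are hypothetical assumptions for illustration.

```python
# Hypothetical cross-check: compare a figure extracted by an AI tool against
# the value in the organization's system of record before it is used.
TRUSTED_RECORDS = {"FY2023 total revenue": 4_812_350.00}  # illustrative data

def verify_figure(label: str, ai_value: float, tolerance: float = 0.005) -> bool:
    """Accept the AI-reported figure only if it matches the trusted record
    within a small relative tolerance; otherwise route to a human reviewer."""
    trusted = TRUSTED_RECORDS.get(label)
    if trusted is None:
        return False  # no authoritative source: human review required
    return abs(ai_value - trusted) <= tolerance * abs(trusted)

assert verify_figure("FY2023 total revenue", 4_812_350.00)
assert not verify_figure("FY2023 total revenue", 4_912_350.00)  # hallucinated figure
assert not verify_figure("FY2022 total revenue", 1.0)  # unknown label -> review
```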
AI is an evolving technology that requires careful management to ensure it is used responsibly and ethically. While it presents significant opportunities for efficiency and innovation, it must be approached with caution. By establishing clear guidelines, implementing robust security measures, and continuously educating employees, organizations can harness the benefits of AI while mitigating its risks.
Related Insights: Our keynote presenter from the 2024 Nonprofit Seminar, Philip Deng, talked about ways in which artificial intelligence may be used to transform nonprofit organizations. Listen to the 60-minute recording. >>
If you have questions, please contact Ryan K. Wolff, MBA, Solutions Specialist, at 774.512.4054 or rwolff@aafcpa.com; Vassilis Kontoglis, Partner, Analytics, Automation & IT Security, at 774.512.4069 or vkontoglis@aafcpa.com; or your AAFCPAs Partner.