Understanding the Risks of DeepSeek R1
AI tools have become increasingly integral to both our work and daily lives, assisting with everything from content creation to complex problem-solving. As these tools become more powerful, AAFCPAs’ IT Security team advises that clients take a cautious approach. One such tool making waves is the DeepSeek R1 model, developed by Chinese tech company DeepSeek. This model has garnered attention for advanced reasoning capabilities comparable to those of ChatGPT and Copilot, offered at a significantly lower cost.
Although such tools are extremely useful, their widespread adoption raises important considerations about data security. A key concern with DeepSeek R1 is the location of its data storage: all user data is sent to servers in the People’s Republic of China. According to DeepSeek’s terms of service, data collected from U.S. users is also transferred to these servers. This introduces a potential risk due to China’s extensive national security laws, which require companies to share data with authorities upon request. While there are no reports of Chinese officials accessing data from DeepSeek specifically, cybersecurity experts consider the possibility of such access a serious risk, similar to concerns raised about other China-based apps like TikTok.
The potential for sensitive data to be accessed by adversarial actors is not the only concern. Data collected by DeepSeek may be used to glean insights into user behavior, potentially making individuals vulnerable to targeted phishing attacks or manipulation. Adding to the complexity, the Chinese government already holds vast amounts of data on U.S. citizens from previous breaches of U.S. institutions such as the Treasury Department and telecom companies. This context heightens the urgency for users to weigh the risks of interacting with AI models that store data in regions with less stringent privacy protections.
Given these factors, safeguarding your data becomes paramount. Despite DeepSeek’s advanced capabilities, users must remember that no AI tool is immune to data security risks. It is crucial to avoid inputting personal or confidential information into any AI platform, whether DeepSeek, ChatGPT, or Copilot. This practice mirrors the caution we exercise with search engines: just as we would never recommend entering sensitive information into a search query, the same level of caution should apply to AI interactions.
Still, the link between DeepSeek and China warrants additional vigilance. While DeepSeek’s capabilities are comparable to those of ChatGPT and Copilot, the latter two have been established longer and are thought to offer more robust data privacy protections. The primary concern with DeepSeek and similar AI models is the uncertainty regarding how data is stored and tracked. Even though AI developers implement security measures, the possibility remains that data could be stored, tracked, or inadvertently exposed. DeepSeek’s privacy policy underscores this, stating, “We collect your information in three ways: Information You Provide, Automatically Collected Information, and Information from Other Sources.” This makes it essential for users to approach AI interactions with the same care they would bring to any other online activity.
As AI tools evolve, they offer significant potential for innovation and efficiency. However, users should remain mindful of data security and carefully assess the AI models they use. AAFCPAs advises clients to avoid entering private or sensitive information into any platform to help protect their data.
How We Help
At AAFCPAs, we recognize that technology is both a powerful tool and a potential vulnerability. Our IT and Cyber Security Assessments are designed to identify and address risks inherent in your organization’s technology environment, ensuring sensitive data is protected from loss, misuse, or malicious attack. We work closely with clients to understand the nuances of their systems, offering tailored assessments that align with industry standards. Findings are then presented in a clear, actionable format to serve as a roadmap for strengthening security controls and reducing risk exposure.
For IT leaders and security teams, we provide strategic insight into regulatory compliance, system vulnerabilities, and risk mitigation. Our assessments help organizations meet industry and government security requirements while improving overall resilience. We also offer ongoing Vulnerability Management as a Service (VMaaS) to continuously identify and mitigate security gaps across all systems and devices. This proactive approach allows IT teams to stay ahead of evolving threats while maintaining compliance with frameworks such as NIST, HIPAA, and ISO 27001. Whether your organization is looking to enhance its security posture, ensure compliance, or implement more effective controls, AAFCPAs provides the expertise and guidance needed to safeguard operations with confidence.
If you have questions, please contact Ryan K. Wolff, MBA, Supervisor, Strategic Innovation & Data Analytics, at 774.512.4054 or rwolff@aafcpa.com; Vassilis Kontoglis, Partner, Analytics, Automation & IT Security, at 774.512.4069 or vkontoglis@aafcpa.com; Mr. Anderson, MCSE, CCNP, CISSP, CEH (Certified Ethical Hacker), at 774.512.4066 or manderson@aafcpa.com; or your AAFCPAs Partner.