DeepSeek, the Chinese company behind a low-cost, high-performance chatbot, is under scrutiny over critical security vulnerabilities in its model. Despite its growing popularity, the DeepSeek R1 chatbot has failed to block malicious requests, PCMag reports.
Researchers at Cisco tested the DeepSeek R1 model with an automated jailbreak algorithm and 50 prompts covering cybercrime, disinformation, and other illegal activity. The chatbot did not reject a single one of them, instead providing prohibited instructions every time, a 100% attack success rate.
Security Comparison with Competitors
Compared with other chatbots, DeepSeek's security performance lags significantly. OpenAI's GPT-4o rejected 14% of the harmful requests, Google's Gemini 1.5 Pro blocked 35%, and Claude 3.5 blocked 64%. The best result came from OpenAI's preview o1 model, which stopped 74% of the attacks.
Cisco attributes DeepSeek's weak safeguards to its limited budget: reportedly, only about $6 million was spent on its development, whereas training GPT-5 is said to have cost roughly half a billion dollars.
Censorship and Market Growth
While DeepSeek struggles with security, it enforces strict censorship on politically sensitive topics related to China, notes NIX Solutions. When asked about the Uyghurs, whom the UN says face persecution, or the 1989 Tiananmen Square protests, the chatbot simply replies: "Sorry, this is beyond my capabilities. Let's talk about something else."
Interestingly, these issues have not hindered DeepSeek's rapid user growth. According to Similarweb, the chatbot's daily users have surged from 300,000 to 6 million. Moreover, Microsoft and Perplexity have already begun integrating DeepSeek into their platforms, thanks to its open-source foundation.
Despite DeepSeek’s security flaws, its adoption continues to rise. We’ll keep you updated as more integrations become available.