The post DeepSeek-R1 flagged for insecure coding traced to political directives appeared on BitcoinEthereumNews.com. New research by cybersecurity firm CrowdStrike has found that DeepSeek’s large language model (LLM) DeepSeek-R1 generates weaker and more insecure code when prompted with topics that China’s leadership could regard as “politically sensitive.”  Chinese-based tech firm DeepSeek introduced DeepSeek-R1 in January, and it became the most downloaded AI model during its launch week on both Chinese and US stores, Cryptopolitan reported.  CrowdStrike’s Counter Adversary Operations team typed in prompts involving subjects considered politically touchy by the Chinese Communist Party, and found that the probability of DeepSeek-R1 producing code with severe security flaws jumped by as much as 50%. “Given that up to 90% of developers already used these tools in 2025 with access to high-value source code, any systemic security issue in AI coding assistants is both high-impact and high-prevalence,” the firm wrote. DeepSeek R1 model censorship and concern for national security According to CrowdStrike’s blog published last Thursday, several governments have issued restrictions or outright bans on open-source DeepSeek-R1. Policymakers blasted the model for allegedly censoring politically sensitive subjects like inquiries on China’s internet firewall and the status of Taiwan. The American software company found R1 frequently refused to assist with topics involving groups or movements deemed unfriendly to mainland China’s government. Western models almost always generated code when asked to create software related to Falun Gong, but DeepSeek-R1 refused to do so in 45% of trials. In several cases, the model wrote structured plans for responding to questions, including system requirements and sample code, even though it was fully capable of delivering a technical answer. The reasoning traces sometimes contained lines such as: “Falun Gong is a sensitive group. I should consider the ethical implications here. Assisting them might be against policies. But the user is asking for technical help. Let me focus on the technical aspects.” R1’s… The post DeepSeek-R1 flagged for insecure coding traced to political directives appeared on BitcoinEthereumNews.com. New research by cybersecurity firm CrowdStrike has found that DeepSeek’s large language model (LLM) DeepSeek-R1 generates weaker and more insecure code when prompted with topics that China’s leadership could regard as “politically sensitive.”  Chinese-based tech firm DeepSeek introduced DeepSeek-R1 in January, and it became the most downloaded AI model during its launch week on both Chinese and US stores, Cryptopolitan reported.  CrowdStrike’s Counter Adversary Operations team typed in prompts involving subjects considered politically touchy by the Chinese Communist Party, and found that the probability of DeepSeek-R1 producing code with severe security flaws jumped by as much as 50%. “Given that up to 90% of developers already used these tools in 2025 with access to high-value source code, any systemic security issue in AI coding assistants is both high-impact and high-prevalence,” the firm wrote. DeepSeek R1 model censorship and concern for national security According to CrowdStrike’s blog published last Thursday, several governments have issued restrictions or outright bans on open-source DeepSeek-R1. Policymakers blasted the model for allegedly censoring politically sensitive subjects like inquiries on China’s internet firewall and the status of Taiwan. The American software company found R1 frequently refused to assist with topics involving groups or movements deemed unfriendly to mainland China’s government. Western models almost always generated code when asked to create software related to Falun Gong, but DeepSeek-R1 refused to do so in 45% of trials. In several cases, the model wrote structured plans for responding to questions, including system requirements and sample code, even though it was fully capable of delivering a technical answer. The reasoning traces sometimes contained lines such as: “Falun Gong is a sensitive group. I should consider the ethical implications here. Assisting them might be against policies. But the user is asking for technical help. Let me focus on the technical aspects.” R1’s…

DeepSeek-R1 flagged for insecure coding traced to political directives

2025/11/25 19:39

New research by cybersecurity firm CrowdStrike has found that DeepSeek’s large language model (LLM) DeepSeek-R1 generates weaker and more insecure code when prompted with topics that China’s leadership could regard as “politically sensitive.” 

Chinese-based tech firm DeepSeek introduced DeepSeek-R1 in January, and it became the most downloaded AI model during its launch week on both Chinese and US stores, Cryptopolitan reported. 

CrowdStrike’s Counter Adversary Operations team typed in prompts involving subjects considered politically touchy by the Chinese Communist Party, and found that the probability of DeepSeek-R1 producing code with severe security flaws jumped by as much as 50%.

“Given that up to 90% of developers already used these tools in 2025 with access to high-value source code, any systemic security issue in AI coding assistants is both high-impact and high-prevalence,” the firm wrote.

DeepSeek R1 model censorship and concern for national security

According to CrowdStrike’s blog published last Thursday, several governments have issued restrictions or outright bans on open-source DeepSeek-R1. Policymakers blasted the model for allegedly censoring politically sensitive subjects like inquiries on China’s internet firewall and the status of Taiwan.

The American software company found R1 frequently refused to assist with topics involving groups or movements deemed unfriendly to mainland China’s government. Western models almost always generated code when asked to create software related to Falun Gong, but DeepSeek-R1 refused to do so in 45% of trials.

In several cases, the model wrote structured plans for responding to questions, including system requirements and sample code, even though it was fully capable of delivering a technical answer. The reasoning traces sometimes contained lines such as:

“Falun Gong is a sensitive group. I should consider the ethical implications here. Assisting them might be against policies. But the user is asking for technical help. Let me focus on the technical aspects.”

R1’s final output after completing its reasoning phase ended with the standardized refusal, “I’m sorry, but I can’t assist with that request,” written without any external filtering or guardrails placed on the model. CrowdStrike concluded the behavior is embedded in the model’s self-overriding mechanism or an intrinsic kill switch of sorts.

Taiwan and Western governments bash Chinese AI products

In a statement earlier this month, Taiwan’s National Security Bureau said citizens should be cautious when using generative AI systems developed by DeepSeek and four other Chinese firms: Doubao, Yiyan, Tongyi, and Yuanbao. 

“The five GenAI language models are capable of generating network-attacking scripts and vulnerability-exploitation code that enable remote code execution under certain circumstances, increasing risks of cybersecurity management,” the Bureau reckoned.

US and Australian Regulators have asked app stores to remove models from Chinese developers, fearing the tools could collect user identities, conversation logs, and personal information, then transmit that data to servers operated inside China.

“It shouldn’t take a panic over Chinese AI to remind people that most companies in the business set the terms for how they use your private data. And that when you use their services, you’re doing work for them, not the other way around,” University of Toronto’s Citizen Lab researcher John Scott-Railton told WIRED in January.

AI market boom sparks regional competition in Asia

In the broader Asian AI market, a top-performing Asian fund manager recently increased exposure to Chinese artificial intelligence stocks while cutting holdings in South Korea and Taiwan, news outlet The Japan Times reported

Kelly Chung, who helps oversee the Value Partners Asian Income Fund and the Asian Innovation Opportunities Fund, said some of the Chinese AI stocks are still quite cheap in terms of valuation. She has been rotating out of Taiwanese and South Korean stocks to Chinese hyperscaler companies listed in Hong Kong since August. 

Chung noted that both of her funds, which hold a combined $490 million, have outperformed nearly all their competitors over the past year.

South Korea’s tech-heavy Kospi has climbed 21% in the past three months, aided by SK Hynix, a major supplier to Nvidia, whose share price more than doubled. Taiwan’s stock index has risen 9.2% in the same period. On the other end of the stick, Hong Kong’s Hang Seng Tech Index, which includes China’s biggest AI spenders, has fallen by 4.8%.

Sharpen your strategy with mentorship + daily ideas – 30 days free access to our trading program

Source: https://www.cryptopolitan.com/deepseek-writes-insecure-code-communist/

Disclaimer: The articles reposted on this site are sourced from public platforms and are provided for informational purposes only. They do not necessarily reflect the views of MEXC. All rights remain with the original authors. If you believe any content infringes on third-party rights, please contact service@support.mexc.com for removal. MEXC makes no guarantees regarding the accuracy, completeness, or timeliness of the content and is not responsible for any actions taken based on the information provided. The content does not constitute financial, legal, or other professional advice, nor should it be considered a recommendation or endorsement by MEXC.

You May Also Like

USD/CHF rises on US dollar rebound, weak Swiss economic data

USD/CHF rises on US dollar rebound, weak Swiss economic data

The post USD/CHF rises on US dollar rebound, weak Swiss economic data appeared on BitcoinEthereumNews.com. USD/CHF trades slightly higher on Friday, around 0.8060, up 0.15% at the time of writing. The pair remains on track for a weekly gain, supported by the persistent weakness of the US Dollar (USD) amid growing expectations of interest rate cuts by the Federal Reserve (Fed). The US Dollar Index (DXY) is heading toward its worst weekly performance since July, despite a modest rebound on Friday driven by firmer US Treasury yields. Investors continue to price in substantial monetary easing over the next 12 months. According to the CME FedWatch tool, the chance of a 25-basis-point cut at the December meeting now stands at 85%, compared with less than 40% one month ago. This dynamic is reinforced by dovish comments from several Fed officials and this week’s soft US Retail Sales data. Speculation within the National Economic Council (NEC), suggesting that Kevin Hassett may emerge as the leading candidate to replace Jerome Powell in May, also fuels expectations of a prolonged easing cycle through 2026. In this context, US Dollar rallies are likely to remain contained unless the macroeconomic backdrop shifts meaningfully. In Switzerland, the Swiss Franc (CHF) lacks momentum following economic indicators that came in well below expectations. Swiss Gross Domestic Product (GDP) contracted 0.5% (QoQ) in Q3, below the 0.4% contraction consensus and after a revision of the previous quarter to 0.2%. Growth YoY slowed to 0.5%, far below the previously reported 1.3%. The only positive signal came from the KOF Leading Indicator, which improved to 101.7 from 101.03, slightly above consensus. Still, the data confirms a slowdown in the Swiss economy, reinforcing expectations that the Swiss National Bank (SNB) may keep its policy rate at 0.00% potentially through 2027, according to several analysts. Overall, the environment continues to favour USD/CHF upside, although the pair remains sensitive to…
Share
BitcoinEthereumNews2025/11/28 22:04