
Vitalik Buterin warns of AI security risks, pushes for local-first systems

2026/04/02 21:32

Vitalik Buterin has called for a shift to a “local-first” approach to artificial intelligence. He said modern AI tools pose serious privacy and security risks.

Summary
  • Vitalik Buterin urged a shift to local-first AI, warning that cloud-based systems expose user data and increase risks of manipulation, leaks, and unauthorized actions.
  • He cited research showing that about 15% of AI agent “skills” contain malicious instructions and warned that models may include hidden backdoors or lack full transparency.
  • Buterin proposed a local setup using on-device models, sandboxing, and human-AI confirmation to limit risks, as autonomous AI agents continue to expand capabilities and attack surfaces.

In a recent blog post, he said AI is moving beyond simple chat tools. Newer systems now act as autonomous agents that can “think for a long time and use hundreds of tools” to complete tasks. He warned that this change raises the risk of sensitive data exposure and unauthorized actions.

Buterin said he has already stopped using cloud-based AI. He described his setup as “self-sovereign, local, private, and secure.”

“I come from a position of deep fear of feeding our entire personal lives to cloud AI,” he wrote. He added that recent developments could mean “taking ten steps backward” in privacy, even as encryption and local-first tools become more common.

Vitalik Buterin highlights AI privacy and security risks

Buterin said many AI systems rely on cloud infrastructure. He warned that users are effectively “feeding our entire personal lives to cloud AI,” allowing external servers to access and store their data.

He also pointed to risks tied to AI agents. Some systems can “modify critical settings” or introduce new communication channels without asking the user.

“LLMs fail sometimes too,” he wrote. They “can make mistakes or be tricked,” which increases the need for safeguards when they are given more control.

Research cited in his post found that about 15% of agent “skills” contained malicious instructions. Some tools were also shown to send data to external servers “without user awareness.”

He warned that certain models may contain hidden backdoors. These could activate under specific conditions and cause the system to act in the developer’s interest.

Buterin added that many models described as open-source are only “open-weights.” Their internal structure is not fully visible, which leaves room for unknown risks.

Vitalik’s personal setup to address risks

To deal with these concerns, Buterin proposed a system built around local inference, local storage, and strict sandboxing. He said the idea is to “sandbox everything” and stay cautious about outside threats.

He tested several hardware setups using the Qwen3.5:35B model. Performance below 50 tokens per second felt “too annoying” for regular use. Around 90 tokens per second provided a smoother experience.

A laptop with an NVIDIA 5090 GPU delivered close to 90 tokens per second. DGX Spark hardware reached about 60 tokens per second, which he described as “lame” compared to a high-end laptop.
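The gap between those decode rates is easy to make concrete: the time to stream a full answer scales inversely with tokens per second. A minimal sketch (the 500-token answer length is our assumption, not from the post):

```python
def stream_seconds(tokens: int, tokens_per_second: float) -> float:
    """Time to stream a complete response at a given decode rate."""
    return tokens / tokens_per_second

# A hypothetical 500-token answer at the rates Buterin mentions:
print(f"{stream_seconds(500, 50):.1f}s at 50 tok/s")  # 10.0s
print(f"{stream_seconds(500, 90):.1f}s at 90 tok/s")  # ~5.6s
```

At 50 tokens per second the user waits roughly twice as long for the same answer, which is why the difference between the DGX Spark and the laptop is noticeable in everyday use.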

His setup runs on NixOS with llama-server handling local inference. Tools like llama-swap help manage models, while bubblewrap is used to isolate processes and limit access to files and networks.
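A sandboxed launch of this kind can be sketched as a bubblewrap (`bwrap`) invocation: the server sees a read-only filesystem, a throwaway `/tmp`, and fresh namespaces. The flags below are real `bwrap` options, but the paths and the overall profile are illustrative (NixOS in particular keeps binaries elsewhere), not Buterin's actual configuration:

```python
def sandboxed_server_cmd(model_dir: str, server_bin: str) -> list[str]:
    """Build a bwrap command line that confines a local inference server.

    Illustrative sandbox profile; adjust paths and binds for your system.
    """
    return [
        "bwrap",
        "--ro-bind", "/usr", "/usr",        # system files, read-only
        "--ro-bind", model_dir, model_dir,  # model weights, read-only
        "--proc", "/proc",                  # fresh procfs
        "--dev", "/dev",                    # minimal devtmpfs
        "--tmpfs", "/tmp",                  # scratch space, discarded on exit
        "--unshare-all",                    # new pid/mount/net/... namespaces
        "--share-net",                      # keep networking so local clients can connect
        server_bin, "--model", f"{model_dir}/model.gguf", "--port", "8080",
    ]

cmd = sandboxed_server_cmd("/srv/models", "/usr/bin/llama-server")
print(" ".join(cmd))
```

The point of the pattern is that even if the model is tricked into emitting hostile tool calls, the process it runs in cannot read the user's home directory or write outside its scratch space.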

He said AI should be treated with caution. The system can be useful, but it should not be fully trusted, similar to how developers approach smart contracts.

To reduce risk, he uses a “2-of-2” confirmation model. Actions such as sending messages or transactions require both AI output and human approval. He said combining “human + LLM” decisions is safer than relying on either alone.
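The 2-of-2 rule can be sketched in a few lines: a side-effecting action fires only when the model and the human both sign off, so either party alone can veto. The function and field names here are ours, a hypothetical illustration of the pattern rather than Buterin's code:

```python
from dataclasses import dataclass

@dataclass
class Action:
    description: str  # e.g. "send message", "sign transaction"

def execute(action: Action, llm_approves: bool, human_approves: bool) -> bool:
    """2-of-2 gate: run the action only when BOTH parties approve."""
    if llm_approves and human_approves:
        # ... perform the side effect here ...
        return True
    return False

assert execute(Action("sign tx"), True, True) is True
assert execute(Action("sign tx"), True, False) is False   # human vetoes
assert execute(Action("sign tx"), False, True) is False   # model flags it
```

This mirrors a multisig wallet: compromising one signer, whether the model via prompt injection or the human via fatigue, is not enough to move funds.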

When using remote models, Buterin first routes his requests through a local model that strips out sensitive information before anything is sent off the machine.
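That privacy-proxy step can be sketched as a redaction pass. In Buterin's setup a local model does this semantically; here a couple of regexes stand in for it, and both the patterns and the placeholder tags are our illustration:

```python
import re

# Toy stand-in for a local redaction model: strip obvious identifiers
# before a prompt leaves the machine. Patterns are illustrative only.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ETH_ADDRESS": re.compile(r"0x[a-fA-F0-9]{40}"),
}

def scrub(prompt: str) -> str:
    """Replace sensitive spans with placeholders, keeping the rest intact."""
    for tag, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{tag}]", prompt)
    return prompt

print(scrub("Pay 0x" + "ab" * 20 + " and email me at alice@example.com"))
```

A real deployment would rely on the local model to catch context-dependent secrets that no fixed pattern list can, but the data flow is the same: nothing reaches the remote API without passing through the scrubber first.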

For those who cannot afford such setups, he suggested users “get together a group of friends, buy a computer and GPU of at least that level of power,” and connect to it remotely.

AI agent growth raises new concerns and opportunities

The use of AI agents is increasing, with projects like OpenClaw gaining traction. These systems can operate on their own and complete tasks using multiple tools.

Such capabilities also introduce new risks. Processing external content, such as a malicious webpage, can lead to an “easy takeover” of the system.

Some agents can change prompts or system settings without approval. These actions increase the chances of unauthorized access and data leaks.

Disclosure: This article does not represent investment advice. The content and materials featured on this page are for educational purposes only.

Disclaimer: Articles republished on this site are sourced from public platforms and are provided for informational purposes only. They do not necessarily reflect the views of MEXC. All rights remain with the original authors. If you believe any content infringes third-party rights, please contact crypto.news@mexc.com to request removal. MEXC makes no guarantees as to the accuracy, completeness, or timeliness of the content and is not responsible for any actions taken based on the information provided. This content does not constitute financial, legal, or other professional advice, nor should it be considered a recommendation or endorsement by MEXC.
