Recently a user posted a comment on one of my posts about Qwen secretly sending information over the internet even if run locally.
Is there any privacy concern for locally run models to share your conversations or data? What if they can connect to the internet via a tool or MCP?
I’ve never heard that story. I think they might be hallucinating or trolling. Of course, if you pull random Docker containers or execute some random GitHub project to try a new AI, you’re running other people’s code, and that code can do arbitrary things…
But that’s not what we do. Usually, we download models in safetensors or GGUF format, and those formats are specifically designed to prevent this very thing: they’re pure data and don’t contain executable code.
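You can even peek at a GGUF file yourself and see it’s just data. A minimal sketch (the function name is mine, and the format has more fields after these, but a GGUF file really does start with the magic bytes `GGUF` followed by a version number):

```python
import struct

def read_gguf_header(path):
    """Read the magic and version from a GGUF file header.

    A GGUF file begins with the 4-byte magic b"GGUF" followed by a
    little-endian uint32 format version -- it's pure data, not code.
    """
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            raise ValueError(f"not a GGUF file (magic={magic!r})")
        (version,) = struct.unpack("<I", f.read(4))
    return version
```

If the magic bytes don’t match, whatever you downloaded isn’t a GGUF file and your backend should refuse to load it anyway.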
Tools and MCP servers are a different story. Once you give your LLM access to the internet, it …well… has access to the internet. It mostly does what it’s supposed to do. But there are occasional stories about someone’s AI agent deleting all their email, or reenacting sci-fi tropes and trying to use the internet to blackmail its user. AI can also make mistakes. Like you tell it to write a software project and it accidentally includes your password and API key. Or it shares private information about you with other people because you granted it generous access to everything. The news about OpenClaw is full of hilarious anecdotes about things going wrong.
Thanks! So any gguf file should be safe? I’ve been downloading them from huggingface.
Yeah it’s wild what some people are letting models do with MCP. Really the Wild West.
Yes. As far as I know, any GGUF file should be completely safe. There have been some bugs/security vulnerabilities early on in llama.cpp, but those were fixed, and I think overall they have a good track record.
Issues might come after that, if you run some agents on top of it and give them access to your computer. But you don’t have to do that. If you just talk to it, I don’t see any reason to be alarmed. Other than the usual stuff: keep using your own brain once in a while, and don’t blindly trust what AI chatbots tell you, they give inaccurate information all the time 😅
Thanks! Good to know.
I think I saw a similar comment on here last month. It was a user saying that Gemma claimed to send his chats to Google. Which is clearly a hallucination.
I’m not a professional or expert on anything security and/or AI related but this is my take:
- In general, no data will be sent anywhere if you use the big/trustworthy open-source backends.
- Unless there are bigger security issues, the model files themselves shouldn’t contain such code.
- Data could be sent via MCP/tool calling, but you can see each tool call as it happens, so it can’t be hidden.
If you really don’t trust something, you can always use a network sniffer.
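On Linux you don’t even need a separate tool: established TCP connections show up in /proc/net/tcp, which you can decode with a few lines of Python. A rough sketch (helper names are mine, IPv4 only; the kernel stores the address as little-endian hex):

```python
def parse_proc_net_tcp_line(line):
    """Decode one data line of Linux's /proc/net/tcp.

    Returns (remote_ip, remote_port, state). The remote endpoint is
    stored as little-endian hex; state "01" means ESTABLISHED.
    """
    fields = line.split()
    addr_hex, port_hex = fields[2].split(":")  # remote endpoint
    # IPv4 bytes are stored reversed, two hex digits each
    octets = [str(int(addr_hex[i:i + 2], 16)) for i in (6, 4, 2, 0)]
    return ".".join(octets), int(port_hex, 16), fields[3]

def established_remotes():
    """List (ip, port) of all currently established TCP connections."""
    remotes = []
    with open("/proc/net/tcp") as f:
        next(f)  # skip the header line
        for line in f:
            ip, port, state = parse_proc_net_tcp_line(line)
            if state == "01":  # ESTABLISHED
                remotes.append((ip, port))
    return remotes
```

Run it while you chat with the model: a purely local backend shouldn’t show any new remote endpoints appearing.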
Thanks! Is Qwen considered trustworthy?
I’ll check out a network sniffer.
I trust them as much as Google, Meta, or any other big tech company. I won’t use their cloud services, but I do run their local models.
For sure. Thanks!



