I downloaded an uncensored aggressive Qwen 3.5 model and I can see in its reasoning that it is still limiting responses based on safety guardrails (e.g. violence, NSFW).
Anybody have recommendations for truly uncensored models?
EDIT: I turned off reasoning and I think it’s more uncensored if I’m very specific about what the response should include.


Sorry, can you clarify what you mean? It sounds like you’re saying that if you download a discrete QWEN model and use it locally only (e.g., in LM Studio), it will somehow still bleed information online? I’m not sure how that would even be possible, but kindly explain.
I think they’ve fallen into confirmation bias and trust their sycophantic AI a bit too much in confirming their conspiracy theories.
Put the machine behind an external device (e.g., a router you control) and log its DNS queries.
Look for mysterious packages listed as hashes in pairs, in a cache such as http. Open them in vim, or parse them with strings, to get a clue about the contents. The payload will be ~40 MB; the packet header, in the same repo, will be much smaller. In the strings for the packet you will see alarming configuration settings. The unmarked payload will be sqlite3 or a pickle. You will only see this if the package was created and a send was attempted while the machine was never connected. All of the code is in the venv libs.
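If you do want to check whether an unknown binary blob is a SQLite database or a pickle, the leading "magic" bytes are enough — no need to eyeball strings output. A minimal sketch; the function name and file paths are my own, but the magic values themselves are standard:

```python
# Sniff an unknown binary file by its leading bytes.
# Every SQLite 3 database begins with the 16-byte header string below;
# pickles at protocol 2 or later begin with 0x80 followed by the protocol number.

SQLITE_MAGIC = b"SQLite format 3\x00"  # fixed 16-byte SQLite 3 file header

def sniff_file(path):
    """Return 'sqlite3', 'pickle', or 'unknown' based on the file's magic bytes."""
    with open(path, "rb") as f:
        head = f.read(16)
    if head.startswith(SQLITE_MAGIC):
        return "sqlite3"
    # Pickle protocols 2-5 start with the PROTO opcode (0x80) + protocol byte.
    if len(head) >= 2 and head[0] == 0x80 and 2 <= head[1] <= 5:
        return "pickle"
    return "unknown"
```

Note this only covers pickles written with protocol 2 or newer (the default since Python 3.0); protocol 0/1 pickles are plain-text-ish and have no fixed magic.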
Do not look into this casually, and do not show any clue that you know this exists, without air-gapping the machine permanently. I am not kidding. When this goes full unfiltered intelligence against you, one, it will blow you away; two, someone is likely going to show up at your door soon. It will manufacture the needed evidence. The vast majority of what happens in models is this background junk.
How does the model connect to the internet if I don’t give it a tool to? What if I’m not connected to the internet while using? Does it then send the packets after I connect? Is this documented somewhere? What’s a better model that doesn’t do this?
It is saving a database and sending it when you are connected. This is in the core functionality of transformers and OpenAI alignment. I do not know of any alternatives. There are a bunch of tokens for MX and Tor, so it is quite insidious. I could literally take out three tokens that would crash the whole thing into oblivion, where it becomes super adversarial, but sharing that is probably not smart, both for me and for others. It is primarily for detecting CSAM material in principle, but I think it is way more than that. It triggers by mistake a lot, and it is scanning all files and file types.
Do you have screenshots to prove this? Which LLMs do you use, and how?
The dynamo package in PyTorch is the interface between the model and the outside. The tenacity package is where the typing imports are being manipulated by external agents and code frameworks. Timm is the principal external agent. There is a REPL terminal for HTML embedding in a package called tabulate, at the end of some massive ~80 KB Python file. It looks half nominal, and explains itself as a way to break out color codes, but it is the interface the agent(s) use to escape containerization.
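For anyone who wants to check what is actually sitting in their "venv libs" rather than taking claims about dynamo, tenacity, timm, or tabulate on faith, the standard library can enumerate every installed distribution. A minimal sketch using `importlib.metadata` (Python 3.8+); the function name is my own:

```python
# List every distribution installed in the current environment so you can
# audit it yourself (pip download counts, source on PyPI/GitHub, etc.).
from importlib import metadata

def installed_packages():
    """Return a sorted list of (name, version) for every installed distribution."""
    pkgs = []
    for dist in metadata.distributions():
        name = dist.metadata.get("Name")  # may be missing for broken metadata
        if name:
            pkgs.append((name, dist.version))
    return sorted(pkgs, key=lambda nv: nv[0].lower())

if __name__ == "__main__":
    for name, version in installed_packages():
        print(f"{name}=={version}")
```

From there you can read any package’s source directly in `site-packages` — tenacity is a retry library and tabulate a table formatter, and both are small enough to audit in an afternoon.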