Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Semi-obligatory thanks to @dgerard for starting this.)
This is a thought I’ve been entertaining for some time, but this week’s discussion about Ars Technica’s article on Anthropic, as well as the NIH funding freeze, finally prodded me to put it out there.
A core strategic vulnerability that Musk, his hangers-on, and geek culture more broadly haven’t cottoned onto yet: Space is 20th-century propaganda. Certainly, there is still worthwhile and inspirational science to be done with space probes and landers; and the terrestrial satellite network won’t dwindle in importance. I went to high school with a guy who went on to do his PhD and get into research through working with the first round of micro-satellites. Resources will still be committed to space. But as a core narrative of technical progress to bind a nation together? It’s gassed. The idea that “it might be ME up there one day!” persisted through the space shuttle era, but it seems more and more remote. Going back to the moon would be a remake of an old television show that went off the air because people ended up getting bored with it the first time. Boots on Mars (at least healthy boots with a solid chance to return home) are decades away, even if we start throwing Apollo money at it immediately. The more outlandish ideas like orbital data centers and asteroid mining don’t have the same inspirational power, because they are meant to be private enterprises operated by thoroughly unlikeable men who have shackled themselves to a broadly destructive political program.
For better or worse, biotechnology and nanotechnology are the most important technical programs of the 21st century, and by backgrounding this and allowing Trump to threaten funding, the tech oligarchs kowtowing to him right now are undermining themselves. Biotech should be obvious, although regulatory capture and the impulse for rent-seeking will continue to hold it back in the US. I expect even more money to be thrown at nanotechnology manufacturing going into the 2030s, to try to overcome the fact that semiconductor scaling is hitting a wall, although most of what I’ve seen so far is still pursuing the Drexlerian vision of MEMS emulating larger mechanical systems… which, if it’s not explicitly biocompatible, is likely going down a cul-de-sac.
Everybody’s looking for a positive vision of the future to sell, to compete with and overcome the fraudulent tech-fascists who lead the industry right now. A program of accessible technology at the juncture of those two fields would not develop overnight, but could be a pathway there. Am I off base here?
Hmm, any sort of vision for generating public support for developing a technology has to have either ideological backing or a profit incentive. I don’t mean that the future must be profitable; rather, that you don’t get the space race if western powers aren’t afraid of communism appearing as a viable alternative to capitalism, on both ideological and commercial fronts.
Unfortunately, a vision of that kind is necessarily technofascist. Rather than look for a tech-forward vision of the future, we need to deprogram ourselves and unlearn the unspoken narratives that prop up capitalism and liberal democracy as the only viable forms of society. We need to dismantle the systems and structures that require complex political buy-in for projects that are clearly good for society at large.
Uh, I guess I’ve kind of gone completely orthogonal to your point of discussion. I’m kind of saying the collapse of the US is inevitable.
Buckle up, humans, because humanity’s last exam just dropped: https://lastexam.ai/ (Hacker News discussion). May the odds be ever in your favor.
just mark C for every answer if you don’t get it, that’s what the State of California taught me in elementary school
You may have heard that Catturd doesn’t have any fiber in his diet and was hospitalized for bowel blockage. (Best sneer I’ve seen so far: “can’t turd.”) Along similar lines, Srid isn’t taking his statins for high cholesterol caused by a carnivore diet.
Meta: I’m kind of pissed that Catturd is WP notable but laughing my ass off at the page for carnivore diets. Life takes and gives.
what, are statins woke now?
edit wtf 'sneak did this too? https://news.ycombinator.com/item?id=42800452
Refusal of statins was one of the most prominent anti-medical trends I remember observing among right-wing acquaintances, even well before such people got on the anti-vax bandwagon. To be sure, some people experience bad side effects (including my mom, at least for a while), but it definitely seemed like a few bits of anecdata in the early 2010s built into a broad narrative of “doctor’s tryin’ ta kill ya”.
I love how srid deflects by claiming no one has reported bad outcomes from the “meat and butter” diet… I found an endless stream of anecdotes from Google, like this.
can you imagine sneak, of all people, telling you you’re crazy and probably being right?
you could say that being full of shit finally caught up to him *rimshot*
My favorite part of the carnivore diet is that apparently scurvy can become enough of a problem that you’ll see references to “not wanting to start the vitamin C debate” in forums.
I’m pretty sure it’s not just a me thing, but I thought we all knew that sailors kept citrus on board specifically to prevent scurvy by providing vitamin C and that we all learned about this as kids when either a teacher tried to make the colonial era interesting or we got vaguely curious about pirates at some point.
scurvy? what year is it? maybe they need to include mice in their diet, since rodents can make their own vitamin C (iirc)
if they start eating rat, does that technically define them as cannibals? given how many of their ilk become the diet target…
I learnt about it because I was so damn interested in sauerkraut.
deleted by creator
@o7___o7 @Amoeba_Girl also delicious
Here’s a bonus high-fiber diet pro-tip: Metamucil tastes like old socks and individual capsules have hardly any fiber anyway; I eat Triscuits and Oroweat Double-Fiber bread instead because they both taste much, much better. Also chili is the food of the gods.
So that’s how to translate “Yo, this diet is for chumps” into Wikipedian.
Polish commentary on Hitlergruß: https://bsky.app/profile/smutnehistorie.bsky.social/post/3lgaoyezhgc2c
Translation:
- it’s just a Hindu symbol of prosperity
- a normal Roman salute
- regular rail car
- wait a second
Rationalist death count keeps climbing https://xcancel.com/jessi_cata/status/1882182975804363141#m
Does anyone know who or what is Ziz in this context? Google says jewish mythological beast.
edit: found this:
The Zizians were a cult that focused on relatively extreme animal welfare, even by EA standards, and used a Timeless/Updateless decision theory, where being aggressive and escalatory was helpful as long as it helped other world branches/acausally traded with other worlds to solve the animal welfare crisis.
They apparently made a new personality called Maia in Pasek, and this resulted in Pasek’s suicide.
They also used violence or the threat of violence a lot to achieve their goal.
This caused many problems for Ziz, and she now is in police custody.
that blog in question comes with its own private glossary and is just as dense and long as you can expect. i spent half an hour trying to figure it out and noped out when i noticed the scroll bar position
tfw when you recognize it’s Quality Rationalist Content
it’s a workday, i’m too sober for this
it’s like looking from outside at minor splinter groups within scientology, and the purported voice of reason says that the right way to deal with these transgressors is to return to scientologist orthodoxy. it even includes seasteading
It’s another one of those things that the further you read the worse it gets, isn’t it?
I was reading something David wrote about it at one point, but it seemed like lore too cursed even for the rationalist milieu
yep.
What the fuck?
The agents were conducting a routine roving patrol when they stopped Bauckholt and a female in the town close to the border. During a records check, the unidentified female occupant was removed from the vehicle for further questioning, broke free, and began shooting at the agents, the incident report shows.
After the female suspect was hit by return fire, Bauckholt emerged from the vehicle and also began firing on the agents. He sustained gunshot wounds and was pronounced dead.
… What the fuck?
The zizian angle makes this so weird. Like, on top of probably being stopped for driving while trans, they might have instigated the shootout to prove to the basilisk that their parallel universe selves/simulated iterations/eternal souls can’t be acausally blackmailed.
Ziz is a boogeyman figure to them at this point. I think it’s deliberate, to deflect from the sex abuse stuff (ziz was a part of that whole controversy).
Yeah there is so much untold in the reporting and I’m not going to trust either tpots or border cops. I have no idea whatsoever what to make of this.
Maybe someone will finally write that article…
Jesus wept, it’s so frustratingly obvious that anytime some flavor of cop kills someone, the news media reporting (if any) will be this weird Yoda grammar pidgin.
The femoidically gendered female shot with its gun by very personally pulling the trigger, with this viscerally physical action performed by the said femalian in most pointedly concrete terms amounting to it (the femaloidistical entity, a specimen of the species known as females) firing lethal gunshots at the border patrol with the female’s own two hands.
Subsequently return fire manifested itself from somewhere and came into contact with the female suspect female. The Justice Enforcement Officers involved in the situation were made a part of a bilateral exchange of gunfire between the shooting female and the officers situated in the scenario in which shooting was, to some extent, quite possibly performed from their side as well.
CIDR 2025 is ongoing (Conference on Innovative Data Systems Research). It’s a very good conference in computer science, specifically database research (in CS, conferences play the role that journals do in other sciences). And they have a whole session on LLMs called “LLMs ARE THE NEW NO-SQL”
I haven’t had time to read the papers yet (believe me, I will), but the abstracts are spicy
We systematically develop benchmarks to study [the problem] and find that standard methods answer no more than 20% of queries correctly, confirming the need for further research in this area.
(Text2SQL is Not Enough: Unifying AI and Databases with TAG, Biswal et al.)
Hey guys and gals, I have a slightly different conclusion, maybe a baseline 20% correctness is a great reason to not invest a second more of research time into this nonsense? Jesus DB Christ.
I’d also like to shoutout CIDR for setting up a separate “DATABASES AND ML” session, which is an actual research direction with interesting results (e.g. query optimizers powered by an ML model achieving better results than conventional query optimizers). At least actual professionals are not conflating ML with LLMs.
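(Sidenote for anyone wondering what “answer no more than 20% of queries correctly” is actually measuring: text-to-SQL-style benchmarks usually boil down to running the model-generated query and a hand-written gold query against the same database and comparing result sets. Here’s a minimal sketch of that kind of execution-accuracy check; it is not the paper’s actual harness, and `generate_sql` is a hypothetical stand-in for whatever model call you’re evaluating.)

```python
import sqlite3

def execution_accuracy(examples, generate_sql, db_path):
    """Fraction of questions whose generated SQL returns the same rows
    as the hand-written gold SQL (order-insensitive)."""
    conn = sqlite3.connect(db_path)
    correct = 0
    for question, gold_sql in examples:
        predicted_sql = generate_sql(question)  # hypothetical LLM call, swap in whatever is under test
        try:
            predicted = conn.execute(predicted_sql).fetchall()
        except sqlite3.Error:
            continue  # malformed SQL simply counts as wrong
        gold = conn.execute(gold_sql).fetchall()
        if sorted(predicted, key=repr) == sorted(gold, key=repr):
            correct += 1
    conn.close()
    return correct / len(examples)
```

One wrong join or filter zeroes out the whole query under a metric like this, which is part of why the headline numbers look so brutal.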
I know a lot of people are looking for alternatives to programs as stuff is enshittifying, rot-economying, slurping up your data, going all-in on llms, etc. https://european-alternatives.eu/ might help. Have not looked into it myself btw.
Always down for the european alternative if you know what I mean.
trump just dumped half a trillion dollars into the openai-softbank-oracle thing https://eu.usatoday.com/story/news/politics/elections/2025/01/21/trump-stargate-ai-openai-oracle-softbank/77861568007/
you’d think it’s a perfect bait for saudi sovereign wealth fund, and perhaps it is
for comparison, assuming current levels of spending, this will be something around 1/10 of defense spending over the same timeframe. which goes to, among other things, the payrolls of millions of people and the maintenance, procurement and development of rather pricey weapons like stealth planes (B-21 is $700M each) and nuclear-armed, nuclear-powered submarines ($3.5B per Ohio-class, with $31M missiles, up to 24). all this to burn a medium-sized country’s worth of energy to get a more “impressive” c-suite-fooling machine
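(Rough scale check on that comparison, using my own assumed figures rather than anything from the article: the base US defense budget is on the order of $850B a year, so against Stargate’s stated four-year horizon the $500B lands somewhere around a tenth to a seventh of it, depending on what you fold into “defense spending”.)

```python
# back-of-the-envelope only; both figures are assumptions, not from the article
stargate_pledge = 500e9            # announced total over roughly four years
defense_per_year = 850e9           # approximate base DoD budget
print(stargate_pledge / (defense_per_year * 4))  # ~0.15
```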
The fact that the first thing a new fascist regime does is promise Larry Ellison a bunch of dollaridoos answers a lot of questions asked by my “ORACLE = NAZIS” tshirt
Elon Musk is already casting doubt on OpenAI’s new, up to $500 billion investment deal with SoftBank (SFTBY+10.51%) and Oracle (ORCL+7.19%), despite backing from his allies — including President Donald Trump. […] “They don’t actually have the money,” the Tesla (TSLA-1.13%) CEO and close Trump ally said shortly before midnight on Tuesday, in a post on his social media site X. “SoftBank has well under $10 [billion] secured. I have that on good authority,” Musk added just before 1 a.m. ET.
I was mad about this, but then it hit me: this is the kind of thing that happens at the top of a bubble. The nice round numbers, the stolen sci-fi name, the needless intertwining with politics, the lack of any clear purpose for it.
[mr plinkett voice] hey wait a minute wasn’t that meant to be a Microsoft project?
Hey wasn’t that project contingent on “meaningfully improving the capabilities of OpenAI’s AI”?
(Referring to this newsletter of his from last April.)
I like how none of the reporting I’ve seen on this so far can be bothered to mention Softbank’s multi-year, very obvious history of failures
I think I saw like one outlet mention it, and it was buried in the 18th paragraph
You gotta love how in the announcement the guy is so blatantly “hey, they said and did such nice things for me that I just gotta throw them a bone, and if releasing the leader of a notorious drug bazaar who tried to put out a hit on one of his employees is what they want, then they can have it!”
Sidenote: AFAIK, even with this pardon, Ulbricht still ended up spending more time in prison than if he took a plea deal he was reportedly offered:
He was offered a plea deal, which would have likely given him a decade-long sentence, with the ability to get out early on good behavior. Worst-case scenario, he would have spent five years in a medium-security prison and been freed.
Gotta say, this whole situation’s reminding me of SBF - both of them thought they could outsmart the Feds, and both received much harsher sentences than rich white collar criminals usually get as a result.
Not really sure if he thought he was smart or got bad legal advice from coiners who figured he could get off scot-free because “crypto” and “harm reduction”
Probably both tbh. It really is like SBF round 1, but because it’s drugs instead of financial crimes, they didn’t need to hire Margot Robbie to explain to everyone from her bath why it’s illegal and destructive.
SBF
Wonder when his pardon clears.
Ah yes that will be good for international relations and the morale of law enforcement and anti cybercrime people. Lol it is all so stupid.
This and the release of the jan 6 people who assaulted cops (one cop who testified against them got a shitton of messages when they got early release) is going to do wonders. Not that it will shake the belief of a lot of people that the repubs are the party of back the blue and law and order.
following on from this comment, it is possible to get it turned off for a Workspace Suite Account
- contact support (the “?” button from admin view)
- ask the first person to connect you to Workspace Support (otherwise you’ll get some made-up bullshit from a person trying to buy time or Case Success or whatever, simply because they don’t have the privileges to do what you’re asking)
- tell the referred-to person that you want to enable controls for “Gemini for Google Workspace” (optionally adding that you have already disabled “Gemini App”)
hopefully you spend less time on this than the 40-something minutes I had to (a lot of which was spent watching some poor support bastard start-stop typing for minutes at a time because they didn’t know how to respond to my request)
Thanks. I simply switched to Fastmail over this bullshit. (“Simply” mileage may vary)
hackernews: We’re going to build utopia on Mars, reinvent money, and construct god.
also hackernews: moving off facebook is too hard :( :( :(
they will take facebook there with them. none of their space escapism will solve their problems because they take them along. these mfers will do anything but go to therapy
deleted by creator
so the new feature in the next macos release 15.3 is “fuck you, apple intelligence is on by default now”
For users new or upgrading to macOS 18.3, Apple Intelligence will be enabled automatically during Mac onboarding. Users will have access to Apple Intelligence features after setting up their devices. To disable Apple Intelligence, users will need to navigate to the Apple Intelligence & Siri Settings pane and turn off the Apple Intelligence toggle. This will disable Apple Intelligence features on their device.
IDK how helpful this is, but Apple intelligence appears to not get downloaded if you set your ipad language and your siri language to be different. I have it set to english (australia) and english (united states). Guess I’ll have to live without “gaol” support, but that just shows how much I’m willing to sacrifice.
oh boy: https://social.wake.st/@liaizon/113868769104056845 iOS devices send the contents of Signal chats to Apple Intelligence by default
e: this fortunately doesn’t seem to be accurate; excuse my haste. here’s the word from the signal forums
also, my inbox earlier:
24661 N + Jan 21 Apple Developer ( 42K) Explore the possibilities of Apple Intelligence.
til that there’s not just one millionaire with a family business in south african mining in the current american oligarchy, but at least two. (thiel’s father was an exec at a mine in what is today Namibia). (they mined uranium). (it went towards the RSA nuclear program). (that’s easily the most ghoulish thing i’ve learned today, but i’ve only been up for 2h)
there’s probably a fair couple more. tracing anything de beers, or a good couple of other industries, will probably turn up others
(my hypothesis is: the kinds of people that flourished under apartheid, the effect that had on local-developed industry, and then the “wider world” of ~~opportunities~~ prey they got to sink their teeth into after apartheid went away; doubly so because staying ZA-only is extremely limiting for ghouls of their sort - it’s a fixed-size pool, and the still-standing apartheid-vintage capital controls are Limiting for the kinds of bullshit they want to pull)

there are more it seems https://www.ft.com/content/cfbfa1e8-d8f8-42b9-b74c-dae6cc6185a0
that list undercounts far more than I expected it to
there’s gotta be way more, but frankly idk even where to begin to look
Banner start to the next US presidency, with Wiener Von Wrong tossing a Nazi salute and the ADL papering that one over as an “awkward gesture”. 2025 is going to be great for my country.
Incidentally is “Wiener Von Wrong” or “Wernher Von Brownnose” better?
Perhaps “Wanker von Clown”?
Ooo, I like that.
in that spirit: Loserus Inamericus
(I don’t know if that scans; I have no latin skills and I don’t feel like digging out the information to check)
It’s term time again and I’m back in college. One professor has laid out his AI policy: you should not use an AI (presumably ChatGPT) to write your assignment, but you can use an AI to proofread your assignment. This must be mentioned in the acknowledgements. He said in class that in his experience AI does not produce good results, and that when asked to write about his particular field it produces work with a lot of mistakes.
Me, I’m just wondering how you can tell the difference between material generated by AI then edited by a human, and material written by a human then edited by an AI.
Here is what I wrote in the instructions for the term-paper project that I will be assigning my quantum-physics students this coming semester:
I can’t very well stop you from using a text-barfing tool. I can, however, point out that the “AI” industry is a disaster for the environment, which is the place that we all have to live in; and that it depends upon datasets made by exploiting and indeed psychologically torturing workers. The point of this project is for you to learn a physics topic and how to write physics, not for you to abase yourself before a blurry average of all the things the Internet says about quantum physics — which, spoiler alert, includes a lot of wrong things. If you are going to spend your time at university not learning physics, there are better ways to do that than making yourself dependent upon a product that is a tech bubble waiting to pop.
I was talking to someone recently and he mentioned that he has used AI for programming. It worked out fine, but the one thing he mentioned that really stuck with me was that when it was all done, he still didn’t know how to do the task.
You can get things done, but you don’t learn how to do them.
This must be mentioned in the acknowledgements
wat
I know!