

I think I already said this but you’re not making me use something called “Floorp” even if it’s the last piece of software in the world. Just come on.


Lots of words to say “but what were the users wearing?”
If you can’t sustain your business model without turning it into a predatory hellscape then you have failed and should perish. Like I’m sorry, but if a big social media service that actually serves its users is financially infeasible, then big social media services should not exist. Plain and simple.


Albequirky


Albrequerre


Aldquaque is what I type, crossing my fingers autocorrect will get that I mean Albequerqere


Nobody is reading papers. Universities are a clout machine.
Sokal, you should log off


we found that the Spearman correlation (higher is better) between one human reviewer and another is 0.41
This stinks to high heaven. Why would you want these to be more highly correlated? There’s a reason you assign multiple reviewers, preferably with slightly different backgrounds, to a single paper. Reviews are obviously subjective! There’s going to be some consensus (especially with very bad papers; really bad papers are almost always universally scored low, because, you know, they suck), but whether a particular reviewer likes what you did and how you presented it is a bit of a lottery.
Also, the worth of a review is much more than a 1-10 score: it should contain detailed justification for the reviewer’s decision, so that a meta-reviewer can then look at it and pinpoint relevant feedback, or even decide that a low-scoring paper is worthwhile and can be published after small changes. All of this is, of course, a slightly flawed abstraction, but it’s an abstraction of humans talking to each other. Show your paper to 3 people and you’ll get 4 different impressions. This is not a bug!
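To make the intuition concrete, here’s a minimal sketch with completely made-up scores (nothing from the actual study): two reviewers who agree on which papers are terrible but shuffle the middle of the pack already land in the same neighbourhood as that 0.41.

    # Hypothetical 1-10 scores for the same ten papers, from two reviewers.
    # They agree on the stinkers and diverge on everything in between.
    from scipy.stats import spearmanr

    reviewer_a = [2, 1, 3, 8, 5, 7, 6, 9, 4, 6]
    reviewer_b = [1, 2, 3, 5, 8, 4, 9, 6, 7, 5]

    rho, _ = spearmanr(reviewer_a, reviewer_b)
    print(f"Spearman rho = {rho:.2f}")  # ~0.48: clearly positive, nowhere near 1.0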


it has now been replaced by trading made-up numbers for p(doom)
Was he wearing a hot-dog costume while typing this wtf


This is like the entire fucking genAI-for-coding discourse. Every time someone talks about LLMs in lieu of proper static analysis I’m just like… Yes, the things you say are of the shape of something real and useful. No, LLMs can’t do it. Have you tried applying your efforts to something that isn’t stupid?


Ah yes, I want to see how they eliminate C++ from the Windows Kernel – code notoriously so horrific it breaks and reshapes the minds of all who gaze upon it – with fucking “AI”. I’m sure autoplag will do just fine among the skulls and bones of Those Who Came Before


The trope of somebody going insane as the world ends, does not appeal to me as an author, including in my role as the author of my own life. It seems obvious, cliche, predictable, and contrary to the ideals of writing intelligent characters. Nothing about it seems fresh or interesting. It doesn’t tempt me to write, and it doesn’t tempt me to be.
When I read HPMOR - which was years ago, before I knew who tf Yud was, back when I thought Harry was intentionally written as a deeply flawed character and not a fucking self-insert - my favourite part was Hermione’s death. Harry then goes into a grief he is unable to cope with, dissociating to such an insane degree that he stops viewing most other people as thinking and acting individuals. He quite literally goes insane as his world - his friend, and his illusion of being the smartest and always in control of the situation - ends.
Of course, in hindsight I know this is just me inventing a much better character and story, and that Yud is full of shit, but I find it funny that he inadvertently wrote a character behaving insanely while probably thinking he’d written a turborational guy completely in control of his own feelings.


To say you are above tropes means you don’t live and exist.
To say you are above tropes is actually a trope


Also, starting from this guide it takes one step to explain why people with working bullshit detectors tend to immediately clock LLM output, while the executive class - whose whole existence is predicated on not discerning bullshit - are its greatest fans. A lot of us have seen A Guy In A Suit do this: intentionally avoid specifics to make himself/his company/his product look superficially better. Hell, the AI hype itself (and the blockchain and metaverse nonsense before it) relies heavily on this - never say specifics, always say “revolutionary technology, future, here to stay”, and quickly run away if anyone tries to ask a question.


Help, I asked AI to design my bathroom and it came up with this, does anyone know where I can find that wallpaper?



can we cancel Mozilla yet
Sure! Just build a useful browser not based on chromium first and we’ll all switch!


Guess I’ll be expensing a nice set of rainbow whiteboard markers for my personal use, and making it up as I go along.
Congratulations, you figured it out! Read Clean Architecture and then ignore the parts you don’t like and you’ll make it


I mean, if you’ve ever toyed around with neural networks or similar ML models, you know it’s basically impossible to divine what the hell is going on inside just by looking at the weights, even if you try to plot them or visualise them in other ways.
There’s a whole branch of ML about explainable or white-box models, because it turns out you need to take extra care and design the system around explainability in the first place to be able to reason about its internals. There’s no evidence OpenAI put any effort towards this, instead focusing on cool-looking outputs they can shove into a presser.
In other words, “engineers don’t know how it works” can have two meanings: that they’re hitting computers with wrenches, hoping for the best with no rhyme or reason; or that they don’t have a good model of what makes the chatbot produce certain outputs, i.e. just by looking at an output it’s not really possible to figure out what specific training data it comes from or how to stop the model from producing that output on a fundamental level. The former is demonstrably false and almost a strawman - I don’t know who believes that. A lot of the people that work at OpenAI are misguided but otherwise incredibly clever programmers and ML researchers; the sheer fact that this thing hasn’t collapsed under its own weight is a great engineering feat, even if the externalities it produces are horrifying. The latter is, as far as I’m aware, largely true, or at least I haven’t seen any hints that would falsify it. If OpenAI had satisfyingly solved the explainability problem, it’d be a major achievement everyone would be talking about.
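Just to illustrate the “looking at the weights” point at the smallest possible scale, here’s a toy sketch (scikit-learn on four data points, hypothetical and obviously nothing to do with OpenAI’s actual models): even when the entire network fits on one screen, the trained weights don’t tell you why it does what it does.

    # Fit a tiny network on XOR, then "look at the weights".
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    y = np.array([0, 1, 1, 0])

    clf = MLPClassifier(hidden_layer_sizes=(4,), max_iter=5000, random_state=0)
    clf.fit(X, y)

    # This is what "inspecting the internals" amounts to: a couple of small
    # matrices of floats. Nothing in them says which input produces which
    # output or why; now scale it up to hundreds of billions of weights.
    for i, w in enumerate(clf.coefs_):
        print(f"layer {i} weights:\n{w}")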


Thank you for your service o7
And how did that work out in the long term? There were warning signs!