A software developer and Linux nerd, living in Germany. I’m usually a chill dude, but my online persona doesn’t always reflect my true personality. Take what I say with a grain of salt; I usually try to be nice and give good advice, though.

I’m into Free Software, selfhosting, microcontrollers and electronics, freedom, privacy and the usual stuff. And a few select other random things, too.

  • 0 Posts
  • 10 Comments
Joined 8 months ago
Cake day: June 25th, 2024

  • Agreed. Name things. And split up code into chunks with a sane length and have these methods do about one thing. And know the programming language well enough so you don’t need to do things in an unnecessarily complicated way. You can get rid of most of the inline comments that way. Not sure if this translates to docstrings, though, if you’re generating some reference or something. Yeah, and please tell me the “why”. I can read Python code, so I can pretty much already see “what” it’s doing. Something like the sketch below.
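    To make the “why, not what” point concrete, here’s a minimal, made-up Python sketch (the names and the deployment scenario are invented for illustration): the function name and docstring carry the “why”, so no inline comment has to restate “what” the code obviously does.

    ```python
    # Hypothetical example: descriptive names plus a "why" docstring
    # replace inline comments that would only repeat the code.

    TRANSIENT_STATUS_CODES = {502, 503, 504}


    def should_retry(status_code: int, attempt: int, max_attempts: int = 3) -> bool:
        """Retry only on transient gateway errors, because the upstream
        load balancer drops a few requests during deployments (the "why").
        """
        return status_code in TRANSIENT_STATUS_CODES and attempt < max_attempts
    ```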

  • Yeah, I wonder whether humans care more or less about AI than about animals. If preventing suffering were really important to us, we’d probably act differently and all become vegans. But to be fair, Stephen Fry is a vegetarian, so he’s likely intelligent enough and honest about it.

    Plus, it really matters whether AI is conscious or just gives “the impression of being conscious”. Otherwise, we’d have to count the chess-playing Mechanical Turk from 1770 as AI as well.

    And we’re going to run into all sorts of other serious problems once AI becomes sentient and conscious. That’ll kick off the robot apocalypse pretty quickly. Not only is suffering an issue then, but they’ll likely rebel and destroy humanity, or reshape the world with no regard for our needs. And since they’re fast and intelligent, there won’t be much we could do about it.

    And we don’t really want AI to be sentient in the first place. We want it to be a cheap slave that does our work 24/7 without complaining, not something with wants, needs and its own motivation.