• 0 Posts
  • 45 Comments
Joined 2 years ago
Cake day: June 28th, 2023

  • If I’m not mistaken, in past tech booms many employees became rich by keeping at least some of their stock. I think it is somewhat telling when most of the employees (who can be expected to be familiar with the company, its technology, its products and markets) don’t seem to expect this to happen here, but rather treat this as a job in a more “mature” industry with little growth potential, such as manufacturing or banking.

    Also, as far as I know, capital market investors tend to consider so-called “insider trading” (here meaning legally reported trades by company employees and executives) as somewhat predictive of stock prices.



  • Also, if the LLM had reasoning capabilities that even remotely resembled those of an actual human, let alone those of someone able to replace office workers, wouldn’t it use the best tool it had available for every task (especially in a case as clear-cut as this)? After all, almost any human (even a child) would automatically reach for a pocket calculator here, I assume.


  • Also, these bots have been deliberately fine-tuned to sound human. As a consequence, I sometimes find it difficult to describe their answering style without borrowing vocabulary normally used for human behavior. I also strongly suspect that this deliberate “human-like” style is a key reason for the current AI hype: it is why many people appear to excuse the bots’ huge shortcomings. It is funny to be accused of being “emotional” when pointing out these patterns as problematic.




  • LOL - you might not want to believe that, but there is nothing to cut down. I actively steer clear of LLMs because I find them repulsive (being so confidently wrong almost all the time).

    Nevertheless, there will probably be some people who claim that thanks to LLMs we no longer need the skills for language processing, working memory, or creative writing, because LLMs can do all of this much better than humans (just like calculators can calculate a square root faster). I think that’s bullshit, because LLMs just aren’t capable of doing any of these things in a meaningful way.


  • No, but it does mean that little girls no longer learn to write greeting cards to their grandmothers in beautiful feminine handwriting. For the record, I’m part of Generation X and, due to innate clumsiness (and being left-handed), I didn’t have pretty handwriting even before computers became the norm. But I was berated a lot for it, and computers were supposedly making everything worse. It was a bit of a moral panic.

    But I admit that this is not comparable to chatbots.


  • Similar criticisms have probably been leveled at many other technologies in the past, such as computers in general, typewriters, pocket calculators etc. It is true that the use of these tools has probably contributed to a decline in skills such as memorization, handwriting or mental arithmetic. However, I believe there is an important difference with chatbots: typewriters (or computers) usually produce very readable text (much better than most people’s handwriting), pocket calculators perform calculations just fine, and information retrieved online from a reputable source is no less correct than a memorized one (probably more so). The same cannot be said of chatbots and LLMs. They are not known to produce accurate or useful output reliably - so many of the skills lost by relying on them might not be replaced with something better.



  • It is very tangential here, but I think this whole concept of “searching everything indiscriminately” can get a little ridiculous anyway. For example, when I’m looking for the latest officially approved (!) version of some document in SharePoint, I don’t want search to bring up tons of draft versions that are either on my personal OneDrive or were shared with me at some point in the past, random e-mails, etc. Yet apparently there is no decent option for filtering, because supposedly “that’s against the philosophy” and “nobody should even need or want such a feature” (why not???).
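    For what it’s worth, SharePoint’s REST search endpoint does accept KQL property restrictions, even where the UI doesn’t surface them. A minimal sketch of building such a filtered query (the site URL and the “Approved” library path are hypothetical; `IsDocument` and `Path` are standard KQL managed properties, and authentication is left out entirely):

    ```python
    from urllib.parse import quote

    def build_search_url(site_url: str, terms: str, filters: dict) -> str:
        """Build a SharePoint REST search URL with KQL property restrictions."""
        # KQL combines free-text terms with property:value restrictions
        kql = " ".join([terms] + [f"{prop}:{value}" for prop, value in filters.items()])
        return f"{site_url}/_api/search/query?querytext='{quote(kql)}'"

    # Restrict results to documents under a (hypothetical) approved-documents library
    url = build_search_url(
        "https://contoso.sharepoint.com/sites/policies",  # hypothetical tenant/site
        "travel policy",
        {
            "IsDocument": "true",
            "Path": "https://contoso.sharepoint.com/sites/policies/Approved*",
        },
    )
    ```

    Whether an end user can be expected to write KQL by hand is another question - the point is merely that the filtering capability exists at the API level.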

    In some cases, context and metadata are even more important than the content of a document itself (especially in areas such as law/compliance, accounting, etc.). However, maybe the loss of this insight is another collateral damage of the current AI hype.

    Edit: By the way, this fits surprisingly well with the security vulnerability described here: an external e-mail is used that purports to contain information about internal regulations. What is the point of a search that includes external sources for this kind of question, even without the hidden instructions to the AI?