There’s an extraordinary amount of hype around “AI” right now, perhaps even greater than in past cycles, where we’ve seen an AI bubble about once per decade. This time, the focus is on generative systems, particularly LLMs and other tools designed to produce plausible outputs that either feel correct to the reader or are good enough for domains where correctness doesn’t matter.

But we can tell the traditional tech industry (the handful of giant tech companies, along with startups backed by the handful of most powerful venture capital firms) is in the midst of building another “Web3”-style froth bubble because it has again abandoned one of the core values of actual technology-based advancement: reason.

  • tias@discuss.tchncs.de · 10 months ago

    The article makes several claims and insinuations without backing them up so I find it hard to follow any of the reasoning.

    I don’t think it’s desirable that it’s easier to reason about an AI than about a human. If it is, then we haven’t achieved human-level intelligence. I posit that human intelligence can be reasoned about given enough understanding but we’re not there yet, and until we are we shouldn’t expect to be able to reason about AI either. If we could, it’s just a sign that the AI is not advanced enough to fulfill its purpose.

    Postel’s law IMHO is a big mistake - it’s what gave us Internet Explorer and arbitrary unpredictable interpretation of HTML, leading to decades of browser incompatibility problems. But the law is not even applicable here. Unlike the Internet, we want the AI to appear to think for itself rather than being predictable.
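    To make the “liberal in what you accept” half of Postel’s law concrete, here is a minimal sketch using Python’s built-in html.parser, which is itself a lenient parser: it consumes badly broken markup without raising any error, and every consumer is left to guess what the author meant. (The markup string is made up for illustration; this is not a claim about any particular browser engine.)

    ```python
    from html.parser import HTMLParser

    class Collector(HTMLParser):
        """Print whatever the lenient parser manages to recover."""
        def handle_starttag(self, tag, attrs):
            print("start tag:", tag, attrs)

        def handle_data(self, data):
            if data.strip():
                print("text:", data.strip())

    # Unclosed tags, mismatched nesting, an unquoted attribute value:
    # a strict parser would reject this, a Postel-style one just guesses.
    broken = "<p><b>bold <i>and italic</b> text<br><img src=pic.jpg>"
    Collector().feed(broken)   # no exception is raised
    ```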

    “Today’s highly-hyped generative AI systems (most famously OpenAI) are designed to generate bullshit by design.” Uh no? They’re designed with the goal to generate useful content. The bullshit is just an unfortunate side effect because today’s AI algorithms have not evolved very far yet.

    If I had to summarize this article in one word, that would be it: bullshit.

    • Windex007@lemmy.world · 10 months ago

      I agree that the author didn’t do a great job explaining, but they are right about a few things.

      Primarily, LLMs are not truth machines. That is just flatly and plainly not what they are. No researcher, not even OpenAI, makes such a claim.

      The problem is the public perception that they are. Or that they almost are. Because a lot of the time, they’re right. They might even be right more frequently than some people’s dumber friends. And even when they’re wrong, they sound right. Even when it’s wrong, it still sounds smarter than most people’s smartest friends.

      So, I think that the point is that there is a perception gap between what LLMs are, and what people THINK that they are.

      As long as the perception is more optimistic than the reality, a bubble of some kind will exist. But just because there is a “reckoning” somewhere in the future doesn’t imply it will crash to nothing. It just means investment will align more closely with realistic expectations as it becomes clearer what those expectations even are.

      LLMs are going to revolutionize and also destroy many industries. They will absolutely, fundamentally change the way we interact with technology. No doubt…but for applications that strictly demand correctness, they are not appropriate tools. And investors don’t really understand that yet.

    • Sonori@beehaw.org · 10 months ago

      OpenAI’s algorithm, like all LLMs, is designed to give you the next most likely word in a sentence based on what most frequently came next in its training data. Their main strategy has actually been to use an older and simpler transformer algorithm, and to just vastly increase the scraped text content and, recently, the bias with each new release.

      I would argue that any system that works by stringing pseudorandom words together based on how often they appear in its input sources is not going to be able to do anything but generate bullshit, albeit bullshit that may happen to be correct by pure accident when it’s near-directly quoting said input sources.
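      As a toy illustration of that kind of frequency-driven next-word sampling (just a minimal sketch over a made-up corpus, not how GPT-style models actually work internally, since those use learned transformer weights rather than raw co-occurrence counts):

      ```python
      import random
      from collections import Counter, defaultdict

      # Tiny made-up "training corpus".
      corpus = "the cat sat on the mat the cat ate the fish".split()

      # Count which word follows which word in the corpus.
      following = defaultdict(Counter)
      for current_word, next_word in zip(corpus, corpus[1:]):
          following[current_word][next_word] += 1

      def generate(start, length=6):
          """String words together, each chosen in proportion to how
          often it followed the previous word in the corpus."""
          word, output = start, [start]
          for _ in range(length):
              candidates = following.get(word)
              if not candidates:
                  break
              words, counts = zip(*candidates.items())
              word = random.choices(words, weights=counts)[0]
              output.append(word)
          return " ".join(output)

      print(generate("the"))  # e.g. "the cat sat on the mat the"
      ```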

    • P03 Locke@lemmy.dbzer0.com · 10 months ago

      “The article makes several claims and insinuations without backing them up so I find it hard to follow any of the reasoning.”

      “Article”. I’m going to call it what it is: a blog post that should have been moderated away. If people here are going to post “tech news”, make sure it has actual journalism.

      “Postel’s law IMHO is a big mistake - it’s what gave us Internet Explorer and arbitrary unpredictable interpretation of HTML, leading to decades of browser incompatibility problems. But the law is not even applicable here. Unlike the Internet, we want the AI to appear to think for itself rather than being predictable.”

      It’s almost like Isaac Asimov wrote a famous book about robotic laws and a bunch of different short stories on how easy it was to circumvent them.