• 0 Posts
  • 10 Comments
Joined 1 year ago
Cake day: August 7th, 2023




  • That isn’t necessarily true, though for now there’s no way to tell since they’ve yet to release their code. If the timeline is anything like their last paper’s, it will be out around a month after publication, which will be Nov 20th.

    There have been similar papers on confusing image classification models; I’m not sure how successful they’ve been IRL.
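    For reference, the classic recipe from those image-classifier papers is the fast gradient sign method (FGSM) from Goodfellow et al.: nudge each pixel a tiny amount in the direction that increases the classifier’s loss. A minimal PyTorch sketch, where the model choice and epsilon are just placeholders and not from the paper discussed above:

    ```python
    # Hypothetical minimal FGSM sketch for confusing an image classifier.
    import torch
    import torch.nn.functional as F
    import torchvision.models as models

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

    def fgsm_attack(image, label, epsilon=0.03):
        """Return `image` nudged in the direction that most increases the loss."""
        image = image.clone().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # A small step along the sign of the gradient is often enough to
        # flip the prediction while looking unchanged to a human.
        return (image + epsilon * image.grad.sign()).detach()
    ```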



  • BetaDoggo_@lemmy.world to Memes@lemmy.ml · It's pronounced AYE...
    1 year ago

    The issue is the marketing. If they only marketed language models for the things they can actually be trusted with (summarization, cleaning up text, writing assistance, entertainment, etc.), there wouldn’t be nearly as much debate.

    The creators of the image generation models have done a much better job of this, partially because the limitations can be seen visually rather than requiring a fact check on every generation. They also aren’t claiming that they’re going to revolutionize all of society, which helps.


  • LLMs only predict the next token. Sometimes those predictions are correct, sometimes they’re incorrect. Larger models trained on more examples make better predictions, but they are always just predictions (see the sketch at the end of this comment). This is why incorrect responses often sound plausible even when they don’t make logical sense.

    “Fixing” hallucinations is more about reducing the rate of inaccurate predictions than about patching some discrete flaw in the model itself.
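    To make the “just predicting the next token” point concrete, here’s a minimal greedy-decoding sketch using GPT-2 via Hugging Face transformers; the model and prompt are arbitrary examples:

    ```python
    # Minimal greedy next-token loop: the model only ever scores candidate
    # next tokens; "generation" is just appending the likeliest one repeatedly.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

    ids = tokenizer("The capital of France is", return_tensors="pt").input_ids
    for _ in range(5):
        with torch.no_grad():
            logits = model(ids).logits   # a score for every token in the vocab
        next_id = logits[0, -1].argmax() # greedy: take the single best guess
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

    print(tokenizer.decode(ids[0]))  # plausible-sounding, not guaranteed true
    ```

    Nothing in that loop checks facts; “plausible” and “true” only coincide when the training data pushed the probabilities the right way.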



  • DuckDuckGo doesn’t have anywhere near the data-collection capacity Google does, and their ads are keyword-based rather than influenced by other user data. Their search engine is really the only thing I’d recommend using, though, since their add-on and browser don’t offer anything that others don’t.