Three raccoons in a trench coat. I talk politics and furries.
Other socials: https://ragdollx.carrd.co/
I always round up the price when I see $X.99 but my grandmother always rounds it down and it pisses me off
They’re trying to fool you! Don’t be a sheep!!!
Just don’t ask them what they were doing between 1933 and 1945
Kinda ironic I saw this post right after this other one lol: Three Democrats Re-Introduce Bill That Would Bring Ranked Choice Voting to Congressional Elections Across America
One of the community notes on the post said it was posted the day before by another account, and an AI image detector flagged it as AI with 90% confidence.
Musk using an AI image to fantasize about being some badass cowboy is both pathetic and absolutely expected lol
I did something like this when working support at Philips
I thought it was funny but my colleagues and supervisor were not entertained lol
Sorry to give you false hope 😔
But I still believe that we’ll get at least one video this year! 😭
This is a bad habit of mine lol
I made a video a while back on the topic of political violence which I intended to be only like 5 minutes long, but it actually turned into 30 minutes as I researched more and came across more stuff I wanted to discuss.
I’m doing the same thing now as I’m writing an article on the Cass Review, which is already 20 pages long, and I’m probably gonna go over 30. I have two pages dedicated entirely to just one of the items of the modified Newcastle-Ottawa Scale that some of the systematic reviews used, and I decided to leave it at that, because if I were to discuss all of the items in detail I could write three times more.
Not my favorite community but I wish !writingprompts@lemmy.world had more engagement. It was really cool on Reddit and I wish it got more love here as well. I’d participate too but I can’t write for shit.
Cute little slimy fellas
It is, in fact, very easy to code a game!
from pygame import game
game.load_player()
game.load_enemies()
game.load_audio()
game.run()
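(If anyone wants the non-joke version, here’s a minimal sketch of the boilerplate an actual pygame program needs - assuming pygame is installed; the window size, color, and frame rate are arbitrary.)
import pygame

# Open a window and run a basic event loop:
# clear the frame, present it, exit when the window is closed
pygame.init()
screen = pygame.display.set_mode((640, 480))
clock = pygame.time.Clock()

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    screen.fill((0, 0, 0))  # clear the frame to black
    pygame.display.flip()   # present the frame
    clock.tick(60)          # cap at 60 FPS

pygame.quit()
And that still gets you nowhere near an actual game, which is kind of the point.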
update it faster
*cries in Deltarune fan*
“Let the bodies pile up in the streets” but unironically
She just has some birth defects according to the owner: https://twitter.com/bigfootjinx/status/1432771782528831488
Please tell me how an AI model can distinguish between “inspiration” and plagiarism then.
[…] they just spit out something that they “think” is the best match for the prompt based on their training data and thus could not make this distinction in order to actively avoid plagiarism.
I’m not entirely sure what the argument is here. Artists don’t scour the internet for any image that looks like their own drawings to avoid plagiarism, and often use photos or the artwork of others as reference, but that doesn’t mean they’re plagiarizing.
Plagiarism is about passing off someone else’s work as your own, and image-generation models are trained with the intent to generalize - that is, to generate things they’ve never seen before, not just copy. That’s why we’re able to create an image of an astronaut riding a horse even though the model obviously would never have seen that, and why we can teach models new concepts with methods like textual inversion or Dreambooth.
Have you tried some data augmentation?
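(In case a concrete example helps - a sketch using torchvision, assuming a PyTorch image pipeline; the specific transforms and parameters are just illustrative.)
from torchvision import transforms

# Randomly perturb each training image so the model effectively
# sees more varied data without anyone collecting new samples
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),                     # mirror ~half the images
    transforms.RandomRotation(degrees=10),                 # small random tilts
    transforms.ColorJitter(brightness=0.2, contrast=0.2),  # lighting variation
    transforms.ToTensor(),
])
You’d pass this as the transform argument of your dataset so the augmentation gets applied on the fly every epoch.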
After some Googling I couldn’t find anything about “code-free” .exe’s or some “.EXE” framework, so it’s probably just a joke.
Why yes I do in fact (In Arma)
Gave me flashbacks to my time working with Philips’ Tasy system in 2017.
By now they’ve surely finished implementing their HTML5 system, which was somewhat better, but back then it was still a desktop app made using Delphi and Java, and it was basically as unsightly and unwieldy as the example in the meme lol
I know the second definition was proposed by OpenAI, who obviously has a vested interest in this topic, but that doesn’t mean it can’t be a useful or informative conceptualization of AGI. After all, we have to set some threshold for how much intelligence an AI needs to display, and in what areas, for it to be considered an AGI. Their proposal of an autonomous system that surpasses humans in economically valuable tasks is fairly reasonable, though it’s still pretty vague and very much debatable, which is why it isn’t the only definition that’s been proposed.
Your definition is definitely more peculiar, as I’ve never seen anyone else propose something like it, and it also seems to exclude humans, since you’re referring to problems we can’t solve.
The next question then is which problems specifically an AI would need to solve to fit your definition, and with what accuracy. Do you mean solve any problem we can throw at it? At that point we’d be going past AGI and into artificial superintelligence…
Not only has it not been proven whether LLMs will lead to AGI, it hasn’t even been proven that AGIs are possible.
By your definition AGI doesn’t really seem possible at all. But of course, your definition isn’t how most data scientists or people in general conceptualize AGI, which is the point of my comment. It’s very difficult to draw a clear line around what AGI is or isn’t, which is why there are those like you who believe it will never be possible, but also those who argue it’s already here.
No it can’t. If the task requires the LLM to solve a problem that hasn’t been solved before, it will fail.
Ask an LLM to solve a problem without a known solution and it will fail.
That’s simply not true. That’s the whole point of the concept of generalization in AI, and what the few-shot and zero-shot metrics represent - LLMs solving problems presented in text with few or no prior examples, by reasoning beyond what they saw in the training data. You can actually test this yourself by signing up to use ChatGPT, since it’s free.
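(To illustrate what few-shot means here - a toy, made-up task where the “training” is just a couple of examples embedded directly in the prompt:)
# The model was never fine-tuned on this task; completing the last
# line correctly means generalizing from the two inline examples
prompt = """Translate English to pirate speak.
English: Hello, my friend. -> Pirate: Ahoy, matey.
English: Where is the treasure? -> Pirate: Where be the booty?
English: Good morning. -> Pirate:"""
With zero examples and only the instruction it would be a zero-shot test instead.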
Exams often are bad measures of intelligence. They typically measure your ability to consume, retain, and recall facts. LLMs are very good at that.
So are humans. We’re also deterministic machines that output some action depending on the inputs we get through our senses, much like an LLM outputs some text depending on the inputs it receives. Plus, as I mentioned, they can reason beyond what they’ve seen in the training data.
The ability to interact with physical objects is very clearly not a good test for general intelligence and I never claimed otherwise.
I wasn’t accusing you of anything, I was just pointing out that there are many things we can argue require some degree of intelligence, even physical tasks. The example in the video requires understanding the instructions, the environment, and how to move the robotic arm in order to carry out instructions it hasn’t seen before.
I find LLMs and AGI interesting subjects and was hoping to have a conversation on the nuances of these topics, but it’s pretty clear that you just want to turn this into some sort of debate to “debunk” AGI, so I’ll be taking my leave.
Depends on what you mean by general intelligence. I’ve seen a lot of people confuse Artificial General Intelligence with AI more broadly. Even something as simple as the K-nearest neighbor algorithm is artificial intelligence, since AI is a much broader category than AGI.
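(To show just how simple it can get, here’s a toy sketch of K-nearest neighbors in plain Python, with made-up 2D points and labels:)
from collections import Counter

def knn_predict(train, query, k=3):
    # train: list of ((x, y), label) pairs; classify query by majority
    # vote among the k closest points (squared Euclidean distance)
    nearest = sorted(
        train,
        key=lambda p: (p[0][0] - query[0]) ** 2 + (p[0][1] - query[1]) ** 2,
    )[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

points = [((0, 0), "a"), ((1, 0), "a"), ((5, 5), "b"), ((6, 5), "b")]
print(knn_predict(points, (0.5, 0.2)))  # -> "a"
That’s the whole “AI”: no training loop, no neural network, and it still counts as artificial intelligence.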
Wikipedia gives two definitions of AGI:
An artificial general intelligence (AGI) is a hypothetical type of intelligent agent which, if realized, could learn to accomplish any intellectual task that human beings or animals can perform. Alternatively, AGI has been defined as an autonomous system that surpasses human capabilities in the majority of economically valuable tasks.
If some task can be represented through text, an LLM can, in theory, be trained to perform it, either through fine-tuning or few-shot learning. The question then is how general LLMs have to be for us to consider them AGIs, and there’s no hard metric for that.
I can’t pass the bar exam like GPT-4 did, and it also has a lot more general knowledge than I do. Sure, it gets stuff wrong, but so do humans. We can interact with physical objects in ways that GPT-4 can’t, but it’s catching up. Plus Stephen Hawking couldn’t move the way most people can either, and we certainly wouldn’t say he didn’t have general intelligence.
I’m rambling but I think you get the point. There’s no clear threshold or way to calculate how “general” an AI has to be before we consider it an AGI, which is why some people argue that the best LLMs are already examples of general intelligence.
well at least you wash your hands
Fox host says he ‘hasn’t washed hands in 10 years’
Many Americans don’t always wash their hands after going to the bathroom