Somebody has to hold the bag. I don’t think this is functionally different.
Then they may as well say they did it “with computers.”
Oh, but that’s not sexy, is it.
when we’re in the midst of another bull run.
Oh, that’s nice. So, whose money are you rug pulling?
a rite of passage for your work to become so liked by others that they take your ideas,
ChatGPT is not a person.
People learn from the works they see […] and that is what AI does as well.
ChatGPT is not a person.
It’s actually really easy: we can say that ChatGPT, which is not a person, is also not an artist, and thus cannot make art.
The mathematical trick of putting real images into a blender and then outputting a Legally Distinct™ one does not absolve the system of its source material.
but are instead weaving it into their process to the point it is not even clear AI was used at the end.
The only examples of AI in media that I like are ones that make it painfully obvious what they’re doing.
Yeah, it’s a lot like piss in a river. I just drink around the piss.
—Holy shit, warn me next time.
Yeah, the context is your mom never standing up to your dad, so you never learned what strength looks like.
Mate, I chose the name, that’s not a burn. This is like making fun of a clown for wearing bright colors.
They are that. They’re mannequins also. Actually, going by the remake, I swear they have a mannequin half, which is a little unsettling.
You got that from a single sentence?
You’re not beating the allegations, my friend.
And screen-share knowledge is not some skill that’s in short supply
Right, so they should know how to do it then.
Probably a few bucks. Could get a nice shirt.
Linux and… uh… GNU+Linux
Yeah, there are a lot of those people xD
I believe in you. If you fill yourself with determination, you too can use comedy to challenge people and also complain about those people canceling you loudly on NBC.
Oh, I know this video!
I remember playing it for somebody, and you could tell they were trying so hard to disagree with it in their head. I imagine they still believe it was fake, but it was funny.
Ah, but here we have to get a little pedantic: producing an AGI through current known methods is intractable.
I didn’t quite understand this at first. I think I was going to say something about the paper leaving the method ambiguous, thus implicating all methods yet unknown, etc., whatever. But yeah, this divide between solvable and “unsolvable” shifts if we ever find a way to efficiently solve NP-hard problems and have to define some new NP-super-hard category. This does feel like the piece I was missing. Or a piece, anyway.
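For anyone else landing here, these are the textbook definitions I’m leaning on (standard complexity theory, not anything specific to the paper):

```latex
% P: problems decidable in polynomial time (the usual meaning of "tractable").
\mathsf{P} = \{\, L \mid L \text{ is decidable in } O(n^{c}) \text{ time for some constant } c \,\}

% NP-hard: at least as hard as everything in NP, under polynomial-time reductions.
L \text{ is NP-hard} \iff \forall L' \in \mathsf{NP}:\; L' \le_{p} L

% The solvable/"unsolvable" divide only moves if the classes collapse:
\mathsf{P} = \mathsf{NP} \;\implies\; \text{every NP-complete problem becomes tractable}
```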
e.g. humans don’t fit the definition either.
I did think about this, and the only reason I reject it is that “human-like or -level” matches our complexity by definition, and we already have a behavior set for a fairly large n. This doesn’t have to mean that we aren’t still below some curve, of course, but I do struggle to imagine how our own complexity wouldn’t still be too large to solve, AGI or not.
Anyway, the main reason I’m replying again at all is just to make sure I thanked you for getting back to me, haha. This was definitely helpful.
Hey! Just asking you because I’m not sure where else to direct this energy at the moment.
I spent a while trying to understand the argument this paper was making, and for the most part I think I’ve got it. But there’s a kind of obvious, knee-jerk rebuttal to throw at it, seen elsewhere under this post, even:
If producing an AGI is intractable, why does the human meat-brain exist?
Evolution “may be thought of” as a process that samples a distribution of situation-behaviors, though that distribution is entirely abstract. And the decision process for whether the “AI” it produces matches this distribution of successful behaviors is, yada yada, Darwinism. The answer we care about (because this is the inspiration I imagine AI engineers took from evolution in the first place) is whether evolution can (not inevitably, just can) produce an AGI (us) in reasonable time (it did).
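To make that concrete, here’s a toy sketch of the sampling loop I mean. Everything in it is made up for illustration (the bit-string “behaviors,” the fitness test, the mutation rate); it’s the shape of the process, not anything from the paper:

```python
# Toy sketch: evolution as a sampler over a space of "behaviors",
# with Darwinian selection as the decision process. All stand-ins.
import random

TARGET = [1] * 20        # stand-in for the distribution of successful behaviors
POP_SIZE = 50
MUTATION_RATE = 0.05

def fitness(genome):
    """Decision process: how closely does this behavior set match the target?"""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    """Sample a nearby point in behavior space."""
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def evolve(generations=200):
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
    for gen in range(generations):
        # Selection: keep the better-matching half, discard the rest.
        population.sort(key=fitness, reverse=True)
        survivors = population[: POP_SIZE // 2]
        if fitness(survivors[0]) == len(TARGET):
            return gen  # a matching "behavior set" was found in reasonable time
        # Reproduction with variation: refill the population from survivors.
        population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return None

if __name__ == "__main__":
    print("solved at generation:", evolve())
```

Whether a loop like this finishes in “reasonable time” obviously depends entirely on the landscape; this toy one is deliberately easy.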
The question is, where does this line of thinking fail?
Going by the proof, it should either be:
I’m not sure how to formalize any of this, though.
The thought that we could “encode all of biological evolution into a program of at most size K” did make me laugh.
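To be fair to the framing (and hedging here, since I’m paraphrasing how these proofs usually bound things rather than quoting this one): fixing a size bound K is what keeps the space of candidate methods finite, since

```latex
% Count of binary programs of length at most K bits:
\bigl|\{\,\Pi : |\Pi| \le K\,\}\bigr| \;=\; \sum_{k=0}^{K} 2^{k} \;=\; 2^{K+1} - 1
```

so “evolution as a program of size K” is finite on paper; the joke is just how absurd that K would have to be.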
but there’s no reason to think we can’t achieve it
They provide a reason.
Just because you create a model and prove something in it doesn’t mean it has any relationship to the real world.
What are we, science deniers now?
Wow. I’m so used to this comic being presented from a left perspective that I completely missed what explodicle was actually saying.
No problem, yo!
And to be fair, communication is always two-way. It’s not like I don’t want the public to be more thoughtful in their disagreements.
Have a good one.
They’re saying .world is anti-China.