Chairman Meow

  • 0 Posts
  • 52 Comments
Joined 1 year ago
Cake day: August 16th, 2023

  • Is public healthcare actually made illegal by the supreme court?

    No, Citizens United is the effective legalization of public bribery, masked as “political donations”.

    The problem is that you’re never going to get that grassroots movement built up. The healthcare companies rake in billions, and they’ll happily spend that to ensure they can keep existing. Other billionaire corporations will join in too, because why risk letting a party that’s willing to take on healthcare companies gain power? What else might that party do that could harm their precious profits?

    They’ll invest billions to primary candidates, buy media coverage, demonize their opponents, or even fabricate negative PR out of whole cloth. That grassroots movement would be stamped out, as you won’t be able to get enough votes. That’ll put a party like the GOP in charge, and they’ll pass as many voter disenfranchisement laws, gerrymandering laws, etc… as they can to ensure you need massive majorities to barely get 50% of the representation.

    People are already pissed off about the state of healthcare, so much so that they’re collectively cheering for the murder of a CEO. Yet no grassroots campaign is in sight. By the time the next election rolls around, American voters will already have forgotten about that CEO and will be more concerned about inflation or migration or whatever-the-fuck the media has decided to focus on.

    I think that by the time you get enough Americans on board with a grassroots campaign powerful enough to actually make changes, public anger will be running so high that a violent revolution is nearly inevitable.


  • First-Past-The-Post system sucks but systematic change can happen. It’s just… you guys elected Trump.

    Systemic change is being made next to impossible by the rampant legalised bribery and corruption at every level of political office.

    How would you even go about taking on the corporate oligarchy? Your candidates will get primaried and out-funded, and your party colleagues will get bribed into voting against tackling these issues. And that’s all assuming you can field enough candidates for races across the country, get your messaging picked up by the media, and somehow poll high enough that strategic voters won’t split the vote and actively put the worst party in charge instead.

    You’d somehow have to get elected, get enough Supreme Court justices pushed through, and have them overturn Citizens United just to get started. That’s a tall order for a political class that actively benefits from the current situation.








  • If producing an AGI is intractable, why does the human meat-brain exist?

    Ah, but here we have to get a little pedantic: producing an AGI through currently known methods is intractable.

    The human brain is extremely complex and we still don’t fully know how it works. We don’t know if the way we learn is really analogous to how these AIs learn. We don’t really know if the way we think is analogous to how computers “think”.

    There’s also another argument to be made: that an AGI matching the currently agreed-upon definition is impossible. And I mean that in the broadest sense, e.g. humans don’t fit the definition either. If that’s true, then an AI could perhaps be trained in a tractable amount of time, but it would upend our understanding of human consciousness (perhaps justifiably so). Maybe we’re overestimating how special we are.

    And then there’s the argument you already mentioned: it is intractable, but 60 million years, spread over trillions of creatures, is long enough. That also suggests that AGI is really hard, and that creating one really isn’t “around the corner” as some enthusiasts claim. For any practical AGI we’d have to finish training in maybe a couple of years, not millions of years (there’s a toy back-of-envelope at the end of this comment to give a feel for that gap).

    And maybe we develop some quantum computing breakthrough that gets us where we need to be. Who knows?
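
    To give a feel for that timescale gap, here’s a toy back-of-envelope (my own illustration; it only reuses the 60-million-years and couple-of-years figures from above, and the 2**n cost is an assumption made purely for the sake of argument): when the cost of a problem grows exponentially, even an astronomically larger budget barely moves the problem size you can reach.

    ```python
    import math

    # Toy illustration only: assume, purely for argument's sake, that solving a
    # problem of "size" n costs on the order of 2**n years of work.
    practical_budget_years = 2              # a realistic training budget
    evolutionary_budget_years = 60_000_000  # the evolutionary timescale above

    for budget in (practical_budget_years, evolutionary_budget_years):
        # Largest n reachable within the budget, given a cost of 2**n years:
        reachable_n = math.floor(math.log2(budget))
        print(f"{budget:>12,} years of budget reaches n ~ {reachable_n}")

    # A budget thirty million times larger only moves n from about 1 to about 25.
    # That is what "intractable" means in practice: you cannot scale your way there.
    ```

    The real cost function is of course unknown; the point is only that exponential growth swallows any realistic multiplier.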


  • This is a gross misrepresentation of the study.

    That’s as shortsighted as the “I think there is a world market for maybe five computers” quote, or the worry that NYC would be buried under mountains of horse poop before cars were invented.

    That’s not their argument. They’re saying that they can prove that machine learning cannot lead to AGI in the foreseeable future.

    Maybe transformers aren’t the path to AGI, but there’s no reason to think we can’t achieve it in general unless you’re religious.

    They’re not talking about achieving it in general; they only claim that no known technique can bring it about in the near future, contrary to what the AI-hype people claim. Again, they prove this.

    That’s a silly argument. It sets up a strawman and knocks it down. Just because you create a model and prove something in it, doesn’t mean it has any relationship to the real world.

    That’s not what they did. They set up an extremely optimistic scenario in which someone creates an AGI through known methods (e.g. a computer with limitless memory, infinite and perfect training data, sampling without any bias, current techniques eventually being able to create an AGI, the AGI only having to be slightly better than random chance rather than perfect, etc…), and then presented a computational proof that this scenario contradicts other established proofs.

    Basically, if you can train an AGI through currently known methods, then you have an algorithm that can solve the Perfect-vs-Chance problem in polynomial time. There’s a technical explanation in the paper that I’m not going to try to rehash, since it’s been too long since I worked on computational proofs, but it seems to check out. But this is a contradiction: we have proof, hard mathematical proof, that such an algorithm cannot exist, because the problem is NP-hard. Therefore, learning an AGI must also be NP-hard, and because every known AI learning method is tractable, it cannot possibly lead to AGI. It’s not a strawman, it’s a hard proof of why it’s impossible, like proving that pi has infinite decimals or something. (I’ve sketched the bare shape of the argument at the end of this comment.)

    Ergo, anyone who claims that AGI is around the corner either means “a good AI that can demonstrate some but not all human behaviour” or is bullshitting. We could literally burn up the entire planet for fuel to train an AI and we’d still not end up with an AGI. We need some other breakthrough, e.g. significant advances in quantum computing perhaps, to even hope to begin work on an AGI. And again, the authors don’t offer a thought experiment; they provide a computational proof for this.
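
    For anyone who wants it spelled out, this is roughly the shape of the contradiction as I read it (my own simplified paraphrase, not the paper’s notation):

    ```latex
    % Simplified paraphrase of the argument's shape; not the paper's notation.
    \begin{align*}
    &\textbf{Assume: } \text{some known, polynomial-time learning method can produce an AGI}\\
    &\phantom{\textbf{Assume: }} \text{(even under the paper's wildly optimistic conditions).}\\
    &\textbf{Then: } \text{that method would let you solve Perfect-vs-Chance in polynomial time.}\\
    &\textbf{But: } \text{Perfect-vs-Chance provably admits no polynomial-time algorithm.}\\
    &\textbf{Hence: } \text{the assumption is false, and learning an AGI this way is itself NP-hard.}
    \end{align*}
    ```

    The heavy lifting is in the middle step, the reduction mentioned above; the rest is a standard proof by contradiction.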








  • What happens if a mistake was made and an NFT is erroneously issued (for example to the wrong person)?

    What happens if the owner dies? How is the NFT transferred then?

    Who checks that the original NFT was issued correctly?

    What about properties that are split? What happens if the split isn’t represented in the NFT correctly (e.g. due to an error)?

    The whole non-fungible part can be a problem, not a solution. It very, very rarely happens that ownership of a property is contested, but it happens quite often that a mistake made during a property transfer or sale needs to be corrected. How do NFTs deal with that, or are they just a solution to a non-issue?