• 0 Posts
  • 16 Comments
Joined 1 year ago
Cake day: July 6th, 2023

  • I love Discord, for what it’s for: quick synchronous conversations you will never refer back to. That rules out software development, where indexable logs of information are necessary. I know Discord has search, and now some form of forum, but every development Discord I’ve been in (especially modding communities) has a huge corpus of chat logs, and people get annoyed if you ask a question that was answered once, a long time ago, in extremely common language. That makes it nearly impossible to search for, because the keywords have been used out of the context of your question hundreds of times since it was asked.

    If dev communities used Discord’s forum mode more, it wouldn’t solve everything, but it would be much better. There are better places than Discord for these things, but I have been trying to meet people where they’re established.



  • And I wouldn’t call a human intelligent if TV is anything to go by. Unfortunately, humans do things they don’t understand, constantly and confidently. It’s commonplace. You could call it “fake it until you make it,” but a lot of the time it’s people genuinely believing they understand something they don’t.

    LLMs act confident that their output satisfies their fitness function, but they do not have the ability to see farther than that at this time. Just sounds like politics to me.

    I’m being a touch facetious, of course, but drawing the line at that term, intelligence, is a bit too narrow for me. I prefer the terms Artificial Narrow Intelligence and Artificial General Intelligence, as they are better defined. Narrow refers to a system designed for one task and one task only, such as LLMs, which are designed to minimize a loss function of people accepting the output as “acceptable” language, a highly volatile target. AGI, or Strong AI, is AI that can generalize outside of its targeted fitness function, and do so continuously. I don’t mean a computer vision network that can classify anomalies as something the car should stop for. That’s out-of-distribution reasoning, sure, but if the network can reasonably classify in-distribution inputs under its loss function, then anything that falls significantly outside can be easily flagged. That’s not true generalization, more domain recognition, but it is important in a lot of safety-critical applications.

    This is an important conversation to have, though. The way we use language is highly personal, based upon our experiences, and that makes coming to an understanding in natural language hard. Constructed languages aren’t the answer, because any language in use undergoes change. If the term AI is to change, people will have to understand that the scientific term will not, and pop-sci magazines WILL get harder to understand. That’s why I propose splitting the ideas in a way that allows for more nuanced discussion, instead of redefining terms that appear in thousands of groundbreaking research papers over a century, which would make reading research a matter of historical linguistics as well as mathematical understanding. Jargon is already hard enough as it is.




  • … Alexa literally is AI. You mean to say that Alexa isn’t AGI. AI is the taking of inputs and outputting something rational. The first AIs were essentially large if-else systems built on First Order Logic. Later AI used approximate or brute-force state calculations, such as probabilistic trees or minimax search. AI controls how people’s lines are drawn in popular art programs such as Clip Studio when they use the helper functions. But none of these AIs could tell me something new, only what they were designed to compute.

    The term AI is a lot more broad than you think.
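    To make the “brute-force state calculation” point concrete, here is a minimal sketch of plain minimax search, one of the classical AI techniques mentioned above. This is purely illustrative and not from the original comment: the callback parameters (`moves`, `apply_move`, `evaluate`) and the toy counting game in the usage line are hypothetical.

```python
def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    """Exhaustively search the game tree below `state` and return the
    best score the current player can force.

    `moves(state)` lists the legal moves, `apply_move(state, m)` returns
    the successor state, and `evaluate(state)` scores a leaf position.
    """
    options = moves(state)
    # Leaf: either we hit the depth limit or there are no legal moves.
    if depth == 0 or not options:
        return evaluate(state)
    if maximizing:
        # Maximizing player picks the child with the highest guaranteed score.
        return max(
            minimax(apply_move(state, m), depth - 1, False, moves, apply_move, evaluate)
            for m in options
        )
    # Minimizing player picks the child with the lowest guaranteed score.
    return min(
        minimax(apply_move(state, m), depth - 1, True, moves, apply_move, evaluate)
        for m in options
    )


# Toy game (hypothetical): each player adds 1 or 2 to a running total;
# the maximizer wants the final total high, the minimizer wants it low.
best = minimax(0, 2, True, lambda s: [1, 2], lambda s, m: s + m, lambda s: s)
```

    Exactly as the comment says, a search like this can only ever report what it was designed to compute: the optimal value under its hand-written evaluation function.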


  • Poik@pawb.social to Memes@lemmy.ml · USA things · 11 months ago

    I tried to get into BCI (brain-computer interfaces) for both personal reasons and for prosthetics. I admit that being able to control my computer faster, and draw and play things faster and more accurately, was the goal for myself, but the greater good of improved prosthetics was always on my mind and so fascinating to follow progress on.

    When I got called for an initial interview with Neuralink, I turned it down, an entry-ish position for what was at the time my dream job, just because I heard the name Elon. I would never work for a two-bit hack who thinks 80 hours a week is the minimum you should spend if you want to make any difference (a paraphrased quote from the man who “works” 120 hours a week according to himself, and sleeps at his desk for a solid chunk of that according to his employees).

    If we do ever get transhumanism, it will be too expensive to be for the greater good. Only the rich, who have proven themselves incapable of initiating positive change without financial incentive, will be able to afford it for many generations.


  • Poik@pawb.social to Memes@lemmy.ml · USA things · 11 months ago

    It’s only “being careful” if you’re immunocompromised in some way that makes the vaccine actually dangerous for you, which is even rarer than side effects beyond soreness.

    COVID isn’t a well-understood virus. The fact that it destroys the nerves between your nose and tongue and your brain is a HUGE red flag that should terrify everyone. Nerves are very similar throughout the body, and we don’t yet know the full extent of the damage it causes. Chronic Fatigue Syndrome, which may or may not be the same thing as long COVID given that it is generally triggered by various viral infections, is also not well understood, though it supposedly affects far fewer people. Maybe the fact that an estimated 25% of people (last I checked) get at least mild long COVID symptoms, and 10% of those never really recover, with most reporting permanently lowered energy levels (though not enough to count as a disability for most), will help drive more research, since there are a lot of cases of COVID permanently screwing over perfectly healthy people.

    I mention here one of the least devastating aspects of ME/CFS, and similarly of long COVID, which share a lot of symptoms. There are people who cannot stand up without assistance because of them. People have lost jobs because of them. And in America, not having a job means not having decent healthcare or benefits of any sort.

    Being careful means getting the damn vaccine if you can, when you can, as soon as your doctor tells you that you are healthy enough to do so, every single time. If not for you, then for anyone you care about. Care about human life, get the vaccine.


  • This. A lot of kids drove golf carts to the pool and such, but while that was semi-legal inside their gated community, they had to cross a public road where even licensed drivers would be breaking the law. No one seemed to care, though. I’m fairly certain it’s illegal in most US states.



  • Not only is this not obsolete, it’s close to biographical, as it closely references the first and second Artificial Intelligence Winters, the first being in the 60s. We’ve been working on these problems for a long time, so 5 years is short. It took GPGPU computing and some clever insights (often attributed to work published in 2011) for Deep Learning to kick into full gear and start making reliable progress on this problem, and even that is an oversimplification of the timeline and the scope.

    Others have mentioned subtleties like the difficulty of the subject matter (a picture that contains a bird vs. a picture of a bird), but there are far harder problems that are trivial for humans and counterintuitively hard for computers.