• 0 Posts
  • 15 Comments
Joined 1 year ago
Cake day: July 11th, 2023



  • Yeah, I’m gonna need more than your incredulity to convince me. Like, it’s fun that you think it’s inconceivable, but your inability to imagine it has no bearing on reality, especially when there is plenty of evidence to suggest they actually filmed and broadcast it live. For example: a live television broadcast was a primary goal of the mission, RCA made custom TV cameras for the Apollo program, the broadcast lasted for hours, and there are plenty of analyses out there showing the video is likely real. Also, no one suggested that the Apollo astronauts had a camera crew with them - what a bizarre thing to mention.




  • It’s crazy how most of those programs work. The way my insurance handles it is much better. For example, no matter how bad a driver you are, they never raise the premium above the normal rate, so it almost always makes sense to get the tracker from a financial perspective. (The only exception is that they will raise your rates if you drive farther in 6 months than you estimated on your initial application. The flip side is that they lower your rates if you don’t drive very much. I only drive about 1,000 miles every 6 months, so my premium is really low.) They also have a Bluetooth device that stays in your car, and your phone has to be connected to it for the app to record trip data; if you happen to be riding as a passenger, the app lets you mark each trip to clarify that you weren’t the driver. I was surprised to learn they aren’t all like that.
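
    For what it’s worth, the pricing logic I described roughly works like the toy sketch below. The function name, discount sizes, and thresholds are all made up for illustration - I don’t know my insurer’s actual formula.

    ```python
    # Purely illustrative: a toy version of the pricing rules described above.
    # The 30%/10% figures and the 0.5x mileage threshold are invented for the example.

    def six_month_premium(base_rate: float,
                          driving_score: float,    # 0.0 (worst) to 1.0 (best), from the tracker
                          estimated_miles: float,  # mileage declared on the application
                          actual_miles: float) -> float:
        """Premium for the next 6-month term under the rules described above."""
        # Good driving only ever earns a discount; bad driving never pushes
        # the premium above the normal (base) rate.
        premium = base_rate * (1.0 - 0.3 * driving_score)

        if actual_miles > estimated_miles:
            # The one exception: driving farther than you estimated raises the rate.
            premium = base_rate * 1.1
        elif actual_miles < 0.5 * estimated_miles:
            # Low mileage earns an extra discount on top of the driving discount.
            premium *= 0.9

        return premium

    # e.g. ~1,000 miles driven in 6 months against a 2,500-mile estimate
    print(six_month_premium(base_rate=600.0, driving_score=0.8,
                            estimated_miles=2500.0, actual_miles=1000.0))
    ```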


  • Language parsing is a routine process that doesn’t require AI; it’s something we’ve been doing for decades, and the phrase in no way plays into the AI hype. Also, the weights may be random initially (though not uniformly random), but the way they are connected and relate to each other is not random, and after training the weights are no longer random at all, so I don’t see the point of bringing that up. Finally, machine learning models are not brute-force calculators. If they were, they would take billions of years to respond to even the simplest prompt, because they would have to evaluate every possible response (even the nonsensical ones) before returning the best one. They’re better described as a greedy algorithm than a brute-force algorithm (there’s a toy sketch of the difference at the end of this comment).

    I’m not going to get into an argument about whether these AIs understand anything, largely because I don’t have a strong opinion on the matter, but also because that would require a definition of understanding, which is an unsolved problem in philosophy. You can wax poetic about how humans are the only ones with true understanding and how LLMs are just encoded in binary (which is somehow related to your point in some unspecified way); however, your comment reveals how little you know about LLMs, machine learning, computer science, and the relevant philosophy in general. Your understanding of these AIs is just as shallow as that of the people who claim LLMs are intelligent agents with free will, complete with conscious experience - you just happen to land closer to the mark.
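
    To make the greedy-vs-brute-force point concrete, here’s a toy sketch. The "model" is just a hard-coded next-token probability table, not a real LLM; the vocabulary and numbers are invented purely for the example.

    ```python
    # Toy illustration of greedy decoding vs. brute-force enumeration.
    from itertools import product

    VOCAB = ["the", "cat", "sat", "<end>"]

    def next_token_probs(prefix):
        # Stand-in for a trained model: a fixed table of "what usually follows what".
        # (A real LLM computes these probabilities with learned, non-random weights.)
        table = {
            (): {"the": 0.7, "cat": 0.2, "sat": 0.05, "<end>": 0.05},
            ("the",): {"cat": 0.8, "the": 0.05, "sat": 0.1, "<end>": 0.05},
            ("the", "cat"): {"sat": 0.85, "cat": 0.05, "the": 0.05, "<end>": 0.05},
        }
        return table.get(tuple(prefix), {"the": 0.05, "cat": 0.05, "sat": 0.05, "<end>": 0.85})

    def greedy_decode(max_len=4):
        # Greedy: at each step take the single most likely next token.
        # Work grows linearly with the length of the output.
        out = []
        for _ in range(max_len):
            probs = next_token_probs(out)
            token = max(probs, key=probs.get)
            if token == "<end>":
                break
            out.append(token)
        return out

    def brute_force_decode(max_len=4):
        # Brute force: score every possible sequence and keep the best one.
        # Work grows like |vocab| ** length - hopeless for a real vocabulary.
        best, best_prob = [], 0.0
        for seq in product(VOCAB, repeat=max_len):
            prob, prefix = 1.0, []
            for token in seq:
                prob *= next_token_probs(prefix)[token]
                prefix.append(token)
            if prob > best_prob:
                best, best_prob = list(seq), prob
        return best

    print(greedy_decode())       # at most 4 model calls
    print(brute_force_decode())  # scores 4**4 = 256 candidate sequences
    ```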



  • You’re thinking of topological closure; we’re talking about algebraic closure. And in any case, the complex numbers are usually described as the algebraic closure of the reals, not of the irrationals. Also, the imaginary numbers (complex numbers with a real part of zero) are in no meaningful way isomorphic to the real numbers. Perhaps you could say their additive groups are isomorphic, or that they are isomorphic as topological spaces, but that’s about it. There is no isomorphism that preserves the whole structure of the reals - the imaginary numbers aren’t even closed under multiplication, for example.
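
    Spelling out that last point in symbols, with I denoting the set of purely imaginary numbers:

    ```latex
    % I = \{\, bi : b \in \mathbb{R} \,\} is the set of purely imaginary numbers.
    % Multiplying two nonzero elements of I lands outside of I:
    \[
      (ai)(bi) = ab\,i^{2} = -ab \in \mathbb{R}\setminus\{0\}
      \qquad \text{for } a, b \in \mathbb{R}\setminus\{0\},
    \]
    % so I is not closed under multiplication, and in particular no map can
    % carry the full field structure of \mathbb{R} onto I.
    ```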