“Falsehood flies, and truth comes limping after it, so that when men come to be undeceived, it is too late; the jest is over, and the tale hath had its effect: […] like a physician, who hath found out an infallible medicine, after the patient is dead.” —Jonathan Swift

  • 0 Posts
  • 114 Comments
Joined 1 year ago
Cake day: July 25th, 2024




  • TheTechnician27@lemmy.world to linuxmemes@lemmy.world · Dirty Talk · 40 points · edited 13 days ago
    • sudo is telling the computer to do this with root privileges.
    • chmod sets permissions.
    • Each digit of that three-digit number corresponds to the owner, the group, and other users, respectively. Each digit is the sum of 4 (read), 2 (write), and 1 (execute), so 0 means no access and 7 means full read, write, and execute access. 077 is thus the exact inverse of 700: 077 means “the owner cannot access their own files, but everyone else can read, write, and execute them”. Corresponding 700 to asexuals is joking that nobody but the owner can even so much as touch the files.
    • / is the root directory, i.e. the very top of the filesystem.
    • The -R flag says to do this recursively downward; in this case, that’s starting from /.

    So here, we’re modifying every single file on the entire system to be readable, writable, and executable by everyone but their owner. And yes, this is supposed to be extremely stupid.
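    A safe way to see the digit arithmetic for yourself: a sketch using Python’s os.chmod on a throwaway temp file instead of running the shell command against / (the 700/077 values mirror the joke above; the temp file is purely illustrative).

```python
import os
import tempfile

# Each octal digit is the sum of 4 (read), 2 (write), 1 (execute),
# applied to (owner, group, others) in that order.
fd, path = tempfile.mkstemp()
os.close(fd)

os.chmod(path, 0o700)  # owner rwx, group none, others none
print(oct(os.stat(path).st_mode & 0o777))  # 0o700

os.chmod(path, 0o077)  # the inverse: owner locked out, everyone else rwx
print(oct(os.stat(path).st_mode & 0o777))  # 0o77 (Python drops the leading zero)

os.remove(path)  # deleting needs directory permissions, not file permissions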



  • Basically what @meekah@lemmy.world said: the idea is to be practicable. Here’s a stream of disconnected thoughts about this:

    • What you pointed out is actually consistent with how a disproportionate number of vegans are staunchly anticapitalist.
    • A cut-and-dried example of someone who’s still vegan but eats animal products based on “practicable” is someone whose prescription medication contains gelatin with no other pill type; vegans aren’t going to say “lol ok too bad bozo you’re not vegan anymore”.
    • The core focus of veganism has traditionally been non-human animals with the idea that a reduction of cruelty and exploitation toward humans is, at most, peripheral. This is changing in my opinion, especially when questions like “vegan Linux distro” don’t involve animals short of what the devs eat.
    • Based on what you say (as someone else pointed out), a distro based solely on FLOSS would probably be regarded as “the most vegan” if that were ever measured by anyone (it never would be).
    • It’s a weird analogy, but after you’re done using and purchasing products derived from animals, what’s “practicable” from there is kind of like a vegan post-game. Many vegans, for example, won’t eat palm oil because of how horribly destructive it is to wildlife.
    • Growing all your own food is in that post-game area of “practicable”. It’s up to you to decide if that’s practicable for you. It’s up to you to implement that if you think it is or, if it’s not, to maybe think about how else you can reduce harm with how you buy vegetables. It’s up to you if you want to share that idea and help other people implement it themselves. It’s widely accepted that it’s not up to you to determine if it’s practicable for others.

  • I would say that most vegans, even if they’ve never heard it, at least approximately follow the Vegan Society’s famous definition:

    Veganism is a philosophy and way of living which seeks to exclude—as far as is possible and practicable—all forms of exploitation of, and cruelty to, animals for food, clothing or any other purpose; and by extension, promotes the development and use of animal-free alternatives for the benefit of animals, humans and the environment. In dietary terms it denotes the practice of dispensing with all products derived wholly or partly from animals.

    Striking the parts that seem irrelevant to this specific question:

    Veganism is a philosophy and way of living which seeks to exclude—as far as is possible and practicable—all forms of exploitation of, and cruelty to, animals for […] any […] purpose […]

    Keep in mind that “animals” in that first part is widely treated as “humans and non-human animals”. So you would have to decide 1) to what extent cruelty was inflicted to create the distro, 2) to what extent people and non-human animals were exploited to create the distro, and 3) if there exist practicable alternatives that meaningfully reduce (1) and (2).







  • TheTechnician27@lemmy.world to memes@lemmy.world · Makes sense to me · 3 points · edited 1 month ago

    Yeah, and to be clear, I actually really like trivia! The front page of Wikipedia has a section called “Did You Know?” (DYK) that has six or seven pieces of daily trivia. These are also researched and follow a similar format. The key differences are that 1) the corresponding article is right there if you want to immediately verify what’s been said, and 2) this article lets you understand the full context of the trivia if you want.

    In this case, the most egregious part isn’t the trivia itself; it’s the kind of culture around trivia that it foments.





  • TheTechnician27@lemmy.world to memes@lemmy.world · \begin · 42 points · edited 2 months ago

    Any stray pixel in a (EDIT: exported) LaTeX document is a confirmed skill issue.

    Text is rotated 90° clockwise and occupies only the left 1/3 of the page in an MS Word document whose pages are all numbered ‘2’? Default assumption is “not your fault.”


  • TheTechnician27@lemmy.world to Programming@programming.dev · Stack overflow is almost dead · 16 up, 1 down · edited 2 months ago

    Dude, I’m sorry, I just don’t know how else to tell you “you don’t know what you’re talking about”. I’d refer you to Chapter 20 of Goodfellow et al.'s 2016 book on Deep Learning, but 1) it tragically came out a year before transformer models, and 2) most of it will go over your head without a foundation from many previous chapters. What you’re describing – generative AI training on generative AI ad infinitum – is a death spiral. Literally the entire premise of adversarial training of generative AI is that for the classifier to get better, you need to keep funneling in real material alongside the fake material.

    You keep anthropomorphizing with “AI can already understand X”, but that betrays a fundamental misunderstanding of what a deep learning model is: it doesn’t “understand” shit about fuck; it’s an unfathomably complex nonlinear algebraic function that transforms inputs to outputs. To summarize in a word why you’re so wrong: overfitting. This is one of the first things you’ll learn about in an ML class, and it’s what happens when you let a model train on the same data over and over again forever. It’s especially bad for a classifier to be overfitted when it’s pitted against a generator, because a sufficiently complex generator will learn to outsmart the overfitted classifier, settling into a cozy little local minimum that works like dogshit in reality but fools the classifier, which is its only job.
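    A minimal sketch of that overfitting failure (deliberately silly toy code, not any specific model): a “model” that simply memorizes its training pairs gets exactly zero training error, yet generalizes worse on unseen points than a plain two-parameter least-squares line.

```python
import random

random.seed(0)
true_fn = lambda x: 2.0 * x + 1.0  # the real relationship, plus noise below
train = [(i / 100, true_fn(i / 100) + random.gauss(0, 0.5)) for i in range(200)]
test = [(i / 100 + 0.005, true_fn(i / 100 + 0.005) + random.gauss(0, 0.5)) for i in range(200)]

# The "overfitted" model: a lookup table that memorizes every training pair
# and parrots the nearest memorized label for anything new.
memorized = dict(train)
def overfit(x):
    nearest = min(memorized, key=lambda k: abs(k - x))
    return memorized[nearest]

# A heavily constrained model: ordinary least-squares line (two parameters),
# so it physically cannot memorize the noise.
n = len(train)
sx = sum(x for x, _ in train); sy = sum(y for _, y in train)
sxx = sum(x * x for x, _ in train); sxy = sum(x * y for x, y in train)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n
line = lambda x: slope * x + intercept

mse = lambda model, data: sum((model(x) - y) ** 2 for x, y in data) / len(data)
print(mse(overfit, train))                   # 0.0: perfect memorization of the training set
print(mse(overfit, test) > mse(line, test))  # True: the memorizer generalizes worse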

    You really, really, really just fundamentally do not understand how a machine learning model works, and that’s okay – it’s a complex tool being presented to people who have no business knowing what a Hessian matrix or a DCT is – but please understand when you’re talking about it that these are extremely advanced and complex statistical models that work on mathematics, not vibes.


  • TheTechnician27@lemmy.world to Programming@programming.dev · Stack overflow is almost dead · 15 up, 1 down · edited 2 months ago

    Your analogy simply does not hold here. If you’re having an AI train itself to play chess, then you have adversarial reinforcement learning. The AI plays itself (or another model), and reward metrics tell it how well it’s doing. Chess has the following:

    1. A very limited set of clearly defined, rigid rules.
    2. One single end objective: put the other king in checkmate before yours is or, if you can’t, go for a draw.
    3. Reasonable metrics for how you’re doing and an ability to reasonably predict how you’ll be doing later.

    Here’s where generative AI is different: when you’re doing adversarial training with a generative deep learning model, you want one model to be a generator and the other to be a classifier. The classifier should be given some amount of human-made material and some amount of generator-made material and try to distinguish the two. The classifier’s goal is to be correct, and the generator’s goal is for the classifier to pick completely randomly (i.e. it just picks on a coin flip). As you train, you gradually get both to be very, very good at their jobs. But you have to have human-made material to train the classifier, and if the classifier doesn’t improve, then the generator never does either.

    Imagine teaching a 2nd grader the difference between a horse and a zebra, having never shown them either before, by holding up pictures and asking whether each contains a horse or a zebra. Except the entire time, you just keep holding up pictures of zebras and expect the child to learn what a horse looks like. That’s what you’re describing for the classifier.
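    That analogy can be made runnable (the features and all numbers here are invented for illustration): a nearest-centroid classifier “trained” only on zebra examples has no contrasting class to learn a boundary from, so it calls every horse a zebra.

```python
# Made-up (stripe score, body size) features for each animal.
zebras = [(9.0, 1.0), (8.5, 1.2), (9.2, 0.8)]
horses = [(0.5, 1.1), (0.2, 0.9), (0.8, 1.0)]

def centroid(points):
    # Component-wise mean of a list of feature tuples.
    return tuple(sum(coord) / len(points) for coord in zip(*points))

# Training data contained only one class, as in the analogy above.
centroids = {"zebra": centroid(zebras)}

def classify(x):
    # Nearest centroid by squared distance; with one centroid, there is
    # only one possible answer.
    return min(centroids,
               key=lambda label: sum((a - b) ** 2 for a, b in zip(centroids[label], x)))

print([classify(h) for h in horses])  # ['zebra', 'zebra', 'zebra']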