• 5 Posts
  • 22 Comments
Joined 6 years ago
Cake day: June 8th, 2019


  • nah, you will attract only those who already kinda agree. Everyone else will just see weirdos with weird ideas, weird clothing and weird vocabulary, approaching them in the street or promoting events they don’t care about.

    “Talking to people” is something I’ve been doing since I got into union organizing, and the way people react to the same arguments varies wildly over time. After the waves of layoffs in the tech sector, non-politicized tech workers are far more receptive to pro-union rhetoric, in a way that would have been impossible before.

    About accelerationism: I’m not saying losing an election is a necessary step in a teleological sense. If you enter elections, you should enter them to win. Nonetheless, a loss is useful for radicalizing people: it recuperates what is perceived as a defeat within one system and feeds it into a different one. Electoral betrayal is useful, but not necessarily something you should strive for, as an armchair accelerationist would claim. There are better ways to spend your time and energy imho, but if it happens, it is still good manure for growing the seeds of something new.



  • The mistake in this logic is believing that this betrayal of electoral logic won’t radicalize people. It is a necessary step. There are now 11 million French people, many of whom probably don’t believe much in electoralism but vote anyway, who are furious at what’s happening.

    People don’t change their minds by listening to arguments; they change them through lived experience. The experience of joy after winning, followed by Macron’s disregard for democratic logic, will mobilize an insane amount of popular energy, contrary to snarky “electoralism doesn’t work” comments that resonate only with a microscopic niche of edgy, maximalist leftists.






  • This paper presents a taxonomy of harms created by LLMs: https://dl.acm.org/doi/pdf/10.1145/3531146.3533088

    OpenAI released ChatGPT without systems to prevent or compensate for these harms, fully aware of the consequences, since this kind of research has been going on for several years. Since then they’ve put paper-thin countermeasures on some of these problems, but they are still pretty much a shit-show in terms of accountability. Most likely they will get sued into oblivion before regulators outlaw LLMs with dialogical interfaces. This won’t do much about the harm that open-source LLMs will create, but it will at least limit large-scale harm to the general population.


  • chobeat@lemmy.ml (OP) to Technology@lemmy.ml · AI panic is a marketing strategy
    20 upvotes · 4 downvotes · edited · 1 year ago

    It’s not from me but from AlgorithmWatch, one of the most famous and respected NGOs in the field of algorithmic accountability. They have published plenty of work on these topics and on the human-rights threats posed by these companies.

    Also, this is an ecosystem analysis of political positioning. These companies and think tanks are going to newspapers, under their own names, to say we should panic about AI. It’s not a secret: just open Google News and a simple search will find a landslide of stories on these topics sponsored by these companies.





  • chobeat@lemmy.ml (OP) to Technology@lemmy.ml · AI panic is a marketing strategy
    2 upvotes · 9 downvotes · edited · 1 year ago

    They published a deliberately harmful tool against the advice of civil society, experts and competitors. They are not only reckless; they have been tasked since their foundation with the mission of creating chaos. Don’t forget that the original idea behind OpenAI was to erode the advantage Google and Facebook had in AI by releasing machine-learning technology as open source. They definitely did that, and now they are expanding their goals. They are not in it for the money (ChatGPT will never be profitable); they are playing a bigger game.

    Pushing the AI panic is not just a marketing strategy but a way to build power. The more dangerous they are considered, the more regulations will be passed that impact the whole sector. https://fortune.com/2023/05/30/sam-altman-ai-risk-of-extinction-pandemics-nuclear-warfare/


  • In the picture you can see the organizations moving in the public sphere around AI. On the left you have right-wing and libertarian think tanks, corporations and frontline actors that fuel a sense of panic around AI, either to sabotage their business competitors or to leverage that panic to project the image of selling a very powerful tool while deflecting responsibility. If the AI is dangerous and sentient, you won’t care much about the engineers behind it.

    On the right you have several public organizations and NGOs operating in the fields of algorithmic accountability, digital rights and so on. They push the opposite narrative to the AI panic, pointing the finger at the corporations and powers that create and govern AI.