• 0 Posts
  • 19 Comments
Joined 1 year ago
Cake day: July 13th, 2023




  • Game industry professional here: we know Riccitiello. He presided over EA at critical transition periods and failed the company. Under his tenure, Steam won total supremacy because he was busy trying to shift people to pay-per-install, slide-your-credit-card-to-reload-your-gun schemes. Yes, his predecessor jumped the shark by publishing the Orange Box, but Riccitiello’s greed sealed the total failure of the industry’s largest company to deal with digital distribution: he ignored that gamers loved collecting boxes (something Valve understood and eventually turned into the massive Sale business, where people buy many more games than they ever play).

    He presided over EA earlier than that too, and failed.

    Both times he ended up getting sacked after the stock hit a record low. But personally he made out like a bandit, selling EA his own investment in VG Holdings (BioWare/Pandemic) after becoming its CEO.

    He’s the kind of CEO a board of directors would appoint to loot a company.

    At Unity, he invested heavily in ads and gambled on being able to become another landlord. He also probably paid good money for reputation management (search for Riccitiello, or even his full name, on Google and marvel at the results) after certain accusations were made.




  • I think at this point we are arguing belief.

    I actually work with this stuff daily, and there are a number of 30B models that exceed ChatGPT on specific tasks such as coding or content generation, especially when enhanced with a LoRA.

    airoboros-33b1gpt4-1.4.SuperHOT-8k, for example, comfortably outputs more than 10 tokens/s on a 3090 and beats GPT-3.5 at writing stories, probably because it’s uncensored. It also has an 8k context window instead of 4k.

    Several recent Llama 2 based models exceed ChatGPT on coding and classification tasks and are approaching GPT-4 territory. Google Bard has already been clobbered into a pulp.

    The speed of advances is stunning.

    M-series Macs can run large LLMs via llama.cpp because of their unified memory architecture; in fact, a recent MacBook Air with 64GB can comfortably run most models just fine (a minimal sketch of this follows at the end of the comment). Even notebook AMD GPUs with shared memory have started running generative AI in the last week.

    You can follow along at chat.lmsys.org. Open-source LLMs are only a few months old but have already started encroaching on the proprietary leaders, who have years of head start.
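
    As a rough illustration of the llama.cpp point above, here is a minimal sketch using the llama-cpp-python bindings on an Apple Silicon (Metal-enabled) build. The model path is a placeholder and the parameter values are assumptions; treat it as an outline, not a recipe for any specific model.

        from llama_cpp import Llama  # pip install llama-cpp-python (Metal-enabled build on Apple Silicon)

        # The path is a placeholder; point it at whatever quantized model file you downloaded.
        llm = Llama(
            model_path="./models/your-quantized-model.gguf",
            n_ctx=8192,        # the 8k context mentioned above
            n_gpu_layers=-1,   # offload every layer into GPU / unified memory
        )

        result = llm(
            "Write a short story about an open-source model catching up to the big labs.",
            max_tokens=256,
        )
        print(result["choices"][0]["text"])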







  • That’s what a win-win looks like. No need to be quiet about it. Russia illegally invaded Ukraine. Now everyone gets to replenish and modernize their weapons and test them in real conditions, while making sure Russia gets enough of a bloody nose to not fucking try this shit ever again.

    Russia did the ‘fuck around and find out’ thing. It was their choice, and the only way they can win is by tankies convincing every other country, countries that just watched rape, murder, pillaging and terrorism being used on another country in Europe by a rabid bear, that somehow Russia was justified and should be allowed a free pass. But it’s not working. The rabid bear is rabid, but there are ways to deal with that.

    Because now they’ve made sure that every country around them is joining the anti-rabid-bear alliance.

    The way the OP framed the article is meant to create the idea that Russia is somehow good because the US military is bad. But that’s a fallacy. The US military is perfectly capable of doing bad shit on behalf of the US, but that does not mean everyone else is good. Sometimes clobbering Nazis is a win-win, and Russia should have known that. Their feeble attempt at reframing may work on Fox-brainwashed Republicans who are reduced to “Putin kills gays and is strong, so Putin is good”, but it turns out Putin is a cuck taking it up the ass from his own chef.



  • Nothing to do with AI. Garbage in, garbage out.

    LLMs are tools that satisfy requests. The developer decided to allow people to put the ingredients for chlorine gas into the input; the LLM never stood a chance and could only comply with the instructions and combine them into the end product.

    A clear indication that we are in the magical witch-hunt phase of the hype cycle, where people expect the technology to have magical induction capabilities.

    We could discuss liability for the developer, but somehow I don’t think a judge would react favorably to "So you put razor blades into your bread mixer and want to sue the developer because they allowed you to put razor blades into the bread mixer."
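
    To make the "the developer decided to allow it" point concrete, here is a hypothetical sketch of the kind of pre-filter a recipe app could run before ever handing user ingredients to an LLM. The combination list and function names are illustrative, not taken from the actual app.

        # Hypothetical input guard for a recipe generator; names and lists are illustrative.
        DANGEROUS_COMBINATIONS = [
            {"bleach", "ammonia"},   # mixing these releases toxic chloramine gas
            {"bleach", "vinegar"},   # mixing these releases chlorine gas
        ]

        def ingredients_are_safe(ingredients):
            """Return False if the request contains a known hazardous combination."""
            normalized = {item.strip().lower() for item in ingredients}
            return not any(combo <= normalized for combo in DANGEROUS_COMBINATIONS)

        print(ingredients_are_safe(["flour", "water", "yeast"]))     # True
        print(ingredients_are_safe(["bleach", "ammonia", "water"]))  # False

    The tool will do what it’s told, so, as with the razor blades in the bread mixer, the place to draw the line is at what the developer lets into it.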