Clearly they can’t be trusted with the quality assurance of their training data.
Archive.ph link
For people like me who were faced with the paywall after reading one too many Reuters articles.
If anyone can share what caused the downvotes, I’d be very happy if you could let me know. Thanks.
CTT claims that Bluesky is great, but I’m just not into that type of social media, so while I’m on Mastodon, I don’t actually use it.
Right, so that was his plan all along?
Keep on promising a full self-driving experience, which even now won’t really happen,
and then promising an affordable Tesla,
which he can then backtrack on?
Overpromise and underdeliver.
That’s the Musk motto.
Can’t wait to see him fail to bring humanity to Mars, and end up trying to credit himself when NASA does it.
They’ll do it in 3-4 years, claiming it’s revolutionary, while they’re just catching up to the competition.
He says he “dreads” the day when he has to “give” it back. So to me, this reads like he was given it for testing so Ford can learn what Chinese manufacturers are doing right, and take some of those ideas to Ford’s own vehicles.
I GOT A JAR OF DIIIIRT!!!
All I want is a performant and modern Emacs with the same speed and startup time as Neovim, without requiring the daemon, and with Neovim’s stability and capabilities (things like super easy language integration and LSP are a godsend).
New slop just dropped
And I’m interested because you wouldn’t make an article if it wasn’t interesting slop, right? Right? (Insert Star Wars meme here)
from OpenAI
Oh, nevermind then. Don’t care.
So what? Is the US Government going to prohibit Nvidia and AMD from using TSMC’s chips to make money for Big Tech, so they can keep up their corrupt lobbying of the US Government? I highly doubt it. It’s just smoke and mirrors, as usual.
AI can fix it if we make it a politician, seeing as it agrees we should’ve started acting on climate change 10 times harder, and 10-20 years earlier than we did.
Operation: Eat the Rich is a go! I repeat: Operation Eat the Rich is a go!
I tried Zen again after this article and it’s improved a lot since 2 months ago.
I’m still missing some keybinds (some I used on Vivaldi aren’t available, and some that are available, like workspace switching, don’t seem to work),
I’m also missing my custom theme from Vivaldi (I might just fork a Zen theme and rebuild it that way),
and I still have some issues with removing the top bar: when I call up the URL bar with Ctrl+L, the entire top bar appears instead of just the URL bar, and I can’t make it disappear with just Esc. Sometimes it just hangs there no matter what I do, just for the sake of being annoying. It’s not even disabled, just hidden, so if you accidentally hover over the top of the window, it’s back! Absolutely infuriating!
But other than that, it’s pretty great!
It’s actually quite impressive for Alpha software. What’s with 2024 and super stable Alphas of projects that make power user capabilities accessible and easy to use for everyone? First, COSMIC DE and now Zen Browser!
Edit: Update on these issues:
Missing keybinds: they’re all already reported as GitHub issues.
Workspace keybinds: caused by an already-reported issue where websites seem to take priority and grab every keybind before the browser does, which meant I had to use one of the worst key combinations I’ve ever used (Win+Alt+{num}) for workspace switching. UPDATE: trying Ctrl+{num}; we’ll see if that works.
Custom theme: I tried to remake it using Mozilla’s tools and get it up on the add-ons store, but Mozilla removed it because it was too similar to another theme? Even though I literally created it? Weird. I couldn’t be bothered to deal with their bullshit, so I forked a Zen theme that someone else had made and based mine on that, with custom Firefox CSS.
Top bar: the issue is still there; not sure what I can do about it.
Yeah, but something like that would be super easy to find and fix without going through lawsuits. And I’d argue the dataset creators would be far less likely to add copyrighted material to the training data when it’s all out in the open and they can immediately be made to remove it and retrain the AI without that data.
Not necessarily. A lot of the harms disappear when everything goes open, which is what this person stands for, and what OpenAI was supposed to stand for.
Open LLM + Open Training Data = Open AI
Copyright and IP concerns disappear with an open dataset.
Open models are inherently more trustworthy because of an obvious reduction in vendor lock-in.
I’m actually on this man’s side.
The idea-stealing he talks about is not unheard of, and multiple people or groups coming up with similar ideas at the same time by looking at market trends is actually quite common.
Add to that the facts that he has evidence for pretty much all of his claims,
AND
that he registered the domain and has evidence of the ideas behind, and ownership of, “Open AI” before Altman’s “OpenAI” was formed,
AND
that he says a lot of his ideas never came to fruition because he couldn’t get funding, yet the one thing he didn’t need crazy funding for, investing in Bitcoin when it was $10 per coin, is something he ended up doing, and it left him well-off.
All that to me is enough evidence that this man is one hell of an unlucky individual.
And as such, I believe him.
So, it’s cool, but not worrying. Title is a bit clickbait-y.
I really think AI development should move toward creating small, single-purpose models that can easily be trained on public domain data of your choice, so they can be run locally and use far less energy than models like Gippity.
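As an illustration of how small that can be, here’s a rough sketch, assuming the Hugging Face transformers library and the small distilbert-base-uncased-finetuned-sst-2-english sentiment checkpoint (both are just my example picks), of a single-purpose model running entirely on local hardware:

```python
# Rough sketch: a small, single-purpose model (sentiment classification)
# running entirely locally, with no calls out to a giant hosted model.
# Assumes `pip install transformers torch`.
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    # A DistilBERT checkpoint with tens of millions of parameters,
    # not hundreds of billions; it runs fine on a laptop CPU.
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

print(classifier("Small, single-purpose models can run on a laptop."))
# Expected shape of output: [{'label': 'POSITIVE', 'score': ...}]
```

A narrow model like that obviously can’t chat, but for a single task it needs a fraction of the compute, which is kind of the point.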