

Nah, storage is fried.
People always focus on systemd whenever this is posted, but all systemd is saying is that it can’t read the service files when it tries to start something. Earlier on, the kernel is complaining about I/O errors as well.


I bet the actual logo display is a full-screen browser too: multiple computers each running Chrome just to display ads.


Depending on the output device it’s still using ALSA underneath (Bluetooth output, for example, is handed to the BT stack instead); PipeWire is dealing with managing and routing the audio rather than actually performing the output.
Makes it portable across architectures while also providing sandboxing.
The fedi software I use (GoToSocial) runs both ffmpeg (Sorry, ffmpreg) and sqlite through WASM. Makes it portable across architectures while also providing sandboxing, and apparently it also makes them easier to integrate with Go code.
And also, JSON was intended as a data serialisation format, and it’s not like computers actually get value from the comments, they’re just wasted space.
People went on to use JSON for human-readable configuration files and instantly wanted to add comments, rather than reconsidering their choice; the truth is that JSON just isn’t a good configuration format.
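A quick sketch of the problem (hypothetical config; `python3 -m json.tool` is just a convenient strict validator):

```shell
# JSON has no comment syntax, so a "commented" config is simply invalid
# and any spec-compliant parser will reject it outright.
printf '{ // default port\n  "port": 8080 }\n' | python3 -m json.tool 2>/dev/null \
  || echo "rejected: JSON does not allow comments"
```

This is why every “JSON with comments” workaround (JSON5, JSONC, `//`-stripping preprocessors) is effectively a different format.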
Compared to, e.g., pushing a button in VS Code and having your browser pop up with a pre-filled GitHub PR page? It’s clunky, but that doesn’t mean it’s not useful.
For starters it’s entirely decentralised: a single email address is all you need to contribute to anything, regardless of where and how it’s hosted. There was actually an article on lobsters recently that I thought was quite neat, about how the combination of a patch-based workflow and email allows for entirely offline development, something that’s simply not possible with things like GitHub or Codeberg.
https://ploum.net/2026-01-31-offline-git-send-email.html
The fact that you can “send” an email without actually sending it means you can queue the patch submissions up offline and then send them whenever you’re ready, along with downloading the replies.
Sourcehut uses it, it’s actually the only way to interact with repos hosted on it.
It definitely feels outdated, yet it’s also the workflow git was designed around. Git makes it really easy to rewrite commit history, while also warning you not to force-push rewritten history to a public branch (like e.g. a PR); that’s because none of that is an issue with the email workflow, where each email is always an entirely isolated new patch.
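A minimal sketch of that offline workflow (throwaway temp repo, hypothetical commit and addresses): each commit becomes a standalone patch file you can create entirely offline and queue up for later.

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.org
git config user.name "You"
echo "fix" > file.txt
git add file.txt
git commit -qm "Fix the thing"
# Writes the latest commit out as a self-contained 0001-*.patch file,
# no network access required:
git format-patch -1 HEAD
# Later, when online (addresses hypothetical):
# git send-email --to=list@example.org 0001-*.patch
```

The `.patch` files are plain mbox-formatted text, which is what makes the “queue now, send later” part work.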


It’s been a few years since I used a Mac, but even then resource forks weren’t something you’d see outside of really old apps or some strange legacy use case; everything just used extended attributes or “sidecar” files (e.g. .DS_Store files in the case of Finder).
Unlike Windows or Linux, macOS takes care to preserve xattrs when transferring files, e.g. its archiver tool automatically converts them to sidecar AppleDouble files, stores them in a __MACOSX folder alongside the base files in the archive, and reapplies them on extraction.
Of course nothing else does that, so if you’ve extracted a zip file or whatever and found that folder afterwards, that’s what you’re looking at.
The latest Nvidia drivers have broken composition in Xfce, so I’ve been raw-dogging basic X11. It’s like I’m using WinXP again.


Really good network stack. Linux is catching up, sure, but places like Netflix run a ton of stuff on BSD simply for that stack.
Depends on the specific BSD, OpenBSD for example is only just now catching up to Linux.
Edit: Slide 28 for a graph
No, MS has been “shipping” curl with Windows for ages; it’s just that legacy PowerShell has an alias mapping curl to its built-in Invoke-WebRequest cmdlet, which predates the bundling. And they won’t change that because it has backwards-compatibility risks.
Upside is, it’s a literal alias: “curl” resolves to the built-in cmdlet while “curl.exe” runs the normal app.
Further upside: if you use the up-to-date version of PowerShell, that alias is gone; they removed it during the transition.
if you want it to go away, everyone who is working on it and making it work right now disagrees with you
I’m sure most people wouldn’t like losing their jobs.


Rust has no stable inter-module ABI, so everything has to be statically linked together. And because of how “viral” the GPL/LGPL are, a single dependency under one of those licenses effectively extends its terms to the whole statically linked binary.
So the community mostly picks permissive licenses that don’t do that, and that inertia ends up applying to the binaries as well for no real good reason, especially when there are options like e.g. the MPL.


Yep, same way people block all ICMP and then wonder why stuff breaks.


Ahh, yeah that’s a bit harder, CSS multiline stuff is pretty flaky from what I can recall. You need to drop down to block layout, e.g. making the containing element a flex container and then centering the icon within that can work, but then we’re back to square one with sizing the icon.


<p><svg class="icon">...</svg> Text</p>
p .icon {
  --size: 1.25em;
  vertical-align: calc(0.5cap - 0.5 * var(--size));
  height: var(--size);
  width: var(--size);
}
Done.
The lead developer of systemd has said multiple times that we should be fine with breaking POSIX if it means developing faster.
I mean, so does GNU.
A lot of this is also post-hoc justification: UNIX didn’t get shared libraries until some point in the 80s (can’t find an exact year), so before that your options were either to statically compile the needed functionality into your program or to keep it as an entirely separate program and call out to that.
It was a perfect mix: in a time when enterprise storage was measured in single-digit megabytes and the only efficient way to share functionality was via separate programs, you’ve got an OS that happens to have “easily pass data between programs” as a core paradigm.
And now people invoke it to attack an init program for also monitoring the programs it starts and not just spawning them.
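That paradigm in one line (toy input, but the shape is the point): three separate programs composed via pipes instead of one program linking in shared “sort and count duplicates” code.

```shell
# sort groups the duplicate lines together; uniq -c then counts each run.
# Neither program knows about the other -- the pipe is the whole interface.
printf 'b\na\nb\n' | sort | uniq -c
```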


Git itself (or any other VCS for that matter) really should treat symlinks as special, similar to how btrfs stores everything as “reflinks” internally. They’d be stored as special references to other tracked objects (so it’d be impossible to commit a symlink that pointed at anything other than a checked-in file, and they’d always stay in sync), and git could materialise them as needed.
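For context, a sketch of what git does today (throwaway temp repo, hypothetical file names): a symlink is stored as a blob with the special mode 120000 whose content is just the target path as text, a plain string rather than a reference to the tracked object, which is why nothing stops it from dangling.

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
echo "data" > real-file
ln -s real-file link
git add real-file link
# The symlink shows up with mode 120000, the regular file with 100644;
# the link's blob contains only the literal string "real-file".
git ls-files -s
```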
To me lying implies an intent to deceive, and LLMs can’t do that, as they have no intentions or understanding of the output they produce.
It’s not lying, because it’s not telling the truth either; it’s just statistically weighted noise.