• 1 Post
  • 71 Comments
Joined 3 years ago
Cake day: July 3rd, 2023

  • What I think, though, is that it’s particularly hard on Linux to fix programs, especially if you are not a developer (which is always the perspective I try to see things from). The most notable architectural difference between, e.g., Windows and Linux is that on Windows you can simply drop a library into the same folder as the executable and the program will use it (an action every common user can perform and fully understand). On Linux you can hypothetically work with LD_PRELOAD, but (assuming someone has already written a tutorial and points you to the file to grab) even that requires more knowledge of system concepts.

    You’re underestimating how advanced a Windows user has to be to realize that putting a DLL in the right directory makes it the library used by a program run from that directory. Most users won’t even know what a DLL is. I also work in security professionally and have used this fun little fact to get remote code execution multiple times, so I don’t see how it’s a good thing, especially when you consider that Linux’s primary use case is servers. You can do the exact same thing on Linux, as you said; it’s just opt-in behavior. If you’re knowledgeable enough to know what a DLL is and what effect placing one in a given folder has, you’re knowledgeable enough to know what a shared library is and how to open a text editor and type LD_LIBRARY_PATH or LD_PRELOAD. I don’t buy this argument at all.
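    For what it’s worth, the opt-in mechanics on Linux are just a couple of environment variables. A rough sketch (the program and library names here are made up for illustration):

    ```shell
    # Sketch of the Linux-side equivalent of "drop a DLL next to the .exe":
    # point the loader at a directory of replacement libraries, per invocation.
    mkdir -p ./patched-libs                  # put the replacement .so files here

    # Prefer libraries from ./patched-libs for this one run only
    # (/bin/echo stands in for the real program being fixed):
    LD_LIBRARY_PATH=./patched-libs /bin/echo "loader override in effect"

    # Or force one specific library to be loaded before all others:
    # LD_PRELOAD=./patched-libs/libfix.so ./someprogram
    ```

    Unlike the Windows same-directory trick, nothing happens unless you ask for it, and it only affects the one invocation.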

    Linux Desktop is predominantly a volunteer project. It is not backed by millions of dollars and devs from major corporations the way the kernel and base system are. It is backed by people who are doing way too much work for free. They likely care about accessibility and about people using their project, but they also care about the myriad other issues faced by the other 90+% of their user base. Is that hugely unfortunate? Yes, it sucks. I wish there were money invested in Linux as a desktop platform, but compared to macOS and Windows it’s fair to say the investment is a rounding error away from $0.


  • qqq@lemmy.world to linuxmemes@lemmy.world · No comment (edited · 6 days ago)

    This is not semantics at all. Earlier you said that Linux not having a stable ABI is the cause of a ton of problems:

    “This shit is the exact reason Linux doesn’t just have ridiculously bad backwards compatibility but has also alienated literally everyone who isn’t a developer, and why the most stable ABI on Linux is god damn Win32 through Wine.”

    Android doesn’t get any extra Linux ABI guarantees beyond what everyone else gets: it’s simply using the Linux kernel. I can easily compile a program targeting generic aarch64 with a static musl libc and run the binary on Android. So no, it isn’t semantics; it’s good evidence that you’re not correct in claiming that Linux ABI stability is terrible. Maybe you’re using the term “Linux ABI” more loosely than everyone else, but that’s not “just semantics”: the Linux ABI is a well defined concept, and the parts of it that are stable are well defined.

    “Life is Strange: Before the Storm shipped with native Linux support back in 2017. That was a different era - glibc 2.26 was current, and some developers made the unfortunate choice of linking against internal, undocumented glibc symbols.”

    The very first line of the blog post you shared. That has nothing to do with Linux ABI stability, or honestly even glibc ABI stability; if you’re going to use symbols that are explicitly internal, you can’t get annoyed when they change…? That’s a terrible example.

    Adobe still shipped CS6 until 2017, so it’s 9 years old. That’s not particularly ancient, and it’s backed by… well Adobe. They have a bit of money. [EDIT: https://helpx.adobe.com/creative-suite/kb/cs6-install-instructions.html actually it’s still available on their site so… I wouldn’t expect any issues]

    Did you run Total Annihilation through Steam? I found this link https://steamcommunity.com/app/298030/discussions/0/1353742967805047388/ and even then people had to modify things to get it running. It’s very impressive that it runs at all, and yes, Windows is most definitely the king of backwards compatibility. Or at least it used to be; I’m happy to know very little about modern Windows, and it’s definitely not backwards compatible with hardware…

    The person who wrote the “filtered out comment” was batshit crazy and clearly didn’t even know what they were talking about. “Stable ABIs are what lead to corpo-capital interests infecting every single piece of technology and chaining us to their systems via vendor lock-in” is one of the most nonsensical statements I’ve read on Lemmy. I wish I had more downvotes available.

    It’s important to remember that the Linux kernel has millions of dollars and full time devs from companies like Google and Microsoft working on it. The Linux Desktop space does not have that. Like at all. Linux Desktop is predominantly a volunteer project. Valve has started putting money into it which is great, but that’s very recent.



  • qqq@lemmy.world to linuxmemes@lemmy.world · No comment (edited · 7 days ago)

    Running 20 year old binaries is not the primary use case and it is very manageable if you actually want to do that. I’ve been amazed at some completely ancient programs that I’ve been able to run, but I don’t see any reason a 20 year old binary should “just work”, that kind of support is a bit silly. Instead maybe we should encourage abandonware to not be abandonware? If you’re not going to support your project, and that project is important to people, provide the source. I don’t blame the Linux developers for that kind of thing at all.

    “devs are often being discouraged from compiling tools in a way that makes them work forever (since that makes the app bigger and potentially consume more memory)”

    This is simply not true. If you want your program to be a core part of a distribution, yes, you must follow that distribution’s packaging and linking guidelines: I’m not sure what else a dev would expect. There is no requirement that your program be part of a distribution’s core. Dynamic linking isn’t some huge burden holding everyone back and I have absolutely no idea why anyone would pretend it is. If you want to static link go for it? There is literally nothing stopping you.
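    Concretely, nothing stops you. With a typical C toolchain installed (the `cc` name and file names here are assumed), static linking is one flag:

    ```shell
    # Hypothetical hello-world to demonstrate; nothing distro-specific.
    cat > hello.c <<'EOF'
    #include <stdio.h>
    int main(void) { puts("statically linked"); return 0; }
    EOF

    # -static bakes the C library into the binary; the result runs with
    # no runtime .so dependencies (at the cost of a larger executable).
    cc -static -o hello hello.c
    ./hello
    ```

    The resulting binary depends only on the kernel’s stable syscall interface, which is exactly why it keeps working across distro upgrades.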

    Linux desktop isn’t actively working against disabled people, don’t be obtuse. There is so much work being done for literally no money by volunteers and they are unable to prioritize accessibility. That’s unfortunate but it’s not some sort of hypocritical alienation. That also has likely very little to do with the Linux kernel ABI stability like you claimed earlier.

    But this idea that “finally we have people that want Linux to work” is infuriating. Do you have any idea how much of an uphill battle it has been to just get WiFi working on Linux? That isn’t because the volunteer community is lazy and doesn’t want things to work: that’s because literally every company is hostile to the open source community to the point of sometimes deliberately changing things just to screw us over. The entitlement in that statement is truly infuriating.


  • My understanding of the linking rules for the GPL is that they’re pretty much always broken and I’m not even sure if they’re believed to be enforceable? I’m far out of my element there. I personally use MPLv2 when I want my project to be “use as you please and, if you change this code, please give your contributions back to the main project”


  • qqq@lemmy.world to linuxmemes@lemmy.world · No comment (edited · 7 days ago)

    It should be noted that statically linking against an LGPL library does still come with some constraints. https://www.gnu.org/licenses/gpl-faq.html#LGPLStaticVsDynamic

    You have to provide the source code for the version of the library you’re linking against somewhere. So basically, if you ship a statically linked glibc executable, you need to provide the source for the glibc parts you included. I think the ideal way to distribute it would be to not statically link at all and instead deliver a shared library bundled with your application.

    EDIT: Statically linking libc is also a big pain in general; for example, you lose dlopen. It’s best not to statically link it if possible. All other libraries: go for it.
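    A minimal sketch of that “bundle a shared library” approach, assuming a C toolchain (all file names invented): an $ORIGIN rpath makes the loader search relative to the executable itself, so the bundled .so travels with the app and the user can still swap it out, which is what the LGPL wants.

    ```shell
    mkdir -p demo/libs && cd demo

    # Stand-in for the LGPL library you'd otherwise statically link:
    cat > greet.c <<'EOF'
    const char *greet(void) { return "hello from bundled lib"; }
    EOF
    cat > app.c <<'EOF'
    #include <stdio.h>
    const char *greet(void);
    int main(void) { puts(greet()); return 0; }
    EOF

    cc -shared -fPIC -o libs/libgreet.so greet.c
    # '$ORIGIN/libs' tells the dynamic loader to look in libs/ next to the
    # binary at runtime (keep the single quotes so the shell doesn't expand it):
    cc -o app app.c -Llibs -lgreet -Wl,-rpath,'$ORIGIN/libs'
    ./app
    ```

    The whole demo directory can be moved or copied anywhere and the binary still finds its bundled library.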




  • “Fortunately we do have a steady influx of new people incl. those who demand shit to god damn work, finally shifting this notion.”

    What the hell is going on in this thread? Linux has been actively developed by people who want “shit to god damn work” forever. What are the concrete examples of things that don’t work? Old games? Is that the problem here? Things developed for the locked-in Windows ecosystem since time immemorial, which never ran on Linux, now, through all the work of the Linux ecosystem, miraculously do run on Linux. It’s amazing that they work at all: they were never intended to!


  • The Linux ABI stability is tiered, with the syscall interface promising to never change which should be enough for any application that depends on libc. Applications that depend on unstable ABIs are either poorly written (ecosystem problem, not fixable by the kernel team, they’re very explicit about what isn’t stable) or are inherently unstable and assume some expertise from the user. I’d say the vast majority of programs are just gonna use the kernel through libc and thus should work almost indefinitely.


  • But you can do that: Linux provides a ton of ways to use different versions of the same lib. The distro is there to provide a solid foundation, not be the base for every single thing you want to run. The idea is you get a core usable operating system and then do whatever you want on top of that.






  • For loops with find are evil for a lot of reasons, one of which is spaces:

    $ tree
    .
    ├── arent good with find loops
    │   ├── a
    │   └── innerdira
    │       └── docker-compose.yml
    └── dirs with spaces
        ├── b
        └── innerdirb
            └── docker-compose.yml
    
    3 directories, 2 files
    $ for y in $(find .); do echo $y; done
    .
    ./arent
    good
    with
    find
    loops
    ./arent
    good
    with
    find
    loops/innerdira
    ./arent
    good
    with
    find
    loops/innerdira/docker-compose.yml
    ./arent
    good
    with
    find
    loops/a
    ./dirs
    with
    spaces
    ./dirs
    with
    spaces/innerdirb
    ./dirs
    with
    spaces/innerdirb/docker-compose.yml
    ./dirs
    with
    spaces/b
    

    You can kinda fix that with IFS (this breaks if newlines are in the filename, which would probably only happen in a malicious context):

    $ OIFS=$IFS
    $ IFS=$'\n'
    $ for y in $(find .); do echo "$y"; done
    .
    ./arent good with find loops
    ./arent good with find loops/innerdira
    ./arent good with find loops/innerdira/docker-compose.yml
    ./arent good with find loops/a
    ./dirs with spaces
    ./dirs with spaces/innerdirb
    ./dirs with spaces/innerdirb/docker-compose.yml
    ./dirs with spaces/b
    $ IFS=$OIFS
    

    But you can also use something like:

    find . -name 'docker-compose.yml' -printf '%h\0' | while read -r -d $'\0' dir; do
          ....
    done
    

    or in your case this could all be done from find alone:

    find . -name 'docker-compose.yml' -execdir ...
    

    -execdir in this case is basically replacing your cd $(dirname $y), which is also brittle when it comes to spaces and should be quoted: cd "$(dirname "$y")".
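    A quick sanity check (scratch directory, names mirroring the tree above) that -execdir really does run its command from inside each matching file’s directory, spaces and all; pwd stands in for the real command:

    ```shell
    # Recreate one of the awkward paths from the example tree:
    mkdir -p 'dirs with spaces/innerdirb'
    touch 'dirs with spaces/innerdirb/docker-compose.yml'

    # -execdir chdirs into the match's directory before running the command,
    # so no word splitting or quoting of the path ever happens in your shell:
    find . -name 'docker-compose.yml' -execdir pwd \;
    # prints an absolute path ending in .../dirs with spaces/innerdirb
    ```

    Note that GNU find refuses to use -execdir if your $PATH contains a relative entry such as “.”, which is a safety feature, not a bug.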



  • I love nix and NixOS, but yes, the documentation is incredibly insufficient. I’d recommend a normal distro + the nix package manager first for a personal laptop. You have to be OK with occasionally taking a detour to learn how to build some random program from source in a sandbox with no networking, so it’s kinda clunky as a daily-use OS imo. It shines on servers though.


  • NixOS is fun but requires tinkering for a desktop/laptop. You can use the nix package manager on any other distro though. At work I use Fedora and still use the nix package manager a ton when I want to, but I’m not locked into it when something needs to just work quickly. I have NixOS on my personal laptop and I kinda wish I didn’t. I have it on my home server and I’m very happy I did that.