• Fuck Yankies@lemmy.ml

      Sure it’s for security… securing my host systems, you goomba. You devs being heave-hoed out of my deployment and migration is one of the greatest releases ever, next to busting a nut. Keep your filthy containers and VMs. Stay outta my host systems.

      I’m a computer custodian and I absolutely hate the devs. They are maniacs. Harumph.

      • adr1an@programming.devM

        Docker is not rootless. It’s only safe as long as the container (or those web devs) doesn’t use nsenter or anything similar to get root access outside of it ;)

          • adr1an@programming.devM

            Ah, my bad “again”… should have mentioned that there’s the advanced configuration option that 1% of geeks actually use
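
            For the curious, that advanced option is per-user rootless mode. A minimal sketch using the setup script Docker ships, assuming the uidmap package and subuid/subgid ranges are already configured for your user:

            ```shell
            # Install and start a per-user rootless daemon:
            dockerd-rootless-setuptool.sh install

            # Point the client at the per-user socket instead of the system one:
            export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/docker.sock

            # Sanity check: the security options should include name=rootless
            docker info --format '{{.SecurityOptions}}'
            ```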

            • Fuck Yankies@lemmy.ml

              It’s not a question of being a geek, but of securing your entire supply chain. If you don’t already vet container image layers and cosign said containers, chances are you’re already in risky waters all the same.

              In essence, rootful mode was never that big of a risk compared to the actual runtimes. Certain attacks don’t even care whether they’re in a user container if they break the kernel itself, even with SELinux and AppArmor taken into account.

              Rootless containers aren’t a magic bullet as a result. The only thing you should concern yourself with is what you’re pushing to prod, how you layer your images, and cosigning, so that you can trace… every mess… to every desk jockey junior…

              You…

              Do not…

              Mess with my infra.
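
              The cosigning part can be sketched with sigstore’s cosign CLI; the registry, image name, and digest below are placeholders:

              ```shell
              # Generate a signing keypair (writes cosign.key / cosign.pub):
              cosign generate-key-pair

              # Sign the image by digest rather than a mutable tag
              # (registry.example.com/app and <digest> are placeholders):
              cosign sign --key cosign.key registry.example.com/app@sha256:<digest>

              # Verify before anything gets deployed:
              cosign verify --key cosign.pub registry.example.com/app@sha256:<digest>
              ```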


  • Jo Miran@lemmy.ml

    Eight years old and still hits home for the most part. Nowadays though, what I get is mostly “we’re moving to Azure” from clients that have no business in the cloud. Some environments simply cannot move to the cloud without a redesign from scratch.

    • xmunk@sh.itjust.works

      Honestly? Pretty fucking awesome if you get it configured correctly. I don’t think it’s super useful for production (I prefer chef/vagrant) but for dev boxes it’s incredible at producing consistent environments even on different OSes and architectures.

      Anything that makes it less painful for a dev to destroy and rebuild an environment that’s corrupt or even just a bit spooky pays for itself almost immediately.
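
      That destroy-and-rebuild loop is just a couple of commands; the image name here is made up for illustration:

      ```shell
      # Build the dev image from the project's Dockerfile:
      docker build -t myapp-dev .

      # Work inside a throwaway container with the source mounted in:
      docker run --rm -it -v "$PWD":/app -w /app myapp-dev bash

      # Environment feels spooky? Burn it down and rebuild from scratch:
      docker image rm myapp-dev
      docker build --no-cache -t myapp-dev .
      ```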

      • MajorHavoc@programming.dev

        I don’t think it’s super useful for production (I prefer chef/vagrant)

        Yeah!

        Docker and OCI get abused a lot to thoughtlessly ship a copy of the developer’s laptop into production.

        Life is so much simpler after taking the time to build thoughtful correct recipes in an orchestration tool.

        Anything that makes it less painful for a dev to destroy and rebuild an environment that’s corrupt or even just a bit spooky pays for itself almost immediately.

        Exactly. The learning curve is mean, but it’s worth it quickly as soon as the first mystery bug dies in a rebuild fire.

    • Platypus@sh.itjust.works

      In my experience, very, but it’s also not magic. Being able to package an application with its environment and ship it to any machine that can run Docker is great but it doesn’t solve the fact that modern deployment architecture can become extremely complicated, and Docker adds another component that needs configuration and debugging to an already complicated stack.

      • skuzz@discuss.tchncs.de

        And a new set of dependency problems depending on the base image. And then fighting layers, both to optimize size and, with some image hubs, “why won’t it upload that one file change? It’s a different file now! The hashes can’t possibly be the same!” And having to find hacky ways to slap it so the correct files are in the correct places.

        Then manipulating multi-arch manifests to work reliably for other devs in a cross-processor environment so they don’t have to know how the sausage is made…
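
        The multi-arch dance usually goes through buildx; a rough sketch with a placeholder registry, assuming QEMU emulation is available for the foreign architecture:

        ```shell
        # One-time: create and select a builder that can do multi-platform builds:
        docker buildx create --name xbuilder --use

        # Build for both architectures and push a single manifest list:
        docker buildx build \
          --platform linux/amd64,linux/arm64 \
          -t registry.example.com/myapp:latest \
          --push .

        # Inspect what actually landed in the manifest:
        docker buildx imagetools inspect registry.example.com/myapp:latest
        ```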

    • marcos@lemmy.world

      It’s a way to provide standard configuration for your programs without one configuration interfering with another.

    Honestly, almost all alternatives work better. But Docker is the one you can run on any system without large changes.

    • Gamma@beehaw.org

      I think they’re really useful. There are alternatives that I think have feature parity at this point, but the concepts of containerization are the same.