What opinion just makes you look like you aged 30 years

  • argv_minus_one@beehaw.org · 2 points · 1 year ago

    Don’t I? Recompiling avoids ABI stability issues and will reliably fail if there is a breaking API change, whereas not recompiling will cause undefined behavior if either of those things happens.

    • Hagarashi8@sh.itjust.works · 1 point · 1 year ago

      That’s why semver exists: Major.Minor.Patch. Usually you don’t care about patches; they address the efficiency of things inside the lib, with no API changes. Something breaking could be in a minor update, so you should check the changelog to see if you need to do anything about it. A major version will most likely break things. Once you understand this, you’ll find dynamic linking beneficial (no need to recompile on every lib update), and containers will eliminate stability issues because libs won’t update to the next minor/major version without tests.
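
      The compatibility rule can be sketched in a few lines of C (a toy illustration, not any real package manager’s logic):

      ```c
      #include <assert.h>
      #include <stdio.h>

      /* Toy semver check: an upgrade is presumed ABI-compatible when the
       * major version is unchanged and the new version is not older. */
      struct semver { int major, minor, patch; };

      static int parse(const char *s, struct semver *v) {
          return sscanf(s, "%d.%d.%d", &v->major, &v->minor, &v->patch) == 3;
      }

      static int compatible(const char *from, const char *to) {
          struct semver a, b;
          if (!parse(from, &a) || !parse(to, &b)) return 0;
          if (a.major != b.major) return 0;                 /* major bump: expect breakage */
          if (b.minor != a.minor) return b.minor > a.minor; /* newer minor: read changelog, ABI should hold */
          return b.patch >= a.patch;                        /* patch bump: safe */
      }

      int main(void) {
          assert(compatible("1.2.3", "1.2.4"));  /* patch update: fine */
          assert(compatible("1.2.3", "1.3.0"));  /* minor update: fine */
          assert(!compatible("1.2.3", "2.0.0")); /* major update: recompile */
          puts("ok");
          return 0;
      }
      ```

      (Real-world libs don’t always follow this rule, which is exactly why the “check the changelog” step matters.)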

      • argv_minus_one@beehaw.org · 2 points · 1 year ago

        What’s so horribly inconvenient about recompiling, anyway? Unless you’re compiling Chromium or something, it doesn’t take that long.

        • Hagarashi8@sh.itjust.works · 1 point · 1 year ago

          Still, it’s going to take some time, every time some dependency (of a dependency (of a dependency)) changes (because you don’t want to end up with a critical vulnerability). Also, if the app is going to execute some other binary with the same dependency X, dependency X will be in memory only once.

          • argv_minus_one@beehaw.org · 2 points · 1 year ago

            Still, it’s going to take some time

            Compared to the downsides of using a container image (duplication of system files like libc, dynamic linking overhead, complexity, etc), this is not a compelling advantage.

            Also, if app going to execute some other binary with same dependency X

            That seems like a questionable design choice.

            • Hagarashi8@sh.itjust.works · 1 point · 1 year ago

              That seems like a questionable design choice.

              I mean, you could have a GUI for some CLI tool. Then you would need to run the GUI binary, and either run the CLI binary from the GUI or have it as a daemon. Also, if you’re going to make something that has more than one binary, you’ll get more space overhead from static linking than from containers.

              Compared to the downsides of using a container image (duplication of system files like libc, dynamic linking overhead, complexity, etc), this is not a compelling advantage.

              Man, that’s underestimating compile time and the update frequency of various libs, and overestimating the overhead of dynamic linking (it’s so small it’s measured in CPU cycles). Basically, dynamic linking reduces update overhead: with static linking you need to download the full binary on every update, even if the lib is tiny, while with dynamic linking you only have to download the small lib.
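
              Back-of-the-envelope, with sizes invented purely for illustration:

              ```c
              #include <assert.h>
              #include <stdio.h>

              int main(void) {
                  /* Invented sizes for illustration only. */
                  const int app_mb = 38; /* application code */
                  const int lib_mb = 2;  /* the patched library */

                  /* Static linking: the lib is baked into the binary,
                   * so a lib patch means re-downloading everything. */
                  int static_update_mb = app_mb + lib_mb;

                  /* Dynamic linking: only the patched .so is fetched. */
                  int dynamic_update_mb = lib_mb;

                  printf("static: %d MB, dynamic: %d MB\n",
                         static_update_mb, dynamic_update_mb);
                  assert(dynamic_update_mb < static_update_mb);
                  return 0;
              }
              ```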

              • argv_minus_one@beehaw.org · 2 points · 1 year ago

                I mean, you could have a GUI for some CLI tool.

                Yes, I’ve seen that pattern before, but:

                1. I wouldn’t expect them to have many libraries in common, other than platform libraries like libc, since they have completely different purposes.
                2. I was under the impression that Docker is for server applications. Is it even possible to run a GUI app inside a Docker container?

                Also, if you’re going to make something that has more than one binary

                If they’re meant to run on the same machine and are bundled together in the same container image, I would call that a questionable design choice.

                Man, that’s underestimating compile time and the update frequency of various libs

                Well, I have only my own experience to go on, but I am not usually bothered by compile times. I used to compile my own Linux kernels, for goodness’ sake. I would just leave it to do its thing and go do something else while I wait. Not a big deal.

                Again, there are exceptions like Chromium, which take an obscenely long time to compile, but I assume we’re talking about something that takes minutes to compile, not hours or days.

                and overestimating the overhead of dynamic linking (it’s so small it’s measured in CPU cycles).

                No, I’m not. If you’re not using JIT compilation, the overhead of dynamic linking is severe, not because of how long it takes to call a dynamically-linked function (you’re right, that part is reasonably fast), but because inlining across a dynamic link is impossible, and inlining is, as matklad once put it, the mother of all other optimizations. Dynamic linking leaves potentially a lot of performance on the table.

                This wasn’t the case before link-time optimization was a thing, mind you, but it is now.
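
                As a contrived sketch of what inlining buys you (the function and numbers here are invented for illustration):

                ```c
                #include <assert.h>
                #include <stdio.h>

                /* Imagine this function lives in a library. Across a
                 * shared-library boundary (gcc -shared -fPIC), the call in
                 * main() goes through the PLT and cannot be inlined into
                 * the caller. Statically linked with LTO (gcc -flto), the
                 * optimizer sees both sides, inlines the call, and can then
                 * constant-fold the entire loop away. */
                int scale(int x) { return x * 3; }

                int main(void) {
                    int sum = 0;
                    for (int i = 0; i < 1000; i++)
                        sum += scale(i);  /* inlinable only if the body is visible */
                    assert(sum == 3 * (999 * 1000 / 2));
                    printf("%d\n", sum);
                    return 0;
                }
                ```

                Same source either way; the difference is only whether the optimizer is allowed to look across the link boundary.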

                Basically, dynamic linking reduces update overhead: with static linking you need to download the full binary on every update, even if the lib is tiny, while with dynamic linking you only have to download the small lib.

                Okay, but I’m much more concerned with execution speed and memory usage than with how long it takes to download or compile an executable.

                • Hagarashi8@sh.itjust.works · 0 points · 1 year ago

                  I mean, you could have a GUI for some CLI tool.

                  Yes, I’ve seen that pattern before, but:

                  1. I wouldn’t expect them to have many libraries in common, other than platform libraries like libc, since they have completely different purposes.
                  2. I was under the impression that Docker is for server applications. Is it even possible to run a GUI app inside a Docker container?

                  Also, if you’re going to make something that has more than one binary

                  If they’re meant to run on the same machine and are bundled together in the same container image, I would call that a questionable design choice.

                  At the time I was thinking about some kind of toolkit installed through Distrobox. Distrobox basically allows you to use anything from containers as if it weren’t containerized. It uses Podman, so I guess it could be impossible to use Docker for GUI, although I can’t really tell.

                  inlining is, as matklad once put it, the mother of all other optimizations. Dynamic linking leaves potentially a lot of performance on the table.

                  Yes, but static linking means you’ll get security and performance patches with some delay, while dynamic linking means you’ll get patches ASAP.