6 votes

How do you organize your Linux packages?

Hello everyone.

I am planning to get back into Linux development after working with Mac only for almost a decade. On Mac, one of the most important lessons I learned was to always use Homebrew. Using a mix of package managers (Homebrew, npm, Yarn, pip, etc.) creates situations in which you don't know how to uninstall or upgrade certain pieces of software, and it's hard to get a complete overview of what's installed.

How do you Linux folks handle this?

Bonus question: How do you manage your dotfiles securely? I use Bitwarden, and it's a bit clunky.

If it helps: I want to try Mint, and I always use Oh My Zsh.

22 comments

  1. [6]
    streblo
    Link

    Definitely just let your package manager handle everything needed for your system. On the development side of things, decouple your work environment from your system one; with Python, for example, you can use pyenv to manage Python versions and virtual environments to manage Python packages.
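    For instance (a minimal sketch; the pyenv lines assume pyenv is installed, while the venv part works with any Python 3):

    ```shell
    # pyenv manages interpreters outside the system package manager:
    #   pyenv install 3.12    (fetch an interpreter)
    #   pyenv local 3.12      (pin this directory via .python-version)
    # A virtual environment then keeps project packages out of the system:
    python3 -m venv .venv            # project-local environment
    . .venv/bin/activate             # pip now installs into .venv
    python -c 'import sys; print(sys.prefix)'
    ```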

    For dotfiles, I don't manage them individually. I have full system backups, so that's good enough for me. I will manually copy over my fish configuration and a few other things when setting up a new login, but it's rare enough that I haven't tried to make it less manual.

    12 votes
    1. [5]
      gianni
      Link Parent

      To add to this, many Linux distributions come with containerized environments ready to go (e.g. distrobox or toolbox). You can use these to create completely separate development environments. For instance you can install any language toolchains, databases, libraries, and software your application requires without touching your main system! And you can have as many of them as you want.

      If you're feeling particularly adventurous you can take this concept further with dev containers, which allow you to create these same dev environments programmatically from a config file and attach them directly to your editor.
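      A minimal, hypothetical .devcontainer/devcontainer.json gives the flavor (the image name and extension ID here are just examples):

      ```json
      {
        "name": "example-dev-env",
        "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
        "postCreateCommand": "sudo apt-get update",
        "customizations": {
          "vscode": {
            "extensions": ["ms-python.python"]
          }
        }
      }
      ```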

      2 votes
      1. [4]
        CunningFatalist
        Link Parent

        Yeah, dev containers sound interesting and I will probably try them at some point.

        1. [3]
          HeroesJourneyMadness
          Link Parent

          This sent up a little red flag for me. I haven't been on Linux, cloud servers excepted, in many years, but way back when I managed to bork a macOS install with Python 2 and 3, their libraries, and Homebrew.

          Learn from my messy ways and containerize all your dev from the jump. It's really not hard to pick up, and as long as you're using some kind of containerization you can throw away, your system stays pristine. Don't sleep on it like I did.

          1. [2]
            CunningFatalist
            (edited )
            Link Parent

            I don't understand why it sends up a red flag that I want to try something, but thanks for the advice anyway. As for containers in general, I dockerize all my projects already. I just like to have a lot of stuff outside of containers too, for example Go and Node, so that I can quickly test things in my terminal.

            1 vote
            1. HeroesJourneyMadness
              Link Parent

              My bad. I assumed you were unfamiliar with containers… and I have dependency hell PTSD. You clearly know what you’re doing more than I do.

  2. [5]
    TangibleLight
    (edited )
    Link

    On Mint you'll be using apt for nearly everything. There are some exceptions: some open source tools are not distributed on package managers, so you need to obtain them through other means.

    Sometimes you'll be able to add-apt-repository or drop something in /etc/apt/sources.list.d/ to install directly from the maintainer's package repository. Beware that there are security implications here; for example, that's what all the keyring and sources.list.d business in the Docker Engine installation instructions is about.
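    For reference, the Docker Engine pattern looks roughly like this (paths and URLs from memory, so verify against the official docs; needs root and network, so only a sketch):

    ```shell
    # Fetch the maintainer's signing key into a dedicated keyring
    sudo install -m 0755 -d /etc/apt/keyrings
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg |
        sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
    # Register the repo, trusting only that key for it
    echo "deb [signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" |
        sudo tee /etc/apt/sources.list.d/docker.list
    sudo apt-get update && sudo apt-get install docker-ce
    ```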

    You will inevitably encounter snap at some point. I can't overstate how much I dislike it. It's so wasteful: each package is more-or-less given its own isolated container-like environment, with all its dependencies satisfied. That makes linkage errors impossible, but it has bad implications for disk usage and for runtime permissions. Firefox, for example, can't access local files if installed through snap. Ubuntu (Canonical) has been pushing it really hard lately, and I know Mint is (was?) based on Ubuntu, but I don't know how much Mint pushes it.

    Sometimes a tool will give you a .sh file and pinky promise it's fine to run it with sudo. This is most common in my experience with big corporate proprietary closed-source things... offhand I recall certain Dell and Intel and Nvidia drivers doing this.

    Sometimes tools will give you a .sh file and you run it without sudo. Usually these just curl some files and place them somewhere in ~/.local. Offhand I recall Anaconda doing this; it modifies your .zshrc and places some files in ~/.anaconda or similar. Tools like rustup are similar.

    I also like using Stow for managing manually-downloaded binaries and built-from-source installations. I have a directory ~/.local/stow that contains these tools, then stow merges them into ~/.local/bin, ~/.local/lib, etc.
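    Concretely, with a made-up tool name (this assumes GNU Stow is installed):

    ```shell
    # Each manually-installed tool gets its own tree under ~/.local/stow
    mkdir -p ~/.local/stow/sometool-1.0/bin
    touch ~/.local/stow/sometool-1.0/bin/sometool   # stand-in for the downloaded binary
    cd ~/.local/stow
    stow --target="$HOME/.local" sometool-1.0       # symlinks into ~/.local/bin
    stow --target="$HOME/.local" -D sometool-1.0    # -D unlinks it again cleanly
    ```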

    For development tools, I currently use asdf and direnv whenever I can, although I'm interested in switching to mise-en-place, I just haven't done it yet.

    For dotfiles I just use a private GitHub repo, managed with stow as well. I don't keep keys there; keys live in Bitwarden.

    5 votes
    1. [2]
      streblo
      Link Parent

      some open source tools do not distribute binaries on package managers

      Just to clarify, it's actually the other way around. Packagers will package whatever open source tools they need/like/think are popular.

      3 votes
      1. TangibleLight
        Link Parent

        oop, thanks. Maybe I should have just written "are not distributed".

        1 vote
    2. CunningFatalist
      Link Parent

      I didn't know about Stow, thanks a lot.

      and pinky promise it's fine to run it with sudo

      :)

      2 votes
    3. Drupe
      Link Parent

      I use stow to put the dotfiles that I want to sync in a separate directory, so I can use Git on that directory to sync them. I hadn't considered using Stow in your way, but I think that's a great idea! Thank you for sharing!

      1 vote
  3. unkz
    Link

    I do about 95%+ of my development inside docker containers to avoid these kinds of issues.

    2 votes
  4. Amarok
    Link

    Let the package managers handle it; they are all grown up now and pretty damn good at their jobs. :)

    For everything else... I like keeping my stuff and the operating system's stuff completely separate. The interactions just aren't worth the hassle and are easily avoided. Put all of the custom stuff that you do not want the operating system to interfere with into the /opt directory. That is its original purpose, and I've never seen any distro touch that directory for anything.

    Build it out with its own /opt/etc and /opt/lib and /opt/bin and /opt/databases and make your app live there entirely. Compile your own libraries and services, then use symlinks. For example /opt/postgres links to /opt/databases/postgres-12.4.5 until you upgrade to /opt/databases/postgres-13.0.0 and change the link. You can keep a rolling version repository that makes developers and sysadmins happy for every library and app in there.
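    The link swap itself is a one-liner; sketched here under a throwaway directory so it runs unprivileged (in real life these would be the /opt paths above):

    ```shell
    prefix=$(mktemp -d)                  # stand-in for /opt
    mkdir -p "$prefix/databases/postgres-12.4.5"
    ln -s "$prefix/databases/postgres-12.4.5" "$prefix/postgres"
    # Upgrade: install the new version alongside the old, then repoint the link
    mkdir -p "$prefix/databases/postgres-13.0.0"
    ln -sfn "$prefix/databases/postgres-13.0.0" "$prefix/postgres"
    readlink "$prefix/postgres"          # now points at the 13.0.0 directory
    ```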

    Give each app its own user dir in /home/app and set up the scripts, environment variables, and cron jobs that way. That's where your server scripts live and where data processing occurs when needed. That's where developers and sysadmins log in to run things and troubleshoot. If that user needs permissions to start and stop custom services in /opt, it's easy enough to grant them to the user account.

    The advantage of doing it this way is that all you need to do to move it all to a new system, even one running a radically different distro, is move the /opt and /home/app directories over and set up any user permissions again. This lets you give zero fucks about what the operating system has or does, and it lets you install a very minimalist, secure OS if you use this in a production server scenario. Disaster recovery becomes a nothingburger: it's all so easy to back up that a restore takes not much more than the data restore time itself.

    There are tools like Perlbrew that also take this approach and expect to be setting things up on a per-user-account basis. They can save a lot of time.

    This is the old-school unix way of doing things and when it's done well it makes sysadmin work very light indeed. It's how I managed my datacenters and built out the servers in them. Nowadays everyone uses containers instead. :P

    2 votes
  5. [5]
    vord
    Link

    So for general operating system stuff, use the native package manager, but anything you're actively developing should be as isolated from the operating system as possible. I recall hearing secondhand that this isn't really a problem with Mac/Homebrew, as Homebrew doesn't mess with operating system stuff.

    So on Linux, it's probably best to use that language's tooling, installed from that language's website into your userspace rather than into the operating system, or to leverage containers as much as possible. You never want to pollute your system install with all the imports from npm or pip.

    If you want to install Python tools on your system that aren't available from your distro (or where you need something more recent), the best answer is to install pipx from your distro, as it bundles all the deps together and prevents dependency hell when two tools rely on different versions.
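    That workflow looks like this (tool names are just examples; needs network access, so only a sketch):

    ```shell
    sudo apt install pipx      # pipx itself comes from the distro
    pipx install httpie        # gets its own private virtualenv under ~/.local
    pipx install yt-dlp        # a second tool; conflicting deps can't collide
    pipx list                  # overview of everything installed this way
    pipx upgrade-all           # upgrade the lot in one go
    ```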

    I have a small script that sets all my important Bitwarden secrets as environment variables when I need to. It works well enough for my purposes. All other stuff just gets grabbed as part of the system backup.
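    A sketch of such a script using the Bitwarden CLI (bw); the item names here are made up, and you'd source it rather than execute it so the variables land in your shell:

    ```shell
    # Unlock once; bw prints a session token with --raw
    export BW_SESSION="$(bw unlock --raw)"
    # Pull individual secrets into the environment (item names are hypothetical)
    export GITHUB_TOKEN="$(bw get password github-token)"
    export DEPLOY_KEY="$(bw get password deploy-key)"
    ```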

    1 vote
    1. [4]
      CunningFatalist
      Link Parent

      I have a small script that sets all my important bitwarden secrets to ENV variables when I need to. It works well enough for my purposes.

      That's a fantastic idea, thank you. Also thanks for your advice in general, it sounds reasonable. And yes, Homebrew is probably the best reason to develop with a Mac. It's great.

      1 vote
      1. [3]
        vord
        Link Parent

        You could also just use Homebrew on Linux as well, apparently.

        1 vote
        1. [2]
          CunningFatalist
          Link Parent

          I didn't hear good things about it, though.

          1 vote
          1. vord
            Link Parent

            Fair... I'm not in the Mac ecosystem yet (although my workplace is considering sending me one), so I'm mostly hearing secondhand there.

            1 vote
  6. skerit
    Link

    It's probably best to just use the original package manager from which the package comes.
    So use apt/pacman/... for your system packages, but use NPM for your JavaScript packages.

    Python packages are the odd ones out then, I guess. If you work with a lot of Python projects, you'll have to use some kind of Python environment manager anyway to maintain your sanity.

    1 vote
  7. knocklessmonster
    (edited )
    Link

    I'm a fan of distroboxes with their own home directories for anything I won't just have as part of my system. I use Aurora, a derivative of Fedora Kinoite, so this is a hard requirement for me, but I would take this approach to any other distro if I stopped using it. It's been great for creating custom environments as well. I can use distrobox manifests or even GitHub automation to build custom images, slap them into GHCR, and offload the lion's share of the work as well. From there it's all package managers, Containerfiles/Dockerfiles, and distrobox manifests.

    I need to start wrangling my dotfiles again, but I've always just used GitHub; I don't keep anything there that I need to stay secret.

    Alternatively, one could use Nix + home-manager and do all of this declaratively, storing your dotfiles in some sort of privately accessible repository.

    1 vote
  8. fxgn
    Link

    I'm using a custom build of Fedora Silverblue made with BlueBuild. This means that my OS is basically a docker image - it gets automatically updated in GitHub CI every day and all of the system packages and configurations are a part of that image. That way I don't have anything unnecessary installed on my system.

    For user packages (e.g. development stuff), I use Homebrew. You might think that Homebrew is only for macOS, but it is available for Linux too, and it has multiple core benefits for my setup:

    1. It installs everything in a user-writable directory under /home, which means that it works on my system, where / is read-only.
    2. It allows listing your packages declaratively in a Brewfile. This way, again, I know that I'm not creating a mess and I only have the packages that I want to have installed. I can also just install whatever I want using brew, and all unnecessary packages will be deleted the next time I run brew bundle --cleanup (although my dotfile manager, chezmoi, handles this step for me).
    3. Since it's not tied to any specific distro, I can install my dotfile repo on any Linux machine (chezmoi makes this a one-command task) and I will have all of the same packages which I have on my main system, without adding any extra junk into the new system.
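    A Brewfile is just a plain declarative list read by brew bundle; a tiny illustrative example (it's a Ruby DSL, and the package names here are only examples):

    ```ruby
    # Brewfile: one line per package you want kept installed
    brew "git"
    brew "ripgrep"
    brew "go"
    ```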

    For all of the GUI stuff, I just use Flatpaks. They're the best way to install GUI applications, and almost everything is available as a Flatpak.

    1 vote
  9. xk3
    Link
    I've shared this on Tildes before:

    I keep track of useful programs in a similar way to on my phone, using functions like pipinstall and pipuninstall, which write to files such as cargo_installed.