Server Orchestration Using SaltStack

Everyone knows the pain of replacing a computer. I’ve been using Apple’s hardware and OS since around 2005, and Apple introduced “Migration Assistant” back in 2003 with Mac OS X 10.3 (Panther). Using a FireWire connection meant never having to reinstall all my applications or copy over all my files by hand ever again.

Of course, with that ability came the build-up of cruft over time. Between 2005 and 2024 I think I reinstalled all my software on a clean machine maybe once.

I got a new laptop about 2 years ago and used the Migration Assistant, as usual. When the migration was complete, I looked over the file system to see how much space I was using… It was not pretty; of a full TB of space I was using over 900 GB. How!?

It was, of course, caches and “System Files” that were nearly impossible to find and remove. I leveraged tools like DaisyDisk to try to recover space, but I knew this was a fool’s errand. I needed a way to resolve this once and for all.

Overview

About 8 years ago I was working with a consultant and we got to talking about this subject. He mentioned to me that he used Salt to orchestrate his home infrastructure. Intrigued, I spent a little time playing around but really didn’t get anything of consequence off the ground… I gave up after not really trying that hard; I wasn’t up for spending the time.

Fast forward to 2022: I was working for a company whose infrastructure was built using Salt, and this was enough impetus for me to take a second swing at things.

Why Salt

I know that there are alternatives: Chef, Puppet and Ansible all do similar things. My desire to use Salt really stemmed from what I saw back in 2017 and then again in 2022; I was intrigued by Salt as a solution. In addition, I wanted to be able to react to changes that applications make to the systems I’m provisioning, something Salt is well suited for.

What this post will (and won’t) cover

I began building my Salt infrastructure back in January and, although I’ve made good progress, I’m only just at the beginning here. I wanted to address my most pressing needs first, which means computers in my home that don’t change regularly and/or don’t have complicated setups aren’t yet being automatically orchestrated.

My Salt Master is a Linux machine that only runs Salt. Its setup is dead simple so I don’t have it configured as a Salt Minion (although I certainly could and probably should). Also, my gaming machine is Windows based; it changes very infrequently (and I’ve not played with orchestrating Windows machines yet) so it is also not yet a Salt Minion.

With that said, I’ll talk through my goals for using configuration management. I’ll cover initial installation, how to install applications and how to install or add configuration files. I’ll also touch on what I plan to do next.

Why do orchestration at home?

I’ve covered one reason above: my laziness led to my machines getting bloated and full of old cached data and other system files that just took up space and were impossible to get rid of. After years of installing and uninstalling software, messing with config files and generally experimenting, I wanted a way to keep things clean and (equally as important) I wanted an easy path back to clean.

Finally, my daughter is leaving for college in a couple of months and we bought her a laptop. I wanted a way to administer her machine to help keep her digitally safe and secure.

Getting started with SaltStack

At a high level, the Salt ecosystem is made up of one or more Salt Masters (one, in my case) and one or more Salt Minions. The Master issues instructions and the Minions follow those instructions. Instructions are issued to a set or subset of the Minions depending on how they are targeted. You run commands on the Master to interact with the Minions it knows about. Here’s an example:


sudo salt 'bonobo.apes' test.ping
[sudo] password for roger:
bonobo.apes:
    True

Here, I’m checking to see if the Master can communicate with the server called bonobo in the group called apes. As you can see, the Master got back a response from that Minion. The server and the group can be wild-carded.
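For example, to ping every Minion in the apes group you could run something like this:


sudo salt '*.apes' test.ping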

Grains

Grains are the basic facts about a Minion’s system (its OS, hardware, network and so on) and they can be queried and targeted. So, for example, here’s a subset of the response when querying one of my machines (there is so much more than this):


sudo salt 'bonobo.apes' grains.items
[sudo] password for roger:
bonobo.apes:
    ----------
    cpu_flags:
    cpu_model:
        Apple M4
    cpuarch:
        arm64
    model:
        M4
    vendor:
        apple
    en0:
        - 192.168.XXX.XXX
        - fe80::147b:304a:f36d:6aab
        - fd0d:89ab:de66:9a47:1404:dd73:820a:e1b7
    utun4:
        - XXX.XXX.XXX.XXX
        - fe80::ecfa:93d6:85c:e4e5
        - fd7a:115c:a1e0::5101:c30c
    ipv4:
        - XXX.XXX.XXX.XXX
        - 127.0.0.1
        - 192.168.XXX.XXX
    kernel:
        Darwin
    kernelrelease:
        24.5.0
    manufacturer:
        Apple Inc.
    master:
        192.168.XXX.XXX
    mem_total:
        24576
    model_name:
        MacBook Air
    nodename:
        Mac.ts.net lan
    num_cpus:
        10
    num_gpus:
        1
    os:
        MacOS
    os_family:
        MacOS
    osarch:
        arm64
    osbuild:
        24F74
    osfinger:
        macOS 15.5-15
    osfullname:
        macOS 15.5
    osmajorrelease:
        15
    osrelease:
        15.5
    osrelease_info:
        - 15
        - 5

To target a Minion using Grains you could do something like this:


sudo salt -G 'os:MacOS' test.ping
[sudo] password for roger:
bonobo.apes:
    True

Again, the Master got back a response from that Minion. You could install a particular package for just those Minions that are running macOS (or have 10 CPUs or have ARM architecture or whatever you want to target using Grains).
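To make that concrete, here’s an illustrative example (the package name is just for illustration): install a package only on the macOS Minions, or combine Grains using Salt’s compound matcher:


sudo salt -G 'os:MacOS' pkg.install jq
sudo salt -C 'G@os:MacOS and G@cpuarch:arm64' test.ping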

Implementations

Managing macOS devices

Salt, unlike Ansible, has an agent that needs to be installed on the client machine (the Salt Minion). The agent’s job is to ensure that the specified state is applied to itself when triggered (either by a direct command from the Master or by an event locally).

Installation

The installation instructions and packages can be found on the Salt Project website. I won’t go into great detail here since the docs are so good; installation is very simple.

Salt Master

On Debian-based systems (such as my Salt Master) you can install the Salt Master package using apt:


sudo apt install salt-master
sudo systemctl enable salt-master

Salt Minion

For Mac-based systems you can install using a downloaded pkg file or via Homebrew. Since Salt assumes Homebrew is the default package manager on macOS, I’d recommend installing Homebrew first and then using it to install Salt:


brew install salt
sudo launchctl start com.saltstack.salt.minion
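
One step worth calling out: before a newly installed Minion will respond to commands, its key has to be accepted on the Master. Something along these lines (the Minion ID matches my earlier examples):


sudo salt-key -L
sudo salt-key -a bonobo.apes

Here, -L lists accepted, rejected and pending keys, and -a accepts the named key.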

NOTE: The versions installed on the Master and the Minion must match. As of this writing I have 3007.1 installed on both the Salt Master and all the Salt Minions.
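One way to sanity-check this is to compare the Master’s version against what each Minion reports:


salt --version
sudo salt '*' test.version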

Setup

By default the Salt Master expects all of the orchestration configurations to exist in /srv/salt and I didn’t change that config. However, since all of my configurations are stored in GitHub, I checked everything out elsewhere and then created a symbolic link into the /srv/salt directory.
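The repository name and paths below are illustrative, but the idea looks something like this:


git clone git@github.com:example/salt-config.git ~/salt-config
sudo ln -s ~/salt-config/salt /srv/salt
sudo ln -s ~/salt-config/pillar /srv/pillar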

The root of the configuration has two subdirectories: pillar and salt. The salt directory defines the Minions and the packages and files to be installed. The pillar directory, in my case, contains user-specific information: e.g. preferred shell, home directory, username, etc. for a particular server.

NOTE: Pillars are often used to store secrets. However, since my whole config is in GitHub, I chose not to do that. SaltStack can integrate nicely with some pretty solid secrets managers (including AWS and 1Password) so I’m going to explore that route (a topic for a future post?).

Salt

All of the configuration is done with YAML files (with some Jinja templating where needed).


/srv/salt
├─ base/
│  ├─ bonobo.sls
│  ├─ basic.sls
│  ├─ workstation.sls
│  └─ developer.sls
├─ files/
│  └─ ssh/
│     └─ config
├─ packages/
│  ├─ iterm2.sls
│  ├─ slack.sls
│  └─ git.sls
├─ top.sls
└─ users/
   └─ roger.sls

As you can see, at the root of that tree is a file called top.sls. Mine looks something like this:


base:
    'bonobo.apes':
        - base.bonobo

This config is saying that the Minion called bonobo that belongs to the group apes is defined in a file called bonobo.sls in the base directory.

The bonobo.sls file looks like this:


include:
    - base.basic
    - base.workstation
    - base.developer
    - users.roger

So, now exploring this further, the bonobo.apes Minion includes the basic config, the workstation config and the developer config (which are all in the base directory). It also includes the roger user. Here are all those files:

Let’s start with the base/basic.sls file:


include:
    - packages.iterm2
    - packages.chrome
    - packages.oh-my-zsh
    - packages.powerlevel10k
    - packages.1password
    - etc...

This file contains everything I would want on any computer that exists and is a permanent fixture in my home environment. All laptops (mine, my wife’s, and both my daughters’) have this config.

…and now, the base/workstation.sls file:


include:
    - packages.slack
    - packages.discord
    - etc...

This file contains all the packages I want installed on any computer that belongs specifically to me in this environment because I will want to have all my comms, etc. available on all those computers.

…and next, the base/developer.sls file:


include:
    - packages.git
    - packages.nvm
    - packages.docker
    - packages.vscode
    - packages.sql-studio-pro
    - etc...

This file defines everything I want on a computer on which I will do engineering work. So, the personal laptop that is my daily driver is configured this way.

…and finally the users/roger.sls file:


{# users/roger.sls #}
{% set user = pillar['users']['roger'] %}

{{ user.username }}:
    user.present:
        - fullname: {{ user['full_name'] }}
        - shell: /bin/zsh
        - home: /Users/{{ user.username }}

ssh-config:
    file.managed:
        - name: /Users/{{ user.username }}/.ssh/config
        - source: salt://files/ssh/config
        - user: {{ user.username }}
        - group: staff
        - mode: 600
        - makedirs: True
        - unless: 'test -e /Users/{{ user.username }}/.ssh/config'

etc...

This file is a bit more complicated but essentially it’s pulling in Roger’s details from a Pillar, ensuring Roger’s user account exists (with his full name, preferred shell and home directory) and then adding the SSH config file (only if it doesn’t already exist). Notice that the config file is assigned ownership and permissions too.

Before we dive into Pillars, let’s take a quick look at a couple of examples of those package files. In many cases, the default package manager (again, in this case Homebrew) will handle the installation of that package. Here’s the package file for installing Git:


git:
    pkg:
        - installed

This is merely saying that the Git package should be installed and if it is not, then go ahead and install it using the default package manager. Super simple. Here’s the package file for installing “Oh-My-Zsh”:


{% set user = pillar['users']['roger'] %}

install_oh_my_zsh:
    cmd.run:
        - name: sh -c "$(curl -fsSL https://raw.github.../install.sh)"
        - cwd: /Users/{{ user.username }}
        - runas: {{ user.username }}
        - shell: /bin/zsh
        - unless: 'test -e .oh-my-zsh'

In this case, there is no standard install for “Oh-My-Zsh” on macOS so it has to be done manually. This runs a custom command providing various pieces of information Salt needs to be successful.

Bottom line: if I want to add another package, I add a file for that package in the /srv/salt/packages directory and then add it to one of the basic.sls, workstation.sls or developer.sls files. Then next time I apply state, it’s installed.
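For example, adding a hypothetical Firefox package (the name is just for illustration) would mean creating this file and then adding - packages.firefox to base/basic.sls:


{# packages/firefox.sls #}
firefox:
    pkg:
        - installed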

Pillar

As stated earlier, I’m using the pillar directory to include user-specific information that the salt configs don’t contain. Here’s an example of that structure:


/srv/pillar
├─ users/
│ ├─ roger.sls

Let’s look at the contents of that pillar file:


users:
    roger:
        username: roger
        full_name: Sir Roger Smithington III Esq.
        shell: /bin/zsh
        home: /Users/roger

It likely goes without saying, but those values get substituted into the users/roger.sls state file shown above.
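If you ever want to confirm what Pillar data a Minion actually sees, you can refresh and query it from the Master; for example:


sudo salt 'bonobo.apes' saltutil.refresh_pillar
sudo salt 'bonobo.apes' pillar.get users:roger:shell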

Results

The net result of all of this is that any new computers that come online in my home get added to Salt and configured. To configure a computer (or all in a group) I just run something like this:


sudo salt '*.apes' state.apply

…or, since I talked about it, using Grains:


sudo salt -G 'os:MacOS' state.apply

This installs all packages, sets up all config files, adds all users and makes the computer(s) ready for use. I’ve used this to configure every Mac in my home and there is almost nothing that I can’t get installed and configured this way. Even my development machine is just ready to go.

Whenever I have some new piece of software that I want to install, I update the Salt configs and apply the state again across the Minions that I target. Since it’s idempotent, it just adds the new packages.
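If I’m not sure what a change will do, a dry run shows what would be applied without actually changing anything:


sudo salt '*.apes' state.apply test=True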

What’s Next

There are a number of things I want to do next.

As I stated earlier in this article, I want to react to changes made by applications already installed. So for example, if an application opens a port or changes a config somewhere I want to either be informed, or change it back automatically (perhaps a topic for a future post?).
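Salt’s beacons and reactors look like the right mechanism for this. As a rough sketch of the idea (not something I’m running yet, and the inotify beacon shown here is Linux-only, so macOS would need a different beacon), a Minion could watch a file and fire an event whenever it changes:


# /etc/salt/minion.d/beacons.conf (illustrative sketch only)
beacons:
    inotify:
        - files:
            /etc/ssh/sshd_config:
                mask:
                    - modify
        - disable_during_state_run: True

A reactor on the Master could then respond to that event, for example by re-applying state to the Minion that fired it.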

Also, the Salt Master and other computers in my home (Windows will likely be much harder) should really be configured as Salt Minions too. That way, I can stand those machines up quickly from scratch as well.

I also want to integrate with a secrets manager. All of my passwords are in a password manager and I use a YubiKey for SSH access (yet another topic for a future post?) but integrating Salt with something like AWS Secrets Manager would be pretty powerful.

Final Thoughts

This approach does require some level of commitment so that I don’t shortcut things and install software without using Salt, and there’s also the overhead of updating versions periodically. But, frankly, I have it down pretty well now and it doesn’t take long to make those changes.

I’m even able to administer Minions when they aren’t on the local network by using a service called Tailscale (a topic for a future post?). Tailscale is essentially a virtual private network over which I can access any of my computers. This lets me SSH to them, but also target them with Salt to install new software and configs even when they aren’t on my local network.

Of course, longer term this means that I should never use Apple’s Migration Assistant again and my computers should all be fresh installs each time. So far, that’s proved to be true.

I solved the problem I was presented with and I really enjoyed figuring out how to get this all stood up and working. Now that I’m over the hump it’s easy to push this solution further. Salt is a powerful option for client/server orchestration. I remain curious about other solutions such as Ansible, Puppet and Chef, but this is working well for my needs and is flexible for the future work I’d like to be doing with Salt.
