
.::UPDATE-SOLVED::.

With much assistance from @wirap, it is now working. I've symlinked the script directory into /etc/salt/states/scripts, and with the test.sls configuration below it now works.

/root/bin/updater/scripts/pam-setup-access:
  file.managed:
    - name: /tmp/pam-setup-access
    - source: salt://scripts/pam-setup-access
    - mode: 0700

run_script:
  cmd.run:
    - name: /tmp/pam-setup-access
  file.absent:
    - name: /tmp/pam-setup-access
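
For what it's worth, the ordering above relies on the declaration order of the two functions under run_script. A sketch of the same flow with explicit require requisites (copy_script and cleanup_script are just illustrative IDs I chose, not anything Salt requires) would be:

# Copy the script to the target, run it, then remove it; the require
# requisites make the ordering explicit instead of relying on
# declaration order.
copy_script:
  file.managed:
    - name: /tmp/pam-setup-access
    - source: salt://scripts/pam-setup-access
    - mode: 0700

run_script:
  cmd.run:
    - name: /tmp/pam-setup-access
    - require:
      - file: copy_script

cleanup_script:
  file.absent:
    - name: /tmp/pam-setup-access
    - require:
      - cmd: run_script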

I am new to salt-ssh and am looking at some info here on how to run a script that resides on my admin box (/root/bin/updater/scripts/pam-setup-access) on a remote node that I have root access to.

I've created a state file (below) but am not sure where to place it. The salt-ssh docs only mention /etc/salt/master, so I looked that up, but those docs seem to be specific to Vagrant and package installation.

add script:
    file.managed:
    - name: pam-setup-access
    - source: /root/bin/updater/scripts/pam-setup-access

run script:
    cmd.run:
    - name: pam-setup-access

Lastly, I tried simply running the state file from the current directory as shown in the first link but I've only annoyed it. What am I missing?

# salt-ssh '*' state.apply test.sls 
nod0:
    [CRITICAL] Unable to import msgpack or msgpack_pure python modules
    Function state.apply is not available

nod1:
    [CRITICAL] Unable to import msgpack or msgpack_pure python modules
    Function state.apply is not available

.:: UPDATE ::.

I need to use salt-ssh because several "minions" reside on a network separate from the master (a DMZ). As I understand it, the traditional Salt setup requires minions to connect to the Salt master.

Since posting, I've installed salt-ssh from the SaltStack repo, which seems to have gotten rid of the msgpack error above. I've also adapted some of the examples from the Vagrant-specific link above, adding a master file and a states directory under /etc/salt. I've placed the state file (above) under /etc/salt/states/test.sls. Results are below.

I've replaced the spaces in add script and run script with underscores, as suggested by @wirap. This has gotten me further, as shown below. There seems to be some error involving the script path on the client or the server; I'm not sure which yet. It also seems I need to call state.apply with the name of the .sls file only (without the .sls extension).

/etc/salt/master:

file_roots:
  base:
    - /etc/salt/states
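
Not shown above: salt-ssh also reads a roster file (by default /etc/salt/roster) that maps target names like nod0 and nod1 to SSH connection details. A minimal sketch of the format, with placeholder addresses, looks like this:

# /etc/salt/roster (the addresses below are placeholders)
nod0:
  host: 192.0.2.10
  user: root

nod1:
  host: 192.0.2.11
  user: root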

When I launch salt-ssh now, I get:

# salt-ssh '*' state.apply test
nod0:
----------
          ID: add_script
    Function: file.managed
        Name: pam-setup-access
      Result: False
     Comment: Specified file pam-setup-access is not an absolute path
     Started: 11:53:50.237379
    Duration: 0.602 ms
     Changes:   
----------
          ID: run_script
    Function: cmd.run
        Name: pam-setup-access
      Result: False
     Comment: Command "pam-setup-access" run
     Started: 11:53:50.238629
    Duration: 8.297 ms
     Changes:   
              ----------
              pid:
                  1037
              retcode:
                  127
              stderr:
                  /bin/bash: pam-setup-access: command not found
              stdout:

Summary for nod0
------------
Succeeded: 0 (changed=1)
Failed:    2
------------
Total states run:     2
Total run time:   8.899 ms
nod1:
----------
          ID: add_script
    Function: file.managed
        Name: pam-setup-access
      Result: False
     Comment: Specified file pam-setup-access is not an absolute path
     Started: 11:53:50.476743
    Duration: 0.555 ms
     Changes:   
----------
          ID: run_script
    Function: cmd.run
        Name: pam-setup-access
      Result: False
     Comment: Command "pam-setup-access" run
     Started: 11:53:50.477906
    Duration: 7.5 ms
     Changes:   
              ----------
              pid:
                  30772
              retcode:
                  127
              stderr:
                  /bin/bash: pam-setup-access: command not found
              stdout:

Summary for nod1
------------
Succeeded: 0 (changed=1)
Failed:    2
------------
Total states run:     2
Total run time:   8.055 ms
Server Fault
  • `/tmp/pam-setup-access` is probably a bad place for a static filename, especially if there are malicious local users about – thrig Dec 19 '17 at 17:20
  • Yes, I agree. /tmp/ is a horrible place without using an 'un-guessable' filename. It's not mentioned above but worth pointing out that a malicious user could place a symlink from `/tmp/pam-setup-access` to anywhere on the system, causing salt-ssh to overwrite whatever the link is pointing to. It is only above for testing. – Server Fault Dec 19 '17 at 18:55
  • You should not include the solution into the question. Post an answer instead. – ᄂ ᄀ Aug 17 '18 at 19:13

1 Answer

  1. Are you sure you want to use salt-ssh ("Execute salt commands and states over ssh without installing a salt-minion.")? You only need to go this route if you cannot install salt-minion on your minions.
  2. Where did you put your states? Typically, in the /etc/salt/master configuration you set the path where your state files live (file_roots).
  3. Do not use the state supplied in the example verbatim. I'm pretty sure you can't have a space in the ID declaration (add script:); see the sketch after this list.
  4. Your error message is unknown to me. This thread here makes the following suggestion: "The msgpack error is very misleading, as it is entirely unrelated. It seems one of your server.conf files has invalid utf-8 on one minion and the file doesn't exist on the other?" I strongly suspect the space (' ') in your state ID. In my experience, SaltStack can spit out quite misleading error messages if you have invalid characters in your state files or pillar.
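
As a sketch of point 3 only, here are the IDs from your state with the spaces replaced by underscores; the name and source values still need the fixes described in the update below:

# Only the ID declarations have changed (add_script / run_script);
# name and source are addressed further down.
add_script:
  file.managed:
    - name: pam-setup-access
    - source: /root/bin/updater/scripts/pam-setup-access

run_script:
  cmd.run:
    - name: pam-setup-access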

Extra hints:

  • Look here for information about how to set up your state tree (a minimal layout is sketched just after this list).
  • Maybe use a ready-made SaltStack environment (Docker, Vagrant?) that already works, so you can learn how it fits together. Since UtahDave is pretty active in SaltStack, I would go with his demo. (Disclaimer: I did not test this myself!)
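
As a sketch of such a state tree, assuming the /etc/salt/states file_roots from the question (top.sls is only needed for a full highstate; state.apply test reads test.sls directly):

# /etc/salt/states/
#   top.sls
#   test.sls
#   scripts/pam-setup-access   <- served to minions as salt://scripts/pam-setup-access

# top.sls
base:
  '*':
    - test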

Update: When you use file.managed:

  • the source parameter of file.managed is the path on your master (relative to your state path, addressed via salt://), not the path on the target.
  • you need to specify the path on the target (via name or the ID), which is why you're getting the error: "Specified file pam-setup-access is not an absolute path"

Example:

/root/bin/updater/scripts/pam-setup-access:
  file.managed:
    - source: salt://files/pam-setup-access

is the same as:

some-arbitrary-id:
  file.managed:
    - name: /root/bin/updater/scripts/pam-setup-access
    - source: salt://files/pam-setup-access

https://docs.saltstack.com/en/develop/ref/states/all/salt.states.file.html#module-salt.states.file

The same goes for cmd.run.
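
Putting both corrections together for your case, a sketch (assuming the script is reachable under your file_roots as salt://scripts/pam-setup-access, as in your update) could look like this:

add_script:
  file.managed:
    - name: /tmp/pam-setup-access          # absolute path on the target
    - source: salt://scripts/pam-setup-access
    - mode: 0700

run_script:
  cmd.run:
    - name: /tmp/pam-setup-access          # absolute path, so the shell can find it
    - require:
      - file: add_script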

Read the general SaltStack tutorial. When you design states, it does not matter whether you use salt-ssh or salt; don't let this confuse you.

Sybille Peters
  • Thanks - I've added some fixes per your comment. Getting a path error now. Do I need to copy all my local scripts to the **minions**, or will `salt-ssh` take care of doing that? (Looks like this might be covered under the Manage Files section of the Vagrant-specific link above.) – Server Fault Dec 14 '17 at 18:03
  • Glad to see you're making progress. This may be a little more than the scope of one stackoverflow question, but I'll give it a try: The original example you linked to proposed a state to create the script (file.managed) and execute the script (cmd.run). So if you do that, the state should create the file and execute it. – Sybille Peters Dec 14 '17 at 18:15
  • Be aware, this may not be the best design. You may think of creating a set of states that *replace* your script. That is something that SaltStack is ideal for. You can put all your dependencies and error-checking in your states. Good luck! – Sybille Peters Dec 14 '17 at 18:16
  • Yeah, ideally I would have all my scripts salt-ified into state files, but what I'm trying to do right now is automate what I currently have. If I copy the script (`pam-setup-access`) over to the minion (using the path specified in the state file) before running `salt-ssh`, I can get it to work now. Is there a way to tell `salt-ssh` (on the master) to copy this file to the minion automatically and then remove it after running? – Server Fault Dec 14 '17 at 18:59
  • file.absent should work: https://docs.saltstack.com/en/latest/ref/states/all/salt.states.file.html#salt.states.file.absent, do that after your cmd.run. – Sybille Peters Dec 14 '17 at 19:14
  • Obviously, be careful with that one: "If a directory is supplied, it will be recursively deleted." – Sybille Peters Dec 14 '17 at 19:15
  • Thanks. `file.absent` will work for cleanup. Do you know of a way to get the file on the minion (before running salt-ssh) without doing an `rsync` or something by hand first? Maybe I'm missing how `file.managed` works.. – Server Fault Dec 14 '17 at 19:20
  • Let us [continue this discussion in chat](http://chat.stackexchange.com/rooms/70242/discussion-between-wirap-and-server-fault). – Sybille Peters Dec 14 '17 at 19:23