26

Background:

I am finally setting aside some time to join the 21st Century and look at Puppet.

As it stands today, we version control all server configurations in a repository that is held internally at the office. When an update needs making, the changes are checked back into the repository and manually pushed out to the machine in question. This usually means SFTP'ing to the remote machine and then moving files into place, with the relevant permissions, from a shell.

So I am hopeful that Puppet is going to be a simple yet amazing extension to what we already have.

Now, I consider the process that we currently have to be reasonably secure, on the assumption that our internal network will always be relatively more secure than the public networks in our datacentres.

  • The process is always one way. Changes traverse from a secure environment to an insecure one and never the other way round.

  • The master store is in the safest possible place. The risk of compromise, either by stealing configurations or sending out malicious modifications, is greatly reduced.

Question:

From what I understand of the Puppet server/client model, the clients poll and pull updates down directly from the server. The traffic is SSL-wrapped, so it cannot be intercepted or spoofed. But it differs from what we currently do because the Puppet server[s] would need to be hosted in a public location: either centrally, or one for each datacentre site that we maintain.

So I am wondering:

  • Am I being unnecessarily paranoid about the change from push to pull?

  • Am I being unnecessarily paranoid about centrally storing all of that information on a public network?

  • How are others maintaining multiple networks - a separate server for each site?


Update 30/07/09:

I guess that one of my other big concerns is placing so much trust in a single machine. The puppetmaster(s) would be firewalled, secured and so on. But even so, any public machine with listening services has an attack surface of a certain size.

Presumably if the master has permission to update any file on any one of the puppet clients, then its compromise would ultimately result in the compromise of all of its clients. The "keys to the kingdom", so to speak.

  • Is that hypothesis correct?

  • Is there any way that it can be mitigated?

Dan Carley
  • Your hypothesis is correct; compromise of the puppetmaster is compromise of all clients. However, it is easier to feel good about the security of a single machine that you can focus your attention on securing than an entire network of machines, isn't it? Mitigation depends on your environment, but Puppet is written to be plumbing; there are a fair number of "hooks" in place where you can add some auditing or additional checks as needed. – Paul Lathrop Jul 30 '09 at 16:28
  • 1
  • @Paul - Sort of a "put all your eggs in one basket after making sure that you have a very good basket" approach? – Matt Simmons Jul 30 '09 at 17:12

7 Answers

10

Because I sometimes store passwords in variables in my modules, so that I can deploy applications without having to finish configuration manually, I cannot reasonably put my Puppet repo on a public server. Doing so would mean that an attack on the puppetmaster could yield the app or DB passwords of all our different applications on all our servers.

So my puppetmaster is on our office's private network, and I do not run the puppetd daemon on the servers. When I need to deploy, I use SSH from the private network to the servers, creating a tunnel and remotely calling puppetd.
The trick is not to point the remote tunnel and Puppet client at the puppetmaster itself, but at a proxy that accepts HTTP CONNECT and can reach the puppetmaster on the private network. Otherwise Puppet will refuse to pull because of a hostname conflict with the certificates.

# From a machine inside privatenet.net:
ssh -R 3128:httpconnectproxy.privatenet.net:3128 \
    -t remoteclient.publicnetwork.net \
    sudo /usr/sbin/puppetd --server puppetmaster.privatenet.net \
    --http_proxy_host localhost --http_proxy_port 3128 \
    --waitforcert 60 --test --verbose
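
For completeness, the proxy only needs to allow CONNECT tunnels through to the puppetmaster. A minimal sketch of how that could be locked down, assuming Squid (the answer doesn't say which proxy is actually used) and the default puppetmaster port of 8140:

# Hypothetical squid.conf fragment on httpconnectproxy.privatenet.net:
# listen on 3128 and only allow CONNECT tunnels to the puppetmaster's port.
http_port 3128
acl CONNECT method CONNECT
acl puppet_host dstdomain puppetmaster.privatenet.net
acl puppet_port port 8140
http_access allow CONNECT puppet_host puppet_port
http_access deny all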

It works for me; I hope it helps you.

Alex F
  • Brilliant! But do you need a --onetime on the puppetd? Otherwise won't the tunnel collapse after the command is executed, but puppetd will default to running as a server? – Purfideas Dec 10 '09 at 16:01
  • The puppetd which is launched is not daemonized. I prefer to use the --test option in place of the pair --onetime --no-daemonize. So puppetd runs in the foreground, and ssh forces a terminal (the -t option). It also has the advantage that you can interact with the running puppet (e.g. Ctrl-C for a clean puppetd termination). Once puppetd terminates, the ssh session terminates and the tunnel is closed. – Alex F Dec 11 '09 at 14:02
  • I found that this still caused problems, so I ended up configuring an OpenVPN server on the firewall machine so that the network with the puppet server can be reached from the remote machine(s)... – David Gardner Dec 04 '10 at 22:08
4

We have two sites, our office and our colo. Each site has its own puppetmaster. We set up an svn repository with the following structure:

root/office
root/office/manifests/site.pp
root/office/modules
root/colo
root/colo/manifests/site.pp
root/colo/modules
root/modules

The modules directory under each site is an svn:externals directory back to the top level modules directory. This means that they share exactly the same modules directory. We then make sure that the vast majority of the classes we write are under the modules directory and used by both sites. This has the nice advantage of forcing us to think generically and not tie a class to a particular site.
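
For illustration, the externals can be wired up with something like the following (a sketch assuming Subversion 1.5+ relative-URL syntax and treating "root" above as the repository root; adjust paths to your layout):

# Run from a working copy of the repository root:
svn propset svn:externals '^/modules modules' office
svn propset svn:externals '^/modules modules' colo
svn commit -m "Point each site's modules directory at the shared top-level modules"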

As for security, we host our puppetmaster (and the rest of our network) behind our firewall, so we're not that concerned about storing the config centrally. The puppetmaster will only send out config to hosts it trusts. Obviously you need to keep that server secure.

David Pashley
  • Thanks. The svn:externals tip is a nice touch. Everything will be firewalled. But, you know, anything with a listening service inherently has a larger attack surface. – Dan Carley Jul 30 '09 at 15:34
2

I can't make a judgment on how necessary your paranoia is; it depends highly on your environment. However, I can say with confidence that the two major points of your existing configuration can still apply. You can ensure your changes traverse from a secure environment (the repository at your office) to the less secure environment, wherever your puppetmaster is located. You change the process from SFTP'ing to a bunch of servers and manually putting files into place to SFTP'ing to your puppetmaster and letting Puppet distribute the files and put them in the correct place. Your master store is still the repository, and your risks are mitigated.

I don't believe either push or pull is inherently safer than the other model. Puppet does a great job of securing the configurations in transit, as well as authenticating both client and server to ensure there is a two-way trust in place.

As for the multiple networks - we handle it with a central "master" puppetmaster with satellite puppetmasters at each location acting as clients to the central master.
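
A minimal sketch of the satellite side (the hostname here is made up; it is simply the normal server setting pointed at the central master rather than at the satellite itself):

# /etc/puppet/puppet.conf on a satellite puppetmaster
[main]
    # the satellite pulls its own configuration from the central master
    server = central-master.example.com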

Paul Lathrop
  • The satellite approach sounds interesting. Is there any special configuration required? Could you point me in the direction of any documentation? – Dan Carley Jul 30 '09 at 15:37
  • There's not really any special configuration required. You just run puppetd on the satellites. puppet.conf should have the server setting set to the "master" instead of pointing to themselves (which is a more typical configuration) – Paul Lathrop Jul 30 '09 at 16:26
1

One design approach is to have a puppetmaster local to each site of systems, and to use a deployment tool to push changes to the puppetmasters (using git with git hooks could work too).

This would address your concern about listening services on a public network, as the Puppet network traffic would only be internal.

It's also possible to push the manifests out to each server and have the puppet client parse the manifests and apply the relevant configs.
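
A rough sketch of that last approach (the hostname and paths are illustrative, and it assumes a Puppet version that provides the standalone puppet apply command):

# Push the manifests and modules out over SSH, then apply them locally,
# so nothing on the managed host needs to listen for Puppet.
rsync -az manifests modules web01.example.net:/etc/puppet/
ssh web01.example.net sudo puppet apply \
    --modulepath /etc/puppet/modules /etc/puppet/manifests/site.pp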

Mark Carey
0

Mark Burgess, the author of cfengine (to which Puppet seems to owe some of its heritage) and a university professor, has written a lot about push and pull. He claims that pull is inherently more secure. According to the cfengine website, they have had only one network security incident in 17 years, and Burgess claims that is because of the pull design. I think a single point of compromise is inevitable; I would be more concerned about the routes of attack to that point.

SAnnukka
0

You can run Puppet without a central master if you want. One method I've seen is to use a git repository and have scripts that will only merge and deploy an update if the tag is signed by one of a pre-set list of GPG keys. The people involved even worked out how to get stored configs working (used for, e.g., setting up Nagios monitoring on a central server from a resource processed on another server).

So if the central git server were compromised, the other servers would not apply any more updates from it. The GPG keys would be on sysadmin laptops or something similar, along with some way of revoking keys.
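
I don't have their exact scripts to hand, but the core check is simple enough to sketch (the tag selection, trusted keyring, and paths below are assumptions, not the linked setup):

# Rough per-server deploy sketch: fetch, verify the newest tag's GPG
# signature against the locally trusted keyring, then fast-forward and apply.
git fetch origin --tags
TAG=$(git describe --tags --abbrev=0 origin/master)
if git tag -v "$TAG" 2>&1 | grep -q 'Good signature'; then
    git merge --ff-only "$TAG"
    puppet apply --modulepath modules manifests/site.pp
else
    echo "Refusing to deploy: $TAG is not signed by a trusted key" >&2
    exit 1
fi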

Read more at http://current.workingdirectory.net/posts/2011/puppet-without-masters/

Hamish Downer
0

Although you say "external", I really doubt arbitrary people need to connect to your puppetmaster. You can always throw a VPN into the mix. A friend of mine once asked me, "Do you need to worry about the security of the protocol if the connection is secure?" While I don't agree with that attitude, an extra layer never hurts, and it certainly works wonders on my personal paranoia. Besides, it's fun to tunnel tunnels.

neoice