19

What do you think are the best practices to keep dozens (if not hundreds) of Debian servers up to date? Keeping in mind that:

  • There are groups of servers (e.g. identical web servers, DB servers, ...)
  • There can be several Debian releases in play (lenny, etch)
  • Running a loop over all servers and doing apt-get update && upgrade is not acceptable (because that's what I'm doing at the moment :) ). It should be better than this!

Currently, when I finally finish all the upgrades, a new security update is posted, and I have to do it all over again.
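For reference, the naive serial loop described above can be sketched as follows (host names are hypothetical, and the actual remote call is left commented out as a dry run):

```shell
#!/bin/sh
# The naive approach: visit every host in turn and upgrade it serially.
# Host names are made up; uncomment the ssh line to run it for real.
HOSTS="web1 web2 db1"
for host in $HOSTS; do
    echo "upgrading $host"
    # ssh "root@$host" 'apt-get update && apt-get -y upgrade'
done
```

The pain points are obvious: it is serial, it downloads every package once per host, and a security update published halfway through means starting over.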

Thanks in advance, ServerFault community!

Falken
  • Have one local server store the latest packages and use it as an apt repository; this will save you bandwidth and time. Use your local repository to distribute updates to the local servers. Oh, and use aptitude instead of apt-get. – Karolis T. Oct 28 '09 at 13:41
  • Yes to the mirror, no to aptitude. There's no benefit these days. It doesn't even have super cow powers. – David Pashley Oct 28 '09 at 14:27

10 Answers

12

I use apt-dater to manage upgrading all my Debian boxes. Seems to do the trick well enough. Haven't tried to scale it up to hundreds of hosts though.

Haakon
  • Interesting product; I had never heard of it. – wzzrd Oct 28 '09 at 15:14
  • It's very good! I would promote this answer if apt-dater didn't require a local package installed on each host ... and I don't understand why it's even needed. – Falken Oct 28 '09 at 15:57
  • After testing, this tool is awesome! But it works for dozens of servers, not hundreds. When handling a lot of machines it becomes flaky and slow ... too bad. – Falken Oct 29 '09 at 11:06
  • I think the main reason for the package is to ensure the required dependencies are there, more than anything else. – Haakon Oct 29 '09 at 12:42
  • I promote this answer because I finally managed to use it, but the other solutions are quite good too, depending on your preferences/environment! – Falken Nov 10 '09 at 10:17
  • Excellent news. How did you get it to scale? Or is it still flaky and slow? – Haakon Nov 11 '09 at 01:09
  • The default SSH agent on Ubuntu was the culprit. I simply removed it and used plain ssh-add, and all the slowness vanished! – Falken Apr 06 '10 at 09:28
10

Google solved this with debmarshal:

http://code.google.com/p/debmarshal/

It lets you approve packages from an upstream repository for installation on your production hosts.

Then you can just run cron-apt in fully automatic mode.
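For reference, cron-apt drives apt-get from a nightly cron job using small action files. A sketch of a fully automatic setup, assuming the Debian package's default layout (verify paths and defaults against your installed version):

```
# /etc/cron-apt/config (hypothetical values)
MAILTO="root"
MAILON="upgrade"

# /etc/cron-apt/action.d/3-download -- the stock file only downloads (-d);
# dropping -d makes the nightly run actually install the upgrades:
autoclean -y
dist-upgrade -y -o APT::Get::Show-Upgraded=true
```

Combined with a debmarshal-style approved repository, the nightly run can only ever install packages you have already signed off on.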

Here's an intro video:

http://www.youtube.com/watch?v=L3hRToC23mQ

LapTop006
3

We trialled using Puppet to roll out security fixes for non-essential packages. We would run apticron to email a list of updates for every server, then run a daily script that merged these updates into a Puppet manifest file giving the package and version for each distribution. This would then update a bunch of files on the individual servers and kick off an upgrade script when a package needed upgrading. This worked reasonably well, but we haven't tested it quite as much as I'd like. This scheme also got around Puppet's limitation of not allowing the same resource to be defined in multiple places.
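For illustration, the entries that such a merge script generates are essentially per-package version pins; a hypothetical fragment (package name and version are made up) might look like:

```puppet
# Hypothetical fragment emitted by the merge script: pin an approved update.
package { 'openssl':
  ensure => '0.9.8g-15+lenny6',
}
```

Pinning an explicit version, rather than `ensure => latest`, is what keeps an unreviewed upstream update from being installed automatically.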

I was also not comfortable with doing automatic upgrades of things like MySQL or PostgreSQL, where a random update would shut down a service, possibly in the middle of the day. These would still require manual updates.

Spacewalk and debmarshal do look like suitable alternatives to our Puppet scheme.

David Pashley
1

Apparently, Spacewalk now has preliminary support for Debian. That, perhaps together with Puppet, would be my starting point. I'm pretty sure the guy developing the Debian support for Spacewalk would love you for working with him to take Debian support to a higher level.

wzzrd
1

In the vein of pull-based configuration systems like Puppet, there are also Bcfg2 and Cfengine. One or the other of those might suit your needs well. I'm rolling out Bcfg2 in my lab right now.

Phil Miller
1

A solution could be provided by func.

drAlberT
  • I wouldn't do func. It's *way* too immature for production use, though I admit it does show promise. – wzzrd Oct 29 '09 at 08:44
  • func is used by Cobbler; it is not immature, IMHO. Cobbler is used heavily by RH specialists, and those technologies are going to be included in the next RHEL release. It may not be "formally" production ready, but it is quite close in practice. – drAlberT Oct 29 '09 at 09:28
0

I'm not sure what type of solution you are expecting. You probably know about cron jobs, but I wouldn't update systems blindly, as human intervention is needed (and that is why they pay you to do this, right?).

If you had completely identical systems you might consider using something like rsync to bring in the differences, but figuring out which files not to rsync could be difficult, and I wouldn't do this while services are running. At least the update scripts are set up to manage restarting the services and merging in configuration file differences.

Perhaps if you explain what the problem is with doing apt-get commands we could see what you want to avoid.

If the problem is bandwidth and time to download, perhaps you should set up one box to act as your local Debian repository. There are Debian guides on how to do that.
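As a sketch, once such a local mirror exists the clients just point their sources.list at it (the mirror host name here is hypothetical):

```
# /etc/apt/sources.list on a client, pointing at an internal mirror
deb http://mirror.internal.example/debian lenny main
deb http://mirror.internal.example/debian-security lenny/updates main
```

Every client then downloads packages over the LAN, and the mirror fetches each package from Debian's servers only once.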

Here are some tips on how to minimize the number of things you need to update.

When you install Debian, don't install the Desktop task unless you really need to run X on that console. Most servers do not need X installed. This significantly decreases the number of packages on the system, so there are fewer packages to update.

Check that sources.list includes only the repositories you really need. If you experimented with some repository and forgot about it, you might be bringing in updates you don't need or want.

If you have run into trouble blindly applying updates on a production server, be sure to consult the Debian upgrade guides when there is a major release upgrade (4.0 to 5.0). These go through very well if you follow the upgrade instructions. It isn't as simple as running apt-get dist-upgrade and walking away. The instructions even point out when to run aptitude rather than apt-get; there are small differences between them.

labradort
0

Do you know the tool "dancer's shell" (dsh)? I like it and I use it, but I don't know whether it copes with that many hosts. Maybe you could try...

http://www.netfort.gr.jp/~dancer/software/dsh.html.en

And it is in the Debian repository.
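For reference, dsh reads machine groups from plain-text files, one host per line; a hypothetical group file might look like:

```
# ~/.dsh/group/webservers -- one host per line (names are hypothetical)
web1.example.com
web2.example.com
```

You could then run `dsh -M -g webservers -c -- 'apt-get update && apt-get -y upgrade'`, where -M prefixes each output line with the machine name and -c runs the hosts concurrently (check your version's man page for the exact flags).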

Matthieu
-1

ClusterSSH. You log on to all servers and give them exactly the same commands, so you can also react to dialogs. If one server asks an extra question, just click on that one and it will be the only one that responds.

I've used it to upgrade 25 webservers from etch to lenny. Worked like a charm.

http://sourceforge.net/projects/clusterssh/
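For reference, ClusterSSH can read named host groups from a clusters file (host names here are hypothetical; check your version's documentation for the exact file location and syntax):

```
# /etc/clusters -- a tag followed by the hosts in that group
webservers web1.example.com web2.example.com web3.example.com
```

Running `cssh webservers` then opens one terminal per host plus a single input window that broadcasts your keystrokes to all of them.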

blauwblaatje
  • SSH agent actually dies if you try and do weird things like connecting to ~50 machines at once. Otherwise I like ClusterSSH, although it needs another level of grouping. – LapTop006 Oct 28 '09 at 14:00
-1

ClusterSSH is a good suggestion.

debmarshal isn't part of Debian yet; I'm not even sure it will become a package. It seems to be a completely different system with a specialized repository. As the speaker said, it is currently user-hostile, not user-friendly.

Spacewalk seems to be a clone of Red Hat Network, at least in the web interface. I've had bad results using Red Hat Network to update systems. One time it hung, for no apparent reason, and caused a service outage. I ran a yum update immediately afterwards and it handled things fine, so I can only assume the problem was something that barfed on the RHN side. The other thing I don't like about RHN updates is that you don't know when the update will happen, so you can't watch for issues.

labradort
  • -1 Untrue: RHN updates are not automatic unless you make them automatic. Apart from that: as someone who uses RHN on a daily basis, I have yet to see it barf on me. – wzzrd Oct 28 '09 at 15:16
  • I didn't say that RHN was automatic. But if you do set up updates from RHN, there is no telling when they are going to happen, so it feels the same. Your apparent luck does not undo my real experience with it failing and leaving users without service. Even yum update can fail. Anyone who thinks you can just update and walk away is not being careful or just isn't concerned because it isn't a production server (production = there are clients who depend on the services). – labradort Oct 28 '09 at 17:49