5

I am frequently logging in to remote servers using ssh to do standard administrative work. I have local bashrc/vimrc and various other configuration files that I would like to have available remotely. Often I only log into these remote servers once, so I don't want to leave a copy of my profile on these boxes, some of which are on customer sites.

I did consider some hack to have the remote server mount a FUSE WebDAV share, or some other way to mount a remote file system on the remote server for the duration of the session. However, this fails if the remote system doesn't have the necessary packages, or is firewalled off.

Are there any good solutions to this problem that are cross-distribution compatible (at least recent Fedora/RHEL/Ubuntu/Debian/CentOS) and don't interfere with or slow down the login process?

[EDIT]

I guess one of the other considerations is that I might be logging in with someone else's user account, so I don't want to make any persistent changes to that profile. Ideally I would just use some temporary profile for the session and then discard it at logoff. This might be getting into moon-on-a-stick territory ;-)

Tom

4 Answers

5

You can use ssh -t to run setup scripts, then a shell, then cleanup scripts. ssh -t lets you run commands while still allocating a terminal properly, so you can drop into one or more interactive shells in the middle.

Your setup script can wget/curl/scp a temporary home directory to something like $HOME/tmphome, then run a script like this to start a shell there:

#!/bin/sh

HOME="$HOME/tmphome"
cd "$HOME"
bash --login

This should do a pretty good job of isolating your rc files in tmphome, and ssh -t will skip the user's own bashrc. As long as your environment is lightweight, it shouldn't take long to copy.

Your command might be something like ssh -t user@host 'wget http://server/tmphome.tar.gz && tar -zxvf tmphome.tar.gz && rm tmphome.tar.gz && tmphome/shell.sh'
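For the cleanup half, the wrapper script itself can discard the temporary home when the inner shell exits. A minimal sketch of such a shell.sh, under the same paths as above (the mkdir merely stands in for unpacking tmphome.tar.gz so the sketch is self-contained):

```shell
#!/bin/sh
# hypothetical shell.sh with cleanup bolted on: run a login shell inside the
# unpacked temp home, then discard the whole directory when the shell exits
TMPHOME="$HOME/tmphome"
mkdir -p "$TMPHOME"        # normally created by unpacking tmphome.tar.gz
HOME="$TMPHOME"
export HOME
cd "$HOME" || exit 1
bash --login               # your session; rc files come from tmphome only
cd /
rm -rf "$TMPHOME"          # leave nothing behind at logoff
[ ! -d "$TMPHOME" ] && echo "temp profile discarded"
```

Because the rm runs after bash returns, logging out of that inner shell is what triggers the cleanup; nothing persists on the box afterwards.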

lunixbochs
2

Why don't you use a distributed version control system like git to store your configuration files? That's what I do, and it works like a charm.
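The clone-use-discard cycle this suggests could look roughly like the sketch below. The repository here is fabricated locally (and DOTFILES_REPO is a stand-in name) purely so the round trip runs without a real server; in practice you would clone your actual dotfiles repo:

```shell
#!/bin/sh
# sketch of the git approach; the demo repo is fabricated so the round trip
# runs self-contained -- point DOTFILES_REPO at your real repository instead
DOTFILES_REPO="${DOTFILES_REPO:-/tmp/dotfiles-demo.git}"

# demo scaffolding: build a tiny bare dotfiles repo to clone from
if [ ! -d "$DOTFILES_REPO" ]; then
    scratch=$(mktemp -d)
    git -C "$scratch" init -q
    echo 'alias ll="ls -l"' > "$scratch/.bashrc"
    git -C "$scratch" add .bashrc
    git -C "$scratch" -c user.email=me@example.com -c user.name=me \
        commit -qm 'initial dotfiles'
    git clone -q --bare "$scratch" "$DOTFILES_REPO"
    rm -rf "$scratch"
fi

# the actual pattern: clone for the session, use, throw away at logoff
session=$(mktemp -d)
git clone -q "$DOTFILES_REPO" "$session/dotfiles"
grep -q 'alias ll' "$session/dotfiles/.bashrc" && echo "dotfiles fetched"
rm -rf "$session"            # nothing left behind
```

Deleting the clone at the end is what addresses the question's non-persistence requirement, though as the comments below note, it still assumes git is installed on the remote box.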

EEAA
Fred
  • That would kinda defeat the whole "I might only login once" part of the question; installing a DVCS, possibly popping open firewalls or whatever, and then pulling down a copy of your environment would probably take longer than just fixing the problem. – womble Jul 16 '11 at 22:51
  • I ended up doing this, running a .ssh/rc script which syncs my .bashrc file to the remote servers, and you are right, it works like a charm. However, it doesn't meet the original question's requirement of being non-persistent, so I can't really accept this answer. This is the answer to another question... ;-) – Tom Dec 21 '11 at 05:31
1

Many 'old-school' Unix admins store their common settings in a CVS repository. The first time they log in to a new system, they cvs checkout that repository and configure their .bashrc to do a cvs update to pull the current settings, then call ~/repo/bin/setup (where repo is the directory the checkout targeted, and setup is a script which adds ~/repo/bin to their $PATH, sets up aliases, etc.).

This method does leave your settings on the system, though that might not be a big issue in many cases.

You could of course substitute svn or git for cvs.
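The per-login hook described above might look like the following. Everything here is illustrative (the paths, the contents of the setup script), and the initial checkout is faked with mkdir/printf so the snippet runs without a CVS server:

```shell
#!/bin/sh
# illustrative sketch of the cvs pattern; the checkout is faked so the
# snippet is self-contained -- a real setup would come from cvs checkout
REPO="$HOME/repo"
mkdir -p "$REPO/bin"
printf 'PATH="$HOME/repo/bin:$PATH"\nexport PATH\n' > "$REPO/bin/setup"

# what ~/.bashrc would do on each login:
if command -v cvs >/dev/null 2>&1; then
    (cd "$REPO" && cvs -q update) >/dev/null 2>&1   # refresh current settings
fi
. "$REPO/bin/setup"                                  # extend PATH, aliases, ...
case ":$PATH:" in *":$HOME/repo/bin:"*) echo "setup sourced" ;; esac
```

Guarding the cvs update behind command -v keeps the login from breaking on boxes where cvs isn't installed, at the cost of silently running with stale settings there.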

Mike Insch
  • How many production systems have your particular VCS pre-installed? – womble Jul 16 '11 at 22:51
  • I would have thought that cvs would be available; failing that, scp/rcp or ftp/curl/wget could be used to retrieve the files. – Tom Jul 16 '11 at 23:20
1

My solution to this problem has been to learn to be comfortable with the common standards, and program my finger macros to turn on the (very few) optional features I just can't live without.

The problem is that for machines you'll only get into to fix a quick problem, the time required to set up your environment will probably exceed the time required to actually fix the problem. The issue is magnified somewhat by your stated desire to leave the system without your custom settings when you're done -- an admirable sentiment, since I've been bitten by an admin who couldn't live without readline's vi mode on the (shared) root shell. It made it damned hard to get anything done when you're used to the default keybindings.

I've been fairly lucky in more recent times; whenever I've been responsible for doing things to servers, they've been "mine", in the sense that I've got permanent administrative responsibility/authority, and I've just used my system automation tool to pre-configure machines how I like them.

For temporary access, if I were to go back to it and were in desperate need of my own little environment, I'd be inclined to develop two shell scripts:

  • One which I ran before I logged into a new server, which put in place all my desired config files (keeping a copy of what was there before), and possibly installed packages (suitable for the distro at hand), or at least reported on what was missing so I knew what I'd be hamstrung by on a remote server when I started working on it;
  • The other, which would clean up everything to the state it was in before I ran the first script: uninstall packages (you'd need to track exactly what you had installed, as opposed to what was already there before you arrived), and move the original config files back into place.
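The config-file half of that pair (ignoring package management) could be sketched as below, condensed into one script. TARGET is a hypothetical stand-in for the remote $HOME so the backup-and-restore round trip can be exercised against a scratch directory:

```shell
#!/bin/sh
# condensed sketch of the two scripts; TARGET stands in for the remote $HOME
TARGET="${TARGET:-$(mktemp -d)}"
BACKUP="$TARGET/.env-backup"
FILES=".bashrc .vimrc"

echo 'original bashrc' > "$TARGET/.bashrc"    # pretend the box already had one

# script 1: back up whatever we are about to clobber, then drop in our files
mkdir -p "$BACKUP"
for f in $FILES; do
    [ -e "$TARGET/$f" ] && cp -p "$TARGET/$f" "$BACKUP/$f"
    echo "# my custom $f" > "$TARGET/$f"
done

# ... the actual admin work happens here ...

# script 2: restore originals, delete anything we introduced ourselves
for f in $FILES; do
    if [ -e "$BACKUP/$f" ]; then
        mv "$BACKUP/$f" "$TARGET/$f"          # was there before: put it back
    else
        rm -f "$TARGET/$f"                    # we created it: remove it
    fi
done
rm -rf "$BACKUP"

grep -q 'original bashrc' "$TARGET/.bashrc" && [ ! -e "$TARGET/.vimrc" ] \
    && echo "environment restored"
```

The key bookkeeping is distinguishing "existed before" from "introduced by me", here encoded simply by whether a backup copy exists; the package-management side would need the same distinction and is where most of the cross-distribution pain would live.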

These scripts would be a fair amount of work to write and debug, especially across the foibles of different distributions. This is why I just learnt to get along with the sane defaults for any light work -- which is a good habit to get into in case you're part of a team in the future, and need to share an environment with other people (Mr. vi mode bash shell, I'm glaring at you).

womble