I can use the ssh configuration file to enable the forwarding of ssh keys added to ssh-agent. How can I do the same with gpg keys?
EDIT: This answer is obsolete now that proper support has been implemented in OpenSSH; see Brian Minton's answer.
SSH is only capable of forwarding TCP connections within the tunnel. You can, however, use a program like socat to relay the Unix socket over TCP, with something like this (you will need socat on both the client and the server hosts):

```shell
# Get the path of the gpg-agent socket:
GPG_SOCK=$(echo "$GPG_AGENT_INFO" | cut -d: -f1)

# Forward a local TCP port to the agent socket:
(while true; do
    socat TCP-LISTEN:12345,bind=127.0.0.1 UNIX-CONNECT:$GPG_SOCK
done) &

# Connect to the remote host via SSH, forwarding the TCP port:
ssh -R12345:localhost:12345 host.example.com
```

```shell
# (On the remote host)
(while true; do
    socat UNIX-LISTEN:$HOME/.gnupg/S.gpg-agent,unlink-close,unlink-early TCP4:localhost:12345
done) &
```
Test whether it works with gpg-connect-agent. Make sure that GPG_AGENT_INFO is undefined on the remote host, so that it falls back to the $HOME/.gnupg/S.gpg-agent socket.
Now hopefully all you need is a way to run all this automatically!
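A quick way to check the relay from the remote side (a sketch; it assumes gpg-connect-agent is installed there and the socat relays above are running):

```shell
# On the remote host: make sure gpg falls back to the socat-created socket
unset GPG_AGENT_INFO
# Ask the (forwarded) agent for its version; /bye ends the session cleanly
gpg-connect-agent 'GETINFO version' /bye
```

If the relay works, this prints the version of the gpg-agent running on your local machine, not the remote one.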
@JonasWielicki - the fpaste.org link is now broken. Can you provide your script via a new fpaste.org link or better yet as an A to this Q? – slm – 2014-08-19T13:12:06.757
@slm I used fpaste as it was an informational comment. In fact, I cannot even recall what it was, although the comments make me believe that it was a simple socat-like utility binding to localhost and forwarding the traffic between the tcp and the unix socket. – Jonas Schäfer – 2014-08-22T09:29:42.650
Well the ssh agent keys are forwarded automatically when the forwarding is set in the configuration file. I will try this out. – txwikinger – 2010-07-19T14:18:51.183
You're right, ssh-agent uses a unix socket too, but has special support for it (little bit tired here :) Nevertheless, the solution should still work. – b0fh – 2010-07-19T14:32:26.637
For this solution, my gpg-agent would be publicly accessible via port 12345 if I were not behind a firewall/NAT. This should be mentioned in the answer please. – Jonas Schäfer – 2012-04-30T14:46:41.967
I'm guessing your last edit fixed that issue, Jonas? It's only binding to localhost now. – jmtd – 2012-05-01T08:19:46.757
This fails for me with the following error from the remote host's gpg-connect-agent: `can't connect to server: ec=31.16383 gpg-connect-agent: error sending RESET command: Invalid value passed to IPC`. The remote socat then dies. The local socat dies and utters `socat[24692] E connect(3, AF=1 "", 2): Invalid argument`. This page leads me to believe that this will never work, because the agent doesn't store the key (just the passphrase). Has this been confirmed to work by anyone?
@jmtd yes, this fixes the privacy issue. However, I was unable to get it to work with socat, which is why I hacked up a python script which does the trick: http://fpaste.org/Um0D/ (this may need improvement). Other issues I had with socat were lingering TCP sockets and stuff. – Jonas Schäfer – 2012-05-01T13:25:48.157
OpenSSH's new Unix domain socket forwarding can do this directly starting with version 6.7. You should be able to do something like:

```shell
ssh -R /home/bminton/.gnupg/S.gpg-agent:/home/bminton/.gnupg/S.gpg-agent -o "StreamLocalBindUnlink=yes" -l bminton 192.168.1.9
```
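The same forwarding can be made persistent in ~/.ssh/config. This is a sketch using the paths and host from the command above; the `gpgbox` alias is made up, and the socket paths should be whatever `gpgconf --list-dirs` reports on your systems:

```
Host gpgbox
    HostName 192.168.1.9
    User bminton
    RemoteForward /home/bminton/.gnupg/S.gpg-agent /home/bminton/.gnupg/S.gpg-agent
    StreamLocalBindUnlink yes
```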
@DrewR. Glad to hear that. – Brian Minton – 2015-06-01T19:58:42.567
I found a required critical detail: on the remote (private key-less) machine, the public key of the signing identity must be present. Local gpg version 2.1.15 OS X, remote 2.1.11 linux. – phs – 2016-10-06T04:06:09.843
In new versions of GnuPG or Linux distributions the paths of the sockets can change. These can be found out via

```shell
$ gpgconf --list-dirs agent-extra-socket
```

and

```shell
$ gpgconf --list-dirs agent-socket
```

Then add these paths to your SSH configuration:

```
Host remote
    RemoteForward <remote socket> <local socket>
```

Quick solution for copying the public keys:

```shell
scp .gnupg/pubring.kbx remote:~/.gnupg/
```

On the remote machine, activate the GPG agent:

```shell
echo use-agent >> ~/.gnupg/gpg.conf
```

On the remote machine, also modify the SSH server configuration (/etc/ssh/sshd_config) and add this parameter:

```
StreamLocalBindUnlink yes
```

Restart the SSH server and reconnect to the remote machine - then it should work.
A more detailed tutorial including some troubleshooting can be found here: https://mlohr.com/gpg-agent-forwarding/
– MaLo – 2018-06-07T07:39:26.707
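Putting the pieces together: on a typical modern Debian-style system (uid 1000 assumed here; substitute whatever the two gpgconf commands print on your machines), the resulting configuration entry could look like:

```
Host remote
    RemoteForward /run/user/1000/gnupg/S.gpg-agent /run/user/1000/gnupg/S.gpg-agent.extra
```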
In case the remote host runs a current version of Debian, it seems running `systemctl --global mask --now gpg-agent.service gpg-agent.socket gpg-agent-ssh.socket gpg-agent-extra.socket gpg-agent-browser.socket` is required to prevent systemd from launching a socket-stealing remote gpg-agent. According to https://bugs.debian.org/850982 this is the intended behavior.
I had to do the same, and based my script on the solution by b0fh, with a few tiny modifications: it traps exits and kills background processes, and it uses the "fork" and "reuseaddr" options to socat, which saves you the loop (and makes the background socat cleanly killable). The whole thing sets up all forwards in one go, so it probably comes closer to an automated setup.
Note that on the remote host, you will need the GPG_AGENT_INFO variable. I prefill mine with ~/.gnupg/S.gpg-agent:1:1 - the first 1 is a PID for the gpg agent (I fake it as "init"'s, which is always running), the second is the agent protocol version number. This should match the one running on your local machine.
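The format described above can be illustrated quickly; this is a sketch with a made-up socket path, just to show what the `cut` invocation in the script extracts:

```shell
# Hypothetical example value; a real agent exports this automatically.
# Field 1: socket path, field 2: agent PID, field 3: protocol version.
GPG_AGENT_INFO="/tmp/gpg-XXXXXX/S.gpg-agent:1:1"
GPG_SOCK=$(echo "$GPG_AGENT_INFO" | cut -d: -f1)
echo "$GPG_SOCK"    # -> /tmp/gpg-XXXXXX/S.gpg-agent
```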
```shell
#!/bin/bash -e
FORWARD_PORT=${1:-12345}

trap '[ -z "$LOCAL_SOCAT" ] || kill -TERM $LOCAL_SOCAT' EXIT

GPG_SOCK=$(echo "$GPG_AGENT_INFO" | cut -d: -f1)
if [ -z "$GPG_SOCK" ] ; then
    echo "No GPG agent configured - this won't work out." >&2
    exit 1
fi

socat TCP-LISTEN:$FORWARD_PORT,bind=127.0.0.1,reuseaddr,fork UNIX-CONNECT:$GPG_SOCK &
LOCAL_SOCAT=$!

ssh -R $FORWARD_PORT:127.0.0.1:$FORWARD_PORT socat 'UNIX-LISTEN:$HOME/.gnupg/S.gpg-agent,unlink-close,unlink-early,fork,reuseaddr TCP4:localhost:$FORWARD_PORT'
```
I believe there's also a solution that involves just one SSH command invocation (connecting back from the remote host to the local one) using -o LocalCommand, but I couldn't quite figure out how to conveniently kill that upon exit.
Aren't you missing some 'user@host' argument before socat, in the last command? Anyhow even after fixing that, this fails for me with "socat[6788] E connect(3, AF=2 127.0.0.1:0, 16): Connection refused" popping up locally, when trying gpg-connect-agent remotely. – David Faure – 2016-08-07T18:56:41.687
According to the GnuPG Wiki, you have to forward your local socket S.gpg-agent.extra to the remote socket S.gpg-agent. Furthermore, you need to enable StreamLocalBindUnlink on the server. Keep in mind that you also need the public part of your key available in the remote GnuPG. Use gpgconf --list-dirs agent-socket and gpgconf --list-dirs agent-extra-socket respectively to get the actual paths.
Added configuration on the remote /etc/ssh/sshd_config:

```
StreamLocalBindUnlink yes
```

Import your public key on the remote:

```shell
gpg --export <your-key> >/tmp/public
scp /tmp/public <remote-host>:/tmp/public
ssh <remote-host> gpg --import /tmp/public
```

Command to connect through SSH with gpg-agent forwarding enabled (paths for my Debian):

```shell
ssh -R /run/user/1000/gnupg/S.gpg-agent:/run/user/1000/gnupg/S.gpg-agent.extra <remote-host>
```
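The export/copy/import steps can also be collapsed into a single pipeline, avoiding the temporary file (same placeholders as above):

```shell
# Export the public key locally and import it on the remote in one go
gpg --export <your-key> | ssh <remote-host> gpg --import
```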
@brian minton: It does not work for me if not forwarding to the extra socket. – doak – 2018-06-06T12:48:52.587
As an alternative to modifying /etc/ssh/sshd_config with StreamLocalBindUnlink yes, you can instead prevent the creation of the socket files that need replacing:

```shell
systemctl --global mask --now \
    gpg-agent.service \
    gpg-agent.socket \
    gpg-agent-ssh.socket \
    gpg-agent-extra.socket \
    gpg-agent-browser.socket
```
Note that this affects all users on the host.
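If you later want the sockets back (for example, to run a local gpg-agent on that host again), the masking can be reverted; a sketch:

```shell
# Undo the mask and let systemd manage the units again
systemctl --global unmask \
    gpg-agent.service \
    gpg-agent.socket \
    gpg-agent-ssh.socket \
    gpg-agent-extra.socket \
    gpg-agent-browser.socket
```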
Bonus: How to test that GPG agent forwarding is working:

```shell
ssh -v -o RemoteForward=${remote_sock}:${local_sock} ${REMOTE}
```

- ${remote_sock} is shown in the verbose output from ssh
- ls -l ${remote_sock}
- gpg --list-secret-keys
- debug1 messages from ssh showing the forwarded traffic

If that doesn't work (as it didn't for me) you can trace which socket GPG is accessing:

```shell
strace -econnect gpg --list-secret-keys
```
Sample output:

```
connect(5, {sa_family=AF_UNIX, sun_path="/run/user/14781/gnupg/S.gpg-agent"}, 35) = 0
```
In my case the path being accessed perfectly matched ${remote_sock}, but that socket was not created by sshd when I logged in, despite adding StreamLocalBindUnlink yes to my /etc/ssh/sshd_config. It was created by systemd upon login.

(Note I was too cowardly to restart sshd, since I've no physical access to the host right now. service reload sshd clearly wasn't sufficient...)
Tested on Ubuntu 16.04
Both answers suggest running socat to expose the GPG agent unix socket on a tcp port. However, unlike unix sockets, TCP ports do not have the same level of access control. In particular, every user on the same host can now connect to your GPG agent. This is probably ok if you have a single-user laptop, but if any other users can also log into the same system (the system where the GPG agent is running), they can also access your GPG agent, posing a significant security problem. Letting socat directly start SSH using the EXEC address type is probably the best way to fix this. – Matthijs Kooijman – 2014-08-04T10:02:51.250
For another presentation of the openssh 6.7+ solution, see https://2015.rmll.info/IMG/pdf/an-advanced-introduction-to-gnupg.pdf – phs – 2016-10-05T21:21:44.427

This was useful to me. – phs – 2016-12-05T18:54:53.427