How to steal an SSH session when you’re root

It happens to me all the time: one of the developers reports some kind of problem that I can’t reproduce from my own account. Sometimes it’s a bad SSH key configuration, other times file permissions, mostly stuff like that. It’s often convenient to “step into someone’s shoes” and see what’s going on there.

If you’re root on the machine, you can do that like this:

su - developer

That’s the easy one, but it’s not enough for all cases. When you use a bastion host (or a similar solution), users sometimes have connection problems that are harder to check. If such a user has the ForwardAgent SSH option enabled, you can steal their session to debug the login problems. After you switch to that user, you may want to hide your tracks (it’s optional 😉 ) – disable history like this:

export HISTSIZE=0

Now you can steal the SSH session, but first check whether your dev is logged in:

$ ls -la /tmp/ | grep ssh
drwx------   2 root     root          4096 Apr 27 20:56 ssh-crYKv29798
drwx------   2 developer developer    4096 Apr 27 18:03 ssh-cVXFo28108

Export SSH_AUTH_SOCK with the path to the developer’s agent socket:
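A sketch of that step (the ssh-XXXX directory name comes from the /tmp listing above; the agent.&lt;pid&gt; socket name and the username are per-session assumptions):

```shell
# the ssh-XXXX directory comes from the /tmp listing;
# agent.<pid> inside it varies per session
DEV=developer   # assumed username from the listing above
SOCK=$(find /tmp -maxdepth 2 -type s -name 'agent.*' -user "$DEV" 2>/dev/null | head -n 1)
export SSH_AUTH_SOCK="$SOCK"
echo "SSH_AUTH_SOCK=$SSH_AUTH_SOCK"
```

You can verify the agent is reachable with `ssh-add -l`, which should list the developer’s forwarded keys.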


Finally, you can log in via SSH as the developer and see with their eyes what’s not working.

pip – uninstall a package with its dependencies

Virtualenvs in Python are cheap, but from time to time you will install something with pip on your system, and when the time comes, removing all this crap can be difficult. I found this bash snippet that uninstalls a package together with all its dependencies:

for dep in $(pip show python-neutronclient | grep Requires | sed 's/Requires: //g; s/,//g') ; do sudo pip uninstall -y $dep ; done
pip uninstall -y python-neutronclient
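The same snippet can be wrapped as a small reusable function (the name `pip_purge` is mine, not a pip command):

```shell
# uninstall a package together with its direct dependencies,
# as reported by `pip show` on the Requires: line
pip_purge() {
    pkg=$1
    for dep in $(pip show "$pkg" | grep Requires | sed 's/Requires: //g; s/,//g'); do
        pip uninstall -y "$dep"
    done
    pip uninstall -y "$pkg"
}
```

Usage: `pip_purge python-neutronclient`. Note this only removes direct dependencies, not the whole transitive tree, and doesn’t check whether another package still needs them.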


Daily MySQL backups with xtrabackup

I’d been using standard MySQL dumps as the backup technique on my VPS for a few years. It worked fine, and the backups were usable the few times I needed them. But in other places I’m using xtrabackup. It’s faster when creating backups and a lot faster when restoring them – they’re binary, so there is no need to replay all the SQL create tables/inserts/etc. Backups also include the my.cnf config file, so restoring on another machine should be easy.

Since I switched from MariaDB to Percona I have the Percona repos configured, so I’ll use the latest version of xtrabackup.

apt-get install -y percona-xtrabackup


xtrabackup requires a configured user to be able to make backups. One way is to write the user and password in plaintext in ~/.my.cnf. Another is using mysql_config_editor to generate a ~/.mylogin.cnf file with encrypted credentials. To be honest, I didn’t check what kind of security this encryption provides, but it feels better than keeping the password in plaintext.

I didn’t want to create a new user for this task – I just used the debian-sys-maint user. Check the password for this user like this:

grep password /etc/mysql/debian.cnf

Now create the encrypted file:

mysql_config_editor set --login-path=client --host=localhost --user=debian-sys-maint --password

Hit enter and paste the password. The .mylogin.cnf file should be created with binary content. We can verify it with:

# mysql_config_editor print 
user = debian-sys-maint
password = *****
host = localhost

Looks OK.


Now the backup script. I placed it directly in the cron.daily dir, e.g. /etc/cron.daily/zz-percona-backup, with this content:

#!/bin/sh

DIR=/backup/xtrabackup   # destination dir, same as in the restore examples below
DATE=`date +%F-%H%M%S`
DST=$DIR/$DATE.tar.xz

# this will produce directories with compressed files
# mkdir -p $DIR/$DATE
# xtrabackup --backup --compress --target-dir=$DIR/$DATE

# this will produce tar.xz archives
xtrabackup --backup --stream=tar | xz -9 > $DST

# delete backups older than 30 days
find $DIR -type f -mtime +30 -delete

I prefer to have a single archive per backup because I’m transferring those files to my NAS (for safety). But for local backups, directories are more convenient and faster to restore. Also, the tar archives have to be extracted with the -i option.


Now the restore. The first time I saw it, it scared me a little, but it all worked fine and without problems…

service mysql stop
rm -rf /var/lib/mysql
mkdir /var/lib/mysql

Now prepare the backup. If you used directory backups, it’s easy, e.g.:

xtrabackup --decompress --target-dir=/backup/xtrabackup/2016-03-14-214233
xtrabackup --prepare --target-dir=/backup/xtrabackup/2016-03-14-214233
xtrabackup  --copy-back --target-dir=/backup/xtrabackup/2016-03-14-214233

But if you used tar archives, it’s a little messier… You have to create a temporary dir and extract the archive there:

mkdir /tmp/restore
tar -xvif /backup/xtrabackup/2016-03-14-214233.tar.xz -C /tmp/restore
xtrabackup --prepare --target-dir=/tmp/restore
xtrabackup  --copy-back --target-dir=/tmp/restore

We have to fix the ownership of the restored files, and then the database may be started:

chown -R mysql:mysql /var/lib/mysql
service mysql start

If your backup is huge, you should reorder the commands so that the database is shut down only after the backup is decompressed.
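For the tar-archive restore, the reordered sequence might look like this (same example paths as above), keeping the database up during the slow decompress/prepare phase:

```shell
# decompress and prepare while MySQL is still running
mkdir /tmp/restore
tar -xvif /backup/xtrabackup/2016-03-14-214233.tar.xz -C /tmp/restore
xtrabackup --prepare --target-dir=/tmp/restore

# only now take the downtime window
service mysql stop
rm -rf /var/lib/mysql
mkdir /var/lib/mysql
xtrabackup --copy-back --target-dir=/tmp/restore
chown -R mysql:mysql /var/lib/mysql
service mysql start
```

This is a sketch of the reordering, not a tested script – the downtime shrinks to just the copy-back step.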


Use bastion host with Ansible

When you deploy your application in the cloud, you don’t need and don’t want your hosts exposed via SSH to the world. Malware scans whole networks for easy SSH access and, when it finds something, tries brute-force attacks, overloading such machines. It’s easier to have one exposed but secured host that doesn’t host anything itself and is used as a proxy/gateway to access our infrastructure – it’s called a bastion host.

Ansible is quite easy to integrate with a bastion host setup. We will need a custom ansible.cfg and an ssh_config file. So let’s start with ssh_config:

Host bastion
  User ubuntu
  IdentityFile ~/.ssh/id_rsa
  PasswordAuthentication no
  ForwardAgent yes
  ServerAliveInterval 60
  TCPKeepAlive yes
  ControlMaster auto
  ControlPath ~/.ssh/ansible-%r@%h:%p
  ControlPersist 15m
  ProxyCommand none
  LogLevel QUIET

Host *
  User ubuntu
  IdentityFile ~/.ssh/id_rsa
  ServerAliveInterval 60
  TCPKeepAlive yes
  ProxyCommand ssh -q -A ubuntu@bastion nc %h %p
  LogLevel QUIET
  StrictHostKeyChecking no

Now I will describe what the most important options mean. For the bastion:

  • User – I’m using an Ubuntu cloud image as the bastion host with its default user. Never use root here – you don’t need it,
  • ForwardAgent yes – we want to forward our SSH keys through the bastion to the destination hosts,
  • ServerAliveInterval 60 – this works like a keepalive: ssh will send small ping/pong packets every 60 seconds so your connection won’t hang/terminate after long idle time,
  • ControlMaster auto – we will open one connection to the bastion host and multiplex the other SSH connections through it; the connection stays open for the ControlPersist time,
  • ControlPath – this has to be configured the same way as in ansible.cfg,
  • ProxyCommand none – we’re setting ProxyCommand for all hosts, but we need it disabled for the bastion itself,

Default host configuration:

  • ProxyCommand ssh -q -A ubuntu@bastion nc %h %p – this is what does all the magic: it pipes your SSH connection through the bastion to the destination host,
  • StrictHostKeyChecking no – this option shouldn’t be there in production, but it’s useful at the beginning, when you create and destroy machines a few times before everything is tested. Normally recreating machines would trigger warnings about changed SSH host keys, but you’re aware of that – you just recreated those machines.

I’ve found examples without netcat but was unable to get them working – this one worked really well for me.

To test whether the connections work, use this configuration like so:

ssh -F ssh_config bastion
ssh -F ssh_config <internal-host>

And now ansible.cfg:


[ssh_connection]
ssh_args = -F ./ssh_config -o ControlMaster=auto -o ControlPersist=5m -o LogLevel=QUIET
control_path = ~/.ssh/ansible-%%r@%%h:%%p

The most important part here is ssh_args, where we point to the ssh_config file in the current dir with the -F option. I also had to re-enter the multiplexing configuration here – it wasn’t working with the ssh-only configuration. The control_path option has to use the same path as ssh_config (the % signs are escaped as %%).

You should now be able to run ansible/ansible-playbook commands normally – all traffic will be forwarded through the bastion.
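For example, a quick connectivity check through the bastion (the inventory file name here is an assumption):

```shell
# run from the directory containing ansible.cfg and ssh_config,
# so ansible picks up the custom ssh_args
ansible all -i inventory -m ping
```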

Now is a good time to install fail2ban on the bastion, and maybe reconfigure it to run SSH on some crazy high port 🙂