When you deploy your application in the cloud, you don't need (and don't want) your hosts exposed to the world via SSH. Malware constantly scans whole networks for easy SSH access and, when it finds something, launches brute-force attacks that can overload such machines. It's easier to expose a single, well-secured host that doesn't run anything itself and is used only as a proxy/gateway into your infrastructure - it's called a bastion host.

Ansible is quite easy to integrate with a bastion host setup. We will need a custom ansible.cfg and an ssh_config file. Let's start with ssh_config:

Host bastion
  Hostname ip.xxx.xxx.xxx.xxx.or.host.name
  User ubuntu
  IdentityFile ~/.ssh/id_rsa
  PasswordAuthentication no
  ForwardAgent yes
  ServerAliveInterval 60
  TCPKeepAlive yes
  ControlMaster auto
  ControlPath ~/.ssh/ansible-%r@%h:%p
  ControlPersist 15m
  ProxyCommand none
  LogLevel QUIET

Host *
  User ubuntu
  IdentityFile ~/.ssh/id_rsa
  ServerAliveInterval 60
  TCPKeepAlive yes
  ProxyCommand ssh -q -A ubuntu@bastion nc %h %p
  LogLevel QUIET
  StrictHostKeyChecking no

Now I will describe what the most important options mean. For the bastion:

  • User - I'm using an Ubuntu cloud image as the bastion host, with its default user. Never use root here - you don't need it.
  • ForwardAgent yes - we want to forward our SSH keys through the bastion to the destination hosts.
  • ServerAliveInterval 60 - a keepalive: ssh will send small ping/pong packets every 60 seconds so your connection won't hang or terminate after being idle for a long time.
  • ControlMaster auto - we open one connection to the bastion host and multiplex the other SSH connections through it; the connection stays open for the ControlPersist time.
  • ControlPath - this has to match the configuration in ansible.cfg.
  • ProxyCommand none - we're setting ProxyCommand for all hosts, but we need it disabled for the bastion itself.
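With multiplexing enabled you can also inspect and tear down the shared master connection by hand. A quick sketch using OpenSSH's standard -O control commands (assuming the ssh_config above is in the current directory):

ssh -F ssh_config -O check bastion   # is a master connection currently running?
ssh -F ssh_config -O exit bastion    # close the master and remove its socket

This is handy when you change the bastion configuration and want to force a fresh connection instead of waiting out the ControlPersist timeout.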

Default hosts configuration:

  • ProxyCommand ssh -q -A ubuntu@bastion nc %h %p - this is what makes all the magic happen: it pipes your SSH connection through the bastion to the destination host.
  • StrictHostKeyChecking no - this option shouldn't be there in production, but it's useful at the beginning, when you create and destroy machines a few times before everything is tested. Normally recreating a machine triggers warnings about changed SSH host keys, but you're aware of that - you just recreated those machines.

I've found examples without netcat but was unable to get them working - this one worked really well for me.
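For reference, those netcat-free variants rely on OpenSSH's stdio forwarding, which needs a reasonably recent client; that version requirement may be why they don't work everywhere. A hedged sketch (-W needs OpenSSH 5.4+, ProxyJump needs 7.3+):

Host *
  # stdio forwarding - no nc required on the bastion
  ProxyCommand ssh -q -A -W %h:%p ubuntu@bastion
  # or, on OpenSSH 7.3+, simply:
  # ProxyJump ubuntu@bastion

If your bastion image ships a recent OpenSSH, ProxyJump is the least fragile option; otherwise the nc variant above remains a solid fallback.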

To test that the connections work, use this configuration like so:

ssh -F ssh_config bastion
ssh -F ssh_config other.host.behind.bastion

And now ansible.cfg:

[defaults]
forks=20

[ssh_connection]
ssh_args = -F ./ssh_config -o ControlMaster=auto -o ControlPersist=5m -o LogLevel=QUIET
control_path = ~/.ssh/ansible-%%r@%%h:%%p
pipelining=True

The most important part here is ssh_args, where the -F option points to the ssh_config file in the current directory. I also had to re-enter the multiplexing configuration here - it wasn't working with the ssh-only configuration. The control_path option has to use the same path as ssh_config (the % signs are escaped as %%).

You should now be able to run ansible/ansible-playbook commands normally - all traffic will be forwarded through the bastion.
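As a quick smoke test, an ad-hoc ping against a sketch inventory works well (the host names below are placeholders, not part of the setup above):

# hosts - example inventory
[web]
app1.internal.example
app2.internal.example

# run from the directory containing ansible.cfg and ssh_config
ansible -i hosts all -m ping

If the ping module returns "pong" for every host, the bastion forwarding and multiplexing are working end to end.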

Now is a good time to install fail2ban on the bastion, and maybe reconfigure SSH to run on a crazy high port 🙂
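A minimal sketch of that hardening, assuming a Debian/Ubuntu-style fail2ban using a jail.local override (the port number is just an example):

# /etc/ssh/sshd_config on the bastion
Port 22022

# /etc/fail2ban/jail.local
[sshd]
enabled  = true
port     = 22022
maxretry = 5
bantime  = 3600

Remember to add Port 22022 to the bastion's Host entry in ssh_config as well, or the ProxyCommand will keep trying port 22.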

Sources

http://alexbilbie.com/2014/07/using-ansible-with-a-bastion-host/
http://blog.scottlowe.org/2015/12/24/running-ansible-through-ssh-bastion-host/
https://en.wikibooks.org/wiki/OpenSSH/Cookbook/Multiplexing