Nagios – downtime on host/service from command line with curl

Sometimes a deployment or other heavy task may push some Nagios checks outside their normal levels and bother the admin. If this is expected and you want to schedule downtime on a host/service for the duration of the task, you can use this script:

#!/bin/bash

function die {
  echo "$1";
  exit 1;
}

if [[ $# -eq 0 ]] ; then
    die "Give hostname and time in minutes as parameter!"
fi

if [[ $# -eq 1 ]] ; then
    MINUTES=15
else
    MINUTES=$2
fi

HOST=$1
NAGURL=http://nagios.example.com/nagios/cgi-bin/cmd.cgi
USER=nagiosuser
PASS=nagiospassword
SERVICENAME=someservice
COMMENT="Deploying new code"

export MINUTES

echo "Scheduling downtime on $HOST for $MINUTES minutes..."

# Start time in the date format accepted by this Nagios (see tips below);
# the commented-out line is an ISO-style alternative
#STARTDATE=`date "+%Y-%m-%d %H:%M:%S"`
STARTDATE=`date "+%d-%m-%Y %H:%M:%S"`
# End time: the date/time X minutes from now
#ENDDATE=`date "+%Y-%m-%d %H:%M:%S" -d "$MINUTES min"`
ENDDATE=`date "+%d-%m-%Y %H:%M:%S" -d "$MINUTES min"`
curl --silent --show-error \
    --data cmd_typ=56 \
    --data cmd_mod=2 \
    --data "host=$HOST" \
    --data-urlencode "service=$SERVICENAME" \
    --data-urlencode "com_data=$COMMENT" \
    --data trigger=0 \
    --data-urlencode "start_time=$STARTDATE" \
    --data-urlencode "end_time=$ENDDATE" \
    --data fixed=1 \
    --data hours=2 \
    --data minutes=0 \
    --data btnSubmit=Commit \
    --insecure \
    $NAGURL -u "$USER:$PASS" | grep -q "Your command request was successfully submitted to Nagios for processing." || die "Failed to contact nagios"

echo "Scheduled downtime on Nagios from $STARTDATE to $ENDDATE"

Treat this script as a template; a few tips:

  • If you want to add downtime on a single service, provide SERVICENAME and keep --data cmd_typ=56 \.
  • If you want downtime on the whole host, remove the line --data-urlencode "service=$SERVICENAME" \ and use --data cmd_typ=86 \ instead (see the sketch after this list).
  • In my example the Nagios page uses basic auth; if yours doesn't, you can drop -u "$USER:$PASS" from the parameters.
  • If you get "Start or end time not valid", adapt the date format to whatever your Nagios accepts (this probably depends on the Nagios version or timezone configuration).
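For reference, here is a minimal sketch of the host-wide variant built from those tips (cmd_typ changed to 86 and the service parameter dropped); everything else is assumed to stay exactly as in the script above:

# Host-wide downtime sketch: cmd_typ=86, no "service=" parameter
curl --silent --show-error \
    --data cmd_typ=86 \
    --data cmd_mod=2 \
    --data "host=$HOST" \
    --data-urlencode "com_data=$COMMENT" \
    --data trigger=0 \
    --data-urlencode "start_time=$STARTDATE" \
    --data-urlencode "end_time=$ENDDATE" \
    --data fixed=1 \
    --data btnSubmit=Commit \
    --insecure \
    $NAGURL -u "$USER:$PASS"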

Source:
http://stackoverflow.com/questions/6842683/how-to-set-downtime-for-any-specific-nagios-host-for-certain-time-from-commandli

Grafana – installation and configuration with InfluxDB and CollectD on Debian/Ubuntu

Now that you have CollectD and InfluxDB installed, you can configure Grafana 🙂

First configure the repository with the current Grafana version (select your distro):

curl https://packagecloud.io/gpg.key | sudo apt-key add -
deb https://packagecloud.io/grafana/testing/debian/ wheezy main
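The deb line itself has to end up in your APT sources; for example (the file name here is just my choice, any sources.list location will do):

echo "deb https://packagecloud.io/grafana/testing/debian/ wheezy main" > /etc/apt/sources.list.d/grafana.list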

Now install the package (on wheezy I needed to install apt-transport-https to allow installing packages from the repo over HTTPS):

apt-get update
apt-get install -y apt-transport-https
apt-get install -y grafana
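If Grafana is not started automatically after installation, enable and start the service (grafana-server is the init script name installed by the package):

update-rc.d grafana-server defaults
service grafana-server start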

By default Grafana will use an sqlite3 database to keep information about users, etc. (see the [database] section of /etc/grafana/grafana.ini):

[database]
# Either "mysql", "postgres" or "sqlite3", it's your choice
;type = sqlite3
;host = 127.0.0.1:3306
;name = grafana
;user = root
;password =

If that's OK for you, you can leave it as is. I prefer to configure a MySQL database: create a user and a database and grant the user permissions (a sketch follows the config below):

[database]
type = mysql
host = 127.0.0.1:3306
name = grafana
user = grafana
password = mydbpassword
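Creating that database and user could look roughly like this (an assumed sketch, not from the original post; names and password must match the config above):

mysql -u root -p <<'SQL'
CREATE DATABASE grafana CHARACTER SET utf8;
-- adjust the host part if Grafana connects from another machine
CREATE USER 'grafana'@'localhost' IDENTIFIED BY 'mydbpassword';
GRANT ALL PRIVILEGES ON grafana.* TO 'grafana'@'localhost';
FLUSH PRIVILEGES;
SQL

After changing the [database] section, restart Grafana (service grafana-server restart) so it picks up the new backend.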

Grafana should be running on port 3000 by default, so now it's time to connect, e.g. http://localhost:3000 (use your host). Click Data sources in the left panel, then Add new in the top panel and fill in the source data like below:

Grafana add data source

Because we didn't set up authorization for InfluxDB, you can type whatever login/password there. Now Test Connection and Save, and you should be ready to play with Grafana.

I also used a scripted dashboard for Grafana to easily add statistics for my hosts; you can find it here: https://github.com/anryko/grafana-influx-dashboard

Source:
http://docs.grafana.org/installation/debian/

InfluxDB – installation and configuration on Debian/Ubuntu

I wanted/needed some statistics for a few of my machines. I had seen Grafana earlier and was impressed, so that was the starting point. Then I started reading about Graphite, Carbon and Whisper, and then… I found InfluxDB. The project is young but looks promising.

Let's start! On the project page there is no info about the repo, but it is available; configure it:

curl -sL https://repos.influxdata.com/influxdb.key | apt-key add -
echo "deb https://repos.influxdata.com/debian wheezy stable" > /etc/apt/sources.list.d/influxdb.list

For Ubuntu use a URL like this (selecting your version, of course):

echo "deb https://repos.influxdata.com/ubuntu wily stable" > /etc/apt/sources.list.d/influxdb.list

Now install the package; remember to run apt-get update after adding the repo (on wheezy I also needed to install apt-transport-https to allow installing packages from the repo over HTTPS):

apt-get install -y apt-transport-https
apt-get update
apt-get install -y influxdb

Now edit /etc/influxdb/influxdb.conf and uncomment/fill the [collectd] section like this:

[collectd]
  enabled = true
  bind-address = ":8096"
  database = "collectd_db"
  typesdb = "/usr/share/collectd/types.db"

You may adjust the port to whatever suits you best. database sets the InfluxDB database used to store the collectd data, and typesdb points to a file from the collectd package that defines the structure of collectd metrics (this is its location on Debian), so collectd needs to be installed already, as described in the CollectD post.
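After editing the config, restart InfluxDB so the collectd listener comes up (the init script name comes from the Debian package):

service influxdb restart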

Now you can check whether InfluxDB is working by connecting to the web admin panel, which by default listens on port 8083.
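You can also poke the HTTP query API directly; 8086 is the default API port. A quick sanity check (note that depending on the InfluxDB version you may have to create collectd_db yourself, e.g. by running CREATE DATABASE collectd_db from the admin panel's query box):

curl -G 'http://localhost:8086/query' --data-urlencode 'q=SHOW DATABASES'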

Source:
https://github.com/influxdata/influxdb/issues/585
https://anomaly.io/collectd-metrics-to-influxdb/

CollectD – installation and configuration with InfluxDB on Debian/Ubuntu

I wanted/needed some statistics for a few of my machines. I had seen Grafana earlier and was impressed, so that was the starting point. Then I started reading about Graphite, Carbon and Whisper, and then… I found InfluxDB. The project is young but looks promising.

Installation of collectd is easy on Debian because the packages are in the default repo. One problem is that they may be old, e.g. on wheezy it is version 5.1. But in backports/backports-sloppy you can find the current 5.5, so enable backports first:

echo "deb http://http.debian.net/debian wheezy-backports main contrib non-free" > /etc/apt/sources.list.d/backports.list
echo "deb http://http.debian.net/debian wheezy-backports-sloppy main contrib non-free" >> /etc/apt/sources.list.d/backports.list

Install the package:

apt-get update
apt-get install -y -t wheezy-backports-sloppy collectd collectd-utils
# or on a recent system just
apt-get install -y collectd collectd-utils

Now edit the configuration in /etc/collectd/collectd.conf and add the network section:

LoadPlugin network

<Plugin "network">
  Server "localhost" "8096"
</Plugin>

Use your InfluxDB host and port here; they must match the [collectd] section configured on the InfluxDB side.

Now select and enable some plugins (the collectd wiki has the full list; a small example set is shown after the restart command), then restart the service:

service collectd restart
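For reference, a minimal plugin selection in /etc/collectd/collectd.conf might look like this (an assumed example, not from the original post; enable whatever suits your hosts):

LoadPlugin cpu
LoadPlugin memory
LoadPlugin load
LoadPlugin df
LoadPlugin interface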

That’s all – now install InfluxDB.

Source:
https://anomaly.io/collectd-metrics-to-influxdb/
http://backports.debian.org/Instructions/

Let’s Encrypt – without auto configuration

From the first moment I heard about Let's Encrypt I liked it and wanted to use it as soon as possible. But the more I read about how they want to implement it, the more I disliked it.
The current client with its automatic configuration is not something I want to use at all. I have many very complicated configs and I do not trust such tools enough to use them. I like UNIX's single-purpose principle: tools should do one thing and do it well – nothing more.

But there is one neat tool that uses only the Let's Encrypt API and leaves all configuration to me: acme-tiny, a Python-based script. I won't copy/paste examples; its documentation is written pretty well.