The way I’m doing this relies on a feature I wrote for Graphite that was only recently merged to trunk, so at the time of writing that feature isn’t in a stable release. Hopefully it’ll be in 0.9.10. Until then, you can at least test this setup using Graphite’s trunk version.
Oh yeah, the new feature is the ability to send graph images (not links) via email. I surfaced this feature in Graphite through the graph menus that pop up when you click on a graph in Graphite, but implemented it such that it’s pretty easy to call from a script (which I also wrote – you’ll see if you read the post).
Also, note that I assume you already know Nagios, how to install new command scripts, and all that. It’s really easy to figure this stuff out in Nagios, and it’s well-documented elsewhere, so I don’t cover anything here but the configuration of this new feature.
I’m not a huge fan of Nagios, to be honest. As far as I know, nobody really is. We all just use it because it’s there, and the alternatives are either overkill, unstable, too complex, or just don’t provide much value for all the extra overhead that comes with them (whether that’s config overhead, administrative overhead, processing overhead, or whatever depends on the specific alternative you’re looking at). So… Nagios it is.
One thing that *is* pretty nice about Nagios is that configuration is really dead simple. Another thing is that you can do pretty much whatever you want with it, and write code in any language you want to get things done. We’ll take advantage of these two features to actually do a couple of things:
Just to be clear, we’re going to set things up so you can get alert messages from Nagios with the relevant Graphite graph embedded right in the email.
And you’ll also be able to track those alert events in Graphite, where they show up as vertical lines overlaid on your graphs.
In production, it’s possible that the proper contacts and contact groups already exist. For testing (and maybe production), you might find that you want to limit who receives Graphite graphs in email notifications. To test things out, I defined a contact template in templates.cfg:
define contact{
        name                            graphite-contact
        service_notification_period     24x7
        host_notification_period        24x7
        service_notification_options    w,u,c,r,f,s
        host_notification_options       d,u,r,f,s
        service_notification_commands   notify-svcgraph-by-email
        host_notification_commands      notify-host-by-email
        register                        0
        }
You’ll notice a few things here: this is a template rather than a real contact (‘register 0’), notifications are wide open (every notification option, 24x7), and service notifications go out through a ‘notify-svcgraph-by-email’ command, which we’ll define in a bit.
In contacts.cfg, you can now define an individual contact that uses the graphite-contact template we just assembled:
define contact{
        contact_name    graphiteuser
        use             graphite-contact
        alias           Graphite User
        email           someone@example.com
        }
Of course, you’ll want to change the ‘email’ attribute here, even for testing.
Once done, you also want to have a contact group set up that contains this new ‘graphiteuser’, so that you can add users to the group to expand the testing, or evolve things into production. This is also done in contacts.cfg:
define contactgroup{
        contactgroup_name       graphiteadmins
        alias                   Graphite Administrators
        members                 graphiteuser
        }
Also for testing, you can set up a test service template. This is necessary here to bypass the default settings, which try not to bombard contacts with an email for every single aberrant check. Since the end result of this test is to see an email, we want an email for every check whose values are out of bounds in any way. In templates.cfg, put this:
define service{
        name                            test-service
        use                             generic-service
        passive_checks_enabled          0
        contact_groups                  graphiteadmins
        check_interval                  20
        retry_interval                  2
        notification_options            w,u,c,r,f
        notification_interval           30
        first_notification_delay        0
        flap_detection_enabled          1
        max_check_attempts              2
        register                        0
        }
Again, the key point here is to ensure that no notifications are ever silenced, deferred, or delayed by Nagios in any way, for any reason. You probably don’t want this in production. The other point is that alerts for any service that uses ‘test-service’ in its definition will go to our previously defined ‘graphiteadmins’.
To make use of this service, I’ve defined a service in ‘localhost.cfg’ that will require further explanation, but first let’s just look at the definition:
define service{
        use                     test-service
        host_name               localhost
        service_description     Some Important Metric
        _GRAPHURL               "http://graphite.example.com/render?width=800&from=-1hours&until=now&target=graphite.path.to.target"
        check_command           check_graphite_data!24!36
        notifications_enabled   1
        }
There are two new things we need to understand when looking at this definition: the custom ‘_GRAPHURL’ attribute, and the ‘check_graphite_data’ check command. Both are covered in the sections that follow.
In addition, you should know that the value for _GRAPHURL is intended to come straight from the Graphite dashboard. Go to your dashboard, pick a graph of a single metric, grab the URL for the graph, and paste it in (and double-quote it).
This command relies on a small script written by the folks at Etsy, which can be found on github: https://github.com/etsy/nagios_tools/blob/master/check_graphite_data
Here’s the commands.cfg definition for the command:
# 'check_graphite_data' command definition
define command{
        command_name    check_graphite_data
        command_line    $USER1$/check_graphite_data -u $_SERVICEGRAPHURL$ -w $ARG1$ -c $ARG2$
        }
The ‘command_line’ attribute calls the check_graphite_data script we got on github earlier. The ‘-u’ flag takes a URL, and here it’s actually using the custom object attribute ‘_GRAPHURL’ from our service definition. You can read more about custom object variables here: http://nagios.sourceforge.net/docs/3_0/customobjectvars.html. The short story is that, since we defined _GRAPHURL in a service definition, it gets prefixed with ‘SERVICE’, and the underscore in ‘_GRAPHURL’ moves to the front, giving you ‘$_SERVICEGRAPHURL$’. More on how that works at the link provided.
The ‘-w’ and ‘-c’ flags to check_graphite_data are the ‘warning’ and ‘critical’ thresholds, respectively, and they correspond to the positions of the arguments in the service definition’s ‘check_command’ (so check_graphite_data!24!36 maps to ‘check_graphite_data -u <url> -w 24 -c 36’).
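If you’re curious what a check like this does under the hood, here’s a minimal, hypothetical sketch of the general idea in Python. It is not the Etsy script; it just assumes the same basic approach: ask Graphite for raw data by appending ‘&rawData=true’ to the graph URL, take the most recent datapoint, compare it against the warning and critical thresholds, and exit with the standard Nagios status codes. The script name, the rawData parsing, and the ‘higher is worse’ threshold semantics are my assumptions here; use the real check_graphite_data for actual monitoring.

#!/usr/bin/env python
# Hypothetical sketch of a Graphite threshold check (NOT the Etsy check_graphite_data).
# Assumes Graphite's rawData output looks like: target,start,end,step|v1,v2,...,None
import sys
import urllib.request

OK, WARNING, CRITICAL, UNKNOWN = 0, 1, 2, 3   # standard Nagios plugin exit codes

def latest_value(graph_url):
    """Fetch raw data for the graph and return the most recent non-null value."""
    with urllib.request.urlopen(graph_url + "&rawData=true") as resp:
        first_line = resp.read().decode().splitlines()[0]
    _, values = first_line.split("|", 1)
    points = [float(v) for v in values.split(",") if v not in ("None", "")]
    return points[-1] if points else None

def main(url, warn, crit):
    value = latest_value(url)
    if value is None:
        print("UNKNOWN: no data returned for %s" % url)
        return UNKNOWN
    if value >= crit:          # assumed "higher is worse" semantics
        print("CRITICAL: current value %s >= %s" % (value, crit))
        return CRITICAL
    if value >= warn:
        print("WARNING: current value %s >= %s" % (value, warn))
        return WARNING
    print("OK: current value %s" % value)
    return OK

if __name__ == "__main__":
    # usage: ./check_sketch.py <graph_url> <warn> <crit>
    sys.exit(main(sys.argv[1], float(sys.argv[2]), float(sys.argv[3])))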
This command relies on a script that I wrote in Python called ‘sendgraph.py’, which also lives in github: https://gist.github.com/1902478
The script does two things: it emails the graph at the given URL to the contact’s address, and it records the alert back in Graphite, which is where those vertical-line alert events in the earlier graph come from.
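For a sense of what the emailing half involves, here’s a minimal, hypothetical sketch (it is not sendgraph.py itself): fetch the PNG that Graphite renders at the graph URL and attach it to an outgoing message. The function name, SMTP host, and sender address below are made up for illustration; the real script lives at the gist above and also takes care of recording the alert event in Graphite.

#!/usr/bin/env python
# Hypothetical sketch of emailing a Graphite graph image (NOT the real sendgraph.py).
import smtplib
import urllib.request
from email.mime.image import MIMEImage
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

def email_graph(graph_url, to_addr, service_desc, state,
                smtp_host="localhost", from_addr="nagios@example.com"):
    """Fetch the PNG Graphite renders for graph_url and mail it as an attachment."""
    png = urllib.request.urlopen(graph_url).read()   # the render URL returns a PNG by default

    msg = MIMEMultipart()
    msg["Subject"] = "%s: %s" % (state, service_desc)
    msg["From"] = from_addr
    msg["To"] = to_addr
    msg.attach(MIMEText("Current graph for %s (state: %s)" % (service_desc, state)))
    msg.attach(MIMEImage(png, _subtype="png", name="graph.png"))

    with smtplib.SMTP(smtp_host) as smtp:
        smtp.sendmail(from_addr, [to_addr], msg.as_string())

If you want to sanity-check the real script outside of Nagios first, you can run it by hand with the same flags the command definition below passes it (‘-u’, ‘-t’, ‘-n’, and ‘-s’).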
To make use of the script in Nagios, let’s define the command that actually sends the alert:
define command{
        command_name    notify-svcgraph-by-email
        command_line    /path/to/sendgraph.py -u "$_SERVICEGRAPHURL$" -t $CONTACTEMAIL$ -n "$SERVICEDESC$" -s $SERVICESTATE$
        }
A couple of quick notes: change ‘/path/to/sendgraph.py’ to wherever you actually put the script, and note that ‘$_SERVICEGRAPHURL$’ and ‘$SERVICEDESC$’ are quoted because they can contain characters (ampersands in the URL, spaces in the description) that would otherwise cause trouble on the command line.
Fire up your Nagios daemon to take it for a spin. For testing, make sure you set the check_graphite_data thresholds to numbers that are pretty much guaranteed to trigger an alert when Graphite is polled. Hope this helps! If you have questions, first make sure you’re using Graphite’s ‘trunk’ branch, and not 0.9.9, and then give me a shout in the comments.