Consuming JSON with Splunk in two simple steps

Last week I needed to configure Splunk to consume JSON log files. The documentation on the Splunk website wasn’t particularly clear, and my first attempt ended in some strange results with data being repeated… With the help of an old colleague of mine (thanks Matt), I was pointed in the direction of this Splunk Answers question, which described the problem I was having as well as the solution – fixing the configuration.
So here are the steps required to set up Splunk to consume JSON from a log file. I’ll assume that you already have an instance of Splunk installed.

Step 1 – Install the Universal Forwarder (optional)

The setup that I was working with was a Splunk server running on a Virtual Machine in Azure and an on-premises server where the log files to consume were produced. Splunk provides a useful utility called the Universal Forwarder that consumes event data and sends it on to the Splunk server.
Installation is really straightforward so I’m not going to cover that here.
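What you will need to do is point the forwarder at your Splunk server. The host name will depend on your setup (the one below is just a placeholder), but assuming the default receiving port of 9997 it looks something like this. First enable receiving on the Splunk server:
$SPLUNK_HOME/bin/splunk enable listen 9997
Then, on the remote node, tell the forwarder where to send its data:
$SPLUNK_HOME/bin/splunk add forward-server your-splunk-server:9997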

Step 2 – Configuring a custom source type

This is the part that caught me out. From the searching that I did the first time around, I learnt that I needed to set up a custom source type that told Splunk to parse the data as JSON. The mistake that I made was creating this custom source type on the remote node where I had the Forwarder installed.
To do it correctly, you will need to open/create a props.conf file on the Splunk server with the following content:
[my_custom_type]
INDEXED_EXTRACTIONS = json
KV_MODE = none
INDEXED_EXTRACTIONS = json tells Splunk to extract the JSON fields at index time, and KV_MODE = none stops it extracting them again at search time – that double extraction is what causes the duplicated data.
The props.conf file can be found at
$SPLUNK_HOME/etc/system/local/
If props.conf doesn’t exist in this folder (it didn’t for me) then you will need to create it.
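For reference, the log files I was working with contain one JSON object per line. The field names below are made up purely for illustration – any valid JSON will do:
{"timestamp": "2015-05-18T10:32:00Z", "level": "INFO", "message": "Request completed", "durationMs": 42}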

Step 3 – Setting up log file monitoring

This is the easy part, and the part that I did do correctly. On the remote node, open the inputs.conf file and add the following:
[monitor://c:\logs\directory\]
sourcetype=my_custom_type
The inputs.conf file can be found at
$SPLUNK_HOME/etc/system/local/
With that done, data is going in and nothing is being duplicated.
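If you want to double-check, run a quick search on the Splunk server against the new source type – each event should appear exactly once, with its JSON fields extracted:
sourcetype=my_custom_type | head 10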

No More Interruptions – Integrating Codealike and HipChat

I’ve recently started using Codealike, a service that tracks various metrics while I’m coding. The data it collects is then presented in a bunch of really useful ways to help determine when I’m being most productive, as well as the places in our code base where I spend most of my time, and various other things.

One of the metrics they calculate as part of this process is how “focused” you are, and from this they determine whether or not someone should interrupt you – there are three levels: No Activity, Can Interrupt and Cannot Interrupt.

One of the worst things that can happen is being interrupted while you’re “in the zone” or as Codealike put it “on fire”.

[Image: Codealike “may I interrupt?” status]

Codealike does provide a webpage that you could put on a display in your office which shows your current status (you can view my current status), however whilst I have two monitors at work I tend to use both of them.

We use HipChat at work as our IM of choice, so during the last week I’ve started setting my status to Do Not Disturb when Codealike thinks I’m in the zone.

Introducing: Codealike IM Updater

To remove the interruption of having to update my status in HipChat when I’m in the zone, I set out to build the Codealike IM Updater. It’s a simple application that periodically checks my status according to Codealike and updates my status on HipChat accordingly.
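Under the hood it is just a polling loop. The application itself is a .NET app, but the rough shape of it looks something like this Python sketch – note that the Codealike and HipChat URLs here are placeholders rather than the real endpoints, and the level names are taken from the statuses described above:

import time
import requests

# Placeholder URLs – the real Codealike and HipChat endpoints differ;
# this only illustrates the shape of the polling loop.
CODEALIKE_STATUS_URL = "https://example.com/codealike/status/{username}"
HIPCHAT_STATUS_URL = "https://example.com/hipchat/user/{email}/status"

# How the Codealike focus levels map onto a HipChat presence.
LEVEL_TO_PRESENCE = {
    "No Activity": "away",
    "Can Interrupt": "chat",
    "Cannot Interrupt": "dnd",
}

def poll(username, email, hipchat_token, message="", interval_seconds=60):
    while True:
        # Ask Codealike how focused the user currently is.
        level = requests.get(CODEALIKE_STATUS_URL.format(username=username)).json()["level"]

        # Push the mapped presence (and the optional message) to HipChat.
        requests.put(
            HIPCHAT_STATUS_URL.format(email=email),
            headers={"Authorization": "Bearer " + hipchat_token},
            json={"presence": LEVEL_TO_PRESENCE.get(level, "chat"), "status": message},
        )
        time.sleep(interval_seconds)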

[Screenshot: the Codealike IM Updater configuration window]

As you can see, it is very simple – all the user needs to do is supply their Codealike username, their HipChat API token and email address, and how they want to map the different levels from Codealike to HipChat. You can also specify an optional message that is displayed next to your name in HipChat.

The code is available on GitHub and if anyone would like to extend it to work with other IM services then I will happily accept pull requests.

New Relic and Nancy

At DrDoctor we’ve been using Nancy as our web framework for quite some time now. We’ve found it to have many advantages over ASP.Net MVC (we still have one legacy ASP.Net MVC site running) as well as excellent community support.

Recently we started using New Relic to monitor the performance of our web applications. However, I was a bit disappointed when I first looked at the data it had collected and saw this:

[Screenshot: New Relic not splitting out the individual routes]

New Relic grouped all the different routes together under the single title of “NancyHttpRequestHandler”. I compared this to the list of Transactions that New Relic had picked up from our ASP.Net MVC application, where there was a nice list of all the different routes that had been visited.

After a very quick Google search I came across the Nancy and New Relic page on the Nancy wiki. After following the instructions and redeploying our website, New Relic started to list all the different Nancy routes:

[Screenshot: New Relic listing the individual Nancy routes]

New Relic will also pick up any exceptions that are thrown and show these in the Errors section.

We’ve had New Relic capturing performance data for about a month now and have already used it to make significant improvements to some of the key parts of our web apps.