Tracing Visualizations

I’ve been playing with JavaScript function tracing. There are a lot of interesting ways function trace data can be presented. First, one must choose whether to present a linear trace through the code or a call graph. Once you’ve made a choice of data type, how you present the data becomes key to how easily insights are gained.

I originally started by simply logging linear traces to the console. For test data, I’m using the open source Pocket Island HTML5 game. This provided some interesting data about the flow of functions as they occurred, but it was too much data even after filtering out fast functions and traces.
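
A minimal sketch of the kind of instrumentation I mean (the names and threshold here are illustrative, not the actual tracer): wrap a function so each call is appended to a log with its duration, and drop entries that finish faster than some cutoff.

```javascript
// Wrap a function so every call is recorded with its duration.
// Calls faster than `thresholdMs` are filtered out, since fast
// functions add console noise without adding much insight.
function trace(name, fn, log, thresholdMs) {
  return function () {
    var start = Date.now();
    var result = fn.apply(this, arguments);
    var elapsed = Date.now() - start;
    if (elapsed >= thresholdMs) {
      log.push({ name: name, ms: elapsed });
    }
    return result;
  };
}

// Usage: instrument a function, call it, then inspect the trace.
var log = [];
var slowSquare = trace('square', function (n) {
  var end = Date.now() + 5; // busy-wait so the call registers (~5ms)
  while (Date.now() < end) {}
  return n * n;
}, log, 1);

slowSquare(4);
```

Real tracing hooks into every function rather than one at a time, but the logged shape — name plus elapsed time — is the same.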


At New Relic, we’ve been playing around with D3.js for visualizations. I decided to try to implement a call graph instead, and came up with a somewhat useful tree diagram.


However, the diagram gets rather unwieldy the more functions and depth your tree has. And though I highlight slow and/or hot paths through the code, it still doesn’t provide an easy way to gauge the relative time spent by each function. My next goal is to switch to a sunburst diagram, which provides multiple benefits.


Firstly, we can do both linear traces and call graphs using the sunburst diagram. For linear traces, the radial axis would represent time, and each block in the rings around the center would represent a function being called. Outer rings represent functions called by functions in the inner rings. For call graphs, the radial axis would represent the total time, and blocks would represent the relative amount of that time spent in each code path.
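
One way to get the data into a sunburst-friendly shape (a sketch, not tied to any particular library): fold a stack of enter/exit trace events into a tree whose nodes carry durations. A hierarchy of `{ name, ms, children }` nodes is exactly what partition-style layouts such as `d3.partition` consume.

```javascript
// Fold a stream of enter/exit events into a call tree. Each node
// records its name and total time; its children are the functions
// it called, which become the next ring out in the sunburst.
function buildCallTree(events) {
  var root = { name: 'root', children: [] };
  var stack = [root];
  events.forEach(function (ev) {
    if (ev.type === 'enter') {
      var node = { name: ev.name, start: ev.ts, ms: 0, children: [] };
      stack[stack.length - 1].children.push(node);
      stack.push(node);
    } else { // 'exit'
      var done = stack.pop();
      done.ms = ev.ts - done.start;
      delete done.start;
    }
  });
  return root;
}

// A linear trace: main calls parse, then render.
var tree = buildCallTree([
  { type: 'enter', name: 'main',   ts: 0 },
  { type: 'enter', name: 'parse',  ts: 1 },
  { type: 'exit',  name: 'parse',  ts: 4 },
  { type: 'enter', name: 'render', ts: 4 },
  { type: 'exit',  name: 'render', ts: 9 },
  { type: 'exit',  name: 'main',   ts: 10 }
]);
// tree.children[0] is main (10ms), with children parse (3ms) and render (5ms)
```

For the call-graph variant, you would additionally merge sibling nodes that share a name, summing their times.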

Secondly, because the circle is finite, we no longer risk a horrendously long time axis. Durations will be neatly divvied up around the circle, which still maintains the most important property: helping the user understand the relative differences in time spent among code paths.

Lastly, we can do some interesting coloring to visualize general paths through libraries. Each function block could be colored based on the file in which it resides. This would provide an interesting insight into how segmented one’s code base is.
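
The file-based coloring could be as simple as hashing the file name to a hue, so every block from the same file shares a color (the hash below is an illustrative choice, not any library’s scheme):

```javascript
// Deterministically map a file name to an HSL color so that all
// function blocks from the same file share a hue in the sunburst.
function fileColor(fileName) {
  var hash = 0;
  for (var i = 0; i < fileName.length; i++) {
    hash = (hash * 31 + fileName.charCodeAt(i)) % 360; // hue in [0, 360)
  }
  return 'hsl(' + hash + ', 70%, 50%)';
}
```

Long runs of a single color then read as time spent inside one file, while rapid color changes suggest heavy cross-file traffic.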


Whiskey vs Mocha (and Grunt)

I prefer whiskey in general, but for the purposes of this discussion these are JavaScript test runner frameworks. They make it easier to run your tests and gather their output.

For a new project at New Relic, my new employer, I had to develop a test infrastructure for an existing product. I needed three services:

  • Test HTTP app server
  • Separate static file server for serving a JavaScript script
  • Fake data collection service that gathers data through a REST API

I also wanted to perform some headless testing, so I decided to use PhantomJS. That meant I also needed to start a PhantomJS service. So now I have to start four services before running any tests. This led me to investigate Whiskey. Whiskey has a really nice dependencies feature where you can define service dependencies for individual tests. It then looks up the services in a dependencies.json file and fires them up. Once the tests complete, it kills the services.

But I found three main issues with Whiskey. First, it only works properly if you use Whiskey’s built-in assertion library. I prefer the Chai assertion library, mostly because it has some extra useful assertions that Whiskey lacks. Second, it runs tests in separate processes, which is great for parallelism but painful if you need to debug your tests. Lastly, although Whiskey has the wonderful dependencies feature, it does a poor job of tearing them down when testing dies unexpectedly. After repeatedly checking for and killing leftover processes after many test runs, I decided to find something else.

I decided to rejoin the beaten path and check out the popular Mocha test framework. However, Mocha doesn’t have any dependency-running functionality. After some searching and playing around, I found that the Grunt JS task runner has the perfect plugin: external_daemon. Running external_daemon and then Mocha, wrapped together in a Grunt task, provides the dependency running I needed while letting me use the more stable and versatile Mocha test framework.
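
Wired together, the Gruntfile ends up looking roughly like this. This is a sketch from memory: the exact option names for external_daemon and the Mocha plugin (grunt-mocha-test here) should be checked against their docs, and the service paths are placeholders.

```javascript
module.exports = function (grunt) {
  grunt.initConfig({
    // Start the fake collector before the tests run; external_daemon
    // keeps the process alive for the duration of the Grunt run.
    external_daemon: {
      collector: {
        cmd: 'node',
        args: ['test/fake-collector.js'], // placeholder path
        options: {
          // Don't proceed until the service reports it is ready.
          startCheck: function (stdout, stderr) {
            return /listening/.test(stdout);
          }
        }
      }
    },
    // Then run the Mocha suite against the now-running services.
    mochaTest: {
      test: { src: ['test/**/*.test.js'] }
    }
  });

  grunt.loadNpmTasks('grunt-external-daemon');
  grunt.loadNpmTasks('grunt-mocha-test');
  grunt.registerTask('test', ['external_daemon', 'mochaTest']);
};
```

In practice each of the four services gets its own target under `external_daemon`, all started before `mochaTest` in the task list.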

I’m not sure we’ll stay with this setup long term, but it seems to be working alright so far. We’ve integrated this test framework into a new continuous integration/deployment pipeline using Jenkins. So far, I’m quite pleased!