Maximize JS

JavaScript source maps are really cool. They allow people to ship minified code while retaining the ability to map back to the original source code in development tools like browser debuggers. There are three pieces to a full source map experience when debugging in browsers:

  • The minified script, which the browser already has
  • The source map, which the browser can get from a comment at the end of the minified script
  • The original script, which the browser can get from the source map

The source map and original script are likewise retrieved from the web server. If either the source map or the original script is missing, existing tools give up and show only the minified script. I had a theory that the source map by itself contained enough data to make a decent reproduction of the original script. Comments will be missing, whitespace won’t be the same, etc. However, the code would be very readable, and details such as which libraries have been used should be evident.
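To see why the theory is plausible, it helps to look at what a source map actually encodes. The `mappings` field is a string of Base64 VLQ segments, where each segment carries deltas for the generated column, source file index, original line/column, and (optionally) an index into the `names` array of original identifiers. Here is a minimal sketch of decoding one such segment (this is illustrative, not Maximize’s actual code):

```javascript
// Minimal Base64 VLQ decoder, the encoding used by the "mappings" field
// of a source map. Each decoded segment yields the position deltas and
// name indices that make reconstructing readable code possible.
const B64 = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

function decodeVLQSegment(segment) {
  const values = [];
  let value = 0, shift = 0;
  for (const ch of segment) {
    const digit = B64.indexOf(ch);
    value += (digit & 31) << shift; // low 5 bits carry data
    if (digit & 32) {               // bit 6 is the continuation flag
      shift += 5;
    } else {
      // the lowest bit of the decoded value is the sign bit
      values.push(value & 1 ? -(value >>> 1) : value >>> 1);
      value = 0;
      shift = 0;
    }
  }
  return values;
}
```

For example, `decodeVLQSegment("AAAA")` yields `[0, 0, 0, 0]` (the very first mapping, pointing at line 0, column 0 of the first source), and name indices in later segments are what let a tool restore original identifier names.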

I decided to test the theory by making Maximize. Maximize takes a URL to a script hosted on a website and outputs a deobfuscated, beautified version of the script, assuming the source map is available. For example, this is the beginning of the minified script on fontdragr:

"use strict";(function(e,n,t){function r(e){return String.fromCharCode(e)}function i(e){return e&&"number"==typeof e.length?"function"!=typeof e.hasOwnProperty&&"function"!=typeof e.constructor?!0:e instanceof an||Yt&&e instanceof Yt||"[object Object]"!||"function"==typeof e.callee:!1}function o(e,n,t){var r;if(e)if(k(e))for(r in e)"prototype"!=r&&"length"!=r&&"name"!=r&&e.hasOwnProperty(r)&&,e[r],r);

After maximizing, the script looks like this:

"use strict";
(function(window, document, undefined) {
    function fromCharCode(code) {
        return String.fromCharCode(code)

    function isArrayLike(obj) {
        return obj && "number" == typeof obj.length ? "function" != typeof obj.hasOwnProperty && "function" != typeof obj.constructor ? !0 : obj instanceof JQLite || jQuery && obj instanceof jQuery || "[object Object]" !== || "function" == typeof obj.callee : !1

    function forEach(obj, iterator, context) {
        var key;
        if (obj)
            if (isFunction(obj))
                for (key in obj) "prototype" != key && "length" != key && "name" != key && obj.hasOwnProperty(key) &&, obj[key], key);
            else if (obj.forEach && obj.forEach !== forEach) obj.forEach(iterator, context);
        else if (isArrayLike(obj))
            for (key = 0; obj.length > key; r++), obj[key], key);
            for (key in obj) obj.hasOwnProperty(key) &&, obj[key], key);
        return obj

If you look further into the script, you’ll find the AngularJS library along with all the fontdragr modules and controllers. This is all generated from the minified script and the source map, without any need for the original script! For web developers who minify their scripts partly for obfuscation, watch out!


How to grab SVGs from websites

So you come across an SVG on a website, and you want to save it to a file. Maybe you want to send it by email, or put it in a presentation, etc. You think, “I know, I’ll just right-click on it and Save as…” Wrong! That saves the webpage itself. In fact, there is no easy way to do this. You can find plenty of people online who also want this functionality, but I haven’t found any solutions that render directly to an SVG. I’ve found solutions that render to a PDF, but PDFs come with their own limitations and issues. Besides, once you have a real SVG, you can easily use other tools to convert it to whatever format you want.

What to do… Well, I spent some time poking around with SVG files and D3 SVG visualizations to figure out how they compared. I noticed that the structure of the HTML <svg> tag and its descendants looked the same as the structure of the SVG element in SVG files. The only difference was that the SVG element in SVG files includes two XML namespace attributes:

<svg xmlns:svg="" xmlns="" width="620" height="90">

That’s a pretty simple fix, but it’s not quite enough. We also need to add the XML declaration at the top of the file:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>

We’re almost there! If you then put these two pieces together in a new file, your SVG may look correct. If your SVG is a D3 visualization, probably not. D3 SVGs use CSS style rules to specify how SVG elements look. Luckily, HTML and SVG use the same CSS mechanism. We simply need to insert all the relevant CSS rules in a new <style> tag under the <svg> tag.
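Putting those pieces together is simple enough to sketch in a few lines. The helper below is illustrative (it is not svgrab’s actual API): it takes the markup of an inline <svg> element and turns it into a standalone SVG document by adding the namespace attributes and the XML declaration.

```javascript
// Turn inline <svg> markup into a standalone SVG document.
// Hypothetical helper for illustration; svgrab's internals may differ.
const XML_DECL = '<?xml version="1.0" encoding="UTF-8" standalone="no"?>';
const SVG_NS = 'xmlns=""';
const SVG_PREFIX_NS = 'xmlns:svg=""';

function toStandaloneSvg(svgMarkup) {
  let svg = svgMarkup;
  // Add the namespace attributes if the inline markup lacks them.
  if (!/xmlns=/.test(svg)) {
    svg = svg.replace(/<svg/, `<svg ${SVG_PREFIX_NS} ${SVG_NS}`);
  }
  // Prepend the XML declaration so the file is a valid standalone SVG.
  return `${XML_DECL}\n${svg}`;
}
```

In a browser you would feed it `document.querySelector("svg").outerHTML`; the missing piece for D3 visualizations is then inlining the relevant CSS rules, as described above.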

Ok, great, but this is a bit tedious… Yeah, I know, so I wrote svgrab. It’s a node.js script, so install node and npm. Then install the script: npm install -g svgrab. Now you can grab SVGs from websites to a file like a pro! You can also define a timeout to wait for, since some D3 visualizations are time-dependent. Under the covers, svgrab runs phantomjs, which renders the website in a headless environment. It then selects the first SVG element (you can optionally specify another SVG element if you need to) and performs the transformations above.

Now when you need to share your awesome D3 visualization somewhere other than in a browser, you can render it to SVG (and then to something else if needed) much more easily!


Tracing Visualizations

I’ve been playing with JavaScript function tracing. There are a lot of interesting ways function trace data can be presented. First, one must choose whether to present a linear trace through the code or a call graph. Once you’ve made a choice of data type, how you present the data becomes key to how easily insights are gained.

I originally started by simply logging linear traces to the console. For test data, I’m using the open source Pocket Island HTML5 game. This provided some interesting data about the flow of functions as they occurred, but it was too much data even after filtering out fast functions and traces.


At New Relic, we’ve been playing around with D3.js for visualizations. I decided to try to implement a call graph instead, and came up with a somewhat useful tree diagram.


However, the diagram gets rather unwieldy the more functions and depth your tree has. And though I highlight slow and/or hot paths through the code, it still doesn’t provide an easy way to gauge the relative time spent by each function. My next goal is to switch to a sunburst diagram, which provides multiple benefits.


Firstly, we can do both linear traces and call graphs using the sunburst diagram. For linear traces, the angular axis would represent time, and each block in the rings around the center would represent a function being called. Outer rings represent functions called by functions in the inner rings. For call graphs, the full circle would represent the total time, and blocks would represent the relative amount of that time spent in each code path.

Secondly, because the time axis wraps around the circle, we no longer risk a horrendously long time axis. Durations will be neatly divvied up around the circle, which still maintains the most important property: helping the user understand the relative differences in time spent among code paths.
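The divvying-up itself is straightforward. As a sketch (a hypothetical helper, not part of any visualization library): given the self time of each function, the angular span of each block falls out of its share of the total.

```javascript
// Map per-function self times to angular spans on a sunburst ring,
// where the full 360-degree circle represents the total time.
function angularSpans(selfTimes) {
  const total = Object.values(selfTimes).reduce((a, b) => a + b, 0);
  const spans = {};
  for (const [name, t] of Object.entries(selfTimes)) {
    spans[name] = (t / total) * 360; // degrees around the circle
  }
  return spans;
}
```

So a function responsible for three quarters of the time gets three quarters of the circle, no matter how long the trace actually ran.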

Lastly, we can do some interesting coloring to visualize general paths through libraries. Each function block could be colored based on the file in which it resides. This would provide an interesting insight into how segmented one’s code base is.


Whiskey vs Mocha (and Grunt)

I prefer whiskey in general, but for the purposes of this discussion, these are JavaScript test runner frameworks. They make it easier to run your tests and gather their output.

For a new project at New Relic, my new employer, I had to develop a test infrastructure for an existing product. I needed three services:

  • Test HTTP app server
  • Separate static file server for serving a JavaScript script
  • Fake data collection service that gathers data through a REST API

I also wanted to perform some headless testing, so I decided to use phantomjs. That meant I also needed to start a phantomjs service. So now I have to start four services before running any tests. This led me to investigate Whiskey. Whiskey has a really nice dependencies feature where you can define service dependencies for individual tests. It then looks up the services in a dependencies.json file and fires them up. Once the tests complete, it kills the services.

But I found three main issues with Whiskey. First, it only works properly if you use Whiskey’s built-in assertion library. I prefer the Chai assertion library, mostly because it has some extra useful assertions that Whiskey lacks. Second, it runs tests in separate processes, which is great for parallelism but painful if you need to debug your tests. Lastly, although Whiskey has the wonderful dependencies feature, it does a poor job of tearing the services down when testing dies unexpectedly. After repeatedly having to hunt down and kill leftover processes, I decided to find something else.

I decided to rejoin the beaten path and check out the popular Mocha test framework. However, Mocha doesn’t have any dependency running functionality. After some searching and playing around, I found that the Grunt JS task runner has the perfect plugin: external_daemon. The combination of external_daemon then mocha wrapped in a Grunt task provides the dependency running I needed while allowing me to use the more stable and versatile Mocha test framework.
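The resulting Gruntfile looks roughly like this. This is a sketch: the service name and command are placeholders for our actual services, and the option names follow my reading of the grunt-external-daemon and grunt-mocha-test READMEs, so double-check them against the current plugin versions.

```javascript
module.exports = function (grunt) {
  grunt.initConfig({
    // Start the services the tests depend on before mocha runs.
    // "fake_collector" and its command are illustrative placeholders.
    external_daemon: {
      fake_collector: {
        cmd: "node",
        args: ["test/fixtures/fake-collector.js"],
        options: {
          // Consider the daemon "up" once it logs that it is listening.
          startCheck: function (stdout, stderr) {
            return /listening/i.test(stdout);
          }
        }
      }
    },
    mochaTest: {
      test: {
        options: { reporter: "spec" },
        src: ["test/**/*.js"]
      }
    }

  // Start the daemons, then run the mocha suite.
  grunt.registerTask("test", ["external_daemon", "mochaTest"]);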

I’m not sure we’ll stay with this setup long term, but it seems to be working alright so far. We’ve integrated this test framework into a new continuous integration/deployment pipeline using Jenkins. So far, I’m quite pleased!