Considering Speed and Slowness in AngularJS

December 2nd, 2013

AngularJS is a JavaScript framework used as the basis for single-page applications that largely run in the client, usually exchanging information with the server via REST or REST-like APIs. Like most of its sibling frameworks it is fast in modern browsers: this is an age of machines with a great deal of memory and processing power. If you are building an application that never displays more than a modest amount of data to the user all at once, and will not be used on mobile devices or tablets, then you rarely have to pay all that much attention to performance on the client. To a first approximation everything just works and just works fast enough for the matter at hand.

The Basics: What Chews Up Time

Under the hood only a few line items are important from a speed point of view in the average AngularJS application:

  • DOM manipulation.
  • The digest cycle that creates DOM updates when data models change.
  • Latency for REST requests to the backend.

Vanilla AngularJS is usually fast enough on a desktop: modern browsers are very efficient when it comes to DOM manipulation, and small HTTP requests are fast over today's internet. Running the digest to process and update simple partials, directives, filters, and other code associated with the current view is also usually fast enough. When one or more of these items stops being fast, you start to run into issues and the need to spend time on optimization.

Watch Functions Are Invoked Many, Many Times

AngularJS runs a repeating digest process that keeps the displayed DOM as described in partials in sync with the underlying data. To a first approximation, every binding of data into a partial requires a watch function to run during each digest. For example, this requires a watch on $scope.value:

<span>{{value}}</span>

You can overload and dramatically slow AngularJS on a desktop browser by creating thousands of small watches in a single view. The display of tabular data is a frequent culprit here. It is a good idea to restrict yourself to a couple of hundred data bindings at most in any given view if you want snappy performance, and adjust downward as appropriate if dealing with mobile devices.

You can write your own watch functions, though a good rule of thumb is to ask yourself whether you really, actually, truly need to. It is rarely the case that a watch function is required unless you are trying to integrate non-AngularJS code with the AngularJS digest and update cycle.

When you do have to write watch functions, they must run very rapidly. Meaningful use of jQuery inside a watch function is a bad idea, for example. It will be invoked many, many times, and enough of that will make your application sluggish:

$scope.$watch(function () {
  // Don't do this sort of thing.
  // return jQuery('.className').length;
  // Use a reference to the underlying data instead.
  if ($scope.items) {
    return $scope.items.length;
  } else {
    return 0;
  }
}, function () {
  // Do something in response.
});

Filter Functions Are Invoked Many, Many Times

Whenever you write a filter, bear in mind that this may get called many, many times. Digest loops can run sequentially several times even in simple AngularJS partials. It is easy to create a filter that can get called fifty times for a table of ten rows every time that view is rendered, not just the first time.

Not so long ago I ran into a filter - more a horrible hack, really - that was doing something like this:

someModule.filter('someFilter', function () {
  return function (id) {
    return jQuery('#' + id).children('span').text();
  };
});

As you can imagine, this was definitely slowing down the view. It's probably best not to ask what the author was thinking at the time; I didn't. I just replaced it and its role in the partial with code that worked from the underlying data rather than trying to join together loose ends after the fact.

Here are two rules for filters: firstly, if the same effect can be simply achieved by decorating the underlying data, such as by adding a "formattedValue" property, then do it that way. It will always be faster because it only happens once, not multiple times per digest cycle. Secondly, when you do have good reasons to write a filter make sure that it is blindingly fast.

In general, aim to replace this:

<span>{{value | formatForDisplay}}</span>

With this:

<span>{{formattedValue}}</span>
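The decoration itself is a one-time pass over the data, typically when it arrives from the server. A minimal sketch, assuming a currency-style display - formatForDisplay and decorate are illustrative names of my own, not AngularJS APIs:

```javascript
// Decorate the data once, when it arrives, rather than formatting it in
// a filter on every digest cycle. formatForDisplay stands in for
// whatever work the filter was doing.
function formatForDisplay(value) {
  return '$' + value.toFixed(2);
}

function decorate(items) {
  items.forEach(function (item) {
    item.formattedValue = formatForDisplay(item.value);
  });
  return items;
}

// In a controller, something like: $scope.items = decorate(response.data);
```

The formatting cost is now paid once per data load instead of once per binding per digest.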

Work to Minimize the Number of Digest Cycles

AngularJS will run digest cycles, and thus invoke watch functions for all data bindings, in response to a variety of events. The most common are user actions set to trigger a JavaScript function via directives such as ngClick, and AJAX requests made via the $http service. The more frequently that digest cycles run, the more work the browser has to do.

One practical consequence is that running many AJAX requests results in much more of a slowdown in AngularJS than in a framework that uses a different methodology to bind model data to the displayed DOM. So consider designing your application up front to avoid multiple requests during the view rendering process, as this can have more of a negative impact than you might imagine. Unlike a plain jQuery application those AJAX requests are not effectively free: there is a comparatively large overhead that kicks off when each one completes.

Similarly, if reacting to user input - especially typing - you might implement a short delay rather than reacting to every change. This is already best practice for a web application, but the need for it is greater here.
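As a sketch of that delay, here is a plain debounce helper - the function name and the injectable timer arguments are my own, for illustration. In an AngularJS controller you would typically use the $timeout service in the same role so that the eventual reaction runs inside a digest:

```javascript
// Collapse a rapid burst of calls into one, made delayMs after the last
// call in the burst. The timer functions are injectable so the helper
// can be tested without a real clock; they default to the globals.
function debounce(fn, delayMs, setTimeoutFn, clearTimeoutFn) {
  setTimeoutFn = setTimeoutFn || setTimeout;
  clearTimeoutFn = clearTimeoutFn || clearTimeout;
  var pending = null;
  return function () {
    var args = arguments;
    if (pending !== null) {
      // A newer call arrived before the delay elapsed: discard the
      // earlier pending timer.
      clearTimeoutFn(pending);
    }
    pending = setTimeoutFn(function () {
      pending = null;
      fn.apply(null, args);
    }, delayMs);
  };
}

// e.g. react to typing at most once per 250ms pause:
// input.on('keyup', debounce(updateSearchResults, 250));
```

Each keystroke then resets the timer, so the expensive reaction - and the digest it triggers - runs once per pause in typing rather than once per character.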

Multiple Digests Sooner Can Be Worse Than One Digest Later

Progressive loading of data is quite easy to accomplish in AngularJS. Write the partial with whatever rules you want for display of complete or partial data, and then just add the data to the model as it turns up. However, it is well worth testing to see whether it is in fact a better user experience to run partial updates versus showing a loading spinner and waiting for the full data to arrive. Don't assume that multiple ongoing AngularJS digests are going to be faster or prettier than running it all at once: they might be, but that depends on the browser, the device, internet latency, and the details of the partial and which data turns up first.

On mobile browsers in particular it is quite possible to craft a partial and data loading scenario that is slow and ugly in comparison to waiting with a spinner for the full data to arrive.

Batarang is a Great Tool to Identify Expensive Functions

You should be using Batarang or a similar inspector. Not all the time, because it is slow as molasses, but from time to time in order to find and eliminate slow functions, directives, and partials. Batarang will give you a measure of elapsed time to render a view, broken out by function, which is great for pinpointing starting points for optimization.

One thing you'll quickly notice is that ngRepeat shows up at the top of the cost list in Batarang for every view in which it is present. This is understandable: it is usually the case that ngRepeat wraps a region of a partial that contains further directives. All of that processing time contributes to the number that Batarang reports for ngRepeat.

ngRepeat is an Enabler of Slowness

Wherever you find slowness in AngularJS, you will usually also find ngRepeat. It acts as a slowness amplifier by causing code that would be only slightly slow on its own, invoked once, to become very slow because it is being invoked many times in the ngRepeat loop.

The reductio ad absurdum case is simple: try to render a 6000 row table where each row consists of a single cell and an AngularJS binding. This will not be fast, for all the obvious reasons, and it can become worse if you start adding or removing rows on the fly, or otherwise forcing AngularJS to rerun digest cycles that include the ngRepeat directive.

<!--
  If values is a 6000 element array, then this will be slow.
-->
<tr ng-repeat="value in values">
  <td>{{value}}</td>
</tr>

So pay attention when using ngRepeat. At the very least, don't expect to be able to combine many rows with many bindings or complex behavior per row and still have a snappy, responsive view at the end of the day.
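One mitigation worth noting: if you are on AngularJS 1.2 or later and your rows have stable unique identifiers, the track by syntax lets ngRepeat reuse existing DOM elements rather than destroying and recreating them when the collection changes:

```html
<!--
  With track by, rows are matched by id across digests, so unchanged
  rows keep their DOM elements. (AngularJS 1.2+ syntax; row.id is an
  illustrative property name.)
-->
<tr ng-repeat="row in rows track by row.id">
  <td>{{row.name}}</td>
</tr>
```

This doesn't reduce the number of watches, but it can substantially cut the DOM manipulation cost when rows are added, removed, or reordered.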

ngHide and ngShow Make No Difference to Speed

If you use ngHide or ngShow to hide content, then those hidden sections of the partial are still evaluated by AngularJS as though they were visible: DOM elements are created in all cases, and watches on the hidden bindings still run during each digest. You don't save any processing cycles beyond those associated with the browser actually rendering the now-hidden content.

ngIf and ngSwitch Might Make a Difference, Depending on Your Scenario

Unlike ngHide and ngShow, the ngIf and ngSwitch directives remove content from the DOM if it is set to be hidden, though it may first compile the hidden elements to create a copy that can be replaced later. In practice I've found that using ngIf or ngSwitch may or may not make a noticeable difference to the speed with which a view renders: it depends on the details of your scenario and the partial content within the directive. It should, in theory, make ongoing operations more responsive by virtue of removing elements and watches from the page. It is simple to try it out, however, especially if you are already using ngHide or ngShow, so why not run the experiment?

One approach to using ngIf is to wrap the inner content into a separate partial and ngInclude. If the ngIf condition is false at the outset, then the included partial will not be loaded and evaluated, which will definitely remove its contribution.

<div ng-if="value">
  <div ng-include="'/path/to/partial.html'"></div>
</div>

The Curse of IE8

Fortunately there might be only another year of supporting IE8 for those of us who have to grin and bear it today. When it comes to DOM manipulation, IE8 is painfully slow: it crawls to accomplish tasks that modern browsers can do near instantly. Worse, the thread it uses for DOM manipulation is the same thread used for all UI activities, so if you give it a large DOM change to chew over then the browser freezes until it is done.

It is no exaggeration to say that a table of twenty rows and two hundred bindings can take ten to twenty seconds to render in IE8, compared to 50ms or so in Chrome. For any average AngularJS application, IE8 is borderline unusable. Expect to see view rendering times of a few seconds for even simple content.

My advice for anyone designing a new significant application that must support IE8: don't use AngularJS or Ember or any of the other frameworks that are heavy on the DOM manipulation. You'll spend more time working around the slowness inherent in IE8 than you'll save by using the framework in the first place. Either postpone for a year, use something other than a modern single-page application framework, or find a way to avoid supporting IE8.

Mobile Browsers are Slow

Yes, mobile browsers are slow in comparison to the desktop, but since you will be displaying much less data on the screen in a dedicated mobile web application this tends to even things out in practice. All bets are off for a mobile device viewing a non-mobile website of course - but then if the user makes a habit of that they should be used to waiting by now.

Responsive applications are a different story, and more care has to be taken if using AngularJS here. One thing to bear in mind is that a responsive application should probably not just use ngIf, ngShow, or ngHide in order to show or hide sections of content. That content will still be rendered into the DOM by AngularJS, the watches and digests will still run, and the browser will still incur that cost in processing time even though the content is hidden. In effect this is a worse example of the responsive site tax that falls on mobile devices regardless of implementation: a lot of stuff is thrown down the wire and processed, but little of it is used.

Strategies for Replacing Slow Directives Such as ngRepeat

1. Use Bindonce

If you only have to display and not update data in a view then using Bindonce is definitely an improvement over not using it. Unfortunately it's all too rarely the case that an application is display-only, and Bindonce isn't as fast as just using jQuery to directly alter the DOM.
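For reference, typical Bindonce usage looks something like the following - the bindonce container directive waits for the data to arrive, and the bo-* directives render each value once without leaving a watch behind. Check the Bindonce project documentation for the full set of directives:

```html
<!--
  Each bo-text renders once when items arrives, then deregisters its
  watch, so the table adds nothing to subsequent digests.
-->
<ul bindonce="items">
  <li ng-repeat="item in items">
    <span bo-text="item.name"></span>
  </li>
</ul>
```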

2. Wrap Handlebars and jQuery in a Directive

If the data does in fact update, then you will have to move out of AngularJS and into some other template rendering system. Pretty much any sensible choice will be much faster than the digest cycle; Handlebars is used in this example. We can replace this:

<table>
  <tr ng-repeat="value in values">
    <td>{{value}}</td>
  </tr>
</table>

With this:

<table display-table-rows rows="values"></table>

(function () {
  'use strict';
  var example = angular.module('example');

  // Create a Handlebars template to render the inner HTML for
  // this directive.
  var template = [
    '{{#each rows}}',
    '<tr>',
      '<td>{{this}}</td>',
    '</tr>',
    '{{/each}}'
  ].join('\n');
  template = Handlebars.compile(template);

  function displayTableRows() {
    return {
      restrict: 'A',
      link: function (scope, elem, attr) {
        /**
         * Create the HTML and put it into place via jQuery. This is about as
         * optimized and rapid as any such operation can be.
         */
        function render() {
          var html = template({
            // attr.rows is an expression string such as "values", so
            // evaluate it against the scope to obtain the actual array.
            rows: scope.$eval(attr.rows)
          });
          jQuery(elem).html(html);
        }

        // Use a watch to link this simple Handlebars templating operation into
        // the AngularJS digest cycle - but do it in a way that is fast. The watch
        // function has to be efficient, as does the reaction to it.
        scope.$watch(
          function () {
            // Only react to changes in length and the first value - this should
            // cover the usual sorts of change in most common tabular data.
            var rows = scope.$eval(attr.rows);
            if (rows && rows.length) {
              return rows[0] + rows.length;
            } else {
              return 0;
            }
          },
          function () {
            render();
          }
        );
      }
    };
  }

  example.directive('displayTableRows', [
    displayTableRows
  ]);

}());

The downside of this approach is that any update to the data is an all or nothing affair: the whole table is rerendered. On the plus side the use of jQuery makes that pretty fast for any moderately sized set of template HTML.

Another downside is that any meaningful functionality starts to become very complex. If other parts of your AngularJS application alter the data that your directive depends on, or if there are form fields and functionality associated with table rows, then you suddenly find yourself back in the dark ages of jQuery DOM manipulation, event models, and manually ensuring that different parts of the application react when they should. It is very painful and time-consuming, and exactly what you wanted to avoid by choosing AngularJS.

3. Pagination, and Using Smaller Page Sizes

Paginate your data sets: display less at one time, break down the loading and rendering processing into smaller chunks. This doesn't really solve the problem at all, of course. It is just a workaround that might be acceptable in some situations.
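The mechanics are simple enough. A minimal sketch - pageOf is an illustrative helper name of my own; in a controller you would bind its result to the scope and render only that slice:

```javascript
// Return one page of a larger data set. Only the returned slice is
// bound into the view, so the number of watches stays proportional to
// the page size rather than the full data set.
function pageOf(rows, pageIndex, pageSize) {
  var start = pageIndex * pageSize;
  return rows.slice(start, start + pageSize);
}

// In a controller, something like:
// $scope.visibleRows = pageOf(allRows, $scope.pageIndex, 25);
```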

4. Forms of Infinite Scrolling

If you absolutely must display a large amount of data, but only some of it is visible at any given time, then you can use some form of infinite scrolling. Fortunately this is a popular enough technique that a range of folk have tinkered with it and found various optimizations that can make it fast enough under different scenarios, and a number of open source infinite scrolling directives exist for AngularJS.

This approach has the advantage of maintaining the AngularJS connection between model and DOM, so you don't have to dive into maintaining that yourself via events and increased complexity in your code.

You Can Hide a Lot Behind Internet Latency

If you absolutely must do something slow in your application, find a way to do it while user-initiated HTTP requests are processing. Viewing a loading spinner for a short period of time after requesting, saving, or updating data is expected in many types of application. To be clear, this is far from ideal: in the perfect application there is no imposed delay in the user experience for any reason. But where there has to be, at least try to combine it with the imposed delay due to internet latency.