
How msnbc.com handled the 2010 US election results

This post is long overdue. Developers from a few other news sites (nytimes.com, chicagotribune.com) posted shortly after the US elections in November 2010, describing how their sites weathered the evening’s flood of traffic — or didn’t. I thought I would add to the mix by describing how msnbc.com stayed comfortably above the flood waters.

Our challenge was the same as theirs: Serve up hundreds of election results pages, to tens of millions of people, with very low page load times, and the freshest possible data each time.

It’s pretty easy to serve up pages to almost any number of users: render your HTML once and cache it forever.  It’s also pretty easy to always give your users the freshest possible data: don’t cache anything and go to your database (or other data source) on every request.

What’s hard is serving up very fresh data to a gazillion users when that data is constantly changing. That’s the fundamental technical challenge of online news, and election night is a prime example.

500 election results pages

We created a separate page for each individual house, senate, and gubernatorial race — around 500 in total. Senate and gubernatorial race pages include county-level vote totals. Most race pages also include exit poll results — as many as a couple dozen questions, broken out for each candidate into 5-10 demographic groups.

On top of those individual pages, each state has a summary page containing the top-line results for all races in the state. We also have 4 national summary pages: the main page listing just the key races (determined by our editors), and one each for House, Senate, and gubernatorial races.

On election night, every page across all of msnbc.com displayed a Flash widget showing key race results. We also promoted a Flash widget that anyone could embed into their own site.  Both widgets were making requests at regular intervals to get the data they needed.

Change and consistency

All of that data changes incessantly.  The unit of consistency we need to maintain is the race. It’s no problem if one race is showing slightly staler data than another. (As long as it isn’t stale for more than about a minute!) But for a given race, we need consistency across the board.

Any update has to take effect across all of our servers in multiple data centers within a very short period of time. Otherwise, a user refreshing the page could bounce back and forth between pages showing different results for the same race. Not good.

We don’t yet use a distributed cache. We’re behind the times there. (We’ll be rectifying that in the near future.) We currently use the ASP.NET cache, which is in-process in each web app instance. We use our service bus to get the data out to each server.

We get our data from the NBC News election database. It is highly normalized and contains historical results for at least several election cycles. The first thing we do is to extract just the data we need into a few denormalized tables.  We then poll those tables frequently, and publish one message on the bus containing the full set of data for each race. Our front-end rendering servers subscribe to those messages.
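Here’s a rough sketch of what that poll-and-publish step might look like. The type and interface names (RaceResultsUpdated, IServiceBus, IResultsSource) are purely illustrative, not our actual code, and the message shape is simplified:

```csharp
using System;
using System.Collections.Generic;

// The message published for each race: one message carries the full,
// denormalized data set for that race.
public class RaceResultsUpdated
{
    public string RaceId { get; set; }                      // e.g. "WA-SEN" (illustrative)
    public DateTime AsOfUtc { get; set; }
    public IList<CandidateTotal> Candidates { get; set; }
}

public class CandidateTotal
{
    public string Name { get; set; }
    public string Party { get; set; }
    public long Votes { get; set; }
    public decimal VotePercentage { get; set; }
}

// Whatever bus transport is in use, reduced to the one call we need here.
public interface IServiceBus
{
    void Publish<T>(T message);
}

// Reads the denormalized tables and returns one full message per changed race.
public interface IResultsSource
{
    IEnumerable<RaceResultsUpdated> GetRacesChangedSince(DateTime lastPollUtc);
}

public class ResultsPoller
{
    private readonly IResultsSource _source;
    private readonly IServiceBus _bus;

    public ResultsPoller(IResultsSource source, IServiceBus bus)
    {
        _source = source;
        _bus = bus;
    }

    // Called on a short timer: publish one message per race that has changed
    // since the last poll. The front-end rendering servers subscribe to these.
    public void PollOnce(DateTime lastPollUtc)
    {
        foreach (var race in _source.GetRacesChangedSince(lastPollUtc))
        {
            _bus.Publish(race);
        }
    }
}
```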

The CAP theorem tells us that we can’t get consistency, availability, and partition tolerance all at the same time. We must have availability and partition tolerance (for brief periods), so we settle for eventual consistency.  In practice, each race message is received and processed by all front-end servers in less than a second.  That isn’t strictly 100% consistent in an ACID sense, but it’s consistent at a human timescale.

Two levels of caching

The “processing” done by each of our front-end servers is simply updating and invalidating cache items. Our caching for election results is really no different than our everyday content caching scenario.

We cache two types of things: data objects and rendered output. A data object contains the raw data for a news story, a video, race results, etc. Think of .NET objects with properties like Headline, Deck, Body, Candidates, VotePercentage, etc.  Rendered output is the actual HTML, XML, or JSON that is returned to the browser.
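For illustration, a data object is nothing exotic; the shape below is made up, and only the property names come from the description above:

```csharp
// Illustrative only: a cached "data object" is just a plain .NET class.
public class StoryData
{
    public string Headline { get; set; }
    public string Deck { get; set; }
    public string Body { get; set; }
}
// Race results are cached the same way, as objects carrying the candidates
// and their vote percentages (see the RaceResultsUpdated sketch above).
```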

A page’s output is not cached in its entirety. Reusable chunks of page output are independently cached.  This is sometimes called donut caching, although in our case the donut holes are also cached, and may be nested several levels deep.

A chunk is really just an ASP.NET MVC child action. If a child action with essentially the same parameters is reused across hundreds of pages, we don’t want to drop all of those pages out of cache when the output of the child action changes.

As an action executes, we track each data object that is accessed. When the action is finished, we cache the result with ASP.NET cache dependencies on each of the cached data objects. (Actually, there is a level of indirection in the dependency. We want to allow the data objects to fall out of cache without pulling the rendered output with them. Only an update to a data object causes dependent output to be flushed from cache.)
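Here’s a minimal sketch of that indirection, using the standard ASP.NET Cache and key-based CacheDependency. The helper and key names are illustrative, not our production code:

```csharp
using System;
using System.Web;
using System.Web.Caching;

public static class DonutCache
{
    // Key of the indirection "token" entry for a given data object key.
    public static string TokenKey(string dataKey)
    {
        return "token:" + dataKey;
    }

    // Make sure a token exists for a data object; tokens never expire on their own,
    // so the data object itself can fall out of cache without disturbing output.
    public static void EnsureToken(string dataKey)
    {
        var cache = HttpRuntime.Cache;
        if (cache[TokenKey(dataKey)] == null)
        {
            cache.Insert(TokenKey(dataKey), Guid.NewGuid(), null,
                         Cache.NoAbsoluteExpiration, Cache.NoSlidingExpiration);
        }
    }

    // Cache rendered child-action output, dependent on the token of every
    // data object the action touched while rendering. If any of those tokens
    // is later removed, this output entry is evicted automatically.
    public static void CacheOutput(string outputKey, string renderedHtml, string[] dataKeysUsed)
    {
        foreach (var dataKey in dataKeysUsed)
        {
            EnsureToken(dataKey);
        }

        var tokenKeys = Array.ConvertAll(dataKeysUsed, TokenKey);
        HttpRuntime.Cache.Insert(outputKey, renderedHtml,
                                 new CacheDependency(null, tokenKeys),
                                 Cache.NoAbsoluteExpiration, Cache.NoSlidingExpiration);
    }
}
```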

When a message containing a data update arrives at a rendering server, a message handler first updates the data object in cache.  It then invalidates any cached output that depended on that data object.  This is all done on a message handling thread, not an ASP.NET request thread.
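And a matching sketch of the handler side, reusing the RaceResultsUpdated and DonutCache sketches above (again, illustrative names only):

```csharp
using System.Web;
using System.Web.Caching;

public class RaceResultsUpdatedHandler
{
    public void Handle(RaceResultsUpdated message)
    {
        string dataKey = "race:" + message.RaceId;
        var cache = HttpRuntime.Cache;

        // 1. Replace the cached data object with the fresh data from the message.
        cache.Insert(dataKey, message, null,
                     Cache.NoAbsoluteExpiration, Cache.NoSlidingExpiration);

        // 2. Invalidate dependent rendered output by cycling the indirection token.
        //    Removing the token fires the CacheDependency on every cached chunk of
        //    output rendered from this race's data; the next request re-renders
        //    just those chunks.
        cache.Remove(DonutCache.TokenKey(dataKey));
        DonutCache.EnsureToken(dataKey);
    }
}
```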

The next request for the action will re-render and re-cache its result.  The surrounding page output will likely already be cached, so the re-rendered action output is just inserted in the donut hole as we stream the response to the browser.

In the end, rendering of each page is extremely fast because:

  • We rarely have to render a full page.  Most of the response is already cached as a series of strings, which are written directly into the response stream.
  • For those parts of a page that do need to be re-rendered, all the required data objects are very likely to be in cache.  We very rarely leave the process during an actual request.

Make that three levels of caching

One of the easiest ways to offload traffic for any site is to use a CDN, and we use several.  However, a CDN serves inherently stale data. For election night, we configured Akamai to cache race pages for just 30 seconds.

This may sound like it would prevent us from getting many requests to our servers.  However, with each of the 80 or so Akamai midgress nodes requesting each of the 1,500 pages (500 pages in one desktop and two mobile output flavors) every 30 seconds, the total load is about 4,000 requests per second (RPS). This figure doesn’t include requests from the various widgets consuming the JSON and XML formats.
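The arithmetic behind that estimate, spelled out:

```csharp
// Back-of-the-envelope check of the worst-case origin load from Akamai alone.
class OriginLoadEstimate
{
    static void Main()
    {
        int midgressNodes = 80;     // approximate Akamai midgress nodes
        int pages = 500 * 3;        // 500 races x (1 desktop + 2 mobile output flavors)
        int edgeTtlSeconds = 30;    // how long Akamai caches each race page

        double rps = (double)midgressNodes * pages / edgeTtlSeconds;
        System.Console.WriteLine(rps);  // 4000 RPS, before the JSON/XML widget requests
    }
}
```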

In theory, if we saw our servers tipping over under heavy traffic, we could increase the Akamai cache time.  But there’s a catch-22: It takes up to 4 hours for new Akamai settings to take effect throughout their network.  By the time a change took effect, it would be too late.

For election night we set up a backup configuration using L3 with a longer cache time, just in case.  As it turned out, we didn’t need it.

Testing, testing

NBC News runs dozens of data transmission tests leading up to election night. Hundreds of permutations of what could happen in a single race, and across all races, are tested, along with failures like bad data, partial data, no data, you name it. Not to mention time-compressed tests, where a series of changes representing an entire election night’s results are sent in just one or two hours.

We participated in most of these tests, making sure that our code handled all the expected and unexpected scenarios as desired.

Load, performance, and stress testing is an absolute necessity. We have to know where the breaking point is, so that we can take action (like extending CDN cache time and/or reducing update frequency) before our servers are overwhelmed.

We deploy to test environments and run those full system tests very frequently — daily leading up to election night. (We’re working hard to make these tests continuous.)  We also use performance profiling tools to point us to the worst bottlenecks in our code. That gives us the data to decide whether it’s worth optimizing specific methods.

Given the load testing results, we were confident we could handle election night.

So how did we do?

News traffic is inherently spiky. A major news event can cause traffic to spike 10x or even 100x in an hour. So our Ops team likes to keep server utilization very low on a normal day.  For example, typical weekday CPU utilization on a front-end server is 5-8%.  The nice thing about election night is that you know the spike is coming.

At our peak on election night, we had sustained CPU on our front-end servers around 40%. Even with Akamai out front, each server was handling on the order of 1000 RPS.

For a few minutes early on, we inadvertently pointed the site-wide Flash widget directly to a small number of servers. Those servers were handling as many as 16,000 RPS until we corrected the mistake. Oops! The good news is that they handled the load quite well.

Page load times were in our normal range throughout the evening.  We updated each of the 500 races at least once a minute for the entire night (i.e., a minimum of 8 races updated per second), and the updates made it out to all front-end servers in less than a second every time.

We had built some “Oh, crap” functionality: ways to manually update results if we lost connectivity from NBC News, or to force updates to all servers if they somehow got out of sync.  But we didn’t need to use any of it.

All in all, it was a very good night for us.

