Finding the Most Gerrymandered Districts


Yesterday, I came across an interesting article discussing Congressional district gerrymandering. In the article, author Andrew Prokop highlighted several of the country’s most gerrymandered districts. Having recently crunched some numbers on geographic data, I wondered: why not try to quantitatively define the most gerrymandered districts and states?

Defining Gerrymandering

As Prokop noted, there’s no great way to determine whether a district is gerrymandered. Nevertheless, researchers have proposed a few ways to approximate it. The proposals largely measure gerrymandering in one of two ways: by calculating how far various points on the district’s boundary are from the district’s geographic center, or by comparing the perimeter of the district to that of a similarly sized district with a regular shape (in this case, a circle). Both calculations are far from perfect—the first doesn’t work for noncontiguous districts, while the second is affected by any irregular boundaries, including coastlines and state borders—but they give decent estimates.

Because the effect created by irregular borders is both smaller and a bit easier to adjust for, I chose to rank districts by the ratio of the perimeter of the district to the circumference of a circle of equal area. The larger that ratio, the more gerrymandered it is.
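This ratio is simple to compute once a district’s area and perimeter are known. Here’s a minimal Python sketch of the idea (the original analysis used D3; the function name is mine):

```python
import math

def compactness_ratio(perimeter, area):
    """Ratio of a district's perimeter to the circumference of a
    circle of equal area. A circle scores exactly 1.0; the larger
    the ratio, the more irregular the district's shape."""
    # A circle of area A has radius sqrt(A / pi), so its
    # circumference is 2 * pi * sqrt(A / pi) = 2 * sqrt(pi * A).
    return perimeter / (2 * math.sqrt(math.pi * area))

# A circle of radius 1 (perimeter 2*pi, area pi) scores 1.0;
# even a square is slightly "worse" than a circle.
print(compactness_ratio(2 * math.pi, math.pi))  # → 1.0
print(round(compactness_ratio(4, 1), 2))        # → 1.13
```

Because the ratio is dimensionless, it can be compared across districts even though the underlying area and perimeter are measured in different units.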

Importantly, this ignores any political definition of gerrymandering and assumes the best-drawn districts are all circles. That assumption clearly doesn’t hold geometrically or politically. A truer measure of gerrymandering would compare actual district lines to lines that make sense given the area’s geography and population distribution, but that adds significantly more complexity to the problem.

Ranking the Districts

After calculating the area and perimeter of each district as well as the circumference of a circle with the same area (discussed in more detail below), districts can be ranked by how severely they’re gerrymandered. The list below shows the top 10 most gerrymandered districts in the United States.

The winner, North Carolina’s 12th District, is hardly a surprise. (According to Wikipedia, “It is an example of gerrymandering.”) Maryland’s 3rd and Florida’s 5th are also examples of some “creative” geographic interpretations. One unexpected addition to the list is Hawaii’s 2nd. In this case, Hawaii’s geography appears to be confusing the ranking, which misinterprets the district’s several islands as severely gerrymandered boundaries. It’s worth noting, however, that this method still ranks three other contiguous districts as more severely gerrymandered than a district broken into several pieces.

For those interested in the full ranking of all 435 districts, it is available here.

Ranking the States

Four of the top ten most gerrymandered districts are in North Carolina. This certainly makes North Carolina the frontrunner for the most gerrymandered state, but is it actually the worst?

It turns out that’s not a straightforward question to answer. You can’t simply sum each district’s gerrymandering scores because states with more districts tend to have much higher scores. Furthermore, taking the average of all the districts in each state doesn’t work either, because states with irregular borders or coastlines (like Alaska, Hawaii, Louisiana, and North Carolina) will appear more gerrymandered than they actually are.

To attempt to correct for these issues, I adjusted each state’s average score by how gerrymandered the state would appear if it were a single district. The more irregular the state border, the less penalized that state is for having irregular districts.
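In code, that adjustment might look something like the following minimal sketch. Dividing the average district score by the state’s own score is my guess at the simplest form of the adjustment described above, not the exact formula used:

```python
def adjusted_state_score(district_scores, state_score):
    """Average the districts' perimeter-to-circle ratios, then
    discount by how gerrymandered the state itself would look as a
    single district. States with irregular borders (a high
    state_score) are penalized less for irregular districts."""
    average = sum(district_scores) / len(district_scores)
    return average / state_score

# Two districts averaging 2.5, in a state whose own outline
# scores 1.25, yield an adjusted score of 2.0.
print(adjusted_state_score([2.0, 3.0], 1.25))  # → 2.0
```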

As the chart below shows, North Carolina is still, in fact, the worst offender. In addition to ranking states, the chart also colors each state according to who controls the process of drawing district lines. States with split executive and legislative branches or with split legislative houses are categorized as having split control. States that only have one Congressional district are grouped in with those that employ independent commissions to draw district lines.

While the graph above implicates both parties, it does suggest that independent commissions, unsurprisingly, help reduce gerrymandering. Half of the ten least gerrymandered states (excluding those with one district) employ independent commissions, while only two of the ten most gerrymandered states do so. (Additionally, many of the least gerrymandered states have fewer districts. This makes sense, because fewer districts provide fewer opportunities for gerrymandering.)

Importantly, this isn’t the only way to rank states—there are other (likely better) ways to measure district gerrymandering, other ways to aggregate it, and additional political and population data that might be useful. If anyone would like to try a different method of analysis or add additional data, I’ll be happy to provide full access to my data and work in Mode.


To rank Congressional districts, I needed to figure out two things: the area and the perimeter of each district. Fortunately, D3, the JavaScript visualization library, can do this pretty easily. Using several of D3’s geographic functions and GeoJSON data on Congressional districts, I was able to calculate each district’s (as well as each state’s) area and perimeter. That raw data is available here, as is the script that generated it (it’s in an .html file). Note that the area data and perimeter data are in different units. Ratios between the two can be compared, but you can’t convert one to the other. Data on who controls the redistricting process was provided by Justin Levitt.
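For intuition, the same two quantities can be sketched for a flat polygon with the shoelace formula. This planar Python version ignores the spherical corrections D3’s geo functions make and is purely illustrative:

```python
import math

def polygon_metrics(ring):
    """Planar area (via the shoelace formula) and perimeter of a
    polygon given as a list of (x, y) vertices. D3's geoArea and
    geoLength do the analogous computation on the sphere, which is
    why area and perimeter come back in different units there."""
    area = 0.0
    perimeter = 0.0
    n = len(ring)
    for i in range(n):
        x1, y1 = ring[i]
        x2, y2 = ring[(i + 1) % n]  # wrap around to close the ring
        area += x1 * y2 - x2 * y1
        perimeter += math.hypot(x2 - x1, y2 - y1)
    return abs(area) / 2.0, perimeter

# A unit square: area 1.0, perimeter 4.0
print(polygon_metrics([(0, 0), (1, 0), (1, 1), (0, 1)]))
```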

Benn Stancil is the chief analyst of Mode.

Plotting the Rest of the Baseball Season


We’re less than two weeks into the 2014 baseball season, and most people would say that it’s too early to make any forecasts about the rest of the year.

Still, as others have noted, though ten games represent only about 6% of an MLB season, surely these early games provide some indication of how a team will finish. Does Milwaukee’s 7-2 start mean that they may not be the sub-.500 team they were predicted to be? Are the 4-8 Diamondbacks likely to be even worse than expected?

The graphic below explores this question. It plots the full season for every team over the last 10 years, or 300 seasons in total. By filtering by record, you can see how teams with similar starts fared over the rest of the season, and how this compares to an average season. (Because the graphic is loading nearly 50,000 games, it takes a moment to first display.)

[Embedded graphic: Mode Analysis]

Note that for records with fewer than 5 teams, the graphic expands the win filter to include at least 5 teams. For example, because only one team started 6-0, the graphic shows teams that started both 6-0 and 5-1.

As the graphic shows, there’s generally a lot of noise early in the season. A small bit of history, however, is on the Brewers' side. Based on the 15 teams that also started 7-2, the Brewers have around an 80% chance to finish above .500. In Arizona, things don’t look too desperate yet: Though teams that start 4-8 average 6 fewer wins than the overall mean, the distribution is still quite wide and many teams finish above .500.

Of course, forecasting (in the loosest sense of the word) future records based on teams' current records is a very simple way to approach this problem. There are a number of other factors, such as runs scored, runs allowed, run differentials, results against other strong or weak teams, and results in home and away games, that could be indicative of future success.

As an example, run differentials could provide more information than win totals alone. The table below shows the relationship between a team’s run differential in its wins and its season win total. As it shows, teams that win by more runs in their first 30 games tend to have better seasons. While that’s unsurprising, it suggests that win totals alone are probably a crude metric, especially early in the season when the sample is so small.

For anyone interested in exploring this data further, Retrosheet data is available in Mode for every season since 1980. This data can be analyzed, visualized, and shared directly through Mode, and I can provide access to anyone who is interested. If you’d like to modify or double-check any of my analysis above, you can click through the embed links to access the work and data directly.

Benn Stancil is the chief analyst of Mode.

FiveThirtyEight vs. The Oddsmakers


You come at the king, you best not miss. - Omar Little

At the start of this year’s NCAA tournament, FiveThirtyEight, the new website of reigning forecast champion Nate Silver, predicted each team’s chances of making it to different rounds of the tournament. In an update yesterday, FiveThirtyEight looked into how their forecasts were doing. Having made my own predictive bracket based on Las Vegas odds, I figured I’d do the same—and see who comes out on top.

How Did FiveThirtyEight Do?

Rather than simply forecasting winners, FiveThirtyEight’s predictions—like mine—calculate each team’s probability of winning every game. To assess how well these forecasts performed, it’s not appropriate to see how many of their “favorites” won. Instead, it’s better to see if favorites win more or less often than expected. In other words, if FiveThirtyEight identified 100 games in which the favorite had a 60% chance of winning, the favorite should actually win 60 of them. If the results are substantially different from that, then it’s an indication that something’s wrong with the model.
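This kind of calibration check is straightforward to compute. Here’s a minimal Python sketch; the bucket width and rounding scheme are my choices, not FiveThirtyEight’s:

```python
from collections import defaultdict

def calibration(forecasts):
    """Bucket games by the favorite's predicted win probability and
    report the favorite's actual win rate in each bucket.
    `forecasts` is a list of (predicted_prob, favorite_won) pairs."""
    buckets = defaultdict(lambda: [0, 0])  # bucket -> [wins, games]
    for prob, won in forecasts:
        bucket = round(prob, 1)  # e.g. 0.57 and 0.62 both land in 0.6
        buckets[bucket][0] += int(won)
        buckets[bucket][1] += 1
    return {b: (wins / games, games)
            for b, (wins, games) in sorted(buckets.items())}

# 100 games with a 60% favorite, of which the favorite won 60:
games = [(0.6, True)] * 60 + [(0.6, False)] * 40
print(calibration(games))  # → {0.6: (0.6, 100)}
```

A well-calibrated model produces actual win rates close to each bucket’s predicted probability; large gaps signal that something is wrong.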

Over the last several years, Silver’s predictions have performed well. The chart below—reproduced using data provided by FiveThirtyEight—compares game results to FiveThirtyEight’s forecasts. As it shows, if you group games by the predicted odds of the favorite winning, the actual win rates in each group fall close to those predictions.

[Embedded chart: Mode Analysis]

As Silver noted in his post, though the results in each bucket don’t precisely match the forecast, they fall reasonably close and well within his confidence intervals. Silver’s model, it appears, works reasonably well.

Silver vs. Vegas

While FiveThirtyEight’s bracket is based on team rankings and a few other factors, I based my bracket solely on Las Vegas odds. Though the predictions are different, our brackets' favorites are all the same, except for the Championship game. FiveThirtyEight gave a slight edge to Louisville over Florida, while Vegas preferred Florida by a slim margin.

Unfortunately, though Silver’s predictions go back to 2011, I only made forecasts for the tournaments last year and this year. To make the comparison equal, I first trimmed Silver’s data to include only 2013 and 2014 results. The chart below shows the same calculations as above for FiveThirtyEight (the buckets were made larger to adjust for the smaller sample size).

[Embedded chart: Mode Analysis]

Unsurprisingly, with a smaller sample (especially one that includes the chaos of last year’s tournament), Silver’s model looks tarnished (sorry). Still, the trend is generally in the right direction.

Compared to my predictions using Vegas odds, however, Silver regains his luster. Vegas (or my method of interpreting Vegas) performs worse than FiveThirtyEight. As the chart below—which overlays my model’s results with Silver’s—shows, the model does a particularly poor job of identifying solid but not overwhelming favorites: favorites won only half of the games in which they were expected to win 70% to 80% of the time.

[Embedded chart: Mode Analysis]

Why The Difference?

Before conceding to Nate Silver’s sterling record and accepting that he is just better at this than I am, it’s worth looking into why our predictions came out so differently. Fortunately, there’s a fairly clear explanation. Silver recalculates his forecasts as the tournament progresses, updating the predictions after each game. This has two effects. First, his calculations respond to positive and negative signals from previous games. For instance, Virginia’s blowout win against Memphis could improve their odds against Michigan State, or Iowa State’s loss of Georges Niang to injury could lower their chances against Connecticut. My calculations were based on Vegas odds at the beginning of the tournament, and were not responsive to new results.

Second—and more importantly—for all the games beyond the first round (when matchups were unknown), I computed game odds by comparing each team’s odds of making the Final Four. This is an imperfect calculation, most notably because those odds are based on a team’s entire path to the Final Four. For teams that face very challenging first games, their odds of making the Final Four are quite low. However, my model doesn’t make any adjustments for teams that overcome this first game.
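Concretely, the derivation amounted to something like the following. This is a hedged reconstruction of the approach described above, not the exact code:

```python
def derived_game_odds(ff_prob_a, ff_prob_b):
    """Estimate P(team A beats team B) by comparing the two teams'
    pre-tournament Final Four probabilities head to head. Because
    each probability bakes in a team's entire path, this never
    credits a team for the hurdles it has already cleared."""
    return ff_prob_a / (ff_prob_a + ff_prob_b)

# A team given a 30% Final Four chance vs. one given 10%:
print(derived_game_odds(0.30, 0.10))  # ≈ 0.75
```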

Florida Gulf Coast’s Cinderella run last year is a perfect example of both of these cases. Not only did Florida Gulf Coast demonstrate that they were a better team than many thought by beating Georgetown and San Diego State by a total of 20 points, but they also cleared two tough hurdles between them and the Final Four. In part because of both of these factors, Silver’s model gave Florida Gulf Coast a 5.8% chance against Florida. My model—which was still based on Florida Gulf Coast’s original 1% chance of making the Final Four—only gave them a 0.2% chance against Florida.

This problem, however, can be partially corrected by looking only at first round games. Because these matchups are known in advance, forecasts are based on actual game lines rather than derived probabilities.

As suspected, the differences in projections are less apparent in the first round. Charting the relative predictions for each game shows that, in the first round, the differences between models are more or less random, and clustered around zero. In later rounds—when I’m deriving probabilities—Vegas odds almost universally overestimate the favorite’s chances. This makes sense, given the Florida Gulf Coast example above.

[Embedded chart: Mode Analysis]

As this suggests, looking at only first round games, the models fare similarly. The chart below shows the same buckets as before, but only includes first round games—and in this case, the model predictions are more closely aligned.

[Embedded chart: Mode Analysis]

Based on this, when picking your bracket next year, it doesn’t really matter if you go with Vegas or Nate Silver in the first round. For later round games, Nate Silver has left me hiding behind a car as he whistles The Farmer in the Dell (WARNING: that link is a Wire spoiler). But this does raise an interesting question: If FiveThirtyEight predicted every matchup at the start of the tournament, how would their results look? And how do FiveThirtyEight’s forecasts compare to Vegas lines at the start of each game? In other words, if we level the playing field, should I bet on Nate Silver or the true kings of sports forecasting—the oddsmakers?


Data was collected from FiveThirtyEight and calculated using Vegas odds. All analysis, data, and visualization code can be found in Mode. The graphs are backed by Variance, an excellent new visualization library.

Benn Stancil is the chief analyst of Mode.

The Odds of Your Bracket


This year, Warren Buffett promised a billion dollars to anyone who picks a perfect bracket. Unfortunately, the odds aren’t in your favor—the chance of picking a perfect bracket if you pick every game at random is one in 9 quintillion (or 9,000,000,000,000,000,000).
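That figure follows directly from the bracket’s structure: a 64-team, single-elimination bracket has 63 games, and a coin-flip picker has to get all 63 right:

```python
# 64 teams -> 63 games; each random pick is right with probability 1/2,
# so a perfect random bracket happens once in 2**63 tries.
games = 63
odds = 2 ** games
print(f"1 in {odds:,}")  # → 1 in 9,223,372,036,854,775,808
```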

But that’s just a hypothetical bracket. What are the odds of your bracket? The interactive below lets you figure that out. Using the betting lines for each game and each team’s chances of making the Final Four and winning the NCAA Championship, the graphic calculates the odds of every possible NCAA matchup—and every possible NCAA bracket. The graphic also shows how each team affects your bracket’s odds, and which picks lower your chances of winning a billion dollars the most.

Click on the bracket to see the interactive


The odds for first round games are calculated using the betting lines for those games. In matchups between two teams after the first round, each game’s odds are calculated using the relative odds that the two teams make the Final Four (in the case of regional games) or the relative odds that the two teams win the championship (in the case of Final Four games).

To figure out how much a team contributes to lowering your bracket’s odds, I compared the odds of the selected bracket to the odds of a bracket in which that team has a 100% chance of winning every game you picked them to win. The more this adjustment increased the odds of the bracket (that is, the more likely it made the bracket), the larger that team’s circle.
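That comparison can be sketched as follows. Treating the bracket’s odds as the product of independent per-game probabilities is my assumption about how the interactive works; the function names are mine:

```python
from functools import reduce

def bracket_odds(pick_probs):
    """Odds a bracket is perfect: the product of the probability of
    each individual pick being correct."""
    return reduce(lambda acc, p: acc * p, pick_probs, 1.0)

def team_contribution(pick_probs, team_game_indices):
    """Factor by which the bracket's odds improve if every pick
    involving the team (at `team_game_indices`) is treated as a
    100% lock. A bigger factor means the team hurts the bracket more."""
    locked = [1.0 if i in team_game_indices else p
              for i, p in enumerate(pick_probs)]
    return bracket_odds(locked) / bracket_odds(pick_probs)

# Locking in a risky 25% upset pick quadruples the bracket's odds:
print(team_contribution([0.9, 0.25, 0.8], {1}))  # → 4.0
```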

The data that powers the bracket, as well as the code for the visualization, are in this GitHub folder.

Benn Stancil is the chief analyst of Mode.

Engineering a Best Picture


When Netflix wanted to create a hit TV show, it turned to data. By analyzing its viewers’ habits, Netflix uncovered that its customers particularly liked Kevin Spacey, director David Fincher, and political thrillers. In part because of these interests, Netflix brought the three together to create House of Cards—and thus far, the results have been tremendous.

Having binged our way through Season 2 of House of Cards, we in the entertainment world now turn our attention to the Oscars, and particularly, the race for Best Picture. In doing so, perhaps we could take a page from Netflix’s book. Perhaps, using data about movies and the relationships between them, we can identify a perfect cocktail of movie attributes—PG-13-rated biopics about celebrities, or heart-wrenching World War II stories directed by Steven Spielberg, or anything related to Michael Bay—that strikes every Best Picture nerve. Perhaps, just like a hit TV series, a Best Picture can be engineered.

Using data collected from Rovi, I explored attributes that define Best Pictures. In addition to finding characteristics that frequently appear in nominees for Best Picture, I also looked for what made candidates for Best Picture stand out from other Oscar nominees. For example, it’s well known that most Best Picture nominees are dramas, but hundreds of dramas are released every year. Maybe there are rare, niche genres that only produce a few films a year—but always get noticed by the Academy.

Furthermore, concluding “we should make a drama” isn’t very instructive. Why not sketch out the entire movie, complete with a plot, themes and tones, and a cast and crew?

The following does exactly that. I first define the rough outline and plot of the movie, then cast it and pick a crew. I then build a model to blend together movie titles and synopses from Oscar-nominated films. The result is the ultimate Frankensteinian Oscar bait—and like Frankenstein’s monster, it could be a triumph or an abomination.

Sorting out the Basics

When engineering a Best Picture, a few elements are essential. First, popular opinion about dramas is correct—dramas do much better than other genres. Fifty percent of movies nominated for any Oscar are dramas, compared to 75 percent for Best Picture nominees. However, there’s a stronger bias for crime dramas and biographical films, suggesting that some specialization could be beneficial.

On the other end of the spectrum, comedy, science fiction, action, and horror movies are all under-represented in Best Pictures. Out of the roughly 550 movies of these types nominated for Oscars, less than 30 were nominated for Best Picture.

Second, the film shouldn’t be a sequel. Though sequels do well at the box office (for example, Transformers 2 and 3; The Matrix 2 and 3; Spider-Man 2 and 3; Batman 2 and 3; Pirates of the Caribbean 2, 3, and 4; Twilight 2, 3, 4, and 5; and Harry Potter 2, 3, 4, 5, 6, 7, and 8), they rarely impress Oscar voters. Only three of over 80 sequels nominated for Oscars were nominated for Best Picture.

While the Academy shuns sequels for Best Pictures, it actually favors adapted writing over original writing. Over 50 percent of the nominees for Best Adapted Screenplay (or a past variant of the award) were also nominated for Best Picture, but less than 40 percent of Best Original Screenplays garnered Best Picture nominations.

Constructing a Plot

This gives us the basic outline of a film—it should be a well-written, adapted biopic about crime. But other studies have already confirmed these findings. Surely we can be more specific. Fortunately, Rovi provides detailed classifications of films’ tones and moods, and identifies specific characteristics and keywords related to their plots. Using these attributes, I developed a more well-defined Best Picture.

The table shows the themes and plot elements most over-represented in Best Picture nominees (it excludes characteristics that only appear in very few films). Broadly, Best Picture nominees should take on sweeping themes and be bittersweet and compassionate. They should never be goofy, silly, or campy.


Though the tone of the film is best in a minor key, it should end triumphantly. Light, just-for-fun adrenaline rushes struggle (apologies to Michael Bay).

Regarding the specifics of the plot, movies about cross-cultural relations and forbidden loves are strong performers. The best cultural differences to explore are those that emerge from economic inequality—stories about class differences and servants and employees are among the Academy’s favorites. Social injustices, particularly those that address racism and mental illness, are well-received, though movies that touch on injustices done to Native Americans may be ignored. Other important characteristics to avoid are kidnapping, self-referential movies about filmmaking, pregnancy, and evil aliens (apologies to Michael Bay).

Though these plotlines are solid bets, they’re all also well-trodden—what if we want to try something a little edgier? Though they’re infrequently made (and excluded from the table above), films about wheelchairs and farm life have played well to Best Picture voters. A movie about the struggles of a wheelchair-ridden farmer, perhaps?

Finally, the movie should present these difficult and complex themes in depth and without censorship. Oscar-nominated films are an average of 114 minutes long, Best Picture nominees are an average of 130 minutes long, and Best Picture winners are 142 minutes long. Moreover, the film should be R-rated: 40 percent of Oscar-nominated films are R-rated, compared to 50 percent of Best Picture nominees.

Casting a Best Picture

Bad acting can ruin a film. Can good acting make a movie a Best Picture?

Though it doesn’t appear critical, casting good actors certainly helps: About 60 percent of films nominated for best actor or best supporting actor were nominated for Best Picture. Sadly, the gendered “actor” isn’t incidental—only around 45 percent of films recognized for outstanding performances by actresses are nominated for Best Picture.

When filling the roles, we may be inclined to turn to the greats like Meryl Streep, Tom Hanks, and Jack Nicholson. These three—along with Harrison Ford, Dustin Hoffman, Robert De Niro and Leonardo DiCaprio—have been in a number of films nominated for Best Picture, but they’ve also been in many great films that weren’t nominated for Best Picture. When casting the film—especially if it’s on a budget—we want actors and actresses that collect Best Picture nominations as efficiently as possible.

Tragically, the undisputed champion of the Best Picture nod—John Cazale—died 35 years ago. Remarkably, Cazale appeared in five films in his career, and all five were nominated for Best Picture. Outside of Cazale, Daniel Day-Lewis and Al Pacino are the best male leads. For female leads, Ellen Page and Jessica Chastain are excellent choices. Despite their commercial success, Sean Connery, Ewan McGregor, Eddie Murphy, Colin Farrell, and Susan Sarandon have all had little success with Best Picture nominations and are all stars to avoid.

To round out the supporting cast, Billy Boyd—a poor man’s John Cazale—is an obvious first choice. All four of the Oscar-nominated movies in which Boyd has had parts (all three Lord of the Rings films and Master and Commander) were nominated for Best Picture. After Boyd, Shane Rimmer and Peter Cellier are among the best men, while Miranda Otto and Talia Shire stand out among the women.

(As an aside, this film has run into a minor issue at this point. The Academy indirectly told us that we should focus on issues about race. However, the Oscars have historically favored white men. Maybe Oscar-nominee The Last Samurai found the solution to this dilemma—make the hero of the oppressed race…Tom Cruise.)

Finding a Worthy Crew

A quality crew is perhaps even more important than the cast. Seventy-five percent of films nominated for Best Director were also nominated for Best Picture, which is the strongest overlap Best Picture nominees have with any other Oscar category.

To lead the crew, Martin Scorsese is a clear choice for the director. He has more Best Picture nominations than any other director, and has collected them with amazing efficiency. Beyond Scorsese, Norman Jewison, Ang Lee, and James Brooks would all be strong choices. On the other side of the coin, Tim Burton and Michael Bay have been very successful at having their films nominated for Oscars, but not as Best Pictures (apologies to Michael Bay).

The “Moneyball” picks for the crew are editor Thelma Schoonmaker and cinematographer Robert Richardson. The two have been involved in a total of 27 Oscar-nominated films, 15 of which were nominated for Best Picture. Moreover, films nominated for best editing and best cinematography were also nominated for Best Picture at rates above 50 percent, suggesting these two may provide the most bang for the buck of any cast or crew member. Notably, Schoonmaker typically works with Scorsese, so she may be riding his coattails—or he may be riding hers.

When looking for other crew members, we should focus on sound over visual elements. Forty-two percent of films nominated for Best Score received Best Picture nominations. (Interestingly, the rate was only 14 percent for films nominated for Best Song.) By contrast, the overlaps between Best Picture nominees and Best Costumes, Best Makeup, and Best Visual Effects nominees are among the weakest of any Oscar category, excluding those dedicated to particular genres like documentaries or shorts.

Finally, the Weinstein brothers easily top the list as the best candidates to bankroll the movie. Like Scorsese, they’ve been remarkably successful in getting films they produce nominated for Best Picture, and they’ve done so with impressive efficiency.

Bullet to the Dark Side

Based on this outline, 12 Years a Slave appears to be the clearest Oscar-bait among this year’s Best Picture nominees. It’s a dramatic biopic; crime is central to the plot (though one of the chief crimes is kidnapping); it was nominated for Best Directing and Best Editing; and the arc of the plot—a hopeless social injustice, accented with cross-cultural relationships, breaks the audience’s spirit before ending in poignant salvation—fits the mold perfectly.

But we can do better. The cast could be improved. At “only” 134 minutes, it could be longer. And imagine if Scorsese had directed it.

What film would be better? Using a model to blend titles and plot synopses of all the Oscar nominees over the last five decades, I generated several potential Best Picture-worthy titles and plot summaries. Among the randomly generated plots and titles produced by the model, the two titles and six plots below best fit the themes and tones recommended by the analysis above (I paired the synopses with the titles they best matched). Some of the results could be clear winners; others, however, might need some of that Schoonmaker magic…

Suggested Title 1: Bullet to the Dark Side

  • Possible Plot 1: A portrait of a newlywed couple who are reunited in the Afghan mountains.
  • Possible Plot 2: A ‘50s housewife and a disgraced cop team up to exact revenge upon her one-time lover.
  • Possible Plot 3: A crooked cop tries to obtain the ultimate Dalmatian coat.

Suggested Title 2: Hurt Me the Hidden World

  • Possible Plot 1: A suicidal former Union soldier ends up joining a Sioux tribe. He then takes up arms to defend them when they become entangled with Russian mobsters in London.
  • Possible Plot 2: A farmer tries to woo a wealthy uncle, meets and falls for an agnostic Roman soldier during WWII.
  • Possible Plot 3: A rich playboy who escapes from prison to reunite their divorced dad poses as an eccentric teacher at an unconventional brothel.

So to all the aspiring writers and filmmakers in the world, you now know what to do. The path before you is clear. Six outstanding movies are practically written. The necessary themes and plot twists are known. All that’s left to do is assemble the right cast and crew, and collect the inevitable hardware.


I collected data via the Rovi Cloud Services API. While Rovi provides an impressive amount of data on each film, the dataset still has a few holes, most notably regarding the awards each film was nominated for (data on Best Picture nominees is complete). Additionally, Rovi provided no data on about 30 of the 2,900 films that were nominated for an Oscar over the last fifty years. The list of Oscar winners was collected from the Academy Awards Database.

To determine the top attributes in a Best Picture, I found which attributes were most over-represented in Best Pictures relative to all Oscar nominees. Unless otherwise noted, when finding top themes and plot elements, I only considered those attributes that appeared in at least 10 of the nearly 3,000 Oscar nominated movies.
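Over-representation here is just a lift ratio. A minimal sketch, using hypothetical counts rather than the real Rovi data:

```python
def lift(films_with_attribute, best_picture_noms, all_oscar_noms):
    """How over-represented an attribute is among Best Picture
    nominees: its frequency among Best Picture nominees divided by
    its frequency among all Oscar nominees. Values above 1.0 mean
    the attribute is over-represented."""
    bp_rate = len(films_with_attribute & best_picture_noms) / len(best_picture_noms)
    overall_rate = len(films_with_attribute) / len(all_oscar_noms)
    return bp_rate / overall_rate

# Hypothetical: an attribute in 10% of all nominees but 20% of
# Best Picture nominees is over-represented by a factor of 2.
all_noms = set(range(3000))
bp_noms = set(range(500))
attribute = set(range(100)) | set(range(500, 700))  # 300 films, 100 in BP
print(lift(attribute, bp_noms, all_noms))  # → 2.0
```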

While comparing Best Picture nominees to other Oscar nominees rather than all films introduces some bias (Oscar nominees aren’t a perfect sample of all movies), it has benefits as well. The dataset is restricted to movies that had some degree of critical or popular success, excludes made-for-TV movies, and largely focuses on American films (few foreign films are nominated for Best Picture).

The title and synopsis mash-ups were randomly generated using a Markov n-gram model trained on a dataset of all Oscar nominees. Because the set of Best Picture nominees is small, creating n-gram models using only Best Picture titles and synopses unfortunately isn’t possible.
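A word-level bigram chain (the simplest case of such an n-gram model) can be sketched as follows. The toy training titles are illustrative, not the real dataset, and this is a reconstruction of the general technique rather than the code I used:

```python
import random
from collections import defaultdict

def train_bigrams(titles):
    """Map each word to the list of words that follow it across the
    training titles, with ^ and $ as start and end markers."""
    model = defaultdict(list)
    for title in titles:
        words = ["^"] + title.split() + ["$"]
        for current, following in zip(words, words[1:]):
            model[current].append(following)
    return model

def generate(model, max_words=8):
    """Random-walk the chain from ^ until $ (or a length cap)."""
    word, output = "^", []
    while len(output) < max_words:
        word = random.choice(model[word])
        if word == "$":
            break
        output.append(word)
    return " ".join(output)

model = train_bigrams(["The Hurt Locker", "The Dark Knight",
                       "Hurt Me the Hidden World"])
print(generate(model))  # e.g. "The Hurt Me the Hidden World"
```

Because each step only conditions on the previous word, the chain splices titles together at shared words, which is exactly how mash-ups like the ones below emerge.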

To anyone who is interested in this work and would like to explore other methods for characterizing Best Picture nominees, I’m happy to share all my analysis. The conclusions presented here are a simple start to figuring out what makes a Best Picture; like so many other analyses, it could be greatly strengthened by others’ data and others’ ideas.

Benn Stancil is the chief analyst at Mode.