March Madness Portfolio Challenge: All Hail Our Champion!

With our inaugural March Madness Portfolio Challenge in the books, we’re going to cover three very important takeaways.

Takeaway #1: I mean, it wasn’t even close…

Yes, in this part we pay homage to our esteemed champion, who has earned the glory due unto him by leading – more or less – the entire way.  His name is Dan Adams, and he pseudonymously submitted one of his entries under the name of his better half, Laura B.


Rank  Entry                                              Score
   1  Laura B (aka Dan Adams)                            10.79
   2  Mike P                                              9.38
   3  Corey H                                             8.32
   4  Benchmark – Concentrated (Top 2 Seeds per Region)   7.72
   5  Ryan K                                              7.67
   6  David V                                             7.33
   7  Josh A                                              7.03
   8  Mack C                                              6.91
   9  Matt Z                                              6.74
  10  Gordon N                                            6.72
  11  Benchmark – Rank Seed Weight                        6.69
  12  Jagdeep M                                           6.67
  13  Mike S                                              6.08
  14  Michael L                                           6.03
  15  David G                                             5.91
  16  Benchmark – GMP                                     5.73
  17  Al G                                                5.65
  18  Jagdeep M                                           5.64
  19  Jug M                                               5.54
  20  James H                                             5.49
  21  Mike D                                              5.46
  22  Guy L                                               5.28
  23  Jaime P                                             5.06
  24  Dan A                                               5.04
  25  Anthony M                                           4.92
  26  Panos K                                             4.75
  27  Eli R                                               4.63
  28  Jeff S                                              4.60
  29  David B                                             4.53
  30  Sharon P                                            4.49
  31  Andrew B                                            4.44
  32  Benchmark – Rank Weight                             4.38
  33  Theo R                                              4.17
  34  Benchmark – Hot Hand (Power Conference Champs)      4.03
  35  Trevor S                                            3.93
  36  Scott P                                             3.92
  37  Carmen M                                            3.88
  38  Ben S                                               3.63
  39  Benchmark – Equal Weight                            3.52
  40  Benchmark – 12-Seeds                                2.93

Given the DOMINANT performance, I wanted to know who this guy was and what his keys to success were.  So, I asked!  Here’s a lightly edited Q&A with our new March Madness Overlord.

Dave: Let’s start off by getting to know you.  Can you tell us a bit about yourself?

Dan:  I am happy to share. I turn 30 in a little over a month, I am from Kingston, ON, and I work as a manager in clinical research. I completed my undergrad at McMaster. I heard about the portfolio challenge through my fiancée Laura. I have always been interested in the intersection of math and the real world, including sports. With that being said, I have just enrolled in the CSC as a potential first step toward migrating to a career in finance.

Dave: Are you a basketball fan?  If so, who do you root for?

Dan:  I am a sports fan in general and a big NBA fan, generally most interested in the best teams year to year, with no long term team I follow. I have watched quite a bit of Golden State this year as they are on a historic pace for wins and in my opinion have the most exciting player in the game in Steph Curry. Second to them I have also watched some of San Antonio as they are also having a historical season based on average margin of win. I hadn’t watched any college basketball prior to the tournament, but have watched some of the tournament now. The one game elimination format of the tournament is exciting so I usually watch for that reason and to root on any March Madness pools or brackets I have entered.

Dave: Tell us a bit about your history doing March Madness brackets?  Do you participate every year?  If so, what has your experience been with the standard rules?

Dan:  I have completed March Madness pools on and off for a number of years. These pools are almost always the same standard format, filling out a bracket and being awarded a set number of points per win. Most commonly 1 point for a 1st round win, 2 points for second round win and so on. I wouldn’t say I have put a lot of thought into these in the past, but the last few years I have entered in a pool with around 200 people in it with a prize pool so I took a little time to think about the best strategy for that particular pool based on number of entries and payout structure. It is still fun to enter the standard format and I usually have a few entries in that particular pool to minimize the variance any one game or team would have on one entry.

Dave: When you read about our gripes, and the resultant rules of our portfolio challenge, what was your first thought?

Dan:  I really enjoyed reading about your thoughts on the typical March Madness pool. I was definitely intrigued by the idea of your portfolio challenge and thought it was a fun opportunity to try and think of the best strategy for a new pool structure. I agree that [traditional brackets] aren’t a good way to identify the most skilled bracket picker. Most times the person who wins the pool just filled their bracket in last minute like everyone else, but they happened to hit the right combination. The winner isn’t synonymous with the best-picked bracket, as you mentioned.

Dave: How did you develop your strategy?

Dan:  I assume each person had a similar thought process about their selections: look at the point values assigned to each of the teams, try to predict how the tournament will unfold, and estimate how many games each team will win.  I thought a little more about the best way to pick the teams with the highest Expected Value (EV) based on their probability of winning X number of games.  While it was enticing to pick one team such as Texas A&M (4.01) and assign them 100% of the portfolio, I guessed that no one would have this risky of a portfolio, so it wasn’t necessary. I also guessed how many entries there were likely to be and figured it would be on the smaller side, with 20-30 entries. That turned out to be pretty accurate.  In a winner-take-all format, the amount of risk you should be willing to accept increases as the number of entries increases. For this format, that means deciding how much risk is appropriate when selecting teams with certain win probabilities, as well as what the appropriate number of teams is to fill out your portfolio.

It is hard to come up with the probabilities of any one team advancing numerous rounds. Luckily there are lots of resources out there where people have already done this work. I put together a spreadsheet based on probabilities from Nate Silver’s 538 and then used those probabilities, combined with the point values predetermined by the rules, to come up with the expected value (EV) calculation. I used the EV calculations to inform my entry. In simpler terms, my portfolio was full chalk, based on the EV calculations. It had the 5 highest-EV teams [in equal weight].
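For readers who want to see the mechanics, the EV logic Dan describes can be sketched in a few lines of Python. Everything below is illustrative – the team names, point values, and advancement probabilities are invented for demonstration, not the actual 2016 figures or Dan’s actual spreadsheet:

```python
# Sketch of the expected-value (EV) calculation described above. A team's
# EV = (points per win) x (expected number of wins), where expected wins
# is the sum over rounds of P(team wins at least that many games).
# All numbers here are invented for illustration, NOT actual 2016 figures.

def expected_value(points_per_win, advance_probs):
    """advance_probs[r] = probability the team wins at least r+1 games."""
    expected_wins = sum(advance_probs)
    return points_per_win * expected_wins

# Hypothetical teams: points per win, then P(win round 1) ... P(win title).
teams = {
    "Favorite A": (1.6, [0.99, 0.90, 0.75, 0.55, 0.35, 0.20]),
    "Mid-seed B": (3.0, [0.80, 0.45, 0.20, 0.08, 0.03, 0.01]),
    "Longshot C": (6.9, [0.40, 0.10, 0.02, 0.005, 0.001, 0.0002]),
}

# Rank teams by EV, highest first.
ranked = sorted(teams, key=lambda t: expected_value(*teams[t]), reverse=True)
for name in ranked:
    print(f"{name}: EV = {expected_value(*teams[name]):.2f}")
```

Even with these made-up numbers, the pattern Dan found emerges: the per-win point bonus for longshots rarely compensates for how few games they are expected to win, so a pure EV ranking comes out chalky.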

Ugh.  When the guy who wins your challenge describes his entry as “full chalk,” you know you screwed up.  Which leads to…

Takeaway #2: Though our portfolio challenge was a significant improvement upon standard brackets, we still didn’t do a great job addressing our major gripes.

Dan also had this to say (emphasis ours):

Any format isn’t going to solve that, though, given this very small sample size of games.

With the format of a one-game elimination tournament and needing to win 6 games in succession, the tournament itself is more about excitement than a perfect design to yield the top team. The top two teams in the country this year, Kansas and North Carolina, had around a 15-20% chance of winning the championship going into the tournament. So although they were the favourites, it was still unlikely they were going to win. Same goes for any single bracket someone fills out: the format is very high variance, as any one loss (e.g. Michigan State) can ruin someone’s entire bracket. Even for this format, I don’t think a more skillful person is going to come out ahead that much more frequently. It looks like most portfolios were more diversified, which I think is where Laura’s bracket had an advantage. Portfolios weren’t rewarded with enough points when a team made it far in the tournament, as they only held a smaller portion of the portfolio.

There are two points here, one with which we agree, and the other not so much.

First, with regards to the variance of outcomes, our solution this year was not to minimize it explicitly, but to shift its character from inherent flaw to intentional choice.  Back in 2014 we outlined an optimized bracket challenge that maximized the odds that the most-skilled picker actually won.  However, that method is difficult to administer and mildly dull.  In order to garner interest for this year’s challenge, concessions had to be made; our best idea was to let entrants select their own risk-reward tradeoff.  Some entries courted variance while others courted diversification.  Either way, it was a choice.

The second point, however, is well-taken.  Assuming a sufficiently large number of entries, it was reasonable to assume that a concentrated portfolio would win.

Throughout our tournament analyses, we used HHI – the Herfindahl-Hirschman Index – as our measure of portfolio concentration.  Simply stated, HHI is the sum of the squares of the allocations.  In our tournament – though this wasn’t always the case – entries tended to be relatively equal-weight once the number of teams to allocate to was chosen.  In the equal-weight case, the HHI equals the allocation per team: 1/n for a portfolio of n teams.

Simple math dictates that – even if all our entries were randomly generated – the greater the pool of entrants, the more likely the winner would be concentrated.  This year, we had 40 total entrants, including our pre-programmed benchmarks.  That’s not a lot, but it was sufficient for our top 2 entries to have HHIs of .2 (5 equally-weighted teams) and .1875 (6 unequally-weighted teams).
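For those who want to check the arithmetic, here is a minimal sketch of the HHI calculation in Python; the weights are just the equal-weight cases discussed above:

```python
# Herfindahl-Hirschman Index (HHI): the sum of squared portfolio weights.
def hhi(weights):
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * w for w in weights)

# Five equally-weighted teams: HHI = 5 * (0.20)^2 = 0.20, which equals the
# per-team allocation, as noted above.
print(round(hhi([0.20] * 5), 4))   # 0.2

# Doubling the number of equally-weighted teams halves the HHI.
print(round(hhi([0.10] * 10), 4))  # 0.1
```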

Takeaway #3: Diversification works consistently and well if you want to consistently do well.

At the conclusion of the 1st Round, reflecting upon the rules, we wrote the following:

“…one of our hypotheses entering the Portfolio Challenge was that concentrated portfolios would find their way to the top and the bottom of our rankings.  Lo and behold, after just the first round, we see this pattern (though weakly) starting to emerge.”

How did things ultimately work out?  Truth be told, the diversified portfolios did better on average than even we expected.  While it was unsurprising that a concentrated entry won overall, the following chart makes a pretty clear statement with regards to the value of diversification.

[Chart: 2016 March Madness Portfolio HHI]

The above scatter plots include our benchmark portfolios, but when you remove those, the HHI for the human entries ranked 3-18 averaged .089 – less than half that of the top 2 entries.  For further context, the following table splits entries at the concentration level – an HHI of .13 – where the average scores above and below were roughly equal.  Note that the high-concentration portfolios experienced 78% greater variability of outcome than the diversified ones.

HHI         Avg Score   StDev (Points)
Under .13   5.68        1.27
Over .13    5.57        2.26

We see this phenomenon in investing all the time.  Portfolios that target a certain level of return are often courting an unnecessary amount of risk to achieve it.  And for those approaching or in retirement, this mistake can have devastating effects.

Intelligent diversification isn’t the key to finishing first – in March Madness or in investing – but it is the only way to consistently perform well.

Next Year’s Changes

This year was a good first foray into an improved March Madness challenge, but it still left much to be desired.  For starters, the top brackets – both concentrated and diversified – tended to overweight higher seeds.  This suggests our scoring method was poorly calibrated: expected total points were not equal across teams.  Simply stated, we still awarded too many points for higher seeds and too few points for lower seeds.

As Dan wrote (again, emphasis mine):

While I like this format and think it was designed very well…People will have had the time to think about this format and refine their strategy after seeing the results. I think it will lead to closer looking portfolios and the same issue will arise as standard bracket pools, a small subset of teams or games deciding the winner. Coming up with a new format or an adjustment to this one will get people to come up with new original ideas to solve the next format. I think if this same format ran again I would expect to see most portfolios with 5 or less teams in them and maybe even some assigning a single team to 100% of their portfolio.

We agree, and in retrospect would go a step further: we ought to have seen this coming.  Honestly – and with mild embarrassment – our scoring this season was largely based on intuition.  Using the natural log of the inverse odds, the tournament favorites would be awarded roughly 1.6 points per win, while a 1/1000 longshot would be awarded roughly 6.9 points per win.  We expected this disparity to encourage something approaching an even distribution of allocations among teams.  But that obviously didn’t happen, largely because the more analytical amongst our participants noticed that the expected values per team were unevenly distributed in favor of the top teams.
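As a concrete sketch of that scoring rule, here it is in Python. Note that the implied probabilities below are round numbers chosen to reproduce the figures quoted above, not the actual 2016 odds:

```python
import math

# Points per win = the natural log of the inverse of a team's implied
# pre-tournament probability: ln(1/p).
def points_per_win(p):
    return math.log(1.0 / p)

# An implied probability of ~20% gives roughly 1.6 points per win...
print(round(points_per_win(0.20), 2))   # 1.61
# ...while a 1-in-1000 longshot earns roughly 6.9 points per win.
print(round(points_per_win(0.001), 2))  # 6.91
```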

It’s also worth noting that we were largely unable to mitigate the impact of luck on outcomes.  As a case in point, our very own Mike Philbrick revealed, after the tournament was done, that he had taken the “Fat Tony” approach to team weighting.  Essentially, he used basic intuition and historical knowledge to develop a chalky portfolio of highly-ranked teams.  In a well-designed challenge, the “Fat Tonys” of the world wouldn’t stand a chance; the entire goal of our redesigned challenge was to distinguish skill from luck.

So, that’s on us, and next year we’ll make a much better attempt to both equalize expected return per team and increase sample size.

There’s another interesting angle that went largely unexplored this year: return per unit of risk.  Oftentimes in investing, optimal strategies are not those with the highest return.  This is especially true for retirees who are drawing down their portfolios, because cash outflows coupled with high volatility can significantly reduce retirement income.  As an investment firm, we’d ideally like to run our portfolio challenge to optimize for that goal – risk-adjusted returns – rather than total points.  That way, the winner of the challenge isn’t just the person whose insanely-concentrated portfolio catches some lucky breaks; it’s the person who accrues points in the smoothest and most efficient manner.  As a hat tip to Corey Hoffstein, it’s worth noting that had we done something like that this year, his robust approach and excellent results would likely have had him crushing the field.
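To make that idea concrete, here is one way a risk-adjusted score might be computed: a Sharpe-style ratio of average points per round to the volatility of points per round. To be clear, this is our sketch of a possible rule, not a finalized scoring method, and the round-by-round point streams below are invented:

```python
import statistics

# One candidate risk-adjusted score: mean points per round divided by the
# standard deviation of points per round (a Sharpe-style ratio).
def risk_adjusted_score(round_points):
    mean = statistics.mean(round_points)
    vol = statistics.pstdev(round_points)
    return mean / vol if vol > 0 else float("inf")

# Two hypothetical entries that both score 6.0 total points over six rounds:
steady = [0.8, 1.1, 0.9, 1.2, 1.0, 1.0]   # diversified: smooth accrual
lumpy = [0.0, 0.0, 0.0, 0.0, 0.0, 6.0]    # concentrated: one lucky round

print(round(risk_adjusted_score(steady), 2))  # high ratio
print(round(risk_adjusted_score(lumpy), 2))   # low ratio
```

Under a rule like this, two entries with identical total points are ranked very differently depending on how smoothly those points arrived – which is exactly the property we would want the challenge to reward.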

But that’s something we’ll come back to in March 2017.

In the intervening 11 months…wow, this is way more difficult to type than I expected…