Last week’s Donald Trump victory was historic for many reasons. For investors, hopefully it will also be remembered as the day they finally gave up on listening to — and, even worse, responding to — those who claim to know what the future holds.

After all, the vast majority of election forecasters got this year’s presidential race wrong.

Past performance is not a guarantee of future results

After statistician Nate Silver correctly called all 50 states during President Obama’s successful run for a second term in 2012, perhaps too many people came to believe too deeply that election outcomes can be accurately predicted.

Right after that election, an article noted, “What does this victory mean? That mathematical models can no longer be derided by 'gut-feeling' pundits. That Silver's contention — TV pundits are generally no more accurate than a coin toss — must now be given wider credence.”

Perhaps what it really meant was that Silver would need to notch a few more victories before his mathematical models could be trusted.

This year, in the late morning of Election Day, Silver posted his closing prediction, giving Clinton a 72% chance of victory.

Most forecasters clung to their claims of a Clinton victory well into the night. At 10:20 p.m., The Upshot, the New York Times political blog, gave Clinton an 85% chance of victory. In the same post, The Upshot listed predictions from the Princeton Election Consortium, PredictWise, and six other forecasting organizations, all using different methodologies. Each one saw Clinton winning. Some gave her a more than 99% chance of victory.

It took only a matter of minutes for the tide to turn. Around 10:30 p.m. Eastern Time, Trump was “called” the winner in a succession of swing states, including Ohio, Florida, and North Carolina. Suddenly, The Upshot’s online “Chances of Winning Presidency” meter flipped from Clinton to Trump.

In Silver’s last post of election night, at 2:52 a.m. Wednesday, he called the outcome “the most shocking political development of my lifetime.”

A few weeks prior to election day, Charles Schwab Chief Investment Strategist Liz Ann Sonders pondered how the market might react to the election: “The most unsettling [result],” she said, “would likely be a ‘Brexit’-type situation, where polls suggest a clear Clinton win, but we wake up and find Trump has won.”

Of course, that’s exactly what happened. As Trump’s chances of victory moved through election night from unlikely to possible to inevitable, at first it appeared Sonders was right: more than unsettled, Dow futures fell hard, losing nearly 800 points. However, stocks recovered almost all of that decline by the time the market opened Wednesday morning, and the Dow finished the day up nearly 1.5%.

How could so many be so wrong?

What happened? Why did so many forecasters fail?

Those trying to answer such questions have pointed to “nonresponse bias” (those who didn’t respond to surveys were most likely to vote for Trump), “shy-Trumpers” (the polls included too many people who intended to vote for Trump, but weren’t willing to say so), and “inaccurate likely-voter definitions” (pollsters did not correctly identify who would vote).

Already, forecasters are busy fine-tuning their models.

But there’s a more important factor that The Atlantic pointed to in a November 9 article when it asked, “Did we all believe Clinton would win because of bad data, or did we ignore bad data because we believed Clinton would win?”

A day later, Will Rahn, political correspondent for CBS News Digital, gave a remarkably candid answer on behalf of himself and many of his colleagues. Using the Twitter hashtag popular with Clinton supporters, he acknowledged, “…with a few exceptions, we were all #WithHer…”

Rahn called out “modern journalism’s great moral and intellectual failing: its unbearable smugness. Had Hillary Clinton won, there’d be a winking ‘we did it’ feeling in the press, a sense that we were brave and called Trump a liar and saved the republic.”     

Even the New York Times appeared to acknowledge bias in its reporting. On November 13, the paper’s publisher and executive editor responded to the election’s outcome with a note to readers in which they pledged “to rededicate ourselves to the fundamental mission of Times journalism. That is to report America and the world honestly, without fear or favor, striving always to understand and reflect all political perspectives and life experiences in the stories that we bring to you.”

Bias can’t help but sway how news organizations conduct, interpret, and report on voter research. Unfortunately, flawed forecasting and biased reporting persuade far too many people to make ill-advised financial decisions. Consider the conventional wisdom before the election that a Trump win would be bad for stocks and good for gold. Actual result: Trump win, stocks up, gold down.

For investors, here’s one of the most important lessons to learn from this year’s presidential election: Once and for all, declare your investment portfolio to be a forecast-free zone.  

As John Kenneth Galbraith once said, "We have two kinds of forecasters: Those who don't know and those who don't know they don't know." 

What investment lessons did you learn from this year’s election?