(Tracking the US Election memes – Trump is the big blob way ahead on the right)
On Saturday November 5th we decided to “go public” with our prediction that Donald Trump was in pole position to win the US Presidential Election. We were in a small minority: most polls and researchers were calling it for Clinton, with odds ranging from the high 70s to near 100%, but we trusted what we were seeing. The system had worked for Brexit. So, suitably caveated, we posted it up. Carpe diem, and all that.
However, it seemed like the minute we posted it, all the good news for Trump started going bad, which from our self-interested point of view was also bad news for us. First the FBI investigation into Clinton's emails was closed, and we were told her support was rising. Then news came through that she was storming ahead in early voting in a number of key states. If she won we would have egg on our faces; if we retracted the prediction, we would too. We could do nothing except cross our fingers and hope we were still right.
(We make no comment about anyone’s views about the political outcome, this is all about how the technology worked)
Waking up in time for the 6 am BBC morning news in the UK (5 hours ahead of the US east coast), we heard that Trump had all but won, and by a wider margin than our system had suggested was possible. By 8 o'clock that morning most pundits had called it: Trump was the next President-elect.
Our system had got it right – it had worked.
So why had our system worked when nearly all the other polls and pundits had called it wrong? Now that we have had a day or so to look at the outcome, we think there are four main reasons.
Firstly, internet vs human polling. Our system looks at verbatim social media data, from Twitter. We had come to the conclusion while monitoring previous UK general elections that people were more willing to share their true thoughts on social media than with pollsters, especially if their views were “non-PC” (in this case, pro-Trump). After the election we read that the LA Times poll, which had consistently been more pro-Trump (and had been roundly criticised by nearly every pundit), was an internet poll that did not use people to ask the questions; its authors believed (and were proved right) that respondents had been more honest as a result. (More recently, an article on TechCrunch showed other social media companies were seeing much the same as us, though few put it out there 😉.) In effect, by monitoring social media we were getting the same sort of uncensored opinions, and in that uncensored world Trump was doing a lot better than the standard polls were predicting. We also knew from UK elections about the “shy Tory” effect, where people say one thing in public, typically to look good (virtue signalling, as it is called), and do another at the ballot box. (To misquote Phil Ochs: liberals are 10% left of centre in public, 10% right of centre at the ballot box.)
Secondly, the way our system works helped quite a bit. It was initially designed to satisfy a BBC requirement to “find the Zeitgeist” across its media output, as well as compare it to others’ output. To solve this we used a fairly obscure technique we had become interested in called memetic analysis, which clusters memes into groups of fellow travellers (called “memeplexes” in the lingo) rather than looking at things one by one, as Boolean analysis forces you to do. We started collecting data the day after Trump became the Republican party's candidate, and the system has since crunched a relevant sample of about 170m tweets, tracking 4.5m unique memes. What it showed was that, from the get-go, Trump had dominated the memespace (as he had in the primaries too). In meme theory as originally proposed by Richard Dawkins (who coined the term meme, a sort of mental gene), memes colonise your mindspace; in effect, the Trump memeplex was hogging the electorate's mindspace, starving out competing memes. Clinton was nowhere near. To be sure, not all Trump memes were positive, but in essence Trump was using a “Wildean strategy” (the only thing worse than being talked about is not being talked about).
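To give a feel for the idea (this is a minimal toy sketch of co-occurrence clustering, not our actual system; the function names, the substring-based meme extractor, and the Jaccard threshold are illustrative choices only): memes that keep appearing in the same tweets can be merged into the same memeplex, and a candidate's share of the memespace is then the reach of their memeplex.

```python
from collections import defaultdict
from itertools import combinations

def extract_memes(tweet, vocabulary):
    # Hypothetical extractor: a "meme" here is just a tracked phrase or hashtag.
    return {m for m in vocabulary if m in tweet.lower()}

def memeplexes(tweets, vocabulary, min_jaccard=0.3):
    """Group memes that travel together into memeplexes.

    Memes whose sets of tweets overlap above a Jaccard-similarity
    threshold are merged into one cluster (simple union-find).
    """
    occurs = defaultdict(set)            # meme -> indices of tweets containing it
    for i, tweet in enumerate(tweets):
        for m in extract_memes(tweet, vocabulary):
            occurs[m].add(i)

    parent = {m: m for m in occurs}      # union-find forest
    def find(m):
        while parent[m] != m:
            parent[m] = parent[parent[m]]  # path halving
            m = parent[m]
        return m

    for a, b in combinations(occurs, 2):
        inter = len(occurs[a] & occurs[b])
        union = len(occurs[a] | occurs[b])
        if union and inter / union >= min_jaccard:
            parent[find(a)] = find(b)    # memes co-occur often: same memeplex

    clusters = defaultdict(set)
    for m in occurs:
        clusters[find(m)].add(m)
    return list(clusters.values())

# Tiny made-up sample: two memeplexes emerge, one per campaign.
tweets = [
    "Build the wall! #MAGA",
    "#MAGA rally tonight, build the wall",
    "I'm with her #ImWithHer",
    "Stronger together #ImWithHer",
]
vocab = ["#maga", "build the wall", "#imwithher", "stronger together"]
plexes = memeplexes(tweets, vocab)
```

On this sample the four tracked memes collapse into two memeplexes, one per campaign; at scale, the relative size and reach of such clusters is what a memespace comparison measures.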
You can see how this works in the YouTube video of the system tracking Trump, above.
Thirdly, we knew from Brexit that “non-voters” were highly motivated to come out and vote if someone could credibly promise an “out” from the current political system. Trump seemed to be doing that successfully (one can argue about the morality of his tactics, but the effectiveness of the strategy had been proven for Brexit), and we thought it was happening again.
Lastly, confirmation bias. We knew that after Brexit and the Republican primaries, US pollsters had looked at why they had got it wrong. They had realised that their unwillingness to countenance a Leave/Trump victory had made them look at the data from the point of view of what they wanted to see, not what they saw. Given that the US media and pollsters seemed to be even more pro-Clinton than their UK equivalents were pro-Remain, we suspected there was a Clinton bias in the polling, and a pool of uncounted Trump supporters.
(Update: there is an odd thing happening with the pollsters. Reading many of the post-election analyses, they are nearly all saying the result was within the c. 3% margin of error of a close-run race. What we don't understand, though, is why the most pessimistic polls were giving Clinton a 70% chance of winning, and most were at 80% plus, rather than something far nearer 50%.)
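As a back-of-the-envelope check on how a lead and a margin of error turn into a win probability, here is a toy normal-error model (our own illustration, not any pollster's actual method; it assumes the stated margin of error is a 95% interval, about 1.96 standard deviations, on the polling lead):

```python
from math import erf, sqrt

def win_probability(lead_pct, margin_of_error_pct):
    """Naive win probability from a polling lead.

    Assumes the error on the lead is normally distributed, with the
    stated margin of error taken as a 95% interval (~1.96 sigma).
    Toy model for illustration only.
    """
    sigma = margin_of_error_pct / 1.96
    z = lead_pct / sigma
    return 0.5 * (1 + erf(z / sqrt(2)))   # standard normal CDF at z

p = win_probability(1.0, 3.0)  # a 1-point lead inside a 3-point margin of error
# p is roughly 0.74
```

Under this toy model, even a lead well inside the margin of error maps to odds around 70%, which is in the range the polls were quoting; the question is whether such a model is the right way to turn a statistical dead heat into headline probabilities.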
That is why, with Trump just ahead, we thought he would probably win.
For what it's worth, we had thought that if we were wrong it would be because the vote split would go Clinton's way in marginal districts, thanks to the reputed strength of the Democrat “ground game”, beating the factors above. But as with Brexit, the Trump voters proved more motivated, and got out and voted.
(We did note, in our weaselly caveats on the 5th, that Clinton could win, but we thought Trump would then take the popular vote. Ironically, it turned out the other way. It was very close; it seems that whichever candidate got the prize, the other would get the popularity.)