Published: May 2026
For the better part of a century, public opinion polling has been the primary tool for understanding what people think. From Gallup's pioneering election forecasts in the 1930s to the modern era of rolling trackers and probability models, polls have shaped how we understand elections, policy preferences, consumer sentiment, and social attitudes.
But polling is under pressure, and for good reason. The industry is grappling with challenges that are structural, not merely methodological. And a new generation of tools, including Dissmarket, offers a different way to measure collective intelligence.
This is not a polemic against polling. Polls are valuable tools that have served democracy and commerce well. But they have structural limitations that are worth understanding — especially when better alternatives are now available.
The most pressing challenge facing the polling industry is the collapse of response rates. In the 1990s, telephone surveys routinely achieved response rates of 30-40%. By 2020, the Pew Research Center reported that its typical survey response rate had fallen to approximately 6%. Some industry estimates put current response rates even lower.
Low response rates do not automatically produce bad results — if the people who do respond are representative of the broader population. But nonresponse bias is notoriously difficult to correct for, and there is strong evidence that the people who agree to participate in polls differ systematically from those who do not.
Related to nonresponse bias is the problem of social desirability bias — the tendency for respondents to give answers they believe are socially acceptable rather than answers that reflect their true views. This effect was widely discussed in the aftermath of the 2016 US presidential election, when polls underestimated support for Donald Trump, and again in the 2024 cycle when similar dynamics were observed.
This "shy voter" problem is particularly acute for contentious political and social questions, precisely the topics where accurate measurement matters most.
A poll captures a moment in time. It tells you what a sample of people said on a particular day in response to particular questions. It does not update. It does not respond to new information. By the time a poll is published, its data is already days old.
In a world where news cycles move in hours and major events can shift public sentiment overnight, a static snapshot is a profoundly limited tool. The 2024 election demonstrated this vividly: polls released on Monday were effectively irrelevant by Wednesday when new developments reshaped the race.
Despite decades of methodological research, polling remains something of a black art. Different pollsters use different sampling methodologies, different weighting schemes, different likely voter screens, and different question wording — and these choices can produce materially different results.
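To see how much weighting choices alone can matter, consider a toy sketch of post-stratification weighting. All numbers, group names, and population shares below are invented for illustration; the point is only that two defensible assumptions about the population can shift the topline.

```python
# Hypothetical raw sample: (group, supports_candidate) pairs.
# Group "a" is overrepresented relative to the population.
sample = [("a", 1)] * 300 + [("a", 0)] * 300 + [("b", 1)] * 100 + [("b", 0)] * 300

def weighted_support(sample, population_shares):
    """Post-stratification: reweight each group to its assumed population share."""
    counts, support = {}, {}
    for group, vote in sample:
        counts[group] = counts.get(group, 0) + 1
        support[group] = support.get(group, 0) + vote
    # Weighted topline = sum over groups of (assumed share * in-group support rate).
    return sum(share * support[g] / counts[g]
               for g, share in population_shares.items())

raw = sum(v for _, v in sample) / len(sample)          # unweighted: 0.40
est1 = weighted_support(sample, {"a": 0.5, "b": 0.5})  # 0.375
est2 = weighted_support(sample, {"a": 0.6, "b": 0.4})  # 0.40
```

Both population assumptions are plausible, yet they produce toplines 2.5 points apart from identical raw responses. Real pollsters weight on many dimensions at once, which multiplies the room for divergence.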
The consumer of polling data — whether a journalist, a policy maker, or a citizen — rarely has the information needed to evaluate these methodological choices. They see a number and a margin of error, but the margin of error typically reflects only sampling error, not the much larger uncertainties introduced by weighting, nonresponse, and model specification.
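The reported margin of error is usually just the textbook sampling-error formula for a proportion. A minimal sketch (function name and figures are illustrative, not any pollster's actual method):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Sampling-only margin of error for a proportion at ~95% confidence."""
    return z * math.sqrt(p * (1 - p) / n)

# A poll of 1,000 respondents reporting 50% support:
moe = margin_of_error(0.50, 1000)
print(f"+/- {moe:.1%}")  # roughly +/- 3.1%
```

Note what the formula contains: only p and n. Nothing in it accounts for who refused to answer, how the sample was weighted, or how "likely voters" were screened, which is why the true uncertainty is typically larger than the published figure.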
High-quality polling is expensive. A national telephone survey in the United States can cost $50,000 to $200,000 or more. This cost structure means that polling is controlled by a small number of well-funded institutions — media organisations, political campaigns, and research centres. The public is a passive consumer of the results, not an active participant in the process.