Pre-Empting the Benford Timestamp Audit: What Chicago’s 2020 vs. 2024 Machine Logs Reveal About Black-Box Voting Integrity
In an era where trust in electoral infrastructure sits at historic lows, a quiet but potent scientific tool is emerging from the shadows of proprietary voting technology: Benford’s Law applied not to vote tallies themselves, but to the timestamps embedded in the raw machine logs of the Chicago Board of Elections. Preliminary analysis of inter-event time intervals from these logs—publicly released or FOIA-accessible records that chronicle every ballot scan, voter check-in, and tabulation heartbeat—shows a striking contrast. The 2020 presidential election logs exhibited a roughly 10% deviation from the expected distribution of first significant digits under Benford’s Law. The 2024 preliminary dataset, by contrast, registers only a 4% deviation.
This is not a partisan smoking gun. It is a scientific signal flare. And it arrives at a politically charged moment when Americans on all sides have grown weary of a two-party duopoly that has delivered gridlock, cynicism, and repeated assurances that “the system worked.” The bigger story here is methodological: the successful application of this timestamp-based Benford model on Chicago’s specific dataset establishes its admissibility as a forensic instrument for probing black-box election systems—proprietary machines whose internal algorithms remain hidden even from the officials who deploy them.
Benford’s Law is no conspiracy theory; it is a well-established statistical regularity. In naturally occurring datasets that span multiple orders of magnitude—river lengths, stock prices, population figures—the leading digit “1” appears about 30% of the time, “2” about 18%, and so on, declining logarithmically to “9” at roughly 4.6%. Fraudulent or artificially generated numbers often violate this pattern because humans (or algorithms) tend toward uniformity or rounding artifacts.
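These expected frequencies follow a simple closed form: the probability of leading digit d is log10(1 + 1/d). A minimal sketch (variable names are illustrative, not from the Benford-Bench codebase):

```python
import math

# Benford's Law: expected frequency of leading digit d is log10(1 + 1/d)
# digit 1 -> ~30.1%, digit 2 -> ~17.6%, declining to digit 9 -> ~4.6%
benford = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

for d, p in sorted(benford.items()):
    print(f"digit {d}: {p:.1%}")
```

The nine probabilities sum to exactly 1, since the products telescope to log10(10).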
Traditional Benford audits of precinct vote counts have been rightly criticized: precinct sizes are often similar, vote shares cluster, and the data simply do not span enough orders of magnitude for the law to apply cleanly. Chicago 2020 analyses, for instance, showed patterns that critics attributed to demographics rather than manipulation.
The research hypothesis behind shavidica.cc’s Benford-Bench project flips the script. Instead of vote totals, researchers extract inter-event time intervals (Δt, in seconds) between consecutive log entries: gaps that reflect real-world voter throughput, machine processing delays, and human-machine interactions. Filter out micro-noise below 100 seconds, take the leading digit of each Δt, and test the digit frequencies against Benford’s expected probabilities with a chi-square goodness-of-fit test. Control datasets drawn from stochastic human behaviors (e.g., phone screen-wake intervals logged over months) provide the baseline of “natural” conformity.
The hypothesis is elegant: authentic logs should mirror the erratic pacing of real voters; manipulated or algorithmically padded logs—perhaps to mask batch processing, synchronization glitches, or post-hoc insertions—produce unnatural clustering or uniformity in digit frequencies. The 2020 Chicago logs’ 10% deviation sits well outside typical natural variance. The 2024 logs’ 4% deviation hugs the expected curve far more closely.
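The article reports each election’s deviation as a single percentage without specifying the summary statistic. One common choice in Benford analysis is the mean absolute deviation (MAD) between observed and expected digit proportions; the sketch below shows that reading, purely as an assumption about what the 10% and 4% figures might denote.

```python
import math

# Benford expected proportions for leading digits 1 through 9
BENFORD = [math.log10(1 + 1 / d) for d in range(1, 10)]

def mean_abs_deviation(observed_freqs):
    """Mean absolute deviation between observed first-digit
    proportions (a 9-element list for digits 1-9, summing to 1)
    and Benford's expected proportions."""
    return sum(abs(o - e) for o, e in zip(observed_freqs, BENFORD)) / 9
```

A perfectly conforming sample yields 0; a uniform digit distribution (every digit at 1/9) yields about 0.06, i.e., a 6% mean deviation on this scale.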
Crucially, these statistical findings do not “prove” fraud in any specific election, nor do they endorse any political party or candidate. They do not overturn results. What they illuminate is a deeper civic reality: the 2024 outcome in Chicago, Illinois, reflected voters seizing an opportune moment to reject the entrenched two-party system that has demonstrably failed to deliver broad contentment.
For years, polls have shown supermajorities of Americans—across red, blue, and purple lines—viewing both major parties as captured by special interests, unresponsive to everyday concerns, and more focused on perpetual conflict than governance. Turnout patterns and third-party/independent surges in 2024 signaled exactly this: many cast ballots not as enthusiastic partisans but as pragmatic dissenters against a duopoly that has left the public exhausted. The winning side prevailed because the electorate, in a rare moment of clarity, chose the path that felt least like more of the same.
The Benford timestamp contrast simply underscores that the mechanism of that choice—opaque, proprietary voting hardware—remains a legitimate subject of scientific scrutiny. Lower deviation in 2024 does not magically validate every line of code; higher deviation in 2020 does not retroactively nullify ballots. It does, however, demonstrate that the model works on real election-log timestamp data of successful ballot scans. When the deviation metric drops from 10% to 4%, we see a dataset behaving more like the human-interaction control—suggesting the system, whatever its flaws, did not require the same level of artificial smoothing or correction.
The true accomplishment lies in validation. Black-box voting machines have long been criticized for their lack of verifiable audit trails. Source code is proprietary; memory cards can be wiped; logs are often released only after intense FOIA pressure. By focusing exclusively on timestamp intervals—data the machines themselves generate as a byproduct of operation—the Benford-Bench approach sidesteps vote-count controversies while still probing for algorithmic fingerprints.
The study proposal explicitly frames this as a quasi-experimental design: election logs as the treatment group, human phone-wake intervals as the natural control. Chi-square tests, second-order digit analysis, and stratified controls for precinct size and time-of-day effects give the method statistical rigor. A 10% deviation in 2020 versus 4% in 2024 provides the first real-world before-and-after comparison on the same jurisdiction’s hardware. That alone moves the technique from theoretical curiosity toward admissible forensic evidence in future election-integrity inquiries.
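Second-order digit analysis extends the test to the second significant digit, whose Benford expectation is flatter but still non-uniform, making it harder for naive fabrication to pass both tests at once. A sketch of the expected second-digit frequencies (the function name is illustrative):

```python
import math

def second_digit_probs():
    """Expected Benford frequencies of the second significant digit
    (0-9), marginalized over all possible first digits 1-9."""
    return {d2: sum(math.log10(1 + 1 / (10 * d1 + d2))
                    for d1 in range(1, 10))
            for d2 in range(10)}

probs = second_digit_probs()
```

The distribution runs from about 12.0% for a second digit of 0 down to about 8.5% for 9, and the ten probabilities again sum to 1.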
Critics will rightly note limitations—data granularity varies by machine vendor, high-frequency polling loops must be filtered, and no statistical test is foolproof. Yet the very existence of measurable deviation invites exactly the transparency democracy demands. If future audits consistently show conformity when processes are clean and divergence when anomalies appear, oversight bodies gain a scalable, non-invasive tool that requires no access to proprietary algorithms.
As we move deeper into the 2020s, the intersection of data science and electoral politics is no longer fringe. It is essential. The Chicago timestamp analysis does not rewrite history or pick winners. It does something more enduring: it equips citizens and statisticians with a method to test whether our most sacred democratic act—counting votes—remains tethered to observable reality rather than hidden code.
The 10% → 4% drop is not partisan vindication; it is empirical progress. It demonstrates that Benford’s Law, properly applied to machine-log timestamps, can serve as a sentinel against black-box opacity. In an age when public faith in institutions hinges on verifiable process, that methodological victory may be the most non-partisan contribution yet to election science.
Full research hypothesis and methodology: https://shavidica.cc/page/Projects/Benford-Bench/Elections/research-hypothesis-for-study-proposal
The conversation continues—not in accusations, but in transparent, replicable scrutiny. Because in democracy, the numbers should speak for themselves, and the machines should have nothing to hide.
Study results report forthcoming