Show HN: LLMs consume 5.4x less mobile energy than ad-supported web search

The standard AI energy debate compares server-side LLM inference to a server-side Google query. I think this misses most of what actually happens on a mobile device during a real search session.

I built a parametric model of the full end-to-end mobile search session: 4G/5G radio energy, SoC rendering cost for a 2.5 MB page, programmatic advertising RTB auctions running in the background, and network transmission costs for both sides. I then compared it to an equivalent LLM session.

Main finding across 10,000 Monte Carlo draws: on mobile, a standard LLM session uses on average 5.4x less energy than a classic ad-supported web search session. Programmatic advertising alone accounts for up to 41% of device battery drain per session.

Caveats I tried to be explicit about:

- The advantage disappears on fixed Wi-Fi/fiber
- It reverses for reasoning models
- This is a parametric model, not an empirical device measurement. Greenspector has offered to run on-device measurements for v2
- The Jevons paradox applies
- SSRN working paper, not peer-reviewed

Methodology and the Monte Carlo distributions are fully documented in the paper. Happy to defend the assumptions.

DOI: 10.2139/ssrn.6287918
https://ift.tt/Xv1oxkS

April 25, 2026 at 01:44AM
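The Monte Carlo comparison described above can be sketched roughly as follows. All distribution shapes and parameters here are illustrative placeholders, not the paper's fitted values; the point is only the structure: per-component draws for the web session (radio, rendering, RTB auctions, network) versus a leaner LLM session, ratioed over many draws.

```python
import random
import statistics

random.seed(0)
N = 10_000  # number of Monte Carlo draws, matching the paper's run size

def web_search_session_j():
    # Illustrative per-component energy draws in joules per session.
    # These lognormal parameters are made up for the sketch.
    radio   = random.lognormvariate(1.5, 0.4)  # 4G/5G radio, ~2.5 MB page transfer
    render  = random.lognormvariate(1.0, 0.3)  # SoC rendering cost of the page
    ads_rtb = random.lognormvariate(1.2, 0.5)  # programmatic RTB auctions in background
    network = random.lognormvariate(0.5, 0.3)  # transmission costs, both sides
    return radio + render + ads_rtb + network

def llm_session_j():
    inference = random.lognormvariate(0.8, 0.4)  # server-side inference share
    radio     = random.lognormvariate(0.2, 0.3)  # small text payload over the radio
    return inference + radio

ratios = [web_search_session_j() / llm_session_j() for _ in range(N)]
print(f"mean energy ratio (web search / LLM): {statistics.mean(ratios):.1f}x")
```

With the paper's actual distributions in place of these placeholders, the same loop yields the headline ratio; the per-component split also makes it easy to isolate the advertising share of the total.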