In mid-September 2025, Google disabled the `&num=100` parameter, which allowed users and tools to load 100 search results on a single page. Industry discussion began around 10–11 September, when rank tracking tools started showing irregularities and Google Search Console graphs shifted. On 18 September, Google confirmed that the parameter had never been officially supported, signalling that the change is permanent rather than a bug.
As a result, retrieving search positions 1–100 now requires 10 separate paginated requests instead of one bulk request. This change adds technical complexity and costs for any organisation that tracks deep search engine results page (SERP) positions.
This update follows Google’s 2024 decision to roll back continuous scroll on both desktop and mobile and return to traditional pagination. That return to numbered pages for users now has a counterpart in stricter paginated access for automated systems.
How The Parameter Removal Works
The `num` parameter is now ignored. Google serves about 10 organic results per page. To reconstruct the Top 100 for a query, a crawler must iterate `start=0, 10, 20, …, 90`, issue 10 requests, and parse 10 separate documents. This raises bandwidth, proxy, and CAPTCHA overheads and heightens the chance of anti-bot triggers. Vendors that once pulled full pages at scale must now absorb higher costs or reduce depth.
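As a rough illustration of the new request arithmetic, the sketch below simply builds the ten paginated requests needed to cover positions 1–100. It is a minimal example rather than a production scraper: parsing, user agent handling, and the anti-bot measures mentioned above are out of scope, and fetching Google results this way is subject to the policy constraints covered later in this piece.

```python
# Minimal sketch of the 10-request pagination now needed to cover positions 1-100.
# Only builds the URLs; fetching, parsing, and anti-bot handling are omitted.
from urllib.parse import urlencode

BASE = "https://www.google.com/search"

def paginated_urls(query: str, depth: int = 100, per_page: int = 10) -> list[str]:
    """Build one URL per results page: start=0, 10, 20, ..., 90 for depth=100."""
    urls = []
    for start in range(0, depth, per_page):
        params = {"q": query, "start": start}  # &num=100 is no longer honoured
        urls.append(f"{BASE}?{urlencode(params)}")
    return urls

if __name__ == "__main__":
    for url in paginated_urls("rank tracking"):
        print(url)  # each URL must now be fetched and parsed separately
```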
Why Google Likely Did This
Google’s public statement is limited to confirming that the results-per-page parameter is not supported. Several rationales are consistent with its recent decisions. First, cutting bulk access reduces machine-generated traffic that stresses infrastructure. Second, it pushes rank trackers toward controlled channels and stricter quotas. Third, eliminating 100-result page loads curbs non-human behaviour that pollutes impression counts. These are inferences drawn from Google’s statements about automated access, its spam policies, and the continuous scroll reversal.
The Vendor Shock And Cost Inflation
Rank tracking platforms reported outages and partial data in the first week. Semrush called it an industry-wide issue and said it had shipped workarounds. Ahrefs acknowledged the cap at 10 results per page and the downstream limits for deeper checks. Independent coverage described a 10x request multiplier to rebuild Top 100 sets. This is the new baseline cost for anyone scraping full-depth rank tracking data.
For vendors, the cost stack has multiple levers. More requests mean more proxy IPs, higher egress, more CAPTCHA solving, and more sophisticated browser automation, and each of these inputs scales with depth. Enterprise-focused providers may absorb the hit to preserve continuity. Others are switching defaults to Top 20 or Top 30 tracking to contain spend.
Product Features And Metrics At Risk
Several headline features depend on complete depth. Keyword difficulty models trained on Top 20–50 competitors lose fidelity when coverage thins. Share of voice models undercount long tail presence when positions 21–100 are sampled less often. Competitive research that maps a rival’s footprint becomes patchier, especially across regional or low-volume clusters. Independent analyses already show visibility contraction across tracked portfolios after the change.
Search Console Shifts And The Bot Impression Clean Up
Between 10 and 15 September, many sites saw steep drops in desktop impressions in Google Search Console (GSC) while clicks stayed stable and average position improved. The best explanation is the removal of bot-generated impressions from 100-result loads. A result at position 67 previously received an impression whenever a tool fetched 100 results on one page. With bulk loads gone, those deep positions no longer accrue the same non-human impressions. An analysis of 319 properties reported that 87.7% lost impressions and 77.6% lost keyword visibility, consistent with a data correction rather than a demand collapse.
Fun fact: Google’s impression definition does not require a user to scroll to the link. If the link appears on the current page of results, it counts as an impression. That nuance explains why bot-loaded 100-result pages inflated historic desktop impression totals.
From Rank To Visibility With Pixel Position
Traditional rank is a weak proxy for visibility on today’s feature-heavy results. A listing can be #1 and still sit below ads, AI Overviews, a Local Pack, and People Also Ask, reducing real exposure. Pixel-based measures close this gap by recording the vertical position, in pixels, from the top of the viewport to the top of the result: a lower pixel count means higher on-screen presence. In the post-change environment, prioritising pixel position for Top 10 targets gives a truer read on search visibility and likely click-through rate (CTR) than nominal rank alone.
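For readers who want to see how such a measurement can be taken, here is a hedged sketch using a headless browser. The CSS selector (`#search h3`) is an assumption for illustration only; Google’s markup changes frequently, and real tooling maintains its own selectors and handles consent pages.

```python
# Sketch: measure the vertical pixel offset of organic result titles with Playwright.
# The CSS selector is illustrative; production tools maintain their own selectors.
from playwright.sync_api import sync_playwright

def pixel_positions(url: str, selector: str = "#search h3") -> list[float]:
    offsets = []
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page(viewport={"width": 1366, "height": 768})
        page.goto(url)
        for element in page.query_selector_all(selector):
            box = element.bounding_box()  # x, y, width, height in CSS pixels
            if box:
                offsets.append(box["y"])  # pixels from the top of the unscrolled page
        browser.close()
    return offsets
```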
Impact Across Verticals And SERP Features
The parameter removal does not change ranking algorithms or feature triggers. News carousels, Top Stories, shopping units, and local packs continue to appear based on intent and context. What changes is measurement friction. Tools that profiled the entire Top 100 in one pass must now stitch 10 pages to assess feature prevalence and layout. The rules for what counts as an impression within each feature remain the same.
Business Risks For Brands And Agencies
The strategic risk is opacity. Without routine Top 100 scans, challenger brands building through long tail clusters are harder to detect. Benchmarking compresses toward page one leaders and may miss threats incubating on pages two and three. Reporting baselines break. Year-on-year graphs that combine pre- and post-September data mislead stakeholders unless clearly annotated. Forecasts trained on historic rank series degrade because the collection methodology has changed.
Reset The Baseline And Fix Forecasting
Treat mid-September 2025 as a line in the sand. Annotate dashboards for the week commencing 9 September with “Google results per page methodology change”. Build fresh time series from that date for impressions, average position, and ranking depth. Archive older views for context but avoid trendlines that cross the boundary without explicit caveats. Retrain any predictive models that used pre-September GSC or rank data; mixing the two datasets will produce results that look accurate but are not.
Client Communications And SLA Repair
Agencies must explain the data correction before clients infer failure. The talking points are simple. Clicks and qualified traffic have not collapsed, the impression denominator has been cleansed, and rank trackers are adapting. Where SLAs referenced Top 100 counts or average position deltas, renegotiate toward outcomes that reflect commercial value, such as organic traffic, conversions, Top 10 coverage for priority terms, or pixel position gains above the fold.
Data Collection Options And Trade Offs
There are four broad paths for data acquisition.
- In-house scraping with pagination. Full control and high fidelity if successful, but 10x requests, more blocking risk, and higher maintenance.
- Headless browsers via Playwright or Puppeteer. Closer to human behaviour, executes JavaScript, but it is slow and resource-intensive at scale.
- Third-party SERP APIs. Vendors absorb proxy, CAPTCHA, and parsing complexity and can abstract pagination behind a single call. Costs will reflect their own increased inputs.
- Official Programmable Search JSON API. Low compliance risk, generous free tier, and clear quotas, but it does not mirror live Google Search behaviour. Results differ from the public index and lack rich result parity, which limits use for accurate rank tracking. Pricing is 100 free queries per day then $5 per 1,000 queries up to 10,000 per day.
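For the official route (the last option above), a minimal request sketch is shown below. `API_KEY` and `ENGINE_ID` are placeholders for your own credentials and custom search engine ID, and the `num` parameter here maxes out at 10, so deeper coverage still means paging with `start`.

```python
# Sketch: query the Programmable Search JSON API (official, quota-limited route).
# API_KEY and ENGINE_ID are placeholders; results differ from live Google Search.
import requests

API_KEY = "YOUR_API_KEY"      # placeholder
ENGINE_ID = "YOUR_ENGINE_ID"  # placeholder (the "cx" value)

def search_page(query: str, start: int = 1) -> dict:
    """Fetch one page of up to 10 results; start is 1-based (1, 11, 21, ...)."""
    resp = requests.get(
        "https://www.googleapis.com/customsearch/v1",
        params={"key": API_KEY, "cx": ENGINE_ID, "q": query, "start": start, "num": 10},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

# Example: stitch positions 1-30 from three paginated calls.
# items = [i for s in (1, 11, 21) for i in search_page("rank tracking", s).get("items", [])]
```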
When Full Coverage Is Not Viable Use Sampling
A statistically disciplined sampling plan lowers cost without losing the signal that matters.
Stratified sampling. Segment your keyword universe by commercial priority. Track “money” terms daily with high fidelity. Track striking distance terms on pages two and three every 3–7 days. Track long tail clusters weekly. This concentrates the budget on decisions that drive revenue growth.
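A minimal sketch of how that cadence could be operationalised. The tier names and intervals are assumptions mirroring the cadences above, not a standard; a real implementation would read tiers from keyword metadata.

```python
# Sketch: decide which keywords are due for tracking under a stratified cadence.
# Tier names and intervals are illustrative assumptions.
from datetime import date, timedelta

CADENCE_DAYS = {"money": 1, "striking_distance": 5, "long_tail": 7}

def due_for_tracking(tier: str, last_tracked: date, today: date | None = None) -> bool:
    today = today or date.today()
    return (today - last_tracked) >= timedelta(days=CADENCE_DAYS[tier])

# Example: a striking-distance term last checked 6 days ago is due again.
# due_for_tracking("striking_distance", date.today() - timedelta(days=6))  # -> True
```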
Rolling windows. Report 7-day or 30-day rolling averages for rank and search visibility to smooth volatility and tolerate occasional collection gaps.
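A small pandas sketch of the rolling view, assuming a date-indexed series of daily positions with occasional gaps.

```python
# Sketch: 7-day rolling average rank that tolerates missing collection days.
import pandas as pd

positions = pd.Series(
    [12, 11, None, 10, 9, 9, 8],  # None marks a day with no collection
    index=pd.date_range("2025-09-20", periods=7, freq="D"),
)
rolling_7d = positions.rolling("7D", min_periods=1).mean()  # NaN gaps are skipped
```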
Event flags. Annotate all reports with the September change. Any multi-period analysis must respect that cutover.
Compliance And Legal Considerations
Google’s policies prohibit automated queries for determining how a site ranks and warn that machine-generated traffic violates spam policies. Google’s robots.txt also disallows crawling /search. These instruments are not criminal law, but they frame contractual and technical enforcement. Sites that push high-volume automated queries face rate limits, CAPTCHA, and blocks.
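A quick way to confirm the robots.txt position programmatically, using only the standard library; the user agent string here is an arbitrary placeholder.

```python
# Sketch: verify that Google's robots.txt disallows /search for generic crawlers.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser("https://www.google.com/robots.txt")
rp.read()
allowed = rp.can_fetch("example-rank-tracker", "https://www.google.com/search?q=test")
print(allowed)  # expected: False
```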
In U.S. precedent, the hiQ Labs v. LinkedIn litigation indicates that scraping publicly accessible pages is unlikely to violate the Computer Fraud and Abuse Act, while breach of contract and other civil claims remain live issues. The case ultimately ended in a permanent injunction against hiQ, yet the Ninth Circuit’s guidance on public access still shapes risk assessments. For most brands, the practical approach is to outsource scraping to vendors with contractual indemnities, transparent operations, and resilient infrastructure.
What To Do Next
Rebaseline and reset expectations. Start Q4 reporting from mid-September 2025 data and educate stakeholders on why impressions fell while clicks held steady. Pivot from counting every ranking to proving search visibility where it pays, using pixel position and Top 10 presence for priority queries. Adopt a hybrid data stack that pairs the GSC API for broad trends with selective third-party API spend for mission-critical terms. Build sampling into your operating model so coverage tracks value, not habit.
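A hedged sketch of the GSC side of that hybrid stack, using the Search Console API via google-api-python-client. Credential setup (OAuth or a service account added to the property) is assumed and omitted, and the site URL is a placeholder.

```python
# Sketch: pull broad query-level trends from the Search Console API.
# Credentials are assumed; "https://www.example.com/" is a placeholder property.
from googleapiclient.discovery import build

def post_change_query_trends(credentials, site_url: str = "https://www.example.com/"):
    service = build("searchconsole", "v1", credentials=credentials)
    body = {
        "startDate": "2025-09-15",  # post-change baseline start
        "endDate": "2025-12-14",
        "dimensions": ["query"],
        "rowLimit": 5000,
    }
    response = service.searchanalytics().query(siteUrl=site_url, body=body).execute()
    return response.get("rows", [])
```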
This change closes a cheap back door and forces discipline. Think of your SEO data like a lighthouse after a storm. The glass is cleaned, the beam is narrower, and every sweep must focus on waters that carry your ships. The task now is to steer by clearer light rather than chase every ripple.