Google now limits results per page: what it means for teachers
In mid-September 2025, Google stopped honoring the &num=100 URL parameter. You can no longer load 100 results in one go; you get ~10 per fetch, and to read the rest you have to paginate. The impact is already visible: SEO datasets and rank trackers saw sharp impression drops. But the bigger story sits in schools – and I’m honestly surprised more educators aren’t commenting on it – because this change makes less depth the default for research assignments and media-literacy lessons.
Let’s look at how we, as teachers, can work around this problem. Here’s what this article covers:
- num=100 removed and classrooms will feel it
- Why Google did it
- Concrete risks for education
- Mitigations that work in classrooms
- The counterbalance: use open web datasets
- FAQ on Google’s 100-results change
- What exactly changed?
- Why does this matter in a classroom?
- Does this delete content?
- Who feels the pain first?
- What classroom tasks are most affected?
- Give me a concrete example.
- Is this a legal issue about “access to data”?
- Will other search engines behave the same?
- How do I keep student research deep?
- What query techniques should students learn now?
- Which tools still help without scraping?
- How do I adapt existing classroom tools?
- What should go into my rubric?
- How do I support equity for schools with no budget?
- What do I tell parents and administrators?
- Become a Sponsor
num=100 removed and classrooms will feel it
For years, power users tacked &num=100 onto a Google results URL to view the first 100 organic links on one page. Around September 11–18, 2025, that switch stopped working. Google confirmed it no longer supports a “results-per-page” parameter. Tools built on that behavior now see fewer results per request unless they paginate.
It’s not mass de-indexing. Your sources didn’t vanish. The window just got a lot smaller. Google’s official APIs also cap results per call and expect pagination, so the consumer UI now mirrors that model.
So much for the theory. But why do classrooms feel this too?
- Page-one gravity increases. Students already over-trust the first hits. When tools surface fewer deep results by default, that bias tightens. Studies have shown that learners pick top-ranked links even when the order is manipulated.
- Narrower source diversity. Many niche, local, or minority-voice sources live beyond the first 10–20 results. Fewer bulk pulls mean fewer of those make it into school wikis, reading lists, or citation managers. Related research also finds engines emphasize “authoritative sources,” which can make alternative but credible voices harder to encounter without deliberate effort.
- Thinner literature sweeps. Teachers, librarians, and students using free or low-cost SERP-based tools to build bibliographies will retrieve less in one pass. That raises time and, for some tools, cost.
- Equity gap widens. Well-funded programs can pay for APIs and multi-engine pagination. Cash-strapped classrooms rely almost entirely on free tools and manual digging. The new friction punishes those without resources.
Why Google did it
There are various reasons why Google changed its approach; these are the two main ones:
- Consistency with APIs. The Custom/Programmable Search API returns 10 results per call and never more than 100 total via pagination. Aligning the UI discourages bulk HTML scraping.
- Anti-scraping and performance. Bulk grabs feed rank trackers and AI agents that replicate Google’s result sets. Throttling lowers load and limits cloning of SERPs. Industry analyses describe exactly this knock-on effect.
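To make the API model concrete, here is a minimal sketch of the pagination pattern it implies: ~10 results per call, `start` offsets stepping by 10, capped at 100 total. This is an illustration, not Google’s client library; `fetch_page` is a hypothetical stand-in for one Custom Search API call.

```python
def page_starts(total_wanted: int, per_page: int = 10, cap: int = 100):
    """Yield the `start` offsets needed to collect up to `total_wanted` results."""
    limit = min(total_wanted, cap)  # the API never returns more than 100 total
    return list(range(1, limit + 1, per_page))

def collect_results(fetch_page, total_wanted: int = 100):
    """Paginate through an API that returns at most ~10 results per call."""
    results = []
    for start in page_starts(total_wanted):
        batch = fetch_page(start=start)  # hypothetical: one API call per page
        if not batch:  # stop early if the engine runs out of results
            break
        results.extend(batch)
    return results[: min(total_wanted, 100)]
```

Where one `&num=100` fetch used to suffice, `page_starts(100)` now implies ten separate calls (offsets 1, 11, …, 91) – which is exactly the extra friction tools and classrooms are feeling.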
Most engines already throttle bulk HTML retrieval and push developers to API or paid tiers. Expect continued rate limits, stronger bot detection, and “vetted researcher” portals rather than restoring one-click 100-result pages.
The policy angle teachers should know
When Google narrows the firehose, policy fills the gap. Two points are crucial here.
- No “right to scrape” SERPs. This is a private platform choice. It doesn’t touch your GDPR right of access to personal data or open-government data rules.
- There is a research route. Under the EU Digital Services Act, vetted researchers can request data access from very large search engines to study systemic risks. That’s structured access, not an open firehose – but it does matter for academic projects.
Concrete risks for education
There are concrete risks behind that small shift: less source diversity, weaker argumentation, copy-paste bibliographies, and growing inequity between schools that can afford API-based tools and those that can’t. In short, when search narrows, classrooms lose range.
Here’s an example to make it clearer:
Suppose a teacher assigns “compare three credible sources on microplastics in German rivers.” Before the change, the class tool fetched 100 results in one Google sweep. Students would weigh national news, a university preprint, a provincial water-quality PDF, and an NGO field report. Today they see ~10 links per fetch on Google. Most pick a newspaper, Wikipedia, and one government page. They miss the provincial PDF on page 3 and the NGO’s dataset on page 4. The result: essays converge. Claims go unchecked. Local data disappears from citations. Media-literacy skills flatten because students don’t practice triangulation.
Four key risks run through this example:
- Media-literacy shortcuts. First-page results become the syllabus if teachers don’t force pagination. That dulls source triangulation and bias detection. Evidence already shows searchers reinforce their prior beliefs via initial queries and top results.
- Assignment sameness. Expect more identical citations across students. Less long-tail discovery means fewer unique angles and weaker argumentation.
- Local knowledge underexposed. Regional NGOs, municipal PDFs, and small-journal articles often sit past page one. They drop out of student bibliographies unless someone insists on deeper drills.
- Tool degradation. Free citation scrapers and “find sources” widgets that relied on 100-result snapshots now return skinnier sets unless updated to paginate. That undercuts classroom efficiency.
Mitigations that work in classrooms
There are several ways educators can counter these risks.
For teachers and librarians
- Set depth rules. Require students to consult at least 20–30 results across two+ engines (Google + Bing/Brave/Kagi) and cite at least one source beyond the first page. Verify with screenshots or exported SERP timestamps.
- Teach pagination and operators. Make site:, quoted phrases, filetype filters (pdf, ppt), and date filters part of the rubric. Example query: site:europa.eu "biodiversity strategy" filetype:pdf 2024.
- Broaden the corpus. Add Common Crawl-based tools, library databases, and publisher RSS to research checklists. Keep a standing list of credible vertical search engines per subject.
- Adopt API-aware tools. Pick research dashboards that paginate ethically through official APIs and declare sampling limits. And, last but not least, do ask vendors how they adapted post-September 2025.
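For classes that build their own worksheets or widgets, the operator pattern in the rubric above can be wrapped in a tiny helper. `build_query` is a hypothetical classroom utility, not part of any Google tooling; it just assembles the operators discussed here into one query string.

```python
def build_query(terms, site=None, exact=None, filetype=None, year=None):
    """Combine common search operators into a single query string."""
    parts = []
    if site:
        parts.append(f"site:{site}")          # restrict to one domain
    if exact:
        parts.append(f'"{exact}"')            # quoted exact phrase
    parts.extend(terms)                        # free-text context words
    if filetype:
        parts.append(f"filetype:{filetype}")  # e.g. pdf, ppt
    if year:
        parts.append(str(year))               # date context
    return " ".join(parts)
```

For example, `build_query([], site="europa.eu", exact="biodiversity strategy", filetype="pdf", year=2024)` reproduces the rubric’s sample query: `site:europa.eu "biodiversity strategy" filetype:pdf 2024`.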
For students
- Change one word, rerun. Small shifts uncover new sources: add a year, region, or method (“meta-analysis 2023 Germany”).
- Open five tabs, not one. Compare claims. Track what each source actually measures.
- Document the trail. Record query strings, pages checked, and dates. Teachers can grade the method, instead of just the final list.
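For classes comfortable with a little code, the “document the trail” habit can even be captured in a small structure that students fill in as they search. This is an illustrative sketch; `SearchLogEntry` and its fields are invented for this example, not a standard tool.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class SearchLogEntry:
    """One logged search: what was asked, where, and how deep."""
    query: str                # exact query string used
    engine: str               # e.g. "Google", "Bing", "Brave"
    pages_checked: list[int]  # SERP pages actually read, not just loaded
    when: date = field(default_factory=date.today)

    def summary(self) -> str:
        pages = ", ".join(map(str, self.pages_checked))
        return f"[{self.when}] {self.engine}: '{self.query}' (pages {pages})"
```

A teacher grading method rather than just the final list can then ask for each entry’s `summary()` line as proof of depth.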
Students will of course ask you, as an educator, why they have to change their approach. Here are four easy answers and suggestions you can use:
- Google narrowed the default window.
- Great sources still exist past page one.
- Prove depth: paginate, vary queries, cross-check engines, log methods.
- Cite at least one non-page-one source in every assignment.
For schools and universities
- Create a “depth pack.” Shared guides with query templates, operator cheat-sheets, and a vetted list of alternative engines and databases.
- Pool budgets. Where possible, fund limited API quotas for capstone projects and research methods courses.
- Leverage DSA channels. For accredited researchers studying platform risks, use the Article 40 process to request structured access.
The counterbalance: use open web datasets
Open repositories like Common Crawl keep large-scale web data accessible for teaching and research. They’re not a drop-in replacement for live SERPs, but they let classes explore the broader web without scraping Google. Adoption in academic work has surged, and educators can pair Crawl-based exercises with live, paginated searches.
September 2025 didn’t erase information, as many claim. It just raised the effort to reach it. Without new habits, though, classrooms will overfit to page one and teach shallower research. With clear rubrics, alternative engines, and a bit of API-savvy tooling, you can restore the wide web to your students.
Lifelong learning isn’t limited to students; teachers and educators should keep learning too.
FAQ on Google’s 100-results change
What exactly changed?
Google no longer loads 100 results in one go via the old num=100 trick. You get ~10 per fetch. Tools must paginate to see pages 2–10.
Why does this matter in a classroom?
Students default to page one. With fewer results per load, they meet fewer diverse sources. Essays converge. Media-literacy practice weakens.
Does this delete content?
No. Pages still exist. The window shrank. You must click “next” or use tools that paginate.
Who feels the pain first?
Teachers, school librarians, and students using free research widgets, citation grabbers, or low-cost rank/report tools that didn’t adapt.
What classroom tasks are most affected?
- Source triangulation and bias checks.
- Literature scans for projects and capstones.
- Local/NGO/government PDF discovery.
- Debate prep that needs minority or regional viewpoints.
Give me a concrete example.
Assignment: “Find three credible sources on nitrates in German rivers.” Page one shows a national newspaper, Wikipedia, and a ministry page. The provincial water-quality PDF sits on page 3. The NGO dataset sits on page 4. Without deliberate depth, both vanish from citations.
Is this a legal issue about “access to data”?
No. It’s a product choice by a private platform. Your GDPR right of access (personal data) and public-sector open data still stand.
Will other search engines behave the same?
Most already limit bulk HTML access. Expect tighter rate limits and API-first models across engines.
How do I keep student research deep?
- Require at least one source beyond page one.
- Mandate two search engines per assignment.
- Grade the method: saved queries, pages checked, timestamps.
What query techniques should students learn now?
- Use operators: site:, filetype:pdf, quotes ("exact phrase"), date filters.
- Add context words: year, region, method (“2024 Belgium meta-analysis”).
- Try vertical tabs: News, Scholar, Images, Forums.
Which tools still help without scraping?
- Official search APIs with pagination.
- Library databases and discovery layers.
- Publisher RSS, newsroom feeds, Common Crawl-based explorers.
How do I adapt existing classroom tools?
Ask vendors three direct questions:
- Do you paginate past the first 10 results?
- Which engines/APIs do you call?
- How do you disclose sampling limits to users?
What should go into my rubric?
- Depth: proof of page-2+ exploration.
- Diversity: at least one local/NGO/government PDF.
- Verification: claim → source → date → quote/page number.
- Reflection: why a source was included or rejected.
How do I support equity for schools with no budget?
Share a “depth pack”: operator cheat-sheet, engine list, query templates, and a 10-minute screencast on pagination. Pair students to split pages (1–3, 4–6, 7–9).
What do I tell parents and administrators?
The web didn’t shrink. Default windows did. With clear habits – paginate, vary queries, compare sources – students regain breadth and produce stronger, more original work.
Become a Sponsor
Our website is the heart of the mission of WINSS – it’s where we share updates, publish research, highlight community impact, and connect with supporters around the world. To keep this essential platform running, updated, and accessible, we rely on the generosity of supporters like you who believe in our work.
You can sponsor monthly or make a one-time contribution of any amount. If you run a company, please contact us via info@winssolutions.org.
I specialize in sustainability education, curriculum co-creation, and early-stage project strategy. At WINSS, I craft articles on sustainability, transformative AI, and related topics. When I’m not writing, you’ll find me chasing the perfect sushi roll, exploring cities around the globe, or unwinding with my dog Puffy — the world’s most loyal sidekick.
