Universities See More Use of AI in Theses, But the Quality is Uneven – How to use AI when writing a thesis
Universities see more and more use of AI in theses, but the quality is uneven. Nearly all students now tap AI somewhere in the writing process – 92% in the UK in 2025, with 88% using it in assessed work. That lifts the polish of the text and the speed with which theses are produced, but it does not always improve the truth of the claims or the methods students use.
Examiners report cleaner prose that too often still masks shaky argumentation and stray or completely fabricated sources, caused by misuse or misunderstanding of AI tools. To give you an idea: there were about 7,000 confirmed AI-related cases across UK universities in 2023–24, and there were experiments where AI answers slipped past humans – at Reading, for instance, 94% of GPT-4 exam scripts went undetected and even out-scored real students.
Catching AI use with certainty in take-home work is near-impossible. Oral checks and process evidence – draft history, data and code deposits, prompt logs – will restore trust when paired with documented use and verification. The standard has clearly shifted: how well a thesis reads now matters less than how well its claims can be proven.
In this article I will examine AI-driven theses across universities: what works when using AI to write a thesis, where the risks hide, and how students can raise the quality of their theses when applying AI.
Is AI a problem when students write a thesis?
To give you an idea of the issues around students using AI for their thesis, I pulled up some news articles published online covering best practices and bad practices.
- UK: ~7,000 confirmed AI-cheating cases in 2023–24. A nationwide FOI investigation tallied almost 7k proven incidents across universities; classic plagiarism dropped while AI cases rose.
- Scotland: 700% jump in AI misconduct. Universities logged 1,051 AI cases in one year (up from 131), with several institutions reporting triple-digit totals.
- Reading (UK): AI answers outscored real students. Researchers submitted GPT-4 exam scripts; only 1/33 was flagged, most earned higher grades—forcing assessment redesign.
- Australia: false accusations from AI detectors. Australian Catholic University admitted it wrongly accused students based on Turnitin’s AI indicator; many cases were dismissed and the tool was dropped (Adelaide Now).
- Belgium (UGent): AI allowed in master’s theses with disclosure. From 2024–25, UGent permits “responsible use” for master’s theses (masterproeven) but requires documented AI use; submitting AI text as your own remains plagiarism. UAntwerpen, for its part, issues ethics/AI guidelines covering research and thesis work.
- Netherlands: law student sanctioned for ChatGPT use. A 2025 ruling reported by Folia describes a “serious fraud” case where AI use violated exam rules because it obscured the student’s own skills.
- USA: public case of proud AI dependency. A UCLA graduate publicly showcased ChatGPT-completed assignments during commencement, triggering broad scrutiny of learning outcomes and integrity.
- Detection fragility highlighted. UK reporting and guidance pieces document unreliable AI-detection scores leading to high-stakes referrals; libraries now advise thesis AI-use disclosures instead of detector reliance.
- Measured learning effects (mixed). A 2025 meta-analysis finds improved academic performance and writing outcomes with structured AI support, while universities warn about hallucinations and shallow sourcing.
How to use AI when writing a thesis, and how not to use it?
AI now sits in the middle of thesis writing. It speeds outlines, cleans prose, and unblocks drafts. It unfortunately still slips in fake citations, muddles methods, and hides weak reasoning behind fluent text.
That is a problem, because your goal isn’t pretty paragraphs. Your goal is thesis quality you can prove.
This guide shows both sides. You’ll get a lean, verifiable workflow, what to automate, what to double-check, and what to keep strictly human. You’ll also see hard lines you must not cross, with quick examples and checks that make your thesis defendable in the room.
How to use AI when writing a thesis
When students use AI for planning, editing, and verification, they get cleaner chapters, fewer language issues, and faster progress, all without diluting scholarship. The biggest gains show up when institutions teach that workflow and require disclosure, source checks, and artefacts alongside the final PDF.
So, to make it really clear: using AI when writing a thesis is not a crime. On the contrary, it can be a very useful tool. Here’s what actually improves the quality of theses when you make good use of AI. For each practice I have added an example to show how it works in practice.
Cleaner prose, faster drafting
Students using AI to write theses produce clearer, tighter sentences and reach a full draft sooner. Large reviews in 2025, for instance, report positive effects on writing outcomes when AI is used for outlining, feedback, and revision. In the UK, students say they use AI mainly to save time and improve assignment quality – a pattern that maps onto thesis chapters, especially the literature review and methods.
Example. A sociology student feeds rough notes and 12 PDFs into an assistant to produce a structured outline with section goals and transition sentences. They then rewrite each section in their own words, using the tool for line-edits only. The draft that used to take two weeks now takes four days, with fewer grammar fixes at supervision.
Stronger structure and argument scaffolding
AI can help students plan chapter architecture, generate alternative framings for the research question, and spot gaps between aims, methods, and measures. Supervisors report fewer “wandering” introductions and tighter links from problem statement to design when AI is used as a planning aide rather than a ghostwriter. Quality agencies now encourage that workflow.
Example. An education thesis starts with a vague aim. The student asks the model to propose three testable versions and map each to feasible instruments. They pick one, then request a checklist to align variables, sampling frame, and analysis plan. The resulting chapter reads cleaner.
Better comprehension of sources (when paired with verification)
Students use AI to summarize dense articles, compare findings across papers, and extract candidate variables. Surveys show heavy use for explaining concepts and summarizing articles, which reduces time-to-understanding and lowers the barrier to reading outside one’s niche. The win is speed; the risk is fabricated details – hence the rising emphasis on source checking.
Example. A public-health student asks for a side-by-side summary of six asthma-exposure studies with study design, effect sizes, and confounders. They then open each cited paper to confirm numbers before writing. The result is a faster synthesis that still rests on verified primary literature.
Language support for non-native writers
EAL students use AI for tone, clarity, and cohesion. Meta-analyses report improved readability and fewer surface errors when AI acts as a writing coach rather than a generator. This narrows gaps in grammar and style, letting supervisors spend their time on methods and interpretation instead.
Example. A chemistry student drafts their thesis in Spanish, then asks the tool to suggest clearer English with domain-correct terminology and to flag hedging. They accept only edits they understand and keep a change log.
Formatting, referencing, and compliance
Models speed up style-guide chores: headings, tables, captions, and citation formatting (APA/MLA/Vancouver). Students still need to verify every reference, but once a source is real, the tool reliably applies the template. QAA guidance frames this use as acceptable when disclosed.
Example. A psychology thesis moves from messy citations to a clean reference list. The student pastes DOIs and asks for APA-7 formatting, then cross-checks each DOI link. Supervisors get a submission that meets house style on first pass.
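As a minimal, stdlib-only illustration of that cross-check step (the function and variable names are my own, not from any citation tool), a short script can at least triage DOI strings whose syntax is impossible before you click through to doi.org:

```python
import re

# A syntactically valid DOI starts with "10.", a 4-9 digit registrant
# code, a slash, and a non-empty suffix. This catches mangled or
# obviously invented identifiers; it does NOT prove the DOI resolves --
# opening https://doi.org/<doi> yourself remains the real check.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def looks_like_doi(doi: str) -> bool:
    """Return True if the string has plausible DOI syntax."""
    return bool(DOI_PATTERN.match(doi.strip()))

def triage(dois):
    """Split a reference list into (plausible, suspicious) DOIs."""
    ok = [d for d in dois if looks_like_doi(d)]
    bad = [d for d in dois if not looks_like_doi(d)]
    return ok, bad

ok, bad = triage(["10.1037/amp0000298", "doi:10.1/broken", "10.1.2.3"])
```

Syntax checking only filters the obvious fabrications; a well-formed DOI can still point nowhere, so the final step is always opening the landing page and keeping the PDF.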
Coding help and statistical checks
For quantitative theses, AI assists with boilerplate code, debugging, and explaining outputs. Students lean on it for R/Python diagnostics, power-analysis stubs, and effect-size reporting. Reviews in 2025 note overall performance gains where AI is used to support problem-solving.
Example. A business-analytics student can’t converge a mixed model. The AI assistant explains the warning, proposes a simpler random-effects structure, and suggests a robustness check. The student implements and justifies the change; the model now fits and the reasoning is defendable.
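To make the “defendable reasoning” part concrete: a student should be able to reproduce by hand any effect size an assistant reports. Here is a small sketch of Cohen’s d with a pooled standard deviation, using only the Python standard library (the function name and the sample data are illustrative):

```python
import statistics

def cohens_d(group_a, group_b):
    """Cohen's d with a pooled standard deviation -- the kind of
    effect-size figure an assistant might suggest, which the student
    should still be able to derive and defend at the viva."""
    na, nb = len(group_a), len(group_b)
    va = statistics.variance(group_a)  # sample variance (n-1 denominator)
    vb = statistics.variance(group_b)
    pooled_sd = (((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

d = cohens_d([1, 2, 3, 4, 5], [3, 4, 5, 6, 7])
```

If the tool reports a value you cannot reproduce with a formula like this, that discrepancy belongs in your verification log, not in the results chapter.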
Project management and traceability
Used well, AI becomes a process notebook that handles meeting digests, to-do lists, draft diffs, and prompt logs. Regulators now recommend process evidence (version history, code/data deposits, short viva demonstrations) to shore up authorship and validity, which raises thesis AI quality beyond polished text.
Example. An engineering student attaches a prompt log and Git history to the appendix. During the viva they reproduce one plot live. The committee signs off faster because provenance is clear.
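One lightweight way to keep such a prompt log, sketched here with Python’s standard library (the field names and file name are my own convention, not a mandated format), is to append one JSON line per AI interaction:

```python
import datetime
import json
import pathlib

def log_prompt(logfile, tool, prompt, section, verified_how):
    """Append one AI-use record as a JSON line -- the kind of prompt
    log examiners increasingly ask to see in an appendix."""
    entry = {
        "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "tool": tool,
        "prompt": prompt,
        "thesis_section": section,
        "verified_how": verified_how,
    }
    with pathlib.Path(logfile).open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")
    return entry

entry = log_prompt(
    "ai_prompt_log.jsonl",
    "ChatGPT",
    "Summarize these six studies into a comparison table",
    "Literature review",
    "opened each cited PDF and confirmed the numbers",
)
```

A JSON-lines file diffs cleanly in Git and can be attached to the appendix as-is, alongside the version history.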
How NOT to use AI when writing a thesis
AI lifts fluency but, as I said earlier, it doesn’t guarantee truth – and that problem will not disappear any time soon. The biggest drops in AI quality in theses show up where students let chatbots replace reading, method design, and verification.
Again, I have added an example for each issue to make clear how the error happens and how it can be fixed.
Fabricated or mangled citations
Models invent DOIs, page ranges, and author lists, and they also misquote real papers. A clean-looking reference list can easily hide non-existent sources, which breaks the literature review.
Example. An economics thesis cites three “meta-analyses” on carbon taxes. Two don’t exist; the third is a blog summary. The committee forces a rebuild of the chapter. To fix it, the student needs to pull every reference from a primary database (Scopus, Web of Science, PubMed, SSRN) and paste the source link only once the PDF is in hand.
Shallow synthesis masked by polished prose
The student lets AI string together summaries without weighing study quality, design, or bias. The resulting arguments look coherent, but in reality they rest on (very) weak evidence.
Example. A psychology thesis concludes that “gratitude journals improve GPA” based on small, heterogeneous studies, with no risk-of-bias table and no power analysis. To fix this, the student needs to build an evidence table (design, N, effect sizes, confounders). Only after grading the evidence should the student write the conclusion.
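The evidence-table fix can be as simple as a structured list that gets graded before any conclusion is written. A hypothetical sketch (the study names, weights, and sample-size threshold are invented for illustration):

```python
# Hypothetical evidence table: one dict per study, graded before
# any conclusion is drafted. Field names are illustrative.
studies = [
    {"id": "SmithEtAl2021", "design": "RCT", "n": 240, "effect_d": 0.31},
    {"id": "Lee2019", "design": "cross-sectional", "n": 45, "effect_d": 0.80},
    {"id": "Garcia2022", "design": "quasi-experiment", "n": 130, "effect_d": 0.22},
]

# Stronger designs earn more weight; unknown designs earn none.
DESIGN_WEIGHT = {"RCT": 3, "quasi-experiment": 2, "cross-sectional": 1}

def grade(study, min_n=100):
    """Crude triage: strong designs with adequate samples rank first."""
    return DESIGN_WEIGHT.get(study["design"], 0) + (1 if study["n"] >= min_n else 0)

ranked = sorted(studies, key=grade, reverse=True)
```

The point is not the specific weights but the discipline: the flashy d = 0.80 from a small cross-sectional study sinks to the bottom, and the write-up leans on the studies that survive the grading.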
Method–question misalignment
The student accepts a chatbot’s method suggestion that doesn’t answer the research question. That is a costly mistake, because you can’t fix a bad design in the discussion chapter.
Example. The question asks for “causal impact”. The AI suggests a cross-sectional survey with OLS, so the thesis ends up reporting correlations as causation. To fix this, the student needs to map the question to a design first (experiment, quasi-experiment, panel, case-control, ethnography), then defend that choice in ten lines before coding a single model.
Data and code opacity
The AI-generated outputs appear without reproducible scripts or versioned datasets. In short, examiners can’t verify the pipeline and the errors persist.
Example. A finance thesis reports a Sharpe ratio that no one can reproduce because the student pasted model code from chat and edited it ad hoc. To fix this, ship a minimal repo with a data dictionary, a notebook, a requirements.txt, and a one-click rerun, plus a log of the AI prompts that affected the analysis.
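A minimal reproducibility sketch along those lines, assuming a Python analysis (the function name and fields are illustrative): seed the random number generator and record the environment next to the results, so a clean rerun produces identical numbers:

```python
import platform
import random
import sys

def reproducibility_header(seed=42):
    """Seed the RNG and record what a reader needs to rerun the
    analysis. Saving this next to the results is the opposite of
    pasted, ad-hoc chat code."""
    random.seed(seed)
    return {
        "seed": seed,
        "python": sys.version.split()[0],
        "platform": platform.platform(),
    }

header = reproducibility_header(42)
sample_1 = [random.random() for _ in range(3)]

# Re-seeding reproduces exactly the same draws on a clean machine:
reproducibility_header(42)
sample_2 = [random.random() for _ in range(3)]
```

For real analyses the same idea extends to NumPy or R seeds and a pinned requirements.txt; the principle is that every figure in the thesis can be regenerated from the repo without hand-edits.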
Hidden assistance gaps and equity issues
Some students buy premium tools or human editors, while others rely on the free versions of AI tools. Thesis quality then varies with the budget spent, not with the effort invested.
Example. Two similar projects diverge: one student uses premium workflows with citation retrieval; the other stays in plain chat and carries errors into the text. The fix is to standardize the workflow: campuses should grant access to grounded modes (library databases, citation managers, verified AI with links) and teach the same process to everyone.
Homogenized voice and idea loss
Over-editing with AI erases disciplinary tone and personal reasoning. The resulting texts all read the same and ultimately say less.
Example. An anthropology thesis loses its field voice – quotes and context – after “tone polishing”. The reviewers ask for the raw field notes. To fix this, keep AI to line edits, preserve domain-specific language and quoted material, and log any changes to participants’ wording.
Privacy and ethics breaches
Students paste identifiable data or restricted PDFs into public tools, violating consent agreements, NDAs, or copyright. On top of that, recent incidents have shown that shared AI chats can ‘leak’ into Google’s search index.
Example. A nursing thesis uploads de-identified case notes that still allow re-identification through rare conditions. To avoid this – fixing it is too late once data leaks – redact before you paste, use institutionally approved tools for sensitive data, and document data handling in the ethics appendix.
Bias and domain drift
General models oversimplify niche or multilingual literatures. Key sources completely vanish and the produced claims skew to Anglophone, high-visibility work.
Example. An environmental-policy thesis cites only U.S. studies on heat pumps while ignoring EU field trials. To fix this, seed searches with field-specific databases and non-English queries, and add a “coverage limits” note to the methods.
Learning loss in core skills
When using AI, students rapidly outsource reading, paraphrasing, and problem-solving. Not surprisingly, vivas expose the resulting gaps fast.
Example. A candidate can’t derive the estimation equation they submitted. The committee downgrades the thesis. To prevent this, the student should rehearse a 5-minute whiteboard derivation or method walk-through for each main result.
Red flags supervisors will spot in minutes
- References you can’t retrieve, or DOIs that 404.
- Perfectly uniform paragraph lengths and transitions across chapters.
- Methods that don’t answer the stated question.
- Tables or plots that can’t be reproduced on a clean machine.
- Claims that don’t appear in any of the cited PDFs.
- Copy-edited English with freshman-level domain logic.
Detection theater and false positives
The problem is not only with students using AI, though, but also with the staff who have to scrutinize the theses. When AI detectors flag honest work, or when staff chase percentages instead of evidence, students lose time on appeals while real issues go unchecked.
Example. A non-native writer gets a high “AI score” for formulaic phrasing. The viva confirms mastery.
To fix this, the staff should use detectors as triage only. They should only open cases when they have corroboration in the form of non-existent sources, non-reproducible analysis, or inconsistent reasoning.
Minimum safeguards to protect your thesis quality when using AI
- Demand provenance. Attach prompt logs, drafts, and a short AI-use statement.
- Verify sources. Pull every citation from a library database and store the PDFs.
- Make it reproducible. Submit code, data dictionary, and rerun instructions.
- Test live. In the viva, recreate one figure and trace two references to the primary studies.
- Bound the tools. Use AI for planning, editing, and debugging; not for undisclosed text generation or unverified facts.
- Make a short AI-use and verification statement (what, where, how verified).
- Use oral defenses and replication artifacts (datasets, code, prompts) to anchor your claims.
- Design prompts that AI can’t complete alone (local datasets, original fieldwork, adjudication of conflicting sources).
Differences between the use of free vs paid AI when writing a thesis
No model (free or paid) can promise zero hallucinations in plain chat. OpenAI for instance explains that LLMs can still generate plausible but wrong facts or URLs. So when you write a thesis, you should use modes that fetch and cite sources to minimize this.
Paying only helps if students use the grounded modes (Search, Deep Research, Company Knowledge) that fetch sources and attach citations. Plain chat – free or paid – can and will still hallucinate.
Below are two comparison tables showing the differences between these modes and their error risks.
What changes between Free vs Paid (reliability + links)
| Plan | Models | Web info | Citations in output | Context & limits | Extras that reduce errors |
|---|---|---|---|---|---|
| Free | GPT-5 (incl. thinking) with limited usage | Search the web for up-to-date info | Links in Search results; no connectors; no Company Knowledge | Lower rate limits; shorter runs | “Limited deep research” only; no connectors. |
| Plus | GPT-5 with expanded usage | Search + Deep Research | Deep Research adds source links/citations in every report; exportable | Higher limits; larger context | Agent mode; Deep Research across the open web; connectors enabled. |
| Pro | GPT-5 + Pro reasoning; max limits | Search + maximum Deep Research | Fully linked reports; PDF export with citations | Longest context; fastest | Advanced agent mode; research previews. |
| Business / Enterprise / Edu | GPT-5 with flexible access | Search + Deep Research + Company Knowledge | Company Knowledge shows citations and links back to your own sources (Drive, SharePoint, GitHub, etc.) | Team controls; privacy & compliance | Org connectors; compliance logging; long context windows. |
Error risk by mode (for thesis work)
| Mode | Available on | How it reduces errors | Remaining risks |
|---|---|---|---|
| Plain chat | Free & paid | Fast drafting, style help | Hallucinations and fake references remain; no built-in citations. |
| Search | Free & paid | Pulls live web info and shows links you can verify | Still needs source vetting; web pages can be noisy or adversarial. |
| Deep Research | Plus / Pro / Team / Enterprise / Edu | Multi-step web research with citations on every output | Quality depends on sources gathered; you still verify claims in the paper. |
| Company Knowledge | Business / Enterprise / Edu | Answers cite your own documents; easy to audit provenance | Only as good as your internal corpus and governance. |
Want fewer invented links? Then invoke Search or Deep Research explicitly and verify the cited pages. Note that writing or brainstorming in plain chat can still drift; it’s a lot faster but less grounded. For team work, Company Knowledge gives auditable answers with built-in citations to your files, but that’s not really budget-friendly for students.
Successfully using AI for your thesis is basically a workflow story
AI has changed thesis writing for good, as I explained above. As a student, you should start treating the use of AI as a workflow story when creating your thesis.
Source every claim from primary literature, verify numbers, and keep the ‘receipts’ (in this case consisting of prompt logs, drafts, data, and code).
Sure, you can draft with AI for structure and clarity, then switch to grounded modes for facts and citations. But verify everything you keep. And above all, keep these three key elements in mind:
- Source it: No citation enters the thesis until you’ve opened the paper and confirmed the numbers.
- Show it: Append prompts, version history, data, and runnable code.
- Defend it: Recreate results live and justify design choices.
If you use the tips in this article, you’ll submit work that reads well and stands up when pressed.
I specialize in sustainability education, curriculum co-creation, and early-stage project strategy. At WINSS, I craft articles on sustainability, transformative AI, and related topics. When I’m not writing, you’ll find me chasing the perfect sushi roll, exploring cities around the globe, or unwinding with my dog Puffy — the world’s most loyal sidekick.
