The "Hidden Curriculum": 4 Rules Every UK Marketing Dissertation Follows (That They Don't Always Tell You)

I read dissertation handbooks from 22 universities to work out what a dissertation really means. Here is the blueprint.


I spent some time reading 22 dissertation handbooks, from the London School of Economics (LSE) to the University of Bristol. Why? Because I wanted to find the "source code" for a great dissertation.

While every university has different deadlines and different word counts (Southampton UG is 10k; Bristol MSc is 12k), the "ingredients of a great dissertation" remain remarkably consistent. The difference between a scraped Pass at Masters level (50%) and a Distinction (70%+) usually comes down to four often unwritten rules that appear in almost every single marking grid.

If you are just starting your dissertation or proposal journey, forget the topic for one minute. Learn the rules of engagement first.


Rule #1: The "Methodology Minimums" (The Safety Net)

Most students ask: "How many people do I need to survey?" Many supervisors answer: "It depends."

That answer is technically true, but largely unhelpful. After cross-referencing guidelines from Warwick, Bristol, and Leeds, a clear "Standard" emerged. If your supervisor is vague, use these numbers as your safety net.

  • Quantitative (Surveys): The floor is 100. The "Safe Zone" is 150+.
    • The Evidence: The University of Bristol expressly cites Cochran’s formula[^1], recommending a sample size of 100–200 for credibility. Leeds suggests aiming for 150 to account for bad data.
    • The Takeaway: If you hand in a survey with 43 responses, it will be statistically irrelevant. Aim for 150.
  • Qualitative (Interviews): The "Saturation Point" is 12.
    • The Evidence: Southampton and Warwick both circle the 10–15 range for 45-minute interviews.
    • The Takeaway: Doing 6 interviews is "anecdotal." Doing 12 is "pattern recognition."
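The 100–150 survey floor above is not arbitrary; it falls straight out of Cochran's formula, n₀ = z²·p·(1−p)/e². A minimal sketch, assuming 95% confidence (z = 1.96), maximum variability (p = 0.5), and margins of error of ±10% and ±8% (these parameter choices are my illustrations, not values taken from any handbook):

```python
import math

def cochran_n0(z: float, p: float, margin: float) -> int:
    """Cochran's formula for a large population:
    n0 = z^2 * p * (1 - p) / e^2, rounded up to a whole respondent."""
    return math.ceil((z ** 2) * p * (1 - p) / margin ** 2)

# 95% confidence (z = 1.96), maximum variability (p = 0.5)
print(cochran_n0(1.96, 0.5, 0.10))  # ±10% margin -> 97 respondents
print(cochran_n0(1.96, 0.5, 0.08))  # ±8% margin  -> 151 respondents
```

Tightening the margin of error from ±10% to ±8% moves the floor from roughly 100 to roughly 150, which is exactly the range the Bristol and Leeds guidance lands on.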

When I mentor students, I often say 30. But here is the catch that trips everyone up: it is 30 per category, not 30 in total. I call this the "Multiplier Effect."

Here is the logic of "30" (the Multiplier Effect). If you just want to survey "Consumers" broadly, 30 responses is your mathematical floor. But suppose you want to compare Men vs. Women: now you need 30 Men and 30 Women, so your total is 60. Want to compare age groups too (e.g., Young Men vs. Old Men)? Now you have 4 categories: 30 × 4 = 120.

If you collect 100 responses but only have 12 "Young Men," that entire subgroup is statistically invisible. You cannot analyse it. You have wasted your time.
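The multiplier arithmetic above can be sketched in a few lines (the function name and the factor labels are mine, purely for illustration):

```python
import math

def minimum_sample(per_cell: int, levels_per_factor: list) -> int:
    """Total sample floor = per-cell minimum x number of comparison cells.
    Each factor multiplies the cell count: 2 genders x 2 age groups = 4 cells."""
    cells = math.prod(levels_per_factor)
    return per_cell * cells

print(minimum_sample(30, [2]))     # Men vs Women            -> 60
print(minimum_sample(30, [2, 2]))  # Gender x Age group      -> 120
print(minimum_sample(30, [2, 3]))  # Gender x 3 age bands    -> 180
```

Notice how quickly the floor climbs: every extra comparison factor multiplies, rather than adds to, your required sample.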

Is 30 actually enough? (The "Distinction" Secret) Technically, yes. The number 30 comes from the Central Limit Theorem (CLT). Once you hit n=30, your data distribution tends to normalise (form a bell curve), which means it is mathematically "safe" to run standard tests like T-Tests without breaking the laws of statistics. However... just because the maths works doesn't mean it will find anything. This is the difference between a Pass and a Distinction.

  • 30 participants is enough to detect "Large Effects" (e.g., "People prefer free money over debt"). It proves the obvious.
  • 50+ participants: According to Cohen’s Power Primer (1992), if you want to detect "Medium Effects" (subtle things, like "Gen Z trusts AI slightly more than Millennials"), you need closer to 50–64 participants per group.

If you stick to the bare minimum of 30, you risk a "Type II Error": a real result exists, but your sample was too small to see it.
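Cohen's per-group figures can be approximated without his tables, using the standard normal-approximation formula for a two-sided, two-sample comparison: n ≈ 2·(z_α/2 + z_power)² / d² per group. This is a rough sketch, not Cohen's exact calculation (it lands a participant or two below his tabulated t-test values of 26 and 64), and the alpha/power defaults below are my assumptions:

```python
import math
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate per-group sample size for a two-sided, two-sample test:
    n ~= 2 * (z_{alpha/2} + z_{power})^2 / d^2, rounded up."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for alpha = 0.05
    z_power = NormalDist().inv_cdf(power)          # 0.84 for 80% power
    return math.ceil(2 * (z_alpha + z_power) ** 2 / effect_size ** 2)

print(n_per_group(0.8))  # large effect  -> 25 per group
print(n_per_group(0.5))  # medium effect -> 63 per group
print(n_per_group(0.2))  # small effect  -> 393 per group
```

The jump from 25 to 63 to 393 per group is why "what effect size do I expect?" matters far more than "what is the minimum I can get away with?"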


Rule #2: The "Criticality" Ratio (The 80/20 Rule)

This is the #1 reason capable students get 58% instead of 72%. They write "Descriptive" literature reviews. A descriptive literature review tells the reader what happened. It sounds like this:

"Smith (2020) argues that AI is changing marketing. Jones (2021) agrees, stating that chatbots are useful."

He said this. She said that. So what? This is just a list of facts; it requires little brainpower. A critical dissertation tells the reader what it implies:

"While Smith (2020) argues AI is changing marketing, this assumes a Western-centric technology infrastructure. However, as Jones (2021) points out, this model fails in developing markets where..."

See the difference? The first example reports. The second analyses.

Your Literature Review should be 20% Description (explaining the model) and 80% Critique (comparing, contrasting, and applying it). If a paragraph doesn't explain the implication of a theory, what value does it add?

The Trap: "Google Scholar Bingo"

In my experience most students play a dangerous game I call "Google Scholar Bingo."

  1. They write down an opinion they already have.
  2. They go to Google Scholar.
  3. They hunt for a title that sounds like it agrees with them.
  4. They paste the citation at the end of the sentence.

They rarely read the paper. They just want the reference to "prove" they were right. This is academically dangerous: you are citing titles, not arguments. And to the perennial question, "How many references should I have?": I don't count references.

The Fix: The "Wallace & Wray" Method

To stop playing Bingo and start producing better work, use the framework from Wallace & Wray’s Critical Reading and Writing for Postgraduates. Before you cite anything, answer their 5 "Critical Synopsis" questions. If you can't, you haven't read it.

  1. Why am I reading this? (Am I looking for a theory, a method, or a counter-argument?)
  2. What is the author trying to achieve? (Are they selling a new model or criticizing an old one?)
  3. What are they claiming? (What is the "Headline" finding?)
  4. How convincing is it? (This is the money question. Is their sample size too small? Is the data old? Is it biased?)
  5. What use is this to ME? (Does this support my argument, or destroy it?)

The Rule: Your Literature Review should be 20% Description (What they said) and 80% Critique (How convincing it is and why it matters to your study).

WARNING: This is a long video. Dip in and out as needed.

Why this video is relevant: Wallace & Wray explain how to move from "reading" to "building an argument," which directly supports the 5 questions listed above.


Rule #3: The "Ethics Binary" (The Guillotine)

In almost every handbook I read (Southampton, MMU, LSE), Ethics is not a "marking criterion"—it is a guillotine.

No Ethics Approval = Zero Marks.

One of the biggest traps in 2026? Social media data. Many students assume that because a Tweet or a TikTok comment is "public," they can use it.

Wrong. Under GDPR and university ethics policies, scraping social media data without consent is a legal minefield.

The Fix: Treat the ethics application / ethical review as "Chapter Zero." It dictates your methodology. Too many students see the ethics application as either a tick-box exercise or something that gets in the way of the dissertation. It is neither. Done properly, the ethical process is the backbone of your dissertation. Start with it: look at whatever forms you have to complete. They are not a chore; they are a window into what makes good research, so they will help you achieve rather than hold you back. Do not start collecting data until you have that signature.


Rule #4: The "Scope" Paradox

The handbooks from Imperial and Durham explicitly warn against "general" topics. They favour "Applied Projects" or deep "Case Studies." Some institutions have specific 'routes' for certain types of dissertation, whereas others amalgamate everything into a common handbook / module.

  • Bad Scope: "The Future of AI in Marketing" (Too broad; impossible to answer).
  • Good Scope: "The Impact of AI Chatbots on Trust in UK B2B Banking." (Specific; measurable).

The Rule: A Distinction topic fits on a postage stamp. If you can't define your audience in one sentence (e.g., "UK Gen Z consumers buying luxury goods"), your scope is too wide.

As Nina Reynolds once said to me:

Focus on the 'fairies dancing on a pinhead'.

What Comes Next?

Now that you know the rules of the game, let's play to win.

Over the next few weeks, I am going to take 5 Marketing Dissertation Topics and build a "Proposal Blueprint" for each one, applying these exact rules.

We will cover:

  • Refined Research Questions (satisfying the Scope rule).
  • The Methodology (hitting the "Golden Numbers").
  • The "Must-Have" Literature (ensuring Criticality).

Coming Next Week: The 5 Marketing Dissertation Topics for 2026...


Footnotes:

[^1]: see "How to choose a sampling technique and determine sample size for research: A simplified guide for researchers": https://www.sciencedirect.com/science/article/pii/S2772906024005089