Design Quality

User Acceptance Testing and Quality Assurance.


We tested 26k clicks and resolved over 2.5k issues before launch.


“You got this done, on time, and under difficult circumstances. You were ideal to manage. I barely heard from you… If everyone worked like you I’d be out of a job.”

Project Manager

Before you launch – User Acceptance Testing.


Make sure your users are happy with the site, its navigation and its content through User Acceptance Testing (UAT). This is your opportunity to have users check the site, its taxonomy, UI and UX, and its content, copy and processes before you launch.

We tested 26,000 clicks for the Sky Knowledge Management system and resolved all issues before launch.

Although many more click paths than that were available, by working with the developers to understand the architecture of the system, I reduced the number of tests needed from over 100,000 to just over 26,000.
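The reduction itself came from knowledge of the system's architecture, but the underlying technique is equivalence partitioning: pages built from the same template behave the same way, so a full click path only needs to be proven once per template and journey. A minimal sketch in Python, with invented numbers and category names:

```python
# Minimal sketch of equivalence partitioning for a click-path test matrix.
# All numbers and category names are invented for illustration; the real
# reduction depended on the architecture of Sky's system.

templates = {"article": 400, "process": 250, "troubleshooter": 150}  # pages per template
journeys = ["search", "browse", "related link", "escalation"]        # routes to a page
clicks_per_test = 25  # average clicks to reach and fully verify a page

# Naive plan: exercise every page via every journey.
naive_clicks = sum(templates.values()) * len(journeys) * clicks_per_test

# Reduced plan: pages sharing a template render identically, so run the full
# click path once per template/journey pair, then a shorter content-only
# check (say 5 clicks) on every remaining page.
full_runs = len(templates) * len(journeys)
content_checks = sum(templates.values()) - len(templates)
reduced_clicks = full_runs * clicks_per_test + content_checks * 5

print(f"{naive_clicks:,} clicks naively vs {reduced_clicks:,} after partitioning")
```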

Why were we testing?

Sky’s internal change departments work at a rapid pace. Directors, product owners, stakeholders, system architects, developers, content authors and subject matter experts were involved in a complete overhaul of all their troubleshooting content.

Use of this content is a key metric for Sky's contact centre agents: it should be consulted on every call, email or social media contact. Making sure the content was usable and correct was vital, because failure would cost hundreds of thousands of human hours a month, for workers and customers alike.

The UX, the UI and the content were all being redesigned concurrently.

We built tests that let everything be checked, signed off or flagged for faults in a single test set. Launching department by department brought dates forward, allowed for quick corrections, and reduced strain on the teams.

Our approach.

We built hundreds of bespoke testing sheets and trained some of the company's most experienced agents. We let them question the taxonomy and UI as they moved through the site to specific content pages, processes and articles, then ensured the content was checked for errors in layout and comprehension. We addressed grammatical concerns and spelling mistakes, and checked and fixed calls to action, links to other content, and images.

By categorising faults by severity and raising them for fixes mid-test, we could keep completing tests at a rapid rate. And because we worked directly with content authors and knowledge architects, faults were fixed and retested the same day.

Those who found a fault were assigned its retesting wherever possible. This brought great buy-in from the testers, empowered them to feel ownership of the process and helped build confidence in the system across the estate.
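As an illustration of that triage loop, here is a minimal sketch of the kind of fault record a testing sheet might capture, with the finder-retests rule built in. The field names and severity bands are invented, not Sky's actual categories:

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class Severity(Enum):
    BLOCKER = 1  # stops the current test; raise immediately
    MAJOR = 2    # wrong process, broken link or CTA; fix same day
    MINOR = 3    # typo or layout niggle; batch into the next pass

@dataclass
class Fault:
    page: str
    description: str
    severity: Severity
    found_by: str
    raised_on: date = field(default_factory=date.today)
    retester: str = ""

    def assign_retest(self) -> None:
        """Wherever possible, the tester who found the fault retests the fix."""
        self.retester = self.found_by

# Hypothetical example record.
fault = Fault("Broadband > No connection", "CTA links to a retired article",
              Severity.MAJOR, found_by="agent_042")
fault.assign_retest()
```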


After launch – Quality Assurance.


I worked with a small team of subject matter experts to build custom Quality Assurance checks, completed each time a piece of content was built or changed as part of ongoing improvement. No content goes live until the checks are complete. As a minimum, we always check…

Titles and Descriptions: Are they clear and helpful? Do they fit the Tone of Voice and findability criteria?
Hyperlinks: Do they work? Do they go where they should? Are any missing?
Images: Do they work? Are they correct and helpful? Are any missing?
Grammar: Does it follow our editorial guides?
Readability: Is it succinct? Can it be simplified without losing meaning?
Findability: Has it been written following the search guidance documents? Does it return well in search? Are keywords, stop words and synonyms in place?
Tone of Voice: Does it sound like it should? Is it in line with the TOV documentation?

The Benefits.

By building a scoring system based on the importance of these points, I ensured content was fit for purpose and let management track their authors' output with simple metrics. This created a reward structure based on pass/fail rates, and tracked training needs and opportunities for improvement for authors and content owners.
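As a sketch of how such a scoring system can work: weight each of the checks above by importance, score each piece of content as the percentage of weighted checks it passes, and apply a threshold for pass/fail. The weights and threshold below are invented for illustration:

```python
# A minimal sketch of a weighted pass/fail score over the QA checks above.
# Weights and threshold are invented; the real scoring reflected Sky's priorities.

WEIGHTS = {
    "titles_and_descriptions": 3,
    "hyperlinks": 5,
    "images": 2,
    "grammar": 2,
    "readability": 3,
    "findability": 4,
    "tone_of_voice": 2,
}

def qa_score(results: dict[str, bool]) -> float:
    """Return the percentage of weighted checks passed for one piece of content."""
    total = sum(WEIGHTS.values())
    earned = sum(w for check, w in WEIGHTS.items() if results.get(check, False))
    return 100 * earned / total

# One article's check results; a threshold turns the score into pass/fail,
# and per-author averages give management their simple metric.
results = {"titles_and_descriptions": True, "hyperlinks": True, "images": False,
           "grammar": True, "readability": True, "findability": True,
           "tone_of_voice": True}
print(f"{qa_score(results):.0f}% - {'pass' if qa_score(results) >= 90 else 'fail'}")
```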


Making mistakes is OK. Publishing them isn’t.




I love a chat.

Talk to me