Running a pilot study prior to the full-scale roll-out of a new assessment, training course, or any measure of competency is an essential component in the development of The Cyber Scheme’s services. As a Licensed Body, we are trusted by the UK Cyber Security Council to run pilots which reflect verifiable outcomes and provide valuable learning exercises. Beyond the conventional view of a pilot as a feasibility check on crucial elements, at The Cyber Scheme we also believe in the following principles, which we have termed our ‘pilot ethos’. They allow us to be as objective as possible when evaluating our findings:
- Define the scope. What exactly are we trying to establish with this pilot? When running pilots for Professional Titles, for example, we are looking at how to set the correct level for each title, and this can only be achieved through trial: trialling new assessment methodologies and tools, trying out different approaches to applications, varying the amount of advice given, adjusting practical details such as timings, and ensuring outcomes are assessed correctly.
- We don’t know the answers. How do we make sure that the outcomes we are looking for are met objectively? We must never assume we know the answer, or inadvertently influence an outcome through preconceptions or unconscious bias. We draw on our existing expertise and experience, of course, but we are always ready to hear alternative, sometimes opposing, viewpoints. This often results in additional trials to ensure every opinion is taken into account.
- Be driven by the data. If the candidate pool is drawn as widely as possible, so that candidates apply many different approaches based on their real-world experience, we can rely on the rich range of data they generate. We can explore different data points, analyse answers from many different angles and draw conclusions based on solid fact, not conjecture, as the sketch below illustrates. This means we can be confident in every decision we make about the permanent adoption of a process that has been piloted.
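As a purely illustrative sketch of what ‘cutting the data many ways’ can look like in practice, the Python snippet below aggregates hypothetical pilot results by candidate background. The column names, backgrounds and scores are invented for this example; this is not The Cyber Scheme’s data or tooling:

```python
# Illustrative only: hypothetical pilot-result data, analysed with pandas
# to compare outcomes across candidate backgrounds. All values are invented.
import pandas as pd

# Each row is one pilot candidate: their professional background, whether
# they met the trial pass threshold, and their overall score.
results = pd.DataFrame({
    "background": ["consultancy", "in-house", "consultancy", "academia",
                   "in-house", "academia", "consultancy", "in-house"],
    "passed":     [True, False, True, True, False, True, True, True],
    "score":      [78, 54, 81, 70, 59, 66, 85, 72],
})

# One of many possible cuts: pass rate and mean score per background,
# to check that a trial threshold behaves consistently across the
# widest possible candidate pool.
summary = results.groupby("background").agg(
    candidates=("passed", "size"),
    pass_rate=("passed", "mean"),
    mean_score=("score", "mean"),
)
print(summary)
```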
We continue to learn throughout an assessment process via robust feedback mechanisms, and we never expect to get it right first time.