The Corelan Certified Exploit Developer (CCED) exam is designed to validate real-world exploit development skills, not just the ability to complete tasks. The evaluation model is based on two equally important components: the ability to produce results and the ability to understand those results.

Candidates are required to complete practical challenges, study advanced technical material, and perform independent research. However, submitted solutions alone are not sufficient to pass. A dedicated validation phase ensures that candidates can explain, justify, and adapt their work, which makes it possible to distinguish genuine competence from externally assisted output.

The exam does not restrict the use of public resources or AI tools. Instead, it ensures that such tools cannot be used to pass without a clear and demonstrable understanding of the underlying concepts. The result is a balanced and rigorous certification process that reflects both practical capability and technical depth. For a detailed description of the grading methodology, scoring model, and validation process, please refer to the full document.
No. Completion alone is not sufficient. Candidates must demonstrate that they understand their solutions during the validation phase. Understanding accounts for a significant portion of the final score.
The validation phase is a technical interview where the candidate must explain and defend their work. It ensures that the submitted solutions reflect the candidate’s own knowledge and not just the ability to produce results.
The CCED exam does not restrict the use of AI tools, public resources, or other forms of technical assistance. Assistance from other individuals, however, is not permitted.

AI is increasingly part of everyday professional workflows, including cybersecurity research and exploit development. The exam is designed to reflect this reality and simulate real-world conditions, where practitioners are expected to use available tools effectively. That said, the objective of the certification is not to measure how effectively a candidate can use AI, but to assess their individual technical competence. For that reason, all submitted work must be supported by demonstrated understanding. During the validation phase, candidates are required to explain, justify, and adapt their solutions. This ensures that they can critically evaluate results, even when those results are produced or influenced by external tools.

In practice, this means:

✅ The use of AI or other resources does not negatively impact scoring
✅ Reliance on such tools without understanding will result in (significantly) reduced scores during validation
✅ Attempts to simulate or “fake” understanding will result in failing the exam

This approach ensures that the certification reflects genuine expertise, while remaining aligned with modern professional practices.
Each technical phase is evaluated based on both completion and understanding. Completion measures whether the work was delivered. Understanding measures whether the candidate can explain and justify that work. The final score is a weighted combination of the different phases.
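As an illustration only (the actual phase weights and balance between the two components are defined in the official grading document, not here), a weighted model of this kind can be written as:

$$\text{Final score} = \sum_{i=1}^{n} w_i \bigl( \alpha \, C_i + (1 - \alpha) \, U_i \bigr), \qquad \sum_{i=1}^{n} w_i = 1$$

where $C_i$ and $U_i$ are the completion and understanding scores for phase $i$, $w_i$ is that phase's weight, and $\alpha$ sets the balance between completion and understanding. All symbols here are illustrative assumptions; refer to the full grading document for the actual values.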
Partial progress can still receive credit if it demonstrates meaningful technical reasoning and a clear path toward a solution. Candidates are expected to explain what was achieved, what remains, and how they would proceed. The exam instructions provide more information on what is expected in this scenario. Providing partial results with only partial reasoning will very likely lead to failing that component of the exam.