My Plan for Assessing and Evaluating My Research Project
11/19/2025
As I reach the end of Week 3 in PROM05 Research Project Management, I’ve begun shaping a clearer picture of how I will assess and evaluate the outcomes of my research project. Understanding this early is important because the success of any research study doesn’t depend only on the artefact we produce, but on how effectively we measure, analyse, and reflect on the results.
In this blog post, I outline the types of data I plan to collect, the methods I will use, and the tools and techniques that will guide my evaluation process.
1. Deciding What Data to Collect
Before evaluating results, I need to define the evidence that will show whether my project has met its aims and objectives.
The data I plan to gather includes:
✔ Primary data
Collected directly from users, stakeholders, or the system I develop.
Examples: surveys, interviews, usability tests, system performance logs.
✔ Secondary data
Existing research or benchmark datasets to compare my results against.
Helps me evaluate whether my artefact performs at industry-standard levels.
This combination will give my project richer insight and more reliable findings.
2. Choosing My Assessment Methods
Different types of data require different assessment methods. Here are the methods most relevant to my project:
Quantitative Methods
Used when I need measurable, numerical outcomes.
Examples:
Surveys with rating scales
System performance metrics (speed, accuracy, error rates)
Statistical comparisons or benchmarks
These will help me determine how well the artefact performs.
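As a rough illustration of what those performance metrics might look like in practice, here is a minimal Python sketch that summarises hypothetical test logs; the log records, field layout, and numbers are invented placeholders, not data from my actual system.

```python
# Toy performance summary from hypothetical test logs.
# Each record: (response_time_ms, was_correct) - invented example data.
from statistics import mean

log = [(120, True), (95, True), (210, False), (130, True), (180, True)]

response_times = [t for t, _ in log]
accuracy = sum(1 for _, ok in log if ok) / len(log)

print(f"Mean response time: {mean(response_times):.1f} ms")  # 147.0 ms
print(f"Accuracy: {accuracy:.0%}")                           # 80%
print(f"Error rate: {1 - accuracy:.0%}")                     # 20%
```

Even a small summary like this makes it easy to track speed, accuracy, and error rates across repeated test runs.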
Qualitative Methods
Used to understand user experience and behaviour.
Examples:
Semi-structured interviews
Observations during testing
Thematic analysis of user feedback
These methods help me understand why certain results occur.
3. Techniques for Analysing My Findings
Once I’ve gathered my data, I’ll apply structured analysis techniques:
Thematic Analysis
For analysing qualitative feedback, identifying repeated patterns, concerns, or suggestions.
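To show the counting step of thematic coding in miniature, the sketch below tallies manually assigned codes across feedback comments; both the comments and the codes are invented examples, not real study data, and real thematic analysis involves far more interpretation than frequency counts.

```python
# Toy illustration of tallying manually assigned codes in thematic analysis.
# Comments and codes below are invented, not real participant feedback.
from collections import Counter

coded_feedback = [
    ("The interface felt cluttered", ["usability", "layout"]),
    ("Loading took too long", ["performance"]),
    ("Hard to find the export button", ["usability", "navigation"]),
    ("Crashed once during testing", ["reliability"]),
]

# Count how often each code appears, to surface candidate themes.
code_counts = Counter(code for _, codes in coded_feedback for code in codes)
for code, count in code_counts.most_common():
    print(f"{code}: {count}")  # e.g. usability appears twice
```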
Descriptive Statistics
For understanding averages, trends, and variance in quantitative data.
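For the quantitative side, Python's standard library covers the basics. This sketch computes the mean, median, and standard deviation of hypothetical survey ratings on a 1-5 scale; the numbers are placeholders, not collected data.

```python
# Descriptive statistics on hypothetical 1-5 survey ratings (placeholder data).
from statistics import mean, median, stdev

ratings = [4, 5, 3, 4, 4, 2, 5, 4]

print(f"Mean:    {mean(ratings):.2f}")   # 3.88
print(f"Median:  {median(ratings)}")     # 4.0
print(f"Std dev: {stdev(ratings):.2f}")  # spread around the mean
```

The same summaries are easy to reproduce in Excel or Google Sheets, which is where I expect to do most of this in practice.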
Comparative Evaluation
To compare my artefact’s performance with alternatives or pre-existing solutions.
Triangulation
Cross-checking results from different methods to increase reliability.
4. Tools and Software I Will Use
To support my analysis and evaluation process, I plan to use:
Excel / Google Sheets – for data organisation and statistical summaries
NVivo / manual coding – for qualitative thematic analysis
Survey tools (Google Forms, Microsoft Forms) – for collecting primary data
Project management tools (Trello, Gantt chart software) – for monitoring progress
Evaluation frameworks – such as usability heuristics or performance benchmarks
These tools will help me stay organised, systematic, and efficient.
5. Ensuring Quality and Validity
To keep my findings credible, I will use:
Pilot testing – to refine tools such as surveys before full data collection
Clear criteria for success – based on my objectives
Multiple data sources – to cross-validate results
Transparent documentation – to ensure repeatability and reliability
This will strengthen the academic value of my research outcomes.
Conclusion
As my project moves closer to the implementation phase, having a clear plan for assessment and evaluation gives me confidence—and protects the quality and credibility of the research. By combining qualitative and quantitative methods, using appropriate tools, and applying robust analysis techniques, I will be able to measure the performance and impact of my artefact and reflect critically on my findings.
This evaluation strategy will play a major role in the final dissertation, ensuring my final work is not only practical but also academically rigorous.