The results of the base validity measurement indicated that the tests of 61.11% of courses had base validity. The 38.89% prevalence of evaluations lacking base validity reveals a weakness of the evaluation system, from the base validity perspective, in the specialized theoretical courses of midwifery. Similarly, Najar reported that only 10 teachers reviewed base validity and only 7.5% re-checked the criterion validity of their questions, even though 41.15% of teachers had sufficient knowledge of base validity and 42.8% had sufficient knowledge of criterion validity. The high prevalence of questions without base validity points to faults in the design of the multiple choice questions; it also shows that the designers of these tests paid too little attention to crafting suitable answer options, the ones good students would select. Question designers therefore need to remove these faults by evaluating the base validity of their questions. Overall, the results show that both the criterion and base validity of the questions suffered from weaknesses, and that item analysis and its improvement require an ongoing link between the technical structures of education (such as the medical education development center) and the teachers.
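The item analysis recommended above is often carried out with two classical indices: the difficulty index (the proportion of examinees answering an item correctly) and the upper-minus-lower discrimination index. The sketch below is a minimal illustration of these two standard indices, not the specific procedure used in the study; the function names and the 27% group fraction are our own conventional choices.

```python
def difficulty_index(responses):
    """Proportion of examinees who answered this item correctly (0..1).

    responses: list of 0/1 flags, one per examinee, for a single item.
    A value near 0.5 is usually considered ideal difficulty.
    """
    return sum(responses) / len(responses)


def discrimination_index(total_scores, responses, frac=0.27):
    """Upper-minus-lower discrimination index for one item.

    total_scores: each examinee's total test score.
    responses:    0/1 correctness flags for this item, same order.
    frac:         size of the upper and lower groups (27% is a
                  common rule of thumb, assumed here).

    Returns the difference between the item's proportion correct in
    the top-scoring group and in the bottom-scoring group; values
    above ~0.3 usually indicate a well-discriminating item.
    """
    paired = sorted(zip(total_scores, responses),
                    key=lambda t: t[0], reverse=True)
    k = max(1, round(frac * len(paired)))
    upper = [r for _, r in paired[:k]]   # best-performing examinees
    lower = [r for _, r in paired[-k:]]  # worst-performing examinees
    return sum(upper) / k - sum(lower) / k


# Example: 10 examinees, one item
scores = [95, 90, 85, 80, 75, 70, 65, 60, 55, 50]
item = [1, 1, 1, 0, 1, 0, 0, 1, 0, 0]
p = difficulty_index(item)              # 0.5
d = discrimination_index(scores, item)  # 1.0 - 1/3
```

An item with high difficulty but near-zero (or negative) discrimination is exactly the kind of flawed question that the validity review described above is meant to catch.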
We found this article on . It demonstrates some fairly clear (and humorous) fallacies in designing multiple choice questions. We think any of these can apply to writing questions for your game show as well.
Designing a Multiple Choice Question
How should multiple choice tests be scored and graded, in particular when students are allowed to check several boxes to convey partial knowledge? Many strategies may seem reasonable, but we demonstrate that five self-evident axioms are sufficient to completely determine the correct strategy. We also discuss how to measure the robustness of the resulting grades. Our results have practical advantages and also suggest criteria for designing multiple choice questions.