Designing a Conceptual Model for Academic Assessment at the Open and Distance University
Subject Areas: Distance Education
Fahimeh Sadat Haghighi 1, Mehran Farajollahi 2
1 - PhD in Distance Education Planning, Payame Noor University, Tehran, Iran
2 - Associate Professor, Department of Educational Sciences, Payame Noor University, Tehran, Iran
Received: 1393/01/25
Accepted: 1393/07/25
Published: 1394/02/11
Keywords:
Model, Higher Education, Test, Open and Distance University, Academic Achievement Assessment
Abstract:
The aim of this research was to design and validate a conceptual model for the examinations of the open and distance university. A mixed-methods (quantitative and qualitative) design was used. The research population consisted of the faculty members of Payame Noor University, Iran, in the 1392-93 (2013-14) academic year. The quantitative sample size was set at 286 according to the Krejcie and Morgan table, with participants selected randomly; the qualitative sample comprised 7 item-design experts selected through snowball sampling. The research instruments were a researcher-made questionnaire of 58 items with a reliability of 0.964 and a semi-structured interview. The five elements of philosophy, objectives, design, implementation, and evaluation of results, together with the three indicators of each element, were examined using descriptive statistics and one-sample t-tests. The results showed that the philosophy element had a mean at the criterion level (9) while the other elements had means above the criterion, and all five elements were significant at the 99% level. Within the philosophy element, the indicators of development vision, philosophy of higher education, and the open and distance university had means at the criterion level (3.6) and were significant at 99%. Within the objectives element, the indicators of university objectives and of branch, field, and department objectives had means above the criterion and were significant at 99%. Within the design element, the indicators of assessment regulations and test quality had means above the criterion and were significant at 99%, but the item-design standards indicator had a mean at the criterion level that was not significant. Within the implementation element, the policy and administration-method indicators had means at the criterion level and were significant at 99%, and the administration-conditions indicator had a mean above the criterion and was significant at 95%. Within the evaluation element, the two indicators of marking and scoring had means at the criterion level and the analysis-and-feedback indicator had a mean above the criterion; all three indicators were significant at 99%. The findings showed that this model can be used in designing open and distance university examinations.
English Abstract:
The aim of the study was to design and validate a conceptual model for academic tests at open and distance universities. The research adopted a mixed quantitative (descriptive survey) and qualitative approach. The statistical population included all faculty members of Payame Noor University, Iran, in the 2013-14 academic year. The quantitative sample comprised 286 people, selected randomly based on the Krejcie and Morgan table, and the qualitative sample comprised 7 experts in test development selected by the snowball method. A researcher-made questionnaire of 58 questions with a reliability of 0.964 and a semi-structured interview served as the research tools. The five elements of philosophy, objectives, design, implementation, and analysis of results, along with the three indices of each element, were analyzed through descriptive statistics and the one-sample t-test. The results showed that the philosophy element had a mean at the criterion level (9), the other elements had means higher than the criterion, and all five elements were significant at the 99% level. The indicators of development vision, philosophy of higher education, and the open and distance university had means at the criterion level (3.6) and were significant at 99%; the indices of university goals and of branch, field, and department objectives had means higher than the criterion and were significant at 99%; the assessment-rules and test-quality indices in test design had means above the criterion and were significant at 99%, but the question-development standards index had a mean at the criterion level which was not significant. The policy and administration-method indices in the implementation stage had means at the criterion level and were significant at 99%, while the administration-conditions index had a mean higher than the criterion and was significant at 95%. The two indices of correction and grading in the evaluation element had means at the criterion level, the analysis-and-feedback index had a mean higher than the criterion, and all three indices were significant at 99%. The results showed that this model can be used in the design of open and distance university exams.
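To make the analysis described above concrete, the following is a minimal sketch in Python of a one-sample t-test against a fixed criterion value, the kind of comparison reported for the model's elements and indicators. It is not the authors' analysis code: the scores are simulated and the names philosophy_scores and CRITERION are hypothetical; only the sample size (286) and the criterion values (9 for an element, 3.6 for an indicator) come from the abstract.

import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)

# Hypothetical composite scores for the "philosophy" element from 286 respondents;
# the real questionnaire data are not published in this abstract.
philosophy_scores = rng.normal(loc=9.3, scale=1.5, size=286)

CRITERION = 9.0  # criterion value for an element (3.6 would be used at the indicator level)

# One-sample t-test of the sample mean against the criterion value.
t_stat, p_value = stats.ttest_1samp(philosophy_scores, popmean=CRITERION)

print(f"mean = {philosophy_scores.mean():.2f}, t = {t_stat:.3f}, p = {p_value:.4f}")
if p_value < 0.01:
    print("difference from the criterion is significant at the 99% level")
elif p_value < 0.05:
    print("difference from the criterion is significant at the 95% level")
else:
    print("difference from the criterion is not significant")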