Presentation Title

Leveling the Field: Developing a Coding System to Identify Cultural Bias in International Test Comparisons

Faculty Mentor

Dr. Guillermo Solano-Flores

Start Date

18-11-2017 2:00 PM

End Date

18-11-2017 2:15 PM

Location

15-1822

Session

Social Science 3

Type of Presentation

Oral Talk

Subject Area

Education

Abstract

While it is impossible to ensure complete equity across International Large Scale Assessments (ILSAs), there are ways to minimize potential sources of bias. One is to be sensitive to the cultural reality of the target population. We know from research on cultural validity that one's social reality largely shapes how one makes sense of the world, or in this case, a test item. To examine cultural validity in ILSA exams, we created a coding system that screens for potential sources of cultural bias that could create inequity between countries or cultural groups. By reviewing mathematics and science test items from the Programme for International Student Assessment (PISA), we sought to identify error types and then created categories to capture these potential sources of bias. We identified nine categories in which cultural bias could reside. A future step for this project is to share the coding system with ILSA test developers so they can apply the codes and provide input on their validity and effectiveness.

Summary of research results to be presented

Of the 116 mathematics and science test questions reviewed from the PISA exam, we identified nine types of errors, categorized as follows: Sociocultural Context; Figures of Speech; Context for Meaning Making; Unfamiliarity; Syntax and Structure; Lack of Directive/Descriptive Information; Format; Redundancy/Unnecessary Item Description; and Symbols and Punctuation. The first four categories identify test questions whose content carries a form of cultural bias. These include questions that present a scenario that may not be relevant to a student's everyday experience, such as using the internet, walking through a revolving door, or knowing what a conifer is. Such questions may be biased because they fall outside the student's social reality and cultural frame of reference. The remaining five categories capture errors in the structure and format of the test item rather than its content. The concern with these errors is that they could still introduce cultural bias that affects some students more than others, for example, through differences in how countries use symbols or units of measurement. We hope to offer these codes to ILSA test developers for further review of whether the codes correctly identify the errors found in our initial research.
