Homework #4: Blog Post on O’Neil / DIKW

First Example: The Teacher Selection WMD from the chapter Intro

In this example, I will be deconstructing the teacher selection WMD from the introduction chapter. This algorithm, called IMPACT, was designed to separate “good” and “bad” teachers based on a scoring system measuring how effectively they teach math and language skills. At the first level, data, the algorithm takes in a teacher’s IMPACT score. At the second level, information, the algorithm knows that a low IMPACT score designates a teacher as “bad”. At the third level, knowledge, the algorithm has determined that the teacher it is evaluating has received a low IMPACT score. At the fourth and final level, wisdom, the people interpreting the scores believe the “right” thing to do is to label that teacher as “bad” and fire them. The issue with this algorithm is that it does not account for outside factors that can affect how a student tests, such as home and family issues, disabilities, and bullying, which cannot necessarily be calculated by a computer program. When test results decrease, it cannot determine how much of the decline is due to the teacher and how much is due to something else. As the text says, it is nearly impossible to program in every individual factor that can affect the scores. In addition, this type of algorithm needs to be adjusted as issues arise, but because school systems simply fire the teachers, it is unlikely that such changes will be made.
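The DIKW flow described above can be sketched in a few lines of Python. This is a hypothetical illustration only: the function name, threshold, and scoring scale are invented for the sake of the example and are not drawn from the actual IMPACT system.

```python
# Hypothetical sketch of the DIKW pipeline for the teacher selection WMD.
# The cutoff value is invented; IMPACT's real scoring is far more complex.

def classify_teacher(impact_score, cutoff=50):
    # Data: the raw IMPACT score the system receives.
    data = impact_score
    # Information: the score is compared against a cutoff.
    is_low = data < cutoff
    # Knowledge: a low score designates the teacher as "bad".
    label = "bad" if is_low else "good"
    # "Wisdom": fire teachers labeled "bad" -- note there is no input
    # anywhere for outside factors like home life, disability, or bullying.
    action = "fire" if label == "bad" else "retain"
    return label, action

print(classify_teacher(42))  # a teacher scoring below the cutoff
print(classify_teacher(80))  # a teacher scoring above it
```

The point of the sketch is how little the pipeline sees: every factor that isn’t encoded as a number simply does not exist for the algorithm.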

 

Second Example: The Employee Selection WMD from the chapter Ineligible

In this example, I will be deconstructing the employee selection WMD from the chapter ‘Ineligible’. This algorithm was designed to test prospective candidates similarly to the “Five Factor Model” test, which measured “extraversion, agreeableness, conscientiousness, neuroticism, and openness to new ideas”. Using Kyle, a young man with bipolar disorder, as an example, we can see how the algorithm can go wrong. At the first level, data, the algorithm tests for certain characteristics in job applicants. At the second level, information, the algorithm knows that certain characteristics are deemed undesirable, such as mental illnesses or disorders. At the third level, knowledge, the algorithm has determined that Kyle has bipolar disorder, an undesirable trait because it could possibly disrupt a working environment. At the fourth level, wisdom, the algorithm decides that the “right” thing to do is to deem Kyle unfit to work at the corporation he applied to. Where the algorithm goes wrong is in sorting by these “desirable characteristics”. The issue with this algorithm is that it bases work eligibility on one’s mental state. Not only is that illegal, but it also ignores factors that an in-person interview would uncover, such as education level and previous work experience.
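The screening logic described above can be sketched the same way. The five trait names come from the Five Factor Model mentioned in the chapter, but the scoring scale, the flagged trait, and the cutoff are all invented here for illustration, not how the real hiring test worked.

```python
# Hypothetical sketch of the employee screening WMD. Trait scores are
# assumed to be on a 0-1 scale; the flag on high neuroticism and the 0.7
# cutoff are invented for the example.

def screen_candidate(traits):
    # Data: the candidate's personality-test responses, scored per trait.
    # Information: certain traits are deemed "undesirable".
    # Knowledge: this candidate scores high on a flagged trait.
    flagged = traits.get("neuroticism", 0) > 0.7
    # "Wisdom": deem the candidate ineligible -- education, work history,
    # and anything an interview would surface never enter the decision.
    return "ineligible" if flagged else "eligible"

# A candidate like Kyle, flagged on a single trait despite everything
# else about him being unknown to the system:
print(screen_candidate({"extraversion": 0.6, "neuroticism": 0.9}))
```

As with the teacher example, the harm is structural: the function only ever receives trait scores, so eligibility is decided entirely on mental state no matter what else is true of the applicant.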
