
Algorithms are logical-mathematical functions, rules, and other processes that make predictions. While algorithms have improved our lives in many ways, we must ensure transparency in discussions about how they are deployed in our daily lives. In many cases, algorithms produce distorted predictions and discriminate against specific groups in our society. Conversations about algorithms often exclude the designers and creators of these mathematical functions, yet it is essential to remember that the thoughts and perceptions of those designers and creators shape the results their algorithms produce.
The reality is that these logical-mathematical functions and rules can be fundamentally distorted by the biases, ideological assumptions, and prejudices of their developers. Prediction algorithms built on crime data, for example, often over-represent racial and ethnic minorities. One study found that a risk-assessment algorithm frequently labeled Black defendants as high risk of recidivism when they were in fact low risk, while white defendants who were actually high risk were mostly labeled low risk (Angwin et al., 2016). These biases create further injustices toward marginalized groups in our society by perpetuating systemic racism.
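To make this kind of disparity concrete, here is a minimal Python sketch of the sort of audit behind such findings. The data and group labels are entirely invented for illustration; the point is simply how one compares, group by group, how often people are wrongly labeled high risk (false positives) versus wrongly labeled low risk (false negatives).

```python
# Toy illustration (hypothetical data): measuring how differently a risk score
# errs for two groups, in the spirit of the analysis described above.

# Each record: (group, predicted_high_risk, actually_reoffended)
records = [
    ("A", True, False), ("A", True, False), ("A", True, True), ("A", False, False),
    ("B", False, True), ("B", False, True), ("B", True, True), ("B", False, False),
]

def error_rates(rows):
    """Return (false_positive_rate, false_negative_rate) for the given rows."""
    fp = sum(1 for _, pred, actual in rows if pred and not actual)
    fn = sum(1 for _, pred, actual in rows if not pred and actual)
    negatives = sum(1 for _, _, actual in rows if not actual)  # did not reoffend
    positives = sum(1 for _, _, actual in rows if actual)      # did reoffend
    fpr = fp / negatives if negatives else 0.0
    fnr = fn / positives if positives else 0.0
    return fpr, fnr

for group in ("A", "B"):
    rows = [r for r in records if r[0] == group]
    fpr, fnr = error_rates(rows)
    print(f"Group {group}: false positive rate = {fpr:.0%}, false negative rate = {fnr:.0%}")
```

On this toy data, group A is wrongly flagged as high risk far more often, while group B is wrongly cleared as low risk far more often, which is the shape of the imbalance the study reported, even though the numbers here are made up.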
Within the educational landscape, algorithmic methods are being deployed to place students in schools, and a biased system can have devastating effects. In Boston, for example, the Unified Enrollment (UE) system used to assign students to schools was found to exacerbate segregation by locking many Black and Latino students into low-performing schools (Harrison, 2019). Such systems create barriers for students of color and keep them from achieving their full potential.
Algorithms are also well established in patient care, and the implementation of mathematical models has produced substantial gains in health outcomes. Nevertheless, algorithms in the health sector can perpetuate discriminatory practices. In 2007, I applied for a mortgage in England. To qualify, I had to undergo a medical examination and an HIV test. Struck by the rigor of the process, I asked my white colleagues about the requirements; to my surprise, they told me they had never heard of anyone having to submit their HIV status for a mortgage. My mortgage was approved, but at a higher interest rate. The reality is that algorithmic decision-making can lead to inadvertently discriminatory decisions.