Resist, Regulate, Reimagine, and Reinforce: How Social Workers Can Advocate for Digital Inclusion


 
 
Algorithms have grown more complex, giving rise to artificial intelligence (AI) built with the hope that it will match human decision making. AI is now being used behind the scenes in areas such as healthcare, housing and employment, and criminal justice. These computer formulas were created by a privileged set of individuals who often prioritized profit and growth over privacy and protection. This has led to gross injustices that have prevented marginalized communities from receiving care, finding jobs, or gaining freedom. Social workers must be able to digitally advocate for their clients. Resisting these technologies, regulating them through legislation, reimagining the role one can play, and reinforcing what is already experienced in day-to-day interactions with AI are all ways social workers can be involved in creating a world that is digitally inclusive.
 
 


At its simplest, an algorithm is a step-by-step method for getting results (Ausiello, 2013). It was not until the mid-1950s, with algorithms as the structure and data as the substance, that technologists started to use AI as a substitute for human analysis and interpretation; data collected about people was in turn used to train machine learning (ML) algorithms (Anyoha, 2017). Computers became more adept at problem solving and interpreting language, and as they became cheaper, more institutions became involved in research and development.
Today, we are seeing AI and big data, a term used to describe the vast amount of online information that companies are able to garner on an individual, come together (Bean, 2017). Digital footprints, consisting of all the data that follows a person online, are thus being used in areas such as employment and national security (Anyoha, 2017; Benjamin, 2019).

AREAS OF IMPACT
Technologists wrongfully assumed that a computer would eliminate bias because it is based on formulas and mathematical calculations. Algorithms were promoted as a way to standardize decision-making, especially in areas such as criminal justice, in which judges were making subjective decisions (Eckhouse et al., 2019). Instead, existing biases are being reproduced and reinforced by these systems in what experts are calling "The New Jim Code" or "Coded Bias" (Benjamin, 2019; Buolamwini et al.). Developers assumed that by not including race as a data point, the machine would not produce racist results. Yet geographic data, such as the area someone lives in, is often a proxy for race due to residential segregation and redlining (Eckhouse et al., 2019). By not being familiar with this history, technologists created a system that reinforced existing biases. Accountability must be taken by companies instead of operating under the guise of expertise.
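To make the proxy mechanism concrete, the following minimal Python sketch shows how a "race-blind" scoring rule that relies on geographic data can still split along racial lines once segregation and redlining are baked into the historic record. All ZIP codes, approval rates, and data here are synthetic, invented purely for illustration.

```python
# Hypothetical sketch: a lending score that never sees race can still
# encode it through a correlated proxy such as ZIP code.
import random
from collections import defaultdict

random.seed(0)

# In a residentially segregated city, ZIP code strongly tracks race.
SEGREGATED_ZIPS = {"60601": "white", "60621": "Black"}

def make_applicant():
    zip_code = random.choice(sorted(SEGREGATED_ZIPS))
    race = SEGREGATED_ZIPS[zip_code]  # segregation links the two
    # Redlining depressed past lending in one ZIP, so the historic
    # "training data" records fewer approvals there.
    approved = random.random() < (0.8 if zip_code == "60601" else 0.4)
    return {"zip": zip_code, "race": race, "approved": approved}

applicants = [make_applicant() for _ in range(10_000)]

# "Learn" each ZIP's historic approval rate from the biased record.
counts = defaultdict(lambda: [0, 0])  # zip -> [approved, total]
for a in applicants:
    counts[a["zip"]][0] += a["approved"]
    counts[a["zip"]][1] += 1
rate = {z: ok / total for z, (ok, total) in counts.items()}

def score(applicant):
    """Race-blind rule: score is the applicant's ZIP's historic approval rate."""
    return rate[applicant["zip"]]

# Scores split along racial lines even though race never enters the model.
for race in ("white", "Black"):
    group = [score(a) for a in applicants if a["race"] == race]
    print(race, round(sum(group) / len(group), 2))  # ~0.80 vs ~0.40
```

The design point is that removing the sensitive attribute does nothing when another feature carries the same information.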
The following sections outline three areas where algorithms have perpetuated oppression: housing and employment, healthcare, and criminal justice.

HOUSING AND EMPLOYMENT

CREDIT SCORE AND HOMEOWNERSHIP
Algorithms have been used to determine credit scores since the 1980s (Trainor, 2015). Before that, lenders would keep their own records of who they believed was "trustworthy" enough to receive a loan, often barring marginalized groups. Today, credit scores shape much of everyday life, including loan eligibility, home ownership, utility rates, and social standing.
The major credit bureaus, Equifax, Experian, and TransUnion, use data such as bill payment history, employment information, and current debt to determine one's score. They also factor in child support payment history, arrest and incarceration records, and app usage. The companies have not released information on what metrics are used to determine the weight of each category (Hao, 2020).
With the rise of big data, smaller credit score companies are beginning to use data outside the typical sources used by larger companies. This includes social media information (likes, friends, locations, and posts), the amount of time spent on the company's website, and the percentage of income spent on rent in a given geographic location (Hurley et al.). Some hire social media or data scraping companies (entities one can pay to collect vast amounts of information from people online) in order to build their reports. The Fair Credit Reporting Act of 1970 outlines what data may be used in credit decisions, but technology is changing and becoming more integrated in our everyday lives. Social media likes, for example, are not mentioned as an accepted or prohibited datum anywhere in the bill.
There is a large racial discrepancy between those with good and bad credit (Singletary, 2020). This directly correlates with the biased history of credit scoring and the systemic oppression inherent in the rating; redlining, for instance, marked predominantly Black areas as "risky" for lending (Lerner, 2020). Since credit scoring focuses on ownership through mortgages, and the majority of Black Americans do not own homes, they lack this assurance to add into the algorithm. If rental payments, however, were taken into consideration, far more Black Americans could build credit.
Homeownership is impacted by AI in other ways. Without privacy regulations or civil rights laws governing AI, lenders can screen candidates using racial proxy data, resulting in digital discrimination and continued historic exclusion. For example, Black and Latinx individuals are charged more for home loans, a markup that extracts as much as $500 million annually from Black and Latinx borrowers. Despite Foggo et al. reporting that lending discrimination is on a "steady decline," the authors did not indicate how that was measured (2020). Most importantly, any lending discrimination directly impacts the potential to buy a home, one of the main ways a family can build generational wealth that can be passed down to children or other dependents.
Achievements such as the Fair Housing Act of 1968, which disallowed discrimination in housing, are failing to protect those they were meant to. Algorithms are often shielded as trade secrets, making it nearly impossible to prove someone's civil rights were violated.

HIRING
AI is also being used by companies to accelerate the hiring process. Automated resume screening allows far more applicants to be considered for a position than, for example, when an individual in the HR department had to manually review resumes. However, AI is also being used for facial recognition to deduce applicants' personalities based on their expressions and appearance (Castelvecchi, 2020), and for scans of a candidate's social media platforms, such as LinkedIn or Facebook. The practice of discerning personality traits from face recognition algorithms has been proven generally inaccurate, but some companies are still deploying this technology (Wells, 2020).
In addition, facial recognition technology has been shown to be less accurate on dark-skinned faces and women's/femmes' faces (Buolamwini et al.), with members of these intersecting communities often registering as "non-human" to these computer systems. This will be discussed more in a later section. Gender bias in hiring algorithms was most notably reported in 2018, when Amazon's experimental recruiting tool was found to penalize applicants who had "women's" in their application, that is, attended a women's college or were in a women's group (Vincent, 2018). According to sources at the company, this was because the algorithm was trained on the resumes of existing employees; since most of those employees are men, the algorithm decided that applications with the word "woman" or "women" should be rejected, reinforcing the preexisting gender bias at the company. By learning from data based on past hiring, the system automated this unconscious preference in Silicon Valley.
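A hypothetical toy example of how this happens: the "screener" below is not any company's actual system, just an invented word-frequency scorer trained on made-up, historically biased decisions, yet it reproduces the reported behavior of penalizing the token "women's."

```python
# Hypothetical sketch: a toy resume screener trained on historically
# biased hiring decisions learns to penalize the token "women's".
# Resumes, outcomes, and the scoring rule are invented for illustration.

# Past decisions from a workforce that mostly hired men.
training_data = [
    ("captain of chess club", True),
    ("led robotics team", True),
    ("captain of women's chess club", False),  # biased historic outcome
    ("led women's robotics team", False),      # biased historic outcome
]

# Naive "learning": a word's weight is its approval rate in past decisions.
word_outcomes = {}
for resume, hired in training_data:
    for word in resume.split():
        word_outcomes.setdefault(word, []).append(hired)

weights = {w: sum(o) / len(o) for w, o in word_outcomes.items()}

def screen(resume):
    """Average the learned word weights; unseen words get a neutral 0.5."""
    words = resume.split()
    return sum(weights.get(w, 0.5) for w in words) / len(words)

# Identical qualifications, different scores:
print(screen("led robotics team"))          # 0.5
print(screen("led women's robotics team"))  # 0.375: "women's" drags it down
```

Because every historic rejection co-occurred with "women's", the model treats the word itself as disqualifying, exactly the pattern the 2018 reporting described.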

INSURANCE COSTS
Lifestyle data, the food you eat or how much TV you watch, is now readily available as industries collect information they hope to use to keep you as a customer. In addition, many insurance companies are also using this data to determine a patient's risk of incurring high medical costs (Allen, 2019). Concerns are mounting over whether such data should factor into one's health insurance rate. In addition, the accuracy of the predictions is questionable, particularly for certain groups of people.
The Health Insurance Portability and Accountability Act (HIPAA) only covers medical information that was collected through a "covered entity," which limits the bill's protections to data gathered at places such as health and mental health facilities. In recent years, health insurance companies such as Aetna and UnitedHealth have been collecting (either independently or through contracts) personal or lifestyle data such as social media activity, hours spent watching TV, education status, place of residence, and net worth (Allen, 2019).
By raising health insurance costs based on certain social demographics, insurers trap communities in a cycle of poor health and poverty, as the assessment is based on metrics they cannot change. In addition, by relying on records shaped by systemic discrimination, health insurance companies perpetuate racist oppression. Thus, the algorithmic results are inherently biased.

AT-HOME CARE HOURS
The use of algorithms to make healthcare decisions is becoming more widespread as industries try to streamline processes in order to cut time and cost while also eliminating human bias. In Arkansas, software was implemented to determine how many hours of at-home care Medicaid recipients would receive. Previously, their assessments were done by individuals who would make decisions that favored some and were arbitrary with others.
After the algorithm, which was developed by a group of health researchers at InterAI, was implemented, many people had their hours of assistance cut (Lecher, 2018). Legal Aid of Arkansas started receiving calls from recipients whose health was deteriorating due to lack of care.
When the president of InterAI was interviewed about transparency in the algorithm's metrics, he argued that one should trust that "a bunch of smart people determined this is the smart way to do it" (Lecher, 2018). However, during court proceedings it was revealed that the wrong calculation was being used for at least one case. This kind of error could have been caught if someone had overseen the deployment and checked all results.

POTENTIAL ILLNESSES
A risk-assessment tool used by large health systems in the United States was shown to give sick Black patients the same score it gave to healthier white patients. The algorithm did not use race as one of its data points; it did, however, use insurance claims data over a certain year (information such as age and sex, insurance type, diagnosis, medications, and detailed costs). In the end, it accurately predicted what people would spend on healthcare the following year; it did not predict who was more in need of improved care due to adverse health conditions. Proxies for race are often unknowingly used in developing algorithms, which then produce biased results; Ruha Benjamin refers to this as the "New Jim Code" (Benjamin, 2019). Without proper knowledge of systemic racism, the individuals working for companies such as InterAI continue to build systems that hand power to the privileged. The notion that healthcare should be provided to an individual based on the amount of money they are able to spend inherently favors those with greater capital.
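The following sketch illustrates the cost-as-proxy failure described above. The patients, spending figures, and cost model are synthetic placeholders, not the audited tool itself.

```python
# Hypothetical sketch of the cost-as-proxy problem: a model trained to
# predict healthcare *spending* ranks patients by access to care, not by
# sickness. All numbers are synthetic, purely for illustration.

# Two equally sick patients; unequal access means unequal past spending.
patients = [
    {"name": "A", "chronic_conditions": 4, "past_spending": 12_000},  # good access
    {"name": "B", "chronic_conditions": 4, "past_spending": 5_000},   # barriers to care
]

def predicted_cost(patient):
    # A cost model learns that past spending predicts future spending.
    return 1.05 * patient["past_spending"]

# Ranking by predicted cost under-prioritizes the patient whose low
# spending reflects barriers to care rather than better health.
for p in sorted(patients, key=predicted_cost, reverse=True):
    print(p["name"], "conditions:", p["chronic_conditions"],
          "predicted cost:", round(predicted_cost(p)))
```

The prediction can be perfectly accurate about dollars while being systematically wrong about need, which is the bias the audit uncovered.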

RISK ASSESSMENT
In the 1980s, lawmakers across the United States passed legislation for harsh, mandatory minimum sentencing in order to eliminate human bias in decision making (Forman, 2017). This meant an individual had to spend a certain amount of time in prison based on the crime they committed. With the crack-cocaine epidemic ravaging Black communities, the prison industrial complex (PIC) in the U.S. has in part expanded because of this legislation, as the number of people incarcerated rose from hundreds of thousands to millions over the following decades (The Sentencing Project, 2021). The perceived need for improved criminal risk assessment therefore became present, and private companies started creating algorithms in order to more accurately predict the probability of a defendant reoffending. One widely used tool produces a label of low, medium, or high risk, and has been shown to reproduce racial disparities in its results (Angwin, 2016): Black defendants are twice as likely as white defendants to be falsely labeled high risk. The biased results are not the only problem. These labels shape decisions about bail, sentencing, and parole. In addition, these results are shown to judges without any explanation of the data that went into them or the formula used.
In 2016, one defendant challenged a Wisconsin court's ruling and the label produced by the risk assessment. The judge decided that because the algorithm was not deterministic in the ruling, there was no violation of due process. This, however, goes against the stated purpose of using an algorithm, eliminating human bias: by adding the judge's input on top of the low, medium, or high result, and by not using the algorithm in a deterministic way, its objectivity (assuming the results were objective, which they are not) is not being employed. At the end of the day, a judge, a human with bias, is making decisions informed by inaccurate algorithms.
In the Wisconsin case, the judge declared that since the defendant was able to see the results of the algorithm, there was nothing else that needed to be revealed (Eckhouse et al., 2019). However, the data, metrics, and formulation all impact the algorithm's output and can all be sources of bias (Miron, 2020). As stated previously, static information (such as criminal history and demographics) has been shown to correlate with the social factors of sensitive groups more so than dynamic information (current substance use, peer rejection, hostile behavior).

FACIAL RECOGNITION
Facial recognition technology has been used by the criminal justice system for decades (Najibi, 2020). In addition, TSA's advanced imaging technology present at airport security requires agents to choose a setting before a traveler can enter the machine: man or woman. This means anyone who does not present within that binary is flagged. Facial recognition is being deployed across the widest variety of industries, including law enforcement, employers, manufacturers, and government housing authorities (Klosowski, 2020). In 2018, the Gender Shades study found that commercial classifiers were least accurate on darker-skinned women; compared with their near-perfect performance on lighter-skinned males, the disparity is astonishing.
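A minimal sketch of a disaggregated audit in the spirit of Gender Shades follows; the accuracy figures are synthetic, not the study's actual numbers. The point is that a single overall accuracy can look acceptable while hiding a large per-subgroup gap.

```python
# Hypothetical disaggregated audit: overall accuracy masks subgroup gaps.
# The (subgroup, correct?) records below are synthetic, for illustration.
from collections import defaultdict

results = (
    [("lighter-skinned male", True)] * 99 + [("lighter-skinned male", False)] * 1 +
    [("darker-skinned female", True)] * 65 + [("darker-skinned female", False)] * 35
)

by_group = defaultdict(lambda: [0, 0])  # subgroup -> [correct, total]
for group, correct in results:
    by_group[group][0] += int(correct)
    by_group[group][1] += 1

overall = sum(c for c, _ in by_group.values()) / sum(t for _, t in by_group.values())
print(f"overall accuracy: {overall:.0%}")      # looks acceptable: 82%
for group, (correct, total) in by_group.items():
    print(f"{group}: {correct / total:.0%}")   # reveals a 99% vs 65% gap
```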
However, the impact of this bias is more frightening. In tests conducted on these systems, the biased results reinforce the historic over-policing of the Black community. Even a perfectly accurate algorithm would not solve the problem of over-policing; in fact, it might exacerbate it. During the era of slavery, "lantern laws" required enslaved people to carry a light by their faces in order to remain visible (Najibi, 2020). This same tracking of Black individuals could thus be done by high-resolution cameras disproportionately located in certain neighborhoods, which capture images and feed them into databases.

DIGITAL INCLUSION: WHAT CAN SOCIAL WORKERS DO?
Even as technology expands and overtakes many human jobs, social workers are here to stay. According to a 2015 study done by NPR, mental health workers are among the professions least likely to be automated by machines. Wherever algorithms cause harm, there will be social workers ready and able to advocate.
According to the NASW Code of Ethics, social workers must challenge social injustice and address social problems (NASW, 2021). With technology companies often unknowingly perpetuating systemic oppression, whether in housing, healthcare, or criminal justice, social workers have the responsibility to advocate for those targeted by these practices. The following outlines current models addressing algorithmic harm and ways social workers can be involved in mitigating the gap in algorithmic knowledge and advancing digital inclusion.

RESIST

Organizations such as Ban Facial Recognition provide an interactive map marking places where facial recognition is used (Ban Facial Recognition, n.d.). This includes not only law enforcement but other institutions as well. It is nearly impossible today to avoid an online footprint. However, resisting the use of AI in one's everyday life is one of the main forms of not only advocacy but protection. Social workers can both inform their clients and resist these technologies in their own lives. Guidelines to follow include limiting the amount of information shared online and refusing requests for personal identifying information (PII) such as full name, birthdate, and address. Unless absolutely needed, providing these sensitive facts about oneself can result in unwanted tracking and associations.

REGULATE
Currently, there are no federal laws in the US regulating AI. Governing bodies lack the expertise and knowledge to properly create legislation that protects privacy, limits surveillance, and bans discrimination by algorithms. AI reproduces structural biases that have been present in society for centuries, and the protections passed from the 1960s-80s are now out of date. Data protection only covers government and medical databases, while anti-discrimination law in housing and employment does not extend to a computer formula (Bock). New legislation is needed to govern the implementation of AI.
Technology companies monitor their systems in-house and rarely open them to outside researchers. They claim their system is protected by being a trade secret: intellectual property that cannot be released because it is integral to the business. However, this claim prevents diverse and informed research entities from auditing the technology. Many data-driven industries hide behind this trade secret policy, which intentionally obscures them from public review.
Social workers in policy can educate themselves on the uses of AI in a given field. They can write briefs on biased algorithms and the need for federal regulation, as members of SAFElab at Columbia University did (Anguiano et al., 2021). Cities such as San Francisco and Boston have passed their own legislation disallowing facial recognition technology, ahead of federal changes (Associated Press, 2021).
Petitioning lawmakers to focus on AI and its potential for harm is another way social workers can get involved in advocating for digital inclusion. As stated before, with biometric systems such as facial recognition spreading surveillance, it is likely that a more accurate algorithm will be used to continue the over-policing of Black individuals. Social workers, who are educated in the historic and systemic harms done to marginalized communities, can speak to the ways in which AI perpetuates this oppression.
Regulation could also require companies to audit their systems to the same degree as outside researchers. Limiting the uses of a product, whether by disallowing hate groups from posting on a platform or restricting what data may be collected, is another avenue, as is accountability in the form of a tax (ideally on data storage) that encourages these companies to delete digital footprints.

REIMAGINE
There are many other roles that social workers can take in advocating for digital inclusion. Technology companies are now creating jobs centered on ethics and responsible technology, and attempting to diversify their hiring practices through apprenticeships for people with unconventional backgrounds. With an extensive background in systemic oppression, social workers should be a part of these discussions.
Design decisions are another area social workers must be a part of. Knowledge of criminal justice and healthcare is integral in decisions concerning what data should be used, whether that data is a proxy for race, and whether the data results in biased outputs. Moreover, skills such as empathy will be a growing necessity as automation continues to expand (Johnson, 2021).
In research, teams improving machine learning algorithms need annotators from a wide range of backgrounds in order to capture the nuances of human expression (Johnson, 2021). By including stakeholders with varying sources of knowledge, discussions open up and opinions are provided which could not have been captured by people who mostly think the same. Time and diligence are also needed, something tech companies try to cut by paying annotators by the social media post. Working with a group means a consensus must be reached, rather than allowing one person to determine the meaning behind a post (Patton et al., 2020). Finally, because companies embed their own biases in the systems they create, social workers can be consultants for unbiased hiring practices. Firms such as Race Forward are employing people to work on projects to create more inclusive employment searches and outreach.

REINFORCE

Understanding how personal data is used, where one may encounter bias due to AI, and ways to protect oneself are all crucial for agency in the digital world.
Socioeconomic status is viewed as the main determinant of algorithmic knowledge (Cotter et al., 2020). In the US, class often relates to one's race, as a disproportionate number of Black and Latinx individuals live in poverty. At the same time, most people in the US use social media (Pew, 2021). This means a vast number of Black users, given the disproportionate number of Black individuals who experience intersecting poverty, are likely not aware of the underlying algorithms, data scraping, or the implications of their online presence for their physical lives.
Teaching algorithmic literacy, including what data is collected and how it is used, is another way social workers can support digital inclusion. The Algorithmic Justice League has taken a creative approach by creating a workshop called "Drag vs. AI" (AJL, 2020). Participants learn about facial recognition software and then learn from drag performers how to do their makeup in order to escape detection by the software. The workshop also provides information on how to resist, not only individually but as part of a collective.

CONCLUSION

It is necessary for social workers to become advocates for digital inclusion. Technology is only progressing and becoming a greater part of everyday life, and its harms continue to fall along characteristics such as race, class, and gender. Well-versed in systemic oppression, its roots, causes, and manifestations, social workers are well positioned to recognize and challenge the New Jim Code (Benjamin, 2019). Through resistance, regulation, reimagination, and reinforcement, social workers in any position are able to advocate for those being harmed by an algorithm.