Accessible, Affordable, and AI: How Artificial Intelligence Can Advance Healthcare Access

Between the Constitution of the World Health Organization, the Universal Declaration of Human Rights, and the International Covenant on Economic, Social, and Cultural Rights, the human right to the highest attainable standard of physical and mental health has been firmly codified in international law. Delivering on that right is more difficult. According to the World Health Organization, the world will face a shortage of 11 million health workers within five years, concentrated mostly in low- and lower-middle-income countries, and an estimated 4.5 billion people already lacked access to affordable essential care in 2021. Evidently, the global healthcare system needs a lifeline; with staff shortages and unmet needs, this help cannot come soon enough. Despite my criticisms of Artificial Intelligence's implementation in healthcare due to data failures and biases, there is real potential for Artificial Intelligence to make the human right to health more accessible, affordable, and efficient. From wearable devices to Telehealth to risk and data analysis, the implementation of AI within healthcare systems can help relieve medical professionals of menial tasks, provide better access to health services for the disadvantaged, and improve the overall efficiency of often bottlenecked healthcare systems.

REMOTE SERVICES & WEARABLE PRODUCTS

Access to adequate healthcare, itself a human right, is often determined by geography; rural populations suffer significantly worse health outcomes than their urban counterparts, largely because they are isolated from hospitals and medical professionals. People living in rural areas may not have the time or financial means to access efficient, affordable health services. Artificial Intelligence can help address this disparity by powering remote services such as Telehealth, helping individuals reach physicians, and even potentially supporting diagnoses without patients having to sacrifice time or resources to travel. The primary use of AI within Telehealth is to ease scheduling problems: algorithms are trained to match patients with the right providers and to streamline the booking and accessing of virtual appointments. This could significantly reduce the delays in accessing Telehealth services that rural patients often experience.
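
To make the matching idea concrete, here is a minimal sketch of how such a scheduler might score patients against providers. It is not drawn from any particular telehealth product; the patient fields, provider fields, and weights are hypothetical, and a real system would learn them from historical scheduling data rather than hard-coding them:

```python
# Hypothetical sketch of patient-provider matching for a telehealth scheduler.
# Field names and scoring weights are illustrative only.
from dataclasses import dataclass
from typing import List

@dataclass
class Provider:
    name: str
    specialty: str
    languages: List[str]
    hours_until_next_slot: float  # wait until the next open virtual appointment

@dataclass
class Patient:
    name: str
    needed_specialty: str
    language: str

def score(patient: Patient, provider: Provider) -> float:
    """Higher is better: reward specialty and language fit, penalize long waits."""
    s = 0.0
    if provider.specialty == patient.needed_specialty:
        s += 10.0
    if patient.language in provider.languages:
        s += 5.0
    s -= 0.1 * provider.hours_until_next_slot  # prefer sooner appointments
    return s

def match(patient: Patient, providers: List[Provider]) -> Provider:
    return max(providers, key=lambda p: score(patient, p))

providers = [
    Provider("Dr. A", "cardiology", ["English"], hours_until_next_slot=72),
    Provider("Dr. B", "cardiology", ["English", "Spanish"], hours_until_next_slot=24),
    Provider("Dr. C", "dermatology", ["Spanish"], hours_until_next_slot=2),
]
patient = Patient("Rural patient", "cardiology", "Spanish")
print(match(patient, providers).name)  # Dr. B: right specialty and language, shorter wait
```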

A man measures his heart rate on an Apple Watch
Adobe Stock, DenPhoto, #290469935

In addition, wearable products utilizing Artificial Intelligence have shown potential in monitoring chronic conditions, reducing the need for frequent in-person check-ups and easing the burden on healthcare providers. Using data collected by wearable devices, AI algorithms can potentially detect early signs of health problems and alert people with chronic conditions when their vitals fall outside expected ranges. Patients can also receive AI-generated medication reminders and daily health check-ins to support proper day-to-day care.
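
As a rough illustration of what that monitoring might look like, the sketch below flags a heart-rate reading that drifts far from the wearer's own recent baseline. The window size, threshold, and sample values are hypothetical; real devices rely on trained models and clinically validated limits rather than a simple rule like this:

```python
# Hypothetical sketch: flag wearable heart-rate readings that drift far from the
# wearer's own recent baseline. Thresholds are illustrative, not clinical guidance.
from statistics import mean, stdev

def flag_anomalies(readings, window=20, z_threshold=3.0):
    """Return indices of readings more than z_threshold standard deviations
    away from the mean of the previous `window` readings."""
    alerts = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) / sigma > z_threshold:
            alerts.append(i)
    return alerts

# Resting heart rate (bpm) sampled over time, with one abnormal spike.
hr = [62, 64, 63, 65, 61, 63, 62, 64, 66, 63,
      62, 61, 65, 64, 63, 62, 64, 63, 65, 62, 118, 64, 63]
print(flag_anomalies(hr))  # -> [20], the index of the 118 bpm spike
```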

The use of remote Artificial Intelligence technology to provide healthcare services also has the potential to increase access to mental health resources, especially in rural areas, where psychological help may be expensive, far away, or heavily stigmatized. AI-driven personal therapists show potential to improve access to mental health services that are traditionally difficult to schedule and afford. Artificial Intelligence has been used to analyze sleep and activity data, assess the likelihood of mental illness, and provide support for mindfulness, symptom management, mood regulation, and sleep.

ACCESSIBILITY

On top of increased accessibility for rural residents, various applications of Artificial Intelligence in healthcare have the potential to cater to the needs of those with cognitive or physical disabilities. Models can simplify text, generate text-to-speech audio, and provide visual aids to assist patients with disabilities as they receive care and monitor their conditions. The ability of Artificial Intelligence to streamline confusing healthcare interfaces and simplify information can also assist elderly patients in accessing health services. Older people are often intimidated by the technological hurdles of online healthcare, which can prevent them from effectively accessing their doctors, health records, or other important resources; Artificial Intelligence can be harnessed to personalize websites and interfaces to best accommodate the difficulties an elderly or disabled person may experience trying to access online care.

Generative language models, a type of Artificial Intelligence that uses training data to generate content based on pattern recognition, have also been employed to overcome language barriers within medical education. The ability of these models to translate educational curricula effectively has contributed to the standardization of medical practices and standards across countries. The digitalization of this process also makes medical educational material more accessible to those without direct access to a wealth of resources, furthering the World Health Organization's Digital Health Guidelines, which aim to encourage "digitally enabled healthcare education." The use of AI as a translation tool within healthcare also shows broader potential for patient care, reducing the need for costly translators and helping ensure that non-native speakers fully comprehend their diagnoses and treatments. One example is the American company "No Barrier AI," which employs an AI-driven interpreter to provide immediate, accurate, and cost-effective translation for patients with limited English proficiency.

Side view of a focused elderly man sitting before his laptop
Adobe Stock, Viacheslav Yakobchuk, #390382830
Elderly man accesses health portal from his laptop

PATIENT AND DATA ANALYSIS

A whole other blog post could be dedicated entirely to the use of Artificial Intelligence in hospitals and as an aid to medical professionals. Broadly, the integration of Artificial Intelligence into clerical and administrative tasks, health data analysis, and care recommendations has reduced the time and money spent on the slow, bureaucratic processes that weigh down medical professionals. Nearly 25% of healthcare spending in the United States is devoted to administrative tasks, and according to a McKinsey & Company study, the adoption of AI and machine learning could save the American healthcare industry $360 billion, mostly by assisting with those clerical and administrative tasks. For instance, AI systems have proved effective in boosting appointment scheduling efficiency, speeding up a notoriously slow process. Because of its ability to detect, analyze, and predict patterns, Artificial Intelligence has also been utilized to track inventory and increase supply chain efficiency, helping ensure that essential medical supplies and medicines are in stock when they are most needed.

Beyond managerial and administrative duties, Artificial Intelligence has also been integrated into clinical decision-making, data and image analysis, risk evaluation, and even the development of medicines. Trained models have proven capable of analyzing data from brain scans, X-rays, other tests, and patient records to detect and predict health problems; this ability to detect patterns and predict outcomes has also enabled early detection of conditions such as sepsis and heart failure. Medical professionals can weigh a model's analysis and treatment suggestions as they proceed with patient care, and comparing their own findings with the AI's can reduce the likelihood of clinical mistakes. Artificial Intelligence has also been used in telesurgical techniques to improve accuracy and supervise surgeons as they operate. Its integration has likewise advanced vaccine development, where it aids in identifying antigen targets, helps predict a particular patient's immune response to specific vaccinations, supports vaccines tailored to an individual's genetic makeup and medical needs, and increases the efficiency of vaccine storage and distribution.

These are only a few examples of the potential usefulness of Artificial Intelligence within healthcare settings. The examples grow more numerous every day, and I believe the potential for further advancement is immense.

Two doctors analyze brain scans on a tablet.
Adobe Stock, peopleimages.com, #1599787893
Two doctors analyze a brain scan with suggestions from AI tech

WHAT WE MUST KEEP IN MIND

While these advancements in the accessibility, affordability, and efficiency of healthcare systems show undeniable promise in advancing the human right to health, the development and integration of these Artificial Intelligence technologies must be undertaken with equality at the center of all efforts. As I highlighted in my last post, it is imperative that underlying societal biases be accounted for and curbed within these models to prevent inaccurate results and further harm to individuals from marginalized groups. A University of Minnesota survey found that only 44% of hospitals in the United States evaluated the Artificial Intelligence models they employed for bias. It is essential to pursue efforts to ensure that Artificial Intelligence promotes not only the human right to health, but also the human right to freedom from discrimination within healthcare practices, especially practices aided by systems potentially riddled with bias based on age, race, ethnicity, nationality, and gender.

These technologies are as practical as they are exciting. Still, as the healthcare industry moves forward, Artificial Intelligence developers and healthcare providers alike must uphold the core ideals of the human rights framework: equality, freedom, and justice.

Training to Treatment: AI’s Role in Healthcare Inequities

My first English professor here at UAB centered our composition class entirely around Artificial Intelligence. He provided our groups with articles highlighting the technology’s potential capabilities and limitations, and then he prompted us to discuss how our society should make use of AI as it expands. Though we tended to be hesitant toward AI integration in the arts and service industries, there was a sense of hope and optimism when we discussed its use in healthcare. It makes sense that these students, most of whom were studying to become healthcare professionals or researchers, would look favorably on the idea of AI relieving providers from menial, tedious tasks.

AI’s integration in healthcare does have serious potential to improve services; for example, it’s shown promise in examining stroke patients’ scans, analyzing bone fractures, and detecting diseases early. These successes don’t come without drawbacks, however. As we continue to learn more about the implications of AI use in healthcare, we must take into account potential threats to human rights, including the rights to health and non-discrimination. By addressing the human rights risks of AI integration in healthcare, algorithmic developers and healthcare providers alike can implement changes and create a more rights-oriented system. 

A woman stands in front of a monitor, examining head and spine scans.
Adobe Stock, Gorodenkoff, #505903389

THE INCLUSION OF INEQUALITIES

Artificial Intelligence cannot operate without data; it bases its behaviors and outputs on the data it is trained on. In healthcare, Artificial Intelligence models rely on health data ranging from images of melanoma to indicators of cardiovascular risk. The AI model uses this data to recognize patterns and make predictions, but these predictions are only as accurate as the data they are based on. Bias in AI systems often stems from "flawed data sampling," in which certain demographics are overrepresented in the sample while others, usually marginalized groups, are left out. For example, people of low economic status often do not participate in clinical trials or data collection, leaving an entire demographic underrepresented in the algorithm. The same lack of representation in training data generally applies to women and non-white patients. When training datasets are imbalanced, AI models may fail to accurately analyze test results or evaluate risks. This has been the case for melanoma diagnoses in Black individuals and cardiovascular risk evaluations in women, where the former model was trained largely on images of white people and the latter on data from men. Similarly, speech-recognition AI systems whose training data omits the voice characteristics of certain races, nationalities, or genders can produce inaccurate transcriptions for those groups.
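
To see how an imbalanced dataset translates into unequal performance, consider the toy sketch below: a classifier trained mostly on one group looks accurate overall but stumbles on the underrepresented group, whose underlying pattern differs slightly. The data is entirely synthetic and the groups are abstract stand-ins, not real patient populations:

```python
# Toy illustration of flawed data sampling: a model trained on imbalanced groups
# can look accurate overall while underperforming on the underrepresented group.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic group whose feature-to-label relationship differs by `shift`."""
    X = rng.normal(size=(n, 1))
    y = (X[:, 0] + shift > 0).astype(int)  # group-specific decision boundary
    return X, y

# Group A dominates the training data; group B is barely represented.
Xa, ya = make_group(1000, shift=0.0)
Xb, yb = make_group(30, shift=1.0)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on fresh samples from each group.
Xa_test, ya_test = make_group(500, shift=0.0)
Xb_test, yb_test = make_group(500, shift=1.0)
print("accuracy on group A:", accuracy_score(ya_test, model.predict(Xa_test)))
print("accuracy on group B:", accuracy_score(yb_test, model.predict(Xb_test)))
# Group B's accuracy is typically far lower, because the learned decision
# boundary reflects the majority group the model was trained on.
```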

A woman at a computer examines unequal data sets on two sheets of paper.
Adobe Stock, Andrey Popov, #413362622

The exclusion of certain groups from training data points to a broader fact: AI models often reflect and reproduce existing human biases and inequalities. Because medical data reflects existing healthcare disparities, AI models internalize those societal inequalities as they train, resulting in inaccurate risk evaluations, especially for Black, Hispanic, or poor patients. These misdiagnoses and inaccurate evaluations create a feedback loop: an algorithm trained on skewed data produces poor healthcare outcomes for marginalized groups, those outcomes are recorded as new data, and healthcare disparities deepen further.

FRAGMENTATION AND HALLUCINATION

Another limitation of the data that healthcare AI models are trained on is its fragmented sourcing. Training data is often collected across different sources and systems, ranging from pharmacies to insurance companies to hospitals to fitness tracker records. The lack of consistent, holistic data compromises the accuracy of a model's predictions and the efficiency of patient diagnosis and treatment. Other research highlights that the majority of patient data used to train algorithms in America comes from only three states, limiting the models' consideration of geographic factors in patient health. Important determinants of health, such as access to nutritious food and transportation, working conditions, and environmental factors, are therefore excluded from how the model diagnoses or evaluates a patient.

A computer screen shows an AI chatbot, reading "Meet AI Mode"
Adobe Stock, Tada Images, #1506537908

When there are gaps in an AI system's data pool, most generative AI models will fabricate information to fill them, even if that model-created content is not true or accurate. This phenomenon is called "hallucination," and it poses a serious threat to the accuracy of AI's patient assessments. Models may also latch onto irrelevant correlations as they attempt to predict patterns and outcomes, a problem known as overfitting. Overfitting occurs when a model learns its training data too closely, giving weight to outliers and meaningless variations. This makes the model's analyses inaccurate: it fails to generalize to new patients and instead forces outcomes to match the patterns it was trained on. AI models will readily fabricate patient data to produce the outcomes that make the most sense to their algorithms, jeopardizing accurate diagnoses and assessments. Even more concerning, most AI systems fail to provide transparent lines of reasoning for how they reached their conclusions, leaving doctors, nurses, and other professionals unable to double-check the models' outputs.
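
The overfitting problem is easy to demonstrate with synthetic numbers. In the hypothetical sketch below, a needlessly flexible model reproduces a handful of noisy training measurements almost perfectly, outliers included, yet predicts far worse on new data than a simple model does; the variables stand in for any vital sign and risk score and carry no clinical meaning:

```python
# Sketch of overfitting on synthetic data: a high-degree polynomial memorizes
# noisy training points (outliers included) but generalizes poorly.
import numpy as np

rng = np.random.default_rng(1)

def true_risk(x):
    """The real (hidden) relationship between a measurement and a risk score."""
    return 0.5 * x

x_train = rng.uniform(0, 10, size=12)
y_train = true_risk(x_train) + rng.normal(scale=0.8, size=12)  # noisy observations

simple = np.polyfit(x_train, y_train, deg=1)   # a reasonable model
overfit = np.polyfit(x_train, y_train, deg=9)  # far too flexible for 12 noisy points

x_new = np.linspace(0, 10, 5)
y_new = true_risk(x_new)

def error_on_new_data(coeffs):
    return float(np.mean((np.polyval(coeffs, x_new) - y_new) ** 2))

print("simple model error on new data: ", round(error_on_new_data(simple), 2))
print("overfit model error on new data:", round(error_on_new_data(overfit), 2))
# The degree-9 fit chases every quirk of the training set, so its predictions
# between and beyond those points swing wildly on data it has never seen.
```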

HUMAN RIGHTS EFFECTS

All of this is to say that real patients are complex, and the data that AI is trained on may not accurately represent the full picture of a person's health. This has tangible effects on patient care. A misstep in an AI's analysis of a patient's health data can result in prescribing the wrong drugs, prioritizing the wrong patients, and even missing anomalies in scans or X-rays. Importantly, since AI bias tends to fall hardest on already marginalized groups such as Black Americans, poor people, and women, unchecked inaccuracies in healthcare AI can violate the Universal Declaration of Human Rights (UDHR): the right to health in Article 25 and the non-discriminatory entitlement to rights laid out in Article 2. As stated by the Office of the High Commissioner for Human Rights, human rights principles must be incorporated into every stage of AI development and implementation. This includes maintaining the right to an adequate standard of living and medical care, as highlighted in Article 25, while working to address the discrimination that occurs within healthcare. As the Office of the High Commissioner for Human Rights states, "non-discrimination and equality are fundamental human rights principles," and they are specifically highlighted in Article 2 of the UDHR. These values must remain at the forefront of AI's expansion into healthcare, ensuring that existing human rights violations are not magnified by a lack of careful regulation.

WHAT CAN BE DONE?

To apply Artificial Intelligence to healthcare effectively and justly, human oversight must keep fairness and accuracy at the center of these models and their applications. First, the developers of these algorithms must ensure that training data is drawn from a diverse pool of individuals, including women, Black people, and other underrepresented groups. Additionally, these models should be developed with fairness in mind and should actively mitigate biases. Transparency should be built into models, allowing providers to trace the reasoning behind a diagnostic conclusion or treatment recommendation. These goals can be supported by advocating for AI development teams and healthcare clinics that include members of marginalized groups; diverse life experiences, perspectives, and identities can help remedy biases both in the algorithms themselves and in the medical research and data they are trained on. We must also ensure that healthcare providers are properly educated about how these models operate and how to interpret their outputs. If developers and medical professionals do address these challenges, then Artificial Intelligence technology has immense potential to improve diagnostic accuracy, increase efficiency in analyzing scans and tests, and relieve healthcare providers of time-consuming, menial tasks. With a dedication to accuracy and human rights, perhaps the integration of Artificial Intelligence into healthcare will meet my English classmates' optimistic standards and aid them in their future jobs.
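
One concrete form bias mitigation can take is a routine audit before deployment: compare a model's error rates across demographic groups and refuse to ship it when the gap is too large. The sketch below is hypothetical; the group labels, toy predictions, and the 0.05 gap threshold are stand-ins chosen for illustration, not a standard any institution has adopted:

```python
# Hypothetical pre-deployment bias audit: compare false-negative rates
# across demographic groups and flag gaps above a chosen threshold.
from collections import defaultdict

def false_negative_rates(records):
    """records: iterable of (group, true_label, predicted_label), labels 0/1."""
    missed = defaultdict(int)     # true positives the model missed, per group
    positives = defaultdict(int)  # all true positives, per group
    for group, y_true, y_pred in records:
        if y_true == 1:
            positives[group] += 1
            if y_pred == 0:
                missed[group] += 1
    return {g: missed[g] / positives[g] for g in positives}

def audit(records, max_gap=0.05):
    rates = false_negative_rates(records)
    gap = max(rates.values()) - min(rates.values())
    return rates, gap, gap <= max_gap

# Toy predictions in which the model misses far more true cases in group "B".
records = [("A", 1, 1)] * 90 + [("A", 1, 0)] * 10 + \
          [("B", 1, 1)] * 60 + [("B", 1, 0)] * 40
rates, gap, passed = audit(records)
print(rates)        # {'A': 0.1, 'B': 0.4}
print(gap, passed)  # gap of ~0.3, audit fails -> rebalance data and retrain
```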

 

Rights and Regulations: A Case Study on Guidelines for AI Use in Education

Based on my previous two articles, a reader of this blog might assume that I’m an advocate for the complete eradication of Artificial Intelligence, given the many criticisms I’ve made of the AI industry. While you shouldn’t expect these critiques to stop on my end, I also accept the fact that AI has effectively taken over the technological world and will not easily be vanquished. Therefore, a more realistic approach to keeping AI within acceptable bounds is regulating its use. This regulation is especially imperative when it comes to our nation’s youth. Their human right to quality education centered on tolerance and respect should not be infringed upon by generative AI use.

That is why programs addressing AI literacy and guidelines on its use in schools are so essential. The Alaska Department of Education's Strategic Framework on AI use in the classroom, released in October 2025, outlines strategies for safe, responsible, and ethical AI integration in K-12 schools. Alaska is merely the latest state to adopt guidelines for AI use in public schools; a total of 27 states and Puerto Rico have established such policies. Today, I'll be concentrating on Alaska's framework as a case study to explore the value of state and local guidelines for teaching about, and governing the use of, AI in the classroom.

FEDERAL REGULATIONS

In April of this year, an executive order was signed promoting AI competency in students and establishing a Task Force on Artificial Intelligence Education. In response, the U.S. Department of Education has released potential priorities for grants funding the integration of AI into education: "evidence-based literacy, expanding education choice, and returning education to the states." While these statements are an encouraging acknowledgement of the need to turn our attention to the use of Artificial Intelligence in academia, they fail to provide tangible guidelines or policies that effectively promote the proper use of AI in schools. They also fall short of acknowledging the need for regulation and limits on AI's role in academia; in fact, "America's AI Action Plan" highlights the administration's aversion to regulation by stipulating that states should not have access to federal funding on AI-related matters should they implement "burdensome AI regulations."

STATE-LEVEL POLICIES

The federal government's failure to acknowledge AI's limitations when it comes to privacy, ethics, and functionality in education leaves a vacuum where guidelines and regulations on AI's educational use should be. The lack of parameters has raised concerns about academic misconduct, plagiarism, privacy breaches, algorithmic bias, and the uncritical acceptance of generated information that may be inaccurate or unreliable. Complete bans, meanwhile, fail to address AI's potential when used responsibly and create environments where students find new and creative ways to access generative AI despite the restrictions.

Thankfully, states are beginning to recognize the need to fill this void to maintain the quality and safety of children's education. The Alaska Department of Education answered the call with its K-12 AI Framework, which offers "recommendations and considerations for districts" to shape local Artificial Intelligence policies and to guide educators on how to treat AI use in their classes.

A metal placard on a building reads "Department of Education"
Adobe Stock, D Howe Photograph #244617523

These guidelines serve to "augment human capabilities," educating students on how to maintain critical thinking and creativity while employing generative AI in their studies. This purpose is supported by the framework's guiding principles for AI integration, which serve as building blocks for fostering a positive relationship between students and generative AI, educating about its limitations while highlighting how it can be used properly. To take a human-rights-based approach to highlighting the value of these principles, I'll pair each guideline with the specific human rights it works to preserve.

ARTICLE 27

Article 27 of the Universal Declaration of Human Rights (UDHR) establishes the right to enjoy scientific advancements as well as the protection of ownership over one's scientific, literary, or artistic creations. Alaska's framework provides for a human-centered approach to AI integration, emphasizing that districts should move beyond banning generative AI and instead adopt initiatives to ensure AI enriches human capabilities rather than replacing them. This ensures that students have access to the scientific advancement of generative Artificial Intelligence without diminishing the quality of their education. The "Fair Access" portion of Alaska's framework outlines additional provisions for ensuring students have equal access to AI-based technological advancements, calling for funding dedicated to accessible Internet and AI access as well as an AI literacy program within school districts.

A boy looks at a computer monitor, generating an AI image.
Adobe Stock, Framestock
#1684797252

Additionally, the “Transparency” and “Ethical Use” principles provide that AI generated content should be properly attributed and disclosed. Citations are a requirement under these guidelines, and any work completed entirely by generative AI is considered plagiarism. This maintains the right to ownership over one’s creations by ensuring that generative AI and the data it pulls from are properly attributed.

ARTICLE 26

Article 26 of the UDHR codifies the right to an education that promotes tolerance for other groups and respect for fundamental freedoms and rights. Alaska's AI framework calls for recognition of generative AI's potential algorithmic biases against certain ethnic, racial, or religious groups. It states that students should be educated about the prejudices, misinformation, and hallucinations a generative AI model may produce, emphasizing that its outputs must be critically examined. By openly acknowledging how societal prejudices manifest in these algorithms, Alaska's guidelines help preserve the right to an education grounded in dignity and respect for others. The framework also offers suggestions for including diverse local stakeholders, such as students, parents, and community leaders, in discussions and policymaking about AI rules in the classroom.

ARTICLE 12 and ARTICLE 3

The final human rights Alaska's framework works to uphold are outlined in Article 3 and Article 12 of the UDHR, which establish the rights to security of person and to privacy, respectively. The framework holds that student data protection and digital well-being are essential both to maintain and to teach about. It places responsibility on districts to support cybersecurity efforts and compliance with federal privacy laws such as the Family Educational Rights and Privacy Act and the Children's Internet Protection Act. Schools also have an obligation to review the terms of service and privacy policies of any AI tools used in classrooms to ensure students' data is not abused. Educators, in turn, should teach students how to protect their personally identifiable information and explain the consequences of entering sensitive information into generative AI tools.

A page in a book reads "FERPA, Family Educational Rights and Privacy Act"
Adobe Stock, Vitalii Vodolazskyi
#179067778

WHAT’S NEXT

Alaska's framework is just one example of a wider trend of states adopting guidelines on Artificial Intelligence's role in education. These regulations help ensure that students, educators, and stakeholders acknowledge both the limitations and the potential of AI while implementing it in a way that serves human ingenuity rather than replacing it. These guidelines only go so far without local implementation, though. We must engage civically with local school boards, individual school administrations, educators, and communities to ensure these helpful guidelines are actually followed. Frameworks like Alaska's offer sample policies for school boards to enact and examples of school handbook language that can preserve human rights in the face of AI expansion; all it takes is local support and implementation to put these policies into action. Community training sessions and panels could start conversations between families, students, community members, and AI policymakers and experts.

As individuals, it is our place to engage in these community efforts. And if you're a student reading this, take Alaska's framework for guiding AI use in education into consideration the next time you're thinking about using ChatGPT on an assignment. From plagiarism to bias to security, there's good reason to tread carefully and take a responsible approach to AI use, one that doesn't encourage over-reliance but rather treats the technology as a helping hand.

Economy and Exploitation: The AI Industry’s Unjust Labor Practices

I remember when ChatGPT first started gaining popularity. I was a junior in high school, and everyone around me couldn't stop marveling at its seemingly endless capabilities. The large language model could write essays, answer physics questions, and generate emails out of thin air. It felt to us, sixteen- and seventeen-year-olds, like we had discovered magic: a crystal ball that did whatever you told it.

I'm writing this, three years later, to break the news that it is, unfortunately, not magic. Artificial Intelligence (AI) relies on human input at nearly every stage of its preparation and verification. From content moderation to data collection, outwardly automated AI systems require constant human intervention to keep the algorithm running smoothly and producing sensible output. That intervention calls for human labor to sift through and manage a given model's data and performance. But where does this labor come from? And what are the implications of these workers' invisibility to the public?

Labor Source

On the surface, it appears that Big Tech companies such as OpenAI, Meta, and Google bear the brunt of the labor it takes to develop and operate their AI systems. A closer look reveals that the human labor these AI systems require is distributed across the globe. These massive companies employ subcontractors to hire and manage the workers who perform the countless small, repetitive tasks required. The subcontractors, looking to maximize profit, often hire workers from less developed countries where labor rights are less strictly enforced and wages are not stringently regulated. What does this mean? Cheap, exploitative labor. People living in poverty, refugee camps, and even prisons have performed data tasks for firms and platforms like Sama, Amazon Mechanical Turk, and Clickworker. The outsourcing of work to countries such as India and Kenya by affluent businesses in mostly Western countries seems to perpetuate patterns of exploitation and colonialism and to play into global wealth disparities.

Woman in a chair looking at computer screens
Crowdsourced Woman Monitors Data. Source: Adobe Stock

Wages

On top of the larger systemic implications of wealthier countries’ outsourcing their labor to less affluent countries, the individual workers themselves often suffer human rights abuses regarding wages.

According to the International Labour Organization (ILO), wage theft is a pressing issue for crowdworkers, because employers can deny wages to anyone deemed to have completed a task incorrectly. Issues with software and flagging systems can lead employers to withhold wages for completed tasks that are mislabeled as incorrect. In the ILO's survey, only 12 percent of crowdworkers said that all of their task rejections were justified, with the majority reporting that only some were warranted. In other instances, pay can take the form of vouchers or gift cards, some of which turn out to be invalid when used. Unexpected money transfer fees and hidden fines can also leave wages lower than initially expected or promised.

Woman looking at her phone and credit card in shock.
Woman Looks at Her Wages, Which Are Lower than Expected. Source: Adobe Stock

Even when outsourced workers are paid in full, it usually doesn't amount to much. According to an ILO survey, the median earnings of microworkers were 2 US dollars an hour. In one specific case in Madagascar, wages were as low as 41 cents an hour. These workers are paid far less than a livable wage under the excuse that their work is menial and performed task-by-task. The denial of wages and the outsourcing companies' low pay rates violate the principle of "equal pay for equal work" under Article 23 of the Universal Declaration of Human Rights (UDHR).

For some people in periphery countries like India and Venezuela, data microwork is their only source of income. Its convenience and accessibility are attractive to those who lack the resources to apply for typical jobs, but its wages do not provide the decent standard of living outlined in Article 25 of the UDHR. As one microworker from Venezuela said in an interview with the BBC, "You will not live very well, but you will eat well."

 

Working Conditions

In addition to low wages, crowdworkers often face human rights violations regarding working conditions, and most have little recourse to advocate for better treatment from their employers. For those who classify and filter data, a day at work may include flagging images of graphic content, including murder, child sexual abuse, torture, and incest. This was the case for Kenyan workers employed by the subcontractor Sama on behalf of clients like OpenAI; workers have testified to having recurring visions of the horrors they've seen, describing their tasks as torturous and mentally scarring. Many requests for mental health support are denied or go unfulfilled. These experiences leave workers vulnerable to post-traumatic stress disorder, depression, and emotional numbness.

 

Woman covering her face as she looks at a laptop.
Woman Looks At Disturbing Images as She Monitors Data. Source: Adobe Stock

In one instance, the subcontractor Sama shared the personal data of a crowdworker with Meta, including parts of a non-disclosure agreement and payslips. Other workers on Amazon Mechanical Turk experienced privacy violations such as "sensitive information collection, manipulative data aggregation and profiling," as well as scamming and phishing attempts. This arbitrary collection and abuse of workers' private data directly violates Article 12 of the UDHR, which enshrines the protection of privacy as a human right.

The nature of crowdwork is such that individuals work remotely and digitally, granting contractors more power over their workers and significantly diminishing microworkers' capacity to take collective action and negotiate with employers for better conditions. The independent-contractor relationship between employers and workers has weakened the ability of microworkers to unionize and bargain with their contractors. Employers can rate crowdworkers poorly, which often results in workers being rejected when they try to find new tasks to fulfill; there are few comparable ways for workers to review their employers, creating an unjust power imbalance and opening the door to various violations of labor rights. The supposed convenience of self-employment and remote work comes at the cost of surrendering basic workers' rights, such as "safeguards against termination, minimum wage assurances, paid annual leave, and sickness benefits." Each of these aspects of microwork denies workers the labor rights outlined in Article 23 of the UDHR, another direct human rights violation by these outsourcing companies.

What’s Next?

The first step in addressing the human rights violations facing outsourced Artificial Intelligence data microworkers is ensuring their visibility. Dismantling the narrative of Artificial Intelligence models as fully automated systems and raising awareness about the essential roles microworkers play in the preparation and validation of data can help garner public attention. Since many of these crowdworkers are employed abroad, it is also important for advocates to highlight the exploitation that these tech companies and contractors are profiting from. In addition, because these workers have little bargaining power, making their struggles visible and starting dialogue with companies on their behalf can be a crucial step toward ensuring that microworkers can access their human and privacy rights. As research and policy continue to expand regarding AI's impact on the labor force, it is essential that academics and lawmakers alike consider the effects on the whole production chain, including low-wage workers abroad, rather than just the middle-class domestic workforce. Finally, it is imperative that big tech businesses and the crowdsourcing companies they contract with be held publicly accountable for their practices and policies on wages, payment methods, mental health resources, working conditions, and unionization. These initiatives can begin only once the public becomes aware of the exploitation of these invisible workers. So, the next time someone throws a prompt at ChatGPT, start a conversation about how reliant AI is on human labor. Only then can we grant visibility to microworkers and work toward change.

Griefbots: Blurring the Reality of Death and the Illusion of Life

Griefbots are an emerging technological phenomenon designed to mimic deceased individuals’ speech, behaviors, and even personalities. These digital entities are often powered by artificial intelligence, trained on data such as text messages, social media posts, and recorded conversations of the deceased. The concept of griefbots gained traction in the popular imagination through portrayals in television and film, such as the episode “Be Right Back” from the TV series Black Mirror. As advancements in AI continue to accelerate, griefbots have shifted from speculative fiction to a budding reality, raising profound ethical and human rights questions.

Griefbots are marketed as tools to comfort the grieving, offering an opportunity to maintain a sense of connection with lost loved ones. However, their implementation brings complex challenges that transcend technology and delve into the realms of morality, autonomy, and exploitation. While the intentions behind griefbots might seem compassionate, their broader implications require careful consideration. As the ethics of AI grow more intricate, I want to explore some of the ethical dimensions of griefbots and ask questions to push the conversation along. My goal is not to strongly advocate for or against their usage but to engage in philosophical debate.

An image of a human face-to-face with an AI robot
Image 1: An image of a human face-to-face with an AI robot. Source: Yahoo Images

Ethical and Human Rights Ramifications of Griefbots

Commercial Exploitation of Grief

The commercialization of griefbots raises significant concerns about exploitation. Grieving individuals, in their emotional vulnerability, may be susceptible to expensive services marketed as tools for solace. This commodification of mourning could be seen as taking advantage of grief for profit. Additionally, if griefbots are exploitative, we are prompted to reconsider the ethics of other death-related industries, such as funeral services and memorialization practices, which also operate within a profit-driven framework.

However, the difference between how companies currently capitalize on griefbots and how the traditional death industry generates profit is easier to pin down than the service's other implications. Most companies producing and selling griefbots charge for their services through subscriptions or minute-by-minute payments, distinguishing them from other death-related industries. Companies may have financial incentives to keep grieving individuals engaged with their services. To achieve this, algorithms could be designed to optimize interactions, maximizing the time a grieving person spends with the chatbot and ensuring long-term subscriptions. These algorithms might even subtly adjust the bot's personality to make it more appealing over time, creating a pleasing caricature rather than an accurate reflection of the deceased.

As these interactions become increasingly tailored to highlight what users most liked about their loved ones, the griefbot may unintentionally alter or oversimplify memories of the deceased, fostering emotional dependency. This optimization could transform genuine mourning into a form of addiction. In contrast, if companies opted to charge a one-time activation fee rather than ongoing payments, would this shift the ethical implications? In such a case, could griefbots be equated to services like cremation—a one-time fee for closure—or would the potential for misuse still pose moral concerns?

Posthumous Harm and Dignity

Epicurus, an ancient Greek philosopher, famously argued that death is not harmful to the deceased because, once dead, they no longer exist to experience harm. Griefbots challenge the assumption that deceased individuals are beyond harm. From Epicurus’s perspective, griefbots would not harm the dead, as there is no conscious subject to be wronged. However, the contemporary philosopher Joel Feinberg contests this view by suggesting that posthumous harm is possible when an individual’s reputation, wishes, or legacy are violated. Misrepresentation or misuse of a griefbot could distort a person’s memory or values, altering how loved ones and society remember them. These distortions may result from incomplete or biased data, creating an inaccurate portrayal of the deceased. Such inaccuracies could harm the deceased’s dignity and legacy, raising concerns about how we ethically represent and honor the dead.

a version of Michelangelo's famous painting "The Creation of Adam" but with a robot hand instead of Adam's
Image 2: A robot version of Michelangelo's painting "The Creation of Adam." Source: Yahoo Images

Article 1 of the Universal Declaration of Human Rights states, “All human beings are born free and equal in dignity and rights. They are endowed with reason and conscience and should act towards one another in a spirit of brotherhood.” Because griefbots are supposed to represent a deceased person, they have the potential to disrespect people’s dignity by falsifying that person’s reason and consciousness. By creating an artificial version of someone’s reasoning or personality that may not align with their true self, griefbots risk distorting their essence and reducing the person’s memory to a fabrication. 

But imagine a case in which an expert programmer develops a chatbot to represent himself. He understands every line of code perfectly and can predict how the griefbot will honor his legacy. If there is no risk of harm to his dignity, is there still an ethical issue at hand?

Consent and Autonomy

Various companies allow people to commission an AI ghost before their death by answering a set of questions and uploading their information. If individuals consent to create a griefbot during their lifetime, it might seem to address questions of autonomy. However, consent provided before death cannot account for unforeseen uses or misuse of the technology. How informed can consent truly be if the long-term implications and potential misuse of the technology are not fully understood at the time it is given? Someone agreeing to create a griefbot may envision it as a comforting tool for loved ones. Yet they cannot anticipate future technological advancements that could repurpose their digital likeness in ways they never intended.

This issue also intersects with questions of autonomy after death. While living individuals are afforded the right to make decisions about their posthumous digital presence, their inability to adapt or revoke those decisions as circumstances change raises ethical concerns. The Hi-Phi Nation podcast episode "The Wishes of the Dead" explores how the wishes of deceased individuals, particularly wealthy ones, continue to shape the world long after their death. The episode uses Milton Hershey, founder of Hershey Chocolate, as a case study. Hershey created a charitable trust to fund a school for orphaned boys and endowed it with his company's profits. Despite changes in societal norms and the needs of the community, the trust still operates according to Hershey's original stipulations. Critics have questioned whether continuing to operate according to Hershey's 20th-century ideals remains relevant in an era when gender equality and broader educational access have become more central concerns.

Chatbots do not have the ability to evolve and grow the way humans do. Host Barry Lam explains the foundation of this concept: "One problem with executing deeds in perpetuity is that dead people are products of their own times. They don't change what they want when the world changes." Even if growth were built into the algorithm, there is no guarantee it would reflect how the actual person would have changed. Griefbots might preserve a deceased person's digital presence in ways that become problematic or irrelevant over time. Although griefbots do not have the legal status of an estate or will, they preserve a person's legacy in a similar fashion. If Hershey were alive today, would he modify his estate to reflect his legacy?

It could be argued that the difference between Hershey's case and chatbots is that wills and estates are designed to execute a person's final wishes but are inherently limited in scope and duration. Griefbots, by contrast, have the potential to persist indefinitely, amplifying any damage to one's reputation. Does this difference capture the true scope of the issue at hand, or could one argue that if chatbots are unethical, then perpetual estates are equally unethical?

A picture of someone having a conversation with a chatbot
Image 3: A person having a conversation with a chatbot. Source: Yahoo Images

Impact on Mourning and Healing

Griefbots have the potential to fundamentally alter the mourning process by offering an illusion of continued presence. Traditionally, grieving involves accepting the absence of a loved one, allowing individuals to process their emotions and move toward healing. However, interacting with a griefbot may disrupt or delay this natural progression. By creating a sense of ongoing connection with the deceased, these digital avatars could prevent individuals from fully confronting the reality of the loss, potentially prolonging the pain of bereavement.

At the same time, griefbots could serve as a therapeutic tool for some individuals, providing comfort during difficult times. Grief is a deeply personal experience, and for certain people, using chatbots as a means of processing loss might offer a temporary coping mechanism. In some cases, they might help people navigate the early, overwhelming stages of grief by allowing them to "speak" with a version of their loved one, helping them feel less isolated. Given the personal nature of mourning, it is essential to acknowledge that each individual has the right to determine the most effective way to manage their grief, including whether or not they choose to use this technology.

However, the decision to engage with griefbots is not always straightforward. It is unclear whether individuals in the throes of grief can make fully autonomous decisions, as emotions can cloud judgment during such a vulnerable time. Grief may impair an individual’s ability to think clearly, and thus, the use of griefbots might not always be a conscious, rational choice but rather one driven by overwhelming emotion.

Nora Freya Lindemann, a doctoral student researching the ethics of AI, proposes that griefbots could be classified as medical devices designed to assist in managing prolonged grief disorder (PGD). PGD is characterized by intense, persistent sorrow and difficulty accepting the death of a loved one. Symptoms of this disorder could potentially be alleviated with the use of griefbots, provided they are carefully regulated. Lindemann suggests that in this context, griefbots would require stringent guidelines to ensure their safety and effectiveness. This would involve rigorous testing to prove that these digital companions are genuinely beneficial and do not cause harm. Moreover, they should only be made available to individuals diagnosed with PGD rather than to anyone newly bereaved to prevent unhealthy attachments and over-reliance.

Despite the potential benefits, the psychological impact of griefbots remains largely unexplored. It is crucial to consider how these technologies affect emotional healing in the long term. While they may offer short-term comfort, the risk remains that they could hinder the natural grieving process, leading individuals to avoid the painful yet necessary work of acceptance and moving forward. As the technology develops, further research will be essential to determine the full implications of griefbots on the grieving process and to ensure that they are used responsibly and effectively.

Conclusion

Griefbots sit at the intersection of cutting-edge technology and age-old human concerns about mortality, memory, and ethics. While they hold potential for comfort and connection, their implementation poses significant ethical and human rights challenges. The concepts I explored here only scratch the surface. As society navigates this uncharted territory, we must critically examine its implications and find ways to use AI responsibly. The questions it raises are complex, but they offer an opportunity to redefine how we approach death and the digital legacies we leave behind.