America’s Exit from the Convention on Climate Change

On January 7th, 2026, a Presidential Memorandum from the White House called for the United States of America’s withdrawal from a number of international organizations, including the United Nations Framework Convention on Climate Change (UNFCCC). Since its adoption in 1992, the convention’s mission has concentrated on the prevention of “dangerous anthropogenic interference” with the Earth’s climate system, meaning changes in the climate caused by human activity. It channels financing from industrialized countries toward climate mitigation in low-income and emerging states, outlines frameworks for nations to report their climate efforts and emissions, and creates opportunities for a multilateral approach to climate mitigation and adaptation. Though the UNFCCC, like any international agreement, has its flaws, it boasted near-universal membership, with 197 states as parties. The withdrawal of the United States is unprecedented in this regard, as it will become the only United Nations member state not engaged in the convention. This blog will examine the possible reasons for America’s withdrawal as well as its possible consequences for human rights, both domestically and internationally.

An alley of UN member states’ flags in front of the United Nations office in New York.
UN Members’ Flags. Photo by Mathias Reding from Pexels: https://www.pexels.com/photo/flag-of-different-countries-un-members-4468974/
EXPLANATIONS FOR WITHDRAWAL

In the Presidential Memorandum calling for America’s withdrawal from the UNFCCC, the White House described the convention, along with the other international organizations mentioned, as “Contrary to the Interests of the United States.” Additionally, a press statement regarding the memorandum further labeled these organizations as “wasteful, ineffective, and harmful.” There are several possible reasons behind the current administration’s negative view of the climate convention.

According to a fact sheet released by the White House, departure from the UNFCCC will ensure that American taxpayer money is funneled into domestic interests rather than into international efforts. The UNFCCC is upfront in its demand for industrialized countries to fund its efforts to strengthen climate change action in low-income countries. The United States has been the convention’s top funder, with financial contributions making up around 22% of its budget. The White House’s fact sheet also outlined the threat the UNFCCC poses to state sovereignty, claiming that its functions “undermine America’s independence.” Specific and binding international laws require an inevitable trade-off, as high legalization comes with “costs to sovereignty.” Such specificity is characteristic of agreements like the Kyoto Protocol and the Paris Climate Agreement, both of which are outgrowths of the UNFCCC. These treaties set specific obligations, goals, and limits regarding adaptation and mitigation efforts, global temperature rise, and emissions. As is evident from the withdrawal from the Paris Climate Agreement, the White House has interpreted stringent international law as straitjacketing America’s autonomy and its ability to act in its own interests.

A laptop browser shows the United Nation’s Framework Convention on Climate Change website.
Tada Images – stock.adobe.com #689529031
CLIMATE CHANGE AS A HUMAN RIGHTS TOPIC

Why is it important for climate change to be addressed internationally in the first place? At first glance, the climate crisis may appear to be purely an environmental problem. Yet the reality is that humans rely on their natural surroundings. From shelter to food to health, humanity’s dependence on the environment positions the climate crisis as “one of the most pervasive threats to human rights today,” as it endangers human rights such as life, security, and freedom. Health impacts range from disease and injury to malnutrition, driven by air pollution, increased natural disasters, and food shortages. Housing, part of the human right to an adequate standard of living, will become further strained by an inevitable increase in climate refugees. Water scarcity and the salinization of freshwater threaten the human right to clean drinking water. Additionally, those most at risk of suffering from anthropogenic climate change will be communities already made vulnerable by exploitation and discrimination, both globally and domestically. The adverse human impacts of the climate crisis are sweeping and urgent, making climate adaptation and mitigation essential to national security, public health, and human rights efforts.

POSSIBLE CONSEQUENCES

So, how does the United States’ withdrawal from the UNFCCC impact human rights and progress towards climate adaptation and mitigation?

Because of the global effects of anthropogenic climate change, it is essential that international coalitions exist not only to provide funding for low-income countries’ climate efforts but also to set common goals and encourage accountability. Climate mitigation has been established as a global public good, with benefits extending “to all countries, people and generations”: its effects are experienced by all, but the problem cannot be solved by any one actor alone. This makes multilateral conventions and agreements essential in tackling climate efforts, especially when considering the financial needs of lower-income countries and their critical input in decision-making.

The strength of multilateral conventions like the UNFCCC relies heavily on the level of legalization within their treaties; the more legally binding an agreement, the more responsibility states feel to adhere to it. However, high legalization depends greatly on “if the most powerful state(s) is in favour of it,” pointing to a need for large global powers to come to a consensus in order for climate goals to be reached. The United States, a global power and a major financial contributor to the UNFCCC, has seemingly crippled the legitimacy of the climate convention by breaking from that consensus. This will likely threaten the legal credibility of the UNFCCC, possibly leading to a “gridlock in negotiations” on climate action.

In addition, evidence suggests that the costs of unilateral exits can be “diffuse and long-term,” meaning that any short-term advantages of withdrawal may be outweighed over time by sustained costs. This is especially true of withdrawal from climate agreements, as the book A Perfect Moral Storm: The Ethical Tragedy of Climate Change illustrates in highlighting stakeholders’ struggle to properly address climate change due to its generationally delayed and geographically asymmetrical effects. According to the book, a failure to participate in climate action also allows higher-income countries such as America, the largest historical emitter of greenhouse gases, to cast off responsibility for unsustainable behaviors that will ultimately affect lower-income areas of the globe more intensely.

The consequences of withdrawal from the UNFCCC are not only global but domestic as well. Scaling back on climate mitigation and adaptation is expected to “impede the rate of economic growth” within the United States through this century. As many other high-income countries continue developing cheaper renewable energy sources that are more efficient than fossil fuels, America is projected to see less affordable energy, transportation, food, and insurance through its continued reliance on nonrenewable energy. Add to this environmental strains, food shortages, weakened infrastructure, and health threats, and it appears that the long-term economic and human rights costs of withdrawing from mitigation and adaptation are grave.

Furthermore, research on the effects of treaty withdrawal suggests that unilateral exits damage relationships between states, possibly undermining cooperation in other political or economic agreements. A state’s exit can be viewed by treaty participants as an indication that other normative commitments may be abandoned. The withdrawal from the UNFCCC, among other international agreements, may signal a renunciation of obligations and demonstrate “disdain for both the process and participants” of multilateral agreements.

A sign with President Donald Trump crossed out with the word "Klimaleugner" across his face (meaning "Climate Denier").
“Climate Denier” Sign of President Trump
Photo by Markus Spiske from Pexels: https://www.pexels.com/photo/banner-demonstration-politics-protest-2990646/
LOOKING FORWARD

Rejoining the UNFCCC remains a contested possibility. There is legal uncertainty over whether the convention would need to be ratified again or whether a future President could rejoin it with a signature alone. Even the legality of the withdrawal itself is being contested, with debate over whether it is valid without Senate approval.

In any case, the United States has made its move away from multilateral climate mitigation and adaptation, creating tension within the international community at large. Other global efforts toward lowering emissions, developing clean energy, and adapting to climate change will continue without current support from America. The absence of American financial and normative support could result in long-term losses in the speed and efficiency of global climate action. Only time will reveal the human rights consequences of the United States’ withdrawal from the UNFCCC’s obligations and limitations, both for itself and for the world.

Accessible, Affordable, and AI: How Artificial Intelligence Can Advance Healthcare Access

Between the Constitution of the World Health Organization, the Universal Declaration of Human Rights, and the International Covenant on Economic, Social, and Cultural Rights, the human right to a high standard of physical and mental health has been firmly codified in international law. Delivering on that right is more difficult. According to the World Health Organization, the world faces a shortage of 11 million health workers within five years, concentrated mostly in low- and lower-middle-income countries, and an estimated 4.5 billion people already lacked access to affordable essential care in 2021. Evidently, the global healthcare system needs a lifeline; with staff shortages and unmet needs, this help cannot come soon enough. Despite my criticisms of Artificial Intelligence’s implementation in healthcare due to data failures and biases, there is real potential for Artificial Intelligence to make the human right to health more accessible, affordable, and efficient. From wearable devices to Telehealth to risk and data analysis, the implementation of AI within healthcare systems can help relieve medical professionals from menial tasks, provide better access to health services for the disadvantaged, and aid in the overall efficiency of often bottlenecked healthcare systems.

REMOTE SERVICES & WEARABLE PRODUCTS

Access to adequate healthcare, itself a human right, can be largely determined by geography; rural populations suffer significantly worse health outcomes than their urban counterparts, largely due to isolation from hospitals and medical professionals. People living in rural areas may not have the time or financial means to access efficient, affordable health services. Artificial Intelligence can help address this disparity by powering remote services such as Telehealth, aiding individuals in contacting physicians, and even potentially generating preliminary diagnoses, all without patients having to sacrifice time or resources to travel. The primary use of AI within Telehealth aims to alleviate scheduling problems by training algorithms to match patients with the proper providers and to streamline scheduling and access to virtual appointments. This could significantly reduce the delays rural patients can experience in accessing Telehealth services.

A man measures his heart rate on an Apple Watch
Adobe Stock, DenPhoto, #290469935

In addition, wearable products utilizing Artificial Intelligence have shown potential in monitoring chronic conditions, reducing the need for frequent check-ups, and easing the burden on healthcare providers. Using data collected by wearable devices, AI algorithms can potentially detect signs of health problems and alert those with chronic conditions if their vitals are amiss. Patients can also receive AI-generated reminders to take medications and health check-ins to ensure proper care on a day-to-day basis.
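
To make that idea concrete, here is a minimal sketch of the simplest form this kind of monitoring can take: score each new reading against a rolling baseline and flag large deviations. The heart-rate data is synthetic and the alert threshold is made up; real wearable pipelines use far more sophisticated models, so treat this as illustration rather than implementation.

```python
# Minimal anomaly-flagging sketch: the newest simulated heart-rate reading is
# scored against the rolling baseline of the readings that preceded it.
import numpy as np

rng = np.random.default_rng(2)
readings = rng.normal(72, 3, 500)   # simulated resting heart rate, in bpm
readings[-1] = 110                  # inject an anomalous spike

baseline = readings[-61:-1]         # the most recent readings before the spike
z = (readings[-1] - baseline.mean()) / baseline.std()

if abs(z) > 4:                      # hypothetical alert threshold
    print(f"Alert: {readings[-1]:.0f} bpm is {z:.1f} standard deviations "
          "from your recent baseline; consider contacting a provider.")
```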

The use of remote Artificial Intelligence technology to provide healthcare services also has the potential to increase access to mental health resources, especially in rural areas, where psychological help may be expensive, far away, or overly stigmatized. AI-driven personal therapists show potential to improve access to mental health services that traditionally are difficult to schedule and afford. Artificial intelligence has been used to analyze sleep and activity data, assess the likelihood of mental illness, and provide services related to mindfulness, symptom management, mood regulation, and sleep aid. 

ACCESSIBILITY

On top of increased accessibility for rural residents, various applications of Artificial Intelligence in healthcare have the potential to cater to the needs of those with cognitive or physical disabilities. Models can aid in simplifying text, generating text-to-speech audio, and providing visual aids to assist patients with disabilities as they receive care and monitor their conditions. The ability of Artificial Intelligence to streamline potentially incomprehensible healthcare interfaces and simplify information can also assist elderly patients in accessing health services. Older people can be intimidated by the technological hurdles of online healthcare, which may prevent them from effectively accessing their doctors, health records, or other important resources; Artificial Intelligence can be harnessed to personalize websites and interfaces to best accommodate the problems an elderly or disabled person may experience when trying to access online care.

Generative language models, a particular type of Artificial Intelligence that uses training data to generate content based on pattern recognition, have also been employed to overcome language barriers within medical education. The ability of Artificial Intelligence models to effectively translate educational curricula has contributed to the standardization of medical practices and standards across countries. The digitalization of this process also makes medical educational material more accessible to those without direct access to a wealth of resources, furthering the World Health Organization’s Digital Health Guidelines, which aim to encourage “digitally enabled healthcare education.” The use of AI as a translation tool within healthcare also shows broader potential for patient care, eliminating the need for costly translators and ensuring that non-native speakers fully comprehend their diagnoses and treatments. One example of this is the American company “No Barrier AI,” which employs an AI-driven interpreter to provide immediate, accurate, and cost-effective translation for those with little proficiency in English seeking healthcare.

Side view of a focused elderly man sitting before his laptop
Adobe Stock, Viacheslav Yakobchuk, #390382830
Elderly man accesses health portal from his laptop

PATIENT AND DATA ANALYSIS

A whole other blog post could be dedicated entirely to the use of Artificial Intelligence in hospitals and as an aid to medical professionals. Broadly, the integration of Artificial Intelligence into clerical and administrative tasks, health data analysis, and care recommendations has reduced the time and money spent on the slow, bureaucratic processes that weigh down medical professionals. Nearly 25% of healthcare spending in the United States is devoted to administrative tasks, but according to a McKinsey & Company study, the adoption of AI and machine learning could save the American healthcare industry $360 billion, largely through that kind of clerical assistance. For instance, AI systems have proved effective in boosting appointment scheduling efficiency, speeding up an infamously difficult process. Because of its ability to detect, analyze, and predict patterns, Artificial Intelligence has also been utilized to track inventory and increase supply chain efficiency, ensuring proper amounts of essential medical supplies and medicines are in stock when they are most needed.
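
As an illustration of that inventory logic, here is a toy sketch with entirely made-up numbers: forecast next week’s demand from recent usage with a moving average, then flag a reorder when projected stock dips below a safety threshold. Production systems use far richer forecasting models, but the underlying pattern is the same.

```python
# Toy reorder-point check: moving-average demand forecast plus a safety buffer.
import statistics

weekly_usage = [120, 135, 128, 150, 142, 160]  # hypothetical units used per week
on_hand = 150                                  # current stock
lead_time_weeks = 1                            # delay between ordering and delivery

forecast = statistics.mean(weekly_usage[-4:])        # moving-average forecast
safety_stock = 1.5 * statistics.stdev(weekly_usage)  # buffer for variable demand
reorder_point = forecast * lead_time_weeks + safety_stock

if on_hand < reorder_point:
    print(f"Reorder now: {on_hand} units on hand, reorder point is {reorder_point:.0f}")
```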

Beyond managerial and administrative duties, Artificial Intelligence has also been integrated into clinical decision-making, data and visual analysis, risk evaluation, and even the development of medicines. Trained models have proven capable of analyzing data from brain scans, X-rays, other tests, and patient records to detect and predict health problems; this ability to detect patterns and predict outcomes has also enabled early detection of diseases and conditions such as sepsis and heart failure. Medical professionals can take the model’s analysis into account while also considering treatment suggestions from Artificial Intelligence as they proceed with patient care. This can reduce the likelihood of clinical mistakes as doctors can compare their findings with those of the AI model. Artificial Intelligence has also been used in telesurgical techniques to improve accuracy and supervise surgeons as they operate. The integration of Artificial Intelligence has also advanced vaccine development, as it aids in identifying antigen targets, helps predict a particular patient’s immune response to specific vaccinations, creates vaccines tailored to an individual’s genetic makeup and medical needs, and increases the efficiency of vaccine storage and distribution.

These are only a few examples of the potential usefulness of Artificial Intelligence within healthcare settings. The examples grow more numerous every day, and I believe the potential for further advancement is immeasurable.

Two doctors analyze brain scans on a tablet.
Adobe Stock, peopleimages.com, #1599787893
Two doctors analyze a brain scan with suggestions from AI tech

WHAT WE MUST KEEP IN MIND

While these advancements in the accessibility, affordability, and efficiency of healthcare systems show undeniable promise for advancing the human right to health, the development and integration of these Artificial Intelligence technologies must be undertaken with equality at the center of all efforts. As I highlighted in my last post, it is imperative that underlying societal biases be accounted for and curbed within these models to prevent inaccurate results and further harm to individuals from marginalized groups. A survey at the University of Minnesota found that only 44% of hospitals in the United States evaluated the Artificial Intelligence models they employed for bias. It is essential to pursue efforts to ensure that Artificial Intelligence promotes not only the human right to health but also the human right to freedom from discrimination within healthcare practices, especially those aided by systems potentially riddled with bias based on age, race, ethnicity, nationality, and gender.

These technologies are as practical as they are exciting. Still, as the healthcare industry moves forward, Artificial Intelligence developers and healthcare providers alike must maintain the core ideals of the human rights framework: equality, freedom, and justice.

Training to Treatment: AI’s Role in Healthcare Inequities

My first English professor here at UAB centered our composition class entirely around Artificial Intelligence. He provided our groups with articles highlighting the technology’s potential capabilities and limitations, and then he prompted us to discuss how our society should make use of AI as it expands. Though we tended to be hesitant toward AI integration in the arts and service industries, there was a sense of hope and optimism when we discussed its use in healthcare. It makes sense that these students, most of whom were studying to become healthcare professionals or researchers, would look favorably on the idea of AI relieving providers from menial, tedious tasks.

AI’s integration in healthcare does have serious potential to improve services; for example, it’s shown promise in examining stroke patients’ scans, analyzing bone fractures, and detecting diseases early. These successes don’t come without drawbacks, however. As we continue to learn more about the implications of AI use in healthcare, we must take into account potential threats to human rights, including the rights to health and non-discrimination. By addressing the human rights risks of AI integration in healthcare, algorithmic developers and healthcare providers alike can implement changes and create a more rights-oriented system. 

A woman stands in front of a monitor, examining head and spine scans.
Adobe Stock, Gorodenkoff, #505903389

THE INCLUSION OF INEQUALITIES

Artificial Intelligence cannot operate without data; it bases its behaviors and outcomes on the data it is trained on. In healthcare, Artificial Intelligence models rely on input from health data that ranges from images of melanoma to indicators of cardiovascular risk. The AI model uses this data to recognize patterns and make predictions, but these predictions are only as accurate as the data they’re based on. Bias in AI systems often stems from “flawed data sampling,” in which certain demographics are overrepresented in the sample while others, usually marginalized groups, are underrepresented or left out entirely. For example, people of low economic status often don’t participate in clinical trials or data collection, leaving an entire demographic underrepresented in the algorithm. This lack of representation in training data also generally applies to women and non-white patients. When training datasets are imbalanced, AI models may fail to accurately analyze test results or evaluate risks. This has been the case for melanoma diagnoses in Black individuals and cardiovascular risk evaluations in women, where the former model was trained largely on images of white people and the latter on the data of men. Similarly, speech-to-text AI systems can omit voice characteristics of certain races, nationalities, or genders from training data, resulting in inaccurate transcriptions.
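
The mechanism is easy to demonstrate. The sketch below, which is entirely synthetic and not based on any real clinical model, trains a classifier on data where one group supplies 95% of the examples; the model inherits the majority group’s decision cutoff and performs noticeably worse on the underrepresented group.

```python
# Synthetic demonstration of sampling bias: the model learns group A's cutoff.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, cutoff):
    """Synthetic patients: disease (label 1) occurs when a biomarker
    exceeds a cutoff that differs between the two groups."""
    x = rng.normal(size=(n, 1))
    return x, (x[:, 0] > cutoff).astype(int)

# Group A dominates the training data; group B is barely represented.
xa, ya = make_group(950, cutoff=0.0)
xb, yb = make_group(50, cutoff=0.7)
model = LogisticRegression().fit(np.vstack([xa, xb]), np.concatenate([ya, yb]))

# On balanced test sets, the model misclassifies group B patients whose
# biomarker falls between the two groups' cutoffs.
for name, cutoff in [("group A", 0.0), ("group B", 0.7)]:
    x_test, y_test = make_group(2000, cutoff)
    print(f"{name} accuracy: {model.score(x_test, y_test):.2f}")
```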

A woman at a computer examines unequal data sets on two sheets of paper.
Adobe Stock, Andrey Popov, #413362622

The exclusion of certain groups from training data underscores the fact that AI models often reflect and reproduce existing human biases and inequalities. Because medical data reflects existing healthcare disparities, AI models internalize these societal inequalities as they train, resulting in inaccurate risk evaluations, especially for Black, Hispanic, or poor patients. These misdiagnoses and inaccurate evaluations create a feedback loop in which an algorithm trained on poor data creates poor healthcare outcomes for marginalized groups, further contributing to healthcare disparities.

FRAGMENTATION AND HALLUCINATION

Another limitation of the data healthcare AI models are trained on is its fragmented sourcing. Training data is often collected across different sources and systems, ranging from pharmacies to insurance companies to hospitals to fitness tracker records. The lack of consistent, holistic data compromises the accuracy of a model’s predictions and the efficiency of patient diagnosis and treatment. Other research highlights that the majority of patient data used to train algorithms in America comes from only three states, limiting the models’ consideration of geographic factors in patient health. Important determinants of health, such as access to nutritious food and transportation, work conditions, and environmental factors, are therefore excluded from how a model diagnoses or evaluates a patient.

A computer screen shows an AI chatbot, reading "Meet AI Mode"
Adobe Stock, Tada Images, #1506537908

When there are gaps in an AI system’s data pool, many generative AI models will fabricate data to fill them, even if this model-created data is not true or accurate. This phenomenon is called “hallucination,” and it poses a serious threat to the accuracy of AI’s patient assessments. A related failure is overfitting, which occurs when a model fits its training data too closely, putting weight on outliers and meaningless variations. An overfitted model’s analyses are inaccurate because, rather than genuinely capturing the structure of patient data, it forces new cases to match the patterns it memorized during training. Between hallucinated data and overfitted patterns, AI models can easily produce the outcomes that make the most sense to their algorithms rather than outcomes grounded in the patient, jeopardizing accurate diagnoses and assessments. Even more concerning, most AI systems fail to provide transparent lines of reasoning for how they reached their conclusions, leaving doctors, nurses, and other professionals little ability to double-check the models’ outputs.
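
Overfitting itself is easy to reproduce on toy data. In the sketch below (synthetic points, no medical data), a high-degree polynomial drives its training error toward zero by chasing noise, then typically performs worse than a simpler model on fresh samples, which is exactly the failure mode described above.

```python
# Toy overfitting demo: a flexible model memorizes noise and generalizes worse.
import numpy as np
from numpy.polynomial import polynomial as P

rng = np.random.default_rng(1)

def sample(n):
    x = rng.uniform(-1, 1, n)
    return x, np.sin(np.pi * x) + rng.normal(0, 0.2, n)  # true signal + noise

x_train, y_train = sample(12)   # small, noisy training set
x_test, y_test = sample(200)    # unseen data

for degree in (3, 9):
    coeffs = P.polyfit(x_train, y_train, degree)
    train_mse = np.mean((P.polyval(x_train, coeffs) - y_train) ** 2)
    test_mse = np.mean((P.polyval(x_test, coeffs) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
```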

HUMAN RIGHTS EFFECTS

All of this is to say that real patients are complex, and the data that AI is trained on may not accurately represent the full picture of a person’s health. This has tangible effects on patient care. An AI’s misstep in its analysis of a patient’s health data can result in prescribing the wrong drugs, prioritizing the wrong patients, and even missing anomalies in scans or X-rays. Importantly, since AI bias tends to harm already marginalized groups such as Black Americans, poor people, and women, unchecked inaccuracies in AI use within healthcare can violate the Universal Declaration of Human Rights (UDHR) provisions on health in Article 25 and on non-discriminatory entitlement to rights in Article 2. As stated by the Office of the High Commissioner for Human Rights, human rights principles must be incorporated into every stage of AI development and implementation. This includes maintaining the right to an adequate standard of living and medical care, as highlighted in Article 25, while attempting to address the discrimination that occurs within healthcare. As the Office of the High Commissioner for Human Rights states, “non-discrimination and equality are fundamental human rights principles,” and they are specifically highlighted in Article 2 of the UDHR. These values must remain at the forefront of AI’s expansion into healthcare, ensuring that current human rights violations are not magnified by a lack of careful regulation.

WHAT CAN BE DONE?

To effectively and justly apply Artificial Intelligence to healthcare, human intervention must ensure that fairness and accuracy remain at the center of these models and their applications. First, the developers of these algorithms must ensure that the data used for training is drawn from a diverse pool of individuals, including women, Black people, and other underrepresented groups. Additionally, these models should be developed with fairness in mind and should work to mitigate biases. Transparency should be built into models, allowing providers to trace the reasoning behind a model’s conclusions on diagnoses or treatment choices. These goals can be supported by advocating for AI development teams and healthcare provider clinics that include members of marginalized groups. The inclusion of diverse life experiences, perspectives, and identities can remedy biases both in the algorithms themselves and in the medical research and data they are trained on. We must also ensure that healthcare providers are properly educated about how these models operate and how to interpret their outputs. If developers and medical professionals do address these challenges, then Artificial Intelligence technology has immense potential to improve diagnostic accuracy, increase efficiency in analyzing scans and tests, and relieve healthcare providers of time-consuming, menial tasks. With a dedication to accuracy and human rights, perhaps the integration of Artificial Intelligence into healthcare will meet my English classmates’ optimistic standards and aid them in their future jobs.
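
One concrete form such a bias evaluation can take is a per-group audit of a model’s error rates before deployment. The sketch below is a minimal, hypothetical version using toy labels; real audits cover many more metrics and subgroups.

```python
# Minimal per-group audit: compare accuracy and true-positive rate by group.
import numpy as np

def audit_by_group(y_true, y_pred, groups):
    """Print accuracy and true-positive rate for each demographic group."""
    for g in np.unique(groups):
        mask = groups == g
        acc = np.mean(y_true[mask] == y_pred[mask])
        positives = mask & (y_true == 1)
        tpr = np.mean(y_pred[positives] == 1) if positives.any() else float("nan")
        print(f"group {g}: accuracy={acc:.2f}, true-positive rate={tpr:.2f}")

# Toy example: a model that under-detects disease (label 1) in group B.
y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0])
y_pred = np.array([1, 1, 0, 0, 0, 0, 0, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
audit_by_group(y_true, y_pred, groups)  # group B's true-positive rate is 0.0
```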


Rights and Regulations: A Case Study on Guidelines for AI Use in Education

Based on my previous two articles, a reader of this blog might assume that I’m an advocate for the complete eradication of Artificial Intelligence, given the many criticisms I’ve made of the AI industry. While you shouldn’t expect these critiques to stop on my end, I also accept the fact that AI has effectively taken over the technological world and will not easily be vanquished. Therefore, a more realistic approach to keeping AI within acceptable bounds is regulating its use. This regulation is especially imperative when it comes to our nation’s youth. Their human right to quality education centered on tolerance and respect should not be infringed upon by generative AI use.

That is why programs addressing AI literacy and guidelines on its use in schools are so essential. The Alaska Department of Education’s Strategic Framework on AI use in the classroom, released in October 2025, outlines strategies for safe, responsible, and ethical AI integration in K-12 schools. Alaska is merely the latest state to adopt guidelines for AI use in public schools; a total of 27 states and Puerto Rico have established such policies. Today, I’ll concentrate on Alaska’s framework as a case study to explore the value of state and local guidelines on AI education and classroom use.

FEDERAL REGULATIONS

In April of this year, an executive order was signed promoting AI competency in students and establishing a Task Force on Artificial Intelligence Education. In response, the U.S. Department of Education has released potential priorities for grants funding the integration of AI into education: “evidence-based literacy, expanding education choice, and returning education to the states.” While these statements are an encouraging acknowledgement of the need to turn our attention to the use of Artificial Intelligence in academia, they fail to provide tangible guidelines or policies that effectively promote the proper use of AI in schools. These statements also fall short of acknowledging the need for regulation of and limitations on AI’s role in academia; in fact, “America’s AI Action Plan” highlights the administration’s aversion to regulation by providing that states that implement “burdensome AI regulations” should not have access to federal funding for AI-related matters.

STATE-LEVEL POLICIES

The federal government’s failure to acknowledge AI’s limitations when it comes to privacy, ethics, and functionality in education leaves a vacuum of guidelines and regulations on AI’s educational use. This lack of parameters has raised concerns about academic misconduct, plagiarism, privacy breaches, algorithmic bias, and the uncritical acceptance of generated information that may be inaccurate or unreliable. Complete bans, meanwhile, fail to address AI’s potential when used responsibly and create environments where students find new and creative ways to access generative AI despite the ban.

Thankfully, states are beginning to recognize the need to fill this void to maintain the quality and safety of children’s education. Alaska’s Department of Education answered the call with its K-12 AI Framework, which offers “recommendations and considerations for districts” to guide school districts’ Artificial Intelligence policies and to advise educators on how to treat AI use in their classes.

A metal placard on a building reads "Department of Education"
Adobe Stock, D Howe Photograph #244617523

These guidelines serve to “augment human capabilities,” educating students on how to maintain critical thinking and creativity while employing generative AI in their studies. This purpose is supported by the framework’s guiding principles for AI integration, which serve as building blocks for fostering a positive relationship between students and generative AI, educating about its limitations while highlighting how it can be used properly. To take a human rights-based approach to highlighting the value of these principles, I’ll pair each guideline with the specific human rights it works to preserve.

ARTICLE 27

Article 27 of the Universal Declaration of Human Rights (UDHR) establishes the right to enjoy scientific advancements as well as the protection of ownership over one’s scientific, literary, or artistic creations. Alaska’s framework provides for a human-centered approach to AI integration, emphasizing that districts should move beyond banning generative AI and instead adopt initiatives to ensure AI enriches human capabilities rather than replacing them. This ensures that students have access to the scientific advancement of generative Artificial Intelligence without diminishing the quality of their education. The “Fair Access” aspect of Alaska’s framework outlines additional provisions for ensuring students have equal access to AI-based technological advancements. It calls for allocating funding dedicated to accessible Internet and AI access, as well as implementing an AI literacy program within school districts.

A boy looks at a computer monitor, generating an AI image.
Adobe Stock, Framestock, #1684797252

Additionally, the “Transparency” and “Ethical Use” principles provide that AI generated content should be properly attributed and disclosed. Citations are a requirement under these guidelines, and any work completed entirely by generative AI is considered plagiarism. This maintains the right to ownership over one’s creations by ensuring that generative AI and the data it pulls from are properly attributed.

ARTICLE 26

Article 26 of the UDHR codifies the right to an education that promotes tolerance for other groups and respect for fundamental freedoms and rights. Alaska’s AI framework calls for recognition of generative AI’s potential algorithmic biases against certain ethnic, racial, or religious groups. It states that students should be educated about the prejudices, misinformation, and hallucinations a generative AI model may produce, emphasizing that its outputs must be critically examined. By overtly acknowledging the manifestation of societal prejudices in these algorithms, Alaska’s guidelines preserve education’s role in upholding dignity and respect for others. This work requires the inclusion of diverse local stakeholders, such as students, parents, and community leaders, in discussions and policymaking regarding AI regulations in the classroom, and the framework provides suggestions for exactly that.

ARTICLE 3 AND ARTICLE 12

The final human rights Alaska’s framework works to uphold are outlined in Article 3 and Article 12 of the UDHR, which state the rights to security of person and to privacy, respectively. The AI Framework establishes that student data protection and digital well-being are essential to maintain and educate on. It places responsibility on districts to support cybersecurity efforts and comply with federal privacy laws such as the Family Educational Rights and Privacy Act and the Children’s Internet Protection Act. Schools also have an obligation to review the terms of service and privacy policies of any AI tools used in classrooms to ensure students’ data is not abused. Educators should also teach their students how to protect their personally identifiable information and warn them of the consequences of entering sensitive information into generative AI tools.
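
As a concrete illustration of that last point, here is a hypothetical sketch of one safeguard a classroom tool could apply: scrubbing obvious personally identifiable information from a prompt before it reaches a generative AI service. The patterns below are deliberately simple and purely illustrative; real PII detection requires far more robust methods.

```python
# Illustrative PII scrub: redact obvious emails, phone numbers, and SSNs
# from a prompt before sending it to a generative AI tool.
import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub(prompt: str) -> str:
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} removed]", prompt)
    return prompt

print(scrub("My email is jane.doe@example.com and my phone is 205-555-0123."))
# -> My email is [email removed] and my phone is [phone removed].
```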

A page in a book reads "FERPA, Family Educational Rights and Privacy Act"
Adobe Stock, Vitalii Vodolazskyi, #179067778

WHAT’S NEXT

Alaska’s framework is only one example of a wider trend of states adopting guidelines on Artificial Intelligence’s role in education. These regulations ensure that students, educators, and stakeholders acknowledge the limitations and potential of AI while implementing it in a way that serves human ingenuity rather than replacing it. These guidelines go only so far without local implementation, though. We must civically engage with local school boards, individual school administrations, educators, and communities to ensure these helpful guidelines are properly abided by. Frameworks like Alaska’s offer sample policies for school boards to enact and example school handbook language that can be employed to preserve human rights in the face of AI expansion; all it takes is local support and implementation to push these policies into action. Community training and panels could be utilized to start conversations between families, students, community members, and AI policymakers and experts.

As individuals, it is our place to engage in these community efforts. And if you’re a student reading this, take Alaska’s framework on guiding AI use in education into consideration the next time you’re thinking about using ChatGPT on an assignment. From plagiarism to biases to security, there’s good reason to tread carefully and to take a responsible approach to AI use, one that doesn’t encourage over-reliance but instead treats AI as a helping hand.

Economy and Exploitation: The AI Industry’s Unjust Labor Practices

I remember when ChatGPT first started gaining popularity. I was a junior in high school, and everyone around me couldn’t stop marveling over its seemingly endless capabilities. The large language model could write essays, answer physics questions, and generate emails out of thin air. To us sixteen- and seventeen-year-olds, it felt like we had discovered magic: a crystal ball that did whatever you told it.

I’m writing this, three years later, to break the news that it is, unfortunately, not magic. Artificial Intelligence (AI) relies on human input at nearly every stage of its preparation and verification. From content moderation to data collection, outwardly automated AI systems require constant human intervention to ensure the algorithm runs smoothly and sensibly. That intervention calls for human labor to sift through and manage a given model’s data and performance. But where does this labor come from? And what are the implications of these workers’ invisibility to the public?

Labor Source

On the surface, it appears that Big Tech companies such as OpenAI, Meta, and Google bear the brunt of the labor it takes to develop and operate their AI systems. A closer look reveals that the human labor these AI systems require is distributed across the globe. These massive companies employ subcontractors to hire and manage workers who perform the countless small, repetitive tasks required. These subcontractors, looking for maximum profit, often hire workers from less developed countries where labor rights are less strictly enforced and wages are not stringently regulated. What does this mean? Cheap, exploitative labor. People living in poverty, refugee camps, and even prisons have been performing data tasks through platforms like Amazon Mechanical Turk and Clickworker. The outsourcing of work to countries such as India and Kenya by affluent businesses in mostly Western countries seems to perpetuate patterns of exploitation and colonialism and to play into global wealth disparities.

Woman in a chair looking at computer screens
Crowdsourced Woman Monitors Data. Source: Adobe Stock

Wages

On top of the larger systemic implications of wealthier countries’ outsourcing their labor to less affluent countries, the individual workers themselves often suffer human rights abuses regarding wages.

According to the International Labour Organization (ILO), wage theft is a pressing issue for crowdworkers, as employers deny wages to anyone deemed to have completed their tasks incorrectly. Issues with software and the flagging system can result in employers withholding wages when completed tasks are wrongly labeled as incorrect. In the ILO’s survey, only 12 percent of crowdworkers said that all of their task rejections were justified, with the majority claiming that only some of them were warranted. In other instances, pay can take the form of vouchers or gift cards, some of which are deemed invalid upon use. Unexpected money transfer fees and hidden fines can also result in wages being lower than initially expected or promised.

Woman looking at her phone and credit card in shock.
Woman Looks at Her Wages, Which Are Lower than Expected. Source: Adobe Stock

Even if outsourced workers were always paid correctly, their earnings usually wouldn’t amount to much. According to an ILO survey, the median earnings of microworkers were 2 US dollars an hour. In one specific case in Madagascar, wages were as low as 41 cents an hour. These workers are being paid far less than a livable wage under the excuse that their work is menial and performed task-by-task. The denial of wages and the outsourcing companies’ low pay rates violate ‘equal pay for equal work’ under Article 23 of the Universal Declaration of Human Rights (UDHR).

For some people in periphery countries like India and Venezuela, data microwork is their only source of income. Its convenience and accessibility are attractive to those who don’t have the resources to apply for typical jobs, but its wages do not pay for the decent standard of living outlined in Article 25 of the UDHR. As one microworker from Venezuela said in an interview with the BBC, “You will not live very well, but you will eat well.”


Working Conditions

In addition to low wages, crowdworkers often face human rights violations regarding working conditions, and most are largely unable to access methods of advocating for better treatment from their employers. For those who classify and filter data, a day at work may include flagging images of graphic content, including murder, child sexual abuse, torture, and incest. This was the case for Kenyans employed by Sama under its contract with OpenAI; workers have testified to having recurring visions of the horrors they have seen, describing their tasks as torturous and mentally scarring. Many requests for mental health support are denied or go unfulfilled. These experiences make workers vulnerable to developing post-traumatic stress disorder, depression, and emotional numbness.


Woman covering her face as she looks at a laptop.
Woman Looks At Disturbing Images as She Monitors Data. Source: Adobe Stock

In one instance, the subcontractor Sama shared the personal data of one crowdworker with Meta, including parts of a non-disclosure agreement and payslips. Other workers on Amazon Mechanical Turk experienced privacy violations like “sensitive information collection, manipulative data aggregation and profiling,” and methods of scamming and phishing. This arbitrary collection and abuse of workers’ private data directly violates Article 12 of the UDHR, which enshrines the protection of privacy as a human right.

The nature of crowdwork is such that individuals work remotely and digitally, granting contractors more power over their workers and significantly diminishing microworkers’ capacity to take collective action and bargain with employers for better conditions. The independent contractor relationship between employers and employees has weakened the ability of microworkers to unionize and negotiate with their contractors. Employers are able to rate crowdworkers poorly, which often results in workers being rejected when they attempt to find new tasks to fulfill. There are few ways for workers to review their employers’ performance in a similar way, creating an unjust power imbalance between employer and employee and enabling various violations of labor rights. The possible convenience of self-employment and remote work comes with surrendering basic workers’ rights, such as “safeguards against termination, minimum wage assurances, paid annual leave, and sickness benefits”. Each of these aspects of microwork denies employees the labor rights outlined in Article 23 of the UDHR, another direct violation of human rights by these outsourcing companies.

What’s Next?

The first step to addressing the human rights violations facing outsourced Artificial Intelligence data microworkers is ensuring their visibility. Dismantling the narrative of Artificial Intelligence models as fully automated systems and raising awareness about the essential roles microworkers play in the preparation and validation of data can help garner public attention. Since many of these crowdworkers are employed abroad, it is also important for advocates to highlight the exploitation that these tech companies and contractors are profiting from. In addition, because these workers have little bargaining power, making their struggles visible and starting dialogue with companies on their behalf can be a crucial step towards ensuring that microworkers have access to their human and privacy rights. As research and policy continue to expand regarding AI’s impact on the labor force, it is essential that academics and lawmakers alike consider the effects on the whole production chain, including low-wage workers abroad, rather than just the middle-class domestic workforce. Finally, it is imperative that big tech businesses and the crowdsourcing companies they contract with are held publicly accountable for their practices and policies when it comes to wages, payment methods, mental health resources, working conditions, and unionization. These initiatives can begin only once the public becomes aware of the exploitation of these invisible workers. So, the next time someone throws a prompt at ChatGPT, start a conversation about how reliant AI is on human labor. Only then can we start to grant visibility to microworkers and work towards change.

Construction and Consequences: The Human Impacts of Artificial Intelligence Data Centers

This summer, I worked with a few different advocacy organizations during Louisiana’s 2025 legislative session. The number of policy issues flying around was mind-spinning, but a constant murmur about the new Meta data center popping up in Richland Parish always seemed to pierce through the chaos. I couldn’t help but think, “Of all the state issues we could be debating, what could be so provocative about a data center?”

Data centers are nothing new; ever since the birth of the Internet, they have been used for the large-scale computing that comes with ever-advancing technology. With the rapid expansion of generative AI, our country is seeing more and more of these processing centers pop up, especially in rural areas. Governments, researchers, and communities alike have been forced to face the glaring reality that comes with the construction and maintenance of new AI data centers: where there are new data centers, there are human lives directly impacted by their creation. Debate over whether these effects are a net positive or negative for these communities has prompted closer examination of the human impact of data centers. Only through a thorough analysis of this ongoing research can we determine the nature and scope of these impacts and explore proper policy responses.

A large computing center surrounded by rural farmland.
Source: Adobe Express, Sepia100, #566722487

WATER

We rely on water; it’s as simple as that. We need water to drink, bathe, flush the toilet, wash our hands and dishes, and water our crops; it’s a necessity of life and an officially recognized human right. As much as we need water, data centers are even thirstier. It takes a lot of water to cool down all of the computing that takes place in these buildings. In 2021, just one of Google’s data centers in Oregon used up 355 million gallons of water. In 2023, all of Meta’s data centers worldwide guzzled around 1.4 billion gallons of water. Where is this water coming from? Of Meta’s 1.4 billion gallons, about 672 million gallons came from local water sources. Much of this water is consumed rather than returned, meaning data centers deplete millions of gallons from communities’ local water supplies yearly, and with the industry’s rapid expansion, its water consumption will only grow. Some residents living near these new data centers, such as Beverley Morris in Mansfield, Georgia, believe that the centers are draining wells and aquifers, leaving locals without drinkable or fully functional running water in their homes. For communities in the Southwest, this could pose an especially pressing threat during droughts as the scarce water supply is divided between industrial and civilian use.
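
For a sense of scale, a quick back-of-the-envelope calculation helps. The sketch below assumes the EPA’s commonly cited average of roughly 300 gallons of water per day for an American household; the numbers are rough approximations for intuition only.

```python
# Rough scale check: Meta's reported local water withdrawals vs. household use.
local_withdrawals = 672_000_000   # gallons drawn from local sources, 2023
household_per_year = 300 * 365    # assumed gallons per US household per year

print(f"about {local_withdrawals / household_per_year:,.0f} households' annual water use")
# -> on the order of 6,000 households
```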

Landon Marston, a professor of environmental and water resources engineering at Virginia Tech, points out that since companies like Meta and Google tend to choose areas outside of cities to construct these data centers, the surge in water demand could also necessitate water infrastructure updates, the costs of which could fall partly on local ratepayers.

ENERGY

AI data centers require enormous amounts of energy. We’re talking 200 terawatt-hours a year, that is, 200 trillion watt-hours, and that was only in 2016. Data centers’ power usage is projected to rise to nearly 2,967 terawatt-hours a year by 2030. National electricity demand, previously flat, has been increasing since 2023, partly due to the energy-intensive operations of growing data centers. The majority of data centers’ energy comes from fossil-fuel power plants, putting pressure on local energy grids. This increased pressure poses the threat of more frequent, longer-lasting, and more expensive blackouts for the communities surrounding these energy-hungry data centers.
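
To put those terawatt-hour figures in perspective, here is a rough comparison. It assumes total US electricity consumption of about 4,000 TWh per year, a commonly cited ballpark, and is purely for scale; the sources do not specify the geographic scope of the 2030 projection.

```python
# Rough perspective on the projected growth in data center electricity use.
twh_2016 = 200    # terawatt-hours per year, 2016
twh_2030 = 2967   # projected terawatt-hours per year, 2030
us_total = 4000   # assumed annual US electricity consumption, TWh (ballpark)

print(f"growth factor: {twh_2030 / twh_2016:.0f}x")
print(f"2030 projection vs. assumed US total: {twh_2030 / us_total:.0%}")
# -> roughly a 15x increase, on the order of three-quarters of US consumption
```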

More pressure on the grid naturally means more pressure to update the grid. Local belief and research alike contend that the cost of these grid updates, as well as the price tag of the extra energy demand, will show up on locals’ energy bills. A Harvard study provides evidence that under-the-table agreements between utilities and Big Tech consumers could be partly responsible for increased rates on everyday residents’ bills. Additionally, in places like Louisiana, the combination of a prolonged need for air conditioning and storm damage to energy infrastructure drives energy bills up as it is; the intense demands of the new data center will serve only to exacerbate the steep cost of energy and amenities in nearby homes and businesses. Utilities are essential to a decent quality of life and even to employment, tying their accessibility directly to human rights.

A person with a calculator in one hand and a utility bill in the other attempts to calculate what they owe.
Source: Adobe Express, Anna, #529027855

PUBLIC HEALTH

Since AI data centers rely heavily on the fossil-fuel energy of power plants, they run the risk of increasing local pollution and threatening public health in already vulnerable rural locations. On top of their energy use from the grid, AI centers also employ backup generators in case of grid failure; these diesel generators can release 200 to 600 times more nitrogen oxides (NOx) than a natural gas plant producing the same amount of energy. NOx pollution can cause irritation in the eyes, throat, and nose, as well as more severe cases of respiratory infection, reduced metabolism, and even death. According to the Institute of Electrical and Electronics Engineers (IEEE), data centers caused about $6 billion in public health damages from this type of air pollution in 2023. That being said, location matters. These data centers often choose rural areas, and in cases like that of Bessemer, Alabama, those areas are home to a large Black population. Black Americans already suffer disproportionately from air pollution and other environmental injustices; in fact, low-income Black Americans have the highest mortality rate due to fine particulate matter air pollution. The emergence of data centers in rural Black communities only serves to exacerbate this phenomenon. This can be directly traced to industrial zoning policies, which often sacrifice poor, rural, largely Black areas to attract business and wealth to cities. The result? Higher rates of asthma, respiratory issues, even pollution-related death, and a direct violation of the human right to clean air.


Smog plumes out of a large plant, polluting the sky.
Source: Adobe Express, Jaroslav Pachý Sr., #175217425

ECONOMY

While industrial zoning and property values are the most important location factors, choosing a lower-income, rural area also offers possible economic advantages for the community. The construction of processing centers can require thousands of workers, offering steady employment opportunities for locals. After construction, companies like Meta, Google, and Microsoft must hire employees to keep their data centers managed and running properly, creating more openings for those in the surrounding area. Some locals have expressed excitement over the new economic growth data centers will bring, especially in areas with dwindling industries like coal and timber. Working in data centers is an attractive alternative to the low-paying, dangerous agricultural jobs some of these areas rely on. Others have raised concerns that while many jobs will certainly appear during the construction period, employment opportunities seem to fall off afterwards. Depending on its size, each data center building could operate with as few as fifty employees, according to Microsoft. Larger ones, like the one under development in Louisiana, are required to employ 500 locals, but even that opportunity seems small to some residents compared to the harm the center could bring to their community. Members of communities impacted by data center development have also expressed concerns about land usage, pointing out that the extensive land taken up by these new centers could have been used for farming or other less health-damaging economic development. The rights to employment and good working conditions are outlined directly in the Universal Declaration of Human Rights, and these economic impacts could very well jeopardize them for those living in surrounding areas.

What Now?

Artificial Intelligence isn’t going away; in fact, we can expect its rapid expansion in the coming years, including the construction of dozens of new data centers. Behind AI’s captivating technologies are human lives impacted by the processes that power its functions. Considering the damage data centers can do to local resources, measures clearly need to be taken to ensure the escalating growth of AI doesn’t come at the expense of communities, especially those already facing disadvantage. First and foremost, companies establishing these centers should focus on using renewable energy for much of their power, thereby decreasing their environmental impact on local communities. In addition, companies should adopt initiatives to maintain the integrity of the local water supply, recycle water when possible, and, ultimately, improve the efficiency of their computing to save resources like water and electricity. Local governments must ensure that the price of increased pressure on electricity and water infrastructure does not end up on ratepayers’ bills; this means more transparency about large companies’ agreements with local utility providers and governments regarding the construction and maintenance of these centers and their impacts on local residents’ well-being. These centers, if built sustainably and with people in mind, could ultimately have a positive impact on industry and the economy within these communities. The development of data centers must not concentrate solely on maximizing profit and computing power but must also weigh the adverse effects a center has on utility bills, air quality, water demands, the power grid, and public health as a whole.

So, really, it’s no wonder advocates, lobbyists, and policymakers couldn’t stop talking about Richland Parish’s new data center. It’s nearly as big as Manhattan, and its effects on the surrounding community may end up being just as sizable.