Rights and Regulations: A Case Study on Guidelines for AI Use in Education

Given the many criticisms I’ve made of the AI industry in my previous two articles, a reader of this blog might assume that I’m an advocate for the complete eradication of Artificial Intelligence. While you shouldn’t expect these critiques to stop on my end, I also accept the fact that AI has effectively taken over the technological world and will not easily be vanquished. Therefore, a more realistic approach to keeping AI within acceptable bounds is regulating its use. This regulation is especially imperative when it comes to our nation’s youth. Their human right to a quality education centered on tolerance and respect should not be infringed upon by generative AI use.

That is why programs addressing AI literacy and guidelines on its use in schools are so essential. The Alaska Department of Education’s Strategic Framework on AI use in the classroom, released in October 2025, outlines strategies for safe, responsible, and ethical AI integration in K-12 schools. Alaska is merely the latest state to adopt guidelines for AI use in public schools; a total of 27 states and Puerto Rico have established such policies. Today, I’ll be concentrating on Alaska’s framework as a case study to explore the value of state and local guidelines on AI education and use in the classroom.

FEDERAL REGULATIONS

In April of this year, an executive order was signed promoting AI competency in students and establishing a Task Force on Artificial Intelligence Education. In response, the U.S. Department of Education released potential priorities for grants funding the integration of AI into education: “evidence-based literacy, expanding education choice, and returning education to the states.” While these statements are an encouraging acknowledgement of the need to turn our attention to the use of Artificial Intelligence in academia, they fail to provide tangible guidelines or policies that effectively promote the proper use of AI in schools. They also fall short of acknowledging the need to regulate and limit AI’s role in academia; in fact, “America’s AI Action Plan” highlights the administration’s aversion to regulation by providing that states implementing “burdensome AI regulations” should not have access to federal funding on AI-related matters.

STATE-LEVEL POLICIES

The federal government’s failure to acknowledge AI’s limitations when it comes to privacy, ethics, and functionality in education leaves a regulatory vacuum around AI’s educational use. This lack of parameters has raised concerns about academic misconduct, plagiarism, privacy breaches, algorithmic bias, and the dogmatic acceptance of generated information that may be inaccurate or unreliable. Complete bans, meanwhile, fail to address AI’s potential when used responsibly and create environments where students find new and creative ways to access generative AI despite the ban.

Thankfully, states are beginning to recognize the need to fill this void to maintain the quality and safety of children’s education. Alaska’s Department of Education answered the call with its K-12 AI Framework document, which offers “recommendations and considerations for districts” to shape local Artificial Intelligence policies and to guide educators in how to treat AI use in their classes.

A metal placard on a building reads "Department of Education" (Adobe Stock, D Howe Photograph #244617523)

These guidelines serve to “augment human capabilities,” educating students on how to maintain critical thinking and creativity while employing generative AI in their studies. This purpose is supported by the guiding principles for AI integration outlined in the framework; these principles serve as building blocks for fostering a positive relationship between students and generative AI, educating about its limitations while highlighting how it can be used properly. To take a human-rights-based approach to highlighting the value of these principles, I’ll identify the specific human rights that each guideline works to preserve.

ARTICLE 27

Article 27 of the Universal Declaration of Human Rights (UDHR) establishes the right to enjoy scientific advancements as well as the protection of ownership over one’s scientific, literary, or artistic creations. Alaska’s framework provides for a human-centered approach to AI integration, emphasizing that districts should move beyond banning generative AI and instead adopt initiatives to ensure AI enriches human capabilities rather than replaces them. This ensures that students have access to the scientific advancement of generative Artificial Intelligence without diminishing the quality of their education. The “Fair Access” aspect of Alaska’s framework outlines additional provisions for ensuring students have equal access to AI-based technological advancements. It calls for allocating funding dedicated to accessible Internet and AI access, as well as implementing an AI literacy program within school districts.

A boy looks at a computer monitor, generating an AI image (Adobe Stock, Framestock #1684797252)

Additionally, the “Transparency” and “Ethical Use” principles provide that AI-generated content should be properly attributed and disclosed. Citations are required under these guidelines, and any work completed entirely by generative AI is considered plagiarism. This maintains the right to ownership over one’s creations by ensuring that generative AI and the data it pulls from are properly attributed.

ARTICLE 26

Article 26 of the UDHR codifies the right to an education that promotes tolerance for other groups and respect for fundamental freedoms and rights. Alaska’s AI framework calls for recognition of generative AI’s potential algorithmic biases against certain ethnic, racial, or religious groups. It states that students should be educated about the prejudices, misinformation, and hallucinations a generative AI model may produce, emphasizing that its outputs must be critically examined. By overtly acknowledging the manifestation of societal prejudices in these algorithms, Alaska’s guidelines preserve the human right to uphold dignity and respect for others within education. The framework also calls for including diverse local stakeholders, such as students, parents, and community leaders, in discussions and policymaking on classroom AI regulations, and it offers suggestions for doing so.

ARTICLE 12 and ARTICLE 3

The final human rights Alaska’s framework works to uphold are outlined in Article 3 and Article 12 of the UDHR, which establish the rights to security of person and to privacy, respectively. The framework establishes that student data protection and digital well-being are essential both to maintain and to teach. It places responsibility on districts to support cybersecurity efforts and to comply with federal privacy laws such as the Family Educational Rights and Privacy Act and the Children’s Internet Protection Act. Schools also have an obligation to review the terms of service and privacy policies of any AI tools used in classrooms to ensure students’ data is not abused. Educators, in turn, should teach their students how to protect their personally identifiable information and the consequences of entering sensitive information into generative AI tools.

A page in a book reads "FERPA, Family Educational Rights and Privacy Act" (Adobe Stock, Vitalii Vodolazskyi #179067778)

WHAT’S NEXT

Alaska’s framework is just one example of a wider trend of states adopting guidelines on Artificial Intelligence’s role in education. These regulations ensure that students, educators, and stakeholders acknowledge the limitations and potential of AI while implementing it in a way that serves human ingenuity rather than replacing it. Such guidelines only go so far without local implementation, though. We must civically engage with local school boards, individual school administrations, educators, and communities to ensure these helpful guidelines are properly followed. Frameworks like Alaska’s offer sample policies for school boards to enact and examples of school handbook language that can be employed to preserve human rights in the face of AI expansion; all it takes is local support and implementation to push these policies into action. Community trainings and panels could be used to start conversations among families, students, community members, and AI policymakers and experts.

As individuals, it is our place to engage in these community efforts. And if you’re a student reading this, take Alaska’s framework for guiding AI use in education into consideration the next time you’re thinking about using ChatGPT on an assignment. From plagiarism to bias to security, there’s good reason to tread carefully and to embrace a responsible approach in which AI serves as a helping hand rather than a crutch.

Why Big Data is a Human Rights Concern

What Is Big Data?

Big Data refers to the collection and use of massive volumes of data to study, understand, and predict human behavior. Big Data, or predictive analytics, is a fast-growing new field. To predict human behavior with a high degree of accuracy, data researchers require millions of data points. Collecting this data is far easier than using it to make useful predictions: a large portion of these researchers’ work lies in organizing and manipulating the data into a format that allows detailed analysis to be conducted. Researchers use highly advanced algorithms to process the data and make useful predictions about future human behavior. Many of the world’s largest tech and social media companies, including Google, Amazon, Facebook, Twitter, and Snapchat, are at the forefront of this industry. In this article, I will focus on the cases of China and Facebook.

To clarify, Big Data and Artificial Intelligence are not the same thing. Big Data is a general term referring to the collection of massive amounts of information and the algorithms used to make predictions about future outcomes. Artificial Intelligence falls under Big Data, but it is a distinct subcategory that refers to programming computers to do tasks that normally require human intelligence, ranging from Tesla’s self-driving mode to Siri’s speech recognition to Amazon’s predictions of shopping habits.

The primary reason these companies lead the field is that they are perfectly positioned to make use of Big Data. Each has access to the information of millions of users and customers, which supplies the massive amount of data needed for Big Data research. Many governments are also becoming increasingly involved in the use of predictive analytics. Examples of Big Data’s uses include counter-terrorism, personalized ads, increasing work productivity, and predicting virus outbreaks.

Why Is Big Data Concerning?

While Big Data presents many possibilities for good, it raises many moral and ethical concerns. The primary concern is an individual’s right to privacy online. In the United States and many countries around the world, personal rights to privacy in the physical world are well established. Law enforcement, or anyone for that matter, cannot search our belongings, homes, cars, or persons without consent unless there are legal grounds for probable cause and a warrant to search. However, most of these laws and regulations fail to extend privacy rights to online activity. The quick rise of the Internet and the rapid pace of technological innovation have left these laws outdated and inadequate for a modern age in which the Internet is a daily requirement for many people’s lifestyles. The Internet is necessary for most jobs, access to the news, social connectivity with friends and family, entertainment, and freedom of expression. One could make the argument that access to the Internet is an ‘inherent and inalienable’ human right under the ‘life, liberty, and the pursuit of happiness’ guaranteed to all in the Declaration of Independence. Given that the Internet plays an ever-larger role in our lives, shouldn’t our rights to privacy extend to our time on the Internet? This is the human rights case being made for the creation of online privacy laws.

How Does Big Data Affect Me?

Currently, governments and companies are utilizing our information as they see fit with very little, or in some cases no, consent or oversight. Big Data has become a valuable commodity that is bought and sold between these entities. Almost every aspect of our lives that can be monitored is being tracked, through the widespread use of surveillance cameras, logging of browser search history, online purchasing habits, flight reservations, financial records, social media posts, and physical appearance and security data. Private companies, such as Facebook and Google, use this data to create millions of detailed user profiles. These companies then monetize their customer information by selling access to other companies, governments, and organizations attempting to conduct research, target ads, or make other use of this massive amount of information. It is hard to even imagine how large Big Data is: the majority of billions of people’s online activities is being stored and collected. The data is so vast that no single computer’s hard drive can store it all; it must be accessed through the cloud, with giant server farms located all around the world maintaining this personal data.

With this much personal information on this many people, there is a great risk of abuse. There are many well-documented scandals of misuse of personal data and illegal online surveillance by governments. Companies have come under pressure for hacks and bugs that exposed personal information on users, many of whom weren’t even aware that their information was being collected. These instances are clear violations of the basic human right to privacy and further highlight the need for online privacy legislation.

China’s War on Online Freedom & Privacy

China is the largest country in the world by population, the world’s third largest economy behind only the United States and the European Union, and highly advanced in education and technological innovation. However, it is also responsible for the largest case of mass censorship, denial of freedom of Internet access, and mass online tracking and surveillance the world has ever seen. In 1995, China allowed the general public access to the Internet. However, the regime quickly realized the potential for political opposition movements to utilize the Internet as a means of protest and a call for change. To combat this, the leaders enacted laws punishing those who posted anything online that could be deemed to “hurt national security or the interests of the state.”

To enforce these laws, the government invested heavily in tools for removing information that violated these terms and for regulating the flow of data into and out of the country. This system has become known as the “Great Firewall of China.” When Xi Jinping became president in 2012, he restricted access to the Internet further and increased the penalties for violating the country’s strict Internet laws. Xi mounted a heavy offensive against any resistance, arresting many individuals and punishing dozens of companies for violating his policies. Under his rule, China employs over two million people just to regulate and censor information contrary to the “interests” of the country. This has culminated in what has been labeled the Great Cannon, an offensive tool operated by Chinese government hackers to attack sites in violation of Chinese internet regulations.

China is a clear example of the dangers of a government having too much control over the internet. With no transparency or oversight, governmental abuse of power will often result. Xi was able to single-handedly suppress over a billion people’s rights, beliefs, and access to news and important information. These are the risks when rights to the Internet and online privacy are not protected through legislation. Internet access and privacy laws would safeguard against the government violating the rights of citizens.

Does Facebook Care About My Privacy?

On the other end of the spectrum, the absence of regulation can also be detrimental. The lack of stringent internet privacy laws controlling how big tech companies in the US can use their users’ information has led to many citizens’ privacy rights being violated. One of the most alarming incidents came this past year with the Cambridge Analytica scandal. Cambridge Analytica collected the personal data of 87 million users due to Facebook’s inadequate safeguards against data harvesting and its lack of oversight of Facebook developers.

Facebook has policies in place that allow people and companies claiming to conduct research to gain access to users’ accounts if given permission. However, when a single user downloaded the developer’s app and agreed to share their information, the information of every one of that user’s Facebook friends could also be obtained. This flaw is what allowed 87 million people’s personal information to be improperly collected without their knowledge.

The app was created by a researcher at Cambridge University and programmed to harvest the data of each user and every one of that user’s friends, raising ethical concerns as well as a security issue. The vast majority, around 99.7%, of the 87 million harvested accounts belonged to users who were unaware that companies and researchers possessed their information or of how it was being used. One wonders how Facebook could legally and ethically allow millions of users’ data to be harvested without consent. Even more concerning is the lack of protocols for ensuring that these companies’ motives are legitimate and that the harvested data is protected. With so little oversight, this data was licensed illegally to Cambridge Analytica and other companies for over a year before Facebook became aware of the incident. Nor is this an isolated case: The Washington Post reported that Facebook announced “malicious actors” had abused the search function to gather public profile information on “most of its 2 billion users worldwide.” Facebook’s lack of stringent safety measures to prevent the harvesting and abuse of users’ personal data is alarming. The company has misused and failed to protect user data many times and has suffered little to no punishment due to the lack of laws to hold it accountable.

What Needs to Happen to Protect Online Privacy

China and Facebook have shown us the dangers of not having well-crafted internet privacy laws and policies in place. It is clear that a balance needs to be struck in the level of internet regulation. With too much regulation, the government holds a large amount of power that can be abused, as seen in China. With too little, private companies and citizens are free to abuse the rights of others. The goal is to push for internet privacy laws that adequately protect user rights while preventing either of these extremes. Once that balance is achieved, the human right to freedom of expression and privacy online will be secured for all people.

 
