AI Concerns for the School Photography and Yearbook Industry

Updated: Nov 6, 2023



When it comes to using artificial intelligence (AI) and generative AI in the school photography and yearbook industry, there are several concerns related to privacy and safety, as well as ethical considerations. This is exactly why the top leaders in the industry are coming together in Las Vegas in two weeks to discuss building an AI Standards and Guidelines committee and creating a document for the protection of students, schools, and the customers of our industry. Some of the top concerns and issues that all should be aware of are:


  1. Privacy: AI systems often require large datasets to learn and continually improve. When these datasets include children's pictures, there is a risk of compromising their privacy, especially if the images are not properly anonymized. Ensuring that children's identities are protected is a paramount concern and is vital for our industry.

  2. Data Security: Storing and processing children's pictures requires robust security measures to prevent unauthorized access and data breaches. Safeguarding this sensitive information is crucial to prevent potential misuse. SPOA has partnered with the nation's leading student data privacy organizations, which can help each company ensure it not only meets state and federal guidelines but exceeds them. Click here for more information.

  3. Ethical Use: AI systems should be used ethically and responsibly. Avoiding any form of exploitation, manipulation, or harmful content creation involving children is vital. Ensuring that AI applications are designed to benefit children and society as a whole is a fundamental ethical consideration. We have heard loud and clear from across the country that parents do not want their child's appearance edited without their permission. There are many situations in which a company might consider using AI to enhance its images; however, companies should be very careful not to use "enhancement" as cover for removing birthmarks, altering teeth, changing smiles, or otherwise making a child look like someone they are not.

  4. Bias and Discrimination: AI systems, including those dealing with pictures, can inherit biases present in their training data. This can lead to discriminatory outcomes, especially for underrepresented groups. It's essential to continuously monitor and mitigate bias to ensure fair treatment of all children. This is also an ethical concern; it is a slippery slope and should be handled very carefully.

  5. Online Safety: Children's pictures processed using AI may end up online through a data breach, potentially making them targets for various threats, such as online predators. Ensuring that these images are not misused or leaked is critical for children's safety. We recommend working with a cybersecurity partner and following one of the leading student data privacy commitment pledges to help mitigate online safety concerns.

  6. Regulations and Compliance: Adhering to relevant laws and regulations, such as the Children's Online Privacy Protection Act (COPPA) in the United States, is mandatory. Understanding the Family Educational Rights and Privacy Act (FERPA), along with other state and federal laws, is imperative to your company's success. Companies and developers working with AI technologies involving children's data must comply with these regulations to avoid legal repercussions. We would encourage you to read more about these laws.

  7. Emotional Well-being: In some cases, AI applications involving children's pictures, such as deepfake technology, can be emotionally distressing for both children and their families. Protecting the emotional well-being of children is a significant concern in these contexts. More companies are building AI tools that create deepfakes than are building software to uncover them. However, we have been very impressed to see companies like Intel making major investments in software to expose these kinds of deepfakes. Please visit their website to learn more.
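To make the anonymization concern in point 1 concrete: before children's photo records enter any dataset used with AI, direct identifiers can be replaced with non-reversible tokens. The sketch below is a minimal, hypothetical Python example of that idea; the field names, the `pseudonymize` helper, and the salt handling are illustrative assumptions, not a prescribed or complete implementation (the image pixels themselves would still need separate handling).

```python
import hashlib
import hmac

# Assumption: the salt is a secret kept separate from the data (e.g., in a
# secrets vault). It prevents simple reversal by re-hashing known names.
SECRET_SALT = b"replace-with-a-securely-stored-secret"

def pseudonymize(value: str) -> str:
    """Return a stable, non-reversible token for an identifying value."""
    return hmac.new(SECRET_SALT, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

def anonymize_record(record: dict) -> dict:
    """Strip direct identifiers from a photo record, keeping only tokens
    and coarse, non-identifying fields."""
    return {
        "student_token": pseudonymize(record["student_name"]),
        "school_token": pseudonymize(record["school"]),
        "grade": record["grade"],            # coarse, non-identifying
        "image_file": record["image_file"],  # image content needs separate safeguards
    }

record = {
    "student_name": "Jane Doe",
    "school": "Lincoln Elementary",
    "grade": 5,
    "image_file": "img_0042.jpg",
}
print(anonymize_record(record))
```

The same input always produces the same token, so records can still be linked across a dataset without exposing a child's name.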

Addressing these concerns requires a combination of robust privacy policies, strong security measures, ethical guidelines, adherence to legal regulations, and leadership at the organizational level. Here at SPOA, we simply want to get in front of future issues and develop a committee that is educated, informed, and passionate about our industry, keeping the products and services we create safe for generations to come. Our AI Summit on Nov. 27th in Las Vegas is for developers and organizations to discuss the above issues, along with why our industry must be vigilant in our approach to ensure the safe and responsible use of AI in dealing with children's pictures and yearbooks.





