The U.S. Department of Education (Department) is committed to supporting the use of technology to improve teaching and learning and to support innovation throughout educational systems. This report addresses the clear need for sharing knowledge and developing policies for "Artificial Intelligence," a rapidly advancing class of foundational capabilities which are increasingly embedded in all types of educational technology systems and are also available to the public. We will consider "educational technology" (edtech) to include both (a) technologies specifically designed for educational use, as well as (b) general technologies that are widely used in educational settings. Recommendations in this report seek to engage teachers, educational leaders, policy makers, researchers, and educational technology innovators and providers as they work together on pressing policy issues that arise as Artificial Intelligence (AI) is used in education.
Challenge: Responding to Students' Strengths While Protecting Their Privacy

Educators seek to tackle inequities in learning, no matter how they manifest locally (e.g., in access to educational opportunities, resources, or supports). In culturally responsive42 and culturally sustaining43 approaches, educators design materials to build on the assets (the individual, community, and cultural strengths) that students bring to learning. Along with considering assets, of course, educators must meet students where they are, including both strengths and needs. AI could assist in this process by helping teachers with customizing curricular resources, for example. But to do so, the data input into an AI-enabled system would have to provide more information about the students. This information could be, but need not be, demographic details. It could also be information about students' preferences, outside interests, and relationships.44
41 Wagner, A.R., Borenstein, J., & Howard, A. (September 2018). Overtrust in the robotics age. Communications of the ACM, 61(9), 22–24.
42 Gay, G. (2018). Culturally responsive teaching: Theory, research, and practice. Teachers College Press. ISBN: 978-0807758762.
43 Paris, D., & Alim, H.S. (Eds.). (2017). Culturally sustaining pedagogies: Teaching and learning for justice in a changing world. Teachers College Press. ISBN: 978-0807758342.
What happens to this data, how it is deleted, and who sees it are of great concern to educators. As educators contemplate using AI-enabled technologies to assist in tackling educational inequities, they must consider whether the information about students shared with or stored in an AI-enabled system is subject to federal or state privacy laws, such as FERPA. Further, educators must consider whether interactions between students and AI systems create records that must be protected by law, such as when a chatbot or automated tutor generates conversational or written guidance to a student. Decisions made by AI technologies, along with explanations of those decisions that are generated by algorithms, may also be records that must be protected by law. Thus, a third tension emerges: between more fully representing students and protecting their privacy (Figure 10).

Figure 10: Responding to students' strengths while fully protecting student privacy
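To make the data-handling concern concrete, the sketch below shows one common mitigation: pseudonymizing a student identifier and stripping obvious direct identifiers from a chatbot exchange before the record is stored or shared with an AI service. This is an illustrative example, not guidance from the report; the field names (student_id, message), the salted-hash scheme, and the regex patterns are assumptions, and pseudonymization alone does not establish FERPA compliance.

```python
import hashlib
import re

# Hypothetical salt; in practice this would be a secret managed by the district.
SALT = "district-secret-salt"

def pseudonymize_id(student_id: str) -> str:
    """Replace a real student ID with a stable, non-reversible token."""
    return hashlib.sha256((SALT + student_id).encode()).hexdigest()[:16]

def redact_direct_identifiers(text: str) -> str:
    """Strip obvious direct identifiers (emails, phone numbers) from free text.
    Real redaction would need a far more thorough PII detector (e.g., for names)."""
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    text = re.sub(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b", "[PHONE]", text)
    return text

def log_chat_turn(student_id: str, message: str) -> dict:
    """Build a record of a student-chatbot exchange that avoids storing
    the raw identifier alongside the conversational content."""
    return {
        "student_token": pseudonymize_id(student_id),
        "message": redact_direct_identifiers(message),
    }

print(log_chat_turn("S-12345", "Email me at student@example.com or 555-123-4567"))
```

The design choice illustrated here is data minimization: the AI system receives enough information to personalize instruction, while the directly identifying fields that trigger the heaviest legal obligations are kept out of the shared record.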
Further, representation would be just a start toward a solution. As discussed earlier in this report, AI can introduce algorithmic discrimination through bias in the data, code, or models within AI-enhanced edtech. Engineers develop the pattern detection in AI models using existing data, and the data they use may not be representative or may contain associations that run counter to policy goals. Further, engineers shape the automations that AI implements when it recognizes patterns, and the automations may not meet the needs of each student group within a diverse population. The developers of AI are typically less diverse than the populations they serve, and as a consequence, they may not anticipate the ways in which pattern detection and automation may harm a community, group, or individual.
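As one illustration of how non-representative data can surface as disparate outcomes, the sketch below audits a hypothetical classifier by comparing its false positive rate across student groups. This is a minimal example under assumed inputs (simple lists of labels, predictions, and group tags), not a method described in the report.

```python
from collections import defaultdict

def false_positive_rate_by_group(y_true, y_pred, groups):
    """Compare false positive rates across student groups.
    Large gaps suggest the model's patterns do not serve all groups equally."""
    fp = defaultdict(int)   # negatives incorrectly flagged, per group
    neg = defaultdict(int)  # total negatives, per group
    for truth, pred, group in zip(y_true, y_pred, groups):
        if truth == 0:
            neg[group] += 1
            if pred == 1:
                fp[group] += 1
    return {g: fp[g] / neg[g] for g in neg if neg[g] > 0}

# Hypothetical audit data: 1 = flagged as "needs intervention", 0 = not flagged.
y_true = [0, 0, 1, 0, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 1, 1, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(false_positive_rate_by_group(y_true, y_pred, groups))
# {'A': 0.333..., 'B': 0.666...} — group B is wrongly flagged twice as often
```

An audit like this does not explain why a disparity exists, but it gives educators and developers a concrete, reviewable signal that the pattern detection may be harming a particular group.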