Artificial Intelligence

Professors, educators, tech leaders call for AI to advance learning while upholding values of students and teachers

Merlyn Mind
June 21, 2024

“Who takes ownership of AI, and what do they use it for?” “How do we address racial biases in genAI?” “Who has access to AI’s data, and why?”

These questions and more were posed on Nov. 8 at Merlyn Mind’s responsible AI initiative kick-off.

Over food and drinks, a group of 20 thought leaders in AI, brought together by Merlyn Mind, discussed the principles of responsible AI. The spectrum of perspectives that emerged underscored the need for further consideration of the roles AI companies will play in the future.

Latha Ramanan opens the evening.

Latha Ramanan, Senior VP of Product Growth and Strategic AI Initiatives at Merlyn Mind, recounted for the group the many challenges Merlyn Mind has faced in building a successful edtech product. She discussed the need to work closely with policymakers and school administrations while still pushing technological boundaries.

Professor Bruce McCandliss of the Stanford Graduate School of Education gave the dinner’s opening remarks, issuing a strong call to action for the creation of a National Consortium for Responsible AI in Education. He implored all AI stakeholders to speak openly about their perspectives on the matter.

Professor John Mitchell addresses the group.

Taking up that call, Professor John Mitchell of the Stanford Computer Science Department gave the room an overview of the foundational AI concepts affecting education. He touched on contextual integrity and the requirements he felt must be met before AI can be integrated into educational institutions.

Geoff Cox, Senior Associate Dean for Finance & Administration and Lecturer on AI & Ethics at Stanford, responded by invoking the philosophy of education. He drew the table’s attention to the fact that a child in school today will enter a workforce that is difficult to predict and in which societal power structures may have shifted dramatically.

Sharad Sundararajan speaks to Merlyn's experience.

Sharad Sundararajan, Merlyn Mind’s co-founder and CIO, shared the challenges Merlyn has faced in avoiding and limiting data sharing in educational settings, highlighting the importance of protecting student privacy. He emphasized the growing, industry-wide trust barrier stoked by AI companies whose data policies are misaligned with the needs of schools.

The conversation then widened to include industry figures in attendance, such as Kian Katanforoosh, CEO of Workera, and Julia Cowles, General Counsel of Khan Academy, as well as educators, students, and organizational leaders.

After much freeform discussion, each attendee was given a notecard labeled with a principle of responsible AI and asked to summarize their thoughts on it in writing.

Dinner guests summarize their thoughts.

Merlyn Mind collected these cards and is synthesizing the ideas into our ongoing research on effective responsible AI efforts.

The most commonly selected notecard themes were “fairness,” “purpose-built AI,” and “transparency.”

Erik Burmeister, a former superintendent of the Menlo Park School District, said his primary hesitation was the hasty adoption of generative AI tools for student use, citing concerns about the protection of students’ privacy. Are school districts prepared to face the potential risks associated with these tools?

A sentiment that echoed through the night was that education is more than knowledge imparted from teachers to students; it is a social, emotional, and physical process of learning to be part of a modern community.

Many notecards addressed the challenge of implementing fair and safe AI systems, which requires recognizing and countering data biases to ensure inclusivity and equal representation. Fairness can be subjective, and the challenge lies in creating AI tools that are equitable for all students. All agreed that safety concerns such as data expiration, contextual integrity, and risks to political discourse must be managed with care.

Justin Nunez discusses issues with his tablemates.

Justin Nunez, Head of Strategy at UNCF (United Negro College Fund), spoke passionately about the need to platform underrepresented communities by giving them access to new technologies. He lamented that after decades of policy changes regarding education in underprivileged spaces, many communities felt nothing had changed.

The discussions touched upon the need for purpose-built AI, where the intentions behind AI tools align with end-user activities. Transparency of purpose, especially in data handling, was consistently referenced as crucial for gaining trust among educators and students.

The group agreed that human teachers are key to the art of preparing students to face the future, and that co-designing and co-creating responsible AI solutions with teachers to address the unique needs of education is a necessary path forward for edtech products.

Ultimately, the experts at our dinner table highlighted the need for cautious adoption with an emphasis on student safety, fairness, and transparency.  

As AI continues to evolve, it's crucial for educators, policymakers, and technologists to collaborate in creating an environment where AI tools not only enhance learning but also uphold the values of students, teachers, and our society.  

Attendees collaborate to capture and codify the themes and ideas of the night.

“I believe fairness to be important [to consider] based upon a lived experience using AI, in particular generative art AI. [We] need to place a sense of urgency in diversity of the creation/input of user data to ensure all people are represented when utilizing software technology. All students deserve a positive experience and deserve to see themselves [represented] when using AI in art.” - Markesha Tatum, Streetcode Academy.

Merlyn Mind will continue to platform thinkers, enthusiasts, and stakeholders in the ethical use of artificial intelligence. We believe now is the time to shape a framework by which our society will guide the use of powerful new technologies. If you’re interested in joining the discussion, please sign up for our new Consortium on Responsible AI.

Update 1/18/24: As a follow-up, here is our first responsible AI policy brief and a checklist for schools.
