Leaders often make the mistake of assuming that events in the world are linear – that single causal factors reliably lead to specific desired outcomes. In reality, outcomes emerge from complex systems, and causal factors routinely lead to counterintuitive outcomes. A related mistake is the belief that systems and the actors within them (such as governments or corporations) are static. But, in fact, systems and their various parts are dynamic. They change over time, sometimes slowly and sometimes rapidly. To engage in ethical leadership, one must embrace systems thinking and a systems approach to creating change.
One example from our work at the Neely Center involves content moderation on social media platforms. It can seem intuitive that moderating content (eliminating distasteful, harmful posts and comments) is the best and only way to create a healthy platform. However, while some degree of content moderation is warranted, it is an ineffective way to guarantee healthy interactions, as our managing director, Ravi Iyer, has argued. Instead, acknowledging that platforms are complex systems, we seek design choices that increase positive behaviors and reduce negative behaviors at scale. This works better than attempting two impossible tasks: making the “right” decisions about what constitutes unacceptable content, and then catching and removing all of it fairly and consistently. We make nine recommendations in our social media design code and track users’ experiences across major platforms. So far our design code has been used by several policymakers and companies seeking to improve societal health and well-being.
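To make the contrast concrete, here is a toy sketch (illustrative only, not the Neely Center’s design code; the rules, names, and placeholder terms are hypothetical) of the difference between judging each piece of content and applying a content-neutral design rule:

```python
# Toy illustration: per-item moderation vs. a design-level rule.
# All names and rules here are hypothetical, not Neely Center code.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    reshare_depth: int  # how many times this post has been passed along

def moderation_approach(post: Post, banned_terms: set[str]) -> bool:
    """Per-item moderation: allow the post unless a rule flags it.
    This requires correctly judging every piece of content."""
    return not any(term in post.text.lower() for term in banned_terms)

def design_approach(post: Post, max_reshare_depth: int = 2) -> bool:
    """Design lever: a content-neutral rule that limits viral
    amplification for all posts, regardless of what they say."""
    return post.reshare_depth <= max_reshare_depth

posts = [
    Post("breaking: outrageous rumor!!", reshare_depth=7),
    Post("photos from the neighborhood cleanup", reshare_depth=1),
]

banned = {"slur1", "slur2"}  # placeholder list; real lists are never complete
for p in posts:
    print(p.text[:30],
          "| moderation allows:", moderation_approach(p, banned),
          "| design allows:", design_approach(p))
```

Note that the rumor sails past the moderation check (it contains no banned term) but is stopped by the design rule, which never has to decide what counts as “unacceptable”; it simply limits how far any post can cascade. That is the kind of scale-level lever the design code favors.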
In the same way that recommender AI produced new types of platforms, generative AI is leading to new products, such as AI chatbot assistants, AI companions, and AI agents. We are currently working with our partners on a design code for social AI (let us know if you’re interested in collaborating) and will share more about that in coming months. In the meantime, I’d like to highlight another concept from systems thinking. As the late Donella Meadows suggested, perhaps the greatest point of leverage in a system is to influence the mindset or paradigm out of which the system (and its goals, rules, structures, parameters, etc.) emerges.
In the case of Silicon Valley’s tech ecosystem, the current mindset is that of profit maximization. In spite of the flowery language founders often use when talking about the future benefits of AI, the real conversation around AI centers on whether the companies creating it are profitable, whether their elevated valuations are warranted, whether the businesses adopting AI tools are becoming more profitable as a result, and whether individual users are becoming more productive (and ostensibly profit-producing) employees.
While money is crucial and should not be ignored, putting profit front and center shapes the values and policies that govern how these technologies are used. At the Neely Center, we argue that the tech ecosystem should place purpose, not profit maximization, at the forefront of what it seeks to accomplish. Indeed, we’re at a singular and exciting moment in history, and creating powerful tools merely to sell more ads or replace the work people find meaningful is missing a major opportunity. Orienting AI companies and the broader tech ecosystem around the goal of improving education, eliminating disease, or preventing wars (to list a few possibilities) would lead to different priorities, decisions, and conversations. And merely claiming to be purpose-driven is very different from actually aligning decisions, products, and metrics around those outcomes. I recently gave a talk at Stanford’s Cyber Policy Center on this very topic. If you’re interested, please check it out, and let me know what you think.
In the meantime, read on to check out some of what we’ve been up to!
Sincerely,
Nate Fast, Director
Platform Design vs. Moderation
On Friday, March 14, 2025, our managing director, Ravi Iyer, joined a conversation on the Lawfare Podcast titled "A World Without Caesars." Alongside Glen Weyl from Microsoft Research, Jacob Mchangama from the Future of Free Speech Project at Vanderbilt, and Renee DiResta from Georgetown University, Ravi discussed the importance of platform design vs. moderation in shaping our online experiences. The panel explored how social media design influences user behavior, content amplification, and information ecosystems, and discussed the promise of decentralized platforms in empowering users and fostering prosocial interactions online. You can listen to the podcast here.
Social Media and Politics from 2023-2025
Matt Motyl, our Senior Advisor, recently shared some insights from the Neely Tech & Ethics Indices. This latest analysis dives into a two-year longitudinal survey tracking political identity, extremism, and polarization among US adults from 2023 to 2025. The Neely Indices data provide a fascinating view of political shifts across major platforms. A couple of key takeaways:
Extremism held steady at 24-29%, though conservatives were more likely than liberals to hold extreme views.
Platforms like Discord and Reddit skew liberal, while X (Twitter) has shifted to be more conservative over time.
Read the full analysis here.
Neely Center Shaping AI’s Future with Democratic Tools
On Wednesday, March 19, 2025, the USC Marshall Alumni Association - LA Chapter hosted a virtual event titled "Democracy and AI with the Neely Center" that explored how the Center is applying democratic tools and practices—such as the Neely Indices and deliberative polling—to shape the future of AI system design and policy. Our director, Nate Fast, discussed the ways emerging AI technologies can enhance democracy itself, improving civic engagement, combating misinformation, and fostering more productive public discussions. This event drew tech leaders, policymakers, students, and those passionate about aligning technology with democratic values.
Neely Center Supports the Protection of Digital Identities
One of the most pressing issues with AI today is the unauthorized use of individuals' identities, including within AI systems. The debate is not new: many jurisdictions have long recognized people's right to control their privacy and public image. The Neely Center has been researching the application of the "Right of Publicity" to modern AI technologies. In partnership with the Tech Justice Law Project, we advocate for a dignitary right for individuals over the use of their likeness. Legislators in Georgia have introduced Senate Bill 354 to protect digital identities, following the appropriation of a murdered teen’s identity by a chatbot. Minnesota is also considering similar legislation. The Neely Center is working with stakeholders to define the rights individuals should have over their likeness. For more information on Georgia Senate Bill 354, click here.
Safety Measures for Kids on Social Media May Become Law in Washington State
Our managing director, Ravi Iyer, was recently featured on Seattle's NPR station KUOW for their "Booming" podcast, discussing Washington state’s Senate Bill 5708, which aims to address the mental health impacts of social media on minors. In this interview, Ravi explores how minimum standards can benefit both users and the industry, and how we can measurably maintain the benefits of social media while mitigating its harms for children.
As a former Meta data scientist and contributor to The Anxious Generation, Ravi shared: “Seventy percent of kids feel manipulated by these products... A lot of kids will cite things like TikTok as stopping them from sleeping. Half of kids will say that they have a problem with too much internet use. A lot of them will also say that it affects their studies and their sleep. So we're not just doing this because we think kids shouldn't be using these products. Kids themselves will often say they think they're using these products too much.” You can listen to the discussion here and here.
Neely Ethics & Technology Fellows Program
USC Marshall’s Neely Center for Ethical Leadership and Decision Making is thrilled to announce the second round of the Neely Ethics & Technology Fellows Program. Through this program, we aim to support visionary graduate students poised to become the next generation of technology leaders. Each cohort will explore and guide the development of a new area of transformative technology. The 2024-25 cohort is focusing on extended reality (XR), with implications for entertainment, gaming, collaboration, education, healthcare, and more. The program officially kicked off on Friday, January 31, 2025. We look forward to this exciting journey with the selected students!
A Legal Victory for Algorithmic Regulation
The US District Court for the Northern District of California recently upheld California's law regulating engagement-based algorithms that target youth—a ruling aligned with research and arguments advanced by the Neely Center. Our insights were cited in amicus briefs, highlighting how algorithmic design can be effectively regulated while still respecting free speech protections. Read more about the case and its implications in our Substack post here.
Tech Policy Press Podcast Features the Neely Center, Google Jigsaw, and Taiwan's Cyber Ambassador
On March 6, 2025, Ravi Iyer, managing director, was featured in the Sunday Show Podcast with Tech Policy Press, discussing the Neely Center’s collaborative research with Google Jigsaw and Taiwan's Cyber Ambassador on opportunities and risks specific to AI and digital public squares. In the conversation, Ravi emphasized the importance of focusing on the functional aspects of algorithms—such as their design and operational mechanics—rather than solely on content moderation. Other guests on the podcast included Audrey Tang, Taiwan’s Cyber Ambassador and former Digital Minister, and Beth Goldberg, Head of Research and Development at Google Jigsaw and Lecturer at Yale School of Public Policy.
Corporations and Democracy in the Age of AI: A Fireside Chat with Michael Posner
On Thursday, April 10, 2025, the Neely Center welcomed renowned human rights advocate and former Assistant Secretary of State for the Bureau of Democracy, Human Rights and Labor, Michael Posner, for a fireside chat moderated by our director, Nate Fast. Attendees gained valuable insights from Posner’s extensive expertise and his recent book, exploring critical questions such as whether big tech firms should face legal accountability for disinformation, and discussing strategies policymakers and business leaders can adopt to promote ethical, sustainable practices. The event drew a strong turnout and sparked thoughtful conversations. As one student participant noted: “My conversation with Professor Posner was not only engaging but also prompted deep reflection on what it truly means to champion human rights while pursuing pro-business strategies.”
An Investigation into Groups Targeting Children on Facebook
Our managing director, Ravi Iyer, was featured in a recent Tech Policy Press article exploring the critical issue of child safety on platforms like Facebook. The article is part of a collaborative effort with the Latin American Center for Investigative Journalism (El CLIP), documenting how predators use public Facebook groups to identify and target children for sexual exploitation. Ravi underscored the need for design-based solutions to prevent predatory behavior online, stating:
“While most people online are not predators, the design of these systems allows a small group of criminals to target others en masse. Attempts to develop new policies and procedures, without addressing these root design issues, will inevitably fail—no matter how many resources Meta throws at enforcing against the problem… Clearly, online spaces need to be designed differently for children, which is what current child safety legislation is attempting to mandate.”
Neely Center on NPR Marketplace
As part of a series on the effects of social media on youth, NPR's Marketplace interviewed Ravi Iyer, the Neely Center’s managing director, about the critical issue of kids' online safety. The conversation centered on the delicate balance parents seek—helping kids stay socially connected while mitigating the risks of excessive social media exposure. Ravi noted: “We should be able to tell our phones, ‘This is my broad preference’ — not the individual feature level for every single app, but just broadly — ‘I don’t want to be contacted by strangers. I don’t want to be encouraged to spend more time on my phone. I want my data and my images to be more secure.’ And then, all the apps should respect those preferences.” He advocated for making it easier for parents to set default settings on their kids' social media accounts. The issue is nuanced, and it’s encouraging to see the perspectives of both parents and youth shaping the conversation on how we balance connectivity and safety.
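As a rough sketch of the idea Ravi describes, a single broad preference set that every app honors, consider the following hypothetical schema (the field names and checks are our own invention, not an existing standard or API):

```python
# Minimal sketch of device-level preferences that apps consult before
# enabling a feature. Hypothetical schema, not an existing standard.

from dataclasses import dataclass

@dataclass(frozen=True)
class DevicePreferences:
    allow_stranger_contact: bool = False  # "I don't want to be contacted by strangers"
    allow_usage_nudges: bool = False      # "I don't want to be encouraged to spend more time"
    restrict_data_sharing: bool = True    # "I want my data and images to be more secure"

def app_respects(prefs: DevicePreferences, feature: str) -> bool:
    """Each app checks the shared preferences before enabling a feature,
    instead of burying an equivalent toggle in its own settings."""
    rules = {
        "stranger_dm": prefs.allow_stranger_contact,
        "streak_reminder": prefs.allow_usage_nudges,
        "ad_data_sharing": not prefs.restrict_data_sharing,
    }
    return rules.get(feature, True)  # unknown features default to allowed

prefs = DevicePreferences()  # a parent sets these once for the whole device
print(app_respects(prefs, "stranger_dm"))      # False
print(app_respects(prefs, "streak_reminder"))  # False
```

In practice such preferences would live at the operating-system level and be enforced by platform policy, but the point is the inversion of defaults: a parent states an intent once and apps adapt, rather than families hunting through each app's settings.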
Neely Center Supports Consumer Choice in Utah
Ravi Iyer, our managing director, recently testified and provided insights regarding Utah's Digital Choice Act, which ensures that consumers can move their data across platforms and use third-party tools to manage their digital lives. The Neely Center is considering adding interoperability to its core social media design recommendations, which have been adopted by numerous jurisdictions.
Neely Center’s Research Informs Legislative Action
On February 4, 2025, the Office of the Minnesota Attorney General released its 2025 Report on Emerging Technology and Its Effects on Youth Well-Being, which outlines the harmful impact of AI and social media on young people. This report builds on Attorney General Ellison’s 2024 report, expanding its focus to address the rapidly evolving landscape of AI-powered tools and their growing role in shaping youth experiences online. Minnesota is likely to introduce new legislation based on this report, as it did in 2024, including laws targeting algorithm design and an AI safety bill that will give all users the right to protect their image from being used without consent online. A key aspect of this report is its design-focused legislative recommendations, which were informed by research from the Neely Center. The Center’s expertise in platform design and digital safety helped shape crucial policy proposals aimed at mitigating the most harmful effects of social media. The AI Safety Bill is expected to be introduced by Minnesota Senator Maye-Quade in the upcoming legislative session. With other states looking to follow Minnesota’s lead, the Neely Center continues to advocate for actionable, design-based solutions to create healthier digital spaces for youth.
Neely Design Code on the Techdirt Podcast
Episode 375 of the Techdirt podcast hosted Ravi Iyer, our managing director, to discuss the Neely Center Design Code for Social Media. The podcast reaches a broad audience, particularly among professionals involved in technology policy. In the episode, Ravi discussed the specifics of the Design Code and how it advances current efforts to improve the impact of social media on youth.
SB 976 Upheld: Protecting Kids from Addictive Feeds
In a landmark decision, Judge Edward J. Davila upheld most of California’s SB 976—the Protecting Our Kids from Social Media Addiction Act—marking a major win for youth mental health. The law bans “addictive feeds” for minors without verifiable parental consent, targeting the engagement-optimizing function of algorithms rather than their expressive features. This aligns with the court’s careful distinction between function and free speech in algorithmic design. The Neely Center’s research, featured in our report Feed Algorithms Contain Both Expressive and Functional Components, was cited in amicus briefs for the case, shaping efforts to tackle algorithm-driven harms. Key insights include:
Algorithmic feeds often favor engagement and ad revenue over minors’ well-being.
SB 976 pushes platforms to put user welfare ahead of profit.
“This ruling zeroes in on the functional, non-expressive parts of algorithms,” said Ravi Iyer. “It curbs harmful business practices while respecting free speech.”
What Do Parents Know About Generative AI in Schools?
In collaboration with the Center on Reinventing Public Education (CRPE), our director, Nate Fast, conducted a survey of over 1,800 American households during the summer of 2024. This probability-based sample was part of the Understanding America Study, focusing on parental awareness of generative AI (GenAI) usage in schools. From the survey results, four key findings emerged about parents' current awareness and perceptions:
Schools and teachers are not effectively communicating with parents about GenAI.
For the most part, parents either do not know whether their children are using GenAI or assume they are not.
Parents hold mixed and cautious attitudes about the role of GenAI in education.
Educational attainment influences parental attitudes toward GenAI, with more educated parents generally being more supportive.
Advocating for Phone-Free Schools
The USC Neely Center is proud to support the Becca Schmill Foundation and The Anxious Generation in their nationwide initiative to reduce cell phone distractions in schools. Over 20 states have introduced legislation addressing this critical issue, with several jurisdictions, including Massachusetts, Vermont, and the District of Columbia, utilizing language from a collaboratively developed model bill. The Neely Center has contributed by identifying and providing relevant research cited in legislative efforts. Additionally, in collaboration with Stanford University and the Anxious Generation team, the Center is developing an evaluation toolkit designed for use by school districts and researchers to assess the impact and effectiveness of cell phone policies. You can learn more about our efforts here.
Measurably Improving Online Experiences by Design
Ravi Iyer, managing director, delivered a seminar at Stanford’s Cyber Policy Center on January 7, 2025, titled "Measurably Improving Online Experiences by Design." Moderated by Nate Persily, the seminar explored the harms stemming from online platforms, categorized into harmful contact, harmful experiences, and excessive usage. Ravi highlighted how design choices, as outlined in the Neely Center’s Design Code for Social Media, can effectively mitigate these harms without relying solely on content moderation. The talk also introduced the Neely Indices, which measure user experience on platforms to inform debates on societal impacts. Ravi discussed how these tools are shaping both platform design and regulatory frameworks worldwide, paving the way for a more ethical and socially responsible digital environment.
Promoting Safer Platform Design Through DSA Codes of Conduct
On Thursday, February 6, 2025, Ravi Iyer, our managing director, presented at a webinar organized by the DSA Decoded project at Sciences Po Paris and the Weizenbaum Institute for the Networked Society in Berlin. The webinar explored how DSA codes of conduct can promote safer and more ethical platform design practices. Ravi joined a panel of experts to discuss how platform design can address critical challenges such as hate speech, online harassment, disinformation, and child safety—issues that go beyond content moderation. Fellow panelists included Sander van der Waal, Research Director at Waag Futurelab; Stevi Kitsou, PhD Researcher in EU Law at Maastricht University; and Tommaso Chiamparino, Policy Officer for Fundamental Rights at the European Commission. The webinar was open to the public.
Neely Center Joins CITED Advisory Board
As part of our efforts to educate those involved in policymaking, our managing director, Ravi Iyer, joined the Technology and Scholars Advisory Council of the California Initiative for Technology and Democracy (CITED). CITED works on state-level solutions to address threats to democracy and elections posed by disinformation, AI, deepfakes, and other emerging technologies. Their initiatives include developing voter education recommendations and providing unbiased analysis for the press and civil society.
Collaboration with Google Jigsaw on AI and the Future of Public Squares
Building on the Neely Center’s work in algorithms, bridging systems, and design opportunities, we coauthored a paper titled AI and the Future of Public Squares as part of a project led by Google Jigsaw. This paper will be used to promote the use of LLM-based technologies to create public squares that foster informative, connecting conversations. Read the full paper here.
Neely Center Co-authors "Better Feeds": A Report on Aspirational Algorithms
A key design principle of the Neely Center is to advocate for online algorithms that prioritize consumer aspirations over business needs. Building on our earlier paper, Non-engagement Signals, created in collaboration with over eight online platforms, we are proud to coauthor a newly released report, Better Feeds: Algorithms That Put People First, in partnership with the Knight-Georgetown Institute. As stakeholders worldwide explore how to create algorithmic products that align with user aspirations rather than business interests, this report offers a comprehensive roadmap. It is a valuable resource for both industry product developers and societal stakeholders committed to building products that prioritize individual well-being.
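As a hedged illustration of the report’s central idea (the weights, signal names, and blending formula below are our own, not the report’s specification), a ranker can mix predicted engagement with explicit, non-engagement signals of what users say is worth their time:

```python
# Illustrative sketch: blend engagement predictions with user-stated value.
# Weights and signal names are hypothetical, not the report's specification.

from dataclasses import dataclass

@dataclass
class FeedItem:
    title: str
    predicted_engagement: float  # e.g., click/dwell prediction, 0..1
    stated_value: float          # e.g., survey-based "worth my time" score, 0..1

def aspirational_score(item: FeedItem, value_weight: float = 0.7) -> float:
    """Blend engagement with an explicit non-engagement signal.
    At value_weight=0 this reduces to a pure engagement ranker."""
    return ((1 - value_weight) * item.predicted_engagement
            + value_weight * item.stated_value)

items = [
    FeedItem("rage-bait thread", predicted_engagement=0.9, stated_value=0.2),
    FeedItem("local news explainer", predicted_engagement=0.4, stated_value=0.8),
]
for item in sorted(items, key=aspirational_score, reverse=True):
    print(round(aspirational_score(item), 2), item.title)
```

Setting value_weight to zero recovers a pure engagement ranker, which makes the design choice explicit and auditable rather than buried in an optimization target.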
New Publication on Social Media Harm Abatement
In a recently accepted paper in the Annals of the New York Academy of Sciences, the Neely Center advocates for specific mechanisms designed to operate at the intersection of legal procedures, transparent public health assessment standards, and the practical requirements of modern technology products. The article is currently being circulated among regulators, public health authorities, litigators, and companies. Read the full paper here.
Neely Center Advocates for Designing AI Chatbots for Human Flourishing
In collaboration with the Harvard Human Flourishing Program, the Neely Center co-authored a forthcoming paper in the Global Solutions Initiative's Journal that outlines a roadmap for how standards for chatbot design can be adopted, iterated upon, and measured. The Neely Center is actively working with civil society stakeholders, technology companies, policymakers, and academics to understand the impact of social chatbots on youth and ensure they are designed to support human flourishing. This effort will culminate in a proposed design code for social chatbots.
Meta’s Algorithm Design Changes: Why They Matter More Than Moderation
In a recent Tech Policy Press article, Neely Center Managing Director Ravi Iyer explored Meta’s latest announcements and their potential impact on online discourse. While much of the attention was centered on shifts from moderation and fact-checking to community notes (similar to X, formerly Twitter), Ravi highlighted algorithm design as the hidden driver that profoundly shapes how content is consumed and amplified. In the piece, Ravi raised a critical question: Will these new changes undo the progress made in removing engagement optimization for content known to be harmful to youth? Engagement-driven algorithms may boost metrics, but they often cause real-world harm—especially to young people. Thus, focusing on product design is essential to keep our youth safe.
The Promise and Pitfalls of Generative AI
In a recent article in Nature Reviews Psychology, our director, Nate Fast, was one of six researchers from cognitive science, clinical psychology, social psychology, language science, and public health to share their perspectives on current and future uses of generative artificial intelligence, including its impacts on research and society more broadly. Read the paper here.
Evaluating DSA Risk Assessments on Platform Design Risk
In a recent Tech Policy Press article, Peter Chapman of the Knight-Georgetown Institute examined the reports technology platforms have produced on the systemic risks of their products. Referencing the Platform Design Taxonomy developed by the Neely Center and the Tech Justice Law Project, the piece concludes that the DSA’s risk assessments and audits provide a foundation for advancing platform transparency and accountability, but much work remains to ensure these assessments fulfill their potential. In particular, risk assessments and audits must address two critical gaps highlighted at the outset of the paper: 1) meaningfully assessing platform design; and 2) improving the data, metrics, and methods used to evaluate risks.
USC Marshall Alumni LA: Creating Impact with New Immersive Technologies
On Wednesday, April 23, 2025, the LA Chapter of the USC Marshall Alumni Association will host a virtual event titled "Creating Impact with New Immersive Technologies," exploring the exciting business potential of this rapidly evolving field. Neely Center Director Nate Fast will lead the conversation, and our Neely Ethics & Tech Fellows will showcase their impact-driven projects, highlighting mixed reality’s transformative potential across industries such as sports, entertainment, healthcare, and education. This event is ideal for industry leaders, policymakers, students, and anyone passionate about the intersection of technology and business. Registration is now open—secure your spot today!
Meaningful XR 2025
The Neely Center is co-sponsoring this year's Meaningful XR 2025 Conference, scheduled for May 22-23, 2025, at UC Davis. Meaningful XR explores cutting-edge research and innovative applications of VR, AR, MR, and XR technologies across critical areas such as education, healthcare, the environment, and industry. It’s a fantastic opportunity to present your insights, engage with experts in the field, and drive impactful innovation in immersive technologies. The conference will be held in person with an option for virtual participation. Registration is now open; you can reserve your spot here.
Thanks for reading. As always, if you have any thoughts, suggestions, or questions about our work, don’t hesitate to reach out!