If you’re like any of the leaders we’ve been working with recently, the speed of new tech development is probably making your work simultaneously more exciting and more challenging. The acceleration of tech has forced decision makers across all industries to grapple with new ethical questions. Evidence of this challenge is everywhere – people are working overtime to ensure that the “ultimate election year” is protected from tech misuse (see our own election guide here); courts are being asked to rule on issues such as the use of copyrighted material in the training of AI and the creation of addictive tech used by children; and some have even suggested that our legal system itself may be vulnerable to disruption by generative AI. Meanwhile, questions about AI policy – including whether and when AI models should be open source – continue to multiply.
The “Acceleration-Adaptation Gap”
In recent talks at CES in Las Vegas and at our “Responsible AI for Business” event on 1/29 at USC (shown above), I argued that we need to tackle the acceleration-adaptation gap: the development of new tech is advancing faster than society’s capacity to use and govern it wisely. One way to address this gap is to slow down the tech. While this perspective has some merit (which we’ll consider in future newsletters), it also has downsides: it is likely not possible to slow tech development around the world, doing so might hinder much-needed innovation, and it could delay the key safety lessons we learn from rapid iteration. While we believe in holding tech companies accountable for the products they create (which serves as a disincentive to deploy powerful technologies before they’re ready), we’re much more keen to focus on the other side of the equation: speeding up the development of society’s capacity to adapt to new tech advancements. Toward that end, we were very busy in 2023 and will be just as busy in 2024, building tools, networks, and spaces that help leaders make informed, ethical decisions in the age of AI. Here is a bit of where we’ve focused and where we’re headed in our work to improve humanity’s experience with technology across our three main priorities: social media, AI, and mixed reality.
Social Media
In 2023, the Neely Center was at the forefront of efforts to design better technology systems. Led by our Managing Director, Ravi Iyer (shown below presenting at the Knight Foundation’s “Informed” conference), we launched the Neely Social Media Index to inform the public and decision makers about how social media platforms are affecting mental and societal health. We helped organize workshops and argued for the importance of designing products to improve human well-being at conferences (e.g., at Yale, Michigan, Columbia, in SF, at Informed, and in Kenya), in longer-form publications, and in the press (e.g., Lawfare, WSJ, NYTimes, Time, Tech Policy Press), and we served as a founding sponsor of the Prosocial Design Network. Following these public events, we have begun collaborations on understanding and implementing specific design changes with technologists at large and small companies, with stakeholders focused on youth and on societal cohesion, and with policy makers – including invited briefings with federal officials, with the New York Health Commissioner, with the state of Minnesota (e.g., see the report released today, which built on Ravi Iyer’s expertise and work for the Neely Center), and with the UK government. In collaboration with dozens of academics, technologists, and stakeholders, we launched the Neely Center Design Code for Social Media to broaden our impact beyond our immediate influence.
In 2024, we plan to build on these efforts to ensure that the changes we have advocated for are adopted by technology companies, especially in the context of global elections, and that society is well equipped to hold companies accountable for their impact, leveraging our design code and social media index. We also plan to make a meaningful and measurable contribution to addressing society’s loneliness crisis and to ensure that the future design of technology systems is done purposefully and democratically, in service of global human well-being.
Artificial Intelligence
In 2023, the Neely Center built intellectual, social, and data-collection infrastructure to positively influence how AI is designed, used, and governed. We launched and reported on our Neely-UAS AI Index, which began tracking the adoption of generative AI and other AI systems across the U.S.; we will continue tracking usage, benefits, and harms as these tools spread throughout society. We organized our annual conference around the theme of the psychology of AI Value Alignment and shared research insights in presentations, articles, and workshops at Wharton, Stanford, Informed, CES, the Federal Reserve Bank of New York, and in Kenya (shown below), as well as through NPR, Fast Company, and various podcasts. Most recently, Nate Fast shared his views on the American Psychological Association’s popular Speaking of Psychology podcast and co-organized a mini-conference on “Responsible AI in Business: Ethical Challenges and Opportunities” at USC.
In 2024, we plan to build on these efforts with a focus on making AI globally representative, accessible, and safe. We aim to ensure that input from people around the world is integrated into the design, use, and governance of AI systems. Toward this end, we plan to expand our policy and measurement efforts globally and to extend the success we have had informing debates about social media’s AI-powered algorithms into wider debates about AI’s future. We will incorporate students and faculty into these efforts and integrate key lessons into AI ethics modules for classrooms across USC Marshall and beyond.
Mixed Reality
While the bulk of our work in 2023 went into social media and AI, we recognize that there is a small window of opportunity to influence the unfolding of powerful mixed reality technologies. We hosted an invite-only “Anticipating the Metaverse” conference in March 2023 at USC with 40 leading AR/VR researchers and technologists. This event led to the conclusion that we should advance purpose-centered experiences (i.e., AR/VR applications and experiences that serve a specific and identifiable goal, such as improving physical health, enabling distance-based training, or increasing psychological well-being) rather than a generic virtual “metaverse” container that people feel compelled to spend all their time in. So, instead of incentivizing companies to “maximize engagement” in virtual worlds, we aim to maximize the value derived from AR/VR experiences that couldn’t be accessed in the physical world. To support this initiative, we have launched the Neely Mixed Reality Index, where we will track user experiences just as we do with our social media and AI indices.
In 2024, we will continue to write about and work on purpose-centered technology, especially in the space of mixed reality. To support this work, we helped organize USC Connected Futures, a mixed reality coalition of faculty and affiliated technologists advancing research and practice in AR/VR. Be on the lookout for exciting announcements about this group. And finally, we are very pleased to announce the launch of our Ethics and Technology Fellows program at USC Marshall. Five talented MBA students were selected as our inaugural fellows and will help us build a purpose-centered design library for AR/VR experiences as well as launch several innovative projects. We’ll introduce you to them very soon!
Read on to learn more about some of our recent events and initiatives aimed at advancing the knowledge and communities essential for ethical decision making in today’s world. And, as always, don’t hesitate to reach out with suggestions and offers to help support this meaningful work.
Sincerely,
Nate Fast, Executive Director
2024 Consumer Technology Association’s CES Conference
The 2024 CES conference featured the Neely Center's Director, Nathanael Fast, and Managing Director, Ravi Iyer, as speakers (see below). Their sessions offered insights into the ethical implications of technology, covering both well-established and emerging areas, and gave attendees an opportunity to explore the rapidly evolving tech landscape and the critical role of leadership in navigating its complexities. Learn more in this article by the American Psychological Association quoting Fast and Iyer.
Using Tech for Global Peace (conference in Africa)
The USC Marshall Neely Center for Ethical Leadership and Decision Making was pleased to sponsor the Build Peace 2023 conference, which took place in Nairobi, Kenya, December 1-3, 2023. On Day 1 of the conference, the Neely Center hosted a workshop on “Risks and Benefits of AI for a Global Community,” and on Day 2 we participated in sessions focused on improving the design choices of social media platforms, with implications for polarization, mental health, and democracy. The event served as an interdisciplinary forum for addressing critical topics and transformative practices in peace, conflict, and innovation.
HBR Article: Is GenAI’s Impact on Productivity Overblown?
With co-author Ben Waber, Nate Fast wrote an article for Harvard Business Review that is making waves. They challenge prevailing narratives about AI-driven productivity, arguing that evidence of short-term gains at the individual task level may not always translate into long-term, firm-level productivity gains. As they say in the article, “Making grandiose claims about LLMs may help people sell software or books in the short term, but in the long term, unthinking organization-wide application of these models could very well lead to significant losses in productivity.”
Read the full article here.
New Partnership: Council on Tech and Social Cohesion
The Neely Center is proud to co-chair the Council on Tech and Social Cohesion, alongside the Toda Peace Institute and Search for Common Ground. The Council is committed to shaping technological systems that foster trust and collaboration, not polarization and violence. Join us as we lead the charge in creating a more unified, ethical digital future.
Could A Design Code Help Social Media Serve Society Better?
Ravi Iyer, the Managing Director of the Neely Center, was a featured guest on the 375th episode of the Techdirt Podcast. In the segment, Ravi talked about the Design Code for Social Media developed by the Neely Center which proposes specific steps we can take to design social media systems that safeguard society more effectively. A lively debate ensued!
Speaking of Psychology: How to Use AI Ethically (Episode 271)
The APA's "Speaking of Psychology" podcast recently featured Nate Fast, Director of the Neely Center, who discussed AI ethics, including the need to make AI more democratic. “One of the big issues that I see with artificial intelligence is that we’re building these powerful AI systems that are shaping the world, and they’re influencing the future for everyone who lives in the world, but also everyone who will live in the future. But they’re being created by a tiny minority of the world’s population. So a very small number of people in a room building these AI systems. And one of the problems that emerges with that kind of relationship is that sometimes the benefits and the harms of the AI systems that we’re building are unevenly distributed.”
Listen to the full podcast here.
Blueprint for Action
We are excited to share the "Blueprint for Action" by the Convergence Collaborative on Digital Discourse, featuring contributions from our own Ravi Iyer. The digital environment can be fertile ground for disinformation and misinformation, psychological and behavioral manipulation, polarization, radicalization, surveillance, and addiction. As we kick off 2024, a pivotal election year for many countries around the world, this report offers timely strategies for enhancing digital discourse and strengthening democracy, including resources such as the Design Code for Social Media proposed by the Neely Center.
Human Favoritism, Not AI Aversion
The Neely Center is pleased to announce the publication of an insightful paper by our postdoc Yunhao (Jerry) Zhang with Cambridge University Press & Assessment. The paper examines consumer perception of (and bias toward) persuasive content generated by generative AI (GPT-4) versus content written by professional consultants. Findings show that (1) people perceive content generated by GPT-4 as higher quality than content generated by human experts, and (2) people display human favoritism (but no AI aversion) when informed of who created the content.
AI+HEALTH 2023
The Neely Center’s Managing Director, Ravi Iyer, presented a talk on how AI-powered social media systems are affecting mental and physical health at Stanford University’s AI+Health Conference, held December 6-7, 2023. The audience, which included medical practitioners from a wide range of institutions, was eager to understand how AI is likely to impact their professional work. In his talk, Ravi discussed how systems could be designed to improve mental health and reduce health misinformation.
Beyond Moderation 2023
Stanford University’s McCoy Family Center for Ethics in Society hosted Beyond Moderation 2023 on November 19, 2023, a conference that brought together academics and organizations interested in exploring how society could move beyond content moderation to improve technology’s impact. At the event, Ravi Iyer introduced the Neely Center’s Design Code for Social Media, arguing that society has a meaningful role to play in designing better technological systems.
The Unbearably High Cost of Cutting Trust & Safety Corners
"Laying experts off may have saved these companies money in the short term, but at what cost, and will these cuts come back to haunt them?" Recently, Matt Motyl, Ph.D., the Senior Advisor to the Neely Center and Resident Fellow at the Integrity Institute, coauthored an insightful article published in the Tech Policy Press that discusses the dire consequences of compromising on trust and safety measures in tech which serves as a stark reminder of the ethical responsibilities that come with technological advancement and the importance of integrating trust and safety in the design process. Notably, the paper utilizes data analysis developed by the Neely Social Media Index.
A Design Code for Big Tech
Tech Policy Press's Sunday Show podcast, widely followed among those working on technology policy, hosted Ravi Iyer, the Neely Center's Managing Director, to discuss the Center's recently released Design Code for Social Media. In the episode, Ravi covered the specifics of the Design Code and how it advances current efforts to improve social media's impact on youth.
Why Online Moderation Often Fails During a Conflict
In this Time magazine op-ed, Ravi Iyer, the Managing Director of the Neely Center, highlights how challenges in moderating content will always be present during conflict, including the recent conflict between Israel and Hamas. Leveraging the Neely Center's Design Code for Social Media, he makes the case for improving the design of online platforms as a timely alternative to attempting to adjudicate what people should and should not be able to say online.
Neely Ethics & Technology Fellowship Program
USC Marshall’s Neely Center for Ethical Leadership and Decision Making is thrilled to announce its new Ethics & Technology Fellows program. Through this fellowship, we aim to support visionary MBA students poised to become the next generation of technology leaders. Each cohort will explore and guide the development of a new area of transformative technology. The 2023-24 cohort will focus on mixed reality (AR/VR), with implications for entertainment, gaming, collaboration, education, and healthcare. The inaugural cohort of fellows has now been selected (see the letter above), and we look forward to embarking on this exciting journey with them!