In the last newsletter, I discussed the Neely Center's core project areas, highlighting how they aim to speed the development of society's capacity to adapt to new technology. Today, I'd like to talk about one of the core ways we do this: by making the design, use, and governance of AI more democratic, a topic I've returned to frequently. Our view is that the AI-powered technologies shaping the future of humanity ought to be informed by the many, not just the few. Toward that end, we're building tools and developing social infrastructure that foster healthy dialogue among the general public, tech companies, and policymakers.
One such tool for collective input is the Neely Ethics & Technology Indices. By tracking and reporting on people's uses and experiences, both positive and negative, we are speeding up the process by which companies and governments can make widely informed design decisions. One reason we focus on designing for better user experiences is that, in working across U.S. and global contexts, we have seen how much positive and negative experiences vary across users. By helping users indicate what they explicitly value from platforms, and by designing systems that are robust to manipulation by small groups of hyper-active users, we can enable the majority of global citizens to tailor experiences for themselves.
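To make that last point concrete, here is a minimal sketch in Python of one way an experience index can be made robust to hyper-active reporters. This is a hypothetical illustration, not the Indices' actual methodology: each user's reports are averaged before users are averaged together, so no single account can flood the signal.

```python
from collections import defaultdict
from statistics import mean

def platform_experience_score(reports):
    """reports: iterable of (user_id, rating) pairs, with rating from
    -1 (very negative experience) to 1 (very positive).
    Averaging within each user first means 1,000 reports from one
    hyper-active user weigh the same as one report from one user."""
    by_user = defaultdict(list)
    for user_id, rating in reports:
        by_user[user_id].append(rating)
    # Average each user's ratings, then average across users.
    return mean(mean(ratings) for ratings in by_user.values())

# A flood of negative reports from two hyper-active accounts barely moves
# the score when hundreds of other users each report once.
reports = [(f"user{i}", 0.5) for i in range(200)]  # broad, mildly positive
reports += [("troll_a", -1.0)] * 500 + [("troll_b", -1.0)] * 500
print(platform_experience_score(reports))  # ~0.49, versus ~-0.75 for a naive average
```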
In the social media space, our Design Code for Social Media has helped make these objectives concrete. In the first few months of 2024, our design code has informed a report from the Minnesota Attorney General and newly introduced state legislation, shaped recommendations put forth by thought leaders, and entered policy discussions in Kenya, Ghana, and Sri Lanka. We have been invited to present our Design Code to congressional staffers, to the Federal Trade Commission, and at numerous academic conferences. We're especially grateful to our Managing Director, Ravi Iyer, for leading these efforts and to our Senior Advisor, Matt Motyl, for providing comprehensive analyses and reports of the findings.
We are also working to facilitate broader input into the design and governance of AI-based large language models (LLMs). We recently collected data on disparities in AI usage and presented it to policymakers in Washington, DC. In addition to our U.S. sample, we are working with partners to expand our surveys around the world; currently, we are collecting inputs for AI development from civil society actors in India and from representative samples of Polish and Kenyan citizens. I have also had the privilege of giving talks and leading workshops around the U.S. as well as in Poland, Kenya, Spain, and Italy, and of participating in a NIST-sponsored National Academy of Sciences working group that is designing a series of four workshops on mitigating AI risk in organizations and society.
As technology continues to push forward, bridging the gap between innovation and societal needs has never been more critical. We're doubling down on efforts to ensure that AI serves the greater good, reflecting our collective values and ethics. We invite you to join us in this endeavor — your insights and participation are crucial to creating a more equitable tech landscape. Stay tuned for updates on our initiatives and collaborations aimed at making AI a democratic force in our society.
Sincerely,
Nate Fast, Executive Director
Design Solutions Summit 2024
On March 11, 2024, the Neely Center, in collaboration with the Council on Technology and Social Cohesion, hosted the Design Solutions Summit 2024 in Washington, DC. This event brought together a select group of thought leaders and innovators at the forefront of technology and democracy, focusing on the critical role of design in enhancing online civic discourse. The event was kicked off by a speech from Kathy Boockvar, former Secretary of State of Pennsylvania, and included talks by numerous technologists with experience at companies such as Google, Facebook, and Instagram. Participants came from leading universities as well as Meta, Twitter (X), Google, Build Up, Search for Common Ground, the Prosocial Design Network, the National Democratic Institute, Reset.Tech, the Aspen Institute, the US State Department, the Department of Homeland Security, the American Psychological Association, Athena Strategies, the Alliance for Peacebuilding, Protect Democracy, and India Civil Watch International. Several participants leveraged the Neely Center's design code and election recommendations in their remarks.
As one attendee noted, “For someone working in the responsible tech field, the summit was an incredible opportunity to learn not just about new design solutions but, almost more importantly, where the field is converging on which design solutions are most powerful.”
MN Attorney General Uses Neely Center Design Code in Report on Emerging Technologies
In a recently released report, the Minnesota Attorney General's office leveraged the Neely Center for Ethical Leadership and Decision Making's Design Code for its comprehensive study on the impacts of social media and artificial intelligence on young people. The report not only highlights the challenges posed by digital platforms but also recommends actionable steps towards creating a safer online environment for youth, drawing on the principles outlined in the Design Code. Moreover, the report cites the Neely Center's Social Media Index as a credible tool for monitoring user experiences with technology. Attorney General Ellison emphasized the report's importance in shaping policies that protect young internet users from the adverse effects of emerging technologies: "The report my office released today details how technologies like social media and AI are harming children and teenagers and makes recommendations for what we can do to create a better online environment for young people. I will continue to use all the tools at my disposal to prevent ruthless corporations from preying on our children. I hope other policymakers will use the contents of this report to do the same."
Minnesota Introduces Legislation Based on Neely Center Design Code
Following recommendations from the Neely Center, Rep. Zach Stephenson has introduced the “Prohibiting Social Media Manipulation Act” aimed at curbing design practices that undermine user autonomy and elevate the risks for Minnesotans using social media platforms. Ravi Iyer, Managing Director, contributed insights and testified in support of the bill, which incorporates several of the Center's proposals, including enhanced privacy settings, ethical content amplification, reasonable usage limits, and greater transparency in platform testing.
Neely Center Presents to White House Kids Online Health and Safety Task Force
Ravi Iyer was invited to join a panel at Stanford to present academic and industry perspectives to the White House Kids Online Health and Safety Task Force. In his remarks, he emphasized the Neely Center Design Code's recommendations for specific, empirically supported changes to platforms that empower kids to avoid negative experiences with technology.
Neely Center Gives Invited Talk to Federal Trade Commission (FTC)
In an invited talk at the Federal Trade Commission (FTC), the Neely Center's Ravi Iyer discussed the impact of manipulative design patterns in social media, aligning with the FTC's focus on "Dark Patterns." His remarks emphasized our role in advocating for transparency and fairness in digital design. The Neely Design Code provides specific design recommendations for policymakers and technologists to improve the impact of social media platforms on society. We are excited to see the Neely Center's work contributing to substantive discussions on digital ethics.
Neely Center Joins Ofcom Academic Panel
We have shared our Design Code with decision-makers in the UK Government and the UK's communications regulator, Ofcom, as the new Online Safety Act moves toward implementation. Ofcom is now designing and consulting on the codes of practice that will give the Act effect. Ofcom also has a history of measuring user experiences online, similar to our Neely Center Indices, and there is much to be learned methodologically across the two efforts. Ofcom recently added our Managing Director, Ravi Iyer, to its Economics and Analytics Group Academic Panel, where he will advise on conceptual frameworks and empirical approaches to understand, measure, and improve outcomes for people in digital communications as the Act is implemented.
Neely Center Design Code Goes Global
It is exciting to see several of our partners leveraging the Neely Center's Design Code to engage with global policymakers and civil society groups. Our partners at Build Up have engaged the Kenyan and Ghanaian governments on specific ideas within our Design Code and continue to have fruitful dialogues about how these ideas can be integrated into government policies. On March 22, 2024, our partners at Search for Common Ground convened global civil society organizations in Sri Lanka working to combat Technology-Facilitated Gender-Based Violence (TFGBV) and invited the Neely Center to present our design recommendations as an alternative to the content-based legislation currently under consideration in places like Sri Lanka, which civil society organizations worry will be used to curb free expression.
Which Social Media Platforms' Users Are Most Likely to Encounter Societally Harmful Content
When we launched the Neely Social Media Index last year, we found that US adults who used X (Twitter) and Facebook were two to three times more likely than users of other platforms to see content on those platforms that they considered bad for the world. In a Substack post, Matt Motyl examined how these experiences with harmful content have evolved over the past year and how they vary across different social media and communication platforms. He found a decrease in reports of harmful content on Facebook, whereas on X (Twitter) there was an increase in reports of content potentially escalating the risk of violence.
The Fear Factor: Better Understanding Online Discussions About Crime and Safety
In the summer of 2023, a small team at Yale's Justice Collaboratory—comprising Matt Katsaros, Andrea Gately, and Jessica Araujo—collaborated with Ishita Chordia, a researcher at the University of Washington Information School, to better understand discussions about crime on Nextdoor. They leveraged both the Neely Center Social Media Index and the Neely Center Design Code in their work. Their paper, published recently in Tech Policy Press, proposes recommendations for enhancing online crime discussions.
Tracking Chat-Based AI Tool Adoption, Uses, and Experiences
Artificial intelligence is weaving its way into the fabric of our daily lives more seamlessly than ever before. According to the latest analysis from the Neely UAS AI Index, 18% of US adults have interacted with AI-driven chat tools such as ChatGPT, Bard, and Claude. With adoption growing this quickly and the generative AI market projected to reach $1.3T by 2030, it is critical to examine who is adopting chat-based AI tools and how they are being used. In a thought-provoking Substack post, our senior advisor Matt Motyl, postdoctoral researcher Jimmy Narang, and director Nate Fast unpack the potential ramifications of chat-based AI tool adoption.
Neely Center Advocates for Design-Based Solutions for EU Digital Services Act (DSA) AI Efforts
The European Commission recently sought our feedback on guidelines to mitigate systemic risks in electoral processes on large online platforms. The Neely Center provided input on the importance of design-based solutions that address many of the known limitations of watermarking. The Commission specifically cited the Neely Center among academic stakeholders who warned against an over-reliance on watermarking and labeling. We advised exploring design-based approaches, especially for scenarios where malicious actors might circumvent detection through watermarking.
Neely Center is Driving Innovation in Technology Policy
NBC11 in Minneapolis spoke with Ravi Iyer, Managing Director, about the Neely Center's role in helping to shape the state's recommendations to safeguard social media user experiences.
How Social Media Experiences Changed from 2023 to 2024
In a recent Substack post, Neely Center senior advisor Matt Motyl delves into the shifting dynamics of social media usage and its impact on user well-being and societal norms between 2023 and 2024. The analysis, based on the Neely Social Media Index, provides a comparative look at how engagement with social media platforms has evolved since our initial survey in early 2023, revealing trends such as a 5.8% decrease in YouTube usage and a 2.9% decrease in both LinkedIn and X usage among US adults. No platform increased its share of users over this span.
Ranking Content On Signals Other Than User Engagement
We are excited to share this recently released paper, which the Neely Center helped sponsor and co-author, illuminating industry knowledge about the tradeoffs between optimizing algorithms for quality versus engagement. The paper highlights one of our core design code proposals: that platforms should optimize not for engagement but for judgments of quality. Written in collaboration with numerous partners in academia (University of California, Berkeley; Cornell Tech), civil society (Integrity Institute), and industry (Pinterest, LinkedIn), it also discusses many concrete ways that platforms have introduced signals of quality into their algorithms, often by eliciting explicit preferences, with measurable results. The paper was also recently covered on Tech Policy Press's Sunday Show podcast.
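For readers who want the core proposal in concrete terms, here is a minimal, hypothetical sketch of ranking on elicited quality rather than raw engagement. The field names and the 0.8 weight are illustrative assumptions on our part, not details from the paper:

```python
from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    p_engage: float  # model's predicted probability of a click or reshare
    quality: float   # elicited explicit preference, e.g. mean survey rating in [0, 1]

def rank(items, quality_weight=0.8):
    """Score each item as a blend of predicted engagement and explicit
    quality. The 0.8 default is purely illustrative; quality_weight=1.0
    drops engagement from the ranking entirely."""
    def score(item):
        return (1 - quality_weight) * item.p_engage + quality_weight * item.quality
    return sorted(items, key=score, reverse=True)

# Outrage bait that predicts many clicks but rates poorly on quality falls
# below a well-rated explainer under the blended score.
feed = [Item("outrage_bait", p_engage=0.9, quality=0.2),
        Item("explainer", p_engage=0.4, quality=0.8)]
print([item.item_id for item in rank(feed)])  # ['explainer', 'outrage_bait']
```

Setting quality_weight to 1.0 corresponds to the stronger version of our design code proposal: not optimizing for engagement at all.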
Linda Yaccarino Says X Needs More Moderators After All
X CEO Linda Yaccarino's recent Senate testimony revealed a shift in the platform's approach to safety, with a notable increase in trust and safety staff and plans to hire more moderators. However, this move has sparked discussions about its sufficiency in ensuring user protection, especially for minors. In this Wired article, Matt Motyl, our senior advisor at the Neely Center, highlights the challenges of such measures, calling for a more genuine commitment to safety in tech.
How Mediators and Peacebuilders Should Work With Social Media Companies
We are thrilled to share an essay that emerged from our collaboration with global peacebuilding organizations. Published by Conciliation Resources in "Accord: An International Review of Peace Initiatives" (Issue 30), this piece advocates for stakeholders to not only identify and address individual instances of harmful content within their communities but also push for systemic reforms of the incentives within these digital ecosystems. The essay argues that peacebuilders and mediators must move beyond reactive moderation to proactive prevention, influencing the foundational policies that govern social media platforms.
Deepfake Democracy: Behind the AI Trickery Shaping India’s 2024 Election
In an article discussing the risk of AI-powered deepfakes in India's 2024 election, Al Jazeera talked with Ravi Iyer, the Neely Center's Managing Director, about how platforms should respond. In keeping with our previous work on algorithmic design, Ravi discussed the difficulty platforms would have detecting deepfakes and instead suggested redesigning the algorithms that currently incentivize polarizing content. The ethical implications of deepfakes are undeniable, and regulating them remains a complex issue; even so, safeguarding the integrity of our elections and democracy is paramount.
Neely Center at the Social Media & Society in India Conference
On April 8-9, 2024, the University of Michigan hosted a hybrid conference on Social Media and Society in India, featuring a host of speakers discussing the various ways social media is shaping contemporary life in India. Now in its fourth iteration, the conference is a premier venue for conversations about social media and society in India. Jimmy Narang, a postdoc at the Neely Center, spoke about his research on the mechanics of misinformation distribution in India. The Neely Center was proud to co-sponsor the event.
Neely MBA Ethics and Technology Fellows
Stay tuned for exciting updates on the Neely MBA Ethics & Tech Fellows; each of them is working on a project integrating ethics and mixed reality tech across different domains. We’re impressed with the work they are doing and are grateful for the leadership that Parama Sigurdsen and Carsten Becker are providing for the program.