Meta will finally give researchers access to targeting data for political ads — information that academics have been demanding for years and using legally risky workarounds to collect themselves.
The company had argued in the past that sharing targeting information could violate users’ privacy. Last year, it went so far as to permanently shut down a widely used ad transparency project at New York University after serving the researchers there with a cease-and-desist letter.
In a blog post published on Monday, Meta said it would share targeting data on individual advertisements with pre-approved researchers who are part of its Facebook Open Research and Transparency (FORT) project. The company piloted this kind of data sharing last year with a subset of 2020 election ads. Now it’s expanding that work to researchers in its network.
The company will also offer more information about political ads in its Ad Library, which anyone can access. Rather than sharing targeting data on individual ads, the Ad Library will show aggregated data on how many ads a page has served targeting a given demographic and how much that page has spent doing so. “For example, the Ads Library might show that in the last 30 days a Page ran 2,000 ads about social, election, or political issues, and that 40% of its spend on those ads was for ‘people who live in Pennsylvania’ or ‘people who are interested in politics,’” the blog post read.
One of the reasons Meta has been reluctant to share more granular targeting data widely is that the company believes it would be too easy to reverse engineer who saw which ads and infer certain characteristics of individual users. “If you combine these two sets of data, you could potentially learn things about the people who interacted with the ad,” Steve Satterfield, director of privacy and public policy at Facebook, told Protocol last year.
In the absence of this information, NYU researchers developed a browser extension through which Facebook users could choose to share with researchers the political ads they saw in their own feeds. The extension also retrieved information shared by Facebook with those users about why they were seeing the ad — information that, when collected in bulk, amounts to targeting data. The researchers then released this information to a public archive. That work ended last summer, however, when Facebook suspended the researchers’ accounts after a months-long legal impasse.
Targeting data is key to understanding the underlying motivation for political advertising. By bringing this data to more researchers, starting later this month, Meta could contribute significantly to public understanding of how political campaigns and groups work. It will also undoubtedly lead to a deeper examination of Meta and how it allows political actors to manipulate and microtarget users.
Meta’s FORT program hasn’t been without its problems either. Last year, the company admitted that it had sent a set of faulty data to the program’s researchers, leading to potentially erroneous results for the academics who had relied on it.
Meta also opened up additional data to an even more selective group of outside researchers, who have been studying the platform’s impact on the 2020 election and the Jan. 6 riot. But the results of this research, expected last year, have been postponed and have not yet been published.