# Get Out Your Pitchforks

1995 was a monumental year for music, and not just because GZA of the Wu-Tang Clan released *Liquid Swords*. It marked the beginning of the definitive voice in music, *Pitchfork*. That year, Ryan Schreiber launched the site from his humble basement in Minnesota with no inclination as to what it would eventually become. By 2006, the site was growing rapidly and gaining a reputation as a rising voice in the industry. Nine years later, the site was acquired by Condé Nast, the multimedia mega-corporation behind some of the most well-known magazines and publications. *Pitchfork* would now be joining the likes of *Vanity Fair*, *GQ*, and *The New Yorker*, cementing itself as the most influential modern voice in the music industry. In the years since the Condé Nast buyout, *Pitchfork*’s reputation has changed, to say the least. Public perception of the site has taken a hit; many feel that it is not as fair as it once was and that its attention has shifted in a regressive direction. Readers often complain of scores completely detached from the reviews accompanying them, or “Best New Music” selections that seem to come out of left field. The public may already have made up its mind about whether *Pitchfork* has changed, but is it true statistically? Has *Pitchfork* really changed since being purchased by Condé Nast, and if so, how? In my research, I sought to answer this question and the intricacies surrounding it. Using a full set of *Pitchfork* reviews spanning 1999 through 2019, I hypothesized that yes, *Pitchfork* has changed since being purchased in 2015, that it has likely become more positive in general, and that it has developed specific tendencies toward popular genres.

*Pitchfork* may hold a high position in the music industry, but it is worth researching for reasons beyond this. The site has the ability to make or break careers, and given the power it holds it is worth seeing how it chooses to use that power. The most infamous negative review, and clearest evidence of the “*Pitchfork* Effect,” is probably Chris Dahlen’s review of *Travistan*. The album received a 0.0, but what is most notable is the financial effect the review had: dozens of local indie record stores refused to even *carry* the album because of the *Pitchfork* score. Other artists, like Phoebe Bridgers and Big Thief, have seen immense periods of growth following cosigns from *Pitchfork*. Another clear example is Bon Iver, whose 2007 album *For Emma, Forever Ago* topped *Pitchfork*’s year-end list and no other publication’s. A year later, when Justin Vernon a.k.a. Bon Iver re-released the album, it landed on several major sites’ best-of-2008 lists. Today the site draws around 1.4 million visitors a day, making it by far the biggest music publication site. *Pitchfork* makes for an intriguing subject of research not only because of its size but also because of its arc. Growing from humble beginnings into the largest independent music blog ever created is bound to produce interesting changes over two decades; couple that with a multi-billion-dollar buyout by a publishing giant and it makes for a compelling subject to analyze. After weeks of analyses, visualizations, and an important regression, I found that *Pitchfork* scores have in fact changed. How those scores have changed proved to be the most interesting part.

Research on critics is not uncommon. The “experience goods” that sites like *Pitchfork* write about are ambiguous, and researchers have tried to pin those ideas down for years (Dhar & Chang 2007). The specific notoriety of *Pitchfork* has made it a subject of research in the past, and my own study draws on and builds upon the findings of several others. The first major piece of writing and research on *Pitchfork* was done in 2006 by David Itzkoff of *Wired* magazine. The article was written at the point when *Pitchfork* was just turning the corner from an indie blog to a major news site, and it offers a good peek into the past, showing the power and presence that *Pitchfork* held at the time. Most importantly, the article’s story of *Travistan*, mentioned earlier, indicates that negative reviews make for bad business (Itzkoff, 2006). Around this same time, the *Washington Post* published its own article of a similar fashion on the budding magazine. Written by J. Freedom du Lac, it also focuses primarily on the power Ryan Schreiber was gaining at this point. The article mentions Schreiber’s review of Arcade Fire’s *Funeral*. The album received a 9.7 and shortly after became the highest- and fastest-selling record in Merge Records’ history. The ability of critics to alter sales has been widely studied in the past (Eliashberg and Shugan 1999). Since *Pitchfork* also has this ability, it is all the more worthy of extensive research. Most interestingly, towards the end of du Lac’s article, Schreiber goes as far as to say he has “no interest” in selling even a small share of the company at any point (du Lac, 2006). Obviously, the opposite came to be and Schreiber did eventually sell. The quote gives intriguing context to the eventual Condé Nast buyout.

An especially important piece of scholarship on *Pitchfork* that I built upon in my work is Neal Grantham’s extensive statistical analysis from 2015. His work, titled “Who Reviews the *Pitchfork* Reviews,” is a straightforward look at a wide variety of statistics and visualizations drawn from random samples of *Pitchfork* reviews. Grantham’s work made for a good comparison piece because it was written in 2015, just eight months before the Condé Nast purchase. Many of Grantham’s findings, particularly those around which genres were reviewed the most and most favorably, had changed considerably in the gap between his work and mine (Grantham, 2015).

While research into acquisitions has been done in the past, very little of it focuses on the company that is acquired; most focuses on the financial performance of the parent company (Wiles, Morgan, and Rego 2012). In 2016, Donders and Van den Bulck published an article on the effects of acquisitions on smaller broadcasting networks. Their study focused on the smaller broadcasters acquiring the content, and they found that the acquisitions did not have a structural effect. While Donders and Van den Bulck’s work is similar to mine, it is not a one-to-one comparison, and our results were much different (Donders & Van den Bulck, 2016). Additionally, there has been a moderate amount of work done around *Pitchfork*, but again nothing that addresses the specifics of my own question. My work is interested in the effects of the Condé Nast buyout, and in this sense it stands mainly on its own.

My question focuses on *Pitchfork* scores and how they have changed over time, with particular attention to the 2015 buyout by Condé Nast. I hypothesized that, when controlling for other factors like genre, I would see steady growth in the average and median score over time, with even clearer changes following 2015. To show how *Pitchfork* had changed I also wanted to look at factors besides score. Genre seemed like a good variable to observe because the genre of an album will probably have a greater effect on its score than any other variable. I also hypothesized that genre coverage would change over time to become more inclusive of popular genres that would be better for business.

I designed my study around a few central questions and then created visualizations to answer them. The first question is whether or not the median score of *Pitchfork* reviews has changed over time. To answer it I ran regression tests accounting for score, year, and genre. The scores range from, you guessed it, 0 to 10, with over 20,000 observations spread across every year in the set. Genre was a little trickier since it was a string variable, but I converted it to a “dummy” with numbers 0–7 corresponding to the different genres: rock, pop, rap, metal, electronic, experimental, folk/country, and global. A summary of my key variables is listed below.
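The genre coding described here can be sketched in a few lines of Python; the column names (`year`, `genre`, `score`) and the tiny table are hypothetical stand-ins for the real dataset’s layout:

```python
import pandas as pd

# Hypothetical slice of the review data; the real column names may differ.
reviews = pd.DataFrame({
    "year":  [1999, 2005, 2016, 2016],
    "genre": ["rock", "electronic", "rap", "pop"],
    "score": [7.0, 6.8, 8.1, 7.4],
})

# Map genre strings to the 0-7 codes described in the text.
# (For a regression, indicator columns via pd.get_dummies(reviews["genre"])
# avoid treating the genres as an ordered scale.)
genre_codes = {"rock": 0, "pop": 1, "rap": 2, "metal": 3, "electronic": 4,
               "experimental": 5, "folk/country": 6, "global": 7}
reviews["dumbgenre"] = reviews["genre"].map(genre_codes)
```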

For a little context, a regression coefficient measures the linear relationship between two variables while holding all other variables constant. With this test we can see whether or not the median score is increasing over time.

*summ dumbgenre year score date2*
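As a rough sketch of what such a regression computes, here is a plain-NumPy version run on synthetic data; the 0.02-per-year drift is built into the fake scores for illustration and is not the paper’s result:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
year = rng.integers(1999, 2019, size=n)        # synthetic review years
genre = rng.integers(0, 8, size=n)             # synthetic 0-7 genre codes
# Fake scores with a built-in upward drift of 0.02 points per year.
score = (7.0 + 0.02 * (year - 1999)
         + 0.1 * (genre == 2)                  # small genre effect
         + rng.normal(0, 0.5, size=n))         # noise

# Design matrix: intercept, centered year, and genre indicators
# (genre 0 is dropped as the baseline to avoid collinearity).
X = np.column_stack([np.ones(n), year - 1999]
                    + [(genre == g).astype(float) for g in range(1, 8)])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
year_coef = beta[1]   # estimated score change per additional year
```

Holding genre constant amounts to including those indicator columns, so `year_coef` isolates the year trend alone.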

**Research Findings**

The most important result here is the coefficient on year in the regression for score: 0.0198. This means that for each one-year increase there is an expected increase of 0.0198 in the score of an album. This may not seem like much, and frankly it is not enormous, but the result was statistically significant and it shows a positive trend year over year.

A good way of contextualizing this result: if the median score in 2000 were 7, then the predicted median would be 7.198 by 2010 and 7.396 by 2020. Given that the range of medians in the set is only 0.5, this coefficient is more meaningful than it seems at first glance.
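The projection is a simple linear extrapolation of the coefficient:

```python
coef = 0.0198              # estimated yearly increase from the regression
base = 7.0                 # hypothetical median score in 2000
pred_2010 = base + coef * 10
pred_2020 = base + coef * 20
# pred_2010 is 7.198 and pred_2020 is 7.396, matching the text.
```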

In 1999, the first year of my data set, the median score was 7.15. Over the last four years of the set it rose to 7.4. It is worth mentioning that 2001 and 2004 had the highest medians, both at 7.5.

When researching other changes in scores across my data set, I looked first at the median, but other statistics proved more telling. One important number is the 5th percentile of scores for each year.

The 5th percentile represents a number on the lower end of *Pitchfork’s* spectrum, since 95% of scores lie above it. This number increased substantially nearly every single year. In 1999 it was higher than expected, but that can be attributed to the lower overall number of scores in the dataset for that year. The 5th-percentile score in 2000 was 2.5, while in 2018 it was 5.9, more than twice as high. 2019 was omitted from this portion of the research, and many others, because the dataset contains very few reviews from that year.
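Per-year percentiles like these are a one-line groupby in pandas; the toy numbers below are made up to show the mechanics, not the actual distribution:

```python
import pandas as pd

# Toy stand-in for the review data.
reviews = pd.DataFrame({
    "year":  [2000] * 5 + [2018] * 5,
    "score": [2.0, 4.0, 6.5, 7.0, 8.0,      # wider, harsher spread
              5.5, 6.5, 7.0, 7.5, 8.5],     # compressed, kinder spread
})

# 5th-percentile score within each year: 95% of that year's
# scores fall at or above this value.
p5 = reviews.groupby("year")["score"].quantile(0.05)
```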

Another compelling way to observe the change in scores over time is through kernel density graphs, which show the density of each possible score for a given year. The first of these graphs is a one-to-one comparison of 2018 and 2003, chosen because they are the latest and earliest years with a robust set of scores.

The comparison here is meant to show early *Pitchfork* versus late *Pitchfork*. One comparison does not paint the full picture, though, so I constructed a second graph with the kernel densities for every year in the data excluding 1999, 2000, and 2019.
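A kernel density estimate places a small Gaussian bump at each score and averages the bumps into a smooth curve; here is a minimal NumPy version with a hand-picked bandwidth (real tools choose it by rule, and the sample scores are invented):

```python
import numpy as np

def kde(scores, grid, bandwidth=0.3):
    """Gaussian kernel density estimate of `scores`, evaluated on `grid`."""
    scores = np.asarray(scores, dtype=float)
    # One standard-normal bump per observation, scaled by the bandwidth
    # and averaged so the curve integrates to 1.
    diffs = (grid[:, None] - scores[None, :]) / bandwidth
    bumps = np.exp(-0.5 * diffs ** 2) / np.sqrt(2 * np.pi)
    return bumps.mean(axis=1) / bandwidth

grid = np.linspace(0, 10, 101)
density_2003 = kde([5.0, 6.0, 6.8, 7.5, 8.0], grid)   # invented scores
density_2018 = kde([6.8, 7.0, 7.2, 7.5, 7.8], grid)   # invented scores
```

Overlaying such curves for two years makes a tighter, taller late-era peak visually obvious.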

This graph is a little confusing at first because it appears to be a medley of colors, but the colors are coded to specific ranges: peak densities of 0.4 and below are black, those above 0.45 are blue, and everything in between is orange. The years are then shown in the legend next to their density lines, indicating the density range of each year.

The years group naturally by color: every year from the buyout onward is blue. This graph gives a better comparison of how things changed following the buyout. Lastly, I have a density graph of all scores across the whole data set with a LOWESS (Locally Weighted Scatterplot Smoothing) line overlaid.

A LOWESS line smooths a scatterplot to reveal the underlying relationship between the variables; as you can see, the LOWESS line here is on a steady upward trend.
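For the curious, LOWESS works by fitting a small weighted linear regression around each point; a compact sketch (tricube weights, local linear fit, no robustness iterations) looks like this:

```python
import numpy as np

def lowess(x, y, frac=0.5):
    """Locally weighted linear smoother: returns fitted values at each x."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    k = max(2, int(frac * n))          # points per local window
    fitted = np.empty(n)
    for i, x0 in enumerate(x):
        dist = np.abs(x - x0)
        idx = np.argsort(dist)[:k]     # k nearest neighbours of x0
        d_max = dist[idx].max()
        w = (1 - (dist[idx] / d_max) ** 3) ** 3   # tricube weights
        # Weighted least-squares line through the window, evaluated at x0.
        W = np.diag(w)
        A = np.column_stack([np.ones(k), x[idx]])
        beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y[idx])
        fitted[i] = beta[0] + beta[1] * x0
    return fitted
```

On the real scatter, the fitted values would be plotted against review date to produce the trend line described above.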

In addition to the scores, this dataset provides variables for genre that were helpful for further analysis and visualization. As my hypothesis stated, I expected *Pitchfork’s* genre coverage to change over time. The most interesting findings around genre came from the changing frequencies of reviews. After tabulating all of the genre observations for each year, I found that the number of rock albums reviewed, after initially increasing until 2009, decreased by an average of five every year.

On the other hand, the number of pop albums reviewed substantially increased over time, by an average of 8.7 per year. Interestingly enough, the biggest year-over-year increase was between 2015 and 2016, immediately following the Condé Nast purchase.

Rap albums continued on a similar trajectory to pop albums, albeit with even more interesting results. Each year the number of rap albums reviewed increased by a whopping 13.5 on average, with an enormous jump between 2015 and 2016.

Examining the changes across these three major genres tells an interesting story: while pop and rap albums are still reviewed less often than rock albums, both have seen huge increases in the number of reviews received each year, whereas rock has seen a steady decline since the mid-2000s. Presenting the data alone does not tell the full picture, though; understanding it requires a fair amount of analysis.
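The per-year genre tabulation behind these counts amounts to a crosstab; the six-row table here is a hypothetical miniature of the dataset:

```python
import pandas as pd

# Hypothetical miniature of the review data.
reviews = pd.DataFrame({
    "year":  [2009, 2009, 2009, 2016, 2016, 2016],
    "genre": ["rock", "rock", "rap", "rap", "pop", "rap"],
})

# Rows are years, columns are genres, cells are review counts;
# year-over-year changes fall out of row differences.
counts = pd.crosstab(reviews["year"], reviews["genre"])
```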

My original hypothesis was that *Pitchfork* reviews have changed over time and that the Condé Nast buyout had an effect on those changes. Given the data I have collected, I believe that is the case. First, observe the regression tests, percentile changes, and kernel density graphs. The regression analysis shows a positive correlation between median score and year. That growth coefficient is not massive, but it is certainly substantial: for as tightly bound as the median range is in this dataset, an increase of 0.0198 each year becomes significant over a decade or two. The regression also holds for genre, meaning that the increases in score are related to year and not genre. The actual median score did not change very much over time, though it is worth pointing out that the median for the last four years has been a consistent 7.4, above the median of the whole set and of most individual years.

The median may not be a clear enough indicator that *Pitchfork* is becoming more positive than before, but the 5th-percentile score is. This number represents the bottom tier of *Pitchfork* scores given in a certain year, and its substantial increase over time shows that *Pitchfork* is becoming less harsh. The lower scores of each year have become increasingly higher, a trend that accelerated after 2015. As for the kernel density graphs, these show that *Pitchfork* scores are becoming dense around the 6.5–8 range, indicating a higher frequency of albums scoring in that range than ever before. It would appear that more “middle of the road” albums are receiving higher and higher scores, which drives up the peaks of the density graphs. This development is especially obvious in 2016, 2017, and 2018, all years following the Condé Nast buyout, and it strongly suggests that in this period *Pitchfork* became more positive about albums than ever before.

Moving on to the genre data, it becomes even clearer that changes are happening internally at *Pitchfork*. The chart showing the number of rock albums reviewed each year presents an incline at first; the increase from 2001 to 2009 reflects the site’s overall growth, as reviews across all genres increased during this period. After that, the number of rock albums reviewed each year started to decrease significantly, with nearly every year from 2010 onward seeing a decrease from the year before. The biggest drop came after the Condé Nast buyout, and though the numbers have stabilized since then, the drop still shows how things have changed. Pop and rap, by contrast, have seen steady increases rather than decreases in the last several years. As *Pitchfork* grew it expanded to cover more pop albums each year, a trend that is clear from the graphs. That trend held steady at first and stabilized in 2009, much like the number of rock albums scored each year. Unlike rock, however, pop exploded in 2015 (the year of the buyout) and 2016 (the year after). Rap followed a similar but even more extreme arc: like the two previous genres, the number of rap albums reviewed grew waveringly but positively over time as *Pitchfork* expanded, and following the Condé Nast buyout in 2015 that number nearly doubled, going from 87 albums to 155.

One may ask why certain genres are being reviewed more and others less. The answer is that the more popular genres, pop and rap in this case, take precedence over the decreasingly popular genre of rock; what is popular is good for business. Look at the jump in rap albums reviewed from 2015 to 2016: I hypothesize that this increase is a direct result of influence from Condé Nast. It is likely that Condé Nast prefers that more popular genres be reviewed because that is better for business, and that preference drives the trends in my results.

All of this adds up to paint a clear picture of change at *Pitchfork*, accelerated by the 2015 buyout by Condé Nast. The regression and median charts show a clear increase in overall positivity. The fifth-percentile scores illustrate a sharp decrease in the “harshness” of lower scores, and the density graphs show a substantial increase in albums scored at higher levels since 2015. On top of these changes, genre coverage has been affected by the buyout as well: rock albums have been shelved for more popular and lucrative genres like pop and rap, a move that, given the events of 2015, appears to be largely financially motivated. While I feel my hypothesis was borne out, there are a few caveats that could have affected my results.

Finding a complete data set was very difficult. The one used in this study proved helpful, but two full years have passed since the latest full year it covers; an additional two years of data might have done a lot to confirm or potentially refute my findings. My research focused mainly on how scoring and coverage have changed, with an emphasis on the 2015 buyout, but other factors may have changed too. I did not analyze staff makeup during this period, something that likely went through many changes, especially following 2015. Lastly, my analysis focused on hard numbers, which means I set aside the kind of content analysis that might have proven interesting following the Condé Nast purchase. So while this is not a complete study of all things *Pitchfork*, it does a good job of showing how score trends and genre coverage have changed over time, and it demonstrates the effects of Condé Nast’s purchase in 2015 through clear examples of the site changing its habits after that year. More work remains to be done that takes 2019 and 2020 into account, along with more in-depth analysis of review content. For now, this study offers a technical look at the change and opens the door for more research to follow.