Here's How to Create Your Brand's Own Social Media Benchmarks

We’ve heard it before: What is the average engagement rate for our industry? What is the optimal follower growth per month? Is our content clickthrough rate high enough? What is a good salt ratio per pound of french fries?

Okay, maybe not that last one, but the majority of these questions can be addressed through social media benchmarking.

What is Social Media Benchmarking?

Social media benchmarking evaluates your current social media performance relative to industry standards, competitors, or even your own historical performance. 

Most major social media platforms have a wealth of data available for you, which allows you to take a look at your content at more than surface level. 

You may be wondering: why go through all the trouble of benchmarking your own social media data when industry averages are just a Google search—or even a ChatGPT prompt—away? The simple overarching answer lies in the fact that social media is a highly dynamic space, making it extremely rare to find a single optimal estimate for most metrics tailored to your specific profile.

How Social Media Benchmarks Are Calculated

Let’s take a popular and frequently benchmarked metric as an example: engagement rates. Engagement rates represent the overall percentage of engagements on your social media content in relation to your audience. Countless studies on this percentage for various industries are scattered across the internet, but many users may not realize that there is no single, uniform method for calculating engagement rates.

Here are some of the ways we have seen this metric formulated:
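To illustrate, here is a minimal sketch in Python of a few common formulations; the numbers are purely hypothetical, and the variants shown are general examples rather than the exact formulas from any particular study.

```python
# Illustrative values only; definitions of "engagements" also vary by study.
engagements = 1_200     # e.g., reactions + comments + shares
reach = 45_000          # unique accounts that saw the content
impressions = 60_000    # total times the content was displayed
followers = 30_000      # audience size at the time of posting

er_by_reach = engagements / reach              # engagements divided by reach
er_by_impressions = engagements / impressions  # engagements divided by impressions
er_by_followers = engagements / followers      # engagements divided by followers

print(f"{er_by_reach:.2%} | {er_by_impressions:.2%} | {er_by_followers:.2%}")
```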

Oftentimes, studies won’t disclose the exact formula for the metrics in question. To use a statistical term, there is a significant risk of heterogeneous measurement—where metrics are not standardized across studies. If one study uses an engagement rate formula that differs from the one you use, it can lead to inconsistencies and improper benchmarking.

Beyond measurement inconsistencies, another limitation of generalized benchmarking is that industry averages are, after all, just that—averages. More often than not, they fail to account for other dependent variables, such as the follower count of analyzed profiles, the total number of accounts included, post volume, content distribution, and more. Combined with the potential opacity of the calculations themselves, this can create further comparative issues. Therefore, it is best to use industry benchmarks as an extremely high-level overview of your performance and focus on developing your own personalized benchmarks.

Steps to Create Your Own Social Media Benchmarks

Today, we are going to take a dive into how you can benchmark your own data using some simple methods, all without needing to rely on third-party tools or breaking the bank. 

We’ll use Meta Business Suite as our example. 

Meta Business Suite is a free platform that allows businesses to manage their Facebook and Instagram profiles. It provides extensive access to your channel's analytics, which can be used for benchmarking. You will need a Facebook page in order to access this. 

Let’s continue with the topic of engagement rates in our example and create personalized benchmarks. We’ll need data on our content to follow this process.

1.) Locate ‘Meta Business Suite’ in the left panel once you are on your page.

2.) Once in Meta Business Suite, look for the ‘Content’ option in its left panel.

3.) Once in the Content page, you should see an option towards the top right to export performance data on your page’s Meta posts. Use this to export data on either Facebook or Instagram content through a set date range. 

From this point, you will need to do some spreadsheet work to formulate content benchmarks for your channel. There are potentially hundreds, if not thousands, of ways you can explore this dataset for benchmarking purposes.

For engagement rates, Random typically uses reach for Facebook or Instagram. This is Meta’s native denominator for calculating the metric (feel free to use impressions if preferred). This also gives us a better understanding of how our content is resonating with engaged users, both followers and non-followers, compared to dividing by followers. 

Furthermore, we utilize rolling benchmarks with a maximum of three to six months of content data, as social media platforms tend to be fluid with algorithm changes and major updates. While there’s no real downside to setting your duration longer than this, we find relatively shorter durations to be representative as long as posting frequency is adequate. Note: Meta Business Suite won’t let you export more than 92 days at a time, which means you may have to aggregate multiple datasets if you want longer durations.
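If you do need a window longer than 92 days, a rough sketch of stitching multiple exports together might look like this (file and column names here are assumptions; adjust them to your own exports):

```python
import pandas as pd

# Hypothetical file names; each is a separate export of up to 92 days.
exports = ["meta_export_jan_mar.csv", "meta_export_apr_jun.csv"]

posts = pd.concat([pd.read_csv(path) for path in exports], ignore_index=True)

# If the date ranges overlap, drop duplicate posts; "Permalink" is an assumed
# identifier column here, so swap in whatever unique ID your export contains.
posts = posts.drop_duplicates(subset="Permalink")
```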

Continuing with our example, let’s say you want to calculate your up-to-date content average for engagement rate. In the 'Export Data' option, you can select the platform, set the dates (we’ll use the full 92 days up until the current day), and then export the data. 

Here are a few calculation examples:

Average Engagement Rate

  1. Take the sum of the ‘Reactions, Comments and Shares’ column (if using Facebook). 
  2. Take the sum of the ‘Reach’ column. 
  3. Take the output of Step 1 and divide it by the output of Step 2. 

What you have now is your channel’s up-to-date average engagement rate for the past three months of content. You can compare this to the next three months of content, the same three-month period from the previous or next year, or update the value with newer months. The method for benchmarking this average is entirely flexible.
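As a minimal sketch, the same three steps could be reproduced in Python with pandas (the file name is hypothetical, and the column names assume a typical Facebook export):

```python
import pandas as pd

posts = pd.read_csv("meta_export.csv")  # hypothetical export file

# Steps 1 and 2: sum total engagements and total reach across all posts.
total_engagements = posts["Reactions, Comments and Shares"].sum()
total_reach = posts["Reach"].sum()

# Step 3: divide the two sums.
average_engagement_rate = total_engagements / total_reach
print(f"Average engagement rate: {average_engagement_rate:.2%}")
```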

Median Engagement Rate

  1. Within the same dataset, create a new column for engagement rates. 
  2. Set a formula that divides the engagement totals of that row’s content by the reach of the same content. If you want to include clicks in your engagement totals, you can add them to the engagement and then divide the result by reach.
  3. If you're using Excel, move your pointer to the bottom-right corner of the cell and double-click when the black plus symbol appears to apply it to the entire column. You can also drag and drop the cell vertically.
  4. On an empty cell, output the median of this new column of engagement rates. You can use this formula template: ‘=MEDIAN(COLUMN LETTER:COLUMN LETTER)’.
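The same steps translate directly to pandas as well; this is a rough sketch using the same assumed file and column names as before:

```python
import pandas as pd

posts = pd.read_csv("meta_export.csv")  # same hypothetical export as above

# Per-post engagement rate (engagements divided by reach for each row).
posts["Engagement Rate"] = posts["Reactions, Comments and Shares"] / posts["Reach"]

# Equivalent of =MEDIAN(COLUMN LETTER:COLUMN LETTER) in the spreadsheet.
median_engagement_rate = posts["Engagement Rate"].median()
print(f"Median engagement rate: {median_engagement_rate:.2%}")
```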

Calculating the median is an excellent way to benchmark your content engagement rate. Medians are naturally more resistant to outliers, which are common in social media analytics—such as viral videos, content types, etc. Although the exact formula for engagement rates is often difficult to discern in any particular study, most will utilize either averages or medians. We recommend comparing the median engagement rate to your individual content as this is likely a better representation of typical performance. 

Why You Should Create Your Own Social Media Benchmarks

As we have shown, there are numerous ways to approach benchmarking, spanning metrics such as followers, clicks, click-through rates, video metrics, and more.

Whichever metrics you choose, benchmarking your own performance brings considerable benefits: your numbers reflect your actual audience, content mix, and posting cadence rather than an opaque industry average.

Want personalized help with benchmarking strategies to optimize your social media results? Chat with our team today! 

Random Labs: The Performance Impact of Blogs for Websites

We’re kicking off our first quarterly Random Labs experiment of 2025! Before diving in, you might be wondering, “What is Random Labs?” The answer is simple: Random Labs is a quarterly blog series where our team of marketers and data scientists put a theory to the test to either myth-bust or prove a marketing-related hypothesis. For Q1, 2025, here’s what we tested: What’s the performance impact of blogs on websites?

You’ve probably heard it a million times: “Blogs are essential to boost SEO and your website performance!” But what impact do they actually have? We tested this hypothesis through a more extensive lens and implemented robust data analysis.

We Tested How Blogs Affect Website Performance

For this experiment, we gathered blog data from early 2024 to early March 2025, tracking the total number of blogs and selecting dependent variables related to website performance. Dependent variables are the factors we are measuring to assess how blog presence influences website performance.

Dependent Variables: new users, sessions, and bounce rate.

New users and sessions are both crucial metrics for analyzing overall website traffic. The gap between sessions and new users can provide insight into how often users return to the site. Additionally, bounce rate helps assess user engagement, though its significance depends on the context. In many cases, a lower bounce rate is preferable.

In this analysis, we will break down higher-level variables, such as total blog quantity, and then drill down into specific blog categories and topics to assess their impact on website performance.

Step 1: Impact of Total Blogs and Cadence

We begin with a high-level correlation check between the total number of blogs and the dependent variables. While we do not expect a strictly linear relationship between blog quantity and traffic—given the influence of other marketing tactics such as organic media, paid media, and referrals—blog content can still indirectly support these channels over time. This check provides a broad understanding of overarching trends before we narrow our focus to more precise insights in the next steps.

We construct the following scatterplots:
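For readers who want to reproduce this kind of chart on their own data, here is a rough sketch; it assumes a hypothetical monthly roll-up file with one row per month and the column names shown.

```python
import pandas as pd
import matplotlib.pyplot as plt

monthly = pd.read_csv("monthly_summary.csv")  # hypothetical monthly roll-up

fig, axes = plt.subplots(1, 3, figsize=(15, 4))
for ax, metric in zip(axes, ["New Users", "Sessions", "Bounce Rate"]):
    ax.scatter(monthly["Total Blogs"], monthly[metric])
    ax.set_xlabel("Total blogs per month")
    ax.set_ylabel(metric)

plt.tight_layout()
plt.show()
```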

Scatterplots are excellent visualizations for analyzing relationships between independent and dependent variables. Our output shows an increasing trend in new users and sessions as the total number of monthly blogs increases, with greater dispersion in values. 

The variance in bounce rates appears to decrease when the monthly blog cadence reaches approximately 8–10 blogs per month. 

We can further supplement our scatterplots by constructing a correlation matrix, which provides a more numerical perspective on the relationships between these variables.

Reading a correlation matrix is quite simple in this example. The numbers listed in the table represent what is known as a correlation coefficient. This is a number between -1 and 1 that measures the strength and direction of the linear relationship between two variables. In practice, it is rare to see a value of exactly -1 or 1, as these represent perfect negative or positive correlations, which are uncommon in real-world data. The closer these coefficients are to those endpoints, the stronger the linear relationship we can conclude between those variables. 
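As a minimal sketch, the matrix itself can be generated directly from the same hypothetical monthly roll-up used above:

```python
import pandas as pd

monthly = pd.read_csv("monthly_summary.csv")  # hypothetical monthly roll-up

# Pearson correlation coefficients between blog cadence and the dependent variables.
corr = monthly[["Total Blogs", "New Users", "Sessions", "Bounce Rate"]].corr()
print(corr.round(2))
```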

Insights:

The outcome suggests a weak positive correlation between the total number of monthly blogs and the resulting new users and sessions.

Additionally, a weaker negative correlation exists between the total number of monthly blogs and website bounce rates.

This aligns with what we see in practice, as blog content is only one of many channels that drive traffic to a website—others include organic and paid social, referrals, links, search, and more. While we see some mild signs of increased website traffic alongside a potential decrease in bounce rates, more content may not always be the answer, as our upcoming findings will suggest.

Step 2: Impact of Different Blog Topics

Because blogs by nature cover a wide range of topics, our next analysis focuses on specific blog topics. To organize the large number of blog posts in the same dataset, we grouped them into categories such as Analytics and Case Studies, Trends and Memes, Guides and Strategies, and Employee Advocacy.

Considering the significant number of topics we are using compared to our previous step, we can use a different visualization to analyze how these blog topics relate to website performance. A correlation heatmap is a great visual tool for this example:
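A sketch of how such a heatmap might be generated, assuming a hypothetical roll-up with one column of monthly post counts per blog topic alongside the website metrics:

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

topics = pd.read_csv("monthly_topics.csv")  # hypothetical roll-up by topic

corr = topics.corr(numeric_only=True)
sns.heatmap(corr, cmap="coolwarm", vmin=-1, vmax=1, annot=True, fmt=".2f")
plt.title("Blog topics vs. website metrics")
plt.tight_layout()
plt.show()
```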

The interpretation of this chart is similar to that of the correlation matrix, with the key difference being the use of color shading instead of coefficients, along with an expanded set of independent variables in the blog topics. In our example, a darker shade of blue or red indicates that a variable's correlation coefficient is closer to -1 or 1. Conversely, a lighter shade signifies a weaker linear relationship between those variables. The bounce rate is represented by a separate color scheme, reflecting its decline as blog activity and other optimizations increased over the months.

Much like our previous step, we can also supplement with a correlation matrix:

Based on the output of both charts, we can draw some conclusions about specific blog topics relative to the website performance metrics:

Insight:

Blog content related to Analytics or Case Studies showed a relatively strong positive linear relationship with an increase in new website users and sessions.

Summary: Blogs on this topic include social media benchmarking, various tests, and case studies on well-known, high-interest questions, which could explain the relatively higher search intent. Additionally, we have observed a growing share of traffic coming from AI over the year (with ChatGPT being a prime example), and this channel has been notably more consistent for blogs related to benchmarks. Queries from AI users looking for benchmarks may also be higher in volume than queries for other, more varied topics.

Insight:

Blog content related to Trends or Memes showed a slight negative linear relationship with website bounce rates. While slight, this relationship was still the strongest among all topics.

Summary: Trends and memes are often highly engaging visually and in terms of topic, which may influence readers to stay on the page. Additionally, blogs discussing trends or memes usually link to related posts, videos, or explanations, naturally encouraging visitors to click through rather than bounce. Current events and cultural moments may also attract users with a naturally higher level of interest.

As an additional point, the results here do not suggest a lack of relevance for the other blog topics. As mentioned before, numerous influences affect the website side of things. For example, although employee advocacy blogs don’t stand out in terms of the most substantial linear relationship with website traffic, they are often our highest-performing content on social media.

Additionally, blogs on Guides and Strategies account for our largest share of total blog posts and likely cover the widest variety of topics, which may explain why they contribute to higher bounce rates overall. A fully layered evaluation is critical for analyzing the overall effectiveness of your blog posts.

Step 3: Specific Blogs That Stand Out

We can further narrow our focus on blog content by evaluating the specific blogs that stood out in terms of website performance. Here are some of the top-performing Random blogs of the year!

1.  2025 Pop Culture Events Social Media Marketers Need to Know

The blog directly addresses a topic of massive interest: pop culture. This naturally attracts a large audience, as people are drawn to discussions about movies, music, events, and trends. Additionally, pages with embedded videos have the potential to rank for more keywords and achieve higher average positions on Google’s SERPs. Overall, this blog ranked highest for us in terms of both the total volume of new users and sessions, which is quite impressive considering its recent publication.

2. Here Are The Latest TikTok Benchmarks for 2024

As we’ve observed in our previous steps for analyzing blog topics, “benchmarking” is likely a strong keyword for SEO, especially in the marketing and social media niches. While “benchmarking” itself is broad, long-tail variants (e.g., “Here Are the Latest TikTok Benchmarks for 2024”) attract more targeted traffic with less competition. Upon consistently evaluating blogs related to these topics, we’ve also noticed their high evergreen impact, as they remain among our highest-performing blogs by traffic even in 2025. Lastly, as previously mentioned, the implications of search traffic coming from AI have significantly benefited these blogs and will likely continue to enhance their performance.

3. Our Monthly Memes and Trends Recaps

Although the August 2024 iteration stood out for its overall performance in hindsight, we consistently find great success with these memes and trend recaps each month. The blog’s blend of timeliness, shareability, and utility makes it a traffic magnet. Memes will likely experience short-term traffic surges but at significantly high magnitudes, driven by the trends themselves. The August iteration's standout performance is a good example of this, as it aligned with highly popular events like “demure,” the Paris Olympics, and the summer season in general.

4. Celebrating National Small Business Week: 8 Social Media Post Ideas

This blog led the way in the Guide or Strategy category of Random blog content. As with some of our other high-performing blogs, timeliness was a key factor, as this post was published in anticipation of National Small Business Week (April 28 - May 4). Other high-performing aspects included elements common in successful blogs, such as the numbered listicle format, content that resonates well with our specific audience, and seasonality. 

Do Blogs Impact Website Performance?

To conclude our first quarterly Random Labs experiment of 2025, blog content can be an effective tactic for potentially elevating overall website performance. 

While the overall cadence of blogs may not strongly correlate with an increase in total website traffic or changes in bounce rates, factors such as timeliness, shareability, and relevance can significantly impact performance on an individual case-by-case basis.

As we have frequently observed with their evergreen success, tailoring and evaluating blogs across all facets—such as social media engagement, audience relevance, and content quality—are essential for long-term success. Focusing solely on increasing website traffic can overlook the importance of creating meaningful, shareable content that resonates with readers and enhances overall brand presence.


Looking to elevate your digital marketing and SEO game? Chat with our team today!

Random Labs: Do Trending Sounds Impact Instagram Reel Performance?

In this month’s iteration of Random Labs, we continue exploring all aspects of Instagram. After evaluating SEO implications, geotagging, and more (check out our previous blogs here), we now analyze the potential impact of audio on Instagram content performance. 

Specifically, do reels that use original audio perform differently from reels that utilize Instagram’s native music features?

The focus of this experiment will be reels, as video content has seen a definitive rise on the platform over the years—almost to the point where Instagram is transforming into a video-dominant platform.

We will separate our comparative data among Instagram reels using one basic factor. Under the profile portion of a posted reel, there is currently a label that indicates the audio being used for the content. We want to compare reels labeled with “Original Audio” against those with named music. The key difference is that original audio is recorded or uploaded by the creator along with the reel, while named music is a licensed track pulled from Instagram’s music library.

Once again, we will evaluate our own content here at Random! The following are examples from a couple of reels that quickly help you identify each type.

Reels with Original Audio

Reels with Overlaid Music

Also, feel free to give us a follow!

When it comes to hypotheses, it’s safe to say that no definitive claim has been made regarding whether original reel audio or native named music performs better on Instagram Reels. There have been arguments in favor of both, with some suggesting one leads to higher engagement and reach. This underscores the importance of analyzing your own content, which we routinely do at Random.

Our Analysis of Instagram Reel Sounds 

This test comparing both types of reels is relatively simple and not overly demanding in terms of statistical methodology. We begin by organizing our dataset, which includes only reels from the last six months. Next, we create a binary outcome variable to indicate whether a reel contains original audio or named music. Finally, we conduct two separate t-tests to analyze differences in reach and engagement.

A t-test is a statistical tool that helps us determine if there's a meaningful difference between two groups. Imagine you're comparing two groups of people - let's say men and women - to see if there's a real difference in their average height. A t-test is like a mathematical referee that decides whether the difference you observe is significant or just a result of random chance. So in our example, we are going to see if the difference in either engagement or reach for both types of reels contains a significant difference. 
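Here is a rough sketch of how those two t-tests could be run in Python; the file name and the "Original Audio" flag column are assumptions, and this version uses Welch's t-test, which does not assume equal variances between the two groups.

```python
import pandas as pd
from scipy import stats

reels = pd.read_csv("reels_last_6_months.csv")  # hypothetical export

# Assumed binary flag: 1 if the reel uses original audio, 0 if it uses named music.
original = reels[reels["Original Audio"] == 1]
named = reels[reels["Original Audio"] == 0]

for metric in ["Engagement", "Reach"]:
    _, p_value = stats.ttest_ind(original[metric], named[metric], equal_var=False)
    print(f"{metric}: p-value = {p_value:.4f}")
```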

We can formulate the following table of p-values after conducting our t-tests:

| | Engagement | Reach |
|---|---|---|
| Reels w/ Original Audio | 8.00 | 142.87 |
| Reels w/ Named Music | 10.08 | 149.67 |
| p-value | 0.1449 > 0.05 | 0.8681 > 0.05 |
| Conclusion | No significant difference | No significant difference |

We once again tested at a significance level (α) of 0.05, meaning a test result is considered significant if its p-value falls under this threshold. This value sets the tolerance for what we consider a meaningful result (see our previous blog on Instagram Geotagging for a quick and free statistics lesson and application).

Conclusion: What Type of Instagram Sound Should You Use?

Based on our results, there were no statistically significant differences (at a 95% confidence level) between reels with original audio and reels with named music in their effects on engagement or reach. This means we did not find any significant impact on post performance when using one type of audio over the other for Instagram reels.

This result is what we expected for this analysis, primarily because of how many other dimensions there are in evaluating reels as a content type. There are likely other aspects of a reel that contribute to post performance at a higher magnitude, such as visual appeal, captions, timeliness, and trends.

This also does not mean that using original audio or Instagram's audio has zero impact. Leverage can certainly vary based on how things align for the overall post. Brand-specific or voiceover content may find better leverage with original audio while trend-based or emotional content may very well work better with Instagram’s music. There's no one-size-fits-all approach for each individual piece of content.

Ultimately, the choice between original audio and named music should depend on the specific content and goals of each reel. Experimenting with both and analyzing performance can help determine what works best for your audience. It's important to make assessments based on evaluation that best tailors to your profile and leverage your own data. 

Need data-driven solutions to grow your business with Instagram? Chat with our team today! 

Random Labs: Does Instagram Caption Length Affect Performance?

As we continue navigating the turbulent social media landscape of 2025, we embark on another Random Labs experiment—this time analyzing how caption length might impact Instagram post performance.

In our previous experiment, we analyzed the impact of using hashtags versus keywords on this platform (check out this blog to see the results). We continue our statistical adventures by analyzing another component of Instagram captions: length. With growing discussions about the increasing impact of SEO on this platform over the years, this analysis will serve as a great supplement to the overall topic.

How We Tested Instagram Caption Length

To analyze the impact of caption length on Instagram post performance, we first need to manually create a variable representing the size of a post’s caption.

Instagram, through Meta Business Suite, does not include such variables in its exportable data. Instead, this data typically consists of standard social media metrics, such as engagement, reach, and more. 

Fortunately, most datasets from each platform almost always include a column listing the caption associated with each post. With a little spreadsheet magic, we can create a new column that counts the character length of the caption and add it to our statistical model.
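For example, a caption-length column could be added with a couple of lines of pandas; the file name and the "Description" caption column are assumptions, and the spreadsheet equivalent would simply be =LEN() dragged down a new column.

```python
import pandas as pd

posts = pd.read_csv("instagram_export.csv")  # hypothetical export file

# "Description" stands in for whichever column holds the caption in your export.
posts["Caption Length"] = posts["Description"].fillna("").str.len()
```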

We will be conducting this experiment with some statistical modeling, specifically a Multiple Linear Regression (MLR) model. 

A simple example for this methodology would be: imagine you're trying to guess the price of a house. You know that bigger houses usually cost more, but that's not the only thing that matters. The neighborhood, the number of bedrooms, and how old the house is all play a role too. MLR is like a smart calculator that looks at all these factors together to make a good guess about the house price.

In our example, we are testing to see if the factor regarding caption length will statistically have an impact on content performance (engagement or reach). 

In our example today, to add more context to the model, we also include a variable indicating whether each post contained a hashtag. This can give us more insight into whether there are any specific sweet spots for caption length, with or without hashtags.
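A minimal sketch of such a model, using statsmodels and the same assumed columns as above (the hashtag flag here is simply whether the caption contains a "#"):

```python
import pandas as pd
import statsmodels.api as sm

posts = pd.read_csv("instagram_export.csv")  # hypothetical export file

posts["Caption Length"] = posts["Description"].fillna("").str.len()
posts["Has Hashtag"] = posts["Description"].str.contains("#", na=False).astype(int)

# One regression per outcome: reach and engagement.
X = sm.add_constant(posts[["Caption Length", "Has Hashtag"]])
for outcome in ["Reach", "Engagement"]:
    model = sm.OLS(posts[outcome], X).fit()
    print(outcome, "adjusted R-squared:", round(model.rsquared_adj, 4))
    print(model.pvalues.round(4))
```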

What is the Best Length for Instagram Captions?

Here are the conclusions for both of our models:

| Variable of Interest | Adjusted R-Squared | Caption Length Significant? |
|---|---|---|
| Reach | 0.0222 | No |
| Engagement | 0.0515 | No |

Interpretation

Today, Random Labs introduces a new statistical measurement: the Adjusted R-Squared. This metric is frequently used with MLRs to provide a more accurate assessment of a model's predictive power than a regular R-Squared.
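For reference, the standard formula, where n is the number of observations, p the number of predictors, and R^2 the regular value, is:

```latex
\bar{R}^2 = 1 - (1 - R^2)\,\frac{n - 1}{n - p - 1}
```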

Let’s use another fun example to explain this value:

Imagine building a model that predicts students' test scores from factors such as hours studied and class attendance. A regular R-squared never decreases when you add more factors, even useless ones, whereas the adjusted version penalizes factors that add little predictive value. So, if adding “what the student had for breakfast” doesn't really help predict test scores, the adjusted R-squared would decrease, telling us it's probably not worth including that factor. This is key for our social media example, which innately includes various potential performance factors.

Based on our output, we can see a very low Adjusted R-Squared value for both models that include caption length as a predictor of a post's reach or engagement. The low values for both tests suggest that, after accounting for all of the caption-related variables, including caption length, the models explain only about 2.2% and 5.1% of the variation in a post's reach and engagement, respectively.

We can confidently conclude that a post’s reach and engagement are more likely affected by other factors.

How to Optimize Your Instagram Captions

These results aren’t particularly surprising to us, as we understand the dynamic nature of social media content. A post’s performance on most platforms is ultimately difficult to predict, although we have identified significant factors in the past. Specifically, when it comes to captions, we are likely looking at the keywords themselves as more of an impactful component as opposed to the overall length of the caption: quality over quantity.

We also continue to emphasize in each of these experiments that all social media profiles are different and that drawing conclusions tailored to your profile’s needs is key.

Need a data-backed strategy to grow your business? Chat with our team today!

Random Labs: Are Keywords or Hashtags Better for Reach on Instagram?

As we turn the page into 2025, we decided it was a great opportunity to open the year with another Random Labs experiment. This time, we focused our analysis on Instagram, a rapidly changing social media platform, to understand the impact that keywords or hashtags may or may not have on Instagram reach.

Do Hashtags Actually Increase Reach on Instagram?

There have been frequent arguments throughout Instagram's lifespan associating the use of hashtags with improved post exposure. If this was true, we would expect higher reach among these posts. However, questions about the effectiveness of this conventional aspect of social media posting have been increasing in recent times.

One rebuttal is that caption-related performance is attributable more to the keywords themselves than strictly to hashtags. Adam Mosseri, the Head of Instagram, has previously commented on how the platform's approach to user experience and content discovery has evolved, a role that may have seemed exclusive to hashtags in the past.

How We Tested the Effectiveness of Keywords

At Random, curiosity—one of our core values—often drives experimentation. With this in mind, we tested different caption strategies on our own Instagram content to evaluate this hypothesis. Below is an example of what the differences in captions looked like.

Keyword-Rich Instagram Caption:

[Embedded Instagram post from That Random Agency (@thatrandomagency)]

Hashtag-Rich Instagram Caption:

[Embedded Instagram post from That Random Agency (@thatrandomagency)]

To maintain an adequate sample size for this hypothesis testing, we also included zero-hashtag content that did not contain additional keywords. 

Performance metrics were analyzed on a per-month basis and compared against similar content types to further narrow down performance within the same time frame and context. For example, Instagram reach for reels averages significantly higher than for single-image posts, so we compared reels without hashtags to reels with hashtags. There’s a lot of random (no pun intended) noise in the world of social media, so it’s essential that we fine-tune our testing.

The Results: Are Keywords or Hashtags Better for Reach?

We utilized 8 of the 12 calendar months to create individual monthly comparisons of averages between no-hashtag content and hashtag content. The content we compared them to was pulled from a random sample of similar content within the same month and cleaned of any outliers or viral content. The following table summarizes the average results for each month:

| Month | Reach (No Hashtags) | Reach (Hashtags) | Difference |
|---|---|---|---|
| February | 246 | 83 | +196.4% |
| April | 446 | 44 | +913.6% |
| June | 144 | 74 | +94.6% |
| August | 114 | 59 | +93.2% |
| September | 111 | 68 | +68.2% |
| October | 152 | 99 | +53.5% |
| November | 457 | 67 | +582.1% |
| December | 56 | 139 | -59.7% |

The table contains columns representing the months where the experiments were held, the average post reach of content with or without hashtags, and the percent difference between these averages. 

Interestingly, in 7 out of the 8 months tested, content without hashtags had a higher average reach, and in several of those months the differences in averages were in the triple-digit percentage range. In 6 of those 7 months, the no-hashtag posts also ranked as the top-performing posts for their respective months.

What Is the Significance of Our Results?

Before drawing any definitive conclusions, we need to establish some basic statistical boundaries. The majority of the tested content consists of reels, which inherently show greater variation in reach compared to other content. Other factors likely have a greater influence, including the topic, quality, timing, and various external elements. 

Additionally, this is a newer experiment, so a larger sample of posts still needs to be evaluated. However, the initial results align with the platform's general trend of decreasing reliance on hashtags.

So, how will hashtags be utilized on this platform in 2025? They will continue to serve their primary purpose of helping users manually discover content within specific categories. However, overloading posts with hashtags is unlikely to significantly impact reach and may even lead to negative returns. As we’ve often concluded in previous Random Labs sessions, every profile is unique. Be sure to base your evaluations on your own data.


Need a data-backed strategy to grow your digital results? Chat with our team today!

Random Labs: Does Geo-Tagging on Instagram Improve Performance?

Geotagging has long been a popular feature on Instagram, allowing brands and users to tag locations on their Instagram posts, thus enabling their followers to see where the content was created.

The general understanding is that enabling this feature on your Instagram content can potentially (and positively) impact the performance of said post. Some commonly cited benefits include higher engagement, increased reach, and improved discoverability of content. 

But we wondered…is this true? In this edition of Random Labs, we will be doing a basic comparison of metrics between content with geotagging enabled and content without geotagging to validate some of these hypotheses. 

What is a Geo-Tagged Instagram Post?

We will be evaluating organic content performance for Instagram from our awesome partners over at Trasca & Co Eatery and Ponte Vedra Tap Room. We tallied all the content posted on these Instagram channels over 2024, using geo-tagging as a separator. The following images show examples of Instagram posts that feature this:

[Embedded Instagram post from Trasca & Co Eatery (@trascaandco)]

Our Analysis of Geo-Tagged Instagram Posts

For the comparative analysis, we compiled a sample of approximately 150 posts from 2024 and compared the descriptive statistics of post metrics, using a simple identifier to separate the data: posts with geotagging versus those without. As it is generally difficult to obtain identical samples of both types of content throughout the year, we will use a similar method of comparing uneven samples, as we did in our previous Random Labs analysis on the performance impact of faces in social media content.

The following table compares the average metrics between the two types of Instagram content, as well as the test results for each unique post metric:

Once again, in this Random Labs analysis, we use hypothesis testing to statistically determine whether any Instagram performance metrics differ between content with or without geotags. 

A Quick Statistics Lesson

The sample size for each post type is denoted by n, a widely used standard notation in statistics. We also report the averages for both types of content (with and without geotags) along with the corresponding p-value from the test for each specific metric. 

A p-value helps you understand whether the result of an experiment happened by random chance or by something meaningful—geotagging, in our example. Furthermore, we need a way to decide whether the p-value is small enough to consider the result significant. 

This is where the alpha (α) level comes in. This alpha level is a threshold we set before the test to decide when we will be convinced that those results were meaningful. 0.05 is a commonly practiced threshold, which means in our example that we are willing to accept a 5% chance that our results happened by random chance. If our p-value is less than this threshold, we conclude a significant test for the metric. 

Should You Geo-Tag Your Instagram Posts?

Based on our tests, we did observe a few statistically significant impacts of geotagging.

We found that Instagram posts featuring geotags did not significantly differ in terms of comments, saves, reach, or impressions compared to those without geotags. This may be due to a few hypotheses or constraints. Geotagging represents only one difference between the content being compared. Higher-weighted engagements, such as comments and saves, may be more influenced by factors like visuals, messaging, content type, seasonality, etc.

Additionally, the inclusion of geotagging alone may not result in increased post reach. But we believe this could vary depending on the specific location. Highly popular locations may yield different results.

On the flip side, we did observe significant results when comparing geotagged posts to non-geotagged posts in terms of likes. There was enough evidence to suggest that geotagging has a meaningful impact on the number of likes a post can receive. The difference in likes was also strong enough to conclude significance in overall engagement, as this metric typically has the highest volume of interactions compared to other engagement types. 

People may be more inclined to like posts related to places they are familiar with, as users are more likely to engage with content from their city or favorite spots. Additionally, there may be a psychological factor at play. Geotagging adds a sense of authenticity, making posts from real, specific locations feel more genuine.

Conclusion

Overall, the impact of geotagging may vary depending on your industry, location, audience, and other factors. However, it would be premature to assume that geotagging cannot have an impact on your content. It is important to frequently leverage the social media data available to your business. 


Need a data-backed strategy to grow your business? Chat with our team today!

Random Labs: Do Faces Boost Social Media Post Performance?

It’s often claimed that showing real people in social media posts helps to humanize a brand and potentially garner more engagement. But can the simple act of showing a face in a post actually boost post performance?

In this month’s edition of Random Labs, we explored the impact this basic human element has on post performance across various platforms. 

Our Methods to Test Post Performance

As we've reiterated in past experiments, modern social media platforms offer users a wealth of data on their own content, providing great opportunities to fine-tune and optimize strategies. (Check out our previous Random Labs blog on TikTok video length performance.) We will analyze the posts we have shared in 2024 so far on our own agency’s social media platforms.

In our experiment, we focused primarily on image content. 

Short-form videos and Reels have become highly effective content formats across most platforms. We've observed that a large percentage of this type of content already features people within the posts themselves. 

We will utilize multiple hypothesis testing across three major platforms (Facebook, Instagram, and LinkedIn) to answer a series of questions: Is there a statistically significant difference between posts with faces and those without? If so, what metrics are primarily affected?
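A rough sketch of this kind of per-metric testing for one platform, assuming a hypothetical export with a manually added "Has Face" flag (file and column names are assumptions):

```python
import pandas as pd
from scipy import stats

facebook = pd.read_csv("facebook_posts_2024.csv")  # hypothetical export

faces = facebook[facebook["Has Face"] == 1]
no_faces = facebook[facebook["Has Face"] == 0]

for metric in ["Engagement", "Likes", "Comments", "Shares", "Impressions"]:
    _, p_value = stats.ttest_ind(faces[metric], no_faces[metric], equal_var=False)
    verdict = "significant" if p_value < 0.05 else "not significant"
    print(f"{metric}: p = {p_value:.3f} ({verdict})")
```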

How We Set Up Our Experiment to Test Post Performance

Before we dive in and answer these questions, we applied the same set of data assumptions and preparation steps across all platforms.

Let’s take a look at some Random content!

How Do Posts with Faces Perform on Facebook?

| Metric | Content w/ Faces | Content w/ No Faces |
|---|---|---|
| Average Engagement | 5.12 | 4.05 |
| Likes | 2.52 | 2.50 |
| Comments | 0.20 | 0.06 |
| Shares | 0.08 | 0.03 |
| Average Impressions | 38.08 | 32.47 |

Hypothesis Testing Results:
Engagement: No statistically significant difference between the averages of the two datasets at the 95% confidence level.
(P-value: 0.402 > 0.05)
Likes: No statistically significant difference between the averages of the two datasets at the 95% confidence level.
(P-value: 0.323 > 0.05)
Comments: No statistically significant difference between the averages of the two datasets at the 95% confidence level.
(P-value: 0.245 > 0.05)
Shares: No statistically significant difference between the averages of the two datasets at the 95% confidence level.
(P-value: 0.195 > 0.05)
Impressions: No statistically significant difference between the averages of the two datasets at the 95% confidence level.
(P-value: 0.127 > 0.05)
Methodology explained: In our statistical method, think of the p-value as a measure of surprise. It shows how likely your data could occur by random chance if there’s no real effect. The alpha level (usually 0.05) is the cutoff we set to decide how much surprise we're okay with. If the p-value is less than alpha, it means the result is surprising enough to believe something is happening (significant). If it’s higher, we assume it's just random chance (not significant).

What Do These Results Mean?

In summary, Facebook differed from the other two platforms we analyzed by showing no statistically significant differences in metrics between the two content types. Whether posts on this platform featured faces or not, there is not enough evidence to suggest that the presence of a face has any impact on post performance. Although the averages may seem to show some differences at first glance, hypothesis testing reveals that these differences are not statistically meaningful or consistent.

We have some external hypotheses that may potentially support these results, including the year-over-year trend of a general decline in organic engagement on the platform. During this period, Facebook has shifted its focus significantly towards an algorithm centered around advertising, and paid content has disproportionately outperformed organic content based on our observations. 

Additionally, we are just one account within a single industry. These results can vary depending on factors such as objectives, follower size, and more.

How Do Posts with Faces Perform on Instagram?

| Metric | Content w/ Faces | Content w/ No Faces |
|---|---|---|
| Average Engagement | 22.45 | 12.53 |
| Likes | 19.94 | 11.58 |
| Comments | 1.61 | 0.28 |
| Average Impressions | 153.61 | 81.88 |

Hypothesis Testing Results:
Engagement: Statistically significant difference between the averages of the two datasets at the 95% confidence level.
(P-value: 0.006 < 0.05)
Likes: Statistically significant difference between the averages of the two datasets at the 95% confidence level.
(P-value: 0.0096 < 0.05)
Comments: Statistically significant difference between the averages of the two datasets at the 95% confidence level.
(P-value: 0.0005 < 0.05)
Impressions: Statistically significant difference between the averages of the two datasets at the 95% confidence level.
(P-value: 0.0062 < 0.05)
Methodology explained: When a p-value is really close to zero, it suggests that the difference between the averages is highly unlikely to be due to random chance. In simpler terms, it means the data is showing a very strong signal that there’s a real difference between the two groups (face vs. no face content) you’re comparing. The closer the p-value is to zero, the more confident we can be that the observed difference is meaningful and not just a fluke.

On Instagram, our statistical tests showed significant differences in average metrics between content types. Nearly every test indicated a strong bias toward content featuring faces, with those posts consistently outperforming posts without faces, as confirmed by the statistical analyses.

This result was easy to anticipate considering the nature of Instagram, a highly visual platform. On Instagram, users are more drawn to emotionally engaging, personal, and relatable content, which faces provide. Faces capture attention more effectively. As additional support beyond the content we tested, the majority of high-performing Reels also tended to feature faces—this was so evident on our end that we didn’t feel the need to specifically test this content type.

How Do Posts with Faces Perform on LinkedIn?

| Metric | Content w/ Faces | Content w/ No Faces |
|---|---|---|
| Average Engagement | 17.28 | 11.63 |
| Likes | 5.44 | 3.70 |
| Clicks | 11.44 | 7.75 |
| Average Impressions | 121.25 | 94.22 |

Hypothesis Testing Results:
Engagement: Statistically significant difference between the averages of the two datasets at the 95% confidence level.
(P-value: 0.014 < 0.05)
Likes: Statistically significant difference between the averages of the two datasets at the 95% confidence level.
(P-value: 0.012 < 0.05)
Clicks: No statistically significant difference between the averages of the two datasets at the 95% confidence level.
(P-value: 0.063 > 0.05)
Impressions: Statistically significant difference between the averages of the two datasets at the 95% confidence level.
(P-value: 0.037 < 0.05)
Methodology explained: Since we generally reached the same conclusions for likes and shares in tandem on the previous channels, here we also test clicks, an engagement type that is much more prevalent on LinkedIn.

In terms of total engagement and impressions, we observe similar patterns on LinkedIn as on Instagram. The average overall engagement, including likes and impressions, is significantly higher for content featuring faces, with a high level of statistical confidence. Interestingly for post clicks, a highly prevalent engagement type for LinkedIn, there was no significant difference in the averages between both content types. 

Clicks often represent a more intentional action, such as wanting to learn more or visit a website. Users might engage with content featuring faces by liking or viewing it without necessarily clicking through, especially if they find it visually appealing but not informative enough to warrant further action. LinkedIn users might be more likely to click on content with a strong call to action (CTA) or business-related context, which may or may not always include faces.

Bottom Line: Include More Faces in Your Posts

In summary, both Instagram and LinkedIn demonstrated strong performance across multiple metrics when content featured faces, with Instagram showing a more pronounced disparity. This raises several potential explanations, such as an algorithmic boost for this type of content, increased visual appeal, or a heightened sense of human connection.

Try testing more posts on your Instagram and LinkedIn that feature real people–whether that be your team members, customers, or clients–to start increasing your post performance.

As a reminder, results can vary across different industries and strategies, highlighting the importance of refining a strategy tailored specifically to your business needs.

Want us to bring statistically backed results for your strategy? Send us a message!

Random Labs: Does Video Length on TikTok Matter?

With TikTok's rapid growth and the countless videos uploaded daily, one question remains: Does video length matter? Is a short, snappy clip more effective than a longer, more detailed one? 

Like most major platforms, TikTok empowers users to analyze their own performance data directly, providing valuable insights into their marketing strategies. At That Random Agency, we leverage this capability daily to optimize our campaigns. 

So we were curious - could we use performance data to analyze what video lengths on TikTok provide the best metrics?

Let’s put on our lab coats and dive into our first Random experiment.

Our Methods to Determine Best TikTok Length

Video length has been a prevalent topic of discussion for TikTok videos over the years. Specifically, is length a variable that may impact content performance?

The general recommendation? Shorter videos tend to perform better in terms of views. 

However, the issue with general recommendations is that they often pool results across a broad range of accounts. These vary significantly in terms of audience type, audience size, content strategy, industry, and more. 

We have evaluated how consistently results vary across different platforms based on individual accounts. To better understand this general trend, we will compartmentalize our analysis of TikTok video length by industry. This will help us assess video performance more accurately. 

Here are five different industries, sampled anonymously from clients who have seen some marketing success on the platform.

(Disclaimer: These results are samples of larger industries and are not meant to guarantee improved performance for your personal accounts.)

Best TikTok Length for Marketing Brands

We’ll begin with an industry which we happen to be rather passionate about: marketing. We randomly selected a large sample of our own TikTok video performance data and chose to eliminate extreme outliers due to the platform’s viral tendencies. We generated the following output when plotting average video view performance in relation to video length (in seconds) at different intervals.
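A sketch of the underlying grouping, assuming a hypothetical export with a video length column (in seconds) and a views column:

```python
import pandas as pd

videos = pd.read_csv("tiktok_videos.csv")  # hypothetical export

# Bucket video length into 10-second intervals and compare average views per bucket.
videos["Length Interval"] = pd.cut(videos["Video Length (s)"], bins=range(0, 130, 10))
avg_views = videos.groupby("Length Interval", observed=True)["Video Views"].mean()
print(avg_views.round(0))
```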

Nothing in this output came as much of a surprise to us. A video length of 30 seconds is a recommendation we consistently hear. Most of our video analytics from the year to date show a central tendency of video length between 30 and 90 seconds. Additionally, the relatively higher average for videos between 0 and 10 seconds is a key observation for future analysis.

Best TikTok Length for Food Brands

We output a similar chart utilizing video performance for TikTok accounts within the food or restaurant industry. 

We observed a similar pattern in the video data for this industry, noting that the average number of video views increases beyond the 30-second threshold. While the maximum interval in seconds can still be classified as a “short” video at around 80 seconds, we noticed a larger increase in average video views even beyond a minute long. We can consider the storytelling nature of food preparation that could keep viewers engaged as a potential element for this pattern. 

Best TikTok Length for Keynote Speakers

Another industry in which we are well-versed is the keynote speaking industry. If you're specifically involved in the keynote speaking business, feel free to check out our upcoming SPEAKR  event to amplify your speaking career! 

In terms of video length to video views performance in this industry, we are once again observing consistent behavior with a moving average once videos exceed 30 seconds. However, there is a significantly higher increase in average video views for videos within the 0 to 10-second range. This variation may be attributed to the nature of keynote speaking videos, which often include short, memorable quotes or soundbites in brief clips, or longer videos that expand on compelling or educational topics.

Best TikTok Length for Automotive Brands

We also work with clients in the automotive industry. Some immediate observations when evaluating video performance data for this category reveal a tendency for videos to skew towards shorter lengths. This is likely due to the nature of the industry. Automotive videos are often action-focused and visually dynamic, making shorter content more common and effective. 

We account for the natural tendency toward shorter video lengths in this industry by analyzing 5-second intervals. Interestingly, a more positive linear trend emerges as videos approach 30 seconds, likely due to these shorter intervals. Videos reaching at least 30 seconds continue to mark a significant threshold for increased average views.

Best TikTok Lengths for Health Brands

The final industry we will evaluate in this analysis is the health industry, specifically the non-profit sector. We see a bit more variability in video views in this chart due to differences in video quantity as well as audience size.

Relative to the other industries we have analyzed so far, we can still see a consistent pattern around the 30-second threshold for video length. Including the 30-to-40-second interval itself, three of the top intervals by average video views came from videos that extended beyond this half-minute threshold. Content-wise, videos in this industry can combine educational, attention-grabbing research with a versatile range of visuals.

Summary

After evaluating TikTok video view performance across the five different industries, we can draw some insights.

Most notably, for all industries, average video views were significantly higher for video intervals longer than 30 seconds as opposed to the first 30 seconds. 30 to 90 seconds seems like a general sweet spot based on our data. This aligns with multiple general recommendations that we have observed. 

In a few industries, videos shorter than 10 seconds had a notably high average view count. This could be due to a systematic reason: video replays are automatic and contribute to view metrics. As a result, videos of this length are more likely to loop repeatedly, boosting their view counts.

In summary, although video length on TikTok showed some significance in view differences, it shouldn't take priority over other important aspects such as crafting engaging content, relevance, sounds, storytelling, and more. Additionally, like other major platforms, TikTok is constantly evolving, with longer-form content and new content types (such as carousels) emerging. Staying updated on new trends, alongside analyzing the best performance strategies tailored to your account, will be crucial.

Need a data-backed strategy to grow your TikTok? Chat with our team today!