Today’s episode is inspired by several conversations I’ve had recently with clients about how they’re a tiny bit concerned that their content is getting low engagement. And because of that, they are also a tiny bit concerned that things aren’t working, and hence that they need to do more or try something else.
I have a LOT to say about this topic and this episode is going to get a bit technical. My goal with today’s episode is to offer a different perspective and help you view your engagement and metrics in a more helpful light.
Before we dive in, I want to invite you to come join the next cohort of The Thought Leader Club, which is starting on March 11, 2024. In The Thought Leader Club, you will become a thought leader both internally and externally, learning to build a body of work that not only lets you become known for something but also magnetizes clients & opportunities to you. It is undoubtedly one of the most powerful skills you can learn in your lifetime as an entrepreneur.
Here’s the thing. Maybe one of your 3-year dreams is to create speaking opportunities. Or maybe your dream is to create a group coaching program that attracts best-fit clients from all over the world. Or maybe you want to have a popular podcast where you bring on super cool guests who are genuinely honored to be on your show. Or maybe you want to build community and host in-person events.
Whatever your dreams are for the long term, the common thread that underlies all of your big, audacious dreams is that you can actually start laying down the foundation for each of them, step by step, RIGHT NOW.
And how exactly can you do this? You can do so by starting to build a body of work that lets you become known for your unique thought leadership, story, and how amazing you are at what you do. This is precisely what we will do inside The Thought Leader Club.
Again, our next cohort will start on March 11, 2024. For the exact details of the program, you can hop on over to cheryltheory.com/program. This is also where you can submit your application to join the next round of The Thought Leader Club and book a discovery call for us to discuss how exactly this program can support your 2024 goals, as well as how building a body of work and becoming known for your thought leadership fits into your 3-year vision and goals.
I want to start by first acknowledging that there’s a lot of dialogue in the online space about how you need an engaged audience and a lot of advice on how to get more likes, how to go viral on LinkedIn, and so on. Logically, it makes sense. More numbers = more chances of something working, right?
First and foremost, it’s important to recognize that Likes do not equal clients. Comments do not equal clients. Followers do not equal clients. Clients becoming clients is the only actual indicator of clients.
For instance, I have a very quiet audience, and some would say that I have low engagement, but I’d say this has not been a hindrance to growing my business and brand.
At the moment, my Instagram story views typically range between 60 to 100 views a day. I typically don’t get more than 30 likes per Instagram post.
My recent podcast episodes typically get fewer than 150 listens in the first few months. And so on. Overall, my numbers aren’t astonishing. By most people’s standards, they’re average.
That being said, some people with similar numbers and social media analytics to mine might think that they’re AWFUL. On the other hand, someone else might look at the same numbers and not think they’re an issue whatsoever.
Meaning: different individuals can look at the same cold, hard numbers and have completely different interpretations of them.
Speaking of which, I want to share a personal life example of this:
My cat, Nugget, started limping about 2 months ago. At first, I thought she had just sprained a leg. But after a few weeks, I could see that her hind legs simply were not getting any better, so I brought her to a vet. The vet took an x-ray and interpreted the results as showing a ‘hairline fracture’. The vet then prescribed some pain medication and said that Nugget’s fracture should heal within a few weeks. Well, a few weeks have passed and honestly, I didn’t see any improvement.
So, just yesterday, I actually brought Nugget to another vet to seek a second opinion. When this second vet looked at the original x-ray scan from the first vet, he said that he didn’t think the original x-ray showed any signs of a fracture. That was shocking to me, because the first vet was very confident that the scan showed a fracture.
The second vet then did further x-rays just to double-check. And based on this new set of x-rays, it does seem that there is NO fracture whatsoever. Now Nugget is going to be referred to a specialist, and we will figure out why she’s limping, dragging her hind legs around, and always tipping over as she walks.
But the point I want to highlight here is that even with COLD, HARD DATA, like an x-ray, there can still be different interpretations. Your social media analytics are no different.
Because what often happens is that people will look at their analytics and immediately think something is wrong with their content or marketing or messaging, especially when they see that people aren’t liking their posts.
Do also remember that likes and comments have no correlation with whether someone is thinking of working with you and whether or not they actually work with you. And we’re about to dive into specific reasons why this is the case.
Then, I want to wrap up the episode by discussing more helpful indicators of your progress.
My hope is that by the end of the episode, you won’t be as shaken up whenever you look at your social media analytics, and that you’ll leave this episode with another set of tools you can use to evaluate what’s working and what’s not working.
First, let’s have a quick lesson on data.
A lot of people think they’re being analytical and strategic by “looking at their data.”
But honestly, number one, most of us don’t have enough good-quality data to draw proper conclusions from our Instagram analytics.
And number two, too many people are trying to interpret data without understanding the statistics needed to interpret it accurately.
Regarding point number one: for most of us solopreneurs building our businesses and brands using social media or tools like email and podcasting, we truly don’t have a quality dataset from which to know whether something is statistically significant or not.
To explain why sample size matters, let me cite a passage from the following website: https://www.qualtrics.com/au/experience-management/research/determine-sample-size/
When you survey a large population of respondents, you’re interested in the entire group, but it’s not realistically possible to get answers or results from absolutely everyone. So you take a random sample of individuals which represents the population as a whole.
The size of the sample is very important for getting accurate, statistically significant results and running your study successfully.
If your sample is too small, you may include a disproportionate number of individuals which are outliers and anomalies. These skew the results and you don’t get a fair picture of the whole population.
If you’re going to look at, let’s say, your Instagram analytics, then you first need to make sure you have a large enough audience size, AKA your sample size.
But before you can calculate a sample size, you need to determine a few things about the target population and the level of accuracy you need. And one of the most important variables you need to consider is population size.
How many people are you talking about in total? To find this out, you need to be clear about who does and doesn’t fit into your group. For example, if you want to know about dog owners, you’ll include everyone who has at some point owned at least one dog. (You may include or exclude those who owned a dog in the past, depending on your research goals.) Don’t worry if you’re unable to calculate the exact number. It’s common to have an unknown number or an estimated range.
As a math example, let’s say that across the entire world, there are a total of 100,000 human beings who match the profile of your ideal client. To draw conclusions at a 95% confidence level with a 5% margin of error, you’d need an audience of only 383 people. However, these 383 must actually fit the profile of your ideal client.
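If you want to sanity-check that 383 figure yourself, here’s a minimal Python sketch of the standard sample size calculation (Cochran’s formula with a finite population correction). The population size, confidence level, and margin of error are simply the example numbers from above.

```python
import math

def required_sample_size(population, confidence_z=1.96, margin_of_error=0.05, p=0.5):
    """Cochran's sample size formula with a finite population correction.

    confidence_z=1.96 corresponds to a 95% confidence level, and
    p=0.5 is the most conservative assumption (it maximizes the sample needed).
    """
    # Sample size for an effectively infinite population:
    n0 = (confidence_z ** 2) * p * (1 - p) / (margin_of_error ** 2)
    # Adjust downward because the population is finite:
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)

# 100,000 people worldwide who match your ideal client profile:
print(required_sample_size(100_000))  # -> 383
```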
383 people might not seem like that big of a number. Many of us might have more than 383 followers on Instagram, for example. But the caveat here is that your audience MUST be 383 IDEAL CLIENTS. And that’s where most of us miss the mark.
Because even if we have, say, 500 or 1,000 followers on Instagram, it’s most likely that the majority of those people will never become paying clients, and hence it’s questionable whether they’re even ideal clients in the first place.
In a nutshell, you need a sufficiently large sample size of data in order to be able to draw accurate, helpful conclusions from that data. And you need this sample/audience to be ACTUALLY representative of the population you want to reach.
This means that ideally, you’d have a large enough audience filled with ideal clients so that when you look at your Instagram analytics, for example, you can actually draw accurate conclusions.
Some large organizations may have access to datasets that are big enough and of high enough quality. But for us as solopreneurs, that’s unlikely to be the case.
Next, I want to have a discussion about data reliability and data validity.
To help me do this, I’m going to cite a lot of paragraphs from the following website: https://statisticsbyjim.com/basics/reliability-vs-validity/
How we’ll go about this is I’ll first read a few passages from the website, and then I’ll share my translation of what each means, as best I can, in layman’s terms.
For data to be good enough to allow you to draw meaningful conclusions from a research study, they must be reliable and valid. What are the properties of good measurements? In a nutshell, reliability relates to the consistency of measures, and validity addresses whether the measurements are quantifying the correct attribute.
Reliability refers to the consistency of the measure. High reliability indicates that the measurement system produces similar results under the same conditions. If you measure the same item or person multiple times, you want to obtain comparable values. They are reproducible.
If you take measurements multiple times and obtain very different values, your data are unreliable. Numbers are meaningless if repeated measures do not produce similar values. What’s the correct value? No one knows! This inconsistency hampers your ability to draw conclusions and understand relationships.
Suppose you have a bathroom scale that displays very inconsistent results from one time to the next. It’s very unreliable. It would be hard to use your scale to determine your correct weight and to know whether you are losing weight.
Cheryl’s translation: Here’s an example – Let’s say you post on Monday morning. Then, two days later, you post the exact same thing on Wednesday afternoon. Will you expect to get roughly the same results? (Ex: likes, comments, impressions, reach, etc.) Likely no.
There are so many possible reasons. Maybe someone didn’t even check their phone. Maybe they saw it on Monday and didn’t bother to like it on Wednesday. Or maybe they didn’t bother to like it on Monday but now they’re like, “Hmm, I actually like it the second time I’m seeing it.” Etc, etc, etc.
This example alone already demonstrates the ‘inconsistency’ of social media data.
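To make that inconsistency concrete, here’s a purely illustrative toy simulation in Python (not real data): assume the exact same post would earn 20 likes on average, and watch how much the observed count bounces around from one posting to the next purely by chance. The follower count and like probability are made-up numbers.

```python
import random

random.seed(1)  # fixed seed so the toy example is reproducible

# Toy model: 200 followers each independently see and like the post
# with probability 0.10, so the "true" expected count is 20 likes.
def publish_once(followers=200, like_prob=0.10):
    return sum(random.random() < like_prob for _ in range(followers))

# "Repost" the identical content ten times under identical conditions:
counts = [publish_once() for _ in range(10)]
print(counts)                    # same post, noticeably different counts each time
print(min(counts), max(counts))  # the spread can easily be 10+ likes wide
```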
Inadequate data collection procedures and low-quality or defective data collection tools can produce unreliable data. Additionally, some characteristics are more challenging to measure reliably. For example, the length of an object is concrete. On the other hand, a psychological construct, such as conscientiousness, depression, and self-esteem, can be trickier to measure reliably. When assessing studies, evaluate data collection methodologies and consider whether any issues undermine their reliability.
Cheryl’s translation: For example, if you are analyzing a set of data in a research or professional setting, you will need to “clean up” and validate the dataset before you can perform any data analysis on it. This includes removing irrelevant, duplicate, incomplete, or inaccurate data points that will skew your results. It also includes making sure the data meets the specific criteria you’ve set for your goals.
One of the biggest issues with social media analytics, in my opinion, is that your audience/follower count is filled with NON-IDEAL CLIENTS (ex: fake accounts, people who follow you because they know you in person but will never ever buy from you, etc).
These non-ideal clients who either 1) follow you/are in your audience count or 2) engage with your content (ex: like your post) will therefore “skew” your data.
For example, if you get 100 likes from 100 non-ideal clients, a layperson will immediately think, “WOW, THIS POST IS DOING SO WELL I NEED TO DO MORE OF THIS.” But is this even “good data” if it’s not coming from ideal clients? Or would a post that gets 5 likes from 5 people who eventually buy from you, be a better indicator?
From a research/professional data standpoint, you want your data to come from IDEAL CLIENTS. However, the problem is that most of us do not have the expertise or skills or even access to the right tools/information to know how to clean up our social media data.
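If we did somehow have that information, the clean-up step would conceptually look something like the sketch below. The ideal_client flag and the account names are entirely hypothetical; no platform actually hands us this label, which is exactly the problem.

```python
# A hypothetical engagement log. In reality, platforms don't tell you
# which accounts are ideal clients -- that's the whole issue.
engagements = [
    {"account": "fake_account_123", "liked": True,  "ideal_client": False},
    {"account": "old_classmate",    "liked": True,  "ideal_client": False},
    {"account": "dream_client_a",   "liked": True,  "ideal_client": True},
    {"account": "dream_client_b",   "liked": False, "ideal_client": True},
]

total_likes = sum(e["liked"] for e in engagements)
ideal_likes = sum(e["liked"] for e in engagements if e["ideal_client"])

print(f"Raw likes: {total_likes}")                 # what the platform shows you
print(f"Likes from ideal clients: {ideal_likes}")  # what you'd actually want to measure
```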
Validity refers to whether the measurements reflect what they’re supposed to measure. This concept is a broader issue than reliability. Researchers need to consider whether they’re measuring what they think they’re measuring. Or do the measurements reflect something else? Does the instrument measure what it says it measures? It’s a question that addresses the appropriateness of the data rather than whether measurements are repeatable.
Validity is a smaller concern for tangible measurements like height and weight. You might have a biased bathroom scale if it tends to read too high or too low—but it still measures weight. Validity is a bigger concern in the social sciences, where you can measure elusive concepts such as positive outlook and self-esteem. If you’re assessing the psychological construct of conscientiousness, you need to confirm that the instrument poses questions that appraise this attribute rather than, say, obedience.
Cheryl’s translation: Some can argue that vanity metrics (ex: likes, followers) do not reflect your business progress or success. On the other hand, metrics such as sales, conversion, retention, bounce rate, click-through rate, and number of mentions might arguably be more useful.
That said, some may say that the number of followers is a helpful indicator of things like brand awareness.
Overall, in my opinion, the reliability of your social media analytics matters more than their validity, primarily because the validity argument can go both ways: some will argue that LinkedIn analytics are useful, while others will say they’re rubbish.
Since that debate is hard to settle, I personally think it’s more important to understand whether your data and analytics are reliable in the first place.
Overall, the thesis I’m laying down here is that it’s arguable whether your social media analytics, be it Instagram, LinkedIn, podcast, YouTube, etc., meet the criteria of “good data”, because it’s questionable whether they check off the boxes of reliability and validity.
One more discussion I think is relevant when it comes to the limitations of social media analytics: oftentimes we are so fixated on the things we can immediately see that we fail to consider what this “data” actually means and whether it’s even a helpful indicator of progress.
Whether it’s the immediately visible likes and comments, your follower count, how many re-shares you get on an Instagram post, or the number of impressions on a LinkedIn post… basically anything you can see. But there’s also so much underlying, invisible data that you simply have no access to.
For example, you don’t know if someone refers you to a friend via word of mouth. You have no idea who forwarded your email to someone else. Or maybe someone shared your podcast on their Instagram stories but didn’t tag you and you’d have no idea about that.
You don’t even know who listened to your podcast episode, or who saw your LinkedIn post, really liked it, and is now checking out your sales page without ever liking that specific post.
There are so many things that your social media metrics cannot track. It’s simply not possible to track all of that, at least given the current state of technology.
Out of sight, out of mind. Out of sight, out of your awareness entirely.
But also, out of sight, and you make what is within sight mean all sorts of discouraging and highly invalidating things about whether things are working or not.
What I really want to offer today is to remember that more often than not, what’s out of sight is actually a much better and much more helpful indicator of what’s working and what’s not. After all, we intellectually know that buyers can be really quiet. We’re often surprised by who actually reaches out and becomes a client. And likewise, many of us have seen how the people who most regularly engage with your content never become paying clients.
I mean, I’m sure you yourself don’t often engage with other people’s content, even if you’ve thought about working with them. I sure as heck rarely engage with other people or message them, even if I have considered working with them.
So many people won’t make themselves visible to you until the moment they click the button to book a discovery call with you, or straight up message you to ask about working together.
Having a low number of likes or low engagement means nothing about whether people want to buy.
Looking back to the earlier point about how different people can have entirely different perspectives on the same set of numbers, one more question to consider is: Are you looking at your numbers in a way that is helpful?
Because more often than not, we end up spiraling in a pit of despair every time we look at our analytics. That simply isn’t helpful, especially when the data we’re looking at wasn’t even a useful dataset in the first place.
It’s like you see 3 likes and make large sweeping statements about that post. For example, maybe you think that this topic just isn’t landing. Some version of “it’s not working”, essentially.
If you were already in doubt about that topic or post, looking at your numeric data will simply amplify that doubt. But also, it’s like, 3 likes. It’s 3 data points. That isn’t a large enough data set to draw ANY coherent conclusions from.
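To put a number on how little 3 data points tell you: suppose, hypothetically, that 100 people saw the post and 3 liked it. A standard 95% confidence interval (the Wilson score interval) for the underlying “like rate” spans roughly 1% to 8.5%, which is far too wide to support any sweeping conclusion. A minimal sketch:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score interval for a proportion (z=1.96 -> 95% confidence)."""
    p_hat = successes / n
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Hypothetical numbers: 3 likes out of 100 viewers.
lo, hi = wilson_interval(3, 100)
print(f"Plausible 'like rate': {lo:.1%} to {hi:.1%}")  # roughly 1.0% to 8.5%
```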
Perhaps it’s time to rethink how often you’re looking at how many unsubscribes you get from your email list. Or how many people dropped off through your series of Instagram stories.
You’re interpreting this “data” in ways that simply aren’t helpful or accurate. You’re using it to fuel the belief that things aren’t working, that you don’t know what to do, and that you’re just not meant for this. You’re creating “evidence” for things not working.
Ultimately, your thoughts about that data will spill over into how you show up and make decisions about your content, business, and brand moving forward. And if it is the case that it is pretty difficult to use your social media analytics to gauge whether someone is going to apply to work with you, then is it even helpful to ruminate over whether your last post got enough likes and how you have “low engagement”?
Now that we’ve had a pretty in-depth discussion about how your analytics may not necessarily be that helpful, the question is, then, what is actually helpful for us as solopreneurs and personal brands and creators to look at?
I personally would suggest putting more emphasis on what ideal clients have said to you, word for word.
In research, this could be referred to as “qualitative data”, and this could potentially look like DM messages or comments from ideal clients, what people say on your application forms or sales calls, what actual paying clients say to you inside your programs, etc.
Again, please collect information from individuals who really fit the type of client you want to work with.
In my opinion, the most useful qualitative data comes from actual clients you’ve worked with, who you had a great experience with, and who met the program objectives and got results, as opposed to someone who is in your DMs but never becomes a client.
I’d argue that the former is a much more accurate source of qualitative data.
Here’s what to do. Focus on the exact words that ideal clients have said because that is information that is most ‘representative’ of what your ideal clients are thinking, what they’re looking for in a coach, the problems they’re facing, etc.
For example, you could ask yourself: What exactly are my ideal clients thinking? What are they looking for in a coach? What problems are they facing right now?
As an example, I want to share how I’m translating some “qualitative data” from one of our newest members inside The Thought Leader Club into upcoming content.
For this awesome individual in particular, they shared with me that at the moment, signing clients isn’t the priority. Instead, their biggest motivator for joining The Thought Leader Club is to work on their identity as an author and content creator.
Eventually, they might offer some sort of coaching program or services, but for this round of The Thought Leader Club, they want to work on positioning themselves as an author who is proud of their work, as well as relaunching their brand, getting support on their workflow, and building up the skill of not doing/saying/writing what other creators are doing, saying, or writing in their own content.
So, in an upcoming piece of content, I might consider focusing on how each member of The Thought Leader Club will have their own unique set of goals and dreams that they’re working towards, and how The Thought Leader Club will support them towards their specific goals.
For example, I might use language such as:
Inside TLC, we will help position you as an incredible author as you inch closer and closer to the launch of your first book. We will also help you ease back into a regular routine and workflow of content creation and brand-building.
We will also set you up for your 1-3 year goals, including the launch of your first book, re-establishing your brand online, and offering some sort of coaching (ex: helping people write their first book) or editing services in the near future.
All of these are points that I could include in my content. Honestly, that’s precisely what I just did in this episode: the points I just shared, even though I positioned them as an example to explain a teaching point, also serve as marketing for The Thought Leader Club.
As you can see, the words that your clients say are a GOLDMINE of content and marketing. Not only can you get so many ideas from listening to what your people are saying, but it’s also much more helpful and exciting to look at this type of “qualitative data”. It’s so much fun!
I know that this episode might be quite dense, and the information might be kind of dry. Honestly, I was never a data analysis kind of person and I dreaded the data analysis courses we had to take during my PhD.
But I hope that you’re able to walk away from this episode with a more helpful understanding of data, and that moving forward, you are able to focus your time and attention toward the data that actually can help you grow your business, brand, and body of work.
Sounds good? Awesome. Let’s get to work.