Google has long been the search engine of choice in a world where anyone can jump online, come up with any content they please, and hit ‘post’. While this has introduced us to gems such as the Running Man Challenge and Salt Bae memes, it has also brought about a lot of bad with the good.
Now, Google has declared it will take a stronger stance against “harmful content”.
Google frequently makes changes to its policies, and often these shifts aren’t something you need to pay attention to. This time, it’s probably something you should keep an eye on.
Google’s new policies
Back in March, Google announced it would take tougher action on harmful content such as hate speech.
It all started when a few major brands noticed their names popping up next to less-than-desirable content, which saw those advertisers cancel their contracts with Google. In fact, an article from The Guardian noted that in a single week in March, the company lost millions of dollars when advertisers like Pepsi and Walmart pulled their ads from YouTube after those ads appeared alongside offensive videos.
Google apologised, of course, but it has also gone a step further. Among other things, the company has promised to take a tougher stance on hateful and offensive content, and to give advertisers more control over where their ads appear.
What “harmful content” means for your business
As a small business, you stand to gain a lot from using social media platforms like Twitter and Facebook. Similarly, YouTube can be a massive boon for any digital marketing strategy.
But with these new changes coming from Google, do businesses using these platforms have anything to worry about? The answer is two-pronged.
First, in terms of content creation, you’ll want to avoid falling into the harmful content trap yourself. Fortunately, unless you are consciously trying to push the envelope with the content you’re producing, you should be in the clear.
While Google hasn’t defined exactly what it means by harmful content, it’s safe to assume the label covers things like hate speech, harassment, and content that incites or glorifies violence.
For more detail, check out YouTube’s Community Guidelines.
For most business owners, it should be pretty easy to avoid harmful content if the content you’re creating is purely for business purposes. However, if you take a more personal/casual angle to your content, or if you have multiple people uploading content, then creating guidelines for everyone to follow is a good idea.
The other side of the coin involves paid advertising. If you use display ads, particularly on YouTube or other Google-run platforms, then you’ll want to be sure that your ads aren’t appearing alongside any harmful content. This means it’s a good idea to monitor your ads and where they are appearing.
When setting up your ad campaigns, aim to be as targeted as you can – the more narrowly you define your audience, the more likely your ads are to appear on the sites and videos your audience is actually watching. (You’ll also be more likely to avoid anything unsavoury.)
As you set up a campaign, you have the option to add topics, sites, and content creators to a list of exclusions, meaning your ads definitely won’t show up there. Of course, it’s impossible to think of every single potentially harmful site out there, so you should occasionally run placement reports. This will allow you to see where your ads have been appearing – if anything looks off, you can then add that site or content creator to your list of exclusions.
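If you export your placement reports regularly, you can even automate the first pass of that review. The sketch below is a minimal, illustrative Python script – the CSV column names and the blocklist terms are assumptions for the example, not Google's actual report schema – that scans a placement report for terms you never want your brand near, giving you a shortlist of candidates for your exclusion list.

```python
import csv
import io

# Hypothetical keyword blocklist -- in practice you'd maintain your own list
# of terms and topics you never want your brand associated with.
BLOCKLIST = {"hate", "violence", "extremist"}

def flag_placements(report_csv: str) -> list:
    """Return placements whose name contains a blocklisted term.

    Expects a CSV with a 'placement' column -- the column name here is
    illustrative, not the schema of a real Google Ads placement report.
    """
    flagged = []
    for row in csv.DictReader(io.StringIO(report_csv)):
        name = row["placement"].lower()
        if any(term in name for term in BLOCKLIST):
            flagged.append(row["placement"])
    return flagged

# Made-up example report data for illustration.
sample = """placement,impressions
goodcookingblog.example,1200
extremist-news.example,340
diy-crafts.example,890
"""

print(flag_placements(sample))  # -> ['extremist-news.example']
```

Anything the script flags still needs a human look before you exclude it, but a simple filter like this turns a long report into a short review queue.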
While it’s still unclear what the full impact of Google’s new policy will be, the move to remove harmful content looks like a step in the right direction. We’ll be watching to see how things change for advertisers and content creators alike.