SEO 101: Exploring the technical side of SEO
We’ve discussed a lot of important parts of SEO in this series, such as link building, keywords and content – but what about on-page optimisation? In this month’s blog post, we’ll focus on the specific technical aspects of building (or modifying) your web pages so they’re structured for both human visitors and search engines alike.
To perform better in search engine listings, your most important content should be in HTML format. Search engines can’t read the text inside your images, Flash files, Java applets and other non-text content. So HTML text is the easiest way to ensure the words and phrases you display to your visitors are also visible to search engines.
Top tip: See your site as the search engines do
To ensure your content can rank, double check how it appears from the search engines’ perspective. There are several tools that will let you do this, such as Google’s cache, BROWSEO and MozBar. What humans see often looks very different to what the search engines see, so these tools help you check whether the pages you’re building are visible and indexable to the engines.
Check out this great example from Moz. As you can see, there’s a distinct difference between the human view and the search engine view. When your web page isn’t indexed properly, it makes it difficult for search engines to interpret its relevance. So without HTML text, your web pages will have a very hard time ranking in search results.
Crawlable link structures
Search engines also need to see links in order to find the content in the first place. A crawlable link structure is one that lets crawlers browse the pathways of a website. A crawlable link structure is vital because you need a navigation the search engines can access – otherwise your pages won’t get listed on their indexes.
Here are some of the common mistakes you might be making:
- Submission-required forms – If you require users to complete an online form before accessing certain content, chances are engines will never see those protected pages.
- Links pointing to pages blocked by the Meta Robots tag or robots.txt – Many websites block access to rogue bots, only to discover the engines will cease their crawl as well.
- Frames or iframes – Both present structural issues for the engines when it comes to crawling and organising the content inside them.
- Robots don’t use search forms – Crawlers don’t perform searches to find content, leaving millions of pages inaccessible and doomed to anonymity until a crawled page links to them.
- Links in Flash, Java and other plug-ins – Crawlers can’t reach links embedded inside these plug-ins, rendering them invisible to the engines and hidden from users’ search queries.
- Links on pages with many hundreds or thousands of links – Engines will only crawl so many links on a given page to cut down on spam and conserve rankings, so pages with hundreds of links on them are at risk of not getting all of these links crawled and indexed.
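To see how a robots.txt rule can silently block the engines (the second mistake above), here’s a minimal sketch using Python’s standard library. The robots.txt rules and example.com URLs are hypothetical:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt intended to block rogue bots from private areas,
# but its wildcard rule shuts out legitimate search engine crawlers as well.
robots_txt = """\
User-agent: *
Disallow: /members/
Disallow: /search
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Pages behind the blocked paths will never be crawled, so they can't rank.
print(parser.can_fetch("Googlebot", "https://example.com/blog/donuts"))    # True
print(parser.can_fetch("Googlebot", "https://example.com/members/guide"))  # False
```

Running a check like this against your own robots.txt is a quick way to confirm you haven’t blocked more than you intended.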
Keyword usage & targeting
Keywords are at the heart of SEO and keyword usage and targeting are a big part of the search engines’ ranking algorithms. As the engines crawl and index the contents of pages around the web, they keep track of pages in keyword-based indexes.
What you shouldn’t be doing
One of the biggest no-nos in the SEO world is abusing your keywords. This involves “stuffing” keywords into text, URLs, meta tags and links and can do more harm than good for your website.
The best tactic is to use your keywords naturally and strategically. For instance, if your page targets a specific keyword phrase like “donuts”, then you might naturally include content about donuts, including where to find the best donut in Perth, the best donut flavours to try, etc.
On the other hand, if you sprinkle (no pun intended) the keyword “donut” onto a page with irrelevant content like “high heels”, then your efforts to rank for “donut” will be a struggle. Why? Because the point of using keywords is not to rank highly for all keywords, but to rank highly for keywords that people are searching for when they want what your website provides.
What you should be doing
One of the best ways to optimise your page’s rankings is to ensure the keywords you want to rank for are prominently used in titles, text and metadata. Here’s where you should be using your keywords for on-page success:
- In the title tag at least once
- Prominently near the top of the page at least once
- In the body copy of the page at least two or three times including variations
- In the alt attribute of an image on the page at least once
- In the URL once
- In the meta description tag at least once
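The placement checklist above is easy to sanity-check with a simple script. Here’s a rough sketch in Python; the page elements and the “donut” keyword are illustrative, and a real check would parse live HTML rather than plain strings:

```python
# Each on-page location from the checklist, as plain strings for illustration.
title     = "Best Donuts in Perth | Example Bakery"
url       = "https://example.com/donuts"
meta_desc = "Find the best donuts in Perth, from classic glazed to cinnamon."
img_alt   = "A tray of fresh donuts"
body_copy = ("Our donuts are made fresh daily. Perth locals love our glazed "
             "donuts, and the cinnamon donut is a close second.")

keyword = "donut"  # matching the stem also catches variations like "donuts"

placements = {
    "title tag":        keyword in title.lower(),
    "URL":              keyword in url.lower(),
    "meta description": keyword in meta_desc.lower(),
    "image alt":        keyword in img_alt.lower(),
    "body (2-3 uses)":  body_copy.lower().count(keyword) >= 2,
}

for place, ok in placements.items():
    print(f"{place}: {'OK' if ok else 'missing'}")
```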
Warning: Don’t use your target keyword in link anchor text pointing to other pages on your site. This sends the engines mixed signals about which page should rank for that term – a problem known as keyword cannibalisation – and can detrimentally affect your rankings.
Meta data
Meta data refers to the sections of text that describe your pages. When we talk about meta data, we’re looking at a few things:
A title tag should be an accurate, concise description of a page’s content. Here are some top tips for crafting your title tags to ensure a higher click-through rate:
- Place important keywords close to the front – Did you know that 94% of SEO industry leaders said the title tag was the most important place to use keywords to achieve higher rankings?
- Include branding – Ending every title tag with a brand name mention helps increase brand awareness and create a higher CTR for people who like and are familiar with your brand.
- Consider readability and emotional impact – It’s a new visitor’s first interaction with your brand, so you should convey the most positive impression possible and grab attention on the search results page immediately.
- Be mindful of length – Keep it to 65-75 characters because after that engines will cut your title tag off and show an ellipsis (…).
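To see roughly how an over-long title gets cut off, here’s a small sketch in Python. Note the engines actually truncate by display width rather than a strict character count, so the 65-character limit used here is just a conservative proxy:

```python
TITLE_DISPLAY_LIMIT = 65  # conservative end of the 65-75 character range

def preview_title(title: str, limit: int = TITLE_DISPLAY_LIMIT) -> str:
    """Approximate how a search engine might display a too-long title tag."""
    if len(title) <= limit:
        return title
    # Cut at the limit and add an ellipsis, as the engines do.
    return title[:limit].rstrip() + "…"

long_title = ("The Complete Guide to Finding the Best Donuts, Pastries "
              "and Coffee in Perth | Example Bakery")
print(preview_title(long_title))
```

A preview like this makes it obvious when your brand name or key phrase would be pushed past the cut-off.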
A meta description is a short summary of a page’s content, used as the primary source for the snippet of text displayed beneath a listing in the results. Similar to title tags, there are a few key ways to optimise your meta descriptions:
- Make sure it’s readable and compelling
- Use important keywords
- Keep it to 160 characters or less
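Two of these three checks are easy to automate. A minimal sketch in Python – the 160-character limit comes from the tip above, while the example description is illustrative:

```python
DESCRIPTION_LIMIT = 160  # characters before the engines truncate the snippet

def check_description(description: str, keyword: str) -> list[str]:
    """Return a list of problems with a meta description, per the tips above."""
    problems = []
    if len(description) > DESCRIPTION_LIMIT:
        problems.append(f"too long ({len(description)} characters)")
    if keyword.lower() not in description.lower():
        problems.append(f"missing keyword {keyword!r}")
    return problems

good = ("Find Perth's best donuts: classic glazed, cinnamon and more, "
        "baked fresh every morning.")
print(check_description(good, "donuts"))  # []
```

Readability and compellingness, of course, still need a human eye.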
Meta robots are used to control search engine crawler activity on a per-page level. There are several ways to do this:
- index/noindex – Tells the engines whether the page should be kept in the engines’ index for retrieval.
- follow/nofollow – Tells the engines whether links on the page should be crawled.
- noarchive – Restricts engines from saving a cached copy of the page.
- nosnippet – Informs the engines they should refrain from displaying a descriptive block of text next to the page’s title and URL in the search results.
- noodp/noydir – Tells the engines not to grab a descriptive snippet about the page from the Open Directory Project (DMOZ) or the Yahoo! Directory for display in the search results.
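These directives live in a tag like `<meta name="robots" content="...">` in the page’s head. Here’s a small Python sketch that pulls the directives out of a page, using a hypothetical page that allows indexing but blocks caching:

```python
from html.parser import HTMLParser

class MetaRobotsReader(HTMLParser):
    """Collect the directives from any <meta name="robots"> tags on a page."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name", "").lower() == "robots":
            content = attrs.get("content", "")
            self.directives += [d.strip().lower() for d in content.split(",")]

# Hypothetical page that should rank in results but never be cached.
page = ('<html><head>'
        '<meta name="robots" content="index, follow, noarchive">'
        '</head></html>')
reader = MetaRobotsReader()
reader.feed(page)
print(reader.directives)  # ['index', 'follow', 'noarchive']
```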
URL structures
URLs are the addresses for documents on the web and offer great value from a search perspective. They appear in multiple locations, including:
- The search results
- The web browser’s address bar
- The link anchor text pointing to a referenced page
When crafting your URLs, keep these guidelines in mind:
- Employ empathy – Place yourself in the mind of a user and you’ll be able to easily and accurately predict the content they’d expect at that address.
- Shorter is better – Minimise length and trailing slashes, which makes URLs easier to copy and paste and ensures they’re fully visible in the search results.
- Keyword use is important – However, don’t go overboard by trying to stuff in multiple keywords as overuse will result in less usable URLs and can tip spam filters.
- Go static – The best URLs are human-readable and without lots of parameters, numbers and symbols.
- Use hyphens to separate words – The hyphen character (-) is the word separator search engines interpret most reliably, so prefer it over underscores or spaces.
If you employ these simple tricks, you’ll find your URLs are more likely to be clicked on by users.
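Most of these guidelines can be baked into a “slugify” helper that turns a page topic into a short, static, hyphen-separated URL segment. A minimal sketch in Python:

```python
import re

def slugify(phrase: str) -> str:
    """Turn a page topic into a short, static, hyphen-separated URL segment."""
    slug = phrase.lower()
    # Collapse every run of non-alphanumeric characters into a single hyphen.
    slug = re.sub(r"[^a-z0-9]+", "-", slug)
    return slug.strip("-")

print(slugify("The Best Donut Flavours to Try!"))  # the-best-donut-flavours-to-try
```

Many CMSs do something similar automatically, but it’s worth checking the output still reads well and contains your target keyword.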
Duplicate & canonical versions of content
Duplicate content is one of the biggest SEO problems a website can face. In recent years, search engines have begun to crack down on pages with thin or duplicate content by assigning them lower rankings. So now is the time to get on top of your website content and make sure there are no double-ups.
So how can you organise your content to ensure there’s no duplicates? Through canonicalisation. Canonicalisation is the practice of organising your content in such a way that every unique piece has one, and only one, URL.
The problem: When two or more duplicate versions of a web page appear on different URLs it presents a big problem for the search engines: which version of this content should they show to searchers? As a result, all of your duplicate content could rank lower than it should.
The solution: When multiple pages with the potential to rank well are combined into a single page, they not only stop competing with each other, but also create a stronger relevancy and popularity signal overall.
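Alongside redirects and the rel="canonical" link tag, many sites also normalise URL variants to a single form so duplicates never get published in the first place. Here’s an illustrative Python sketch; the list of tracking parameters to strip is an assumption, not a standard:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Hypothetical query parameters that create duplicate URLs without
# changing the page's content.
TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign", "ref"}

def canonicalise(url: str) -> str:
    """Map duplicate variants of a URL onto a single canonical form."""
    parts = urlsplit(url)
    # Lowercase the scheme and host, drop tracking parameters and fragments,
    # and remove any trailing slash so every variant collapses to one address.
    query = urlencode([(k, v) for k, v in parse_qsl(parts.query)
                       if k not in TRACKING_PARAMS])
    path = parts.path.rstrip("/") or "/"
    return urlunsplit((parts.scheme.lower(), parts.netloc.lower(), path, query, ""))

variants = [
    "https://Example.com/donuts/",
    "https://example.com/donuts?utm_source=newsletter",
    "https://example.com/donuts#menu",
]
print({canonicalise(u) for u in variants})  # {'https://example.com/donuts'}
```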
Need help optimising your on-page SEO? Have a chat to Bang Digital today. Our team of digital experts have the skills and experience to get your website in tip-top shape.