Search Engine Marketing Terms, Part 2: SEO
The land of Search Engine Optimisation (SEO) can look and sound like a scary place from the outside. In the second part of our series, we explore SEO terms. SEO is defined as the strategies and actions used to influence the rankings and visibility of a website within search engines, resulting in increased traffic to the site. Here is a list of terms to help you navigate backlinks, Panda and Penguin, and canonical tags.
The SEO pocket directory
Alt Attributes / Alt Tags
Search engines can’t read images because images are not text based. Alt attributes (also known as alt tags) describe images within HTML or XHTML documents, allowing search engines to understand what an image shows and assign relevance to it.
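In practice, the alt attribute sits directly on the image tag. A simple illustration (the file name and description here are hypothetical):

```html
<!-- The alt text describes the image for search engines (and screen readers) -->
<img src="red-running-shoes.jpg" alt="Pair of red running shoes on a white background">
```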
Anchor Text
Anchor text is the clickable text of a hyperlink; clicking it takes you through to the linked webpage. Hyperlinking descriptive words helps search engines pass relevance to the linked site, improving the ranking of the linked webpage.
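In HTML, the anchor text is simply the text between the opening and closing link tags (the URL below is a placeholder):

```html
<!-- "running shoes" is the anchor text; it signals what the linked page is about -->
<a href="https://www.example.com/running-shoes">running shoes</a>
```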
Backlinks
A backlink is a link from one website to another. Each backlink is seen by search engines as a vote of confidence and legitimacy for the linked site, though each link can transfer a different amount of value. As well as authority, backlinks can also pass information about the theme or relevance of a webpage.
Canonical Tags
A canonical tag tells search engines which URL is the ‘original’ or preferred page when there are multiple pages with similar content. Canonical tags allow the SEO value to be consolidated on the original or preferred page, helping it to rank.
An example of this would be a store website where identical content is accessible on multiple pages so that users can sort by price or alphabetically.
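A canonical tag is placed in the `<head>` of each duplicate page, pointing at the preferred URL (the address below is hypothetical):

```html
<!-- Each sorted/filtered variant of the page points back to the preferred URL -->
<link rel="canonical" href="https://www.example.com/shoes">
```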
Page and Domain Authority
Each page that is indexed by search engines retains a page value and authority ranking.
- Page Authority is influenced by the number and quality of links back to the individual page.
- Domain Authority is the overall ranking strength of the entire domain, influenced by the quantity and quality of links pointing to the domain as a whole.
HTTP Response Status Code
You probably already know what a Hypertext Transfer Protocol (HTTP) response status code is. Remember the last time you clicked on a link and suddenly your online world temporarily crashed with an ‘ERROR 404’ message?
This my friend, is what we are talking about.
So what do these status codes actually mean?
- 404 – page does not exist.
- 200 – page is functioning as it should.
- 301 redirects – the page has been permanently moved. Users are redirected to the current location of the original page, the advantage being that a 301 passes the majority of the SEO value from the old location to the destination page.
- 302 redirects – a temporary redirect that works similarly to a 301. However, a 302 does not reliably pass SEO value on to the destination page.
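Python’s standard library knows the official reason phrase for each of these codes, so a quick sketch of the list above looks like this:

```python
from http import HTTPStatus

# Look up the standard reason phrase for each status code mentioned above
for code in (200, 301, 302, 404):
    status = HTTPStatus(code)
    print(code, status.phrase)
# 200 OK, 301 Moved Permanently, 302 Found, 404 Not Found
```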
Keyword Cannibalisation
Keyword cannibalisation is when the same keywords and phrases are optimised on multiple pages across a website. This can confuse the search engine as to which page to distribute value to. As a result, the pages end up competing against each other and the search engine won’t know which one is the most appropriate to rank.
Do Follow / No Follow
Links are extremely valuable within the world of SEO. However, not all links pass value for search engines. Standard links are ‘do follow’ and can pass SEO value, whereas links carrying the ‘no follow’ attribute do not pass authority.
No follow links are usually used on sites that accept user-generated content, such as blogs or Wikipedia, to discourage people from leaving spammy SEO links.
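The nofollow attribute goes on the link itself via the `rel` attribute (the URL is a placeholder):

```html
<!-- rel="nofollow" tells search engines not to pass authority through this link -->
<a href="https://www.example.com" rel="nofollow">example link</a>
```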
PageRank
An indicator from Google of the authority of a webpage, assigned on a scale of 0–10.
Panda & Penguin
Two of the least favourite animals in the (SEO) world. Panda was launched with the aim of penalising poor or duplicated content. If enough pages are flagged, a site-wide penalty can be issued for having ‘thin content’.
Penguin was designed to target spam or ‘black hat’ SEO methods (techniques used to increase rankings or traffic in ways Google dislikes), e.g. link building with over-optimised anchor text and links from non-relevant sites. The update was designed to rate the quality of article marketing and blog spam. Then there was Penguin 2.0…
Referring Domain
The domain from which a backlink points to a specific page or site.
Robots.txt
Robots.txt, part of the robots exclusion standard, is a small text file uploaded to a website’s root directory. It provides instructions to search engine crawlers about which parts of the site can be crawled and indexed for the public.
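A minimal robots.txt might look like this (the blocked path and sitemap URL are hypothetical):

```
# Applies to all crawlers: keep them out of /admin/, allow everything else
User-agent: *
Disallow: /admin/
Sitemap: https://www.example.com/sitemap.xml
```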
Root Domain / Sub Domain / Sub folder
The root domain is the top level of a website’s domain hierarchy on the internet – the entry point to domains such as example.com, example.co.uk, example.ac.uk and example.org.
A sub domain, or third-level domain, is a separate section of a website that can be used to organise content and has its own URL. For example, if job listings on www.example.com were accessed via a separate site at jobs.example.com, that would be a subdomain.
A Sub Folder describes the separate folders that are used to organise a website. They can be easily spotted as they’re displayed within a URL after the root domain and between slashes “/”. Let me illustrate – example.com/contact-us/address. The sub folder is /contact-us/.
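As a rough sketch using Python’s standard urllib, the parts of a URL can be pulled apart like this (the example URL combines the hypothetical subdomain and sub folder above):

```python
from urllib.parse import urlsplit

url = "https://jobs.example.com/contact-us/address"
parts = urlsplit(url)

print(parts.hostname)  # subdomain + root domain: jobs.example.com
print(parts.path)      # sub folder path: /contact-us/address
```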
Search Engine Results Page (SERP)
The Search Engine Results Page is a page that is returned when a search query is performed. SERP will display a list of web pages, including the page title of each result and a brief summary of its contents. Video, images, news and shopping results may also be featured.
Spam
Spam is the use of optimisation methods that search engines consider low quality or against their guidelines, such as keyword stuffing, low-quality link buying and hacking websites.
The Google web spam team, led by Matt Cutts, deploys algorithm updates such as Penguin and sometimes applies manual penalties to lessen the effect of what it deems to be excessive manipulative activity.