Quick Google Search

Get to the Top on Google

1. Keyword discovery
When a user visits a search engine, they type words into the search box
to find what they are looking for. The search terms they type are called
keywords and the combinations of keywords are keyphrases.
If you imagine that building an optimized site is like cooking a
meal, then keywords are the essential ingredients. Would you attempt to
cook a complex new dish without first referring to a recipe? Would you
start before you had all the ingredients available and properly prepared?
In our analogy, keywords are your ingredients and the rest of the seven-step
approach is your recipe.
Ideally, you should undertake keyword research well before you
choose a domain name, structure your site, and build your content.
However, this is not always possible, as most webmasters only turn to
SEO after they’ve built their site.

2. Courting the crawl
If you picked up a book on SEO from two or three years ago, there
would have probably been a whole chapter on search engine submission.
There were also businesses that used to specialize in this activity alone.
One or two of them still vainly pursue this business model, but I am
afraid the world has moved on. The modern way to handle Google inclusion
is through the use of sitemaps (see later in this step) and a well-structured
site.
Courting the crawl is all about helping Google to find your site and,
most importantly, to index all your pages properly. It may surprise you,
but even many well-established big names (with huge sites) have very
substantial problems in this area. In fact, the bigger the client the more
time I typically need to spend focused on the crawl.
As you will see, good websites are hosted well, set up properly, and,
above all, structured sensibly. Whether you are working on a new site or
reworking an existing internet presence, I will show you how to be found
by Google and have all your pages included fully in the Google search
index.

2.1 How Google finds sites and pages
All major search engines use spider programs (also known as crawlers or
robots) to scour the web, collect documents, give each a unique reference,
scan their text, and hand them off to an indexing program. Where the
scan picks up hyperlinks to other documents, those documents are then
fetched in their turn. Google’s spider is called Googlebot and you can see
it hitting your site if you look at your web logs. A typical Googlebot entry
(in the user-agent or “browser” section of your logs) might look like this:
Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
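
If you want a quick picture of how often Googlebot is visiting and which pages it is fetching, a short script can do the counting for you. This is a minimal sketch, assuming an Apache-style combined log format (where the request is the first quoted field and the user agent the last); the access.log filename is a placeholder for your own log file:

# Count Googlebot requests per requested path in an Apache-style access log.
from collections import Counter

hits = Counter()
with open("access.log") as log:                      # placeholder path to your web log
    for line in log:
        if "Googlebot" not in line:
            continue
        parts = line.split('"')                      # the request line, e.g. GET /page.html HTTP/1.1
        if len(parts) > 1 and len(parts[1].split()) > 1:
            hits[parts[1].split()[1]] += 1           # the requested path

for path, count in hits.most_common(10):
    print(count, path)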

Even if you have a site already, it is vital to invest significant time
and energy on keyword research before starting your SEO campaign.
Although this may astonish you, I would recommend that 20% of all
your SEO effort is focused on this activity alone. If you make poor keyword
selections, you are likely to waste energy elsewhere in your SEO
campaign, pursuing avenues unlikely to yield traffic in sufficient quantity,
quality, or both. To return to our analogy, if you select poor ingredients,
no matter how good the recipe may be the meal itself will be a
disappointment – and no one will want to eat it.
Don’t forget that one source for information about keywords is your
own web logs. This helps you avoid undoing what you’re already ranking
well for. Google Analytics’ keyword stats can also be particularly useful
input to the early stages of an SEO campaign (see page 225 for more on
this). I learnt this lesson from a client who ran a local catering business.
She told me that many of her customers had found her via Google, but
she couldn’t understand what they were searching on as she could never
find her site in the top 50, let alone the top 10. By investigating her
Google Analytics stats, we discovered that she was ranking well for
“thanksgiving catering” due to some client testimonials and pictures on
her site. This explained why so many of her clients were ex-pat
Americans and how they were finding her business; after all, such a
search term was pretty niche in South West London, UK!
Common mistakes in keyword selection
Most people approach SEO with a preconception – or prejudice – about
what their best keywords are. They are normally either wholly or partly
wrong. This is good for you because you are armed with this book.
There are five key mistakes to avoid when selecting keywords:
1 Many of my customers first approach me with the sole objective
of ranking number one on Google for the name of their business.
Please don’t misunderstand me, I am not saying that this
isn’t important. If someone you met at a party or in the street
could remember your business name and wanted to use Google
to find your site, you should certainly ensure that you appear in
the top five. However, your business name is very easy to optimize
for and only likely ever to yield traffic from people you
have already met or who have heard of your business through a
word-of-mouth referral. The real power of a search engine is its
ability to deliver quality leads from people who have never heard
of your business before. As such, ranking number one for your
business name, while it’s an important foundation, is really only
of secondary importance in the race to achieve good rankings
on the web.
2 Many site owners (particularly in the business-to-business sector)
make the mistake of wanting to rank well for very esoteric
and supply-side terminology. For example, one client of mine
was very happy to be in the top 10 on Google for “specimen
trees and shrubs,” because that was the supply-side terminology
for his main business (importing wholesale trees and shrubs).
However, fewer than 10 people a month worldwide search using
that phrase. My client would have been much better off optimizing
for “wholesale plants,” which attracts a much more significant
volume of searches. In short, his excellent search engine
position was useless to him, as it never resulted in any traffic.
3 Many webmasters only want to rank well for single words
(rather than chains of words). You may be surprised to hear that
(based on research by OneStat.com) 33% of all searches on
search engines are for two-word combinations, 26% for three
words, and 21% for four or more words. Just 20% of people
search on single words. But why should that surprise you?
Isn’t that what you do when you’re searching? Even if you start
with one word, the results you get are generally not specific
enough (so you try adding further words to refine your search).
It is therefore vital that keyword analysis is firmly based on objective facts about what people actually search on rather than
your own subjective guess about what they use.
4 People tend to copy their competitors when choosing the words to
use, without researching in detail what people actually search for
and how many competing sites already carry these terms. Good
SEO is all about finding the phrases that pay: those that are relatively
popular with searchers but relatively underused by your competitors.
5 Many webmasters overuse certain keywords on their site (so-called
keyword stuffing) and underuse related keywords. Human
readers find such pages irritating and Google’s spam filters look
for these unnatural patterns and penalize them! Instead, it is
much better to make liberal use of synonyms and other words
related to your main terms. This process (often involving a thesaurus)
is what information professionals call ontological
analysis.
The best way to avoid these and other common mistakes is to observe the
following maxims:
• Think like your customer and use their language, not yours.
• Put aside your preconceptions of what you wanted to rank for.
• Put aside subjectivity and focus on the facts.
• Consider popularity, competitiveness, and ontology.
In short, you need to make a scientific study of the keywords and keyphrases
your customers and competitors actually use, and balance this
against what your competitors are doing. I use a three-step approach to
keyword analysis (known affectionately as D–A–D): discovery, attractiveness,
and deployment.
Keyword discovery, the first step, is the process of finding all the
keywords and keyphrases that are most relevant to your website and
business proposition.

3. The D–A–D analysis tool
Throughout the steps of the D–A–D model, I will refer to a spreadsheet-based
tool that always accompanies my keyword analysis. Create a new
spreadsheet or table to record your work, with six columns (from left to
right):
A Keywords
B Monthly searches
C Raw competition
D Directly competing
E KEI
F KOI
All will become clear later in this chapter.
In the keyword discovery phase, we are focusing on Column A only
and trying to compile as large a list of keywords as possible.
The discovery shortcut: Learning from competitors
The place to begin your discovery is again by looking at your competitors’
sites. Try putting search terms related to your business, its products, and
services into Google. For each of the top five results on each search
term, select the “View source” or “View page source” option from your
browser menu. Make a note of the keywords placed in the <TITLE>,
<META NAME="Description">, and <META NAME="Keywords">
tags.
Alternatively, if looking through HTML code (hypertext markup language,
used to create web pages) leaves you cold, visit one of the keyword
analysis tools listed on the forum that accompanies this book
(www.seo-expert-services.co.uk). One good example is the Abakus Topword
Keyword Check Tool: www.abakus-internet-marketing.de/tools/topword.html.

Here you can enter the URLs of your competitors and read off the
keywords that they use.
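
If you would rather automate the “view source” step across several competitors, the short sketch below fetches a page and prints its title, meta description, and meta keywords. It uses only the Python standard library; the example URL is a placeholder, and some sites may serve different content to (or block) automated requests:

# Extract the <title>, meta description, and meta keywords from a competitor page.
from html.parser import HTMLParser
from urllib.request import Request, urlopen

class MetaExtractor(HTMLParser):
    """Collects the <title> text and the description/keywords meta tags."""
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self.in_title = True
        elif tag == "meta":
            name = (attrs.get("name") or "").lower()
            if name in ("description", "keywords"):
                self.meta[name] = attrs.get("content") or ""

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data

url = "http://www.example.com/"                       # placeholder competitor URL
req = Request(url, headers={"User-Agent": "Mozilla/5.0"})   # some sites block the default agent
html = urlopen(req).read().decode("utf-8", "ignore")

parser = MetaExtractor()
parser.feed(html)
print("Title:      ", parser.title.strip())
print("Description:", parser.meta.get("description", ""))
print("Keywords:   ", parser.meta.get("keywords", ""))

Paste the results straight into Column A of your spreadsheet, one keyword or keyphrase per row.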
List all of the keywords and keyphrases you find on your competitors’
sites, one after another, in Column A of your spreadsheet. Don’t get
me wrong here. This kind of metadata (data about data, in this case
a categorization of common terms), particularly in isolation, is not the
route to high search engine rankings (as you will see later). However,
sites in the top five on Google have generally undertaken SEO campaigns
and have already developed a good idea of what the more popular
keywords are for their (and your) niche. As such, their metadata is
likely to reflect quality keyword analysis, repeated throughout the site in
other ways. This effectively represents a shortcut that gets your campaign
off to a flying start.
Search engines provide the modern information scientist with a
hugely rich data set of search terms commonly used by people to
retrieve the web pages they are looking for. I have coined some terms,
which I use in my business, to help describe these.
CUSPs – commonly used search phrases – are phrases that people
tend to use when searching for something and, more importantly, narrowing
down the search results returned. There are normally two parts
to a CUSP, a “stem phrase” and a “qualifying phrase.”
For example, a stem for Brad might be “business cards” and a qualifier
“full color.” Additional qualifiers might be “cheap,” “luxury,” “do it
yourself,” and a whole host of other terms.
Sometimes qualifiers are strung together, in terms such as “cheap
Caribbean cruises.” And often people will use different synonyms or
otherwise semantically similar words to describe the same qualifying
phrase.
For example, “discounted” and “inexpensive” are synonyms of
“cheap.” However, searchers have learnt that phrases like “last minute”
and “special offer” might return similar results. As such, searchers are
just as likely to search for “last minute cruises” or “special offer cruises”
as “cheap cruises.”
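
Once you have a few stems and qualifiers, you can expand them into candidate keyphrases for Column A systematically rather than by hand. A minimal sketch; the stems and qualifiers shown are placeholders for your own lists:

# Combine stem phrases with qualifiers (and qualifier synonyms) to build keyphrase candidates.
from itertools import product

stems = ["business cards", "caribbean cruises"]                     # your own stem phrases
qualifiers = ["cheap", "discounted", "last minute", "full color"]   # qualifiers and their synonyms

candidates = [f"{qualifier} {stem}" for qualifier, stem in product(qualifiers, stems)]
for phrase in candidates:
    print(phrase)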

How Googlebot first finds your site:
There are essentially four ways in which Googlebot finds your new site.
The first and most obvious way is for you to submit your URL to Google
for crawling, via the “Add URL” form at www.google.com/addurl.html.
The second way is when Google finds a link to your site from another
site that it has already indexed and subsequently sends its spider to follow
the link. The third way is when you sign up for Google Webmaster
Tools (more on this on page 228), verify your site, and submit a sitemap.
The fourth (and final) way is when you redirect an already indexed webpage
to the new page (for example using a 301 redirect, about which
there is more later).
In the past you could use search engine submission software, but
Google now prevents this – and prevents spammers bombarding it with
new sites – by using a CAPTCHA, a challenge-response test to determine
whether the user is human, on its Add URL page. CAPTCHA
stands for Completely Automated Public Turing test to tell Computers
and Humans Apart, and typically takes the form of a distorted image of
letters and/or numbers that you have to type in as part of the
submission.

How quickly you can expect to be crawled:
There are no firm guarantees as to how quickly new sites – or pages –
will be crawled by Google and then appear in the search index. However,
following one of the four actions above, you would normally expect to
be crawled within a month and then see your pages appear in the index
two to three weeks afterwards. In my experience, submission via Google
Webmaster Tools is the most effective way to manage your crawl and to
be crawled quickly, so I typically do this for all my clients.

What Googlebot does on your site:
Once Googlebot is on your site, it crawls each page in turn. When it
finds an internal link, it will remember it and crawl it, either later that
visit or on a subsequent trip to your site. Eventually, Google will crawl
your whole site.
In the next step (priming your pages, page 92) I will explain how
Google indexes your pages for retrieval during a search query. In the
step after that (landing the links, page 128) I will explain how each
indexed page is actually ranked. However, for now the best analogy I can
give you is to imagine that your site is a tree, with the base of the trunk
being your home page, your directories the branches, and your pages the
leaves on the end of the branches. Google will crawl up the tree like
nutrients from the roots, gifting each part of the tree with its all-important
PageRank. If your tree is well structured and has good symmetry,
the crawl will be even and each branch and leaf will enjoy a
proportionate benefit. There is (much) more on this later.

Controlling Googlebot:
For some webmasters Google crawls too often (and consumes too much
bandwidth). For others it visits too infrequently. Some complain that it
doesn’t visit their entire site and others get upset when areas that they
didn’t want accessible via search engines appear in the Google index.
To a certain extent, it is not possible to attract robots on demand. Google
will visit your site often if the site has excellent content that is updated
frequently and cited often by other sites. No amount of shouting will make
you popular! However, it is certainly possible to deter robots. You can
control both the pages that Googlebot crawls and (should you wish)
request a reduction in the frequency or depth of each crawl.
To prevent Google from crawling certain pages, the best method is
to use a robots.txt file. This is simply an ASCII text file that you place
at the root of your domain. For example, if your domain is
http://www.yourdomain.com, place the file at
http://www.yourdomain.com/robots.txt. You might use robots.txt to prevent Google indexing
your images, running your PERL scripts (for example, any forms for
your customers to fill in), or accessing pages that are copyrighted. Each
block of the robots.txt file lists first the name of the spider, then the list
of directories or files it is not allowed to access on subsequent, separate
lines. The format also supports limited wildcards: Googlebot, for example,
treats * as matching any sequence of characters.
The following robots.txt file would prevent all robots from accessing
your image or PERL script directories and just Googlebot from
accessing your copyrighted material and copyright notice page (assuming
you had placed images in an “images” directory and your copyrighted
material in a “copyright” directory):
User-agent: *
Disallow: /images/
Disallow: /cgi-bin/
User-agent: Googlebot
Disallow: /copyright/
Disallow: /content/copyright-notice.html
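Before uploading a robots.txt file, it is worth checking that it blocks exactly what you intend. Python’s standard library includes a robots.txt parser you can use for a quick sanity check; this sketch assumes the example file above is already live on yourdomain.com:

# Check which URLs the example robots.txt above allows each crawler to fetch.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("http://www.yourdomain.com/robots.txt")   # placeholder domain
rp.read()                                            # fetches and parses the live file

print(rp.can_fetch("Googlebot", "http://www.yourdomain.com/copyright/"))          # expect False
print(rp.can_fetch("Googlebot", "http://www.yourdomain.com/index.html"))          # expect True
print(rp.can_fetch("SomeOtherBot", "http://www.yourdomain.com/images/logo.gif"))  # expect False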
To control Googlebot’s crawl rate, you need to sign up for Google
Webmaster Tools (a process I cover in detail in the section on tracking
and tuning, page 228). You can then choose from one of three settings
for your crawl: faster, normal, or slower (although sometimes faster is
not an available choice). Normal is the default (and recommended)
crawl rate. A slower crawl will reduce Googlebot’s traffic on your server,
but Google may not be able to crawl your site as often.
You should note that none of these crawl adjustment methods is
100% reliable (particularly for spiders that are less well behaved than
Googlebot). Even less likely to work are metadata robot instructions,
which you incorporate in the meta tags section of your web page.

However, I will include them for completeness. The meta tag to stop spiders
indexing a page is:
<meta name="robots" content="NOINDEX">
The meta tag to prevent spiders following the links on your page is:
<meta name="robots" content="NOFOLLOW">
Google is known to observe both the NOINDEX and NOFOLLOW
instructions, but as other search engines often do not, I would recommend
the use of robots.txt as a better method.

Sitemaps
A sitemap (with which you may well be familiar) is an HTML page containing
an ordered list of all the pages on your site (or, for a large site,
at least the most important pages).
Good sitemaps help humans to find what they are looking for and
help search engines to orient themselves and manage their crawl activities.
Googlebot, in particular, may complete the indexing of your site
over multiple visits, and even after that will return from time to time to
check for changes. A sitemap gives the spider a rapid guide to the structure
of your site and what has changed since last time.
Googlebot will also look at the number of levels – and breadth – of
your sitemap (together with other factors) to work out how to distribute
your PageRank, the numerical weighting it assigns to the relative importance
of your pages.
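
If you want to hand-build the HTML version of your sitemap, a few lines of script can turn a list of pages into the ordered list described above. A minimal sketch; the page paths and titles are hypothetical placeholders:

# Build a simple HTML sitemap page from a list of (path, title) pairs.
pages = [
    ("/", "Home"),
    ("/about-us/contact-us.html", "Contact us"),
    ("/products/", "Products"),                      # placeholder pages
]

items = "\n".join(f'  <li><a href="{path}">{title}</a></li>' for path, title in pages)
html = "<ul>\n" + items + "\n</ul>"

with open("sitemap.html", "w") as f:                 # upload this file to your web root
    f.write(html)
print(html)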

Creating your sitemap
Some hosting providers (for example 1and1) provide utilities via their
web control panel to create your sitemap, so you should always check with your provider first. If this service is not available, then visit
www.xml-sitemaps.com and enter your site URL into the generator box.
After the program has generated your sitemap, click the relevant link to
save the XML file output (XML stands for eXtensible Markup
Language, a markup format for structured data) so that you can store the
file on your computer. You might also pick up the HTML version for use
on your actual site. Open the resulting file with a text editor such as
Notepad and take a look through it.
At the very beginning of his web redevelopment, Brad creates just
two pages, the Chambers Print homepage and a Contact us page.
He uses a sitemap-generator tool to automatically create a sitemap,
then edits the file manually to tweak the priority tags (see below)
and add a single office location in a KML file (see also below):
<?xml version="1.0" encoding="UTF-8" ?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.sitemaps.org/schemas/sitemap/0.9 http://www.sitemaps.org/schemas/sitemap/0.9/sitemap.xsd">
<url>
<loc>http://www.chambersprint.com/</loc>
<priority>0.9</priority>
<lastmod>2007-07-12T20:05:17+00:00</lastmod>
<changefreq>daily</changefreq>
</url>
<url>
<loc>http://www.chambersprint.com/about-us/contact-us.html</loc>
<priority>0.8</priority>
<lastmod>2007-07-12T20:05:17+00:00</lastmod>
<changefreq>daily</changefreq>
</url>
<url>
<loc>http://www.chambersprint.com/about-us/chambers-boise-branch.kml</loc>
</url>
</urlset>
I cover KML in greater detail later (under local optimization, page
217) so all you need to understand for now is that a KML file tells
Google where something is located (longitude and latitude) – in this
case, Chambers’ Boise branch.
Sitemaps.org defines the standard protocol. There are four compulsory
elements. The sitemap must:
• Begin with an opening <urlset> tag and end with a closing
</urlset> tag.
• Specify the namespace within the <urlset> tag. The namespace
is the protocol or set of rules you are using and its URL is preceded
by “xmlns” to indicate it is an XML namespace.
• Include a <url> entry for each URL, as a parent XML tag (the
top level or trunk in your site’s “family tree”).
• Include a <loc> child entry for each <url> parent tag (at least
one branch for each trunk).
All other tags are optional and support for them varies among search
engines. At https://www.google.com/webmasters/tools/docs/en/protocol.html,
Google explains how it interprets sitemaps.
You will note that Brad used the following optional tags:
• The <priority> tag gives Google a hint as to the importance of a
URL relative to other URLs in your sitemap. Valid values range
from 0.0 to 1.0. The default priority (i.e., if no tag is present) is
inferred to be 0.5.
• The <lastmod> tag defines the date on which the file was last
modified and is in W3C Datetime format, for example
YYYY-MM-DDThh:mm:ss for year, month, day, and time in hours,
minutes, and seconds. This format allows you to omit the time
portion, if desired, and just use YYYY-MM-DD.
• The <changefreq> tag defines how frequently the page is likely
to change. Again, this tag merely provides a hint to spiders and
Googlebot may choose to ignore it altogether. Valid values are
always, hourly, daily, weekly, monthly, yearly, and never. The value
“always” should be used to describe documents that change each
time they are accessed. The value “never” should be used to
describe archived URLs.
My advice with respect to the use of optional tags is as follows:
• Do use the <priority> tag. Set a value of 0.9 for the homepage,
0.8 for section pages, 0.7 for category pages, and 0.6 for important
content pages (e.g., landing pages and money pages). For
less important content pages, use a setting of 0.3. For archived
content pages, use 0.2. Try to achieve an overall average across
all pages of near to 0.5 (see the sketch after this list).
• Only use the <lastmod> tag for pages that form part of a blog or
a news/press-release section. Even then, do not bother adding
the time stamp. So <lastmod>2008-07-12</lastmod> is fine.
• Adding a <changefreq> tag is unlikely to help you, as Google
will probably ignore it anyway (particularly if your pages demonstrably
are not updated as frequently as your sitemap claims).
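If you prefer to write the XML file yourself rather than edit a generated one, the sketch below produces a minimal sitemap that follows the priority advice above. The URLs and priority values are placeholders to adapt to your own site:

# Write a minimal XML sitemap, applying the priority scheme suggested above.
pages = [
    ("http://www.yourdomain.com/", 0.9),                          # homepage
    ("http://www.yourdomain.com/services/", 0.8),                 # section page (placeholder)
    ("http://www.yourdomain.com/services/printing.html", 0.6),    # money page (placeholder)
]

entries = "\n".join(
    f"  <url>\n    <loc>{loc}</loc>\n    <priority>{priority}</priority>\n  </url>"
    for loc, priority in pages
)
sitemap = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
    f"{entries}\n"
    "</urlset>"
)

with open("sitemap.xml", "w") as f:   # place the result in the root of your website
    f.write(sitemap)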
If you do make manual changes to an XML file that has been
automatically generated for you, you may wish to visit a
sitemap XML validator to check its correct formation prior to
moving on to referencing and submission. On the forum
(www.seo-expert-services.co.uk) I maintain an up-to-date list. My current
favourite is the XML Sitemaps validator, at
www.xml-sitemaps.com/validate-xml-sitemap.html.
Referencing your sitemap
Before we turn to submission (i.e., actively notifying the search engines
of your sitemap), I would like to briefly explore passive notification,
which I call sitemap referencing.
Sitemaps.org (to which all the major engines now subscribe) sets a
standard for referencing that utilizes the very same robots.txt file I
explained to you above (page 57). When a spider visits your site and
reads your robots.txt file, you can now tell it where to find your sitemap.
For example (where your sitemap file is called sitemap.xml and is
located in the root of your website):
User-agent: *
Sitemap: http://www.yourdomain.com/sitemap.xml
Disallow: /cgi-bin/
Disallow: /assets/images/
The example robots.txt file tells the crawler how to find your sitemap
and not to crawl either your cgi-bin directory (containing PERL scripts
not intended for the human reader) or your images directory (to save
bandwidth). For more information on the robots.txt standard, you can
refer to the authoritative website www.robotstxt.org.
Submitting your sitemap
Now we turn to the active submission of your sitemap to the major
search engines (the modern equivalent of old-fashioned search engine
submission). Over time, all the search engines will move toward the
Sitemaps.org standard for submission, which is to use a ping URL submission
syntax. Basically this means you give your sitemap address to the search engine and request it to send out a short burst of data and
“listen” for a reply, like the echo on a submarine sonar search.
At time of writing, I only recommend using this method for
Ask.com. Amend the following to add the full URL path to your sitemap
file, copy it into your browser URL bar, and hit return:
http://submissions.ask.com/ping?sitemap=http://www.yourdomain.com/sitemap.xml
Ask.com will present you with a reassuring confirmation page, then
crawl your sitemap file shortly thereafter.
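
If you would rather not paste the address into your browser each time, the same ping can be sent from a script. A minimal sketch, assuming the Ask.com endpoint shown above is still in service; substitute your own sitemap URL:

# Send the Ask.com sitemap ping described above from a script.
from urllib.parse import quote
from urllib.request import urlopen

sitemap_url = "http://www.yourdomain.com/sitemap.xml"    # your sitemap location
ping = "http://submissions.ask.com/ping?sitemap=" + quote(sitemap_url, safe="")

response = urlopen(ping)
print(response.getcode(), response.read()[:200])         # expect a confirmation page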
MSN has yet to implement a formal interface for sitemap submission.
To monitor the situation, visit the LiveSearch official blog (at
http://blogs.msdn.com/livesearch) where future improvements are likely
to be communicated. However, for the time being I recommend undertaking
two steps to ensure that MSN indexes your site:
• Reference your sitemap in your robots.txt file (see above).
• Ping Moreover using
http://api.moreover.com/ping?u=http://yourdomain.com/yoursitemap.xml.
Moreover.com is the official provider of RSS feeds to the myMSN portal,
so I always work on the (probably erroneous) theory that submission
to Moreover may somehow feed into the main MSN index
somewhere down the track. (RSS is sometimes called Really Simple
Syndication and supplies “feeds” on request from a particular site, usually
a news site or a blog, to a news reader on your desktop, such as
Google Reader.)
Both Google (which originally developed the XML schema for
sitemaps) and Yahoo! offer dedicated tools to webmasters, which
include both the verification of site ownership and submission of
sitemaps:
• Google Webmaster Tools: www.google.com/webmasters.
• Yahoo! Site Explorer: https://siteexplorer.search.yahoo.com/mysites.
To use Google Webmaster Tools, you must first obtain a Google
account (something I cover in more detail in the section on Adwords,
page 187). You then log in, click on “My Account,” and follow the link
to Webmaster Tools. Next, you need to tell Google all the sites you own
and begin the verification process. Put the URL of your site (e.g.,
http://www.yourdomain.com) into the Add Sites box and hit return.
Google presents you with a page containing a “next step” to verify your
site. Click on the Verify Site link and choose the “Add a Metatag”
option. Google presents you with a unique meta tag, in the following
format:
<meta name="verify-v1" content="uniquecode=" />
Edit your site and add the verification meta tag between the head tags
on your homepage. Tab back to Google and click on the Verify button
to complete the process. Now you can add your sitemap by clicking on
the sitemap column link next to your site. Choose the “Add General
SiteMap” option and complete the sitemap URL using the input box.
You’re all done!
Yahoo! follows a similar approach to Google on Yahoo! Site
Explorer. Sign up, sign in, add a site, and click on the verification button.
With Yahoo! you need to upload a verification key file (in HTML
format) to the root directory of your web server. Then you can return to
Site Explorer and tell Yahoo! to start authentication. This takes up to 24
hours. At the same time you can also add your sitemap by clicking on
the “Manage” button and adding the sitemap as a feed.

How Google builds its index:
Once Googlebot has crawled your site, it gives a unique ID to each page
it has found and passes these to an indexing program. This lists every
document that contains a certain word. For example, the word “gulf”
might exist in documents 4, 9, 22, 57, and 91, while the word “war” might
be found in documents 3, 9, 15, 22, 59, and 77. If someone were to search
with the query “gulf war,” only documents 9 and 22 would contain both
words.
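
To see how such an index works in miniature, the sketch below builds a tiny inverted index and intersects the lists for a two-word query, mirroring the “gulf war” example above (the document numbers are illustrative only):

# A miniature inverted index: map each word to the set of document IDs containing it.
index = {
    "gulf": {4, 9, 22, 57, 91},
    "war":  {3, 9, 15, 22, 59, 77},
}

def documents_matching(query):
    """Return the documents that contain every word in the query."""
    words = query.lower().split()
    results = index.get(words[0], set()).copy()
    for word in words[1:]:
        results &= index.get(word, set())
    return sorted(results)

print(documents_matching("gulf war"))   # [9, 22] - only these contain both words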

Google stop words:
The Google Search box ignores certain common words, such as “where”
and “how,” as well as certain single digits and letters. In its official FAQ
Google says, “these terms rarely help narrow a search and can slow
search results.” Of course, the main reason such words are not indexed
is because doing so would massively increase the Google index (at great
computing cost and with limited user benefit).
These stop words include (but are not limited to) i, a, about, an,
and, are, as, at, be, by, for, from, how, in, is, it, of, on, or, that, the, this,
to, was, what, when, where, who, will, with.
However, Google is quite intelligent at recognizing when a stop
word is being used in a way that is uncommon. So, for example, a search
for “the good, the bad and the ugly” will be read by Google as “good bad
ugly.” However, a search for “the who” will not be ignored but will be
processed as it is, returning results for the well-known rock band.
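
The effect described above is easy to mimic in a rough way: strip the stop words from a query unless doing so would remove every word. This is only an illustrative sketch; Google’s real handling is considerably more sophisticated:

# Mimic basic stop-word removal on a search query.
STOP_WORDS = {"i", "a", "about", "an", "and", "are", "as", "at", "be", "by",
              "for", "from", "how", "in", "is", "it", "of", "on", "or", "that",
              "the", "this", "to", "was", "what", "when", "where", "who",
              "will", "with"}

def filter_query(query):
    words = [w.strip(",.!?") for w in query.lower().split()]
    kept = [w for w in words if w not in STOP_WORDS]
    # If everything would be removed (e.g. "the who"), keep the query as it is.
    return kept if kept else words

print(filter_query("the good, the bad and the ugly"))   # ['good', 'bad', 'ugly']
print(filter_query("the who"))                          # ['the', 'who']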

Meta description:
The meta-description tag is placed between the <head> tags in your
HTML page. Its purpose is to provide a brief synopsis of page content,
which builds on the headline included in the title tags. The correct syntax
is as follows:
<meta name="description" content="put your description in here." />
Most of the meta tags on a page carry very little SEO value. Search
engines in general (and Google in particular) have paid less and less
attention to them over time, because so many webmasters abuse them
and their content is not actually displayed on the page. Arguably, several
are even counterproductive, in that they make a page longer (and reduce
both density and crawl efficiency) without adding any value. As such,
you may see comments around the forums that the meta-description tag
isn’t that important. In some ways these comments are correct, if SEO
is all you are focusing on. However, in fact the meta-description tag is
very important for the following two (not strictly SEO) reasons:
• Snippet compilation. When Google creates search engine results
pages, it may use the description tag in constructing the “call to
action” snippet that appears below the results link. While this is
more internet marketing than true SEO, I dedicate a whole section
to the important area of SERPs and snippets (page 118).
• Directory submission. Some directory services pick up and use
your page meta description as the description of your entry in
their directory listings. This applies to both human-edited and
some more automated directory services.
I will return to meta-description tags in the snippets section and show
you there what Brad came up with for Chambers Print. However, for
now, let’s keep moving through the page.

Meta keyword:
The meta-keyword tag (or tags) is also placed between the <head> tags
in your HTML page and was intended solely for use by search engines.
The idea is that the keyword tags provide information to the search
engine about the contents of your page (as an aid to semantic indexing).
The correct syntax is as follows:
<meta name="keywords" content="keyword1,keyword2,keywordn" />
I don’t want to disappoint you, but I am afraid that the meta-keyword tag
is almost useless for improving your position on Google. Over the last
five years the tag has become so abused by spammers that Google now
appears to ignore the tag altogether in determining the true meaning of
a page.
However, I still consider meta-keyword tags to be worth pursuing,
as there remains patchy (but genuine) evidence that Yahoo! and
Ask.com results are still influenced by them (albeit typically only for
pages that are very graphics intensive). This is due, in part, to the underlying
origins of the search technology used by both engines (Inktomi
and Teoma), which have always paid attention to keyword tags.
So on balance, I would not ignore keyword tags. After all, the exercise
of working out what to put in them has value in itself, as it helps you
to structure your thinking on how to deploy your A, B, and C keyword
lists. As I said previously, SEO can be like throwing mud at a wall –
while most of your meta-keyword mud will not stick on the wall, every
little bit of effort can help. My recommendations are as follows:
• While spaces within keyword phrases are fine, you should separate
each phrase with a comma and no space (e.g., "sharepoint
2007,moss 2007").
• Use lower case for all keywords and pluralize phrases where possible.
Do not bother including capitalized or non-plural equivalents
(see the sketch after this list).
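
To apply these two recommendations consistently across many pages, you can normalize your keyword lists with a short script. A minimal sketch; the sample phrases are placeholders and the pluralization rule is deliberately naive, for illustration only:

# Normalize a keyword list per the recommendations above:
# lower-case everything, pluralize naively, de-duplicate, and join with commas (no spaces).
def build_keyword_content(phrases):
    seen, cleaned = set(), []
    for phrase in phrases:
        phrase = phrase.strip().lower()
        if phrase and not phrase.endswith("s"):
            phrase += "s"                       # naive pluralization, for illustration only
        if phrase and phrase not in seen:
            seen.add(phrase)
            cleaned.append(phrase)
    return ",".join(cleaned)

print(build_keyword_content(["Business Card", "business cards", "Full Color Poster"]))
# business cards,full color posters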
