How can I get my site to the top of Google Search?
The simple answer to this question, asked by hundreds of bloggers, is to increase the visibility of your blog or website. A regular person searching for data on Google enters search terms, and Google displays the sites that match them.
The three major ways for strangers to find you on the web are:
Advertising: Online/offline advertising, exchange advertisement programs, PPC (Pay-Per-Click) advertising, SEM (Search Engine Marketing), article marketing, etc.
Referral links: Blogrolls, directory listings, forums, RSS/feed directories, social bookmarks, social network links, etc.
Search engines: Via Google, Yahoo, MSN, Ask, etc.
Among the three ways mentioned, search engines are the most popular method people use to find information on the web: they search for the data they need by entering the associated terms or keywords. This is also the most reliable and durable way to achieve consistently high traffic to your blog or website.
In addition to this, you need to optimize your site and its content. Your blog posts should be search engine friendly, so that it is easy for the search engine bots to crawl and index your blog. This practice is known as Search Engine Optimization (SEO).
How do Search engines find information?
The blogosphere is filled with millions of websites and data centers. The World Wide Web (WWW) is a giant storehouse of millions of interconnected computers and the online information stored on them. To get your blog posts or website content crawled and indexed by the search engines, you therefore need to make your presence and existence known to them.
A search engine learns of your existence when you complete a search engine submission for a new site, or when other sites link to your website via referrals. In both cases, the search engine spiders (pieces of software that periodically review your site for its content) need to crawl all over your site and index its content for the benefit of search users online.
How do search engine spiders crawl the web?
Your site is structured with two support systems that help the search engine spiders crawl and probe your site effectively.
- Robots.txt:
After you complete search engine submission for your new website, the search engine spiders (also known as crawlers or bots) will try to navigate your site to understand its content. A robots.txt file maintained on your website tells the spiders which folders and files on your web server they are allowed to crawl. This data is configured by the webmaster based on the content.
It is great when search engines frequently visit your site and index your content, but there are often cases where you do not want parts of your online content indexed. For example, if you have two versions of a page (one for viewing in the browser and one for printing), you would rather have the printing version excluded from crawling; otherwise you risk a duplicate content penalty.
A robots.txt file restricts the access that web-crawling search engine robots have to your site. These bots are automated, and before they access the pages of a site, they check whether a robots.txt file exists that prevents them from accessing certain pages.
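As a simple illustration, a minimal robots.txt might look like the sketch below. The folder names and domain are hypothetical examples, not required values; each Disallow line names a path the bots should skip.

```text
# Apply these rules to all crawlers
User-agent: *

# Keep the print versions and the admin area out of the index
Disallow: /print/
Disallow: /admin/

# Optionally tell crawlers where the sitemap lives
Sitemap: https://www.example.com/sitemap.xml
```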
- Sitemap.xml:
After robots.txt allows spiders to crawl certain areas of your server folders, the sitemap.xml file provides further information. It lists all your pages, posts, links to tools and web applications, etc. that should be made visible to public users (and ultimately be indexed by the search engines).
Sitemap.xml also contains further details on these pages, such as when each page was last changed and its priority, which help spiders decide when to index them.
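To illustrate, here is a minimal sitemap.xml sketch following the sitemaps.org protocol; the URL, date, and values below are hypothetical examples:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <!-- The page's full address -->
    <loc>https://www.example.com/my-first-post/</loc>
    <!-- When the page was last changed -->
    <lastmod>2024-01-15</lastmod>
    <!-- How often the page tends to change -->
    <changefreq>monthly</changefreq>
    <!-- Relative importance within this site (0.0 to 1.0) -->
    <priority>0.8</priority>
  </url>
</urlset>
```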
To summarize: robots.txt provides folder/file level access details, and sitemap.xml provides detailed information on pages. These two files are very important for any search engine friendly blog website, and they are usually maintained in the root folder of your website.
In addition to the robots.txt and sitemap.xml files, there are page-level and link-level instructions that help spiders decide whether to index a page, or a target page pointed to by a link (URL) on the page. These are page meta tags and link properties.
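For example, a page-level robots meta tag and a link-level rel attribute look like this (the URL below is a hypothetical placeholder):

```html
<!-- Page level: ask spiders not to index this page or follow its links -->
<meta name="robots" content="noindex, nofollow">

<!-- Link level: ask spiders not to follow this particular link -->
<a href="https://www.example.com/some-page/" rel="nofollow">Example link</a>
```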
Submitting your new website to blog directories does not automatically mean you will rank at the top of Google Search; you need to fulfill these basic requirements first. Go through the article How to Rank your Website High in Google's Search Engine for better insight, and you will be able to see your blog website ranked high in Google Search.







