How Do Search Engines Work?

Search Engines

Because Google began as an academic project at Stanford University, a good deal of information about how it works was published in research papers, so much can be determined about its inner workings. Other search engines, developed primarily for commercial purposes, tend to publish as little as possible about their techniques. What Google has published gives a clear understanding of how Google itself works and a general idea of how other search engines may work.

What made Google successful was the drastic improvement in its ranking algorithm, aimed at presenting the user with the highest-quality pages relevant to the search query within the top ten results. This focus on the top ten matters because users tend to look only at the first result page, which usually contains ten result links.

Google uses many different factors to determine which pages are of higher quality than others. PageRank is one of the most influential factors used by Google's search engine. A PageRank value is assigned to every page Google crawls (visits and indexes) and can be affected in various ways. A page's PageRank depends on how many other pages link to it, who links to it, how many hits it receives, and page-relevance data. Each page containing links passes part of its PageRank on through its outgoing links, so a single link from a page with a very high PageRank may carry more weight than a hundred links from pages with low PageRank.
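
To make the idea concrete, here is a minimal PageRank sketch in Python. It is not Google's actual implementation: the link graph is made up, and the damping factor of 0.85 and the iteration count are conventional assumptions. The sketch only shows how each page passes its score on through its outgoing links.

    # Minimal PageRank sketch (illustration only, not Google's implementation).
    # Each page splits its current score evenly among the pages it links to;
    # the damping factor models a user who occasionally jumps to a random page.

    DAMPING = 0.85      # conventional value, assumed here
    ITERATIONS = 50     # enough for this tiny made-up graph to settle

    # Hypothetical link graph: page -> pages it links out to.
    links = {
        "A": ["B", "C"],
        "B": ["C"],
        "C": ["A"],
        "D": ["C"],
    }

    def pagerank(links, damping=DAMPING, iterations=ITERATIONS):
        pages = list(links)
        n = len(pages)
        rank = {page: 1.0 / n for page in pages}   # start from a uniform score

        for _ in range(iterations):
            new_rank = {page: (1.0 - damping) / n for page in pages}
            for page, outgoing in links.items():
                if not outgoing:
                    continue
                share = damping * rank[page] / len(outgoing)  # rank passed per out-link
                for target in outgoing:
                    if target in new_rank:          # ignore links to unknown pages
                        new_rank[target] += share
            rank = new_rank
        return rank

    if __name__ == "__main__":
        for page, score in sorted(pagerank(links).items(), key=lambda x: -x[1]):
            print(f"{page}: {score:.3f}")

Running this on the toy graph leaves page C with the highest score, because three of the four pages link to it, which mirrors the point above that being linked to by many (or important) pages raises a page's PageRank.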


Please note that all information has been provided by About SEO Malta, an open-source SEO information website.

A little on how Google works

When a person performs a search, Google uses an algorithm to determine which websites and pages to return. The full algorithm is not publicly known, but the main procedure is. Google starts by crawling the available websites: many computers are dedicated to crawling millions of pages. This crawler is known as Googlebot, or the Google spider. It starts from a set of already indexed URLs and, by following each link on every page, identifies new and updated pages to index. Any broken links are noted negatively in Google's index. Once crawling is complete, Google analyses the content of each indexed page, such as its tags and keywords. The more relevant a page's keywords are to the search, the more value the page has for the user's query. A rough sketch of this crawl-and-index loop follows.
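
The Python sketch below follows the same steps under stated assumptions: start from a few seed URLs, fetch each page, keep its content for later analysis, note broken links, and queue any newly found links. It is not Googlebot; the seed URL, the page limit, and the error handling are placeholders, using only the Python standard library.

    # Minimal crawler sketch (illustration only, not Googlebot).
    from collections import deque
    from html.parser import HTMLParser
    from urllib.parse import urljoin
    from urllib.request import urlopen

    class LinkParser(HTMLParser):
        """Collects href targets from <a> tags on a page."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def crawl(seed_urls, max_pages=50):
        queue = deque(seed_urls)          # URLs waiting to be crawled
        seen = set(seed_urls)
        index, broken = {}, []

        while queue and len(index) < max_pages:
            url = queue.popleft()
            try:
                with urlopen(url, timeout=5) as response:
                    html = response.read().decode("utf-8", errors="replace")
            except (OSError, ValueError):
                broken.append(url)        # broken links are recorded separately
                continue

            index[url] = html             # store page content for later analysis
            parser = LinkParser()
            parser.feed(html)
            for href in parser.links:     # follow each link to find new pages
                absolute = urljoin(url, href)
                if absolute not in seen:
                    seen.add(absolute)
                    queue.append(absolute)

        return index, broken

    # Hypothetical usage: index, broken = crawl(["https://example.com/"])

A real crawler adds politeness rules (robots.txt, rate limits), deduplication, and distributed scheduling, but the follow-links-to-discover-pages loop above is the core idea described in the paragraph.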