To explore the web, a process which in English is called web crawling or web spidering, each company developed its own search engine technology, its own programs. The "logic" by which a crawler captures and filters information depends on the algorithms with which it was designed. The term comes from mathematics and means, roughly, a sequence of instructions; in computing, specifically, an algorithm is a kind of minimal unit of programming, a recipe that spells out the steps to solve a problem. At first, crawlers went hunting only for the metatags that every website had coded in HyperText Markup Language (the acronym the whole internet recognizes: HTML). There, in the HTML, enclosed in angle brackets, was a brief and concise technical description of the site's content. And if the metatags said that in that corner of the web there was an unprecedented picture of Napoleon Bonaparte, in Acapulco, at 3:00 pm on June 18, 1989, the search engine made it part of its database.
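The early-crawler behavior described above, trusting whatever the metatags claim, can be sketched in a few lines. This is a minimal illustration using Python's standard-library HTML parser; the page content and tag names are hypothetical examples, not taken from any real site.

```python
from html.parser import HTMLParser

class MetaTagParser(HTMLParser):
    """Collects the name/content pairs of <meta> tags, as early crawlers did."""
    def __init__(self):
        super().__init__()
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        # Keep only <meta name="..." content="..."> pairs
        if tag == "meta":
            attrs = dict(attrs)
            name = attrs.get("name")
            if name and "content" in attrs:
                self.meta[name.lower()] = attrs["content"]

# A hypothetical page whose metatags "describe" its content,
# truthfully or not -- the crawler has no way to tell.
page = """
<html><head>
<meta name="description" content="An unprecedented picture of Napoleon Bonaparte in Acapulco">
<meta name="keywords" content="napoleon, acapulco, photo">
</head><body>Nothing of the sort here.</body></html>
"""

parser = MetaTagParser()
parser.feed(page)
print(parser.meta["keywords"])  # indexed exactly as the page claims
```

The point of the sketch is that the parser records the description verbatim: nothing in the automatic process checks the claim against the actual page body.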
Those are the limits of automatic operation. Indeed, how can you "teach" a machine to identify a lie?

The dishonesty of webmasters

Website managers soon realized that the key to being positioned at the top of the lists proposed by the search engines lay in the metatags. Several studies have confirmed that a site user's eye scans the page from top to bottom and from left to right (the latter, of course, in languages that read left to right, which includes us). Appearing among the first ten addresses on a list increased exponentially the chances of attracting internet users and became synonymous with success on the web.