Using indexing agents, commonly called robots or spiders, search engines frequently update their databases with new URLs and index information. These agents scan the Web, moving from URL to URL in search of new or updated pages. Chances are, many agents have already scanned your site.
When an agent hits a particular site, it reads the full text of every page in the site’s hierarchy, from the home page on down, and follows all external links as well. In this way, agents locate, read, and catalog sites whether or not they are registered. You can, of course, register your URL with a search engine, but that merely places your site in the “to-be-scanned” queue.
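To make the process concrete, here is a minimal sketch of the breadth-first loop such an agent runs: fetch a page, record it, extract its links, and queue them for later visits. The crawl function, the seed URL, and the max_pages limit are illustrative assumptions, not any particular search engine’s implementation; a real spider would also add politeness delays, honor robots.txt, and store the page text it reads.

    from collections import deque
    from html.parser import HTMLParser
    from urllib.parse import urljoin
    from urllib.request import urlopen

    class LinkExtractor(HTMLParser):
        """Collects the href value of every <a> tag on a page."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def crawl(seed_url, max_pages=10):
        """Hypothetical agent loop: visit a URL, then queue the links it finds."""
        queue = deque([seed_url])
        seen = set()
        while queue and len(seen) < max_pages:
            url = queue.popleft()
            if url in seen:
                continue
            seen.add(url)
            try:
                with urlopen(url) as response:
                    html = response.read().decode("utf-8", errors="replace")
            except OSError:
                continue  # unreachable page; move on to the next URL
            # An indexing agent would store the page text here for its database.
            parser = LinkExtractor()
            parser.feed(html)
            for link in parser.links:
                queue.append(urljoin(url, link))  # resolve relative links
        return seen

    if __name__ == "__main__":
        for page in crawl("https://example.com"):
            print(page)

Because newly discovered links simply join the back of the queue, the agent needs no prior knowledge of a site: any page reachable by links will eventually be visited, which is why registration only speeds up, rather than enables, indexing.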