Is it possible to create the crawl path?

No it isn't.

I did explain in the other thread how crawling works, and that they don't crawl as such any longer.

But I will try again :)

The spiders request a URL, the datacentre sends a URL to them, they visit the URL, grab the code and send it home, then request a new URL.
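That request/fetch/report loop can be sketched roughly like this. This is only an illustration of the cycle described above, not Google's actual code: the scheduler queue stands in for the datacentre, and `fetch_page` is a placeholder for a real HTTP request.

```python
from collections import deque

def fetch_page(url):
    # Placeholder: a real spider would issue an HTTP GET here.
    return "<html>source of %s</html>" % url

def crawl(scheduler_queue, collected):
    while scheduler_queue:
        url = scheduler_queue.popleft()  # spider requests a URL; scheduler sends one
        html = fetch_page(url)           # spider visits the URL and grabs the code
        collected[url] = html            # page source is sent home for processing
        # then the loop repeats: the spider asks for the next URL

# Hypothetical example URLs
queue = deque(["http://example.com/", "http://example.com/about"])
results = {}
crawl(queue, results)
```

The point is that the spider itself is dumb: it just fetches whatever the central scheduler hands it, one URL at a time.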

Google algorithmically decides which URLs to crawl, and then runs its quality algorithm over the data collected to decide whether the pages are worthy of inclusion.

You can encourage Google to spider and index pages, which is why the hierarchical structure and link architecture of a site are so important.

Again, what is it you are trying to achieve? When you only give a tiny amount of information it makes it difficult to reply fully.

You can also encourage Google to spider a non-spiderable site by using a sitemap.
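A sitemap simply lists the site's URLs directly so the spider doesn't have to find them by following links. As a rough sketch, here is one way to generate a minimal XML sitemap in the sitemaps.org format (the URLs are made-up examples):

```python
# Sketch of building a minimal XML sitemap so Google can find pages
# that aren't reachable by following links. URLs are hypothetical.
from xml.etree import ElementTree as ET

def build_sitemap(urls):
    # The sitemap protocol uses an <urlset> root in this namespace,
    # with one <url>/<loc> entry per page.
    urlset = ET.Element(
        "urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
    )
    for url in urls:
        entry = ET.SubElement(urlset, "url")
        loc = ET.SubElement(entry, "loc")
        loc.text = url
    return ET.tostring(urlset, encoding="unicode")

sitemap = build_sitemap(
    ["http://example.com/", "http://example.com/hidden-page"]
)
```

You would save the output as sitemap.xml at the site root and submit it via Google's webmaster tools.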

snareshiver
Jul 14, 2009
Hi,
You said that the spider requests a URL, the datacentre sends it, and the spider grabs the contents and sends them back.

After that request, will the spider not discover other links in the same URL (domain)?

Or will it send a second request for a URL from a different domain, which it might have in the queue?