Using Web crawlers consumes a large amount of network resources.
If the frequency of accesses to a given server is too high, the server will suffer from severe overload.
A. True
B. False
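The overload concern above is usually addressed by "politeness": a crawler enforces a minimum interval between requests to the same host. The following is a minimal, hypothetical sketch of that idea (the class name `PoliteFetcher` and the 1-second default are illustrative assumptions, not part of any standard):

```python
import time

class PoliteFetcher:
    """Hypothetical sketch: enforce a minimum interval between
    requests to the same host so the server is not overloaded."""

    def __init__(self, min_interval=1.0):
        self.min_interval = min_interval   # seconds between requests per host
        self.next_allowed = {}             # host -> earliest time of next request

    def wait_time(self, host, now=None):
        """Return how many seconds the crawler should sleep before
        requesting from `host`, and record the scheduled request time."""
        now = time.monotonic() if now is None else now
        earliest = self.next_allowed.get(host, now)
        delay = max(0.0, earliest - now)
        # Schedule the following request min_interval after this one.
        self.next_allowed[host] = now + delay + self.min_interval
        return delay
```

For example, with `min_interval=2.0`, two back-to-back requests to the same host one second apart would make the second request wait one more second.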
As long as a crawler can download pages, it is guaranteed to be able to process them.
A. True
B. False
The robots exclusion protocol (the robots.txt protocol) solves the problem of the cost of using Web crawlers.
A. True
B. False
The “Crawl-delay:” directive in a robots.txt file asks crawlers to wait a given number of seconds between requests.
A. True
B. False
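The robots.txt behavior asked about above can be checked programmatically. Python's standard-library `urllib.robotparser` parses a robots.txt file, answers whether a URL may be fetched, and exposes the `Crawl-delay` value. The robots.txt content and the `example.com` URLs below are made-up examples:

```python
from urllib import robotparser

# Hypothetical robots.txt content illustrating Disallow and Crawl-delay.
robots_txt = """\
User-agent: *
Disallow: /private/
Crawl-delay: 10
"""

rp = robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# can_fetch applies the exclusion rules for the given user agent.
print(rp.can_fetch("*", "https://example.com/index.html"))  # allowed
print(rp.can_fetch("*", "https://example.com/private/x"))   # disallowed
# crawl_delay returns the requested delay in seconds.
print(rp.crawl_delay("*"))
```

In a real crawler, `RobotFileParser.set_url()` and `read()` would fetch the live robots.txt from the target site instead of parsing a string.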