It is difficult for a server to satisfy many requests per second, or to serve large file downloads, whether they come from a single crawler or from multiple crawlers.
Using Web crawlers consumes a lot of network resources.
A. True
B. False
If a given server is accessed too frequently, it will suffer from severe overload.
A. True
B. False
As long as a crawler can download pages, it is guaranteed to be able to process them.
A. True
B. False
The robots exclusion protocol, also known as the robots.txt protocol, can solve the problem of the costs incurred by Web crawlers.
A. True
B. False
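For reference, the robots exclusion protocol mentioned in the last question can be checked programmatically. Below is a minimal sketch using Python's standard-library `urllib.robotparser`; the robots.txt content and the example.com URLs are hypothetical examples, not taken from any real site.

```python
# Sketch: a polite crawler consulting a (hypothetical) robots.txt
# before fetching URLs, via the standard-library parser.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: *
Disallow: /private/
Crawl-delay: 5
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Check each URL against the rules before downloading it.
print(parser.can_fetch("*", "https://example.com/public/page.html"))   # allowed
print(parser.can_fetch("*", "https://example.com/private/data.html"))  # disallowed

# The Crawl-delay directive asks crawlers to wait between requests,
# which limits the load imposed on the server.
print(parser.crawl_delay("*"))
```

Note that robots.txt is purely advisory: it reduces crawler-induced load only when crawlers choose to obey it, which is why the protocol alone cannot fully solve the cost problem.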