Question

The robots exclusion protocol, also known as the robots.txt protocol, can solve the problem of the costs incurred by Web crawlers.

A. True
B. False
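For reference, a minimal robots.txt (the host and paths here are hypothetical) tells compliant crawlers which parts of a site they may fetch; it limits crawler load by convention rather than eliminating crawling costs:

```
# Served at http://example.com/robots.txt (hypothetical host)
User-agent: *        # applies to all crawlers
Disallow: /private/  # compliant crawlers will not fetch this subtree
Allow: /public/      # explicitly permitted
```

Note that compliance is voluntary: the protocol does not enforce anything on a misbehaving crawler.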

More questions

The "Crawl-delay:" parameter in the robots.txt file asks crawlers to wait a specified number of seconds between requests.

A. True
B. False
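A sketch of the directive in context (Crawl-delay is a widely honored but non-standard extension to the robots exclusion protocol; the 10-second value is illustrative):

```
User-agent: *
Crawl-delay: 10      # request at least 10 seconds between successive fetches
Disallow: /cgi-bin/
```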

A parallel crawler runs multiple crawling processes in parallel, maximizing the download rate while minimizing overhead and avoiding repeated downloads of the same page.

A. True
B. False

The new URLs discovered during the crawling process are assigned to the crawling processes so that the same page is not downloaded more than once.

A. True
B. False

Given the following program

#include <stdio.h>

int main() {
    int a1, a2;
    char c1, c2;
    scanf("%d%d", &a1, &a2);
    scanf("%c%c", &c1, &c2);
}

if a1, a2, c1, and c2 are required to take the values 10, 20, A, and B respectively, the correct input is ( ). (Note: □ denotes a space, denotes Enter.)

A. 1020AB
B. 10□20AB
C. 10□20□ABC
D. 10□20AB
