Source code: Lib/urllib/robotparser.py
This module provides a single class, RobotFileParser, which answers questions about whether or not a particular user agent can fetch a URL on the web site that published the robots.txt file. For more details on the structure of robots.txt files, see http://www.robotstxt.org/orig.html.
class urllib.robotparser.RobotFileParser(url='')

This class provides methods to read, parse and answer questions about the robots.txt file at url.
set_url(url)
Sets the URL referring to a robots.txt file.

read()
Reads the robots.txt URL and feeds it to the parser.
parse(lines)
Parses the lines argument.
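For instance, here is a minimal sketch of feeding parse() rules obtained somewhere other than read(); the rules, bot name, and URLs are illustrative assumptions, not part of the module:

>>> import urllib.robotparser
>>> rp = urllib.robotparser.RobotFileParser()
>>> # Hypothetical robots.txt content held in a string
>>> rp.parse("User-agent: *\nDisallow: /private/".splitlines())
>>> rp.can_fetch("SomeBot", "http://example.com/private/page.html")
False
>>> rp.can_fetch("SomeBot", "http://example.com/public.html")
True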
can_fetch(useragent, url)
Returns True if the useragent is allowed to fetch the url according to the rules contained in the parsed robots.txt file.
mtime()
Returns the time the robots.txt file was last fetched. This is useful for long-running web spiders that need to check for new robots.txt files periodically.
modified()
Sets the time the robots.txt file was last fetched to the current time.
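As a sketch of how mtime() and modified() might be combined in such a long-running spider, assuming an illustrative 24-hour refresh interval and a hypothetical helper function:

import time
import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.set_url("http://www.musi-cal.com/robots.txt")

def can_fetch_fresh(useragent, url, max_age=24 * 3600):
    # Hypothetical helper: re-fetch robots.txt when it has never
    # been read (mtime() is 0) or is older than max_age seconds.
    if time.time() - rp.mtime() > max_age:
        rp.read()
        rp.modified()  # record the fetch time explicitly
    return rp.can_fetch(useragent, url)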
The following example demonstrates basic use of the RobotFileParser class.
>>> import urllib.robotparser
>>> rp = urllib.robotparser.RobotFileParser()
>>> rp.set_url("http://www.musi-cal.com/robots.txt")
>>> rp.read()
>>> rp.can_fetch("*", "http://www.musi-cal.com/cgi-bin/search?city=San+Francisco")
False
>>> rp.can_fetch("*", "http://www.musi-cal.com/")
True