feat: add `respectRobotsTxtFile` crawler option #2910
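For context, a minimal usage sketch of the option this PR adds. The crawler class and handler body are illustrative; only the `respectRobotsTxtFile` option name comes from the PR itself:

```ts
import { CheerioCrawler } from 'crawlee';

const crawler = new CheerioCrawler({
    // The new option: skip URLs that the target site's robots.txt disallows.
    respectRobotsTxtFile: true,
    async requestHandler({ request, enqueueLinks, log }) {
        log.info(`Processing ${request.url}`);
        // With the option enabled, links disallowed by robots.txt are
        // expected to be filtered out before they reach the request queue.
        await enqueueLinks();
    },
});

await crawler.run(['https://example.com']);
```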
If it is simple, I would also support this filter in `crawler.addRequests()`. I know it is a small wrapper above `requestQueue.addRequests()`, but since it is on the `crawler` object, users will expect it to respect robots. It would drop those requests later when fetching, but polluting and draining the queue is bad for performance.
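A rough sketch of what that filtering step could look like; `isAllowedByRobots` is a placeholder for whatever robots.txt check the crawler already performs when fetching, not the actual implementation:

```ts
// Hypothetical pre-filtering before the requests hit the queue.
// Only the shape is meant to be illustrative here.
async function filterDisallowed(
    urls: string[],
    isAllowedByRobots: (url: string) => Promise<boolean>,
): Promise<string[]> {
    const allowed: string[] = [];
    for (const url of urls) {
        // Skip URLs the target site's robots.txt disallows instead of
        // enqueueing them and dropping them later.
        if (await isAllowedByRobots(url)) allowed.push(url);
    }
    return allowed;
}
```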
Right, I was thinking about that one as well; it should be simple, will do.
Implemented via e230191.

Btw, this is just a perf optimization. Technically, it was already working this way, since we check if the request is valid inside `_runTaskFunction()`. Now we also skip the disallowed ones when adding to the queue via `crawler.addRequests()`, like we do with `enqueueLinks()`.
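In usage terms, the expected behaviour after that commit would be along these lines (the URLs and crawler class are illustrative, and the second URL is only assumed to be disallowed):

```ts
import { CheerioCrawler } from 'crawlee';

const crawler = new CheerioCrawler({
    respectRobotsTxtFile: true,
    async requestHandler({ request, log }) {
        log.info(`Crawling ${request.url}`);
    },
});

// Disallowed URLs should now be skipped at this point rather than being
// enqueued and then dropped inside the task function.
await crawler.addRequests([
    'https://example.com/allowed-page',
    'https://example.com/admin/secret', // assumed to be disallowed by robots.txt
]);

await crawler.run();
```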
Note that for `enqueueLinks`, we'll only check against the current sitemap (possibly enqueueing forbidden different-domain links), but with `addRequests`, we'll check the `robots.txt` files for all of the links separately (possibly downloading many `robots.txt` files).

It kinda makes sense to me (and as you're saying, it's just a matter of performance), just making sure we all understand this right.
(nvm, the performance difference is just RQ utilization, the requests to `robots.txt` files will be made either way)
We check the URLs based on the robots.txt for the originating request (I guess the "sitemap" is a typo? we don't fetch/check sitemaps here). If there is a link that goes outside of the domain, it will be enqueued as usual (if allowed by the enqueue strategy) and checked again when processing. With `addRequests`, we don't know where they came from, so we need to check them one by one. We have a cache for this, so if they are all from the same domain, we only fetch the robots file once.
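A simplified sketch of the per-origin caching described here; `RobotsRules` and `fetchRobots` are placeholders, only the caching pattern is the point:

```ts
interface RobotsRules {
    isAllowed(url: string): boolean;
}

const robotsCache = new Map<string, Promise<RobotsRules>>();

async function isAllowedByRobots(
    url: string,
    fetchRobots: (origin: string) => Promise<RobotsRules>,
): Promise<boolean> {
    const origin = new URL(url).origin;
    // Fetch and parse robots.txt at most once per origin; concurrent checks
    // for the same origin share the in-flight promise.
    let cached = robotsCache.get(origin);
    if (!cached) {
        cached = fetchRobots(origin);
        robotsCache.set(origin, cached);
    }
    return (await cached).isAllowed(url);
}
```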