feat: add `on_skipped_request` decorator, to process links skipped according to robots.txt rules #1166
base: master
Conversation
Pull Request Overview
This PR introduces an `on_skipped_request` decorator and enhances robots.txt integration across multiple crawler implementations. Key changes include (a usage sketch follows this list):
- Adding a robots.txt endpoint and constant in server endpoints.
- Integrating robots.txt filtering in both Playwright and Abstract HTTP crawlers.
- Extending unit tests to cover new robots.txt behaviors and the on_skipped_request hook.
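As a rough illustration of the intended usage, here is a minimal sketch, not code from this PR: the callback's `(url, reason)` signature is an assumption based on the review discussion, and `respect_robots_txt_file` comes from #1162.

```python
import asyncio

from crawlee.crawlers import BeautifulSoupCrawler, BeautifulSoupCrawlingContext


async def main() -> None:
    # Only fetch pages that robots.txt allows (option added in #1162).
    crawler = BeautifulSoupCrawler(respect_robots_txt_file=True)

    @crawler.router.default_handler
    async def default_handler(context: BeautifulSoupCrawlingContext) -> None:
        await context.enqueue_links()

    # The decorator added by this PR: links filtered out (e.g. by robots.txt
    # rules) are reported here instead of being silently dropped. The PR
    # defines a SkippedReason type; a plain str stands in for it here.
    @crawler.on_skipped_request
    async def skipped_handler(url: str, reason: str) -> None:
        print(f'Skipped {url}: {reason}')

    await crawler.run(['https://crawlee.dev'])


if __name__ == '__main__':
    asyncio.run(main())
```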
Reviewed Changes
Copilot reviewed 14 out of 14 changed files in this pull request and generated 1 comment.
| File | Description |
|---|---|
| `tests/unit/server_endpoints.py` | Added a `ROBOTS_TXT` constant with sample directives for test cases. |
| `tests/unit/server.py` | Introduced a new endpoint to serve the robots.txt file. |
| `tests/unit/crawlers/*` | Added tests for robots.txt compliance and the `on_skipped_request` hook across crawlers. |
| `src/crawlee/crawlers/*` | Updated link extraction logic and added skipped-request handling for robots.txt. |
| `src/crawlee/crawlers/_basic/_basic_crawler.py` | Integrated the robots.txt check in `BasicCrawler` with a new `on_skipped_request` callback. |
| `src/crawlee/_utils/robots.py` | Added a new `RobotsTxtFile` utility using Protego for parsing robots.txt content. |
| `pyproject.toml` | Added a dependency on `protego` for robots.txt parsing. |
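The `RobotsTxtFile` utility in `src/crawlee/_utils/robots.py` delegates parsing to Protego. A minimal sketch of such a helper, assuming an `httpx`-based fetch and illustrative method names (`load`, `is_allowed`) rather than the PR's exact API:

```python
from __future__ import annotations

import httpx
from protego import Protego


class RobotsTxtFile:
    """Sketch of a robots.txt helper built on Protego (names are illustrative)."""

    def __init__(self, url: str, robots: Protego) -> None:
        self._url = url
        self._robots = robots

    @classmethod
    async def load(cls, url: str) -> RobotsTxtFile:
        # Fetch the robots.txt body; a real implementation would reuse the
        # crawler's HTTP client and handle error responses gracefully.
        async with httpx.AsyncClient() as client:
            response = await client.get(url)
        return cls(url, Protego.parse(response.text))

    def is_allowed(self, url: str, user_agent: str = '*') -> bool:
        # Protego answers the allow/deny question per user agent.
        return self._robots.can_fetch(url, user_agent)
```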
Comments suppressed due to low confidence (1)
src/crawlee/crawlers/_basic/_basic_crawler.py:1000

- [nitpick] Consider renaming the parameter `need_mark` to `mark_request` for clearer intent in the `_handle_skipped_request` method.

```python
def _handle_skipped_request(self, request: Request | str, reason: SkippedReason, *, need_mark: bool = False) -> None:
```
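Applied to that signature, the suggested rename would read:

```python
def _handle_skipped_request(
    self, request: Request | str, reason: SkippedReason, *, mark_request: bool = False
) -> None: ...
```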
Co-authored-by: Vlada Dusek <[email protected]>
### Description
Update `UnprocessedRequest` to match the actual data. Add a test.
### Issues
- Closes: apify#1150
… and the handler is executed for `PlaywrightCrawler` (apify#1163)
### Description
- For `PlaywrightCrawler`, cookies should only be saved to the session store once the handler has fully executed, because the browser may continue to set cookies while the handler is running.
### Testing
- Add a test simulating the browser setting a cookie during `default_handler` execution.
- Update the `test_isolation_cookies` test.
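A toy sketch of the ordering this change enforces; all names here are hypothetical stand-ins, not crawlee internals:

```python
import asyncio
from typing import Any


async def run_user_handler(context: dict[str, Any]) -> None:
    # Simulate the browser setting a cookie while the handler is still running.
    context['browser_cookies'].append({'name': 'set-mid-handler', 'value': '1'})


async def process_request(context: dict[str, Any]) -> None:
    # The fix: wait for the handler to finish, then snapshot cookies, so
    # cookies set during handler execution are not lost.
    await run_user_handler(context)
    context['session_cookies'] = list(context['browser_cookies'])


asyncio.run(process_request({'browser_cookies': [], 'session_cookies': []}))
```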
### Description
Adds retries for unprocessed requests in the `add_requests_batched` call. The retry calls `_process_batch` recursively: it first operates on the full request batch and then on batches of unprocessed requests, until the retry limit is reached or all requests are processed. Each retry happens after a delay that increases linearly with the attempt number. Unprocessed requests are not counted in `request_queue.get_total_count`. Adds a test.
### Issues
- Closes: [Handle unprocessed requests in batch_add_requests](apify/apify-sdk-python#456)
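An illustrative sketch of that retry scheme; the client and response shapes are assumptions, and only the control flow mirrors the commit message: process the full batch first, then recurse on unprocessed requests with a linearly increasing delay, until the retry limit is hit.

```python
import asyncio
from collections.abc import Sequence
from typing import Any


async def _process_batch(
    client: Any,
    requests: Sequence[dict],
    *,
    attempt: int = 1,
    max_attempts: int = 5,
    base_delay: float = 1.0,
) -> None:
    response = await client.batch_add_requests(requests)
    unprocessed = response.unprocessed_requests
    if unprocessed and attempt < max_attempts:
        # Delay grows linearly with the attempt number: 1s, 2s, 3s, ...
        await asyncio.sleep(base_delay * attempt)
        await _process_batch(
            client,
            unprocessed,
            attempt=attempt + 1,
            max_attempts=max_attempts,
            base_delay=base_delay,
        )
```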
…tation count exceeds maximum (apify#1147)
- Call `failed_request_handler` for `SessionError` when the session rotation count exceeds the maximum.
Could we please cover `on_skipped_request` somewhere in the docs? 🙂
LGTM
### Description
Extends the `respect_robots_txt_file` option (#1162) by adding an `on_skipped_request` decorator to handle links skipped according to robots.txt rules.
### Issues
- `on_skipped_request` hook #1160
- `respect_robots_txt_file` option #1162