Description
I noticed an issue with Crawl4AI: it initially extracts content from the given links as expected, but once a link fails, the tool starts crawling the website, which I don't want. The crawling is slow and significantly increases the load on my PC.
I would prefer to use Crawl4AI for content extraction only, without triggering any crawling after a link failure. Is there a way to disable the crawling feature so that the tool only extracts content, regardless of whether a link fails?
I’m attaching screenshots below to help illustrate the problem:
- Before any link fails: shows the expected content extraction
- After a link fails: shows that crawling starts unexpectedly
Could you provide guidance on how to disable the crawling feature while keeping the content extraction process intact?
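As a workaround while waiting for guidance, the desired behavior can be sketched without relying on any particular Crawl4AI option: treat each link as an independent single-page extraction job, record failures, and never fall back to crawling the site. In the sketch below, `extract_page` is a hypothetical stand-in for whatever single-page extraction call you use (e.g. Crawl4AI's `arun()` on one URL); the function names and error handling are assumptions, not the library's documented API.

```python
def extract_all(urls, extract_page):
    """Extract content from each URL independently.

    A failed link is recorded and skipped -- we deliberately do NOT
    enqueue or follow any links discovered on a page, so no crawl
    can start as a side effect of a failure.
    """
    results = {}
    for url in urls:
        try:
            results[url] = extract_page(url)
        except Exception as exc:
            # Record the failure and move on; no fallback crawling.
            results[url] = f"FAILED: {exc}"
    return results
```

Recent Crawl4AI versions expose deep-crawl behavior through the run configuration, so it may also be worth checking whether your setup passes a deep-crawl strategy anywhere; if it does, removing it should keep `arun()` to a single page, but please confirm against the current docs.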