
[Bug]: CORS (Cross-Origin Resource Sharing) error when trying to use Crawl4AI to connect to Twitter. #641

Closed
@alvaro562003

Description

crawl4ai version

Version: 0.4.248

Expected Behavior

I am encountering a CORS (Cross-Origin Resource Sharing) error when trying to use Crawl4AI to connect to Twitter. Crawl4AI is failing to load essential scripts from Twitter's domain (abs.twimg.com), which is preventing proper connection.

Here are the console error messages I am consistently seeing in the logs:

[CONSOLE] ℹ Console: Access to script at 'https://abs.twimg.com/responsive-web/client-web/vendor.c4b9145a.js' from origin 'https://twitter.com' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource.
[CONSOLE] ℹ Console: Failed to load resource: net::ERR_FAILED
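These messages are emitted by the page itself and merely relayed by the crawler when `log_console=True`, so they can flood the output. As a minimal sketch for separating them from other console noise (the `is_cors_console_line` helper is hypothetical, not part of Crawl4AI):

```python
import re

# Patterns matching the CORS-related console lines quoted above.
CORS_PATTERNS = [
    re.compile(r"blocked by CORS policy"),
    re.compile(r"No 'Access-Control-Allow-Origin' header"),
    re.compile(r"net::ERR_FAILED"),
]

def is_cors_console_line(line: str) -> bool:
    """Return True if a captured browser console line looks CORS-related."""
    return any(p.search(line) for p in CORS_PATTERNS)

log = (
    "[CONSOLE] Access to script at 'https://abs.twimg.com/...' "
    "from origin 'https://twitter.com' has been blocked by CORS policy"
)
print(is_cors_console_line(log))                        # True
print(is_cors_console_line("[CONSOLE] page loaded"))    # False
```

Filtering these lines out (or counting them) makes it easier to see whether the crawl itself failed or only third-party scripts did.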

Current Behavior

The program stops and prints the error logs shown above.

Is this reproducible?

Yes

Inputs Causing the Bug

URLs: https://www.x.com
NB: I tried many configurations; I am sending the most minimal one here.

Steps to Reproduce

Launch the Python code from a terminal and watch the console output.

Code snippets

site_url = "https://www.x.com"

import asyncio
import nest_asyncio
from crawl4ai import AsyncWebCrawler, CacheMode, BrowserConfig, CrawlerRunConfig
from crawl4ai.markdown_generation_strategy import DefaultMarkdownGenerator
import logging

# Logging configuration
logging.basicConfig(level=logging.DEBUG, 
                   format='%(asctime)s - %(levelname)s - %(message)s')
logger = logging.getLogger('twitter_crawler')

# Apply nest_asyncio to allow nested event loops
nest_asyncio.apply()

async def main():
    # Configuration tuned for Twitter
    browser_conf = BrowserConfig(
        headless=False,  # Visible mode for debugging
    )

    crawler_config = CrawlerRunConfig(
        cache_mode=CacheMode.DISABLED,  # BYPASS would also work; both force a fresh fetch
        log_console=True,  # Capture the page's console messages
    )

    try:
        async with AsyncWebCrawler(
            config=browser_conf,
            verbose=True,
        ) as crawler:
            result = await crawler.arun(
                url=site_url,
                config=crawler_config
            )
            if result.success:
                logger.info("Captured HTML length: %d", len(result.html or ''))

    except Exception as e:
        logger.error("General error: %s", e)
        raise

if __name__ == "__main__":
    asyncio.run(main())
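If the failing third-party scripts are actually blocking the crawl (rather than just logging errors), one hedged workaround is to relax Chromium's web security through extra browser flags. This sketch assumes `BrowserConfig` forwards an `extra_args` list as Chromium command-line switches; verify against your installed Crawl4AI version, and note that disabling web security is only acceptable in a throwaway scraping browser, never for normal browsing:

```python
from crawl4ai import BrowserConfig

# Assumption: BrowserConfig exposes `extra_args`, passed through as
# Chromium command-line switches. Check your Crawl4AI version.
browser_conf = BrowserConfig(
    headless=False,
    extra_args=[
        "--disable-web-security",  # turn off CORS enforcement (scraping only!)
        "--disable-features=IsolateOrigins,site-per-process",
    ],
)
```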

OS

Windows

Python version

Python 3.11.9

Browser

chromium

Browser version

Version 133.0.6943.16 (Official Build) (64-bit)

Error logs & Screenshots (if applicable)

(Screenshot attached.)
