Question on integrating message brokers #5

@JochenDewachter92

Description

Would you have any advice on how to create a custom streaming data source for message brokers other than Kafka, such as ZeroMQ or RabbitMQ? There's no real partitioning required, and messages have to be queued/buffered between Spark read operations. I noticed that Spark's DataSourceStreamReader class doesn't seem to play well with multi-threading...
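One common pattern for brokers without offsets (ZeroMQ, RabbitMQ) is to run the broker consumer on its own background thread and have it append into a thread-safe, offset-indexed buffer; the streaming reader then only reports and slices offsets, so Spark's calls never touch the broker client directly. Below is a minimal, hedged sketch of such a buffer, assuming PySpark's `DataSourceStreamReader` API where `latestOffset()`, `read()`, and `commit()` are invoked on the driver; the `MessageBuffer` class and its method names are hypothetical, not part of any library:

```python
import threading

class MessageBuffer:
    """Thread-safe, offset-indexed buffer (hypothetical helper).

    A broker consumer thread (e.g. a RabbitMQ/ZeroMQ callback) calls
    append(); the streaming reader calls latest_offset(), take(), and
    commit() from Spark's side. A single lock keeps the two sides from
    racing, which sidesteps the multi-threading issues inside the
    reader itself.
    """

    def __init__(self):
        self._messages = []   # messages not yet committed
        self._base = 0        # offset of the first buffered message
        self._lock = threading.Lock()

    def append(self, message):
        # Called from the broker consumer thread.
        with self._lock:
            self._messages.append(message)

    def latest_offset(self):
        # Total messages seen so far; a reader would report this from
        # latestOffset() to bound the next micro-batch.
        with self._lock:
            return self._base + len(self._messages)

    def take(self, start, end):
        # Messages in [start, end); called when Spark reads a batch.
        with self._lock:
            return self._messages[start - self._base:end - self._base]

    def commit(self, end):
        # Drop everything below the committed offset so the buffer
        # doesn't grow without bound.
        with self._lock:
            self._messages = self._messages[end - self._base:]
            self._base = end
```

With this split, the reader stays single-threaded from Spark's point of view: each micro-batch is just the slice between the last committed offset and `latest_offset()`, and since no partitioning is needed, one partition per batch suffices.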
