PoC with pokeapi and substream #41638
base: master
Conversation
# * have the DeclarativePartitionGenerator relying on the stream instead of the stream slicer (but we would have to keep more legacy code)
# * find a way to access/expose a stream slicer concept through the manifest
# For this PoC, we will just assume the retriever exposes a stream_slicer
stream_slicer = stream.retriever.stream_slicer
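To make the bridge concrete, here is a minimal, self-contained sketch of the idea: the legacy slicer hanging off the retriever is wrapped so that each legacy slice becomes one concurrent partition. All class names (`StreamSlice`, `LegacyStreamSlicer`, `DeclarativePartition`) are illustrative stand-ins, not the actual CDK API.

```python
# Hypothetical sketch: adapting a legacy declarative stream slicer into
# concurrent-style partitions. The names below are illustrative, not the
# real Airbyte CDK interfaces.
from dataclasses import dataclass
from typing import Any, Iterable, List, Mapping


@dataclass
class StreamSlice:
    """A legacy slice: a mapping describing one chunk of work."""
    partition: Mapping[str, Any]


class LegacyStreamSlicer:
    """Stands in for the slicer exposed by `stream.retriever.stream_slicer`."""

    def stream_slices(self) -> Iterable[StreamSlice]:
        # For a pokeapi substream, each parent record would yield one slice.
        for pokemon in ("bulbasaur", "charmander", "squirtle"):
            yield StreamSlice(partition={"pokemon_name": pokemon})


@dataclass
class DeclarativePartition:
    """Concurrent-style partition wrapping a single legacy slice."""
    slice_: StreamSlice

    def to_slice(self) -> Mapping[str, Any]:
        return self.slice_.partition


def generate_partitions(slicer: LegacyStreamSlicer) -> List[DeclarativePartition]:
    """Bridge: one concurrent partition per legacy stream slice."""
    return [DeclarativePartition(s) for s in slicer.stream_slices()]


partitions = generate_partitions(LegacyStreamSlicer())
print([p.to_slice()["pokemon_name"] for p in partitions])
```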
why do we need a stream slicer? imo that's a concept worth deprecating in favor of partitions
The challenge I see us having is that we will need to keep the current model_to_component_factory
until we migrate everything. Until then, we would probably rather not duplicate code. So here, I try to rely on the stream_slicer. To me, this concept will become the DeclarativePartitionGenerator
in the concurrent code. However, during the transition, when we will have both the low-code in the old CDK and the low-code in the concurrent CDK, I'm not exactly sure what the best way is to re-use the logic that builds this.
The only custom StreamSlicers I see are for:
- orb
- posthog
- partnerstack
- railz
🌶️ We should just release a breaking change and accept that StreamSlicers aren't a real concept
This is not the level of stream slicer that is useful for matching with a concurrent partition, though. The logic that creates this level is here.
okok. so just a matter of a missing interface since we assume the retriever has a stream_slicer 👍
just a matter of a missing interface
The mindset for this PoC was: create the DeclarativeStream as it exists today and fetch the components within this DeclarativeStream to create the concurrent streams. This does not work well with custom components, as they are not forced to expose StreamSlicers.
So we might even be able to avoid exposing a new interface if we modify the model_to_component_factory to return a DefaultStream,
i.e. the model_to_component_factory already knows about this interface through _merge_stream_slicers.
I was trying to avoid that, as this part of the code is quite complex and having to maintain both seems like a challenge. But I guess we could have a flow like: by reading a stream declaration in the manifest, determine if it is concurrent-compatible. If so, use a new DefaultStream; otherwise, fall back on the current DeclarativeStream.
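The fallback flow described above can be sketched as a small dispatch function. The compatibility check used here (`is_concurrent_compatible`, keyed off the retriever type) and the returned class names are hypothetical placeholders; the real check would inspect whatever features the concurrent path actually supports.

```python
# Illustrative sketch of the proposed fallback flow: per stream definition
# in the manifest, build a concurrent DefaultStream when compatible and
# fall back to the legacy DeclarativeStream otherwise. All names here
# (`is_concurrent_compatible`, the supported-type set) are assumptions.
from typing import Any, List, Mapping

SUPPORTED_RETRIEVER_TYPES = {"SimpleRetriever"}  # assumption for this sketch


def is_concurrent_compatible(stream_definition: Mapping[str, Any]) -> bool:
    """A stream is concurrent-compatible if it avoids custom components."""
    retriever = stream_definition.get("retriever", {})
    return retriever.get("type") in SUPPORTED_RETRIEVER_TYPES


def build_streams(manifest: Mapping[str, Any]) -> List[str]:
    """Return which stream class each manifest definition would be built as."""
    built = []
    for definition in manifest.get("streams", []):
        if is_concurrent_compatible(definition):
            built.append(f"DefaultStream({definition['name']})")
        else:
            built.append(f"DeclarativeStream({definition['name']})")
    return built


manifest = {
    "streams": [
        {"name": "pokemon", "retriever": {"type": "SimpleRetriever"}},
        {"name": "custom", "retriever": {"type": "CustomRetriever"}},
    ]
}
print(build_streams(manifest))
```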
return streams

def read(self, logger: logging.Logger, config: TConfig, catalog: TCatalog, state: Optional[TState] = None) -> Iterable[AirbyteMessage]:
    concurrency_level = 10
I'd expect this to be defined in the YAML file
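For illustration, moving the hardcoded value into the manifest might look like the fragment below. The `concurrency_level` key and its shape are an assumption for this sketch, not an existing field of the declarative component schema at the time of this PoC.

```yaml
# Hypothetical manifest fragment: declare the concurrency level in YAML
# instead of hardcoding `concurrency_level = 10` in read().
version: "0.50.0"
concurrency_level:
  default_concurrency: 10  # assumed key name, for illustration only
streams:
  - type: DeclarativeStream
    name: pokemon
```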
What
PoC to show how we can move low-code connectors to concurrent
How
Review guide
User Impact
After commit 4b52a9d, the sync went from 45.831 seconds to 10.779 seconds.