🏗️ Python CDK: add schema transformer class #6139

Merged · 7 commits · Sep 27, 2021
Changes from 1 commit
Fix review comments
Dmytro Rezchykov committed Sep 24, 2021
commit 8d2c950e50be84038a0e15a777ecd4d498438274
4 changes: 2 additions & 2 deletions airbyte-cdk/python/airbyte_cdk/sources/abstract_source.py
@@ -47,7 +47,7 @@
from airbyte_cdk.sources.streams import Stream
from airbyte_cdk.sources.streams.http.http import HttpStream
from airbyte_cdk.sources.utils.schema_helpers import InternalConfig, split_config
- from airbyte_cdk.sources.utils.transform import Transformer
+ from airbyte_cdk.sources.utils.transform import TypeTransformer


class AbstractSource(Source, ABC):
@@ -235,7 +235,7 @@ def _checkpoint_state(self, stream_name, stream_state, connector_state, logger):
return AirbyteMessage(type=MessageType.STATE, state=AirbyteStateMessage(data=connector_state))

@lru_cache(maxsize=None)
- def _get_stream_transformer_and_schema(self, stream_name: str) -> Tuple[Transformer, dict]:
+ def _get_stream_transformer_and_schema(self, stream_name: str) -> Tuple[TypeTransformer, dict]:
"""
Lookup stream's transform object and jsonschema based on stream name.
This function would be called a lot so using caching to save on costly
Expand Down
6 changes: 3 additions & 3 deletions airbyte-cdk/python/airbyte_cdk/sources/streams/core.py
@@ -31,7 +31,7 @@
from airbyte_cdk.logger import AirbyteLogger
from airbyte_cdk.models import AirbyteStream, SyncMode
from airbyte_cdk.sources.utils.schema_helpers import ResourceSchemaLoader
- from airbyte_cdk.sources.utils.transform import TransformConfig, Transformer
+ from airbyte_cdk.sources.utils.transform import TransformConfig, TypeTransformer


def package_name_from_class(cls: object) -> str:
@@ -48,8 +48,8 @@ class Stream(ABC):
# Use self.logger in subclasses to log any messages
logger = AirbyteLogger() # TODO use native "logging" loggers with custom handlers

- # Transformer object to perform output data transformation
- transformer: Transformer = Transformer(TransformConfig.NoTransform)
+ # TypeTransformer object to perform output data transformation
+ transformer: TypeTransformer = TypeTransformer(TransformConfig.NoTransform)

@property
def name(self) -> str:
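For orientation, a minimal sketch of how a connector stream opts in to normalization by overriding this class attribute, mirroring how the unit tests below configure their streams (the stream class here is hypothetical, not part of this diff):

```python
from airbyte_cdk.sources.streams.core import Stream
from airbyte_cdk.sources.utils.transform import TransformConfig, TypeTransformer


class MyApiStream(Stream):  # hypothetical stream, for illustration only
    # Override the NoTransform default so records are cast to the types
    # declared in this stream's JSON schema before being emitted.
    transformer = TypeTransformer(TransformConfig.DefaultSchemaNormalization)
```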
32 changes: 20 additions & 12 deletions airbyte-cdk/python/airbyte_cdk/sources/utils/transform.py
@@ -33,20 +33,26 @@

class TransformConfig(Flag):
"""
- Transformer class config. Configs can be combined using bitwise or operator e.g.
+ TypeTransformer class config. Configs can be combined using bitwise or operator e.g.
```
TransformConfig.DefaultSchemaNormalization | TransformConfig.CustomSchemaNormalization
```
"""

+ # No action taken, default behaviour. Cannot be combined with any other options.
NoTransform = auto()
+ # Applies default type casting.
DefaultSchemaNormalization = auto()
+ # Allow registering custom type transformation callback. Can be combined
+ # with DefaultSchemaNormalization. In this case default type casting would
+ # be applied before custom one.
CustomSchemaNormalization = auto()
- # TODO: implement field transformation with user defined object path
+ # Field transformation based on field value path inside object. Not
+ # implemented yet.
FieldTransformation = auto()


- class Transformer:
+ class TypeTransformer:
"""
Class for transforming object before output.
"""
@@ -55,26 +61,28 @@ class Transformer:

def __init__(self, config: TransformConfig):
"""
- Initialize Transformer instance.
+ Initialize TypeTransformer instance.
:param config Transform config that would be applied to object
"""
if TransformConfig.NoTransform in config and config != TransformConfig.NoTransform:
raise Exception("NoTransform option cannot be combined with other flags.")
self._config = config
all_validators = {
Contributor: piggybacking on jsonschema native validators is a clever idea!

Contributor Author: Thanks, I'm proud of it :)

- key: self.__normalize_and_validate(key, orig_validator)
+ key: self.__get_normalizer(key, orig_validator)
for key, orig_validator in Draft7Validator.VALIDATORS.items()
# Do not validate field we do not transform for maximum performance.
if key in ["type", "array", "$ref", "properties", "items"]
}
self._normalizer = validators.create(meta_schema=Draft7Validator.META_SCHEMA, validators=all_validators)

- def register(self, normalization_callback: Callable) -> Callable:
+ def registerCustomTransform(self, normalization_callback: Callable[[Any, Dict[str, Any]], Any]) -> Callable:
"""
Register custom normalization callback.
:param normalization_callback function to be used for value
- normalization. Should return normalized value.
- :return Same callbeck, this is usefull for using register function as decorator.
+ normalization. Takes the original value and the part of the type schema that
+ describes it. Should return the normalized value. See
+ docs/connector-development/cdk-python/schemas.md for details.
+ :return Same callback, this is useful for using registerCustomTransform function as decorator.
"""
if TransformConfig.CustomSchemaNormalization not in self._config:
raise Exception("Please set TransformConfig.CustomSchemaNormalization config before registering custom normalizer")
@@ -131,7 +139,7 @@ def default_convert(original_item: Any, subschema: Dict[str, Any]) -> Any:
return original_item
return original_item

- def __normalize_and_validate(self, schema_key: str, original_validator: Callable):
+ def __get_normalizer(self, schema_key: str, original_validator: Callable):
"""
Traverse through object fields using native jsonschema validator and apply normalization function.
:param schema_key related json schema key that currently being validated/normalized.
@@ -173,16 +181,16 @@ def resolve(subschema):

return normalizator

- def transform(self, instance: Dict[str, Any], schema: Dict[str, Any]):
+ def transform(self, record: Dict[str, Any], schema: Dict[str, Any]):
"""
Normalize and validate according to config.
- :param instance object instance for normalization/transformation. All modification are done by modifing existent object.
+ :param record record instance for normalization/transformation. All modifications are done by mutating the existing object.
:schema object's jsonschema for normalization.
"""
if TransformConfig.NoTransform in self._config:
return
normalizer = self._normalizer(schema)
- for e in normalizer.iter_errors(instance):
+ for e in normalizer.iter_errors(record):
"""
just calling normalizer.validate() would throw an exception on
first validation occurences and stop processing rest of schema.
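For context, a minimal usage sketch of the renamed class (the schema and record here are illustrative, mirroring the unit tests below):

```python
from airbyte_cdk.sources.utils.transform import TransformConfig, TypeTransformer

schema = {"type": "object", "properties": {"value": {"type": "string"}}}
record = {"value": 42}

transformer = TypeTransformer(TransformConfig.DefaultSchemaNormalization)
transformer.transform(record, schema)  # mutates the record in place

assert record == {"value": "42"}  # 42 was cast to str per the schema
```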
8 changes: 4 additions & 4 deletions airbyte-cdk/python/unit_tests/sources/test_source.py
@@ -34,7 +34,7 @@
from airbyte_cdk.sources import AbstractSource, Source
from airbyte_cdk.sources.streams.core import Stream
from airbyte_cdk.sources.streams.http.http import HttpStream
- from airbyte_cdk.sources.utils.transform import TransformConfig, Transformer
+ from airbyte_cdk.sources.utils.transform import TransformConfig, TypeTransformer


class MockSource(Source):
@@ -240,8 +240,8 @@ def test_source_config_transform(abstract_source, catalog):
logger_mock = MagicMock()
streams = abstract_source.streams(None)
http_stream, non_http_stream = streams
- http_stream.transformer = Transformer(TransformConfig.DefaultSchemaNormalization)
- non_http_stream.transformer = Transformer(TransformConfig.DefaultSchemaNormalization)
+ http_stream.transformer = TypeTransformer(TransformConfig.DefaultSchemaNormalization)
+ non_http_stream.transformer = TypeTransformer(TransformConfig.DefaultSchemaNormalization)
http_stream.get_json_schema.return_value = non_http_stream.get_json_schema.return_value = SCHEMA
http_stream.read_records.return_value, non_http_stream.read_records.return_value = [{"value": 23}], [{"value": 23}]
records = [r for r in abstract_source.read(logger=logger_mock, config={}, catalog=catalog, state={})]
@@ -253,7 +253,7 @@ def test_source_config_transform_and_no_transform(abstract_source, catalog):
logger_mock = MagicMock()
streams = abstract_source.streams(None)
http_stream, non_http_stream = streams
- http_stream.transformer = Transformer(TransformConfig.DefaultSchemaNormalization)
+ http_stream.transformer = TypeTransformer(TransformConfig.DefaultSchemaNormalization)
http_stream.get_json_schema.return_value = non_http_stream.get_json_schema.return_value = SCHEMA
http_stream.read_records.return_value, non_http_stream.read_records.return_value = [{"value": 23}], [{"value": 23}]
records = [r for r in abstract_source.read(logger=logger_mock, config={}, catalog=catalog, state={})]
18 changes: 9 additions & 9 deletions airbyte-cdk/python/unit_tests/sources/utils/test_transform.py
@@ -24,7 +24,7 @@
import json

import pytest
- from airbyte_cdk.sources.utils.transform import TransformConfig, Transformer
+ from airbyte_cdk.sources.utils.transform import TransformConfig, TypeTransformer

SIMPLE_SCHEMA = {"type": "object", "properties": {"value": {"type": "string"}}}
COMPLEX_SCHEMA = {
@@ -186,30 +186,30 @@
],
)
def test_transform(schema, actual, expected):
- t = Transformer(TransformConfig.DefaultSchemaNormalization)
+ t = TypeTransformer(TransformConfig.DefaultSchemaNormalization)
t.transform(actual, schema)
assert json.dumps(actual) == json.dumps(expected)


def test_transform_wrong_config():
with pytest.raises(Exception, match="NoTransform option cannot be combined with other flags."):
- Transformer(TransformConfig.NoTransform | TransformConfig.DefaultSchemaNormalization)
+ TypeTransformer(TransformConfig.NoTransform | TransformConfig.DefaultSchemaNormalization)

with pytest.raises(Exception, match="Please set TransformConfig.CustomSchemaNormalization config before registering custom normalizer"):

class NotAStream:
- transformer = Transformer(TransformConfig.DefaultSchemaNormalization)
+ transformer = TypeTransformer(TransformConfig.DefaultSchemaNormalization)

- @transformer.register
+ @transformer.registerCustomTransform
def transform_cb(instance, schema):
pass


def test_custom_transform():
class NotAStream:
- transformer = Transformer(TransformConfig.CustomSchemaNormalization)
+ transformer = TypeTransformer(TransformConfig.CustomSchemaNormalization)

- @transformer.register
+ @transformer.registerCustomTransform
def transform_cb(instance, schema):
# Check no default conversion applied
assert instance == 12
@@ -224,9 +224,9 @@ def transform_cb(instance, schema):

def test_custom_transform_with_default_normalization():
class NotAStream:
- transformer = Transformer(TransformConfig.CustomSchemaNormalization | TransformConfig.DefaultSchemaNormalization)
+ transformer = TypeTransformer(TransformConfig.CustomSchemaNormalization | TransformConfig.DefaultSchemaNormalization)

- @transformer.register
+ @transformer.registerCustomTransform
def transform_cb(instance, schema):
# Check default conversion applied
assert instance == "12"
25 changes: 23 additions & 2 deletions docs/connector-development/cdk-python/schemas.md
@@ -27,7 +27,7 @@ def get_json_schema(self):

## Schema normalization

- It is important to ensure output data conforms to the declared json schema. This is because the destination receiving this data to load into tables may strictly enforce schema (e.g. when data is stored in a SQL database, you can't put INTEGER type into CHAR column). In the case of changes to API output (which is almost guaranteed to happen over time) or a minor mistake in jsonschema definition, data syncs could thus break because of mismatched datatype schemas.
+ It is important to ensure output data conforms to the declared json schema. This is because the destination receiving this data to load into tables may strictly enforce schema (e.g. when data is stored in a SQL database, you can't put CHAR type into INTEGER column). In the case of changes to API output (which is almost guaranteed to happen over time) or a minor mistake in jsonschema definition, data syncs could thus break because of mismatched datatype schemas.
Contributor: These docs are fantastic! Very thorough and user friendly.

Contributor Author: Thanks!

To remain robust in operation, the CDK provides a transformation ability to perform automatic object mutation to align with desired schema before outputting to the destination. All streams inherited from airbyte_cdk.sources.streams.core.Stream class have this transform configuration available. It is _disabled_ by default and can be configured per stream within a source connector.
### Default schema normalization
@@ -68,7 +68,7 @@ class MyStream(Stream):
transformer = TypeTransformer(TransformConfig.CustomSchemaNormalization)
...

- @transformer.register
+ @transformer.registerCustomTransform
def transform_function(original_value: Any, field_schema: Dict[str, Any]) -> Any:
# transformed_value = ...
return transformed_value
@@ -86,3 +86,24 @@ In this case default normalization would be skipped and only custom transformation used:
transformer = TypeTransformer(TransformConfig.DefaultSchemaNormalization | TransformConfig.CustomSchemaNormalization)
```
In this case custom normalization will be applied after the default normalization function. Note that the order of flags doesn't matter; default normalization always runs before custom.
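A sketch of what a stream combining both flags could look like (the stream class and the trimming callback are illustrative, not from this PR):

```python
from typing import Any, Dict

from airbyte_cdk.sources.streams.core import Stream
from airbyte_cdk.sources.utils.transform import TransformConfig, TypeTransformer


class CombinedStream(Stream):  # hypothetical stream, for illustration only
    # Default type casting runs first; the callback then receives the
    # already-cast value together with that field's subschema.
    transformer = TypeTransformer(
        TransformConfig.DefaultSchemaNormalization | TransformConfig.CustomSchemaNormalization
    )

    @transformer.registerCustomTransform
    def transform_function(original_value: Any, field_schema: Dict[str, Any]) -> Any:
        # Illustrative: trim whitespace from values already cast to string.
        if isinstance(original_value, str):
            return original_value.strip()
        return original_value
```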

### Performance consideration

Transforming each object on the fly adds some time to each object's processing. This time depends on object/schema complexity and hardware configuration.

Below is a performance benchmark we've done with the Facebook ads_insights schema (a complex schema with objects nested inside arrays of objects and a lot of references) and an example object.
Here is the average transform time per single object, in seconds:
```
regular transform:
0.0008423403530008121

transform without type casting (but value still being written to dict/array):
0.000776215762666349

transform without actual value setting (but iterating through object properties):
0.0006788729513330812

just traverse/validate through json schema and object fields:
0.0006139181846665452
```
On my PC (AMD Ryzen 7 5800X) it took about 0.8 milliseconds per object. As you can see, most of the time (~75%) is taken by the jsonschema traverse/validation routine and very little (less than 10%) by the actual converting. Processing time can be reduced by skipping jsonschema type checking, but then there would be no warnings about possible object/jsonschema inconsistency.
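A rough way to reproduce this kind of measurement (a sketch; the real benchmark used the Facebook ads_insights schema and a matching sample record, for which a trivial schema and record stand in here):

```python
import timeit

from airbyte_cdk.sources.utils.transform import TransformConfig, TypeTransformer

# Stand-ins for the ads_insights schema and example object used above.
schema = {"type": "object", "properties": {"value": {"type": "string"}}}
record = {"value": 42}

transformer = TypeTransformer(TransformConfig.DefaultSchemaNormalization)

n = 10_000
# Copy the record on each run because transform() mutates it in place.
total = timeit.timeit(lambda: transformer.transform(dict(record), schema), number=n)
print(f"average transform time per object: {total / n:.6f} s")
```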