Advanced class-based conversions= mechanism #402

Open
simonw opened this issue Feb 6, 2022 · 15 comments
Comments

simonw (Owner) commented Feb 6, 2022

The conversions= parameter works like this at the moment: https://sqlite-utils.datasette.io/en/3.23/python-api.html#converting-column-values-using-sql-functions

db["places"].insert(
    {"name": "Wales", "geometry": wkt},
    conversions={"geometry": "GeomFromText(?, 4326)"},
)

This proposal is to support values in that dictionary that are objects, not strings, which can represent more complex conversions - spun out from #399.

New proposed mechanism:

from sqlite_utils.utils import LongitudeLatitude

db["places"].insert(
    {
        "name": "London",
        "point": (-0.118092, 51.509865)
    },
    conversions={"point": LongitudeLatitude},
)

Here LongitudeLatitude is a magical value which does TWO things: it sets up the GeomFromText(?, 4326) SQL fragment, and it handles converting the (-0.118092, 51.509865) tuple into a POINT({} {}) string.

This would involve a change to the conversions= contract - where it usually expects a SQL string fragment, but it can also take an object which combines that SQL string fragment with a Python conversion function.

Best of all... this resolves the lat, lon vs. lon, lat dilemma, because you can use from sqlite_utils.utils import LongitudeLatitude OR from sqlite_utils.utils import LatitudeLongitude depending on which you prefer!
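To make the "magical value" idea concrete, here is one shape such an object could take - a purely illustrative sketch, not the actual sqlite-utils API; the class names and the convert() hook are assumptions:

```python
# Illustrative sketch only: a Conversion-style object that carries both
# the SQL fragment and the Python-side value transformation.
class Conversion:
    sql = "?"  # SQL fragment containing one placeholder

    @classmethod
    def convert(cls, value):
        return value


class LongitudeLatitude(Conversion):
    sql = "GeomFromText(?, 4326)"

    @classmethod
    def convert(cls, value):
        lng, lat = value  # expects (longitude, latitude)
        return "POINT({} {})".format(lng, lat)


class LatitudeLongitude(LongitudeLatitude):
    @classmethod
    def convert(cls, value):
        lat, lng = value  # expects (latitude, longitude)
        return "POINT({} {})".format(lng, lat)
```

Either class produces the same POINT(...) WKT string, which is how the ordering dilemma dissolves: the caller picks the import that matches their tuple order.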

Originally posted by @simonw in #399 (comment)

simonw (Owner) commented Feb 6, 2022

From that thread, two extra ideas which it may be possible to support in a single implementation:

from sqlite_utils.conversions import LongitudeLatitude

db["places"].insert(
    {
        "name": "London",
        "lng": -0.118092,
        "lat": 51.509865,
    },
    conversions={"point": LongitudeLatitude("lng", "lat")},
)

And

db["places"].insert(
    {
        "name": "London",
        "point": LongitudeLatitude(-0.118092, 51.509865)
    }
)

simonw (Owner) commented Feb 6, 2022

So the key idea here is to introduce a new abstract base class, Conversion, which has the following abilities:

  • Can wrap one or more Python values (if called using the constructor) such that the .insert_all() method knows how to transform those into a format that can be included in an insert - something like GeomFromText(?, 4326) with input POINT(-0.118092 51.509865)
  • Can be passed to conversions={"point": LongitudeLatitude} in a way that then knows to apply that conversion to every value in the "point" key of the data being inserted.
  • Maybe also extend conversions= to allow the definition of additional keys that take other columns from the same row as input? That's the conversions={"point": LongitudeLatitude("lng", "lat")} example above - it may not be possible to get this working with the rest of the design though.

simonw (Owner) commented Feb 6, 2022

I like the idea that the contract for Conversion (or rather for its subclasses) is that it can wrap a Python value and then return both the SQL fragment - e.g. GeomFromText(?, 4326) - and the values that should be used as the SQL parameters.
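That contract could be as small as two pieces of state - a sketch with hypothetical names, not a settled API:

```python
# Sketch of the contract: an instance wraps a Python value and can
# report the SQL fragment plus the parameters to bind. Hypothetical API.
class Conversion:
    sql = "?"

    def __init__(self, value):
        self.value = value

    def params(self):
        return [self.value]


class LongitudeLatitude(Conversion):
    sql = "GeomFromText(?, 4326)"

    def params(self):
        lng, lat = self.value
        return ["POINT({} {})".format(lng, lat)]
```

With that, point = LongitudeLatitude((-0.118092, 51.509865)) gives .insert_all() everything it needs for one cell: point.sql and point.params().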

simonw (Owner) commented Feb 6, 2022

I think this is the code that needs to become aware of this system:

sql = """
    INSERT {or_what}INTO [{table}] ({columns}) VALUES {rows};
""".strip().format(
    or_what=or_what,
    table=self.name,
    columns=", ".join("[{}]".format(c) for c in all_columns),
    rows=", ".join(
        "({placeholders})".format(
            placeholders=", ".join(
                [conversions.get(col, "?") for col in all_columns]
            )
        )
        for record in chunk
    ),
)
flat_values = list(itertools.chain(*values))
queries_and_params = [(sql, flat_values)]
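One way that placeholder loop could become aware of the new objects - hypothetical, assuming Conversion classes expose a .sql attribute as sketched earlier in the thread:

```python
# Sketch: build per-row placeholders, accepting either the current
# SQL-fragment strings or (hypothetical) Conversion classes with .sql.
class Conversion:
    sql = "?"


class LongitudeLatitude(Conversion):
    sql = "GeomFromText(?, 4326)"


def cell_placeholder(conversions, col):
    conv = conversions.get(col)
    if conv is None:
        return "?"
    if isinstance(conv, str):  # current behaviour: raw SQL fragment
        return conv
    return conv.sql  # Conversion subclass


def row_placeholders(conversions, all_columns):
    return "({})".format(
        ", ".join(cell_placeholder(conversions, c) for c in all_columns)
    )
```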

There's an earlier branch that runs for upserts which needs to be modified too:

sql = "INSERT OR IGNORE INTO [{table}]({pks}) VALUES({pk_placeholders});".format(
    table=self.name,
    pks=", ".join(["[{}]".format(p) for p in pks]),
    pk_placeholders=", ".join(["?" for p in pks]),
)
queries_and_params.append((sql, [record[col] for col in pks]))
# UPDATE [book] SET [name] = 'Programming' WHERE [id] = 1001;
set_cols = [col for col in all_columns if col not in pks]
if set_cols:
    sql2 = "UPDATE [{table}] SET {pairs} WHERE {wheres}".format(
        table=self.name,
        pairs=", ".join(
            "[{}] = {}".format(col, conversions.get(col, "?"))
            for col in set_cols
        ),
        wheres=" AND ".join("[{}] = ?".format(pk) for pk in pks),
    )
    queries_and_params.append(
        (
            sql2,
            [record[col] for col in set_cols]
            + [record[pk] for pk in pks],
        )
    )

eyeseast (Contributor) commented Feb 7, 2022

I wonder if there's any overlap with the goals here and the sqlite3 module's concept of adapters and converters: https://docs.python.org/3/library/sqlite3.html#sqlite-and-python-types

I'm not sure that's exactly what we're talking about here, but it might be a parallel with some useful ideas to borrow.

simonw (Owner) commented Feb 7, 2022

Hah, that's interesting - I've never used that mechanism before so it wasn't something that came to mind.

They seem to be using a pretty surprising trick there that takes advantage of SQLite allowing you to define a column "type" using a made-up type name, which you can then introspect later.

eyeseast (Contributor) commented Feb 7, 2022

I've never used it either, but it's interesting, right? Feel like I should try it for something.

I'm trying to get my head around how this conversions feature might work, because I really like the idea of it.

simonw (Owner) commented Feb 8, 2022

I have an idea for how that third option could work - the one that creates a new column using values from the existing ones:

db["places"].insert(
    {
        "name": "London",
        "lng": -0.118092,
        "lat": 51.509865,
    },
    conversions={"point": LongitudeLatitude("lng", "lat")},
)

How about specifying that the values in that conversions= dictionary can be:

  • A SQL string fragment (as currently implemented)
  • A subclass of Conversion as described above
  • Or... a callable function that takes the row as an argument and returns either a Conversion subclass instance or a literal value to be inserted into the database (a string, int or float)

Then you could do this:

db["places"].insert(
    {
        "name": "London",
        "lng": -0.118092,
        "lat": 51.509865,
    },
    conversions={
        "point": lambda row: LongitudeLatitude(
            row["lng"], row["lat"]
        )
    }
)

Something I really like about this is that it expands the abilities of conversions= beyond the slightly obscure need to customize the SQL fragment into something that can solve other data insertion cleanup problems too.
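The three-way dispatch could be sketched like this - illustrative only; the class, function names, and signatures are assumptions about an unimplemented design:

```python
import inspect


class Conversion:
    "Hypothetical base class: wraps a value, exposes SQL plus params."
    sql = "?"

    def __init__(self, value):
        self.value = value

    def params(self):
        return [self.value]


class LongitudeLatitude(Conversion):
    sql = "GeomFromText(?, 4326)"

    def params(self):
        lng, lat = self.value
        return ["POINT({} {})".format(lng, lat)]


def resolve_cell(conv, row, col):
    "Return (sql_fragment, params) for one cell, per the three options."
    if conv is None:
        return "?", [row.get(col)]
    if isinstance(conv, str):  # option 1: SQL fragment, as today
        return conv, [row.get(col)]
    if inspect.isclass(conv) and issubclass(conv, Conversion):
        return conv.sql, conv(row.get(col)).params()  # option 2
    if callable(conv):  # option 3: callable taking the whole row
        result = conv(row)
        if isinstance(result, Conversion):
            return result.sql, result.params()
        return "?", [result]  # literal str/int/float
    raise TypeError("Unsupported conversion: {!r}".format(conv))
```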

simonw (Owner) commented Feb 8, 2022

I'm going to write the documentation for this first, before the implementation, so I can see if it explains cleanly enough that the design appears to be sound.

eyeseast (Contributor) commented Feb 8, 2022

What if you did something like this:

class Conversion:
    def __init__(self, *args, **kwargs):
        "Put whatever settings you need here"

    def python(self, row, column, value): # not sure on args here
        "Python step to transform value"
        return value

    def sql(self, row, column, value):
        "Return the actual sql that goes in the insert/update step, and maybe params"
        # value is the return of self.python()
        return value, []

This way, you're always passing an instance, which has methods that do the conversion. (Or you're passing a SQL string, as you would now.) The __init__ could take column names, or SRID, or whatever other setup state you need per row, but the row is getting processed with the python and sql methods (or whatever you want to call them). This is pretty rough, so do what you will with names and args and such.

You'd then use it like this:

# subclass might be unneeded here, if methods are present
class LngLatConversion(Conversion):
    def __init__(self, x="longitude", y="latitude"):
        self.x = x
        self.y = y

    def python(self, row, column, value):
        x = row[self.x]
        y = row[self.y]
        return x, y

    def sql(self, row, column, value):
        # value is now a tuple, returned above
        s = "GeomFromText(POINT(? ?))"
        return s, value

table.insert_all(rows, conversions={"point": LngLatConversion("lng", "lat")})

I haven't thought through all the implementation details here, and it'll probably break in ways I haven't foreseen, but wanted to get this idea out of my head. Hope it helps.
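Wiring that two-step design into a runnable round trip looks like this. One caveat worth flagging: SQLite parameters can't bind inside a quoted POINT(? ?) literal, so this variant passes the whole WKT string as a single parameter - an assumption about how the sql() step would work, not a settled design:

```python
class Conversion:
    def __init__(self, *args, **kwargs):
        "Put whatever settings you need here"

    def python(self, row, column, value):
        "Python step to transform value"
        return value

    def sql(self, row, column, value):
        "Return the SQL fragment plus its params"
        return "?", [value]


class LngLatConversion(Conversion):
    def __init__(self, x="longitude", y="latitude"):
        self.x = x
        self.y = y

    def python(self, row, column, value):
        return row[self.x], row[self.y]

    def sql(self, row, column, value):
        # value is the (x, y) tuple returned by python() above;
        # bind the WKT as one parameter rather than POINT(? ?)
        return "GeomFromText(?, 4326)", ["POINT({} {})".format(*value)]


conv = LngLatConversion("lng", "lat")
row = {"name": "London", "lng": -0.118092, "lat": 51.509865}
value = conv.python(row, "point", None)
fragment, params = conv.sql(row, "point", value)
```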

simonw (Owner) commented Feb 9, 2022

My hunch is that the case where you want to consider input from more than one column will actually be pretty rare - the only case I can think of where I would want to do that is for latitude/longitude columns - everything else that I'd want to use it for (which admittedly is still mostly SpatiaLite stuff) works against a single value.

The reason I'm leaning towards using the constructor for the values is that I really like the look of this variant for common conversions:

db["places"].insert(
    {
        "name": "London",
        "boundary": GeometryFromGeoJSON({...})
    }
)

simonw (Owner) commented Feb 9, 2022

The CLI version of this could perhaps look like this:

sqlite-utils insert a.db places places.json \
  --conversion boundary GeometryGeoJSON

This will treat the boundary key as GeoJSON. It's equivalent to passing conversions={"boundary": GeometryGeoJSON}

The combined latitude/longitude case here can be handled by combining this with the existing --convert mechanism.

Any Conversion subclass will be available to the CLI in this way.
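A sketch of how the CLI could resolve that name - assuming Conversion subclasses can be discovered by class name, which is an implementation guess; GeometryGeoJSON and lookup_conversion are hypothetical:

```python
import json


class Conversion:
    sql = "?"


class GeometryGeoJSON(Conversion):
    # SpatiaLite's GeomFromGeoJSON() accepts a GeoJSON string
    sql = "GeomFromGeoJSON(?)"

    @classmethod
    def convert(cls, value):
        # accept a dict or an already-serialized GeoJSON string
        return value if isinstance(value, str) else json.dumps(value)


def lookup_conversion(name):
    "Map a --conversion NAME argument to a Conversion subclass."
    registry = {cls.__name__: cls for cls in Conversion.__subclasses__()}
    try:
        return registry[name]
    except KeyError:
        raise ValueError("Unknown conversion: {}".format(name))
```

Building the registry from Conversion.__subclasses__() means any new subclass automatically becomes available to the CLI by name.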

eyeseast (Contributor) commented Feb 10, 2022

Yeah, the CLI experience is probably where any kind of multi-column, configured setup is going to fall apart. Sticking with GIS examples, one way I might think about this is using the fiona CLI:

# assuming a database is already created and has SpatiaLite
fio cat boundary.shp | sqlite-utils insert a.db boundaries - --conversion geometry GeometryGeoJSON

Anyway, very interested to see where you land here.

psychemedia commented Feb 16, 2022

My hunch is that the case where you want to consider input from more than one column will actually be pretty rare - the only case I can think of where I would want to do that is for latitude/longitude columns

Other possible pairs: unconventional date/datetime and timezone pairs, e.g. 2022-02-16::17.00, London; or more generally, numerical value and unit-of-measurement pairs (e.g. if you want to cast into and out of different measurement units using packages like pint), or currencies, etc. Actually, in that case, I guess you may be presented with things that are unit-typed already, so a conversion would need to parse them into an appropriate format - possibly a two-column (value, unit) pair.
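A toy illustration of that value-plus-unit idea, without pint - the unit table and function are invented for this example:

```python
# Normalize "value unit" strings to a single base unit so one numeric
# column can be stored; emitting a two-column (value, unit) pair is the
# obvious alternative a Conversion could choose instead.
TO_METRES = {"m": 1.0, "km": 1000.0, "mi": 1609.344}


def parse_distance(text):
    "Parse strings like '5 km' into metres."
    number, unit = text.split()
    return float(number) * TO_METRES[unit]
```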

rsyring commented Jan 22, 2024

wonder if there's any overlap with the goals here and the sqlite3 module's concept of adapters and converters: https://docs.python.org/3/library/sqlite3.html#sqlite-and-python-types

For some discussion of converters, see: #612
