Improve docs (#2)
* Add information about extending abilities.

* Add docs about custom types and NULLs.
georgysavva authored Jul 10, 2020
1 parent fcf1e55 commit f85c2da
Showing 5 changed files with 64 additions and 60 deletions.
4 changes: 2 additions & 2 deletions README.md
@@ -18,7 +18,7 @@ with just one function call and don't bother with rows iteration.
scany isn't limited to any specific database. It integrates with `database/sql`,
so any database with `database/sql` driver is supported.
It also works with [pgx](https://github.com/jackc/pgx), a PostgreSQL-specific library.
-Apart from the out of the box support, scany can be easily extended to work with any database library.
+Apart from the out of the box support, scany can be easily extended to work with almost any database library.

## Install

@@ -95,7 +95,7 @@ package to work with `pgx` library.
## How to use with other database libraries

Use [`dbscan`](https://pkg.go.dev/github.com/georgysavva/scany/dbscan) package that works with an abstract database,
-and can be integrated with any library.
+and can be integrated with any library that has a concept of rows.
This particular package implements core scany features and contains all the logic.
Both `sqlscan` and `pgxscan` use `dbscan` internally.

66 changes: 52 additions & 14 deletions dbscan/doc.go
@@ -24,6 +24,8 @@ By default, to get the corresponding column dbscan translates field name to snake case.
To override this behavior, specify the column name in the `db` field tag.
In the example above User struct is mapped to the following columns: "user_id", "first_name", "email".
Embedded structs
dbscan works recursively; a struct can contain embedded structs as well.
This allows reusing models in different queries. Structs can be embedded both by value and by pointer.
Note that nested non-embedded structs aren't allowed; this decision was made for simplicity.
@@ -32,6 +34,11 @@ this simulates the behavior of major SQL databases in case of a JOIN.
To add a prefix to all fields of the embedded struct, specify it in the `db` field tag;
dbscan uses "." as a separator, for example:
+type Row struct {
+    *User
+    Post `db:"post"`
+}
type User struct {
    UserID string
    Email string
@@ -42,38 +49,68 @@ dbscan uses "." as a separator, for example:
    Text string
}
-type Row struct {
-    *User
-    Post `db:"post"`
-}
Row struct is mapped to the following columns: "user_id", "email", "post.id", "post.text".
+Handling custom types and NULLs
+dbscan supports custom types and NULLs perfectly.
+You can work with them the same way as you would when using your database library directly.
+Under the hood, dbscan passes all types that you provide to the underlying rows.Scan(),
+and if the database library supports a type, dbscan supports it automatically, for example:
+type User struct {
+    OptionalBio *string
+    OptionalAge CustomNullInt
+    Data CustomData
+    OptionalData *CustomData
+}
+type CustomNullInt struct {
+    // Any fields that this custom type needs
+}
+type CustomData struct {
+    // Any fields that this custom type needs
+}
+User struct is valid, and every field will be scanned properly; the only requirement
+is that your database library can handle *string, CustomNullInt, CustomData and *CustomData types.
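CustomNullInt above is a placeholder type. As a sketch (not scany API), one common way to make such a nullable type acceptable to database/sql-based libraries, and therefore to dbscan, is to implement the sql.Scanner interface:

```go
package main

import (
	"database/sql"
	"fmt"
)

// CustomNullInt is a hypothetical nullable int. Anything the
// underlying library can pass to rows.Scan() works with dbscan.
type CustomNullInt struct {
	Int   int
	Valid bool // Valid is false when the column was NULL
}

// Scan implements sql.Scanner, so both NULL and integer values
// can be assigned to the field.
func (n *CustomNullInt) Scan(value interface{}) error {
	if value == nil {
		*n = CustomNullInt{}
		return nil
	}
	v, ok := value.(int64) // database/sql delivers integers as int64
	if !ok {
		return fmt.Errorf("CustomNullInt.Scan: unsupported type %T", value)
	}
	*n = CustomNullInt{Int: int(v), Valid: true}
	return nil
}

// Compile-time check that the interface is satisfied.
var _ sql.Scanner = (*CustomNullInt)(nil)

func main() {
	var age CustomNullInt
	_ = age.Scan(int64(42))
	fmt.Println(age.Int, age.Valid) // 42 true
	_ = age.Scan(nil)
	fmt.Println(age.Valid) // false
}
```

For pgx-specific custom types, see the note in pgxscan's docs instead; this sketch covers the database/sql path only.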
Ignored struct fields
In order for dbscan to work with a field, it must be exported; unexported fields will be ignored.
This applies to embedded structs too: the type that is embedded must be exported.
It's possible to explicitly mark a field as ignored for dbscan. To do this, set the `db:"-"` struct tag.
This works for embedded structs as well, for example:
+type Post struct {
+    ID string
+    Text string
+}
type Comment struct {
    Post `db:"-"`
    ID string
    Body string
    Likes int `db:"-"`
}
-type Post struct {
-    ID string
-    Text string
-}
Comment struct is mapped to the following columns: "id", "body".
Struct scanning errors
In case there is no corresponding struct field for a column, dbscan returns an error;
this forces the application to select from the database only the data it needs. The other way around,
if a struct contains multiple fields that are mapped to the same column,
dbscan won't be able to choose which field to assign to and will return an error, for example:
+type Row struct {
+    User
+    Post
+}
type User struct {
    ID string
    Email string
@@ -84,11 +121,6 @@ dbscan won't be able to choose which field to assign to and will return
    Text string
}
-type Row struct {
-    User
-    Post
-}
Row struct is invalid since both Row.User.ID and Row.Post.ID are mapped to the "id" column.
Scanning into map
@@ -134,5 +166,11 @@ Manual rows iteration
It's possible to manually control rows iteration and still use all scanning features of dbscan;
see RowScanner for details.
+Implementing Rows interface
+dbscan can be used with any database library that has a concept of rows and can implement dbscan's Rows interface.
+It's pretty likely that your rows type already implements the Rows interface as-is; for example, this is true for the standard *sql.Rows type.
+Otherwise, you only need a thin adapter, as was done for pgx.Rows in pgxscan; see pgxscan.RowsAdapter for details.
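To make the adapter idea concrete, here is a toy in-memory rows type. The method set (Columns, Next, Scan, Err, Close) mirrors what dbscan's Rows interface requires as described here, but `sliceRows` is purely illustrative, simplified to string columns, and not part of scany; check the dbscan package docs for the authoritative interface definition:

```go
package main

import (
	"errors"
	"fmt"
)

// sliceRows is a toy, in-memory implementation of the rows concept.
// A real adapter would wrap your library's rows type the same way
// pgxscan.RowsAdapter wraps pgx.Rows.
type sliceRows struct {
	columns []string
	data    [][]interface{}
	pos     int
}

func (r *sliceRows) Columns() ([]string, error) { return r.columns, nil }

// Next advances to the next row and reports whether one exists.
func (r *sliceRows) Next() bool {
	r.pos++
	return r.pos <= len(r.data)
}

// Scan copies the current row into the destination pointers.
func (r *sliceRows) Scan(dest ...interface{}) error {
	row := r.data[r.pos-1]
	if len(dest) != len(row) {
		return errors.New("sliceRows.Scan: wrong number of destinations")
	}
	for i, d := range dest {
		p, ok := d.(*string) // toy: string columns only
		if !ok {
			return fmt.Errorf("sliceRows.Scan: unsupported destination %T", d)
		}
		*p = row[i].(string)
	}
	return nil
}

func (r *sliceRows) Err() error   { return nil }
func (r *sliceRows) Close() error { return nil }

func main() {
	rows := &sliceRows{
		columns: []string{"user_id", "email"},
		data:    [][]interface{}{{"1", "a@example.com"}, {"2", "b@example.com"}},
	}
	for rows.Next() {
		var id, email string
		_ = rows.Scan(&id, &email)
		fmt.Println(id, email)
	}
}
```

A real adapter only needs to translate your library's calls into these five methods; everything else is handled by dbscan.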
*/
package dbscan
4 changes: 2 additions & 2 deletions doc.go
@@ -3,15 +3,15 @@
scany isn't limited to any specific database. It integrates with database/sql,
so any database with database/sql driver is supported.
It also works with https://github.com/jackc/pgx, a PostgreSQL-specific library.
-Apart from the out of the box support, scany can be easily extended to work with any database library.
+Apart from the out of the box support, scany can be easily extended to work with almost any database library.
scany contains the following packages:
sqlscan package works with database/sql standard library.
pgxscan package works with github.com/jackc/pgx library.
-dbscan package works with an abstract database and can be integrated with any library.
+dbscan package works with an abstract database and can be integrated with any library that has a concept of rows.
This particular package implements core scany features and contains all the logic.
Both sqlscan and pgxscan use dbscan internally.
*/
23 changes: 8 additions & 15 deletions pgxscan/doc.go
@@ -26,29 +26,22 @@ it's as simple as this:
pgxscan.Query(ctx, &users, db, `SELECT user_id, name, email, age FROM users`)
// users variable now contains data from all rows.
-Pgx custom types
+Note about pgx custom types
pgx has a concept of custom types: https://pkg.go.dev/github.com/jackc/pgx/v4?tab=doc#hdr-Custom_Type_Support.
-You can use them with pgxscan too, here is an example of a struct with pgtype.Text field:
+In order to use them with pgxscan you must specify your custom types by value, not by a pointer.
+Let's take the pgx custom type pgtype.Text as an example:
type User struct {
    UserID string
-    Name string
-    Bio pgtype.Text
-}
-Note that you must specify pgtype.Text by value, not by a pointer. This will not work:
-type User struct {
-    UserID string
-    Name string
-    Bio *pgtype.Text // pgxscan won't be able to scan data into a field defined that way.
+    Name *pgtype.Text // pgxscan won't be able to scan data into a field defined that way.
+    Bio pgtype.Text // This is a valid use of pgx custom types, pgxscan will handle it easily.
}
-This happens because struct fields are always passed to the underlying pgx.Rows.Scan() as pointers,
-and if the field type is *pgtype.Text, pgx.Rows.Scan() will receive **pgtype.Text and
-pgx won't be able to handle that type, since only *pgtype.Text implements pgx custom type interface.
+This happens because struct fields are always passed to the underlying pgx.Rows.Scan() as addresses,
+and if the field type is *pgtype.Text, pgx.Rows.Scan() will receive **pgtype.Text.
+pgx can't handle that type, since only *pgtype.Text implements pgx custom type interface.
Supported pgx version
27 changes: 0 additions & 27 deletions sqlscan/doc.go
@@ -25,32 +25,5 @@ it's as simple as this:
var users []*User
sqlscan.Query(ctx, &users, db, `SELECT user_id, name, email, age FROM users`)
// users variable now contains data from all rows.
-Types that implement sql Scanner
-sqlscan plays well with custom types that implement sql.Scanner interface, here is how you can use them:
-type PostData struct {
-    Title string
-    Text string
-    Counter int
-}
-func (pd *PostData) Scan(value interface{}) error {
-    b, ok := value.([]byte)
-    if !ok {
-        return errors.New("Data.Scan: value isn't []byte")
-    }
-    return json.Unmarshal(b, &pd)
-}
-type Post struct {
-    PostID string
-    OwnerID string
-    Data *PostData
-}
-Note that type implementing sql.Scanner (PostData struct in the example above)
-can be presented both by a pointer, as shown in Post struct and by value.
*/
package sqlscan
