
Make client-side pagination a true recommendation  #3743

Closed
@awoods187

Description


I heard from k yesterday that they found pagination via offset/limit slow and were looking for recommendations for handling large result sets.

I spoke with @jordanlewis and he recommended:
"Doing 'client-side pagination' by retrieving a set of records with a limit, then checking the index key of the last row, using that as an index constraint, and running the query again. This is normal, not specific to Cockroach."
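The recipe above (fetch a page with a limit, remember the last index key, use it as a constraint on the next query) can be sketched in a few lines. This is a minimal illustration, not CockroachDB-specific code: it uses Python's stdlib sqlite3 with a hypothetical `accounts` table as a stand-in database, and the table/column names are invented for the example.

```python
import sqlite3

# Hypothetical demo table; in practice this would be a CockroachDB table
# accessed over the PostgreSQL wire protocol.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [(i, f"user-{i}") for i in range(1, 26)])

PAGE_SIZE = 10

def fetch_page(last_id):
    # Keyset pagination: constrain on the index key of the last row seen,
    # instead of skipping rows with OFFSET.
    return conn.execute(
        "SELECT id, name FROM accounts WHERE id > ? ORDER BY id LIMIT ?",
        (last_id, PAGE_SIZE),
    ).fetchall()

pages = []
last_id = 0
while True:
    rows = fetch_page(last_id)
    if not rows:
        break
    pages.append(rows)
    last_id = rows[-1][0]  # remember the last index key for the next page

print([len(p) for p in pages])  # 25 rows in pages of 10: [10, 10, 5]
```

The key point is that each query's `WHERE id > ?` constraint lets the database seek directly into the index, so every page costs roughly the same regardless of how deep into the result set the client is.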

Our current docs, https://www.cockroachlabs.com/docs/stable/selection-queries.html#limiting-row-count-and-pagination, actually suggest the wrong approach to pagination.

The reason is that "OFFSET can't do anything smart: it has to read the same data as before and simply skip the first n rows. You have to participate as a client by remembering the index key of the last result set you saw."
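To make the contrast concrete, here is a minimal sketch (again using stdlib sqlite3 as a stand-in database, with an invented table `t`) showing that an OFFSET query and a keyset query return the same page, even though the OFFSET form forces the server to read and discard every skipped row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1, 1001)])

# OFFSET pagination: the server still reads the first 990 rows
# and throws them away before returning the page.
offset_page = conn.execute(
    "SELECT id FROM t ORDER BY id LIMIT 10 OFFSET 990").fetchall()

# Keyset pagination: the index constraint lets the server seek
# straight to the first relevant row.
keyset_page = conn.execute(
    "SELECT id FROM t WHERE id > ? ORDER BY id LIMIT 10", (990,)).fetchall()

print(offset_page == keyset_page)  # True: same page, very different cost
```

The work done by the OFFSET form grows with the page's position in the result set, while the keyset form stays constant-cost per page.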

We should also note that some databases offer a feature called cursors for this. @andy-kimball mentioned that "cursors are generally not a great architecture because they force the server to keep state. Client-side is the way to go for pagination; server-side pagination just doesn't scale well."

Jordan also mentioned that "you run into trouble with a scale-out system like Cockroach: if your load balancer moves you to a different server, for example, the cursor will be lost."

Andy also mentioned that "SQL Server has cursors, and we spent a lot of time recommending customers not use them."

This way we can make it clear that client-side pagination is the way to go, and not just because we don't have cursors.

Labels

O-external: Origin: Issue comes from external users.
P-0: Urgent; must be done in next 2 weeks
T-missing-info
