Conversation

@mallman (Collaborator) commented Oct 8, 2021

This PR addresses issue #1074. It provides a small update to the GRDB backup API that allows a user to receive backup progress updates.

The existing backup unit tests have been enhanced; however, I have not yet tried this patch in a "real world" scenario with a large/huge database. I'm particularly interested in tuning the backup page step size to something useful. The value I've set, 5, is simply taken from the SQLite example. I have no idea whether this is a sane value, so I'm keeping this as a draft PR until I've had the opportunity to test and tune it.
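
For a sense of what this adds, here is a rough sketch of the call site it enables (assuming the closure-based progress: parameter visible in this PR's diff; this is the draft shape, not a final API, and the paths are illustrative):

import GRDB

let source = try DatabaseQueue(path: "/path/to/source.sqlite")
let destination = try DatabaseQueue(path: "/path/to/backup.sqlite")
try source.backup(to: destination) { progress in
    // `progress` describes the pages copied so far
    print("Backup: \(Int(progress.fractionCompleted * 100))%")
}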

Pull Request Checklist

  • This pull request is submitted against the development branch.
  • Inline documentation has been updated.
  • README.md or another dedicated guide has been updated.
  • Changes are unit tested.
  • Changes are "real world" tested.

@mallman requested a review from @groue on October 8, 2021
@mallman self-assigned this on Oct 8, 2021
@mallman (Collaborator Author) commented Oct 8, 2021

@groue I have no idea why the build is failing. Help?

@groue (Owner) commented Oct 11, 2021

I have no idea why the build is failing. Help?

Hello @mallman, what did you do to get this failure?

@mallman (Collaborator Author) commented Oct 11, 2021

I have no idea why the build is failing. Help?

Hello @mallman, what did you do to get this failure?

Okay. I couldn't figure it out originally, but the build is working now.

@groue (Owner) commented Oct 12, 2021

All right, let me know how and when I can help.

First of all, the choice of Progress is excellent.

I have a few questions/suggestions:

  • Can we have a single Progress instance for the whole duration of the backup? This would make it easy to feed UIProgressView.observedProgress, for example. Do you think it is possible?

    If yes, then the Progress documentation suggests that progress tracking should be performed with key-value observing (KVO). I would thus expect an API that looks like:

    @discardableResult
    public func backup(...) -> Progress { ... }

    Maybe you had a specific reason for choosing a closure-based reporting?

    Maybe we'd create threading issues in the client app if the progress is updated outside of the main queue... (A small illustration of the KVO idea follows after this list.)

  • Let's make sure the isCancellable and isPausable properties have a reliable value. They have to be true if we support cancellation and/or pause, and false otherwise. Such features would be great, if SQLite allows cancellation and pause without putting the connections in a strange state. If your app could profit from them, it's a good opportunity to investigate :-) (this is entirely optional)

  • (Again optional) I would love to see a public, progress-reporting Database.backup(to dbDest: Database) method, that would complement the DatabaseReader/Writer apis. This would allow users to precisely schedule their backup in order to remove all concurrency-related questions created by current apis. For example:

    // During backup from dbPool1 to dbPool2:
    // - Prevent concurrent writes to dbPool1
    // - Prevent all concurrent reads and writes to dbPool2
    try dbPool1.write { dbSource in
        try dbPool2.barrierWriteWithoutTransaction { dbDest in
            try dbSource.backup(to: dbDest)
        }
    }

    I'm not 100% sure this new api is needed, but I currently don't know what to answer to users (including myself) who feel uncomfortable with the documentation of the existing method, which states:

    When the source is a DatabasePool, concurrent writes can happen during the backup. Those writes may, or may not, be reflected in the backup, but they won't trigger any error.

    This sentence does not state anything false, but nothing tells how to avoid the "may, or may not".
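
To illustrate the KVO idea from the first bullet: with a single Progress instance exposed for the whole backup, the client could track it either through UIProgressView.observedProgress or through plain key-value observation. This is a hypothetical sketch, assuming such an instance were available:

import UIKit

let progress: Progress = ... // the single instance the backup API would expose
let progressView = UIProgressView()
progressView.observedProgress = progress // UIProgressView tracks it via KVO

// Or, explicit key-value observation:
let observation = progress.observe(\.fractionCompleted) { progress, _ in
    print("\(Int(progress.fractionCompleted * 100))%")
}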

/// A module-private function to pass the page step size for testing
func backup(
to writer: DatabaseWriter,
pageStepSize: Int32 = -1,
@mallman (Collaborator Author):

I'm not sure we should expose pageStepSize as part of the public API because we might consider it an "implementation detail". On the other hand, it may be of significant use to "advanced" users. After all, not every database is the same, and not every GRDB user has the same use case. Some might want more frequent progress updates than others. Also, I suspect backup throughput increases as pageStepSize increases. If we set a default value of, say, 100, and a value of 1000 would actually make a backup substantially faster for an end user with a huge database, exposing it may be valuable to them.

Hmmm... I have not yet tested this backup capability with a "huge" database, but I plan to "soon". I will try some different pageStepSize values and report back with my findings. This may be a moot point. I just want to make a note here while I'm thinking about it.
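
For a rough sense of scale (back-of-the-envelope arithmetic, assuming SQLite's default 4096-byte page size): a 1 GiB database is roughly 262,000 pages, so pageStepSize = 5 means on the order of 52,000 steps (and progress callbacks), while pageStepSize = 1000 means roughly 262.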

@groue (Owner):

Progress reporting is only useful for big databases, and the Progress can only report progress over time if several steps are performed, with a "smallish" number of pages per step.

Yet the default value should still remain -1 (copy all source pages in a single step), even if it makes the Progress nearly useless. Documentation will tell users to choose a number of pages that suits their needs.

@mallman (Collaborator Author):

Ok.

progress: ((Progress) -> ())? = nil)
throws
{
try writer.writeWithoutTransaction { dbDest in
@mallman (Collaborator Author):

Do you think writer.writeWithoutTransaction is the correct API to use here? Come to think of it, I actually just copy-pasta'd it from the existing implementation without thinking about it. But that used a page size of -1, which does the whole backup in one go.

@groue (Owner) commented Oct 13, 2021:

SQLite documentation says:

SQLite holds a write transaction open on the destination database file for the duration of the backup operation.

So we need to access the destination database without opening any transaction. This excludes write, which opens a transaction. We could prefer barrierWriteWithoutTransaction in order to prevent concurrent reads, but I could not find in the doc that we should prevent concurrent reads from another connection (as DatabasePool can do).

So writeWithoutTransaction it is!
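
Spelled out, the distinction is (an illustration, not code from this PR):

try destPool.write { db in /* ... */ }
// wraps the block in its own transaction, which conflicts with the write
// transaction SQLite holds on the destination for the duration of the backup

try destPool.writeWithoutTransaction { db in /* ... */ }
// serialized write access, no implicit transaction: what the backup needs

try destPool.barrierWriteWithoutTransaction { db in /* ... */ }
// same, but also blocks concurrent reads from the pool, which the SQLite
// doc does not appear to require here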

@groue (Owner):

[...] We could prefer barrierWriteWithoutTransaction in order to prevent concurrent reads, but I could not find in the doc that we should prevent concurrent reads from another connection (as DatabasePool can do).

So I checked existing tests, in DatabasePoolBackupTests :-)

There is no test for parallel read in the destination database during a backup. That would be interesting (not in this PR of course!)

And there is a commented out flaky test for a parallel write in the source database during backup 😬

And that's all. In the end, we're somewhat in the fog about the concurrent behavior of backups 😅

Still, based on the SQLite doc, destWriter.writeWithoutTransaction is correct!

@groue (Owner) commented Oct 13, 2021

I would thus expect an API that looks like:

@discardableResult
public func backup(...) -> Progress { ... }

Sorry for this suggestion! It's useless to return a Progress after the backup has completed 😅

I don't have a better suggestion at the moment (which allows users to use a single Progress instance).

@mallman (Collaborator Author) commented Oct 13, 2021

All right, let me know how and when I can help.

First of all, the choice of Progress is excellent.

I have a few questions/suggestions:

  • Can we have a single Progress instance for the whole duration of the backup? This would make it easy to feed UIProgressView.observedProgress, for example. Do you think it is possible?
    If yes, then the Progress documentation suggests that progress tracking should be performed with key-value observing (KVO). I would thus expect an API that looks like:

    @discardableResult
    public func backup(...) -> Progress { ... }

    Maybe you had a specific reason for choosing a closure-based reporting?
    Maybe we'd create threading issues in the client app if the progress is updated outside of the main queue...

I think the idea of returning a stateful, observable progress object has its appeal. I used a closure-based API largely because it mimics the underlying SQLite API. That simplifies the implementation and IMO makes it more reliable and predictable. Also, the progress reporting is stateless.

I used the Progress object because I was looking for a platform-conventional API for progress reporting. However, after doing some more digging on the Progress API, I'm starting to regret my choice.

For one thing, the API I've proposed is safest when the progress reporting data structure passed to the callback is immutable. The Progress class is far from that, and that could cause some confusion or misuse of the API. It's just supposed to report progress at the time it's reported and nothing else. In fact, my original thought was to report progress with a tuple or a two-argument function, as that is essentially what SQLite does. The downside of this approach is that it doesn't explicitly provide labels for which element of the tuple is the total unit count and which is the current progress.

The second and perhaps more significant concern I have about introducing the Progress API is that its underlying implementation, and users' experience of it in practice, are horrible. See https://github.com/CharlesJS/CSProgress/blob/master/README.md. True, that's only one report, but some of the things the author mentions I can personally vouch for as being deadly for performance. For example, NSLock is sloooooow. Also, Objective-C message passing is slow compared to Swift's direct or virtual method dispatch. The report of excessive memory use is concerning, too.

For these reasons, I'm beginning to think we would be better off without Progress. KVO is not the fashionable API for Swift apps these days anyway...

I think there are two possibilities we should consider for moving forward:

  1. Keep the callback-based API but use our own struct for progress reporting. It will also provide a function for cancelling the backup in progress. (A rough sketch follows below.)
  2. Provide a Combine-based API for Swift-native observable progress reporting.

One of the major hurdles of using Combine is that it substantially bumps the platform version requirements to macOS 10.15 and iOS 13.

For the sake of creating something relatively lightweight and simple, I am heavily favoring the first approach.
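
To make option 1 concrete, a rough sketch (illustrative names, not a final proposal):

public struct BackupProgress {
    public let completedPageCount: Int
    public let totalPageCount: Int
    /// Requests that the backup be abandoned after the current step.
    public let cancel: () -> Void
}

func backup(
    to writer: DatabaseWriter,
    pageStepSize: Int32 = -1,
    progress: ((BackupProgress) -> Void)? = nil)
throws
{ ... }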

  • Let's make sure the isCancellable and isPausable properties have a reliable value. They have to be true if we support cancellation and/or pause, and false otherwise. Such features would be great, if SQLite allows cancellation and pause without putting the connections in a strange state. If your app could profit from them, it's a good opportunity to investigate :-) (this is entirely optional)
  • (Again optional) I would love to see a public, progress-reporting Database.backup(to dbDest: Database) method, that would complement the DatabaseReader/Writer apis. This would allow users to precisely schedule their backup in order to remove all concurrency-related questions created by current apis. For example:
    // During backup from dbPool1 to dbPool2:
    // - Prevent concurrent writes to dbPool1
    // - Prevent all concurrent reads and writes to dbPool2
    try dbPool1.write { dbSource in
        try dbPool2.barrierWriteWithoutTransaction { dbDest in
            try dbSource.backup(to: dbDest)
        }
    }

This sounds like a good idea. I'll work on this.

@mallman (Collaborator Author) commented Oct 14, 2021

I had another thought on why this API does not return some kind of progress object. The function as written in this PR is not asynchronous—it doesn't return until the backup is complete. In that sense, it is an API-compatible replacement for the current backup function (assuming we use a default page size of -1). The only difference introduced by this PR is the provision of an optional capability to receive periodic progress reports while the backup is running.

@groue (Owner) commented Oct 15, 2021

Yes @mallman, we agree. So... considering the Progress usability problems you have raised (and that I have indeed met in the past), and the fact that the method is synchronous (regardless of a possible future async version), are we left with closure-based reporting, as below?

/// Copies the database contents into another database.
///
/// The `backup` method blocks the current thread until the destination
/// database contains the same contents as the source database.
///
/// When the source is a DatabasePool, concurrent writes can happen during
/// the backup. Those writes may, or may not, be reflected in the backup,
/// but they won't trigger any error.
///
/// Usage:
///
///     let source: DatabaseQueue = ...
///     let destination: DatabaseQueue = ...
///     try source.backup(to: destination)
///
/// When you're after progress reporting during backup, you'll want to
/// perform the backup in several steps. Each step copies the number of
/// _database pages_ you specify. See <https://www.sqlite.org/c3ref/backup_finish.html>
/// for more information:
///
///     // Backup with progress reporting
///     try source.backup(
///         to: destination,
///         pagesPerStep: ...,
///         progress: { (completedPageCount, totalPageCount) in
///            print("\(completedPageCount) pages copied out of \(totalPageCount)")
///         })
///
/// - parameters:
///     - writer: The destination database.
///     - pagesPerStep: The number of database pages copied on each backup
///       step. By default, all pages are copied in one single step.
///     - progress: An optional function that is notified of the backup
///       progress. If this function throws, the backup is abandoned, and
///       the error is rethrown by the `backup` method.
///     - completedPageCount: The number of copied pages.
///     - totalPageCount: The total number of pages to be copied.
/// - throws: The error thrown by `progress`, or any `DatabaseError` that
///   would happen while performing the backup.
func backup(
    to writer: DatabaseWriter,
    pagesPerStep: Int = -1,
    progress: ((_ completedPageCount: Int, _ totalPageCount: Int) throws -> Void)? = nil)
throws
{ ... }

@groue (Owner) commented Oct 15, 2021

About the above proposed documentation, which talks about throwing from the progress callback in order to abandon the backup, and about the fact that this error is rethrown by the backup method: these are unfinished thoughts and quick-and-dirty documentation.

I do think it is valuable to let the library user abandon the backup. If an application displays a progress bar, then it may want to let its user cancel the backup. It is possible according to the SQLite doc (search for "abandon").

But we must let the user understand clearly what the final state of the destination database is: copied, or not. In this context, I wonder what happens when one throws from the last step:

// 😈
try source.backup(to: destination) { completed, total in
    if completed == total { throw MyError() }
}

In the above sample code, is our implementation able to abandon the backup, or not? Maybe yes, maybe not. It's better if we can, so that we can document a simple rule: any error thrown from progress is turned into an abandoned backup. But maybe we'll have difficulty honoring this rule for the last step, I don't know yet.

Anyway, what's paramount is that if the destination database is copied, then the backup method must not throw. On the other hand, if the backup has failed in any way (EDIT: or was cancelled), then an error must be thrown.

This is important, considering the component that calls the backup method may not be the component that provides the callback. In other words, one must be able to reliably use the backup method, even if one does not know what the progress function does (because it is written by somebody else):

func someComplexBackup(progress: ...) throws {
    do {
        try source.backup(to: destination, progress: progress)
        // Here the backup MUST have completed.
        // It is a GRDB bug if we can end up here with a failed/abandoned backup.
        ... // proceed and assume source and destination have the same content
    } catch {
        // Here the backup MUST have failed.
        // It is a GRDB bug if we can end up here with a completed backup.
        ... // proceed and do not assume source and destination have the same content
    }
}

@mallman (Collaborator Author) commented Oct 15, 2021

I see the dilemma.

In Example 2 from https://sqlite.org/backup.html, SQLite's "solution" is to omit a means by which the caller can abort a backup in progress. The callback is simply informative.

However, providing some kind of cancellation mechanism through the progress callback is very attractive for a user-oriented interface. We are in agreement here.

What if we omitted a call to the user-supplied progress callback if sqlite3_backup_step() returns SQLITE_DONE? That is to say, we only call the callback (and thereby provide the user a means to cancel the backup in progress) if sqlite3_backup_step() returns SQLITE_OK?

Generally speaking, should callers expect to get a "final" callback stating 100% completion of their task? From an end user's UI perspective we can clearly say the answer to this question is "no". As one concrete example, I recently saw my phone's iOS update go from something like 27% to being done—specifically, the progress bar never reported 100%.
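
For reference, the raw SQLite sequence we are discussing looks roughly like this (a sketch of the C API usage with error handling omitted, not the code in this PR; the `onStep` callback stands in for the progress closure):

import SQLite3

func backupSketch(
    source: OpaquePointer?,
    dest: OpaquePointer?,
    pagesPerStep: Int32,
    onStep: (_ completed: Int32, _ total: Int32, _ done: Bool) -> Void)
{
    guard let backup = sqlite3_backup_init(dest, "main", source, "main") else { return }
    var rc = SQLITE_OK
    while rc == SQLITE_OK {
        rc = sqlite3_backup_step(backup, pagesPerStep)
        let total = sqlite3_backup_pagecount(backup)
        let remaining = sqlite3_backup_remaining(backup)
        // SQLITE_OK: more pages remain. SQLITE_DONE: everything is copied.
        // The question above is whether to fire the callback in the DONE case.
        onStep(total - remaining, total, rc == SQLITE_DONE)
    }
    // Releases the locks; if the loop stopped before SQLITE_DONE, the backup is abandoned.
    _ = sqlite3_backup_finish(backup)
}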

@groue (Owner) commented Oct 17, 2021

What if we omitted a call to the user-supplied progress callback if sqlite3_backup_step() returns SQLITE_DONE? That is to say, we only call the callback (and thereby provide the user a means to cancel the backup in progress) if sqlite3_backup_step() returns SQLITE_OK?

I would be surprised, as a user, if the last step did not trigger the callback, that is, if I could not be sure the callback was called at least once with completedPageCount == totalPageCount.

If a user wants to feed a Progress, then it would look like:

let progress = Progress()
try source.backup(to: destination, ...) { completedPageCount, totalPageCount in
    progress.totalUnitCount = Int64(totalPageCount)
    progress.completedUnitCount = Int64(completedPageCount)
}

Otherwise, it's just very confusing:

let progress = Progress()
try source.backup(to: destination, ...) { completedPageCount, totalPageCount in
    progress.totalUnitCount = Int64(totalPageCount)
    progress.completedUnitCount = Int64(completedPageCount)
}
// Deal with last step
// But what if callback is never called? What is totalUnitCount?
progress.completedUnitCount = progress.totalUnitCount

And of course such code must also work without modification if pagesPerStep is -1.

@mallman (Collaborator Author) commented Oct 18, 2021

What if we omitted a call to the user-supplied progress callback if sqlite3_backup_step() returns SQLITE_DONE? That is to say, we only call the callback (and thereby provide the user a means to cancel the backup in progress) if sqlite3_backup_step() returns SQLITE_OK?

I would be surprised, as a user, if the last step did not trigger the callback, that is, if I could not be sure the callback was called at least once with completedPageCount == totalPageCount.

We could also make the cancel function advisory or "best-effort". Or we could state it only works if the backup is incomplete. If the caller receives a final progress update where completedPageCount == totalPageCount, does "cancellation" make sense? The operation is already complete.

@groue (Owner) commented Oct 24, 2021

We could also make the cancel function advisory or "best-effort". Or we could state it only works if the backup is incomplete.

This sounds good to me.

If the caller receives a final progress update where completedPageCount == totalPageCount, does "cancellation" make sense? The operation is already complete.

That was one of my previous questions: "In the above sample code, is our implementation able to abandon the backup, or not?". Practically speaking, if the user throws between the last sqlite3_backup_step() and the final sqlite3_backup_finish(), can we cancel the backup? If yes, let's cancel it, and have our backup method throw in order to tell that the backup has failed. Otherwise, let's have our backup method NOT throw.

@groue (Owner) commented Nov 7, 2021

Hello @mallman, maybe you met some unexpected difficulty?

@mallman (Collaborator Author) commented Nov 9, 2021

Hello @mallman, maybe you met some unexpected difficulty?

Sorry, and thanks for your patience.

I've made significant progress. I've updated the API and implementation as we discussed, however I haven't finished the new tests. These are proving trickier than anticipated, and I got stuck on them. I'll have another go today and try to push a new commit.

@mallman (Collaborator Author) commented Nov 9, 2021

Hello @mallman, maybe you met some unexpected difficulty?

Oh, also I took a break camping the last few days, and I'm just catching up on e-mail and other correspondence.

@groue (Owner) commented Nov 9, 2021

I'm glad you're fine - and you have all the time you want! 🙂

@mallman force-pushed the dev/issue-1074-incremental_backup_api branch from e2d6985 to 83b6372 on November 9, 2021
@mallman (Collaborator Author) commented Nov 9, 2021

The latest commit adds the cancellation API we discussed, as well as a backup API for the Database type. I spent a considerable amount of time trying to reduce code duplication in the tests to the bare minimum. There is still a lot of duplication, but I don't see a clear way to make further progress.

Once we've settled on the API and implementation I will work on documentation.

@mallman (Collaborator Author) commented Nov 9, 2021

Hmmm... seems I spent so much time refactoring the existing backup tests I forgot to add tests for the cancellation functionality. Derp. I don't think I'll get to that today, but it's coming.

@mallman (Collaborator Author) commented Nov 30, 2021

Hi @groue. Sorry for the delay on the incremental backup tests. I've been occupied by other priorities, but I will make time for them soon.

In the meantime, I want to ask if you have time to review the changes as they are, especially if the API looks good to you. If the API looks good I will continue to focus on testing the implementation.

@groue (Owner) commented Nov 30, 2021

All right, I'll have a look really soon :-) Thank you @mallman!

@groue (Owner) commented Dec 3, 2021

Hello @mallman,

I don't quite understand the reason for introducing a public DatabaseBackupProgress protocol, a DatabaseBackupProgressImpl private class, and a public DatabaseBackupCancelled error, when the callback could be implemented as a throwing closure (see #1075 (comment)). Am I missing something?

@mallman (Collaborator Author) commented Dec 7, 2021

Hello @mallman,

I don't quite understand the reason for introducing a public DatabaseBackupProgress protocol, a DatabaseBackupProgressImpl private class, and a public DatabaseBackupCancelled error, when the callback could be implemented as a throwing closure (see #1075 (comment)). Am I missing something?

Quite frankly, it's been so long since I wrote this code that I can't remember why I wrote the API this way. So your question could well be posed: are we missing something? 😉 I'm doing some careful review myself to uncover the rationale for adopting this API in my PR as opposed to your simpler alternative.

Having spent a couple of hours ruminating, I have an idea for a clear response to your question. However, my analysis hinges on an assumption about the API you propose. Specifically, how do you propose the user request the backup be cancelled? By throwing from the callback? That's my assumption. If not that, then how?

Cheers.

@mallman (Collaborator Author) commented Dec 9, 2021

Question. Do we want to allow the client to know when throwing an error from the progress callback will abort the backup and vice-versa?

I was hoping to be able to use completed versus total page counts as a guaranteed proxy for backup completion. However, we can't do that. Therefore, as is, the client cannot know what behavior to expect when throwing an error from the callback: it will either be rethrown or ignored.

However, we could add a third parameter to the callback to signal that the backup can no longer be aborted. It would look something like this:

progress: ((_ completedPageCount: Int, _ totalPageCount: Int, _ abortable: Bool) throws -> ())? = nil)

In backupInternal, the case statements in the while loop would look like

case SQLITE_DONE:
    try? afterBackupStep?(completedPages, totalPages, false)
    break backupLoop
case SQLITE_OK:
    try afterBackupStep?(completedPages, totalPages, true)

@groue (Owner) commented Dec 9, 2021

Do you have a use case in mind?

@mallman (Collaborator Author) commented Dec 9, 2021

Do you have a use case in mind?

I could imagine a UI that runs the backup off the main queue and disables the cancel button after the last call to sqlite3_backup_step if the call to sqlite3_backup_finish takes a very long time. Maybe. Possibly. I'm reaching.

I have a large-ish database I'll try this PR on to see how things perform in a real world scenario.

I may also inspect the source code for sqlite3_backup_finish to get an idea for the complexity of that function as it currently stands.

@groue (Owner) commented Dec 9, 2021

sqlite3_backup_finish is documented to release a lock, so it does I/O. If the database is hosted on a network disk, this can take some time.

I really do not like the extra boolean. At the same time, I have difficulty estimating the chances that the user would witness:

  • completedPageCount >= totalPageCount after sqlite3_backup_step() has returned SQLITE_OK (a user who relies on these numbers could falsely believe the backup is completed, or witness several "completions")
  • completedPageCount < totalPageCount after sqlite3_backup_step() has returned SQLITE_DONE (a user who relies on these numbers could miss the opportunity to perform specific backup completion handling from the progress closure).

Reading the SQLite doc does not help me reach a definitive conclusion.

Grrr

OK maybe our api is fundamentally flawed.

@mallman (Collaborator Author) commented Dec 9, 2021

I think we can work this out if we make this some kind of experts-only API. The GRDB step-wise backup API reflects the low-level SQLite step-wise backup behavior and all its subtlety. The progress we provide via the callback is a direct reflection of what we get from the sqlite3_backup_remaining and sqlite3_backup_pagecount calls. Users should be aware that if they want hard assurances about the behavior of this API, they have to consult the SQLite backup documentation and/or the mailing list.

If users want a simple API with simple semantics, they should pass pagesPerStep = -1 and progress = nil, which is the default anyway.

@groue (Owner) commented Dec 10, 2021

I fully agree. Thanks for keeping the boat afloat :-)

Your concern about letting users know about the last step is valid, and still not addressed. The extra boolean argument in the closure is really ugly, so what about slightly improving the API, like this:

public struct BackupProgress {
    public var completedPageCount: Int
    public var totalPageCount: Int
    public var isCompleted: Bool // set iff SQLITE_DONE
}

public func backup(
    to destDb: Database,
    pagesPerStep: Int32 = -1,
    progress: ((BackupProgress) throws -> Void)? = nil)
throws
{ ... }
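
A usage sketch, assuming this shape on the reader-level backup method discussed earlier, with cancellation by throwing (the flag and error type are hypothetical):

struct BackupCancelled: Error { }
var userAskedToCancel = false // hypothetical; flipped to true from a Cancel button

let source = try DatabaseQueue(path: "/path/to/source.sqlite")
let destination = try DatabaseQueue(path: "/path/to/backup.sqlite")
try source.backup(to: destination, pagesPerStep: 100) { progress in
    print("\(progress.completedPageCount)/\(progress.totalPageCount) pages copied")
    if userAskedToCancel && !progress.isCompleted {
        throw BackupCancelled()
    }
}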

@groue changed the title from "API and implementation for progress-reporting database backups" to "Progress-reporting database backups" on Dec 10, 2021
@mallman (Collaborator Author) commented Dec 15, 2021

I've (finally) started some "real world" testing of this API with my "real" project. From a usability perspective, my immediate reaction to writing the progress callback is that I can't remember which parameter is which. I think the BackupProgress struct is a better way to go. I also like the idea of getting an actual completion flag that mirrors the result from the SQLite API. Since we are exposing some of SQLite's API in this interface, we should document what isCompleted == true means and vice-versa.

I'm working on another commit for this PR with this change and some readme updates.

@mallman marked this pull request as ready for review on December 15, 2021
@mallman (Collaborator Author) commented Dec 15, 2021

@groue I would still like to run some more "real world" tests, but otherwise I'd say this PR is ready for review.

@groue (Owner) commented Dec 17, 2021

Thanks @mallman! We're close to the end of the year, so I may be slow to review.

@groue (Owner) commented Jan 6, 2022

Hello @mallman, this looks just great.

I have merged the development branch, and exposed DatabaseBackupProgress.remainingPageCount for raw sqlite3_backup_remaining() access. Consider that we advise the user to read the SQLite documentation: the raw SQLite concepts found in that documentation should be available as directly as possible in the GRDB APIs, so that the user feels empowered. In other words, when we go expert mode, we should go all the way.

The handy completedPageCount property is still there, but it is now computed.
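
Presumably the shape is along these lines (a sketch, not necessarily the shipped code):

public struct DatabaseBackupProgress {
    /// sqlite3_backup_remaining(): the number of pages left to copy.
    public let remainingPageCount: Int
    /// sqlite3_backup_pagecount(): the total number of pages to copy.
    public let totalPageCount: Int
    /// True when the last step returned SQLITE_DONE.
    public let isCompleted: Bool
    /// Derived from the two raw values above.
    public var completedPageCount: Int { totalPageCount - remainingPageCount }
}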

Let's wait for tests to run before I can merge 👍

@groue (Owner) commented Jan 6, 2022

We were just missing a test file in the SQLCipher 3 & 4 test projects, as revealed by make smokeTest.

I recently updated CONTRIBUTING.md, you'll find a mention of make smokeTest there.

I should specify there, and in the pull request template, that running this suite is recommended before one submits a PR. But most importantly, I should definitely beef up the CI test suite 😅

@groue merged commit 30dcea9 into groue:master on Jan 6, 2022
@groue (Owner) commented Jan 6, 2022

The PR was not based on the development branch; I had to force-push to master in order to revert the merge.

@groue (Owner) commented Jan 9, 2022

Shipped in v5.18.0. Thanks @mallman!

@mallman deleted the dev/issue-1074-incremental_backup_api branch on January 25, 2022
@mallman (Collaborator Author) commented Jan 25, 2022

The PR was not based on the development branch; I had to force-push to master in order to revert the merge.

Mea culpa. I'll make a note of this for next time.
