
Fetch should report progress through callbacks. #238


Merged
nulltoken merged 1 commit into libgit2:vNext on Nov 3, 2012

Conversation

jamill
Member

@jamill jamill commented Oct 26, 2012

Update to LibGit2Sharp that incorporates the new progress callbacks exposed in libgit2 (PR libgit2/libgit2#990)

@nulltoken
Member

Applying this patch

diff --git a/LibGit2Sharp.Tests/RemoteFixture.cs b/LibGit2Sharp.Tests/RemoteFixture.cs
index dda9fdb..4db2664 100644
--- a/LibGit2Sharp.Tests/RemoteFixture.cs
+++ b/LibGit2Sharp.Tests/RemoteFixture.cs
@@ -141,13 +141,30 @@ public void CanFetchAllTagsIntoAnEmptyRepository(string url)
                 }

                 // Perform the actual fetch
-                remote.Fetch(TagFetchMode.All, onUpdateTips: expectedFetchState.RemoteUpdateTipsHandler);
+                remote.Fetch(TagFetchMode.All, mess, comp, expectedFetchState.RemoteUpdateTipsHandler, trans);

                 // Verify the expected
                 expectedFetchState.CheckUpdatedReferences(repo);
             }
         }

+        private void trans(TransferProgress progress)
+        {
+            Console.WriteLine("trans: Received/Index/Total = {0}/{1}/{2} - Bytes received = {3}", progress.ReceivedObjectCount, progress.IndexedObjectCount, progress.TotalObjectCount, progress.BytesReceived);
+        }
+
+        private int comp(RemoteCompletionType remotecompletiontype)
+        {
+            Console.WriteLine("comp:{0}", remotecompletiontype);
+
+            return 0;
+        }
+
+        private void mess(string message)
+        {
+            Console.Write(message);
+        }
+
         [Fact]
         public void CreatingANewRemoteAddsADefaultRefSpec()
         {

produces the following output

  Counting objects: 70, done.
Compressing objects:   2% (1/45)   
Compressing objects:   4% (2/45)   
Compressing objects:   6% (3/45)   
Compressing objects:   8% (4/45)   
Compressing objects:  11% (5/45)   
Compressing objects:  13% (6/45)   
Compressing objects:  15% (7/45)   
Compressing objects:  17% (8/45)   
Compressing objects:  20% (9/45)   
Compressing objects:  22% (10/45)   
Compressing objects:  24% (11/45)   
Compressing objects:  26% (12/45)   
Compressing objects:  28% (13/45)   
Compressing objects:  31% (14/45)   
Compressing objects:  33% (15/45)   
Compressing objects:  35% (16/45)   
Compressing objects:  37% (17/45)   
Compressing objects:  40% (18/45)   
Compressing objects:  42% (19/45)   
Compressing objects:  44% (20/45)   
Compressing objects:  46% (21/45)   
Compressing objects:  48% (22/45)   
Compressing objects:  51% (23/45)   
Compressing objects:  53% (24/45)   
Compressing objects:  55% (25/45)   
Compressing objects:  57% (26/45)   
Compressing objects:  60% (27/45)   
Compressing objects:  62% (28/45)   
Compressing objects:  64% (29/45)   
Compressing objects:  66% (30/45)   
Compressing objects:  68% (31/45)   
Compressing objects:  71% (32/45)   
Compressing objects:  73% (33/45)   
Compressing objects:  75% (34/45)   
Compressing objects:  77% (35/45)   
Compressing objects:  80% (36/45)   
Compressing objects:  82% (37/45)   
Compressing objects:  84% (38/45)   
Compressing objects:  86% (39/45)   
Compressing objects:  88% (40/45)   
Compressing objects:  91% (41/45)   
Compressing objects:  93% (42/45)   
Compressing objects:  95% (43/45)   
Compressing objects:  97% (44/45)   
Compressing objects: 100% (45/45)   
Compressing objects: 100% (45/45), done.
trans: Received/Index/Total = 0/0/70 - Bytes received = 0
  trans: Received/Index/Total = 1/1/70 - Bytes received = 0
  trans: Received/Index/Total = 2/2/70 - Bytes received = 0
  trans: Received/Index/Total = 3/2/70 - Bytes received = 0
  trans: Received/Index/Total = 4/2/70 - Bytes received = 0
  trans: Received/Index/Total = 5/3/70 - Bytes received = 0
  trans: Received/Index/Total = 6/4/70 - Bytes received = 0
  trans: Received/Index/Total = 7/5/70 - Bytes received = 0
  trans: Received/Index/Total = 8/6/70 - Bytes received = 0
  trans: Received/Index/Total = 9/7/70 - Bytes received = 0
  trans: Received/Index/Total = 10/8/70 - Bytes received = 0
  trans: Received/Index/Total = 11/9/70 - Bytes received = 0
  trans: Received/Index/Total = 12/10/70 - Bytes received = 0
  trans: Received/Index/Total = 13/11/70 - Bytes received = 0
  trans: Received/Index/Total = 14/12/70 - Bytes received = 0
  trans: Received/Index/Total = 15/13/70 - Bytes received = 0
  trans: Received/Index/Total = 16/14/70 - Bytes received = 0
  trans: Received/Index/Total = 17/15/70 - Bytes received = 0
  trans: Received/Index/Total = 18/16/70 - Bytes received = 0
  trans: Received/Index/Total = 19/17/70 - Bytes received = 0
  trans: Received/Index/Total = 20/18/70 - Bytes received = 0
  trans: Received/Index/Total = 21/19/70 - Bytes received = 0
  trans: Received/Index/Total = 22/20/70 - Bytes received = 0
  trans: Received/Index/Total = 23/21/70 - Bytes received = 0
  trans: Received/Index/Total = 24/22/70 - Bytes received = 0
  trans: Received/Index/Total = 25/23/70 - Bytes received = 0
  trans: Received/Index/Total = 26/24/70 - Bytes received = 0
  trans: Received/Index/Total = 27/25/70 - Bytes received = 0
  trans: Received/Index/Total = 28/26/70 - Bytes received = 0
  trans: Received/Index/Total = 29/27/70 - Bytes received = 0
  trans: Received/Index/Total = 30/28/70 - Bytes received = 0
  trans: Received/Index/Total = 31/29/70 - Bytes received = 0
  trans: Received/Index/Total = 32/30/70 - Bytes received = 0
  trans: Received/Index/Total = 33/31/70 - Bytes received = 0
  trans: Received/Index/Total = 34/32/70 - Bytes received = 0
  trans: Received/Index/Total = 35/33/70 - Bytes received = 0
  trans: Received/Index/Total = 36/34/70 - Bytes received = 0
  trans: Received/Index/Total = 37/35/70 - Bytes received = 0
  trans: Received/Index/Total = 38/36/70 - Bytes received = 0
  trans: Received/Index/Total = 39/37/70 - Bytes received = 0
  trans: Received/Index/Total = 40/38/70 - Bytes received = 0
  trans: Received/Index/Total = 41/39/70 - Bytes received = 0
  trans: Received/Index/Total = 42/40/70 - Bytes received = 0
  trans: Received/Index/Total = 43/41/70 - Bytes received = 0
  trans: Received/Index/Total = 44/42/70 - Bytes received = 0
  trans: Received/Index/Total = 45/43/70 - Bytes received = 0
  trans: Received/Index/Total = 46/44/70 - Bytes received = 0
  trans: Received/Index/Total = 47/45/70 - Bytes received = 0
  trans: Received/Index/Total = 48/46/70 - Bytes received = 0
  trans: Received/Index/Total = 49/47/70 - Bytes received = 0
  trans: Received/Index/Total = 50/48/70 - Bytes received = 0
  trans: Received/Index/Total = 51/49/70 - Bytes received = 0
  trans: Received/Index/Total = 52/50/70 - Bytes received = 0
  trans: Received/Index/Total = 53/51/70 - Bytes received = 0
  trans: Received/Index/Total = 54/52/70 - Bytes received = 0
  trans: Received/Index/Total = 55/53/70 - Bytes received = 0
  trans: Received/Index/Total = 56/54/70 - Bytes received = 0
  trans: Received/Index/Total = 57/55/70 - Bytes received = 0
  trans: Received/Index/Total = 58/56/70 - Bytes received = 0
  trans: Received/Index/Total = 59/57/70 - Bytes received = 0
  trans: Received/Index/Total = 60/58/70 - Bytes received = 0
  trans: Received/Index/Total = 61/59/70 - Bytes received = 0
  trans: Received/Index/Total = 62/60/70 - Bytes received = 0
  trans: Received/Index/Total = 63/61/70 - Bytes received = 0
  trans: Received/Index/Total = 64/62/70 - Bytes received = 0
  trans: Received/Index/Total = 65/63/70 - Bytes received = 0
  trans: Received/Index/Total = 66/63/70 - Bytes received = 0
  trans: Received/Index/Total = 67/64/70 - Bytes received = 0
  trans: Received/Index/Total = 68/65/70 - Bytes received = 0
  trans: Received/Index/Total = 69/66/70 - Bytes received = 0
  trans: Received/Index/Total = 70/67/70 - Bytes received = 0
  Total 70 (delta 3), reused 70 (delta 3)
trans: Received/Index/Total = 70/68/70 - Bytes received = 0
  trans: Received/Index/Total = 70/69/70 - Bytes received = 0
  trans: Received/Index/Total = 70/70/70 - Bytes received = 0

which is rather nice 🆒 ❗

However, it looks like the number of received bytes doesn't grow... Did I mess up anything?
Also, the completion callback seems not to be invoked. Maybe we should not expose it yet...

/cc @carlosmn @ben

/// </summary>
[CLSCompliant(false)]
public virtual uint TotalObjectCount
{
Member

I'd prefer not decorating this with a [CLSCompliant(false)]

What's the maximum number of objects that could be reached? Wouldn't int be enough? Should we have to deal with more than 2,147,483,647 objects, we could still make this a long.

Member Author

I imagine int would be enough. I was reflecting the types these values have in libgit2. I will change the uints to ints, and the ulong to a long.
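For reference, a minimal sketch of what the managed TransferProgress surface might then look like (property names as used earlier in this thread; the exact shape of the committed class may differ):

namespace LibGit2Sharp
{
    /// <summary>
    /// Exposes the progress of a transfer operation.
    /// </summary>
    public class TransferProgress
    {
        // Counts exposed as int (rather than uint) to stay CLS-compliant.
        public virtual int TotalObjectCount { get; internal set; }
        public virtual int IndexedObjectCount { get; internal set; }
        public virtual int ReceivedObjectCount { get; internal set; }

        // Byte count exposed as long rather than ulong.
        public virtual long BytesReceived { get; internal set; }
    }
}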

@carlosmn
Member

The bytes are now also updated with WinHTTP (fix came in recently). The completion callbacks still aren't being called, though we should just add them in the library. No reason to wait, really.

@nulltoken
Member

@carlosmn

The bytes are now also updated with WinHTTP (fix came in recently).

LibGit2Sharp runs against the latest libgit2 development tip (libgit2/libgit2@a0ce87c)

Moreover, as you can see below, no bytes are received, whatever protocol is used (git/http(s)).

------ Test started: Assembly: LibGit2Sharp.Tests.dll ------

Output from LibGit2Sharp.Tests.RemoteFixture.CanFetchAllTagsIntoAnEmptyRepository(url: "http://github.com/libgit2/TestGitRepository"):
  Counting objects: 70, done.
Compressing objects:   2% (1/45)   
Compressing objects:   4% (2/45)   
...
[snip]
...
Compressing objects: 100% (45/45)   
Compressing objects: 100% (45/45), done.
trans: Received/Index/Total = 0/0/70 - Bytes received = 0
  trans: Received/Index/Total = 1/1/70 - Bytes received = 0
...
[snip]
...
  trans: Received/Index/Total = 70/69/70 - Bytes received = 0
  trans: Received/Index/Total = 70/70/70 - Bytes received = 0

Output from LibGit2Sharp.Tests.RemoteFixture.CanFetchAllTagsIntoAnEmptyRepository(url: "https://github.com/libgit2/TestGitRepository"):
  Counting objects: 70, done.
Compressing objects:   2% (1/45)   
Compressing objects:   4% (2/45)   
...
[snip]
...
Compressing objects:  97% (44/45)   
Compressing objects: 100% (45/45)   
Compressing objects: 100% (45/45), done.
Total 70 (delta 3), reused 70 (delta 3)
trans: Received/Index/Total = 0/0/70 - Bytes received = 0
  trans: Received/Index/Total = 1/1/70 - Bytes received = 0
  trans: Received/Index/Total = 2/2/70 - Bytes received = 0
...
[snip]
...
  trans: Received/Index/Total = 70/68/70 - Bytes received = 0
  trans: Received/Index/Total = 70/69/70 - Bytes received = 0
  trans: Received/Index/Total = 70/70/70 - Bytes received = 0

Output from LibGit2Sharp.Tests.RemoteFixture.CanFetchAllTagsIntoAnEmptyRepository(url: "git://github.com/libgit2/TestGitRepository.git"):
  Counting objects: 70, done.
Compressing objects:   2% (1/45)   
Compressing objects:   4% (2/45)   
Compressing objects:   6% (3/45)   
...
[snip]
...
Compressing objects:  97% (44/45)   
Compressing objects: 100% (45/45)   
Compressing objects: 100% (45/45), done.
trans: Received/Index/Total = 0/0/70 - Bytes received = 0
  trans: Received/Index/Total = 1/1/70 - Bytes received = 0
  trans: Received/Index/Total = 2/2/70 - Bytes received = 0
...
[snip]
...
  trans: Received/Index/Total = 69/66/70 - Bytes received = 0
  trans: Received/Index/Total = 70/67/70 - Bytes received = 0
  Total 70 (delta 3), reused 70 (delta 3)
trans: Received/Index/Total = 70/68/70 - Bytes received = 0
  trans: Received/Index/Total = 70/69/70 - Bytes received = 0
  trans: Received/Index/Total = 70/70/70 - Bytes received = 0

3 passed, 0 failed, 0 skipped, took 3,92 seconds (xUnit.net 1.9.0 build 1566).

Note: While running the tests I also encountered the dreaded "Result too large" error:

Test 'LibGit2Sharp.Tests.RemoteFixture.CanFetchAllTagsIntoAnEmptyRepository(url: "http://github.com/libgit2/TestGitRepository")' failed:
    LibGit2Sharp.LibGit2SharpException : An error was raised by libgit2. Category = Os (Error).
Failed to receive response: Result too large
    Core\Ensure.cs(85,0): at LibGit2Sharp.Core.Ensure.Success(Int32 result, Boolean allowPositiveResult)
    Core\Proxy.cs(1032,0): at LibGit2Sharp.Core.Proxy.git_remote_connect(RemoteSafeHandle remote, GitDirection direction)
    Remote.cs(85,0): at LibGit2Sharp.Remote.Fetch(TagFetchMode tagFetchMode, ProgressHandler onProgress, CompletionHandler onCompletion, UpdateTipsHandler onUpdateTips, TransferProgressHandler onTransferProgress)
    RemoteFixture.cs(144,0): at LibGit2Sharp.Tests.RemoteFixture.CanFetchAllTagsIntoAnEmptyRepository(String url)

@jamill
Member Author

jamill commented Oct 28, 2012

I have not had a chance yet to debug the issue with the bytes not updating with the WinHttp fix. I will look into this (probably not until Monday).

@nulltoken
Member

@jamill Last nitpick

Using the following callback as the ProgressHandler:

private void mess(string message)
{
    Console.WriteLine("[BEG]{0}[END]", message);
}

Produces the following output:

Output from LibGit2Sharp.Tests.RemoteFixture.CanFetchAllTagsIntoAnEmptyRepository(url: "git://github.com/libgit2/TestGitRepository.git"):
  [BEG]Counting objects: 70, done.
[END]
  [BEG]Compressing objects:   2% (1/45)   
Compressing objects:   4% (2/45)   
Compressing objects:   6% (3/45)   
Compressing objects:[END]
  [BEG]   8% (4/45)   
Compressing objects:  11% (5/45)   
Compressing objects:  13% (6/45)   
Compressing objects:  15% (7/45)   
Comp[END]
  [BEG]ressing objects:  17% (8/45)   
Compressing objects:  20% (9/45)   
Compressing objects:  22% (10/45)   
Compressing objects:  2[END]
  [BEG]4% (11/45)   
Compressing objects:  26% (12/45)   
Compressing objects:  28% (13/45)   
Compressing objects:  31% (14/45)   
Com[END]
  [BEG]pressing objects:  33% (15/45)   
Compressing objects:  35% (16/45)   
Compressing objects:  37% (17/45)   
Compressing objects:[END]
  [BEG]  40% (18/45)   
Compressing objects:  42% (19/45)   
Compressing objects:  44% (20/45)   
Compressing objects:  46% (21/45)   
[END]
  [BEG]Compressing objects:  48% (22/45)   
Compressing objects:  51% (23/45)   
Compressing objects:  53% (24/45)   
Compressing objec[END]
  [BEG]ts:  55% (25/45)   
Compressing objects:  57% (26/45)   
Compressing objects:  60% (27/45)   
Compressing objects:  62% (28/45) [END]
  [BEG]  
Compressing objects:  64% (29/45)   
Compressing objects:  66% (30/45)   
Compressing objects:  68% (31/45)   
Compressing ob[END]
  [BEG]jects:  71% (32/45)   
Compressing objects:  73% (33/45)   
Compressing objects:  75% (34/45)   
Compressing objects:  77% (35/4[END]
  [BEG]5)   
Compressing objects:  80% (36/45)   
Compressing objects:  82% (37/45)   
Compressing objects:  84% (38/45)   
Compressing[END]
  [BEG] objects:  86% (39/45)   
Compressing objects:  88% (40/45)   
Compressing objects:  91% (41/45)   
Compressing objects:  93% (4[END]
  [BEG]2/45)   
Compressing objects:  95% (43/45)   
Compressing objects:  97% (44/45)   
Compressing objects: 100% (45/45)   
Compress[END]
  [BEG]ing objects: 100% (45/45), done.
Total 70 (delta 3), reused 70 (delta 3)
[END]

As one can see, the message parameter contains chunks of data, not a complete, self-contained message. We may need a better name than message, something that would reflect the arbitrarily chunked aspect of what will be received.

Thinking about it, maybe a slight update to the XML doc wouldn't be so bad either ;-)
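For illustration only, here is a rough sketch (not part of this PR; the ProgressLineBuffer name is made up) of the kind of buffering a consumer currently has to do to turn these arbitrarily chunked callbacks into complete, CR- or LF-terminated lines:

using System;
using System.Text;

public class ProgressLineBuffer
{
    private readonly StringBuilder buffer = new StringBuilder();

    // Feed each chunk received by the ProgressHandler callback; onLine is
    // invoked once per complete line, with the CR or LF terminator included.
    public void Append(string chunk, Action<string> onLine)
    {
        foreach (char c in chunk)
        {
            buffer.Append(c);

            if (c == '\r' || c == '\n')
            {
                onLine(buffer.ToString());
                buffer.Length = 0;
            }
        }
    }
}

In the patch above, a lambda such as chunk => lineBuffer.Append(chunk, Console.Write) would take the place of the mess handler passed to remote.Fetch.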

@nulltoken
Member

The completion callbacks still aren't being called, though we should just add them in the library. No reason to wait, really.

@ben As you've brilliantly dived into the network code (and got back safe), do you think you could tackle this as well? 🙇 🙏

@dahlbyk
Member

dahlbyk commented Oct 28, 2012

As one can see, the message parameter contains chunks of data, not a complete, self-contained message. We may need a better name than message, something that would reflect the arbitrarily chunked aspect of what will be received.

Is it too late to change how libgit2 treats progress? This chunked response (which appears to be based on an internal buffer size?) seems less useful than self-contained messages. For example, if I wanted to show fetch progress on a single line I would have to parse the last full line out of the message, and remember the rest to prepend to the next message.

@carlosmn
Member

The way we treat progress is the way that git gives it to us. Somebody has to parse it, and I'd rather not have the library decide what belongs where, especially since there is absolutely no fixed format and any number of processes on the remote end can send us data, not just git. This is clearer for push. We get the hook's output exactly the same way (which is what that channel is for) and whether to wait for a newline (or CR as is the case for fetch) to send something to the library's user is not something the library is in a position to judge.

The main problem there is that git on the remote end assumes that the output will be thrown directly to the console, which will parse it without problems. The "real" solution is to have git send us machine-readable output through a different channel, but we need to standardise it somehow (and convince the other implementations to support it).

@nulltoken
Member

Is it too late to change how libgit2 treats progress?

I'm not even sure that @carlosmn would agree to this. We call this progress, but basically, those are only printable chars that the server throws at us.

We could do the buffering and the line splitting on our side, though. However, I'm not sure that everything will be split on \n. Is that even a convention?

@nulltoken
Member

Ah. It looks like I should pay more attention to page refreshes. @carlosmn already answered :-) ^^

@carlosmn
Member

No, the output isn't LF-delimited. The remote git is sending the output so it looks good (updating the one line) when you show it to the console, so each "line" is delimited by CR. The only LF would come at the last one, so the indexing process can give its own output.

@nulltoken
Member

so each "line" is delimited by CR

Is this "stable" enough to build up a line-splitting process is C#?

The only LF would come at the last one

Is this being sent by libgit2 as well?

@carlosmn
Member

I doubt very much that it's going to change; if only for backwards compatibility, upload-pack is going to assume that its output is being sent to the shell. As you can see from the output you pasted above, after the last "Compressing", you do get a newline. The library will pass up whatever we get from the other end without inspecting it.

If the output changes, the 'git fetch' output is going to break without local changes, so I wouldn't worry about that part of the format changing. The problem would come if a post-receive hook shows output by adding dots to the same line without any explicit separation, expecting it to be shown to the user immediately (not that likely a situation, but people do all sorts of weird things).

@nulltoken
Member

As you can see from the output you pasted above, after the last "Compressing", you do get a newline. The library will pass up whatever we get from the other end without inspecting it.

Cool. So, provided we implement a buffered line splitting process, we've got a stable signal to let us know when we can emit a complete line.

@dahlbyk @jamill Guys, I'll be off the hook for a few days and won't merge this PR until I'm back. So if you're willing to try and update the ProgressHandler to make it able to send complete lines, now would be a good time 😜

@jamill
Member Author

jamill commented Oct 28, 2012

@nulltoken @dahlbyk

@dahlbyk @jamill Guys, I'll be off the hook for a few days and won't merge this PR until I'm back. So if you're willing to try and update the ProgressHandler to make it able to send complete lines, now would be a good time

Are you thinking that the delegate signature would remain the same but only emit complete lines? Or will it return a CR-delimited list of strings? That is, how will it handle a string that contains multiple lines? Would the signature look something like:

public delegate void ProgressHandler(List<string> messages);

Or, stay as is, but not emit the text of an incomplete line (until it can be emitted as part of a full line).

If we do not like the parameter being named messages, what might be a better name... serverProgressOutput, serverProgressText ...?

@nulltoken
Member

Or, stay as is, but not emit the text of an incomplete line (until it can be emitted as part of a full line).

This. And when libgit2 invokes us with a block of 3 CR-delimited lines, we would issue 3 callbacks of one line each, in a row. Would this make sense?

If the ProgressHandler issues complete lines, I think we could keep the message name.

@dahlbyk
Member

dahlbyk commented Oct 28, 2012

Or we could fire the progress callback with a single string of all complete lines (including line endings, should a client wish to handle CR/LF differently), storing the remainder to be sent with the next callback.

@nulltoken
Member

@dahlbyk I was rather thinking of removing the trailing CR or LF before firing each callback. So, the client could either "replace" the line being displayed or pack the lines together using whatever line terminator it prefers.

Given:

[BEG]WORD0(CR)[END]
[BEG]WORD1(CR)
WORD2(CR)
WO[END]
[BEG]RD3(CR)
WORD4(LF)[END]

we would issue

First invocation:

  • callback("WORD0");

Second invocation:

  • callback("WORD1");
  • callback("WORD2");

Third invocation:

  • callback("WORD3");
  • callback("WORD4"); // final one, because of the LF

@dahlbyk
Member

dahlbyk commented Oct 29, 2012

LF doesn't necessarily mean end of progress. This:

private void mess(string message)
{
    message = message.Replace("\r", "\\r\r").Replace("\n", "\\n\n");
    Console.Write(message);
}

Yields this:

Counting objects: 70, done.\n
Compressing objects:   2% (1/45)   \r
Compressing objects:   4% (2/45)   \r
Compressing objects:   6% (3/45)   \r
...snip...
Compressing objects:  95% (43/45)   \r
Compressing objects:  97% (44/45)   \r
Compressing objects: 100% (45/45)   \r
Compressing objects: 100% (45/45), done.\n
Total 70 (delta 3), reused 70 (delta 3)\n

So that the shell leaves only the messages preceding LF:

Counting objects: 70, done.
Compressing objects: 100% (45/45), done.
Total 70 (delta 3), reused 70 (delta 3)

If our consumers want to emulate this behavior, they need the line endings.
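To make that concrete: assuming a consumer already receives complete, terminator-suffixed lines (for example via a buffer like the one sketched earlier), emulating the shell boils down to treating CR-terminated lines as transient and LF-terminated ones as final. A hypothetical sketch:

using System;
using System.Collections.Generic;

public class ShellLikeProgressDisplay
{
    private readonly List<string> keptLines = new List<string>();

    // line includes its terminator: '\r' marks a transient update, '\n' a final one.
    public void OnProgressLine(string line)
    {
        if (line.EndsWith("\n"))
        {
            // The shell keeps LF-terminated lines, e.g. "Counting objects: 70, done."
            keptLines.Add(line.TrimEnd('\r', '\n'));
        }
        else
        {
            // CR-terminated lines overwrite each other, e.g. "Compressing objects:  42% (19/45)"
            Console.Write("\r" + line.TrimEnd('\r'));
        }
    }
}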

@nulltoken
Member

You're right. Besides this, we don't need to identify the final chunk. We only need to be sure that it will bear a line terminator.

So, given:

[BEG]WORD0(LF)[END]
[BEG]WORD1(CR)
WORD2(CR)
WO[END]
[BEG]RD3(CR)
WORD4(LF)[END]

we would issue

First invocation:

  • callback("WORD0(LF)");

Second invocation:

  • callback("WORD1(CR)");
  • callback("WORD2(CR)");

Third invocation:

  • callback("WORD3(CR)");
  • callback("WORD4(LF)");

Better?

@ethomson
Member

If you only call callback() when you see a \r, then you'll miss messages that are:

Compressing data: 
(long pause)
done\n

These channels are really meant to correspond nicely to stdout/stderr. It seems like the best analog to them is a Stream... why don't we just write the literal bytes to a TextWriter? If the caller wants to pass in Console.Out and Console.Error, they can. If they want to do something more advanced, they can pass in their own implementation of TextWriter...?
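To make the idea a bit more concrete, a caller who wants something fancier than Console.Out could plug in a TextWriter of their own along these lines (a sketch only; the CallbackTextWriter type is hypothetical, not something this PR adds):

using System;
using System.IO;
using System.Text;

// Forwards everything written to it to an arbitrary callback, so e.g. a GUI
// client could route the server output to a progress dialog.
public class CallbackTextWriter : TextWriter
{
    private readonly Action<string> callback;

    public CallbackTextWriter(Action<string> callback)
    {
        this.callback = callback;
    }

    public override Encoding Encoding
    {
        get { return Encoding.UTF8; }
    }

    public override void Write(char value)
    {
        callback(value.ToString());
    }

    public override void Write(string value)
    {
        callback(value);
    }
}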

@jamill
Member Author

jamill commented Oct 29, 2012

However, it looks like the number of received bytes doesn't grow... Did I mess up anything?

@ben @nulltoken It looks like this is due to the granularity of the number of bytes reported. This value gets updated when fetching a larger repository.

While running the tests I also encountered the dreaded "Result too large" error:

I was also able to repro and get the part of the stack that sets this error. Looks like the call to WinHttpReceiveResponse failed:

git2.dll!giterr_set(int error_class, const char * string, ...)  Line 41 C
git2.dll!send_request(transport_http * t, const char * service, void * data, long content_length, int ls)  Line 180 + 0xc bytes C
git2.dll!http_negotiation_step(git_transport * transport, void * data, unsigned int len)  Line 507 + 0x18 bytes C
git2.dll!git_fetch_negotiate(git_remote * remote)  Line 270 + 0x14 bytes    C
git2.dll!git_remote_download(git_remote * remote, void (const git_transfer_progress *, void *)* progress_cb, void * progress_payload)  Line 531 + 0x9 bytes C

@dahlbyk
Member

dahlbyk commented Oct 29, 2012

It seems like the best analog to them is a Stream... why don't we just write the literal bytes to a TextWriter?

👍

@nulltoken
Member

I like the TextWriter idea very much as well.

And I'd be very interested in having a sample usage in the tests of what an advanced implementation (callback-based?) could look like.

@dahlbyk
Member

dahlbyk commented Oct 29, 2012

@jamill updates look good

@ben
Member

ben commented Oct 29, 2012

I figured out the problem with bytes_received being zero! Bug in libgit2. I've already pushed a fix over there, and I'll open a PR here in a sec.

This was referenced Oct 29, 2012
@jamill
Member Author

jamill commented Oct 30, 2012

@ben Thanks!

I have rebased this PR on top of @ben's fix. The output from @nulltoken's patch:

Counting objects: 70, done.
Compressing objects: 2% (1/45)
Compressing objects: 4% (2/45)
Compressing objects: 6% (3/45)
Compressing objects: 8% (4/45)
Compressing objects: 11% (5/45)
Compressing objects: 13% (6/45)
Compressing objects: 15% (7/45)
Compressing objects: 17% (8/45)
Compressing objects: 20% (9/45)
Compressing objects: 22% (10/45)
Compressing objects: 24% (11/45)
Compressing objects: 26% (12/45)
Compressing objects: 28% (13/45)
Compressing objects: 31% (14/45)
Compressing objects: 33% (15/45)
Compressing objects: 35% (16/45)
Compressing objects: 37% (17/45)
Compressing objects: 40% (18/45)
Compressing objects: 42% (19/45)
Compressing objects: 44% (20/45)
Compressing objects: 46% (21/45)
Compressing objects: 48% (22/45)
Compressing objects: 51% (23/45)
Compressing objects: 53% (24/45)
Compressing objects: 55% (25/45)
Compressing objects: 57% (26/45)
Compressing objects: 60% (27/45)
Compressing objects: 62% (28/45)
Compressing objects: 64% (29/45)
Compressing objects: 66% (30/45)
Compressing objects: 68% (31/45)
Compressing objects: 71% (32/45)
Compressing objects: 73% (33/45)
Compressing objects: 75% (34/45)
Compressing objects: 77% (35/45)
Compressing objects: 80% (36/45)
Compressing objects: 82% (37/45)
Compressing objects: 84% (38/45)
Compressing objects: 86% (39/45)
Compressing objects: 88% (40/45)
Compressing objects: 91% (41/45)
Compressing objects: 93% (42/45)
Compressing objects: 95% (43/45)
Compressing objects: 97% (44/45)
Compressing objects: 100% (45/45)
Compressing objects: 100% (45/45), done.
Total 70 (delta 3), reused 70 (delta 3)
trans: Received/Index/Total = 0/0/70 - Bytes received = 5882
trans: Received/Index/Total = 1/1/70 - Bytes received = 5882
trans: Received/Index/Total = 2/2/70 - Bytes received = 5882
trans: Received/Index/Total = 3/2/70 - Bytes received = 5882
trans: Received/Index/Total = 4/2/70 - Bytes received = 5882
trans: Received/Index/Total = 5/3/70 - Bytes received = 5882
trans: Received/Index/Total = 6/4/70 - Bytes received = 5882
trans: Received/Index/Total = 7/5/70 - Bytes received = 5882
trans: Received/Index/Total = 8/6/70 - Bytes received = 5882
trans: Received/Index/Total = 9/7/70 - Bytes received = 5882
trans: Received/Index/Total = 10/8/70 - Bytes received = 5882
trans: Received/Index/Total = 11/9/70 - Bytes received = 5882
trans: Received/Index/Total = 12/10/70 - Bytes received = 5882
trans: Received/Index/Total = 13/11/70 - Bytes received = 5882
trans: Received/Index/Total = 14/12/70 - Bytes received = 5882
trans: Received/Index/Total = 15/13/70 - Bytes received = 5882
trans: Received/Index/Total = 16/14/70 - Bytes received = 5882
trans: Received/Index/Total = 17/15/70 - Bytes received = 5882
trans: Received/Index/Total = 18/16/70 - Bytes received = 5882
trans: Received/Index/Total = 19/17/70 - Bytes received = 5882
trans: Received/Index/Total = 20/18/70 - Bytes received = 5882
trans: Received/Index/Total = 21/19/70 - Bytes received = 5882
trans: Received/Index/Total = 22/20/70 - Bytes received = 5882
trans: Received/Index/Total = 23/21/70 - Bytes received = 5882
trans: Received/Index/Total = 24/22/70 - Bytes received = 5882
trans: Received/Index/Total = 25/23/70 - Bytes received = 5882
trans: Received/Index/Total = 26/24/70 - Bytes received = 5882
trans: Received/Index/Total = 27/25/70 - Bytes received = 5882
trans: Received/Index/Total = 28/26/70 - Bytes received = 5882
trans: Received/Index/Total = 29/27/70 - Bytes received = 5882
trans: Received/Index/Total = 30/28/70 - Bytes received = 5882
trans: Received/Index/Total = 31/29/70 - Bytes received = 5882
trans: Received/Index/Total = 32/30/70 - Bytes received = 5882
trans: Received/Index/Total = 33/31/70 - Bytes received = 5882
trans: Received/Index/Total = 34/32/70 - Bytes received = 5882
trans: Received/Index/Total = 35/33/70 - Bytes received = 5882
trans: Received/Index/Total = 36/34/70 - Bytes received = 5882
trans: Received/Index/Total = 37/35/70 - Bytes received = 5882
trans: Received/Index/Total = 38/36/70 - Bytes received = 5882
trans: Received/Index/Total = 39/37/70 - Bytes received = 5882
trans: Received/Index/Total = 40/38/70 - Bytes received = 5882
trans: Received/Index/Total = 41/39/70 - Bytes received = 5882
trans: Received/Index/Total = 42/40/70 - Bytes received = 5882
trans: Received/Index/Total = 43/41/70 - Bytes received = 5882
trans: Received/Index/Total = 44/42/70 - Bytes received = 5882
trans: Received/Index/Total = 45/43/70 - Bytes received = 5882
trans: Received/Index/Total = 46/44/70 - Bytes received = 5882
trans: Received/Index/Total = 47/45/70 - Bytes received = 5882
trans: Received/Index/Total = 48/46/70 - Bytes received = 5882
trans: Received/Index/Total = 49/47/70 - Bytes received = 5882
trans: Received/Index/Total = 50/48/70 - Bytes received = 5882
trans: Received/Index/Total = 51/49/70 - Bytes received = 5882
trans: Received/Index/Total = 52/50/70 - Bytes received = 5882
trans: Received/Index/Total = 53/51/70 - Bytes received = 5882
trans: Received/Index/Total = 54/52/70 - Bytes received = 5882
trans: Received/Index/Total = 55/53/70 - Bytes received = 5882
trans: Received/Index/Total = 56/54/70 - Bytes received = 5882
trans: Received/Index/Total = 57/55/70 - Bytes received = 5882
trans: Received/Index/Total = 58/56/70 - Bytes received = 5882
trans: Received/Index/Total = 59/57/70 - Bytes received = 5882
trans: Received/Index/Total = 60/58/70 - Bytes received = 5882
trans: Received/Index/Total = 61/59/70 - Bytes received = 5882
trans: Received/Index/Total = 62/60/70 - Bytes received = 5882
trans: Received/Index/Total = 63/61/70 - Bytes received = 5882
trans: Received/Index/Total = 64/62/70 - Bytes received = 5882
trans: Received/Index/Total = 65/63/70 - Bytes received = 5882
trans: Received/Index/Total = 66/63/70 - Bytes received = 5882
trans: Received/Index/Total = 67/64/70 - Bytes received = 5882
trans: Received/Index/Total = 68/65/70 - Bytes received = 5882
trans: Received/Index/Total = 69/66/70 - Bytes received = 5882
trans: Received/Index/Total = 70/67/70 - Bytes received = 5882
trans: Received/Index/Total = 70/68/70 - Bytes received = 5882
trans: Received/Index/Total = 70/69/70 - Bytes received = 5882
trans: Received/Index/Total = 70/70/70 - Bytes received = 5882

@jamill
Member Author

jamill commented Oct 30, 2012

@nulltoken @ben @dahlbyk

I also made some tweaks with the commit in my last update. These were mainly to some of the comments, but they also included the following changes (I think these are for the better):

  1. I renamed the TotalObjectCount, ReceivedObjectCount, IndexedObjectCount, and BytesReceived properties to TotalObjects, ReceivedObjects, IndexedObjects and ReceivedBytes respectively. I think this is more consistent (sorry that it breaks the patch and Ben's PR Clone #223).

  2. Changed the class name / namespace of GitTransferCallbacks to TransferCallbacks (as this class does not have a direct libgit2 counterpart).

@jamill
Member Author

jamill commented Oct 30, 2012

I like the TextWriter idea very much as well.

And I'd be very interested in having a sample usage in the tests of what an advanced implementation (callback-based?) could look like.

@nulltoken - I am not against implementing the TextWriter idea, but what would you think if we tackled that in a separate PR, and had this one focus on the update to the TransferProgress callbacks (and associated data structures)? Let me know what you think.

@dahlbyk
Member

dahlbyk commented Oct 31, 2012

@jamill @nulltoken See dahlbyk/libgit2sharp@jamill:fetch_update...fetch_update

  1. Cleaner if GenerateCallback() handles the null check?
  2. Motivated by (3) - if progressWriter is optional then calls without a progress handler are ambiguous. If it's not optional, it seems weird to have Fetch(progressWriter, tagFetchMode, onCompletion, ...) next to Fetch(tagFetchMode, onProgress, onCompletion, ...).
  3. Add TextWriter support as extension methods.
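As a rough illustration of point 3 (purely a sketch; names and signatures are illustrative and the actual branch may look different):

using System.IO;

namespace LibGit2Sharp
{
    public static class RemoteTextWriterExtensions
    {
        // Adapts a TextWriter to the ProgressHandler-based Fetch() overload,
        // so callers can simply pass Console.Out (or their own writer).
        public static void Fetch(this Remote remote, TextWriter progressWriter, TagFetchMode tagFetchMode)
        {
            remote.Fetch(tagFetchMode, onProgress: progressWriter.Write);
        }
    }
}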

@dahlbyk
Member

dahlbyk commented Oct 31, 2012

And 👍 for polish changes

@jamill
Member Author

jamill commented Oct 31, 2012

@dahlbyk Thanks for the review!

I like your suggestions. Do we need both the ProgressHandler callback and the TextWriter parameters? Could we remove the ProgressHandler parameter and just go with the TextWriter parameter?

@dahlbyk
Member

dahlbyk commented Oct 31, 2012

Part of me thinks the callback maybe gives a bit of flexibility that would be harder to achieve with the TextWriter. I don't see harm in leaving both options, but I don't have a strong feeling either way.

@jamill
Member Author

jamill commented Oct 31, 2012

My thought was that having a single way to expose the text stream from the server would leave the interface a bit simpler (and if there is another scenario we want to support that exposing the callback would enable or make easier, we could add it then...).

Either way, I am fine. Assuming we want to expose both interfaces, what would be the best way to include your modifications? I could directly include your commits in this PR? You could follow up with a PR? ...

@dahlbyk
Member

dahlbyk commented Oct 31, 2012

For now I would bring them into this PR, then squash as appropriate when we're ready to merge. I think we're close?

@dahlbyk
Member

dahlbyk commented Oct 31, 2012

@nulltoken will have to make the call on the progress handler. :)

@nulltoken
Member

Part of me thinks the callback maybe gives a bit of flexibility that would be harder to achieve with the TextWriter.

@dahlbyk I agree with you. This is why I was asking for some advanced usage in a test. I think that redirecting the output to a Console might be trivial. However, if the client is a GUI and wishes to update a progress dialog, this may require some trickier tweaking.

@jamill This PR is awesome as it is! Let's keep the TextWriter for another PR.

Could you please update the ProgressHandler documentation in order to explain that chunked input is likely to happen, rename message to serverProgressOutput, and squash the commits all together?
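For clarity, the requested rename would presumably leave the delegate looking something like this (a sketch, not the committed code):

/// <summary>
/// Delegate definition for progress reported by the server during fetch.
/// The text is forwarded exactly as the server sends it, in arbitrarily
/// sized chunks which are not necessarily complete lines.
/// </summary>
/// <param name="serverProgressOutput">The raw chunk of textual progress output received from the server.</param>
public delegate void ProgressHandler(string serverProgressOutput);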

@jamill
Member Author

jamill commented Nov 1, 2012

Thanks for the reviews!

Could you please update the ProgressHandler documentation in order to explain that chunked input is likely to happen, rename message to serverProgressOutput, and squash the commits all together?

Done!

@dahlbyk - I also included your suggestion for moving the null check into the callback generation. Thanks!

@dahlbyk
Member

dahlbyk commented Nov 1, 2012

👍

@nulltoken nulltoken merged commit fadfa10 into libgit2:vNext Nov 3, 2012
@nulltoken
Member

Merged! ❤️ += 💖
