Rebase onto Git for Windows v2.53.0-rc0.windows.1 #845

dscho merged 294 commits into microsoft:vfs-2.53.0-rc0
Conversation
As of 9e59b38 (object-file: emit corruption errors when detected, 2022-12-14), Git will loudly complain about corrupt objects. That is fine, as long as the idea isn't to re-download locally-corrupted objects. But that's exactly what we want to do in VFS for Git via the `read-object` hook, as per the `GitCorruptObjectTests` code added in microsoft/VFSForGit@2db0c030eb25 (New features: [...] - GVFS can now recover from corrupted git object files [...] , 2018-02-16). So let's support precisely that, and add a regression test that ensures that re-downloading corrupt objects via the `read-object` hook works. While at it, avoid the XOR operator to flip the bits, when we actually want to make sure that they are turned off: Use the AND-NOT operator for that purpose. Helped-by: Matthew John Cheetham <mjcheetham@outlook.com> Helped-by: Derrick Stolee <stolee@gmail.com> Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
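As a hedged illustration of the regression-test idea (the file name and corruption pattern below are assumptions, not the actual test code), corrupting a loose object in place and reading it back should trigger hook-driven recovery rather than a fatal corruption error. Note the AND-NOT point: the corruption forces bits off deterministically instead of XOR-toggling them.

```shell
echo content >file.txt &&
oid=$(git hash-object -w file.txt) &&
obj=.git/objects/$(echo "$oid" | sed 's|^..|&/|') &&
chmod u+w "$obj" &&
# Force the leading bytes to zero (bits OFF, not toggled), which
# breaks the zlib header and makes the object unreadable.
printf '\0\0\0\0' | dd of="$obj" bs=1 count=4 conv=notrunc 2>/dev/null &&
git cat-file blob "$oid"  # with the read-object hook: expect recovery, not an error
```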
Add the ability to block built-in commands based on whether the `core.gvfs` setting has the `GVFS_USE_VIRTUAL_FILESYSTEM` bit set. This allows us to selectively block commands that use the GVFS protocol but don't use VFS for Git (for example, repos cloned via `scalar clone` against Azure DevOps). Signed-off-by: Matthew John Cheetham <mjcheetham@outlook.com>
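A minimal sketch of how such a gate could be probed from the command line; the numeric bit value is an assumption for illustration, not the fork's actual constant:

```shell
# core.gvfs is a bitmask; test one (assumed) bit of it.
GVFS_USE_VIRTUAL_FILESYSTEM=64   # illustrative value only
gvfs=$(git config --type=int --default=0 core.gvfs) &&
if test $(( gvfs & GVFS_USE_VIRTUAL_FILESYSTEM )) -ne 0
then
	echo "virtual filesystem in use: blocked commands apply here" >&2
fi
```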
Loosen the blocking of the `repack` command from all "GVFS repos" (those that have `core.gvfs` set) to only those that actually use the virtual file system (VFS for Git only). This allows for `repack` to be used in Scalar clones. Signed-off-by: Matthew John Cheetham <mjcheetham@outlook.com>
String formatting can be a performance issue when there are hundreds of thousands of trees. Stop using strbuf_addf() and append the strings or characters individually instead. There are a limited number of modes, so add a switch statement for the known ones and a default case for anything that is not known to Git. In one scenario involving a huge worktree, this reduces the time required for a `git checkout <branch>` from 44 seconds to 38 seconds, i.e. it is a non-negligible performance improvement. Signed-off-by: Kevin Willford <kewillf@microsoft.com>
Loosen the blocking of the `fsck` command from all "GVFS repos" (those that have `core.gvfs` set) to only those that actually use the virtual file system (VFS for Git only). This allows for `fsck` to be used in Scalar clones. Signed-off-by: Matthew John Cheetham <mjcheetham@outlook.com>
The following commands and options are not currently supported when working in a GVFS repo. Add code to detect and block these commands from executing. 1) fsck 2) gc 3) prune 4) repack 5) submodule 6) update-index --split-index 7) update-index --index-version (other than 4) 8) update-index --[no-]skip-worktree 9) worktree Signed-off-by: Ben Peart <benpeart@microsoft.com> Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Loosen the blocking of the `prune` command from all "GVFS repos" (those that have `core.gvfs` set) to only those that actually use the virtual file system (VFS for Git only). This allows for `prune` to be used in Scalar clones. Signed-off-by: Matthew John Cheetham <mjcheetham@outlook.com>
In earlier versions of `microsoft/git`, we found a user who had set `core.gvfs = false` in their global config. This should not have been necessary, but it also should not have caused a problem. However, it did. The reason was that `gvfs_load_config_value()` was called from `config.c` when reading config key/value pairs from all the config files. The local config should override the global config, and this is done by `config.c` reading the global config first then reading the local config. However, our logic only allowed writing the `core_gvfs` variable once. In v2.51.0, we had to adapt to upstream changes that changed the way the `core.gvfs` config value is read, and the special handling is no longer necessary, yet we still want the test case that ensures this bug does not regress. Signed-off-by: Derrick Stolee <dstolee@microsoft.com> Signed-off-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
Replace the special casing of the `worktree` command being blocked on VFS-enabled repos with the new `BLOCK_ON_VFS_ENABLED` flag. Signed-off-by: Matthew John Cheetham <mjcheetham@outlook.com>
On index load, clear/set the skip worktree bits based on the virtual file system data. Use virtual file system data to update skip-worktree bit in unpack-trees. Use virtual file system data to exclude files and folders not explicitly requested. Update 2022-04-05: disable the "present-despite-SKIP_WORKTREE" file removal behavior when 'core.virtualfilesystem' is enabled. Signed-off-by: Ben Peart <benpeart@microsoft.com>
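To see the effect, the skip-worktree bits that this data controls are visible via `git ls-files -t` (a stock Git option), which tags skip-worktree entries with `S`:

```shell
# Entries excluded by the virtual-filesystem data carry skip-worktree:
git ls-files -t | grep '^S ' | head
```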
Emit a warning message when the `gvfs.sharedCache` option is set that the `repack` command will not perform repacking on the shared cache. In the future we can teach `repack` to operate on the shared cache, at which point we can drop this commit. Signed-off-by: Matthew John Cheetham <mjcheetham@outlook.com>
…x has been redirected Fixes #13 Some git commands spawn helpers and redirect the index to a different location. These include "difftool -d" and the sequencer (i.e. `git rebase -i`, `git cherry-pick` and `git revert`) and others. In those instances we don't want to update their temporary index with our virtualization data. Helped-by: Johannes Schindelin <johannes.schindelin@gmx.de> Signed-off-by: Ben Peart <Ben.Peart@microsoft.com>
Add a check to see if a directory is included in the virtualfilesystem before checking the directory hashmap. This allows a directory entry like `foo/` to find all untracked files in subdirectories.
When our patches to support that hook were upstreamed, the hook's name was eliciting some reviewer suggestions, and it was renamed to `post-index-change`. These patches (with the new name) made it into v2.22.0. However, VFSforGit users may very well have checkouts with that hook installed under the original name. To support this, let's just introduce a hack where we look a bit more closely when we just failed to find the `post-index-change` hook, and allow any `post-indexchanged` hook to run instead (if it exists).
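The fallback rule amounts to something like the following; the real lookup happens in Git's C hook-running code, so this shell rendering is only a sketch:

```shell
# Prefer the upstream hook name; fall back to the legacy VFSforGit one.
hook=.git/hooks/post-index-change
test -x "$hook" || hook=.git/hooks/post-indexchanged
test -x "$hook" && "$hook"
```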
When using a virtual file system layer, the FSMonitor does not make sense. Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
When sparse-checkout is enabled, add the sparse-checkout percentage to the Trace2 data stream. This number was already computed and printed on the console in the "You are in a sparse checkout..." message. It would be helpful to log it too for performance monitoring. Signed-off-by: Jeff Hostetler <jeffhostetler@github.com>
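The Trace2 event stream can be captured with the standard `GIT_TRACE2_EVENT` mechanism; the exact key name of the new data event is an assumption here:

```shell
# Capture events, then look for the sparse-checkout percentage.
GIT_TRACE2_EVENT=/tmp/trace.json git status &&
grep sparse /tmp/trace.json
```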
Teach STATUS to optionally serialize the results of a status computation to a file. Teach STATUS to optionally read an existing serialization file and simply print the results, rather than actually scanning. This is intended for immediate status results on extremely large repos and assumes the use of a service/daemon to maintain a fresh current status snapshot. 2021-10-30: packet_read() changed its prototype in ec9a37d (pkt-line.[ch]: remove unused packet_read_line_buf(), 2021-10-14). 2021-10-30: sscanf() now does an extra check that "%d" goes into an "int" and complains about "uint32_t". Replacing with "%u" fixes the compile-time error. 2021-10-30: string_list_init() was removed by abf897b (string-list.[ch]: remove string_list_init() compatibility function, 2021-09-28), so we need to initialize manually. Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com> Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Teach status serialization to take an optional pathname on
the command line to direct that cache data be written there
rather than to stdout. When used this way, normal status
results will still be written to stdout.
When no path is given, only binary serialization data is
written to stdout.
Usage:
git status --serialize[=<path>]
Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
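Putting the two modes together, a round trip might look like this (the cache path is chosen arbitrarily):

```shell
# Write the status cache while still printing normal results:
git status --serialize=.git/status-cache

# Later, answer from the cache instead of scanning the worktree:
git status --deserialize=.git/status-cache
```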
Teach status deserialize code to reject the status cache when printing in porcelain V2 and there are unresolved conflicts in the cache file. A follow-on task might extend the cache format to include this additional data. See code for longer explanation. Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
Changes to the global or repo-local excludes files can change the results returned by "git status" for untracked files. Therefore, it is important that the exclude-file values used during serialization are still current at the time of deserialization. Teach "git status --serialize" to report metadata on the user's global exclude file (which defaults to "$XDG_CONFIG_HOME/git/ignore") and for the repo-local excludes file (which is in ".git/info/exclude"). Serialize will record the pathnames and mtimes for these files in the serialization header (next to the mtime data for the .git/index file). Teach "git status --deserialize" to validate this new metadata. If either exclude file has changed since the serialization-cache-file was written, then deserialize will reject the cache file and force a full/normal status run. Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
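For example, touching an excludes file after the cache was written should force a full status run (paths as described above):

```shell
git status --serialize=.git/status-cache &&
echo '*.log' >>.git/info/exclude &&
# The recorded mtime no longer matches; the cache is rejected and
# status falls back to a normal scan:
git status --deserialize=.git/status-cache
```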
Teach `git status --deserialize` to either wait indefinitely or immediately fail if the status serialization cache file is stale. Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
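For instance, to fail fast rather than wait on a stale cache:

```shell
git status --deserialize=.git/status-cache --deserialize-wait=fail
```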
With the "--untracked-files=complete" option status computes a superset of the untracked files. We use this when writing the status cache. If subsequent deserialize commands ask for either the complete set or one of the "no", "normal", or "all" subsets, it can still use the cache file because of filtering in the deserialize parser. When running status with the "-uno" option, the long format status would print a "(use -u to show untracked files)" hint. When deserializing with the "-uno" option and using a cache computed with "-ucomplete", the "nothing to commit, working tree clean" message would be printed instead of the hint. It was easy to miss because the correct hint message was printed if the cache was rejected for any reason (and status did the full fallback). The "struct wt_status des" structure was initialized with the content of the status cache (and thus defaulted to "complete"). This change sets "des.show_untracked_files" to the requested subset from the command-line or config. This allows the long format to print the hint. Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
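In other words, the cache is written with the superset and can then satisfy any narrower request, including `-uno` with the correct hint:

```shell
# Serialize with the superset of untracked files...
git status --untracked-files=complete --serialize=.git/status-cache

# ...then any of the subsets can be answered from the same cache:
git status --deserialize=.git/status-cache -uno
git status --deserialize=.git/status-cache --untracked-files=all
```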
When using fsmonitor the CE_FSMONITOR_VALID flag should be checked when wanting to know if the entry has been updated. If the flag is set, the entry should be considered up to date, the same as if CE_UPTODATE is set. In order to trust the CE_FSMONITOR_VALID flag, the fsmonitor data needs to be refreshed when the fsmonitor bitmap is applied to the index in tweak_fsmonitor. Since the fsmonitor data is kept up to date for every command, some tests needed to be updated to take that into account. istate->untracked->use_fsmonitor was set in tweak_fsmonitor when the fsmonitor bitmap data was loaded and is now in refresh_fsmonitor since that is being called in tweak_fsmonitor. refresh_fsmonitor will only be called once, and any other callers should be setting it when refreshing the fsmonitor data so that code can use the fsmonitor data when checking untracked files. When writing the index, fsmonitor_last_update is used to determine if the fsmonitor bitmap should be created and the extension data written to the index. When running through unpack-trees this is not copied to the result index. As a result, the next git command that runs has to do all the work of lstat()ing all files to determine what is clean, since all entries in the index are marked dirty when there wasn't any fsmonitor data saved in the index extension. Copying the fsmonitor_last_update to the result index causes the extension data for fsmonitor to be in the index for the next git command to use. Signed-off-by: Kevin Willford <Kevin.Willford@microsoft.com>
Add trace2 region around read_object_process to collect time spent waiting for missing objects to be dynamically fetched. Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
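Regions show up in the perf-format trace, so the time spent waiting on these fetches can be inspected via the standard `GIT_TRACE2_PERF` target (the region label is taken from the commit message above; treat the grep pattern as an assumption):

```shell
GIT_TRACE2_PERF=/tmp/perf.txt git checkout HEAD~50 &&
grep read_object /tmp/perf.txt
```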
The fsmonitor script that can be used for running all the git tests using watchman was causing some of the tests to fail because it wrote to stderr and created some files for debugging purposes. Add a new debug script to use with debugging and modify the other script to remove the code that would cause tests to fail. Signed-off-by: Kevin Willford <Kevin.Willford@microsoft.com>
Add trace2 region and data events describing attempts to deserialize
status data using a status cache.
A category:status, label:deserialize region is pushed around the
deserialize code.
Deserialization results when reading from a file are:
category:status, path = <path>
category:status, polled = <number_of_attempts>
category:status, result = "ok" | "reject"
Deserialization results when reading from STDIN are:
category:status, path = "STDIN"
category:status, result = "ok" | "reject"
Status will fall back and run a normal status scan when a "reject"
is reported (unless "--deserialize-wait=fail" is given).
If "ok" is reported, status was able to use the status cache and
avoid scanning the workdir.
Additionally, a cmd_mode is emitted for each step: collection,
deserialization, and serialization. For example, if deserialization
is attempted and fails and status falls back to actually computing
the status, a cmd_mode message containing "deserialize" is issued
and then a cmd_mode for "collect" is issued.
Also, if deserialization fails, a data message containing the
rejection reason is emitted.
Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
Disable deserialization when verbose output is requested. Verbose mode causes Git to print diffs for modified files. This requires the index to be loaded to have the currently staged OID values. Without loading the index, verbose output makes it look like everything was deleted. Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
Add trace information around status serialization. Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
Users sometimes see transient network errors, but they are actually due to some other problem within the installation of a packfile. Observed resolutions include freeing up space on a full disk or deleting the shared object cache because something was broken due to a file corruption or power outage. This change only provides the advice to suggest those workarounds to help users help themselves. This is our first advice custom to the microsoft/git fork, so I have partitioned the key away from the others to avoid adjacent change conflicts (at least until upstream adds a new change at the end of the alphabetical list). We could consider providing a tool that does a more robust check of the shared object cache, but since 'git fsck' isn't safe to run as it may download missing objects, we do not have that ability at the moment. The good news is that it is safe to delete and rebuild the shared object cache as long as all local branches are pushed. The branches must be pushed because the local .git/objects/ directory is moved to the shared object cache in the 'cache-local-objects' maintenance task. Signed-off-by: Derrick Stolee <stolee@gmail.com>
Similar to a recent change that avoided the collision check for loose objects, do the same for prefetch packfiles. This should be rarer, but the same prefetch packfile could be downloaded again from the same cache server, so it isn't out of the range of possibility. Signed-off-by: Derrick Stolee <stolee@gmail.com>
…-for-windows#832)

Add the `--ref-format` option to the `scalar clone` command. This will allow users to opt in to creating a Scalar repository using alternative ref storage backends, such as reftable. Example:

```shell
scalar clone --ref-format reftable $URL
```
…ed object cache (git-for-windows#840)

There have been a number of customer-reported problems with errors of the form

```
error: inflate: data stream error (unknown compression method)
error: unable to unpack a163b1302d4729ebdb0a12d3876ca5bca4e1a8c3 header
error: files 'D:/.scalarCache/id_49b0c9f4-555f-4624-8157-a57e6df513b3/pack/tempPacks/t-20260106-014520-049919-0001.temp' and 'D:/.scalarCache/id_49b0c9f4-555f-4624-8157-a57e6df513b3/a1/63b1302d4729ebdb0a12d3876ca5bca4e1a8c3' differ in contents
error: gvfs-helper error: 'could not install loose object 'D:/.scalarCache/id_49b0c9f4-555f-4624-8157-a57e6df513b3/a1/63b1302d4729ebdb0a12d3876ca5bca4e1a8c3': from GET a163b1302d4729ebdb0a12d3876ca5bca4e1a8c3'
```

or

```
Receiving packfile 1/1 with 1 objects (bytes received): 17367934, done.
Receiving packfile 1/1 with 1 objects [retry 1/6] (bytes received): 17367934, done.
Waiting to retry after network error (sec): 100% (8/8), done.
Receiving packfile 1/1 with 1 objects [retry 2/6] (bytes received): 17367934, done.
Waiting to retry after network error (sec): 100% (16/16), done.
Receiving packfile 1/1 with 1 objects [retry 3/6] (bytes received): 17367934, done.
```

These are not actually due to network issues, but they look like it because the retries happen in the network stack. Instead, the problems occur when installing the loose object or packfile into the shared object cache. The loose-object installs hit issues when the already-present target loose object is corrupt in some way, such as being all NUL bytes because the disk wasn't flushed when the machine shut down. The error results because we are doing a collision check without confirming that the existing contents are valid. The packfiles may hit similar comparison cases, but it is less likely. We update these packfile installations to also skip the collision check. In both cases, if we have a transient network error, we add a new advice message that helps users with the two most common workarounds:

1. Your disk may be full. Make room.
2. Your shared object cache may be corrupt. Push all branches, delete it, and fetch to refill it.

I make special note of when the shared object cache doesn't exist and point out that it probably should, so the whole repo is suspect at that point.

* [X] This change only applies to interactions with Azure DevOps and the GVFS Protocol.

Resolves git-for-windows#837.
This comment is unfortunately a little bit long. I tried to make it more readable via details and summary tags, but unfortunately those cannot really be nested with quoted areas, so it doesn't work, sadness.
This got quite a bit larger because a function that formerly had been file-local is now public, meaning that its declaration (in the header file) also needs to be edited.
It is unfortunately not visible in the context, but upstream changed the surrounding text to enclose the terms in backticks, so this now looks a little strange. Basically, we dropped the commit.
Here, a function was dropped via a fix up commit.
This came from a fixup commit. We'll skip over the workflows fixups that have been squashed into the original commits. Here comes that range-diff
This is actually in mimalloc version 2.2.6; I managed to upstream this patch, just not into Git, but into mimalloc instead. "That's all, folks!"
Before modifying the config documentation more, fill in these blanks. Signed-off-by: Derrick Stolee <stolee@gmail.com>
In anticipation of tests for multiple cache-servers, update the existing logic that sets up and tears down cache-servers to allow multiple instances on different ports. Signed-off-by: Derrick Stolee <stolee@gmail.com>
This extension of the gvfs.cache-server config now allows a new key, gvfs.prefetch.cache-server, to override the cache-server URL for only the prefetch endpoint. The purpose of this config is to allow for incremental testing and deployment of new cache-server infrastructure. Hypothetically, we could have special-purpose cache-servers that are glorified bundle servers and other servers that focus on the object and size endpoints. More realistically, this will allow us to test cache servers that have only the prefetch endpoint ready to go. This allows some incremental rollout that is more controlled than a flag day replacing the entire infrastructure. Signed-off-by: Derrick Stolee <stolee@gmail.com>
This extension of the gvfs.cache-server config now allows a new key, gvfs.get.cache-server, to override the cache-server URL for only the single-object GET endpoint. The purpose of this config is to allow for incremental testing and deployment of new cache-server infrastructure. Signed-off-by: Derrick Stolee <stolee@gmail.com>
This extension of the gvfs.cache-server config now allows a new key, gvfs.post.cache-server, to override the cache-server URL for only the batched objects endpoint. The purpose of this config is to allow for incremental testing and deployment of new cache-server infrastructure. Signed-off-by: Derrick Stolee <stolee@gmail.com>
Test that all three verb-specific cache-server configs can be used simultaneously, each directing requests to a different server. This verifies that prefetch, get, and post verbs each respect their own override and don't interfere with each other.
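Concretely, the three overrides described above can be set side by side (URLs illustrative):

```shell
git config gvfs.prefetch.cache-server https://prefetch.example.com/
git config gvfs.get.cache-server      https://get.example.com/
git config gvfs.post.cache-server     https://post.example.com/
```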
The t5799-gvfs-helper.sh script is long and takes forever. This slows down PR merges and the local development inner loop is a pain. Before distributing the tests into a set of new test scripts by topic, extract important helper methods that can be imported by the new scripts. Signed-off-by: Derrick Stolee <stolee@gmail.com>
Move the tests from t5799-gvfs-helper.sh into multiple scripts that can run in parallel. To ensure that the ports do not overlap, add a large multiplier on the instance when needing multiple ports within the same test (currently limited to the verb-specific cache servers). Signed-off-by: Derrick Stolee <stolee@gmail.com>
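A rough sketch of the port arithmetic, with made-up constants; the real scripts derive the base port from the test number and stride by instance so parallel scripts never collide:

```shell
base_port=5800    # hypothetically derived from the test number
instance=2        # which cache-server instance within the script
port=$(( base_port + 100 * instance ))
```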
…or-windows#836) I'm exploring ideas around a new cache-server infrastructure. One of the trickiest parts of deploying new infrastructure is the need to have all endpoints ready to go at launch without incremental learning. This change adds new `gvfs.<verb>.cache-server` config keys that allow for verb-by-verb overrides of the typical `gvfs.cache-server` config key. These are loaded on a per-verb basis and then reset after the request. Further, if there is a failure then the request is retried with the base cache-server URL. This would allow us to, for example, deploy a service that serves only the `gvfs/prefetch` endpoint and see if that is improving on latency and throughput expectations before moving on to the GET and POST verbs for single and batched object downloads. As I was adding tests, I realized that we should split this test script into distinct parts so we can have a faster inner loop when testing specific areas. I know that this script is frequently the longest script running in our PR and CI builds, so the parallel split should help significantly. Use commit-by-commit review. I tried to keep the last two commits as obviously "copy-and-paste only" except for a small change to the port calculation to avoid overlap when using multiple ports in parallel tests. * [X] This change only applies to interactions with Azure DevOps and the GVFS Protocol.
This PR rebases Microsoft Git patches onto Git for Windows v2.53.0-rc0.windows.1.
Previous base: vfs-2.52.0
Range-diff vs vfs-2.52.0
(The rendered range-diff did not survive extraction and is elided here. Its recoverable highlights: several fixup commits with subjects like "size variable is initialized", "lookup_commit()", "sane_istest() does not access array past end", "z in mallocz()", and "strbuf_read() does NUL-terminate correctly"; adaptations of `gvfs-helper-client.c` and `odb.c` to the upstream object-database refactoring, e.g. `odb_loose_cache_add_new_oid()` becoming `odb_source_loose_cache_add_new_oid()`, the file-local `gh_client__update_packed_git()` helper being dropped in favor of calling `packfile_store_reprepare()` once a packfile was created, and lookups going through `packfile_store_read_object_info()` and `odb_source_loose_read_object_info()`; updates to `.github/workflows/build-git-installers.yml` to build, sign, and upload `microsoft-git` Debian packages for both amd64 and arm64 via a matrix, with per-architecture `PKGNAME` and Node.js downloads (co-authored by Sverre Johansen); and additions to `scalar.c` and `Documentation/scalar.adoc` for the `core.configWriteLockTimeoutMS` recommended config, plus smaller fixups touching the `monitor-components` workflow, the `gvfs.fallback` config setting, the `cache-server` and `endpoint` commands, `git stash -u`, `submodule_from_path()`, and `post-command` hook handling.)