flake.nix: rewrite #4605

Merged
23 commits merged on Dec 29, 2023
Changes from 1 commit
Commits (23)
d3e73df  flake.lock: update to hotfix CUDA::cuda_driver (SomeoneSerge, Dec 26, 2023)
8364cf4  flake.nix: rewrite (philiptaron, Dec 22, 2023)
0607e24  flake.nix: use finalPackage instead of passing it manually (SomeoneSerge, Dec 24, 2023)
eab1c12  nix: unclutter darwin support (SomeoneSerge, Dec 24, 2023)
0259941  nix: pass most darwin frameworks unconditionally (SomeoneSerge, Dec 24, 2023)
0fa62c1  *.nix: nixfmt (SomeoneSerge, Dec 25, 2023)
69c56bc  flake.nix: add maintainers (SomeoneSerge, Dec 25, 2023)
a07407c  nix: move meta down to follow Nixpkgs style more closely (SomeoneSerge, Dec 25, 2023)
04bc417  nix: add missing meta attributes (SomeoneSerge, Dec 25, 2023)
d08690a  flake.nix: avoid re-evaluating nixpkgs too many times (SomeoneSerge, Dec 25, 2023)
a28c9ac  flake.nix: use flake-parts (SomeoneSerge, Dec 25, 2023)
a629371  nix: migrate to pname+version (SomeoneSerge, Dec 25, 2023)
e3b1ba2  flake.nix: overlay: expose both the namespace and the default attribute (SomeoneSerge, Dec 25, 2023)
12d4a68  ci: add the (Nix) flakestry workflow (SomeoneSerge, Dec 25, 2023)
a16f589  nix: cmakeFlags: explicit OFF bools (SomeoneSerge, Dec 26, 2023)
dd0e12c  nix: cuda: reduce runtime closure (SomeoneSerge, Dec 26, 2023)
4522c47  nix: fewer rebuilds (SomeoneSerge, Dec 26, 2023)
ae6bebc  nix: respect config.cudaCapabilities (SomeoneSerge, Dec 26, 2023)
1efbc6b  nix: add the impure driver's location to the DT_RUNPATHs (SomeoneSerge, Dec 26, 2023)
82e48e2  nix: clean sources more thoroughly (SomeoneSerge, Dec 26, 2023)
7bd8d8c  nix: explicit mpi support (SomeoneSerge, Dec 26, 2023)
d0adab6  nix: explicit jetson support (SomeoneSerge, Dec 26, 2023)
3f7003b  flake.nix: darwin: only expose the default (SomeoneSerge, Dec 26, 2023)
flake.nix: avoid re-evaluating nixpkgs too many times
SomeoneSerge committed Dec 26, 2023
commit d08690af65af266414512d308d88fa1715f75821
5 changes: 0 additions & 5 deletions .devops/nix/overlay.nix

This file was deleted.

3 changes: 3 additions & 0 deletions .devops/nix/scope.nix
@@ -0,0 +1,3 @@
{ lib, newScope }:

lib.makeScope newScope (self: { llama-cpp = self.callPackage ./package.nix { }; })
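The new scope.nix wraps the package in `lib.makeScope newScope`, so consumers get a dedicated `llamaPackages` namespace instead of a loose top-level attribute. Below is a minimal consumer sketch, not part of this PR: the flake URLs and the chosen system are illustrative, while `llamaPackages`, `overlays.default`, and the `useOpenCL` flag are the names introduced elsewhere in this changeset.

```nix
{
  # Hypothetical consumer flake; the pins are placeholders.
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable";
  inputs.llama-cpp.url = "github:ggerganov/llama.cpp";

  outputs = { self, nixpkgs, llama-cpp, ... }:
    let
      system = "x86_64-linux";
      # Applying the overlay exposes the `llamaPackages` scope from scope.nix.
      pkgs = import nixpkgs {
        inherit system;
        overlays = [ llama-cpp.overlays.default ];
      };
    in
    {
      packages.${system} = {
        default = pkgs.llamaPackages.llama-cpp;
        # Overrides stay inside the scope rather than leaking into the
        # global package set.
        opencl = pkgs.llamaPackages.llama-cpp.override { useOpenCL = true; };
      };
    };
}
```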
52 changes: 35 additions & 17 deletions flake.nix
@@ -17,23 +17,42 @@
in

{
# These define the various ways to build the llama.cpp project.
# Integrate them into your flake.nix configuration by adding this overlay to nixpkgs.overlays.
overlays.default = import ./.devops/nix/overlay.nix;
# An overlay can be used to have a more granular control over llama-cpp's
# dependencies and configuration, than that offered by the `.override`
# mechanism. Cf. https://nixos.org/manual/nixpkgs/stable/#chap-overlays.
#
# E.g. in a flake:
# ```
# { nixpkgs, llama-cpp, ... }:
# let pkgs = import nixpkgs {
# overlays = [ (llama-cpp.overlays.default) ];
# system = "aarch64-linux";
# config.allowUnfree = true;
# config.cudaSupport = true;
# config.cudaCapabilities = [ "7.2" ];
# config.cudaEnableForwardCompat = false;
# }; in {
# packages.aarch64-linux.llamaJetsonXavier = pkgs.llamaPackages.llama-cpp;
# }
# ```
#
# Cf. https://nixos.org/manual/nix/unstable/command-ref/new-cli/nix3-flake.html?highlight=flake#flake-format
overlays.default = (final: prev: { llamaPackages = final.callPackage .devops/nix/scope.nix { }; });

# These use the package definition from `./.devops/nix/package.nix`.
# There's one per backend that llama-cpp uses. Add more as needed!
packages = eachSystem (
system:
let
defaultConfig = {
# Avoid re-evaluation for the nixpkgs instance,
# cf. https://zimbatm.com/notes/1000-instances-of-nixpkgs
pkgs = nixpkgs.legacyPackages.${system};

# Ensure dependencies use CUDA consistently (e.g. that openmpi, ucc,
# and ucx are built with CUDA support)
pkgsCuda = import nixpkgs {
inherit system;
overlays = [ self.overlays.default ];
};
pkgs = import nixpkgs defaultConfig;

# Let's not make a big deal about getting the CUDA bits.
cudaConfig = defaultConfig // {
config.cudaSupport = true;
config.allowUnfreePredicate =
p:
@@ -48,19 +67,18 @@
)
(p.meta.licenses or [ p.meta.license ]);
};
pkgsCuda = import nixpkgs cudaConfig;

# Let's make sure to turn on ROCm support across the whole package ecosystem.
rocmConfig = defaultConfig // {
# Ensure dependencies use ROCm consistently
pkgsRocm = import nixpkgs {
inherit system;
config.rocmSupport = true;
};
pkgsRocm = import nixpkgs rocmConfig;
in
{
default = pkgs.llama-cpp;
opencl = pkgs.llama-cpp.override { useOpenCL = true; };
cuda = pkgsCuda.llama-cpp;
rocm = pkgsRocm.llama-cpp;
default = (pkgs.callPackage .devops/nix/scope.nix { }).llama-cpp;
opencl = self.packages.${system}.default.override { useOpenCL = true; };
cuda = (pkgsCuda.callPackage .devops/nix/scope.nix { }).llama-cpp;
rocm = (pkgsRocm.callPackage .devops/nix/scope.nix { }).llama-cpp;
}
);

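For reference, here is a standalone sketch, not taken from the PR, of the argument-sharing pattern used in the `packages` block above: one base argument set is passed to `import nixpkgs`, and the CUDA and ROCm variants extend it with the `//` attribute-set update operator so each backend only flips its own `config` flags. The attribute values are placeholders, not the PR's actual settings.

```nix
# Evaluate with `nix-instantiate --eval --strict ./sketch.nix`
# (the file name is illustrative).
let
  defaultConfig = {
    system = "aarch64-linux";
  };

  # `//` is a shallow update: attributes on the right replace those on the
  # left wholesale, so backend-specific flags are grouped under `config`
  # on the right-hand side.
  cudaConfig = defaultConfig // {
    config.cudaSupport = true;
    config.allowUnfree = true;
  };

  rocmConfig = defaultConfig // {
    config.rocmSupport = true;
  };
in
{
  inherit defaultConfig cudaConfig rocmConfig;
}
```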