
Is it possible to force consistent versions for a library and an associated build tool with new-build? #5105

Open
sol opened this issue Feb 4, 2018 · 38 comments

Comments

@sol
Member

sol commented Feb 4, 2018

Simplified scenario: We have a library that ships with a build tool. The library and the build tool are meant to be used together and must be of the same version. With sandboxed builds you would just depend on the library and cabal would make sure that the build tool is available when building.

With new-build this approach leads to a build failure. The user is required to specify build-tool-depends: <package>:<executable> to make it work. However, as far as I can tell, there is no guarantee that the build tool and the library will be of the same version.

Is there a way to achieve this without specifying exact dependency versions on both the library and the build tool (say, I want to keep using e.g. == 2.* instead of specifying == 2.4.0 in two places)?

For my real use case, the library and the build tool are actually two separate packages, hspec and hspec-discover, where hspec depends on an exact version of hspec-discover.

Scope: 1.3k packages on Hackage depend on hspec + an unknown number of in-house projects.
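
For illustration, a minimal sketch of the kind of test-suite stanza new-build expects from a consumer; only the build-tool-depends line is the point, the rest of the stanza is illustrative:

test-suite spec
  type:               exitcode-stdio-1.0
  main-is:            Spec.hs
  hs-source-dirs:     test
  build-depends:      base, hspec
  -- new-build only puts the tool on PATH if it is requested explicitly:
  build-tool-depends: hspec-discover:hspec-discover
  default-language:   Haskell2010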

@sol sol changed the title Is it possible to force consistent dependencies between build tools and executables with new-build? Is it possible to force consistent versions of a library and an associated build tool with new-build? Feb 4, 2018
@sol sol changed the title Is it possible to force consistent versions of a library and an associated build tool with new-build? Is it possible to force consistent versions for a library and an associated build tool with new-build? Feb 4, 2018
@hvr
Member

hvr commented Feb 4, 2018

Unfortunately, we lack the ability to express this kind of constraint. Ironically, the motivation for introducing qualified goals was to become more liberal, allowing decoupled install-plans and avoiding a "single version of every dependency" requirement, and that is exactly what works against you here. So I'm afraid that for the case of exe:hspec-discover + lib:hspec, for now (until we figure something out, which will require a recent enough cabal-version:... specification) you'll have to do the thing you already suggest, i.e. specify pair-wise ranges for which the cartesian product of version(exe:hspec-discover) × version(lib:hspec) is inter-compatible. On the bright side, the tooling in question mostly affects test-suite components, whose build failures don't participate in install-plans for consumers of a package, so there's that...
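
To sketch that workaround in a consumer's test-suite stanza (the ranges below are illustrative; the point is that both ranges have to be curated by hand so that every combination works):

test-suite spec
  build-depends:      hspec == 2.4.*
  build-tool-depends: hspec-discover:hspec-discover == 2.4.*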

@23Skidoo
Member

23Skidoo commented Feb 4, 2018

I believe that this type of constraint is not that hard to implement (I hope @grayjay, who is the expert in this area, agrees), so it'd help if someone came up with a proposal for an extension of the constraint syntax.

@gbaz
Collaborator

gbaz commented Feb 5, 2018

This is an interesting question. I think the right generalization is that a given build-tool generates code that expects a given version of a given library to be in scope. So executables should be able to declare that, when used as build-tools, they force transitive deps on certain libs. Perhaps the idea could be to add an executable-requires field to executable stanzas?

@sol
Member Author

sol commented Feb 5, 2018

If we are already at the point where we agree that this requires code changes to cabal, may I be so bold as to request that we try to find a solution that does not require changes to thousands of packages?

(read: Can we change cabal new-build so that packages that built successfully in the past will continue to work unmodified in the future?)

@gbaz
Collaborator

gbaz commented Feb 5, 2018

My suggestion, if it works as I think it should, would not require changes to any downstream packages, just to the cabal file of hspec-discover itself, and nothing else.

@gbaz
Collaborator

gbaz commented Feb 5, 2018

(i.e. it would result in adding executable-requires to the executable stanza in the hspec-discover package. At this point the solver could then understand that anything that had a build-tool-depends on exe:hspec-discover would inherit a build-depends on the lib specified in that executable-requires -- in this case, the matching lib:hspec-discover)
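
A sketch of how that might look; the executable-requires field is purely hypothetical at this point, and the file layout and bounds are illustrative:

-- in hspec-discover.cabal (hypothetical field)
executable hspec-discover
  main-is:             hspec-discover.hs
  build-depends:       base
  -- proposed: anything using this executable via build-tool-depends
  -- would inherit this as a build-depends constraint
  executable-requires: hspec-discover == 2.4.*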

@sol
Member Author

sol commented Feb 5, 2018

anything that had a build-tool-depends on exe:hspec-discover

Hmm, I think that would still require changes to every affected package. build-tool-depends is a Cabal 2.0 feature. Most existing packages do not specify it.

@gbaz
Collaborator

gbaz commented Feb 5, 2018

It would be good to work out the impact here. While there are indeed a ton of packages declaring a dep on hspec, I only see 34 with a dep on hspec-discover of any form (https://packdeps.haskellers.com/reverse/hspec-discover). But it may be that others simply assume the executable is in scope as well -- I don't see how to find this out without just grepping code. (I haven't looked at how many packages might have hspec-discover in the build-tools field, which existed prior to cabal 2.0 as well, but I suspect not many?)

If the problem is only on the order of 50 packages total, then revision fixups seem feasible. If the problem is closer to 1k packages, it may be worth looking at a specific backwards-compat fix for just this case.

I wouldn't like any general-purpose solution whose default behavior continues to assume that you can just go rooting around in the PATH for build-tools.

@sol
Member Author

sol commented Feb 5, 2018

In the past build-tools only accepted a predefined list of executables; the only way to extend that list was with a custom Setup.hs (something that I always advised against). There was no good reason in the past to depend on hspec-discover.

Packages that use hspec-discover will have the following (or a variation with arguments) in some source file:

{-# OPTIONS_GHC -F -pgmF hspec-discover #-}

(e.g. https://github.com/hspec/hspec-example/blob/master/test/Spec.hs)

I wouldn't like any general-purpose solution whose default behavior continues to assume that you can just go rooting around in the PATH for build-tools.

Well, the current behavior is a breaking change + there were no better solutions in the past + I'm not aware of any problems that were caused by using executables from (transitive) dependencies as build tools.

What I know caused trouble in the past is that it was not possible to (a) depend on a package that only specifies an executable, or (b) specify a version for a build tool. In that regard, build-tool-depends solves a problem, which is a good thing! My question, though: is it really necessary to fix one issue while at the same time introducing another (the inability to enforce consistent versions between a library and an associated build tool) and also breaking existing packages?

@gbaz
Collaborator

gbaz commented Feb 5, 2018

So one concrete suggestion would be that whenever new-build encounters a dependency that includes an executable component, then the executable itself should be brought in-scope to the path.

I'm not sure how people feel about that in general. One thought might be that for cabal files specced prior to, say, 2.0, this behavior would be in place, but for files specced to a newer version (one that includes build-tool-depends) this additional transitive-executable-path-inclusion wouldn't take place?

That way we can ease in the new semantics, but remain backwards-compatible...

@sol
Member Author

sol commented Feb 5, 2018

So one concrete suggestion would be that whenever new-build encounters a dependency that includes an executable component, then the executable itself should be brought in-scope to the path.

That would work for me. We would still need to solve the "consistent versions for a library with associated build tool" scenario when cabal-version: 2.0 is specified.

Another option, which would be even more seamless for the user and would solve both issues at the same time:

Add a field provide-build-tool: <exec> (or build-tool-provides if you prefer), which basically says, if you (transitively) depend on this library, you'll get that build tool for free, no further actions required!

hspec-discover would then continue to specify the executable and "bless" it as a build tool:

provide-build-tool: hspec-discover

executable hspec-discover
  ...

Older versions of cabal would ignore provide-build-tool with a warning and continue to work.

If we don't like build tools to take effect transitively, we would need something like a reexport-build-tools field as well.

What I prefer about this approach is that it would be more DRY for the end user.

All that said, any solution that addresses the problem is probably fine with me!

@phadej
Collaborator

phadej commented Feb 5, 2018

I'm very against "if you transitively depend on" => "something happens".

If you need something, you should explicitly specify it. Similarly, modules of transitive dependencies aren't in scope; you have to depend explicitly on the package. That's IMO a very good thing, and we don't want to make it worse.

This has to be emphasized: explicit behavior is better. Anyone could make an alex4 package providing an alex executable. If both alex and alex4 were transitive dependencies, which executable would be used?

Per-component builds make it possible to not build executables at all. I don't need yaml2json or warp in scope, and yet they are there. That's annoying too. If I needed one, I'd write build-tool-depends: yaml:yaml2json.

Having a constraint between a build tool's version and its companion library's version is a valid thing to ask for. I have no concrete proposal for how that would look in syntax.

@gbaz
Collaborator

gbaz commented Feb 5, 2018

ok, so this sounds like we can do the "whenever new-build encounters a dependency that includes an executable component, then the executable itself should be brought in-scope to the path" thing only as a backward-compat hack. Then we could use something like executable-requires going forward to indicate the companion library constraint.

@phadej
Collaborator

phadej commented Feb 5, 2018

Note that there are other expressivity issues and proposals.

I'd be very careful not to make a local optimisation, because every change to Cabal-the-spec has to be supported forever. I'm unsure that executable-requires is the best way to solve this issue. Convince me!

@hvr
Member

hvr commented Feb 5, 2018

@gbaz I agree with adding a new thing like executable-requires, but I strongly disagree with adding yet another local hack. Cabal's official design mantra is "no untracked dependencies", and what you're suggesting would effectively nullify a big killer feature of cabal new-build's per-component builds that I have been loudly advertising for 2 years, ever since 1.24 brought us per-component builds: finally we can have library+executable packages which don't force the executable to be built and brought into scope.

NB: there's almost no package that breaks here (otherwise we would have considered doing something about this; I've been constantly monitoring the new-build fallout on matrix.hho); this is all about test-suites for older packages, which tend to have inaccurate dependency specifications and are likely not to build anyway. Let's not add more technical debt by adding some special hack for something that wasn't intended to work in the first place!

@gbaz
Collaborator

gbaz commented Feb 5, 2018

Fair point that this doesn't trigger significant breakage. That's an important consideration when putting in backwards-compat shims. Nonetheless, for clarity, my suggestion would not break the "finally we can have library+executable packages which don't force the executable to be built and brought into scope" property -- you could still have it, it would just require using a newer cabal-version field. I can see why this might limit the packages that benefit from it too drastically, however.

@phadej a skim over the other expressivity proposals reveals nothing that can solve this. My suggestion of executable-requires seems pretty much the most obvious thing to do.

Executables can and often do generate code. The code they generate will either A) assume some library is lying around or B) be completely self-contained and reproduce the common-lib functions. Often we lean towards the second, at the cost of lots of "vendoring" of the common portions of generated code. It also makes such tools much harder to write. Or consider something like https://github.com/awakesecurity/proto3-suite -- here, it makes no assumptions about utility libs, but clearly the generated code requires a bunch of packages to be in scope, not least aeson (since it contains JSON instances).

So an executable being able to state what packages need to be in scope for the code it generates to actually be able to compile seems pretty reasonable to me? And I imagine in the long-run it will actually let a lot of code-gen tools be slightly less fragile and builds with them be more reliable and maintainable. Put another way -- it seems to coincide with the general goals of the new-build theme -- making implicit conditions expressible explicitly, so they can be reasoned about directly by automated tooling.
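
As a hypothetical sketch of that, a code generator could advertise what its generated output needs; the executable name, bounds, and the executable-requires field itself are all illustrative, not an existing API:

-- in the code generator's .cabal file (hypothetical)
executable proto-codegen
  main-is:             Main.hs
  build-depends:       base
  -- packages that the *generated* code imports, and which therefore
  -- must be in scope in any package that runs this tool during its build
  executable-requires: aeson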

@phadej
Collaborator

phadej commented Feb 5, 2018

@gbaz I'm not saying that anything proposed would solve this; I want us to pause and think calmly about whether some other old but as-yet-unsolved problem could also be solved by the solution to this one, i.e. whether the solution is generalisable.

@hvr
Member

hvr commented Feb 6, 2018

I'd also point out that we need to think very carefully about the consequences of executable-requires for the solver; I deliberately came up with the provides proposal as a much weaker form than a (too powerful) constraints-like mechanism would have been, to avoid undesirable "action at a distance" effects which would harm the (de)composability properties of install-plans, and to avoid promoting the complexity class of the problem the cabal solver has to solve into a harder one. So I'd be very careful about "generalising" here... the more we generalise, the closer we get to experiencing the full potential of the NP-hardness of the underlying problem the solver needs to solve (currently we avoid experiencing the NP-hardness because we operate in calmer waters of the problem space...).

@gbaz
Collaborator

gbaz commented Feb 6, 2018

I confess I don't understand the concern. Anything in executable-requires becomes a transitive dep of something using the executable as a build tool. This is no different than anything in build-depends becoming a transitive dep of something using a library directly. So I don't think this changes the nature of the problem that needs to be solved at all.

@sol
Member Author

sol commented Feb 9, 2018

Hey, thanks a lot for all your input. However, may I request that we restart the discussion?

What I suggest is that we base claims on evidence and try to find a practical solution.

Rationale: At least in my experience, making unfounded claims and reciting mantras does not yield the best possible results.

To make my point, I'll now try to back up two claims with evidence.

First claim: Almost no packages break

there's almost no package that breaks here (otherwise we would have considered doing something about this; I've been constantly monitoring the new-build fallout on matrix.hho); this is all about test-suites for older packages, which tend to have inaccurate dependency specifications and are likely not to build anyway.

I was unable to support this claim with evidence.

Here is what I did: To exclude possibly old, outdated packages that don't build anyway, I only looked at the subset of packages that are on Stackage.

Stackage Nightly 2018-01-29 includes 2651 packages.

  • 502 of these packages use hspec
  • 247 of these packages use hspec-discover
  • only 14 of these packages specify build-tool-depends

This leaves us with 233 packages that break with cabal new-build, or in other words:

94% of the packages that use hspec-discover (or 8.8% of the total number of packages) break with cabal new-build

Second claim: This wasn't intended to work in the first place

Let's not add more technical debt by adding some special hack for something that wasn't intended to work in the first place!

I was unable to support this claim with evidence.

The corresponding code, git commit 7dc0a10 and issue #1120 give a different picture:

.cabal-sandbox/bin was added to the PATH intentionally, for the specific purpose of bringing built tools into scope.

One more thing: the discussion on this feature and a code comment use the term temporarily. I want to point out that temporarily in this context means "temporarily, while the action passed to withSandboxBinDirOnSearchPath is running", not temporarily as in "temporary workaround".

@gbaz
Collaborator

gbaz commented Feb 9, 2018

@sol I think the claim about package breakage was intended to refer to installation of the packages themselves, not the running of their test-suites. I.e. while many packages use hspec-discover, they will continue to be able to new-build. The problem will come with new-test.

I lean towards wanting to keep the test-suites working as well. But I can see the argument that this is less of an issue than if the installs themselves all went kaput -- in the latter case, I think that some way of ensuring backwards-compat would almost certainly be necessary. In this case, it seems a bit more debatable.

That said, I'd like to see what @23Skidoo has to say, and we should probably seek input from a wider range of cabal devs as a whole; if we do choose to break backwards-compat, it should be done very consciously and with our eyes open.

@hvr
Member

hvr commented Feb 9, 2018

I'd also point out that I'd like to see a concrete, specced-out proposal for how to extend our expressiveness to address the problem. Even more importantly, we should define what problems we're trying to solve, so that we can evaluate the possible solutions we come up with.

And to be frank, hspec-discover is the least important problem I'm aware of in this category; we have much more critical ones involving code-generator build-tools which break packages for real. To emphasize this very clearly: broken test-suites don't break the essence of a package on Hackage (you don't depend on a package for its test-suite; you can't even express a dependency on a test-suite; test-suites are an internal detail that isn't part of a package's surface area; a package's test-suite isn't even built when you depend on it...). To me, test-suites and benchmarks are not an essential component of a package; they're just one of many development tools used by the maintainer, no more and no less, and it's merely accidental/convenient that we package them up together with a package (consequently I'd never make a release or revision just to "fix" a test-suite). And since Stackage was brought up: Stackage is nothing more than a glorified, centrally maintained freeze file which doesn't even support the paradigm of qualified solver goals that makes this problem possible, so it isn't affected by the issue at hand anyway.

That being said, I'd be happy to help with articulating the larger problem statement and evaluating proposed solutions, if anyone here is willing to invest the time and effort to tackle this non-trivial problem, which sooner or later will need to be addressed anyway.

@gbaz
Collaborator

gbaz commented Feb 9, 2018

Herbert, yes, you've made very clear where you stand w/r/t backwards-compat and test suites. I think it is important nonetheless to get input from a wider range of cabal devs, as this really is a policy question in a sense. I think what you say makes perfect sense for Hackage trustee revisions -- but it is less clear to me that the exact same considerations should apply to how the cabal tool manages the migration to new-build, where streamlining such things could really help with smooth adoption. Another sort of middle ground that would leave everyone a bit grumpy would be (assuming we have the new tool constraint we want, of the sort I suggested or otherwise) to detect in cabal check when a cabal file falls victim to this problem and propose the appropriate change. Certainly better than everyone having to google for the answer, at least!

@23Skidoo
Member

23Skidoo commented Feb 9, 2018

Sorry for not following this discussion, I have a temperature and my brain is fried.

@ezyang
Contributor

ezyang commented Feb 10, 2018

This discussion has gotten a bit confused because there are actually two distinct issues underlying the top-level problem "hspec/hspec-discover don't work with new-build":

  1. hspec and hspec-discover must depsolve consistently, even though new-build solves for hspec-discover under a qualified goal
  2. Many old packages specify an executable dependency on hspec-discover by writing build-depends: hspec-discover.

Let's talk about (2) first because it's a core problem which I kicked down the road when I added build-tool-depends and fixed new-build to somewhat support executables. The basic tension, as @phadej has pointed out, is that if you unconditionally state that the semantics of build-depends is "build all libraries and executables", then there's no way to say, "Actually, I only cared about the libraries" and skip building the executables; conversely, in practice, people declare executable dependencies with build-depends, and cabal sandbox hasn't exactly discouraged this practice.
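
Concretely, the de facto pattern in older packages looks roughly like this (a sketch; the stanza details are illustrative):

test-suite spec
  type:          exitcode-stdio-1.0
  main-is:       Spec.hs
  -- pre-2.0 practice: listing the package in build-depends was also
  -- expected to put its hspec-discover executable on the PATH
  build-depends: base, hspec, hspec-discover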

So let me first state that build-tools dependencies in cabal new-build were always a bit busted. Some aspects of this were fixed in 6764810 and da15a6f but I never added any BC code for the build-depends case, because, well, surely if you really care about running the test suites with new-build, you'll appropriately declare build-tool-depends and then get it working.

If this truly is unacceptable, I'm willing to be convinced that some packages should get special dispensation, whereby a build-depends on specific libraries also implies an executable dependency from that package. This is compatible with the fact that the original build-tools field has a hard-coded list of supported build tools. And furthermore, this is exactly what cabal-version is for: if there is a de facto use of Cabal files we don't like, we grandfather the behavior and then fix it in the next cabal-version revision. I don't like @gbaz's suggestion of making this special dispensation toggleable from hspec-discover's Cabal file, because there's already a way to declare that you have an executable dependency (build-tool-depends), and so you might as well use that instead.


Issue (1) also has some unique challenges:

  1. The mechanism by which this use-case is supported must be general enough to seem natural and be useful to other packages (@hvr, @phadej)
  2. In an hspec/hspec-discover situation, we should not require every client package to add a large amount of redundant boilerplate (@sol)
  3. Users should not be required to rev to the latest cabal-version to get the "correct" pinned behavior.

Supposing that build-depends: hspec-discover gets special dispensation, you naturally end up in a situation where this syntax means that hspec-discover gets depsolved as part of the parent plan, and no qualified goal occurs. In this case, constraint (3) doesn't really apply, because old-style Cabal files will just work automatically; all we need to do is come up with a way for new-style Cabal files to express "executable dependency, but don't make a qualified goal." So the dumb, stone-age way to solve this is to just add another field (here's an awful name for it: unqualified-build-tool-depends) which doesn't result in qualified goals when you exe-solve.
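
Spelled out as a sketch (the field name is deliberately a placeholder, as noted above; the version range is illustrative):

test-suite spec
  build-depends:                  base, hspec == 2.4.*
  -- solved in the parent plan rather than under a qualified goal, so its
  -- version is picked consistently with the rest of the plan (including lib:hspec)
  unqualified-build-tool-depends: hspec-discover:hspec-discover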

There are probably other ways to solve this problem but figuring out how to solve constraint (3) is the crux of the issue.

HTH.

@gbaz
Collaborator

gbaz commented Feb 11, 2018

@ezyang I think the confusion between the two issues infected your understanding of what I was proposing (or perhaps my expression of it).

On issue (2) I did not suggest having a "special dispensation toggleable from hspec-discover's Cabal file". Rather, I agree with you -- the solution, should we choose to implement it, would be a special workaround that's gated by cabal-version so it can be moved away from over time.

On issue (1), I did propose a special field -- executable-requires. I continue to think this solves the issue going forward, introduces no new problems, and is in accord with the natural way that we should think about making the dependency language of cabal more precise.

I understand there are some reservations about this, but there is as yet nothing concrete -- just the thought that maybe there is potentially something more general or better. So I think you mistook my executable-requires idea for a solution to (2) rather than (1). Take a look at it in light of (1) and see what you think! :-)

@ezyang
Contributor

ezyang commented Feb 14, 2018

I guess the weird thing about putting executable-requires in the executable-defining package is that, while it makes sense from a usability perspective, it violates my underlying model of how qualified goals work. Right now, there is a local test to determine if a referenced package will be solved in a qualified goal or not: if it is a setup dependency, or an executable dependency, it is solved as a qualified goal. If we add executable-requires, then the depsolver has to know about the package it has selected to even figure out if it is going to dependency solve that package in a qualified goal or not.
One particularly obnoxious way to cause problems is to write this:

library
  build-depends: foo == 0.2
  build-tool-depends: foo:foo

and then have two versions of foo, one which has executable-requires and another which does not. Ugh. There are two self-consistent choices and who knows which the solver will pick.

@hvr
Member

hvr commented Feb 14, 2018

@ezyang fwiw, your scenario even holds if executable-requires is reduced to a seemingly weaker boolean property; i.e. foo.cabal could simply have qualifiable-goal: False + build-depends: foo >= 0.2.1 && < 0.3, and this would effectively have the same effect as an executable-requires: foo >= 0.2.1 && < 0.3. It would also exhibit the very same problem you demonstrate, i.e. destroy the modularity/independence/isolation of qualified targets' install-plans relative to the requesting component.
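
Written out, that weaker variant would look something like this (the qualifiable-goal field, its placement, and its semantics are hypothetical):

-- in foo.cabal (hypothetical)
executable foo
  main-is:          Main.hs
  -- opt this executable out of being solved under a qualified goal
  qualifiable-goal: False
  build-depends:    base, foo >= 0.2.1 && < 0.3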

@gbaz
Collaborator

gbaz commented Feb 14, 2018

My understanding of qualified goals is not amazing, and I'm mainly working off this post: https://www.well-typed.com/blog/2015/03/qualified-goals/

That said, I don't quite understand the issue described by @ezyang here:

Right now, there is a local test to determine if a referenced package will be solved in a qualified goal or not: if it is a setup dependency, or an executable dependency, it is solved as a qualified goal. If we add executable-requires, then the depsolver has to know about the package it has selected to even figure out if it is going to dependency solve that package in a qualified goal or not.

I mean... the referenced package isn't solved in a qualified goal, as I understand the post -- rather, its dependencies are. So the idea is that transitively, I guess, all qualified goals themselves have qualified dependencies. But in this case, an executable-requires field wouldn't affect all the other dependencies, which would all be qualified. It would just "locally" break out of the qualified dependencies for the purpose of the particular libraries in that field.

So on the whole, the install-plan of the target would still be exactly what it was before -- but to be able to depend on that target would also depend on the non-modular fact that you need to depend on a particular version of a particular library as well.

In particular, in what I'm envisioning, the provided executable package would not necessarily depend on the libraries listed in its executable-requires. And even if it did, it would only do so explicitly by depending on that library in its qualified form. So if a package foo both used bar and also had an executable-requires on bar, those two goals would get solved independently.

@ezyang
Contributor

ezyang commented Feb 15, 2018

I mean... the referenced package isn't solved in a qualified goal, as I understand the post -- rather, its dependencies are.

I don't know what you mean, so let's talk about setup dependencies. In this case, when I write setup-depends: foo, I am not ONLY saying that the dependencies of foo are allowed to diverge from my choices in the main library, but also that foo ITSELF can diverge as well. The referenced package from a setup-depends is very much solved "as a qualified goal."

But maybe that is not what you are thinking about, because you go on to say:

It would just "locally" break out of the qualified dependencies for the purpose of the particular libraries in that field.

But this is still pretty weird. If foo is solved as a qualified goal (and somehow some of its deps "jailbreak" the qualification), suppose that foo has an executable-requires on foo the library, then you can still end up in a situation where you select foo-0.1 for the executable (because foo is qualified) but you select foo-0.2 for the library (because the executable-requires is not qualified and does something else.)

So if a package foo both used bar and also had an executable-requires on bar, those two goals would get solved independently.

...and when would you actually want different solutions for these?

@gbaz
Collaborator

gbaz commented Feb 15, 2018

I don't know what you mean, so let's talk about setup dependencies. In this case, when I write setup-depends: foo, I am not ONLY saying that the dependencies of foo are allowed to diverge from my choices in the main library, but also that foo ITSELF can diverge as well. The referenced package from a setup-depends is very much solved "as a qualified goal."

Fair enough. I was thinking about build-tool-depends: foo where I can sort of "skip over" that step. But you're correct. We should be thinking of these things as qualified in either case, and also their deps as transitively qualified.

But on to the main issue. You write:

suppose that foo has an executable-requires on foo the library, then you can still end up in a situation where you select foo-0.1 for the executable (because foo is qualified) but you select foo-0.2 for the library (because the executable-requires is not qualified and does something else.)

And you ask when I would want that.

Well, in the concrete set of cases I'm imagining, I don't see the issue. In particular, this handles the case where a build tool is a code generator and the generated code needs to make use of functionality provided by a certain library. Upthread these were called "companion libraries." So imagine I have a code generator that, among other things, provides instances for aeson (this is the case for proto3-suite). If I want to use that as a build tool in my pipeline, then I need to make sure the parent package has an API-compatible aeson available so that it can actually build my generated code. And that aeson should be the same as the one used elsewhere in the package, so it can link in my generated code. However, internally, proto3-suite may use aeson for something else entirely -- like, say, parsing a JSON config file (it doesn't, afaik, but it could). There's no reason the two are coupled. The companion library needs to provide the right API for generated code to interface with. The tool itself is going to use the library, in general, for a different purpose. There's no reason why these two would need to have anything in common, that I can think of.

So: let me turn the question around -- what is the use-case you can imagine where we would ever want the same solution? :-)

@grayjay
Collaborator

grayjay commented Feb 19, 2018

I tried to draw a diagram of my understanding of the relationship between qualified goals and executable-requires. This is a simplified, contrived example where hspec-discover generates code that depends on a specific version of hspec. Test suites from two different packages use hspec/hspec-discover.

[Diagram "issue-5105-diagram": goal qualifiers and dependencies for two test suites using hspec/hspec-discover]

The circles represent different goal qualifiers. (Currently, every goal is qualified in cabal, but build targets usually have the "top-level" qualifier.) This example assumes that the two test suites' packages already have different qualifiers, as if they were used by different build tools that aren't shown in the diagram.

Qualifiers:

  1. Test suite 1
  2. Test suite 2
  3. The hspec-discover executable used by test suite 1
  4. The hspec-discover executable used by test suite 2

In this example, the two hspec-discover dependencies are linked, because they can both be satisfied by the same instance of the package. Each of the qualifiers allows for a different set of dependency versions, which the diagram shows with dependencies on directory. hspec-discover and the test suites require three different versions of directory, but hspec doesn't constrain the version.

As I understand it, the executable-requires dependency only needs to constrain versions across different qualifiers, it doesn't need to merge the qualifiers. Therefore, hspec-discover's executable-requires field constrains the hspec version to 2.4.8, but it doesn't also force consistent versions of directory.

I can see at least two possible meanings for executable-requires: hspec == 2.4.8 declared in hspec-discover.cabal:

  1. The field requires any component that declares a build-tool-depends dependency on hspec-discover to also declare a dependency on hspec. Additionally, the executable-requires field constrains the component's dependency on hspec to 2.4.8. For example, a test suite could have build-depends: hspec == 2.4.*, but the executable-requires field in hspec-discover would constrain the dependency further (cabal would intersect the version ranges).

  2. The field adds an implicit build-depends: hspec == 2.4.8 dependency to any component that declares a build-tool-depends dependency on hspec-discover. The test suite would not need to add an explicit dependency on hspec, unless its non-generated code also imported modules from hspec.

I think that the first option would be easier to implement, but it would require the users of build tools to know what dependencies the build tools add to their code. I also don't think that either option would significantly change the complexity for the solver. executable-requires would just add more version constraints between packages that aren't very different from the existing constraints in the solver. It would require a significant code change, though.
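
To make the two readings concrete, a sketch (hypothetical field, illustrative versions; the stanzas of course live in different packages):

-- hspec-discover.cabal (hypothetical field)
executable hspec-discover
  executable-requires: hspec == 2.4.8

-- a consumer under option 1: it must name hspec itself,
-- and the solver intersects 2.4.* with 2.4.8
test-suite spec
  build-depends:      base, hspec == 2.4.*
  build-tool-depends: hspec-discover:hspec-discover

-- under option 2, the build-depends entry for hspec could be omitted,
-- because executable-requires would imply it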

@phadej
Collaborator

phadej commented Feb 19, 2018

@grayjay To clarify: the hspec-discovers are linked, but executable-requires may break the link and force different hspec-discovers? And in the case where the test-suites depend on different directorys, will that force different hspecs, and they in turn different hspec-discovers?

Meaning 2, the implicit bound, feels very powerful. Won't the first case work, where executable-requires constrains hspec only if my component depends on it, without requiring it to depend on hspec?
I can imagine code generators where the needed dependencies are based on the input, so requiring the component to depend on all possibly needed dependencies is too much?

I.e. executable-requires would add an additional constraint only if one already depends on that package?

OTOH that breaks if I use a (non-existent) hspec-candy which re-exports hspec modules, and not hspec directly.

Please correct me if I understood something wrong.

@gbaz
Collaborator

gbaz commented Feb 20, 2018

Thank you for the very nice diagram!

I hadn't considered option 1 before, but it seems like it adequately addresses the use-case, and it is also more explicit in the way that people seem to like. Option 1 does seem like it would necessitate a good error message if there were an executable-requires without a matching direct dependency.

As for "As I understand it, the executable-requires dependency only needs to constrain versions across different qualifiers, it doesn't need to merge the qualifiers." Yes -- that is what I think is the case too.

@grayjay
Collaborator

grayjay commented Feb 20, 2018

@grayjay To clarify: the hspec-discovers are linked, but executable-requires may break the link and force different hspec-discovers? And in the case where the test-suites depend on different directorys, will that force different hspecs, and they in turn different hspec-discovers?

I thought that both test suites could use the same hspec-discover executable, even though the executable generates code that ends up being built with two different versions of directory. executable-requires would only be a constraint that applies to reverse-dependencies of build tools; it wouldn't be a dependency of the build tool itself.

Meaning 2, the implicit bound, feels very powerful. Won't the first case work, where executable-requires constrains hspec only if my component depends on it, without requiring it to depend on hspec?
I can imagine code generators where the needed dependencies are based on the input, so requiring the component to depend on all possibly needed dependencies is too much?

I.e. executable-requires would add an additional constraint only if one already depends on that package?

Yes, I think that only adding a version constraint would also handle the hspec/hspec-discover case, and it would probably be easier to implement.

I didn't consider that the generated code's dependencies might only be known once the build tool runs. It seems hard to track dependencies perfectly in that case. Maybe we should distinguish between optional and required dependencies for the build tool's generated code.

OTOH that breaks if I use a (non-existent) hspec-candy which re-exports hspec modules, and not hspec directly.

I'm not sure I understand. What would break?

@phadej
Collaborator

phadej commented Feb 20, 2018

I'm not sure I understand. What would break?

Someone could use hspec-discover to generate code, but consume it with something other than hspec, say my own hspec-to-tasty-adapter as a contrived example. It's a hack, but still.

I think that the build-tool author could provide hints about what constraints should be put on the dependencies needed to use the generated code, but the user should explicitly (which doesn't exclude conveniently) opt into them. The reasoning: as the code generation is dynamic, the dependencies might be too.

I think for now we should only add the constraints hinted at by the build tool's executable-requires (a terrible name; something like output-depends would be more accurate), and only if the downstream package already depends on the packages itself. Scenarios where the build tool changes so that the generated code requires new dependencies are IMHO worth a major version bump and other semantic signalling.

Say if I need to draw a graph. I think I should, but I first have to learn how to make such pictures quickly :)

@grayjay
Collaborator

grayjay commented Feb 25, 2018

@phadej I see what you mean. I think it makes sense to give the build tool user control over the packages to depend on, and allow the build tool to constrain those dependencies.

@Ericson2314
Collaborator

Let me just throw out that, once I teach Cabal about cross-compilation, we'll need to keep executable and setup dependencies as qualified goals, because they are built with a different GHC than the libraries! So @grayjay's point about adding a version constraint but keeping the qualification is quite necessary.
