Automated plan split into subplans #308
If I wanted to be as specific as possible in my L2 metadata, what would the actual "split" look like? Would it be something like this?

prepare:
    split_plan:
        - repo: http://.......
provision:
    split_plan:
        - fips: enabled
        - cpu: [ 1, 2 ]
        - how: [ container, virtual ]  # (just some theoretical example)

So with each entry, the plan would "duplicate" into two. Would that make sense? Or do you have some completely different approach in mind? Thinking about this I see two distinct behaviours: |
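The "duplicating" behaviour of the proposed `split_plan` key could be sketched as a cartesian product over the listed values. A minimal Python illustration of the idea; the `expand_split_plan` helper is hypothetical and not part of tmt:

```python
from itertools import product

def expand_split_plan(step_config):
    """Expand a hypothetical 'split_plan' list into concrete step variants.

    Every entry whose value is a list multiplies the number of variants
    (a cartesian product), so cpu: [1, 2] together with
    how: [container, virtual] yields four combinations.
    """
    axes = []
    for entry in step_config.get("split_plan", []):
        for key, value in entry.items():
            # Normalize scalars to single-element lists
            axes.append((key, value if isinstance(value, list) else [value]))
    keys = [key for key, _ in axes]
    return [dict(zip(keys, combo))
            for combo in product(*(values for _, values in axes))]

variants = expand_split_plan({
    "split_plan": [
        {"fips": "enabled"},
        {"cpu": [1, 2]},
        {"how": ["container", "virtual"]},
    ]
})
# 1 * 2 * 2 = 4 provision variants
```

Each resulting dictionary would then become the provision config of one generated sub-plan.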
Recently @fernflower shared with us a related use case for splitting plans per test:
I propose to start by covering this use case as the first step for automated plan generation. The config could look like this:

discover:
    how: fmf
    slice: 1

Any better name for the option? The question is into which step config it should go, probably under the discover step. @lukaszachy, @thrix, @pvalena, @sopos, @jscotka, @fernflower, please share your thoughts. |
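What the proposed `slice` option might do internally can be sketched like this (a hypothetical helper, assuming a slice size of 1 means one generated plan per discovered test):

```python
def slice_tests(tests, slice_size=1):
    """Split a discovered test list into chunks of slice_size tests.

    Each chunk would become one generated sub-plan, so slice_size=1
    means a separate plan per test.
    """
    return [tests[i:i + slice_size] for i in range(0, len(tests), slice_size)]

# One sub-plan per discovered test
plans = slice_tests(["/test/one", "/test/two", "/test/three"])
```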
It is still questionable where (in which step) the test set manipulation should be done. I will keep promoting my earlier idea of adding a new step dedicated to various manipulations. |
Actually, I think we have the manually enumerated variants solved quite well via inheritance, and I would rather avoid introducing another syntax. To take the earlier example:

prepare:
    [ . . . ]
execute:
    [ . . . ]

# maybe this isn't valid syntax, sorry, writing this from the top of my head
/fips-enabled
    provision:
        - how: virtual
          fips: enabled
/single-cpu
    provision:
        - how: virtual
          cpu: 1
/double-cpu
    provision:
        - how: virtual
          cpu: 2
/container-provision
    provision:
        - how: container

IOW I retract my earlier proposal for lists / split_plan. It doesn't cover
+1 for |
A couple of thoughts from the implementation point of view, as discussed today with @mkluson and @bocekm: The expectation is that there is a syntax which makes it possible to create a separate plan for each test identified during the discover step. One possible solution is to perform the expansion only when the plan is executed: we start with a single plan, run the discover step, and only then generate the individual sub-plans.
@thrix, @happz, @lukaszachy, @FrNecas, any ideas how to handle this? I don't think it would be a good idea to execute the discover step for each plan when just exploring the available plans in the repository. Especially for tests fetched from remote repositories this would slow things down substantially. |
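The lazy-expansion approach (a single plan whose sub-plans materialize only at execution time, so listing stays cheap) could be sketched like this; the class and method names are invented for illustration and do not correspond to tmt internals:

```python
class LazyPlan:
    """A plan that expands into per-test sub-plans only when executed."""

    def __init__(self, name, discover):
        self.name = name
        self._discover = discover  # callable running the (possibly slow) discover step

    def ls(self):
        # Cheap listing: discover is NOT executed, so remote repositories
        # are not cloned just to enumerate plans.
        return [self.name]

    def run(self):
        # Expansion happens only now, at execution time.
        tests = self._discover()
        return [f"{self.name}/{test.lstrip('/')}" for test in tests]

plan = LazyPlan("/plans/basic", lambda: ["/test/one", "/test/two"])
```

The trade-off discussed above is visible here: `ls()` necessarily returns an incomplete picture, since only `run()` learns about the sub-plans.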
Just an idea: when listing a directory you can see files and subdirectories, and you can also list directories recursively. Would such a concept make sense in this situation? Could a sub-plan be represented by an object that is listed but not processed as a test case, and only processed when required (-R)? |
Sounds like two incompatible cases. As soon as a test and the presence of its metadata can lead to a new dynamic plan being spawned just for that test, listing plans without test discovery will inherently yield an incomplete list, sooner or later. One way out of this could be a warning and an option: "Be aware that test metadata may affect the final list of plans, especially if tests employ the following metadata fields: .... If you wish to see the final list of plans, run ..." I'm not sure there's a perfect solution. If performance is the issue here - and I understand cloning remote repositories just to find out they don't change anything could be a problem - I'd be fine with the behavior I described above. |
+1 for @happz's idea:
|
From a user perspective, I would like to be able to distinguish these two cases in the tmt output, i.e. a warning or, preferably, a different form of output would be used only when there are such nested plans. Personally, I would like to see the nested plans listed in the output, just identified as not-a-test but a sub-plan, similarly to the ls file/directory example. |
From the stakeholders meeting: We should support the use case where the final plan name stays the same even though tests require several distinct provision or prepare steps. E.g. the tmt configuration creates a plan called e.g. |
Summary from the stakeholders meeting:
|
If the provisioning part was done externally, tmt should provide unique SUT IDs to the provisioner to be able to match the machines back to the tests. I mean something like the following flow:
This way one could use any kind of provisioning method (e.g. beaker) which is not even supported by tmt itself, and connect the machines in the expected way. |
I'll try to summarize the behaviour discussed above from the implementation point of view: a SUT (system under test) is defined by the provision and prepare step configuration.
Each SUT would be given a unique id, and each test would store the id of the corresponding SUT on which it should be executed. The following steps would be performed:
The question is how the detected SUT, that is the |
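The grouping of tests into SUTs described above might look roughly like this, assuming a SUT is identified by its provision/prepare configuration; the helper names and the `sut-N` id format are made up for illustration:

```python
import json

def assign_suts(tests):
    """Group tests by their provision/prepare configuration.

    Each distinct configuration becomes one SUT with a unique id
    (sut-0, sut-1, ...); tests sharing a configuration share a SUT.
    """
    suts = {}
    assignment = {}
    for name, config in tests.items():
        # A stable fingerprint of the environment this test requires
        fingerprint = json.dumps(config, sort_keys=True)
        if fingerprint not in suts:
            suts[fingerprint] = f"sut-{len(suts)}"
        assignment[name] = suts[fingerprint]
    return assignment

tests = {
    "/test/fips": {"provision": {"how": "virtual", "fips": "enabled"}},
    "/test/basic": {"provision": {"how": "virtual"}},
    "/test/also-basic": {"provision": {"how": "virtual"}},
}
assignment = assign_suts(tests)
```

With such an assignment in hand, an external provisioner would only need the list of SUT ids and their configurations to prepare the right machines.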
Since someone put this issue on this week's hacking meeting agenda, I poked tmt internals a bit, just to see what would happen. Nothing complex, not even remotely close to tackling all the raised use cases; I merely started with |
Use case 1: A plan contains mutually exclusive tests; tmt should run the distinct sets of tests separately.
Use case 2: Each plan should be run multiple times with a slightly modified environment.
UC1: Test attributes like the following should cause only equivalent tests to be run in the same sub-plan.
UC2: Run the plan "normally" as well as with FIPS mode enabled. The set of tests might be further pruned using relevancy.
It would be great if tmt allowed sub-plan generation based on various attributes/plugins, and each of them could be individually turned on or off.
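For UC2, the generated variants could look like the following sketch, written as explicitly enumerated child plans in the inheritance style shown earlier in this thread (keys and indentation are illustrative only, not validated tmt syntax):

```yaml
# Sketch: one parent plan, two environment variants
/default:
    provision:
        how: virtual
/fips:
    provision:
        how: virtual
        fips: enabled
```

The point of the automation would be to generate the /fips variant from a single attribute or plugin toggle instead of enumerating it by hand.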