feat: debian datasource #13463
Conversation
Once the code for the feature is approved by the maintainers, the documentation can be added in a new file. The "Repology datasource" has docs that you can look at for inspiration.
Hi @Ka0o0, thanks for this PR! Can you create a feature request issue to link to from this PR, so we can discuss the requirements/implementation proposal there first?
Please do not use simple comments. @HonkingGoose, can we extend our contribution guide? People should simply resolve conversations when done and only use the re-request review button, and only if no maintainer answers, use comments after ~5 days. @rarkins WDYT?
Sounds good to me. Is the goal here to reduce noise, and to "teach" contributors to wait a bit before commenting?
So we only need some sentence about suppressing short comments.
```ts
  return needsToDownload;
}

async probeExtractedPackage(
```
This is a good candidate for adding the cache decorator, sample:
renovate/lib/manager/terraform/lockfile/hash.ts
Lines 61 to 69 in 4f65b57
```ts
@cache({
  namespace: `datasource-${TerraformProviderDatasource.id}-build-hashes`,
  key: (build: TerraformBuild) => build.url,
  ttlMinutes: TerraformProviderHash.hashCacheTTL,
})
static async calculateSingleHash(
  build: TerraformBuild,
  cacheDir: string
): Promise<string> {
```
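For intuition, here is a self-contained sketch of what the decorator provides, reduced to a plain helper: an async lookup memoized by key with a TTL. All names here are illustrative, not renovate's actual implementation.

```typescript
// Illustrative sketch only: the idea behind the @cache decorator,
// reduced to a plain helper. Not renovate's actual implementation.
type CacheEntry<T> = { value: T; expiresAt: number };

// Wrap an async lookup so repeated calls with the same key within the TTL
// are served from an in-memory map instead of re-running the lookup.
function cachedByKey<T>(
  lookup: (key: string) => Promise<T>,
  ttlMinutes: number
): (key: string) => Promise<T> {
  const store = new Map<string, CacheEntry<T>>();
  return async (key: string): Promise<T> => {
    const hit = store.get(key);
    if (hit && hit.expiresAt > Date.now()) {
      return hit.value; // still valid, skip the expensive call
    }
    const value = await lookup(key);
    store.set(key, { value, expiresAt: Date.now() + ttlMinutes * 60_000 });
    return value;
  };
}
```

In the datasource itself the decorator form above would be used; keying on the Packages file URL (as the Terraform example keys on `build.url`) seems like the natural choice here.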
Mhm, but what if the server returns a new compressed file? Can I manually invalidate the cache entry then?
@rarkins I'm still somewhat unconfident about the current cache mechanism. I think we should extract all updates and write them to the renovate cache with additional metadata, and only use the cache if it is still valid.
lib/datasource/deb/index.spec.ts
```ts
const testPackagesFile = __dirname + '/test-data/Packages.gz';
const extractedTestFile = __dirname + '/test-data/Packages';
```
We have helpers for this, check out the Fixtures class.
The only test case where I think a fixture would make sense is this one. However, I don't quite know how to get it running.
If I understood it correctly, the Fixtures.mock method will register some temp files based on what is provided as a parameter and intercept FS calls to those files? So I'm setting up the test case like this (basically replacing lines 44-45):
```ts
const fixtures: DirectoryJSON = {};
fixtures[extractedPackageFile] = Fixtures.get('Packages');
Fixtures.mock(fixtures);
```
But now the test case would fail here. Shouldn't the Fixtures' mock make sure that the file exists?
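For illustration only, here is a toy stand-in for the Fixtures helper (the real one is backed by memfs and intercepts fs calls); it shows the contract in question: only paths registered via `mock()` exist, and anything else behaves like a missing file.

```typescript
// Illustration only: a toy stand-in for the Fixtures helper (the real one
// is backed by memfs). Only registered paths exist.
type DirectoryJSON = Record<string, string>;

class FakeFixtures {
  private static files: DirectoryJSON = {};

  // Replace the whole set of mocked files, like Fixtures.mock(fixtures).
  static mock(files: DirectoryJSON): void {
    FakeFixtures.files = { ...files };
  }

  static async readFile(path: string): Promise<string> {
    const content = FakeFixtures.files[path];
    if (content === undefined) {
      // A path never passed to mock() behaves like a missing file.
      throw new Error(`ENOENT: no such file or directory, open '${path}'`);
    }
    return content;
  }
}
```

If the real Fixtures behaves the same way, a failure like the one above would suggest the path used by the code under test does not exactly match the key registered in the mock (absolute vs. relative paths are a common culprit).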
```ts
expect(res.releases).toHaveLength(1);

// validate that the server was called correctly
expect(httpMock.getTrace()).toHaveLength(1);
```
```ts
expect(httpMock.getTrace()).toHaveLength(1);
```
This will always be equal to your mocked HTTP requests, so no benefit. 😉
Unfulfilled mocks are only evaluated after the test block. The array access on getTrace() in the next line would therefore still be executed, which would lead to a runtime exception in a failing test case scenario.
Yes and no: if you look at the http mock code, you can see an afterEach which validates this, so we don't have to repeat it here. 😉
```ts
const rl = readline.createInterface({
  input: fs.createReadStream(extractedFile),
  terminal: false,
});
```
I think we should cache the compressed file and stream it directly if possible. Would save a lot of disk space. 🤔
I agree, that would indeed save some disk space at the cost of computational expense. I was also thinking about removing some unused meta information from the extracted Packages file, which would also reduce the file size and the lookup time (which is more important, I guess). What do you think?
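As a sketch of the "stream directly" idea: the compressed stream is piped through a decompressor into readline, so the extracted contents never touch disk. This assumes a gzip-compressed index (Node's stdlib zlib cannot decompress .xz, which this PR currently targets, so gzip stands in here for illustration); `countPackages` is a hypothetical helper, not the PR's code.

```typescript
import { createGunzip } from 'zlib';
import * as readline from 'readline';
import type { Readable } from 'stream';

// Hypothetical helper (not the PR's code): read a gzip-compressed Packages
// index line by line by piping the compressed stream through a decompressor,
// so the extracted file never has to be written to disk.
async function countPackages(compressed: Readable): Promise<number> {
  const rl = readline.createInterface({
    input: compressed.pipe(createGunzip()),
    terminal: false,
  });
  let count = 0;
  for await (const line of rl) {
    // Each paragraph in a Packages index starts with a "Package:" field.
    if (line.startsWith('Package:')) {
      count += 1;
    }
  }
  return count;
}

// Usage with a cached file on disk:
// const n = await countPackages(fs.createReadStream('/cache/Packages.gz'));
```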
Co-authored-by: Michael Kriese <michael.kriese@visualon.de>
Needs test fixes
@@ -0,0 +1,41 @@
```ts
import { createReadStream } from 'fs';
```
why this file?
```ts
  return fs.createReadStream(path);
}

export function access(path: string, mode?: number): Promise<void> {
```
move to ./proxies.ts
@@ -0,0 +1,182 @@
```ts
import { copyFile, mkdirp, stat } from 'fs-extra';
```
The file structure changed, please move the datasource to the right folder.
please don't force-push
Closing due to staleness
Changes:
Context:
This datasource is able to search multiple Debian package sources that provide a Packages file as described here.
This repository standard defines components within a release repository (e.g. `main` within the `bullseye` release).
The datasource works by downloading the compressed Packages file for each component and extracting it. Currently, only .xz files are supported, but it should be possible to add other compression standards as well.
The file is downloaded into a configurable folder using curl, invoked as a shell command.
The file is only downloaded if the local copy is older than what the server has (using curl's -z parameter), and is only extracted when needed.
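As a hedged illustration of that conditional download (`buildCurlArgs` is a hypothetical helper, not the PR's code): with a file name as its argument, curl's `-z`/`--time-cond` sends If-Modified-Since based on that file's mtime, so the server only returns a body when it has a newer version.

```typescript
// Hypothetical helper (not the PR's code): assemble the curl arguments for
// a conditional download. With a file name, curl's -z/--time-cond sends
// If-Modified-Since based on that file's mtime.
function buildCurlArgs(
  url: string,
  outputPath: string,
  localCopyExists: boolean
): string[] {
  const args = ['--fail', '--silent', '--output', outputPath];
  if (localCopyExists) {
    args.push('-z', outputPath); // skip download unless remote is newer
  }
  args.push(url);
  return args;
}
```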
The file is then searched through line by line, which is currently a bit slow with Node.js; further work might improve performance by removing unused information from the extracted files.
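A minimal sketch of that line-by-line search (`findPackageVersion` is a hypothetical helper, not the PR's code), relying on the Packages format's blank-line-separated paragraphs of "Field: value" lines:

```typescript
// Hypothetical helper (not the PR's code): scan the text of a Debian
// Packages index for a package and return its Version. Paragraphs are
// separated by blank lines; each field is a "Name: value" line.
function findPackageVersion(
  packagesText: string,
  packageName: string
): string | null {
  let currentPackage: string | null = null;
  for (const line of packagesText.split('\n')) {
    if (line === '') {
      currentPackage = null; // paragraph boundary: next stanza begins
      continue;
    }
    if (line.startsWith('Package: ')) {
      currentPackage = line.slice('Package: '.length).trim();
    } else if (
      line.startsWith('Version: ') &&
      currentPackage === packageName
    ) {
      return line.slice('Version: '.length).trim();
    }
  }
  return null;
}
```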
It then returns the latest version which, unlike what Repology returns us, is the version you can specify with apt install, e.g. `apt-get install curl=7.74.0-1.3+deb11u1`. An example PR created with this branch merged looks like this: Ka0o0/renovate-testrepo#2

Documentation (please check one with an [x])
How I've tested my work (please tick one)
I have verified these changes via:
Open Points