
Cached not deleting. #35

Open
jhemmmm opened this issue Sep 4, 2020 · 6 comments

Comments

@jhemmmm

jhemmmm commented Sep 4, 2020

var gRequest = cachedRequest({
    url: photoFetch.data.results,
    ttl: 1000 * 60 * 10 // 10 minutes
}).on("error", function (err) {
    console.log(err);
});
gRequest.pipe(res);

The cached files do not seem to be deleted automatically. Am I doing it wrong?

@elhoyos
Contributor

elhoyos commented Sep 4, 2020

Hi @jhemmmm, your code looks OK to me. Note that at this moment cached-request does not have such functionality. Cached entries are not removed automatically after they expire, but the request will still be made as expected.

We had not considered adding automated cleanup of expired cache entries, but it sounds useful to be able to enable it when setting your cache directory. Something like:

cachedRequest.setCacheDirectory(cacheDirectory, {
  cleanupInterval: 24 * 60 * 60 * 1000 // cleanup expired items every 24 hours
});

If you feel like contributing, feel free to open a pull request. I would be happy to help you.

@jhemmmm
Author

jhemmmm commented Sep 4, 2020

I see.

That might not be a good idea on my side. To identify the expired cache entries, I would have to load all the saved JSON files in /tmp/cache, and there are at least 2 TB of files in my cache directory.

@jhemmmm
Author

jhemmmm commented Sep 4, 2020

I have tried adding a queue job that deletes the files based on their TTL.
Here are the commits in my fork:
jhemmmm@a65b908
jhemmmm@e40e286

It works perfectly fine. Thanks.

EDIT: This requires Redis. I am still looking for another solution.

@jhemmmm
Author

jhemmmm commented Sep 5, 2020

I have added a cleanup interval.
jhemmmm@3cce8e1
jhemmmm@5cc08ab
jhemmmm@24e71a0

cachedRequest.cleaUpCacheDirectory({
    age: 60 * 60 * 3, // 3 hours
    cleanupInterval: 1000 * 60 * 30 // 30 minutes
});

@elhoyos
Contributor

elhoyos commented Sep 5, 2020

This looks way better without redis. Here are some comments:

  • Would you mind opening a pull request against this repo, please? That way we can discuss the implementation details there and merge it easily once it is ready.
  • I would prefer to make it more performant and reduce locking of the event loop for long periods by using some async find+remove. Anyway, that is a good improvement for later, and I think your solution is good enough for now.
  • If there is no strong reason to have cleaUpCacheDirectory [sic], let's set the cleanup options in setCacheDirectory instead.
  • Let's make the cleanup optional and off by default, so we don't disturb other people using the files.
  • age and ttl seem redundant, so let's remove age from the options. That said, your age proposal brings an interesting use case to mind: starting fresh, where you would want to clean up responses that have not yet expired when restarting cached-request. That sounds like a new feature we could implement in the near future.

Thank you for your ideas and contributions. Please keep them coming.

@jhemmmm
Author

jhemmmm commented Sep 7, 2020

I used age because the find+remove uses it to check the "creation" time of the file, instead of the ttl given by cached-request. I'll take a look at it.
