
cache_dir doesn't do what you think it does #85

Open
ju5t opened this issue Jun 20, 2017 · 1 comment


ju5t commented Jun 20, 2017

From the README:

This caches the downloaded file in an intermediate directory to avoid repeatedly downloading it. This uses the timestamping (-N) and prefix (-P) wget options to only re-download if the source file has been updated.

   wget::fetch { 'https://tool.com/downloads/tool-1.0.tgz':
     destination => '/tmp/',
     cache_dir   => '/var/cache/wget',
   }

But if you specify cache_dir, it actually keeps downloading the file over and over again.

This was introduced in a commit from @mirthy back in 2015 (023ed15):

if $redownload == true or $cache_dir != undef  {

Ref: #L62

Is there a particular reason for this? Using cache_dir appears to be the only way to set additional parameters, such as mode, on our files, so we would prefer to remove the $cache_dir != undef check.

I don't mind raising a pull request for it, but I wonder what it might break. Can someone shed some light on this?
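
As a possible interim sketch of what we do in the meantime (hypothetical, not from the module's docs: the destination path and mode value here are placeholders): skip cache_dir entirely and manage the mode with a plain file resource after the fetch.

   # Hypothetical workaround: let wget::fetch handle the download without
   # the cache, and set the file mode with a separate file resource.
   wget::fetch { 'https://tool.com/downloads/tool-1.0.tgz':
     destination => '/tmp/tool-1.0.tgz',
   }

   file { '/tmp/tool-1.0.tgz':
     ensure  => file,
     mode    => '0644',
     require => Wget::Fetch['https://tool.com/downloads/tool-1.0.tgz'],
   }

The require arrow just orders the file resource after the fetch, so the mode is applied once the file exists.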


ju5t commented Jun 21, 2017

Right, things are finally falling into place now. It actually doesn't download the file again. It only looks that way because the cache is updated and Puppet reports it as a 'change'.
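
To illustrate (a sketch, not the module's actual code; the resource title and paths are made up): an exec with no unless/creates guard runs on every agent run, so even when wget -N decides nothing needs re-downloading, Puppet still records the resource as having executed, which shows up as a 'change' in the report.

   # Illustrative only: without an 'unless' or 'creates' guard, this exec
   # runs on every Puppet run. wget -N may download nothing, but Puppet
   # still reports the exec as a change.
   exec { 'wget-cached-download':
     command => 'wget -N -P /var/cache/wget https://tool.com/downloads/tool-1.0.tgz',
     path    => ['/usr/bin', '/bin'],
   }
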

I'm not sure what the best way to work around it in Puppet would be, aside from turning wget::fetch into a Ruby function.
