Closed
Description
The current implementation of fs::walk_dir is breadth-first, and it holds an open file descriptor for each directory that has been discovered but not yet traversed. This can lead to "Too many open files" errors.
On my Mac, the default limit on open file descriptors is 256. A .git/objects directory can contain 257 subdirectories, so with a breadth-first search whose queue holds ReadDir objects (each of which keeps a DIR open), this limit is easily hit.
I can see two possible solutions: either change the queue to hold paths rather than ReadDir objects, or switch to depth-first traversal.
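The first option can be sketched as follows. This is an illustrative standalone walker, not the std implementation: the queue holds PathBuf values, and each directory's ReadDir (and its file descriptor) is dropped before the next directory is opened, so at most one descriptor is held at a time regardless of tree width.

```rust
use std::collections::VecDeque;
use std::fs;
use std::io;
use std::path::PathBuf;

// Breadth-first walk whose queue holds paths instead of open
// ReadDir handles (hypothetical sketch, not std's fs::walk_dir).
fn walk_paths(root: PathBuf) -> io::Result<Vec<PathBuf>> {
    let mut queue = VecDeque::new();
    queue.push_back(root);
    let mut seen = Vec::new();
    while let Some(dir) = queue.pop_front() {
        // The ReadDir opened here is dropped at the end of this
        // iteration, closing its file descriptor before the next
        // directory is opened.
        for entry in fs::read_dir(&dir)? {
            let entry = entry?;
            let path = entry.path();
            if entry.file_type()?.is_dir() {
                queue.push_back(path.clone());
            }
            seen.push(path);
        }
    }
    Ok(seen)
}

fn main() -> io::Result<()> {
    for p in walk_paths(PathBuf::from("."))? {
        println!("{}", p.display());
    }
    Ok(())
}
```

The trade-off is that each queued directory is re-opened by path when it is dequeued, which costs an extra open per directory but bounds descriptor usage at O(1) instead of O(queue length).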
I hit this error running the following program on a directory tree containing a .git directory, with a maximum directory depth of 9.
```rust
#![feature(fs_walk)]

use std::fs;
use std::io;
use std::path::Path;

fn main() {
    match walk() {
        Ok(_) => (),
        Err(e) => println!("ERROR {}", e)
    }
}

fn walk() -> Result<(), io::Error> {
    for f in try!(fs::walk_dir(&Path::new("."))) {
        let f = try!(f);
        println!("copy_tree {:?}", f.path());
    }
    Ok(())
}
```