fs.readdir() Enhanced
Features
- Fully backward-compatible drop-in replacement for fs.readdir() and fs.readdirSync()
- Can crawl sub-directories - you can even control which ones
- Supports filtering results using globs, regular expressions, or custom logic
- Can return absolute paths
- Can return fs.Stats objects rather than just paths
- Exposes additional APIs: Promise, Stream, EventEmitter, and Async Iterator
Example
```javascript
import readdir from "@jsdevtools/readdir-enhanced";

// Synchronous API
let files = readdir.sync("my/directory");

// Callback API
readdir("my/directory", (err, files) => { ... });

// Promises API
readdir("my/directory")
  .then(files => { ... })
  .catch(err => { ... });

// Async/Await API
let files = await readdir("my/directory");

// Async Iterator API
for await (let item of readdir.iterator("my/directory")) {
  ...
}

// EventEmitter API
readdir.stream("my/directory")
  .on("data", path => { ... })
  .on("error", err => { ... });

// Streaming API
let stream = readdir.stream("my/directory");
```
Pick Your API
Readdir Enhanced has multiple APIs, so you can pick whichever one you prefer. Here are some things to consider about each API:
| Function | Returns | Syntax | Blocks the thread? | Buffers results? |
|---|---|---|---|---|
| readdirSync() readdir.sync() | Array | Synchronous | yes | yes |
| readdir() readdir.async() readdirAsync() | Promise | async/await, Promise.then(), callback | no | yes |
| readdir.iterator() readdirIterator() | Iterator | for await...of | no | no |
| readdir.stream() readdirStream() | Readable Stream | stream.on("data"), stream.read(), stream.pipe() | no | no |
Blocking the Thread
The synchronous API blocks the thread until all results have been read. Only use this if you know the directory does not contain many items, or if your program needs the results before it can do anything else.
Buffered Results
Some APIs buffer the results, which means you get all the results at once (as an array). This can be more convenient to work with, but it can also consume a significant amount of memory, depending on how many results there are. The non-buffered APIs return each result to you one-by-one, which means you can start processing the results even while the directory is still being read.
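For example, here's a rough sketch (the package name and directory path are illustrative placeholders) contrasting the buffered Promise API with the non-buffered Async Iterator API:

```javascript
import readdir, { readdirIterator } from "@jsdevtools/readdir-enhanced";

// Buffered: nothing is returned until the whole crawl finishes,
// then every path arrives at once as an array.
let allFiles = await readdir("my/directory", { deep: true });
console.log(`Found ${allFiles.length} items`);

// Non-buffered: each path is yielded as soon as it's read,
// so you can start processing before the crawl finishes.
for await (let path of readdirIterator("my/directory", { deep: true })) {
  console.log(path);
}
```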
Alias Exports
The example above imported the readdir
default export and used its properties, such as readdir.sync
or readdir.async
to call specific APIs. For convenience, each of the different APIs is exported as a named function that you can import directly.
- readdir.sync() is also exported as readdirSync()
- readdir.async() is also exported as readdirAsync()
- readdir.iterator() is also exported as readdirIterator()
- readdir.stream() is also exported as readdirStream()
Here's how to import named exports rather than the default export:
```javascript
import { readdirSync, readdirAsync, readdirIterator, readdirStream } from "@jsdevtools/readdir-enhanced";
```
Enhanced Features
Readdir Enhanced adds several features to the built-in fs.readdir()
function. All of the enhanced features are opt-in, which makes Readdir Enhanced fully backward compatible by default. You can enable any of the features by passing in an options
argument as the second parameter.
Crawl Subdirectories
By default, Readdir Enhanced will only return the top-level contents of the starting directory. But you can set the deep
option to recursively traverse the subdirectories and return their contents as well.
Crawl ALL subdirectories
The deep
option can be set to true
to traverse the entire directory structure.
```javascript
import readdir from "@jsdevtools/readdir-enhanced";

// Crawl the entire directory tree
let files = await readdir("my/directory", { deep: true });
```
Crawl to a specific depth
The deep
option can be set to a number to only traverse that many levels deep. For example, calling readdir("my/directory", {deep: 2})
will return subdir1/file.txt
and subdir1/subdir2/file.txt
, but it won't return subdir1/subdir2/subdir3/file.txt
.
```javascript
import readdir from "@jsdevtools/readdir-enhanced";

// Only traverse 2 levels of subdirectories
let files = await readdir("my/directory", { deep: 2 });
```
Crawl subdirectories by name
For simple use-cases, you can use a regular expression or a glob pattern to crawl only the directories whose path matches the pattern. The path is relative to the starting directory by default, but you can customize this via options.basePath
.
NOTE: Glob patterns always use forward-slashes, even on Windows. This does not apply to regular expressions, though. Regular expressions should use the appropriate path separator for the environment. Or, you can match both types of separators using [\\/].
```javascript
import readdir from "@jsdevtools/readdir-enhanced";

// Only crawl the "lib" and "bin" subdirectories
// (notice that the "node_modules" subdirectory does NOT get crawled)
let files = await readdir("my/directory", { deep: /^(lib|bin)/ });
```
Custom recursion logic
For more advanced recursion, you can set the deep
option to a function that accepts an fs.Stats
object and returns a truthy value if the directory should be crawled.
NOTE: The fs.Stats object that's passed to the function has additional path and depth properties. The path is relative to the starting directory by default, but you can customize this via options.basePath. The depth is the number of subdirectories beneath the base path (see options.deep).
```javascript
import readdir from "@jsdevtools/readdir-enhanced";

// Crawl all subdirectories, except "node_modules"
function ignoreNodeModules (stats) {
  return stats.path.indexOf("node_modules") === -1;
}

let files = await readdir("my/directory", { deep: ignoreNodeModules });
```
Filtering
The filter
option lets you limit the results based on any criteria you want.
Filter by name
For simple use-cases, you can use a regular expression or a glob pattern to filter items by their path. The path is relative to the starting directory by default, but you can customize this via options.basePath
.
NOTE: Glob patterns always use forward-slashes, even on Windows. This does not apply to regular expressions, though. Regular expressions should use the appropriate path separator for the environment. Or, you can match both types of separators using [\\/].
```javascript
import readdir from "@jsdevtools/readdir-enhanced";

// Find all .txt files
let txtFiles = await readdir("my/directory", { filter: "*.txt" });

// Find all package.json files
let manifests = await readdir("my/directory", { deep: true, filter: "**/package.json" });

// Find everything with at least one number in the name
let numbered = await readdir("my/directory", { filter: /\d+/ });
```
Custom filtering logic
For more advanced filtering, you can specify a filter function that accepts an fs.Stats
object and returns a truthy value if the item should be included in the results.
NOTE: The fs.Stats object that's passed to the filter function has additional path and depth properties. The path is relative to the starting directory by default, but you can customize this via options.basePath. The depth is the number of subdirectories beneath the base path (see options.deep).
```javascript
import readdir from "@jsdevtools/readdir-enhanced";

// Only return file names containing an underscore
function myFilter (stats) {
  return stats.isFile() && stats.path.indexOf("_") >= 0;
}

let files = await readdir("my/directory", { filter: myFilter });
```
Get fs.Stats objects instead of strings
All of the Readdir Enhanced functions listed above return paths as strings. But in some situations, the path isn't enough information. Setting the stats option returns fs.Stats objects instead of path strings. The fs.Stats
object contains all sorts of useful information, such as the size, the creation date/time, and helper methods such as isFile()
, isDirectory()
, isSymbolicLink()
, etc.
NOTE: The fs.Stats objects that are returned also have additional path and depth properties. The path is relative to the starting directory by default, but you can customize this via options.basePath. The depth is the number of subdirectories beneath the base path (see options.deep).
```javascript
import readdir from "@jsdevtools/readdir-enhanced";

// Return fs.Stats objects instead of path strings
let stats = await readdir("my/directory", { stats: true });
```
Base Path
By default all Readdir Enhanced functions return paths that are relative to the starting directory. But you can use the basePath
option to customize this. The basePath
will be prepended to all of the returned paths. One common use-case for this is to set basePath
to the absolute path of the starting directory, so that all of the returned paths will be absolute.
```javascript
import readdir from "@jsdevtools/readdir-enhanced";
import * as path from "path";

// Get absolute paths
let absPath = path.resolve("my/directory");
let absolutePaths = await readdir("my/directory", { basePath: absPath });

// Get paths relative to the working directory
let relativePaths = await readdir("my/directory", { basePath: "my/directory" });
```
Path Separator
By default, Readdir Enhanced uses the correct path separator for your OS (\
on Windows, /
on Linux & MacOS). But you can set the sep
option to any separator character(s) that you want to use instead. This is usually used to ensure consistent path separators across different OSes.
```javascript
import readdir from "@jsdevtools/readdir-enhanced";

// Always use Windows path separators
let files = await readdir("my/directory", { deep: true, sep: "\\" });
```
Custom FS methods
By default, Readdir Enhanced uses the Node.js FileSystem module for methods like fs.stat, fs.readdir, and fs.lstat. But in some situations you may want to use your own FS methods (FTP, SSH, a remote drive, etc.). You can provide your own implementation of FS methods by setting options.fs, or override specific methods, such as options.fs.stat.
```javascript
import readdir from "@jsdevtools/readdir-enhanced";

// A stand-in for your own fs.readdir implementation (FTP, SSH, etc.)
function myCustomReaddirMethod (dir, callback) {
  callback(null, ["file1.txt", "file2.txt"]);
}

let options = {
  fs: {
    readdir: myCustomReaddirMethod,
  },
};

let files = await readdir("my/directory", options);
```
Backward Compatible
Readdir Enhanced is fully backward-compatible with Node.js' built-in fs.readdir()
and fs.readdirSync()
functions, so you can use it as a drop-in replacement in existing projects without affecting existing functionality, while still being able to use the enhanced features as needed.
```javascript
import readdir from "@jsdevtools/readdir-enhanced";

// Use it just like Node's built-in fs.readdir function
readdir("my/directory", (err, files) => {
  if (err) throw err;
  console.log(files);
});

// Use it just like Node's built-in fs.readdirSync function
let files = readdir.sync("my/directory");
```
A Note on Streams
The Readdir Enhanced streaming API follows the Node.js streaming API. A lot of questions about the streaming API can be answered by reading the Node.js documentation. However, we've tried to answer the most common questions here.
Stream Events
All events in the Node.js streaming API are supported by Readdir Enhanced. These events include "end", "close", "drain", "error", plus more. An exhaustive list of events is available in the Node.js documentation.
Detect when the Stream has finished
Using these events, we can detect when the stream has finished reading files.
```javascript
import readdir from "@jsdevtools/readdir-enhanced";

// Build the stream using the Streaming API
let stream = readdir.stream("my/directory")
  .on("data", path => console.log(path));

// Listen to the end event to detect the end of the stream
stream.on("end", () => console.log("All done!"));
```
Paused Streams vs. Flowing Streams
As with all Node.js streams, a Readdir Enhanced stream starts in "paused mode". For the stream to start emitting files, you'll need to switch it to "flowing mode".
There are many ways to trigger flowing mode, such as adding a stream.on("data") handler, using stream.pipe(), or calling stream.resume().
Unless you trigger flowing mode, your stream will stay paused and you won't receive any file events.
More information on paused vs. flowing mode can be found in the Node.js documentation.
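As a rough sketch (the directory path is just a placeholder), each of these switches a Readdir Enhanced stream into flowing mode:

```javascript
import { readdirStream } from "@jsdevtools/readdir-enhanced";

// 1) Attach a "data" handler
readdirStream("my/directory")
  .on("data", path => console.log(path))
  .on("end", () => console.log("done"));

// 2) Pipe the stream to a writable destination
// (paths are written back-to-back, with no separators)
readdirStream("my/directory")
  .pipe(process.stdout);

// 3) Explicitly resume the stream, discarding its data
readdirStream("my/directory")
  .on("end", () => console.log("done"))
  .resume();
```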
Contributing
Contributions, enhancements, and bug-fixes are welcome! Open an issue on GitHub and submit a pull request.
Building
To build the project locally on your computer:
- Clone this repo: git clone https://github.com/JS-DevTools/readdir-enhanced.git
- Install dependencies: npm install
- Run the tests: npm test
License
Readdir Enhanced is 100% free and open-source, under the MIT license. Use it however you want.
This package is Treeware. If you use it in production, then we ask that you buy the world a tree to thank us for our work. By contributing to the Treeware forest you’ll be creating employment for local families and restoring wildlife habitats.
Big Thanks To
Thanks to these awesome companies for their support of Open Source developers ❤