# Hexatool's fs-crawl module

Modular fs library.

## Installation

```bash
npm install --save @hexatool/fs-crawl
```

Using yarn:

```bash
yarn add @hexatool/fs-crawl
```

## What it does

Crawls your filesystem up or down.
## API

```typescript
crawl(root: string, options: CrawlerOptions): string[]
```

- `root`
  - Type: `string`
  - Optional: `false`
  - Description: The folder where the crawl starts.
- `options`
  - Type: `CrawlerOptions`
  - Optional: `false`
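A minimal sketch of the contract described above. This is not the library's implementation; it is a plain Node.js walk (with a hypothetical name, `crawlDown`) that mimics what a downward crawl with default options would return: bare file names, no directories, no base path. With the real package the call would look like `crawl('some/folder', { direction: 'down' })`.

```typescript
import * as fs from 'node:fs';
import * as path from 'node:path';

// Recursively walk `root` downward, collecting file names only
// (mirrors the documented default output: no base path, no directories).
function crawlDown(root: string): string[] {
  const out: string[] = [];
  for (const entry of fs.readdirSync(root, { withFileTypes: true })) {
    if (entry.isDirectory()) {
      out.push(...crawlDown(path.join(root, entry.name)));
    } else {
      out.push(entry.name);
    }
  }
  return out;
}
```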
### `CrawlerOptions`

- `direction`
  - Type: `string`
  - Optional: `false`
  - Allowed values: `up` or `down`
  - Description: The direction to crawl.
- `exclude`
  - Type: `(dirName: string, dirPath: string) => boolean`
  - Optional: `true`
  - Description: Applies an exclusion filter to all directories and only crawls those that do not satisfy the condition. Useful for speeding up crawling if you know you can ignore some directories. The function receives two parameters: the first is the name of the directory, and the second is the path to it. Currently, you can apply only one exclusion filter per crawler; this might change.
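For illustration, a hypothetical exclusion predicate matching the signature above (the predicate and its rules are my own example, not part of the library):

```typescript
// Skip node_modules and hidden directories; only the directory name is
// needed here, though the path is also available as the second argument.
const exclude = (dirName: string, dirPath: string): boolean =>
  dirName === 'node_modules' || dirName.startsWith('.');

// Would be passed as: crawl('src', { direction: 'down', exclude })
console.log(exclude('node_modules', '/project/node_modules')); // true
console.log(exclude('lib', '/project/lib')); // false
```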
- `excludeFiles`
  - Type: `boolean`
  - Optional: `true`
  - Description: Exclude files from the output.
- `filters`
  - Type: `(path: string, isDirectory: boolean) => boolean`
  - Optional: `true`
  - Description: Applies a filter to all directories and files and only adds those that satisfy it. Multiple filters are joined using AND. The function receives two parameters: the first is the path of the item, and the second is a flag indicating whether the item is a directory.
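A sketch of how AND-joined filters behave. The filter functions, the `Filter` alias, and the local `keep` helper are my own illustration of the composition rule; how the library itself accepts multiple filters is not shown here.

```typescript
type Filter = (path: string, isDirectory: boolean) => boolean;

// Keep only .ts files (directories pass so they can still be descended into).
const onlyTs: Filter = (p, isDir) => isDir || p.endsWith('.ts');
// Drop anything whose path mentions tests.
const noTests: Filter = (p) => !p.includes('test');

// "Joined using AND": an item survives only if every filter accepts it.
const filters: Filter[] = [onlyTs, noTests];
const keep = (p: string, isDir: boolean) => filters.every((f) => f(p, isDir));

console.log(keep('src/index.ts', false)); // true
console.log(keep('src/index.test.ts', false)); // false
```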
- `includeBasePath`
  - Type: `boolean`
  - Optional: `true`
  - Description: Use this to add the base path to each output path. By default, the crawler does not add the base path to the output. For example, if you crawl `node_modules`, the output will contain only the filenames.
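Not the library's code, just an illustration of the difference: with `includeBasePath`, each default output name would be prefixed with the crawled root.

```typescript
import * as path from 'node:path';

const root = 'node_modules';
const names = ['a.js', 'b.js'];                        // default output shape
const withBase = names.map((n) => path.join(root, n)); // includeBasePath: true
console.log(withBase); // e.g. ['node_modules/a.js', 'node_modules/b.js'] on POSIX
```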
- `includeDirs`
  - Type: `boolean`
  - Optional: `true`
  - Description: Use this to also add the directories to the output. For example, if you are crawling `node_modules` with this option off, the output will only contain the files, ignoring the directories, including `node_modules` itself.
- `normalizePath`
  - Type: `boolean`
  - Optional: `true`
  - Description: Normalize the given directory path using `path.normalize`.
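For reference, Node's `path.normalize` (which this option presumably applies to the root you pass in) collapses redundant separators and `.`/`..` segments:

```typescript
import * as path from 'node:path';

// Redundant separator and a '..' segment are collapsed away.
console.log(path.normalize('src//lib/../utils')); // 'src/utils' on POSIX
```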
- `resolvePaths`
  - Type: `boolean`
  - Optional: `true`
  - Description: Resolve the given directory path using `path.resolve`.
- `resolveSymlinks`
  - Type: `boolean`
  - Optional: `true`
  - Description: Use this to resolve and recurse over all symlinks. NOTE: This will affect crawling performance, so use it only if required.
- `suppressErrors`
  - Type: `boolean`
  - Optional: `true`
  - Description: Use this if you want to handle all errors manually. By default, the crawler handles and suppresses all errors, including permission and non-existent-directory errors.
- `maxDepth`
  - Type: `number`
  - Optional: `true`
  - Default: `Infinite`
  - Description: Use this to limit the maximum depth the crawler will crawl to before stopping.
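A sketch of what a depth limit means, not the library's implementation: the hypothetical `walk` below with a limit of `1` visits the root's entries plus one level of subdirectories, and goes no deeper.

```typescript
import * as fs from 'node:fs';
import * as path from 'node:path';

// Collect file names, descending into subdirectories only while the
// current depth is below maxDepth.
function walk(root: string, maxDepth: number, depth = 0): string[] {
  const out: string[] = [];
  for (const entry of fs.readdirSync(root, { withFileTypes: true })) {
    if (entry.isDirectory()) {
      if (depth < maxDepth) {
        out.push(...walk(path.join(root, entry.name), maxDepth, depth + 1));
      }
    } else {
      out.push(entry.name);
    }
  }
  return out;
}
```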
- `relativePaths`
  - Type: `boolean`
  - Optional: `true`
  - Description: Use this to get paths relative to the root directory in the output.
- `stopAt`
  - Type: `string`
  - Optional: `true`
  - Description: Use this to specify the folder at which the crawler should stop when `direction` is `up`. By default, the crawler stops when it reaches the root of the filesystem.
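A sketch of the upward direction with a stop folder, assuming the behavior described above; `crawlUp` is my own illustrative function, not the library's:

```typescript
import * as path from 'node:path';

// Visit the start directory and each successive parent, stopping at
// `stopAt` if given, otherwise at the filesystem root.
function crawlUp(start: string, stopAt?: string): string[] {
  const visited: string[] = [];
  let current = start;
  while (true) {
    visited.push(current);
    if (current === stopAt) break;
    const parent = path.dirname(current);
    if (parent === current) break; // dirname of the root is itself
    current = parent;
  }
  return visited;
}

console.log(crawlUp('/a/b/c', '/a')); // → ['/a/b/c', '/a/b', '/a']
```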
## Hexatool Code Quality Standards

By publishing this package, we commit ourselves to the following code quality standards:

- Respect Semantic Versioning: no breaking changes in patch or minor versions
- No surprises in transitive dependencies: use the bare minimum of dependencies needed to meet the purpose
- One specific purpose to meet, without carrying a bunch of unnecessary other utilities
- Tests as documentation and usage examples
- A well-documented README showing how to install and use the package
- A license favoring Open Source and collaboration