main functions fixes
55
desktop-operator/node_modules/path-scurry/LICENSE.md
generated
vendored
Normal file
@@ -0,0 +1,55 @@
# Blue Oak Model License

Version 1.0.0

## Purpose

This license gives everyone as much permission to work with this software as possible, while protecting contributors from liability.

## Acceptance

In order to receive this license, you must agree to its rules. The rules of this license are both obligations under that agreement and conditions to your license. You must not do anything with this software that triggers a rule that you cannot or will not follow.

## Copyright

Each contributor licenses you to do everything with this software that would otherwise infringe that contributor's copyright in it.

## Notices

You must ensure that everyone who gets a copy of any part of this software from you, with or without changes, also gets the text of this license or a link to <https://blueoakcouncil.org/license/1.0.0>.

## Excuse

If anyone notifies you in writing that you have not complied with [Notices](#notices), you can keep your license by taking all practical steps to comply within 30 days after the notice. If you do not do so, your license ends immediately.

## Patent

Each contributor licenses you to do everything with this software that would otherwise infringe any patent claims they can license or become able to license.

## Reliability

No contributor can revoke this license.

## No Liability

***As far as the law allows, this software comes as is, without any warranty or condition, and no contributor will be liable to anyone for any damages related to this software or this license, under any kind of legal claim.***
636
desktop-operator/node_modules/path-scurry/README.md
generated
vendored
Normal file
@@ -0,0 +1,636 @@
# path-scurry

Extremely high performant utility for building tools that read the file system, minimizing filesystem and path string munging operations to the greatest degree possible.

## Ugh, yet another file traversal thing on npm?

Yes. None of the existing ones gave me exactly what I wanted.

## Well what is it you wanted?

While working on [glob](http://npm.im/glob), I found that I needed a module to very efficiently manage the traversal over a folder tree, such that:

1. No `readdir()` or `stat()` would ever be called on the same file or directory more than one time.
2. No `readdir()` calls would be made if we can be reasonably sure that the path is not a directory. (Ie, a previous `readdir()` or `stat()` covered the path, and `ent.isDirectory()` is false.)
3. `path.resolve()`, `dirname()`, `basename()`, and other string-parsing/munging operations are minimized. This means it has to track "provisional" child nodes that may not exist (and if we find that they _don't_ exist, store that information as well, so we don't have to ever check again).
4. The API is not limited to use as a stream/iterator/etc. There are many cases where an API like node's `fs` is preferable.
5. It's more important to prevent excess syscalls than to be up to date, but it should be smart enough to know what it _doesn't_ know, and go get it seamlessly when requested.
6. Do not blow up the JS heap allocation if operating on a directory with a huge number of entries.
7. Handle all the weird aspects of Windows paths, like UNC paths and drive letters and wrongway slashes, so that the consumer can return canonical platform-specific paths without having to parse or join or do any error-prone string munging.

## PERFORMANCE

JavaScript people throw around the word "blazing" a lot. I hope that this module doesn't blaze anyone. But it does go very fast, in the cases it's optimized for, if used properly.

PathScurry provides ample opportunities to get extremely good performance, as well as several options to trade performance for convenience.

Benchmarks can be run by executing `npm run bench`.

As is always the case, doing more means going slower, doing less means going faster, and there are trade-offs between speed and memory usage.

PathScurry makes heavy use of [LRUCache](http://npm.im/lru-cache) to efficiently cache whatever it can, and `Path` objects remain in the graph for the lifetime of the walker, so repeated calls with a single PathScurry object will be extremely fast. However, adding items to a cold cache means "doing more", so in those cases, we pay a price. Nothing is free, but every effort has been made to reduce costs wherever possible.

Also, note that a "cache as long as possible" approach means that changes to the filesystem may not be reflected in the results of repeated PathScurry operations.

For resolving string paths, `PathScurry` ranges from 5-50 times faster than `path.resolve` on repeated resolutions, but around 100 to 1000 times _slower_ on the first resolution. If your program is spending a lot of time resolving the _same_ paths repeatedly (like, thousands or millions of times), then this can be beneficial. But both implementations are pretty fast, and speeding up an infrequent operation from 4µs to 400ns is not going to move the needle on your app's performance.

For walking file system directory trees, a lot depends on how often a given PathScurry object will be used, and also on the walk method used.

With default settings on a folder tree of 100,000 items, consisting of around a 10-to-1 ratio of normal files to directories, PathScurry performs comparably to [@nodelib/fs.walk](http://npm.im/@nodelib/fs.walk), which is the fastest and most reliable file system walker I could find. As far as I can tell, it's almost impossible to go much faster in a Node.js program, just based on how fast you can push syscalls out to the fs thread pool.

On my machine, that is about 1000-1200 completed walks per second for async or stream walks, and around 500-600 walks per second synchronously.

In the warm cache state, PathScurry's performance increases around 4x for async `for await` iteration, 10-15x faster for streams and synchronous `for of` iteration, and anywhere from 30x to 80x faster for the rest.

```
# walk 100,000 fs entries, 10/1 file/dir ratio
# operations / ms
             New PathScurry object | Reuse PathScurry object
stream:                   1112.589 | 13974.917
sync stream:               492.718 | 15028.343
async walk:               1095.648 | 32706.395
sync walk:                 527.632 | 46129.772
async iter:               1288.821 | 5045.510
sync iter:                 498.496 | 17920.746
```

A hand-rolled walk calling `entry.readdir()` and recursing through the entries can benefit even more from caching, with greater flexibility and without the overhead of streams or generators.

The cold cache state is still limited by the costs of file system operations, but with a warm cache, the only bottleneck is CPU speed and VM optimizations. Of course, in that case, some care must be taken to ensure that you don't lose performance as a result of silly mistakes, like calling `readdir()` on entries that you know are not directories.

```
# manual recursive iteration functions
          cold cache | warm cache
async:      1164.901 | 17923.320
cb:         1101.127 | 40999.344
zalgo:      1082.240 | 66689.936
sync:        526.935 | 87097.591
```

In this case, the speed improves by around 10-20x in the async case, 40x in the case of using `entry.readdirCB` with protections against synchronous callbacks, 50-100x with callback deferrals disabled, and _several hundred times faster_ for synchronous iteration.

If you can think of a case that is not covered in these benchmarks, or an implementation that performs significantly better than PathScurry, please [let me know](https://github.com/isaacs/path-scurry/issues).

## USAGE

```ts
// hybrid module, load with either method
import { PathScurry, Path } from 'path-scurry'
// or:
const { PathScurry, Path } = require('path-scurry')

// very simple example, say we want to find and
// delete all the .DS_Store files in a given path
// note that the API is very similar to just a
// naive walk with fs.readdir()
import { unlink } from 'fs/promises'

// easy way, iterate over the directory and do the thing
const pw = new PathScurry(process.cwd())
for await (const entry of pw) {
  if (entry.isFile() && entry.name === '.DS_Store') {
    unlink(entry.fullpath())
  }
}

// here it is as a manual recursive method
const walk = async (entry: Path) => {
  const promises: Promise<any>[] = []
  // readdir doesn't throw on non-directories, it just doesn't
  // return any entries, to save stack trace costs.
  // Items are returned in arbitrary unsorted order
  for (const child of await pw.readdir(entry)) {
    // each child is a Path object
    if (child.name === '.DS_Store' && child.isFile()) {
      // could also do pw.resolve(entry, child.name),
      // just like fs.readdir walking, but .fullpath is
      // a *slightly* more efficient shorthand.
      promises.push(unlink(child.fullpath()))
    } else if (child.isDirectory()) {
      promises.push(walk(child))
    }
  }
  return Promise.all(promises)
}

walk(pw.cwd).then(() => {
  console.log('all .DS_Store files removed')
})

const pw2 = new PathScurry('/a/b/c') // pw2.cwd is the Path for /a/b/c
const relativeDir = pw2.cwd.resolve('../x') // Path entry for '/a/b/x'
const relative2 = pw2.cwd.resolve('/a/b/d/../x') // same path, same entry
assert.equal(relativeDir, relative2)
```

## API

[Full TypeDoc API](https://isaacs.github.io/path-scurry)

There are platform-specific classes exported, but for the most part, the default `PathScurry` and `Path` exports are what you most likely need, unless you are testing behavior for other platforms.

Intended public API is documented here, but the full documentation does include internal types, which should not be accessed directly.

### Interface `PathScurryOpts`

The type of the `options` argument passed to the `PathScurry` constructor.

- `nocase`: Boolean indicating that file names should be compared case-insensitively. Defaults to `true` on darwin and win32 implementations, `false` elsewhere.

  **Warning** Performing case-insensitive matching on a case-sensitive filesystem will result in occasionally very bizarre behavior. Performing case-sensitive matching on a case-insensitive filesystem may negatively impact performance.

- `childrenCacheSize`: Number of child entries to cache, in order to speed up `resolve()` and `readdir()` calls. Defaults to `16 * 1024` (ie, `16384`).

  Setting it to a higher value will run the risk of JS heap allocation errors on large directory trees. Setting it to `256` or smaller will significantly reduce the construction time and data consumption overhead, but with the downside of operations being slower on large directory trees. Setting it to `0` will mean that effectively no operations are cached, and this module will be roughly the same speed as `fs` for file system operations, and _much_ slower than `path.resolve()` for repeated path resolution.

- `fs`: An object that will be used to override the default `fs` methods. Any methods that are not overridden will use Node's built-in implementations.

  - lstatSync
  - readdir (callback `withFileTypes` Dirent variant, used for readdirCB and most walks)
  - readdirSync
  - readlinkSync
  - realpathSync
  - promises: Object containing the following async methods:
    - lstat
    - readdir (Dirent variant only)
    - readlink
    - realpath
### Interface `WalkOptions`

The options object that may be passed to all walk methods.

- `withFileTypes`: Boolean, default true. Indicates that `Path` objects should be returned. Set to `false` to get string paths instead.
- `follow`: Boolean, default false. Attempt to read directory entries from symbolic links. Otherwise, only actual directories are traversed. Regardless of this setting, a given target path will only ever be walked once, meaning that a symbolic link to a previously traversed directory will never be followed.

  Setting this imposes a slight performance penalty, because `readlink` must be called on all symbolic links encountered, in order to avoid infinite cycles.

- `filter`: Function `(entry: Path) => boolean`. If provided, will prevent the inclusion of any entry for which it returns a falsey value. This will not prevent directories from being traversed if they do not pass the filter, though it will prevent the directories themselves from being included in the results. By default, if no filter is provided, then all entries are included in the results.
- `walkFilter`: Function `(entry: Path) => boolean`. If provided, will prevent the traversal of any directory (or in the case of `follow:true` symbolic links to directories) for which the function returns false. This will not prevent the directories themselves from being included in the result set. Use `filter` for that.

Note that TypeScript return types will only be inferred properly from static analysis if the `withFileTypes` option is omitted, or set to a constant `true` or `false` value.
### Class `PathScurry`

The main interface. Defaults to an appropriate class based on the current platform.

Use `PathScurryWin32`, `PathScurryDarwin`, or `PathScurryPosix` if implementation-specific behavior is desired.

All walk methods may be called with a `WalkOptions` argument to walk over the object's current working directory with the supplied options.

#### `async pw.walk(entry?: string | Path | WalkOptions, opts?: WalkOptions)`

Walk the directory tree according to the options provided, resolving to an array of all entries found.

#### `pw.walkSync(entry?: string | Path | WalkOptions, opts?: WalkOptions)`

Walk the directory tree according to the options provided, returning an array of all entries found.

#### `pw.iterate(entry?: string | Path | WalkOptions, opts?: WalkOptions)`

Iterate over the directory asynchronously, for use with `for await of`. This is also the default async iterator method.

#### `pw.iterateSync(entry?: string | Path | WalkOptions, opts?: WalkOptions)`

Iterate over the directory synchronously, for use with `for of`. This is also the default sync iterator method.

#### `pw.stream(entry?: string | Path | WalkOptions, opts?: WalkOptions)`

Return a [Minipass](http://npm.im/minipass) stream that emits each entry or path string in the walk. Results are made available asynchronously.

#### `pw.streamSync(entry?: string | Path | WalkOptions, opts?: WalkOptions)`

Return a [Minipass](http://npm.im/minipass) stream that emits each entry or path string in the walk. Results are made available synchronously, meaning that the walk will complete in a single tick if the stream is fully consumed.

#### `pw.cwd`

Path object representing the current working directory for the PathScurry.

#### `pw.chdir(path: string)`

Set the new effective current working directory for the scurry object, so that `path.relative()` and `path.relativePosix()` return values relative to the new cwd path.

#### `pw.depth(path?: Path | string): number`

Return the depth of the specified path (or the PathScurry cwd) within the directory tree.

Root entries have a depth of `0`.

#### `pw.resolve(...paths: string[])`

Caching `path.resolve()`.

Significantly faster than `path.resolve()` if called repeatedly with the same paths. Significantly slower otherwise, as it builds out the cached Path entries.

To get a `Path` object resolved from the `PathScurry`, use `pw.cwd.resolve(path)`. Note that `Path.resolve` only takes a single string argument, not multiple.
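The trade-off described above comes from memoization; the idea can be sketched crudely (without PathScurry's shared path graph) in a few lines:

```ts
import { resolve } from 'path'

// Minimal memoized resolver: pay the full string-munging price once
// per input, then answer repeats with a single Map lookup.
// PathScurry's real cache goes further and shares work across
// *different* strings that have common ancestors.
const cache = new Map<string, string>()
const cachedResolve = (p: string): string => {
  let r = cache.get(p)
  if (r === undefined) {
    r = resolve(p)
    cache.set(p, r)
  }
  return r
}

// first call does the work, second call is a Map hit
cachedResolve('./some/./long/../path')
cachedResolve('./some/./long/../path')
```
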
#### `pw.resolvePosix(...paths: string[])`

Caching `path.resolve()`, but always using posix style paths.

This is identical to `pw.resolve(...paths)` on posix systems (ie, everywhere except Windows).

On Windows, it returns the full absolute UNC path using `/` separators. Ie, instead of `'C:\\foo\\bar'`, it would return `'//?/C:/foo/bar'`.

#### `pw.relative(path: string | Path): string`

Return the relative path from the PathScurry cwd to the supplied path string or entry.

If the nearest common ancestor is the root, then an absolute path is returned.

#### `pw.relativePosix(path: string | Path): string`

Return the relative path from the PathScurry cwd to the supplied path string or entry, using `/` path separators.

If the nearest common ancestor is the root, then an absolute path is returned.

On posix platforms (ie, all platforms except Windows), this is identical to `pw.relative(path)`.

On Windows systems, it returns the resulting string as a `/`-delimited path. If an absolute path is returned (because the target does not share a common ancestor with `pw.cwd`), then a full absolute UNC path will be returned. Ie, instead of `'C:\\foo\\bar'`, it would return `'//?/C:/foo/bar'`.

#### `pw.basename(path: string | Path): string`

Return the basename of the provided string or Path.

#### `pw.dirname(path: string | Path): string`

Return the parent directory of the supplied string or Path.

#### `async pw.readdir(dir = pw.cwd, opts = { withFileTypes: true })`

Read the directory and resolve to an array of strings if `withFileTypes` is explicitly set to `false`, or Path objects otherwise.

Can be called as `pw.readdir({ withFileTypes: boolean })` as well.

Returns `[]` if no entries are found, or if any error occurs.

Note that TypeScript return types will only be inferred properly from static analysis if the `withFileTypes` option is omitted, or set to a constant `true` or `false` value.

#### `pw.readdirSync(dir = pw.cwd, opts = { withFileTypes: true })`

Synchronous `pw.readdir()`

#### `async pw.readlink(link = pw.cwd, opts = { withFileTypes: false })`

Call `fs.readlink` on the supplied string or Path object, and return the result.

Can be called as `pw.readlink({ withFileTypes: boolean })` as well.

Returns `undefined` if any error occurs (for example, if the argument is not a symbolic link), or a `Path` object if `withFileTypes` is explicitly set to `true`, or a string otherwise.

Note that TypeScript return types will only be inferred properly from static analysis if the `withFileTypes` option is omitted, or set to a constant `true` or `false` value.

#### `pw.readlinkSync(link = pw.cwd, opts = { withFileTypes: false })`

Synchronous `pw.readlink()`

#### `async pw.lstat(entry = pw.cwd)`

Call `fs.lstat` on the supplied string or Path object, and fill in as much information as possible, returning the updated `Path` object.

Returns `undefined` if the entry does not exist, or if any error is encountered.

Note that some `Stats` data (such as `ino`, `dev`, and `mode`) will not be supplied. For those things, you'll need to call `fs.lstat` yourself.

#### `pw.lstatSync(entry = pw.cwd)`

Synchronous `pw.lstat()`

#### `pw.realpath(entry = pw.cwd, opts = { withFileTypes: false })`

Call `fs.realpath` on the supplied string or Path object, and return the realpath if available.

Returns `undefined` if any error occurs.

May be called as `pw.realpath({ withFileTypes: boolean })` to run on `pw.cwd`.

#### `pw.realpathSync(entry = pw.cwd, opts = { withFileTypes: false })`

Synchronous `pw.realpath()`

### Class `Path` implements [fs.Dirent](https://nodejs.org/docs/latest/api/fs.html#class-fsdirent)

Object representing a given path on the filesystem, which may or may not exist.

Note that the actual class in use will be either `PathWin32` or `PathPosix`, depending on the implementation of `PathScurry` in use. They differ in the separators used to split and join path strings, and the handling of root paths.

In `PathPosix` implementations, paths are split and joined using the `'/'` character, and `'/'` is the only root path ever in use.

In `PathWin32` implementations, paths are split using either `'/'` or `'\\'` and joined using `'\\'`, and multiple roots may be in use based on the drives and UNC paths encountered. UNC paths such as `//?/C:/` that identify a drive letter will be treated as an alias for the same root entry as their associated drive letter (in this case `'C:\\'`).

#### `path.name`

Name of this file system entry.

**Important**: _always_ test the path name against any test string using the `isNamed` method, and not by directly comparing this string. Otherwise, unicode path strings that the system sees as identical will not be properly treated as the same path, leading to incorrect behavior and possible security issues.

#### `path.isNamed(name: string): boolean`

Return true if the path is a match for the given path name. This handles case sensitivity and unicode normalization.

Note: even on case-sensitive systems, it is **not** safe to test the equality of the `.name` property to determine whether a given pathname matches, due to unicode normalization mismatches.

Always use this method instead of testing the `path.name` property directly.
#### `path.isCWD`

Set to true if this `Path` object is the current working directory of the `PathScurry` collection that contains it.

#### `path.getType()`

Returns the type of the Path object, `'File'`, `'Directory'`, etc.

#### `path.isType(t: type)`

Returns true if `is{t}()` returns true.

For example, `path.isType('Directory')` is equivalent to `path.isDirectory()`.

#### `path.depth()`

Return the depth of the Path entry within the directory tree. Root paths have a depth of `0`.

#### `path.fullpath()`

The fully resolved path to the entry.

#### `path.fullpathPosix()`

The fully resolved path to the entry, using `/` separators.

On posix systems, this is identical to `path.fullpath()`. On windows, this will return a fully resolved absolute UNC path using `/` separators. Eg, instead of `'C:\\foo\\bar'`, it will return `'//?/C:/foo/bar'`.

#### `path.isFile()`, `path.isDirectory()`, etc.

Same as the identical `fs.Dirent.isX()` methods.

#### `path.isUnknown()`

Returns true if the path's type is unknown. Always returns true when the path is known to not exist.

#### `path.resolve(p: string)`

Return a `Path` object associated with the provided path string as resolved from the current Path object.

#### `path.relative(): string`

Return the relative path from the PathScurry cwd to this path entry.

If the nearest common ancestor is the root, then an absolute path is returned.

#### `path.relativePosix(): string`

Return the relative path from the PathScurry cwd to this path entry, using `/` path separators.

If the nearest common ancestor is the root, then an absolute path is returned.

On posix platforms (ie, all platforms except Windows), this is identical to `path.relative()`.

On Windows systems, it returns the resulting string as a `/`-delimited path. If an absolute path is returned (because the target does not share a common ancestor with `pw.cwd`), then a full absolute UNC path will be returned. Ie, instead of `'C:\\foo\\bar'`, it would return `'//?/C:/foo/bar'`.

#### `async path.readdir()`

Return an array of `Path` objects found by reading the associated path entry.

If path is not a directory, or if any error occurs, returns `[]`, and marks all children as provisional and non-existent.

#### `path.readdirSync()`

Synchronous `path.readdir()`

#### `async path.readlink()`

Return the `Path` object referenced by the `path` as a symbolic link.

If the `path` is not a symbolic link, or any error occurs, returns `undefined`.

#### `path.readlinkSync()`

Synchronous `path.readlink()`

#### `async path.lstat()`

Call `lstat` on the path object, and fill it in with the details determined.

If path does not exist, or any other error occurs, returns `undefined`, and marks the path as "unknown" type.

#### `path.lstatSync()`

Synchronous `path.lstat()`

#### `async path.realpath()`

Call `realpath` on the path, and return a Path object corresponding to the result, or `undefined` if any error occurs.

#### `path.realpathSync()`

Synchronous `path.realpath()`
1116
desktop-operator/node_modules/path-scurry/dist/commonjs/index.d.ts
generated
vendored
Normal file
File diff suppressed because it is too large
1
desktop-operator/node_modules/path-scurry/dist/commonjs/index.d.ts.map
generated
vendored
Normal file
File diff suppressed because one or more lines are too long
2014
desktop-operator/node_modules/path-scurry/dist/commonjs/index.js
generated
vendored
Normal file
File diff suppressed because it is too large
1
desktop-operator/node_modules/path-scurry/dist/commonjs/index.js.map
generated
vendored
Normal file
File diff suppressed because one or more lines are too long
3
desktop-operator/node_modules/path-scurry/dist/commonjs/package.json
generated
vendored
Normal file
@@ -0,0 +1,3 @@
{
  "type": "commonjs"
}
1116
desktop-operator/node_modules/path-scurry/dist/esm/index.d.ts
generated
vendored
Normal file
File diff suppressed because it is too large
1
desktop-operator/node_modules/path-scurry/dist/esm/index.d.ts.map
generated
vendored
Normal file
File diff suppressed because one or more lines are too long
1979
desktop-operator/node_modules/path-scurry/dist/esm/index.js
generated
vendored
Normal file
File diff suppressed because it is too large
1
desktop-operator/node_modules/path-scurry/dist/esm/index.js.map
generated
vendored
Normal file
File diff suppressed because one or more lines are too long
3
desktop-operator/node_modules/path-scurry/dist/esm/package.json
generated
vendored
Normal file
@@ -0,0 +1,3 @@
{
  "type": "module"
}
15
desktop-operator/node_modules/path-scurry/node_modules/lru-cache/LICENSE
generated
vendored
Normal file
@@ -0,0 +1,15 @@
The ISC License

Copyright (c) 2010-2023 Isaac Z. Schlueter and Contributors

Permission to use, copy, modify, and/or distribute this software for any purpose with or without fee is hereby granted, provided that the above copyright notice and this permission notice appear in all copies.

THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
331
desktop-operator/node_modules/path-scurry/node_modules/lru-cache/README.md
generated
vendored
Normal file
@@ -0,0 +1,331 @@
# lru-cache

A cache object that deletes the least-recently-used items.

Specify a max number of the most recently used items that you
want to keep, and this cache will keep that many of the most
recently accessed items.

This is not primarily a TTL cache, and does not make strong TTL
guarantees. There is no preemptive pruning of expired items by
default, but you _may_ set a TTL on the cache or on a single
`set`. If you do so, it will treat expired items as missing, and
delete them when fetched. If you are more interested in TTL
caching than LRU caching, check out
[@isaacs/ttlcache](http://npm.im/@isaacs/ttlcache).

As of version 7, this is one of the most performant LRU
implementations available in JavaScript, and supports a wide
diversity of use cases. However, note that using some of the
features will necessarily impact performance, by causing the
cache to have to do more work. See the "Performance" section
below.

## Installation

```bash
npm install lru-cache --save
```

## Usage

```js
// hybrid module, either works
import { LRUCache } from 'lru-cache'
// or:
const { LRUCache } = require('lru-cache')
// or in minified form for web browsers:
import { LRUCache } from 'http://unpkg.com/lru-cache@9/dist/mjs/index.min.mjs'

// At least one of 'max', 'ttl', or 'maxSize' is required, to prevent
// unsafe unbounded storage.
//
// In most cases, it's best to specify a max for performance, so all
// the required memory allocation is done up-front.
//
// All the other options are optional, see the sections below for
// documentation on what each one does.  Most of them can be
// overridden for specific items in get()/set()
const options = {
  max: 500,

  // for use with tracking overall storage size
  maxSize: 5000,
  sizeCalculation: (value, key) => {
    return 1
  },

  // for use when you need to clean up something when objects
  // are evicted from the cache
  dispose: (value, key) => {
    freeFromMemoryOrWhatever(value)
  },

  // how long to live in ms
  ttl: 1000 * 60 * 5,

  // return stale items before removing from cache?
  allowStale: false,

  updateAgeOnGet: false,
  updateAgeOnHas: false,

  // async method to use for cache.fetch(), for
  // stale-while-revalidate type of behavior
  fetchMethod: async (
    key,
    staleValue,
    { options, signal, context }
  ) => {},
}

const cache = new LRUCache(options)

cache.set('key', 'value')
cache.get('key') // "value"

// non-string keys ARE fully supported
// but note that it must be THE SAME object, not
// just a JSON-equivalent object.
var someObject = { a: 1 }
cache.set(someObject, 'a value')
// Object keys are not toString()-ed
cache.set('[object Object]', 'a different value')
assert.equal(cache.get(someObject), 'a value')
// A similar object with same keys/values won't work,
// because it's a different object identity
assert.equal(cache.get({ a: 1 }), undefined)

cache.clear() // empty the cache
```

If you put more stuff in the cache, then less recently used items
will fall out. That's what an LRU cache is.

For full description of the API and all options, please see [the
LRUCache typedocs](https://isaacs.github.io/node-lru-cache/)

## Storage Bounds Safety

This implementation aims to be as flexible as possible, within
the limits of safe memory consumption and optimal performance.

At initial object creation, storage is allocated for `max` items.
If `max` is set to zero, then some performance is lost, and item
count is unbounded. Either `maxSize` or `ttl` _must_ be set if
`max` is not specified.

If `maxSize` is set, then this creates a safe limit on the
maximum storage consumed, but without the performance benefits of
pre-allocation. When `maxSize` is set, every item _must_ provide
a size, either via the `sizeCalculation` method provided to the
constructor, or via a `size` or `sizeCalculation` option provided
to `cache.set()`. The size of every item _must_ be a positive
integer.
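
The size-accounting rule just described can be sketched as a toy
(this is only an illustration of the semantics, not lru-cache's
actual implementation; a real cache also rejects single items
larger than `maxSize`):

```js
// Toy illustration: every entry carries a positive-integer size,
// and adding an entry evicts the oldest entries until the total
// fits under maxSize. A Map preserves insertion order, so the
// first key iterated is the oldest entry. NOT lru-cache's code.
class SizeBoundedCache {
  constructor(maxSize) {
    this.maxSize = maxSize
    this.totalSize = 0
    this.data = new Map()  // key -> value (insertion order = age)
    this.sizes = new Map() // key -> size
  }
  set(key, value, size) {
    if (!Number.isInteger(size) || size <= 0) {
      throw new TypeError('size must be a positive integer')
    }
    if (this.data.has(key)) this.delete(key)
    // evict oldest entries until the new item fits
    while (this.totalSize + size > this.maxSize && this.data.size > 0) {
      this.delete(this.data.keys().next().value)
    }
    this.data.set(key, value)
    this.sizes.set(key, size)
    this.totalSize += size
  }
  delete(key) {
    if (!this.data.has(key)) return false
    this.totalSize -= this.sizes.get(key)
    this.sizes.delete(key)
    return this.data.delete(key)
  }
  get(key) { return this.data.get(key) }
}

const c = new SizeBoundedCache(10)
c.set('a', 1, 4)
c.set('b', 2, 4)
c.set('c', 3, 4) // totals would be 12, so 'a' (oldest) is evicted
console.log([...c.data.keys()]) // ['b', 'c']
```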

If neither `max` nor `maxSize` are set, then `ttl` tracking must
be enabled. Note that, even when tracking item `ttl`, items are
_not_ preemptively deleted when they become stale, unless
`ttlAutopurge` is enabled. Instead, they are only purged the
next time the key is requested. Thus, if `ttlAutopurge`, `max`,
and `maxSize` are all not set, then the cache will potentially
grow unbounded.

In this case, a warning is printed to standard error. Future
versions may require the use of `ttlAutopurge` if `max` and
`maxSize` are not specified.

If you truly wish to use a cache that is bound _only_ by TTL
expiration, consider using a `Map` object, and calling
`setTimeout` to delete entries when they expire. It will perform
much better than an LRU cache.

Here is an implementation you may use, under the same
[license](./LICENSE) as this package:

```js
// a storage-unbounded ttl cache that is not an lru-cache
const cache = {
  data: new Map(),
  timers: new Map(),
  set: (k, v, ttl) => {
    if (cache.timers.has(k)) {
      clearTimeout(cache.timers.get(k))
    }
    cache.timers.set(
      k,
      setTimeout(() => cache.delete(k), ttl)
    )
    cache.data.set(k, v)
  },
  get: k => cache.data.get(k),
  has: k => cache.data.has(k),
  delete: k => {
    if (cache.timers.has(k)) {
      clearTimeout(cache.timers.get(k))
    }
    cache.timers.delete(k)
    return cache.data.delete(k)
  },
  clear: () => {
    cache.data.clear()
    for (const v of cache.timers.values()) {
      clearTimeout(v)
    }
    cache.timers.clear()
  },
}
```
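
A condensed, standalone version of that Map-plus-`setTimeout`
pattern, with a short usage demo (names like `set`/`del` here are
our own, not part of any library; timers are cleared so the
process can exit):

```js
// Condensed sketch of the TTL cache above: a data Map plus a
// timers Map, where deleting an entry also clears its timer.
const data = new Map()
const timers = new Map()

const del = k => {
  if (timers.has(k)) clearTimeout(timers.get(k))
  timers.delete(k)
  return data.delete(k)
}

const set = (k, v, ttl) => {
  if (timers.has(k)) clearTimeout(timers.get(k))
  timers.set(k, setTimeout(() => del(k), ttl))
  data.set(k, v)
}

set('session', { user: 'alice' }, 60_000) // expires in one minute
console.log(data.get('session').user)     // synchronous reads work immediately
del('session')                            // manual delete also clears the timer
```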

If that isn't to your liking, check out
[@isaacs/ttlcache](http://npm.im/@isaacs/ttlcache).

## Storing Undefined Values

This cache never stores undefined values, as `undefined` is used
internally in a few places to indicate that a key is not in the
cache.

You may call `cache.set(key, undefined)`, but this is just
an alias for `cache.delete(key)`. Note that this has the effect
that `cache.has(key)` will return _false_ after setting it to
undefined.

```js
cache.set(myKey, undefined)
cache.has(myKey) // false!
```

If you need to track `undefined` values, and still note that the
key is in the cache, an easy workaround is to use a sigil object
of your own.

```js
import { LRUCache } from 'lru-cache'
const undefinedValue = Symbol('undefined')
const cache = new LRUCache(...)
const mySet = (key, value) =>
  cache.set(key, value === undefined ? undefinedValue : value)
const myGet = (key, value) => {
  const v = cache.get(key)
  return v === undefinedValue ? undefined : v
}
```

## Performance

As of January 2022, version 7 of this library is one of the most
performant LRU cache implementations in JavaScript.

Benchmarks can be extremely difficult to get right. In
particular, the performance of set/get/delete operations on
objects will vary _wildly_ depending on the type of key used. V8
is highly optimized for objects with keys that are short strings,
especially integer numeric strings. Thus any benchmark which
tests _solely_ using numbers as keys will tend to find that an
object-based approach performs the best.

Note that coercing _anything_ to strings to use as object keys is
unsafe, unless you can be 100% certain that no other type of
value will be used. For example:

```js
const myCache = {}
const set = (k, v) => (myCache[k] = v)
const get = k => myCache[k]

set({}, 'please hang onto this for me')
set('[object Object]', 'oopsie')
```
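
The collision above happens because every plain object stringifies
to `'[object Object]'` when used as an object key. A `Map` avoids
it by comparing keys by identity, which is also why this library
requires the *same* object (not a JSON-equivalent one) to hit a
cached entry. A quick dependency-free illustration:

```js
// Object keys coerce to the same string, so they collide in a
// plain object; a Map compares keys by identity instead.
const objCache = {}
objCache[{}] = 'first'
objCache[{}] = 'second' // overwrites: both keys are '[object Object]'

const mapCache = new Map()
const k1 = {}
const k2 = {}
mapCache.set(k1, 'first')
mapCache.set(k2, 'second') // distinct entries: different identities

console.log(Object.keys(objCache).length) // 1
console.log(mapCache.size)                // 2
console.log(mapCache.get({}))             // undefined: a new {} is a new key
```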

Also beware of "Just So" stories regarding performance. Garbage
collection of large (especially: deep) object graphs can be
incredibly costly, with several "tipping points" where it
increases exponentially. As a result, putting that off until
later can make it much worse, and less predictable. If a library
performs well, but only in a scenario where the object graph is
kept shallow, then that won't help you if you are using large
objects as keys.

In general, when attempting to use a library to improve
performance (such as a cache like this one), it's best to choose
an option that will perform well in the sorts of scenarios where
you'll actually use it.

This library is optimized for repeated gets and minimizing
eviction time, since that is the expected need of an LRU. Set
operations are somewhat slower on average than a few other
options, in part because of that optimization. It is assumed
that you'll be caching some costly operation, ideally as rarely
as possible, so optimizing set over get would be unwise.

If performance matters to you:

1. If it's at all possible to use small integer values as keys,
   and you can guarantee that no other types of values will be
   used as keys, then do that, and use a cache such as
   [lru-fast](https://npmjs.com/package/lru-fast), or
   [mnemonist's
   LRUCache](https://yomguithereal.github.io/mnemonist/lru-cache)
   which uses an Object as its data store.

2. Failing that, if at all possible, use short non-numeric
   strings (ie, less than 256 characters) as your keys, and use
   [mnemonist's
   LRUCache](https://yomguithereal.github.io/mnemonist/lru-cache).

3. If the types of your keys will be anything else, especially
   long strings, strings that look like floats, objects, or some
   mix of types, or if you aren't sure, then this library will
   work well for you.

   If you do not need the features that this library provides
   (like asynchronous fetching, a variety of TTL staleness
   options, and so on), then [mnemonist's
   LRUMap](https://yomguithereal.github.io/mnemonist/lru-map) is
   a very good option, and just slightly faster than this module
   (since it does considerably less).

4. Do not use a `dispose` function, size tracking, or especially
   ttl behavior, unless absolutely needed. These features are
   convenient, and necessary in some use cases, and every attempt
   has been made to make the performance impact minimal, but it
   isn't nothing.

## Breaking Changes in Version 7

This library changed to a different algorithm and internal data
structure in version 7, yielding significantly better
performance, albeit with some subtle changes as a result.

If you were relying on the internals of LRUCache in version 6 or
before, it probably will not work in version 7 and above.

## Breaking Changes in Version 8

- The `fetchContext` option was renamed to `context`, and may no
  longer be set on the cache instance itself.
- Rewritten in TypeScript, so pretty much all the types moved
  around a lot.
- The AbortController/AbortSignal polyfill was removed. For this
  reason, **Node version 16.14.0 or higher is now required**.
- Internal properties were moved to actual private class
  properties.
- Keys and values must not be `null` or `undefined`.
- Minified export available at `'lru-cache/min'`, for both CJS
  and MJS builds.

## Breaking Changes in Version 9

- Named export only, no default export.
- AbortController polyfill returned, albeit with a warning when
  used.

## Breaking Changes in Version 10

- `cache.fetch()` return type is now `Promise<V | undefined>`
  instead of `Promise<V | void>`. This is an irrelevant change
  practically speaking, but can require changes for TypeScript
  users.

For more info, see the [change log](CHANGELOG.md).
1277
desktop-operator/node_modules/path-scurry/node_modules/lru-cache/dist/commonjs/index.d.ts
generated
vendored
Normal file
File diff suppressed because it is too large
1
desktop-operator/node_modules/path-scurry/node_modules/lru-cache/dist/commonjs/index.d.ts.map
generated
vendored
Normal file
File diff suppressed because one or more lines are too long
1546
desktop-operator/node_modules/path-scurry/node_modules/lru-cache/dist/commonjs/index.js
generated
vendored
Normal file
File diff suppressed because it is too large
1
desktop-operator/node_modules/path-scurry/node_modules/lru-cache/dist/commonjs/index.js.map
generated
vendored
Normal file
File diff suppressed because one or more lines are too long
2
desktop-operator/node_modules/path-scurry/node_modules/lru-cache/dist/commonjs/index.min.js
generated
vendored
Normal file
File diff suppressed because one or more lines are too long
7
desktop-operator/node_modules/path-scurry/node_modules/lru-cache/dist/commonjs/index.min.js.map
generated
vendored
Normal file
File diff suppressed because one or more lines are too long
3
desktop-operator/node_modules/path-scurry/node_modules/lru-cache/dist/commonjs/package.json
generated
vendored
Normal file
@@ -0,0 +1,3 @@
{
  "type": "commonjs"
}
1277
desktop-operator/node_modules/path-scurry/node_modules/lru-cache/dist/esm/index.d.ts
generated
vendored
Normal file
File diff suppressed because it is too large
1
desktop-operator/node_modules/path-scurry/node_modules/lru-cache/dist/esm/index.d.ts.map
generated
vendored
Normal file
File diff suppressed because one or more lines are too long
1542
desktop-operator/node_modules/path-scurry/node_modules/lru-cache/dist/esm/index.js
generated
vendored
Normal file
File diff suppressed because it is too large
1
desktop-operator/node_modules/path-scurry/node_modules/lru-cache/dist/esm/index.js.map
generated
vendored
Normal file
File diff suppressed because one or more lines are too long
2
desktop-operator/node_modules/path-scurry/node_modules/lru-cache/dist/esm/index.min.js
generated
vendored
Normal file
File diff suppressed because one or more lines are too long
7
desktop-operator/node_modules/path-scurry/node_modules/lru-cache/dist/esm/index.min.js.map
generated
vendored
Normal file
File diff suppressed because one or more lines are too long
3
desktop-operator/node_modules/path-scurry/node_modules/lru-cache/dist/esm/package.json
generated
vendored
Normal file
@@ -0,0 +1,3 @@
{
  "type": "module"
}
116
desktop-operator/node_modules/path-scurry/node_modules/lru-cache/package.json
generated
vendored
Normal file
@@ -0,0 +1,116 @@
{
  "name": "lru-cache",
  "publishConfig": {
    "tag": "legacy-v10"
  },
  "description": "A cache object that deletes the least-recently-used items.",
  "version": "10.4.3",
  "author": "Isaac Z. Schlueter <i@izs.me>",
  "keywords": [
    "mru",
    "lru",
    "cache"
  ],
  "sideEffects": false,
  "scripts": {
    "build": "npm run prepare",
    "prepare": "tshy && bash fixup.sh",
    "pretest": "npm run prepare",
    "presnap": "npm run prepare",
    "test": "tap",
    "snap": "tap",
    "preversion": "npm test",
    "postversion": "npm publish",
    "prepublishOnly": "git push origin --follow-tags",
    "format": "prettier --write .",
    "typedoc": "typedoc --tsconfig ./.tshy/esm.json ./src/*.ts",
    "benchmark-results-typedoc": "bash scripts/benchmark-results-typedoc.sh",
    "prebenchmark": "npm run prepare",
    "benchmark": "make -C benchmark",
    "preprofile": "npm run prepare",
    "profile": "make -C benchmark profile"
  },
  "main": "./dist/commonjs/index.js",
  "types": "./dist/commonjs/index.d.ts",
  "tshy": {
    "exports": {
      ".": "./src/index.ts",
      "./min": {
        "import": {
          "types": "./dist/esm/index.d.ts",
          "default": "./dist/esm/index.min.js"
        },
        "require": {
          "types": "./dist/commonjs/index.d.ts",
          "default": "./dist/commonjs/index.min.js"
        }
      }
    }
  },
  "repository": {
    "type": "git",
    "url": "git://github.com/isaacs/node-lru-cache.git"
  },
  "devDependencies": {
    "@types/node": "^20.2.5",
    "@types/tap": "^15.0.6",
    "benchmark": "^2.1.4",
    "esbuild": "^0.17.11",
    "eslint-config-prettier": "^8.5.0",
    "marked": "^4.2.12",
    "mkdirp": "^2.1.5",
    "prettier": "^2.6.2",
    "tap": "^20.0.3",
    "tshy": "^2.0.0",
    "tslib": "^2.4.0",
    "typedoc": "^0.25.3",
    "typescript": "^5.2.2"
  },
  "license": "ISC",
  "files": [
    "dist"
  ],
  "prettier": {
    "semi": false,
    "printWidth": 70,
    "tabWidth": 2,
    "useTabs": false,
    "singleQuote": true,
    "jsxSingleQuote": false,
    "bracketSameLine": true,
    "arrowParens": "avoid",
    "endOfLine": "lf"
  },
  "tap": {
    "node-arg": [
      "--expose-gc"
    ],
    "plugin": [
      "@tapjs/clock"
    ]
  },
  "exports": {
    ".": {
      "import": {
        "types": "./dist/esm/index.d.ts",
        "default": "./dist/esm/index.js"
      },
      "require": {
        "types": "./dist/commonjs/index.d.ts",
        "default": "./dist/commonjs/index.js"
      }
    },
    "./min": {
      "import": {
        "types": "./dist/esm/index.d.ts",
        "default": "./dist/esm/index.min.js"
      },
      "require": {
        "types": "./dist/commonjs/index.d.ts",
        "default": "./dist/commonjs/index.min.js"
      }
    }
  },
  "type": "module",
  "module": "./dist/esm/index.js"
}
15
desktop-operator/node_modules/path-scurry/node_modules/minipass/LICENSE
generated
vendored
Normal file
@@ -0,0 +1,15 @@
The ISC License

Copyright (c) 2017-2023 npm, Inc., Isaac Z. Schlueter, and Contributors

Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.

THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR
IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
825
desktop-operator/node_modules/path-scurry/node_modules/minipass/README.md
generated
vendored
Normal file
@@ -0,0 +1,825 @@
# minipass

A _very_ minimal implementation of a [PassThrough
stream](https://nodejs.org/api/stream.html#stream_class_stream_passthrough)

[It's very
fast](https://docs.google.com/spreadsheets/d/1K_HR5oh3r80b8WVMWCPPjfuWXUgfkmhlX7FGI6JJ8tY/edit?usp=sharing)
for objects, strings, and buffers.

Supports `pipe()`ing (including multi-`pipe()` and backpressure
transmission), buffering data until either a `data` event handler
or `pipe()` is added (so you don't lose the first chunk), and
most other cases where PassThrough is a good idea.

There is a `read()` method, but it's much more efficient to
consume data from this stream via `'data'` events or by calling
`pipe()` into some other stream. Calling `read()` requires the
buffer to be flattened in some cases, which requires copying
memory.

If you set `objectMode: true` in the options, then whatever is
written will be emitted. Otherwise, it'll do a minimal amount of
Buffer copying to ensure proper Streams semantics when `read(n)`
is called.

`objectMode` can only be set at instantiation. Attempting to
write something other than a String or Buffer without having set
`objectMode` in the options will throw an error.

This is not a `through` or `through2` stream. It doesn't
transform the data, it just passes it right through. If you want
to transform the data, extend the class, and override the
`write()` method. Once you're done transforming the data however
you want, call `super.write()` with the transform output.

For some examples of streams that extend Minipass in various
ways, check out:

- [minizlib](http://npm.im/minizlib)
- [fs-minipass](http://npm.im/fs-minipass)
- [tar](http://npm.im/tar)
- [minipass-collect](http://npm.im/minipass-collect)
- [minipass-flush](http://npm.im/minipass-flush)
- [minipass-pipeline](http://npm.im/minipass-pipeline)
- [tap](http://npm.im/tap)
- [tap-parser](http://npm.im/tap-parser)
- [treport](http://npm.im/treport)
- [minipass-fetch](http://npm.im/minipass-fetch)
- [pacote](http://npm.im/pacote)
- [make-fetch-happen](http://npm.im/make-fetch-happen)
- [cacache](http://npm.im/cacache)
- [ssri](http://npm.im/ssri)
- [npm-registry-fetch](http://npm.im/npm-registry-fetch)
- [minipass-json-stream](http://npm.im/minipass-json-stream)
- [minipass-sized](http://npm.im/minipass-sized)

## Usage in TypeScript

The `Minipass` class takes three type template definitions:

- `RType` the type being read, which defaults to `Buffer`. If
  `RType` is `string`, then the constructor _must_ get an options
  object specifying either an `encoding` or `objectMode: true`.
  If it's anything other than `string` or `Buffer`, then it
  _must_ get an options object specifying `objectMode: true`.
- `WType` the type being written. If `RType` is `Buffer` or
  `string`, then this defaults to `ContiguousData` (Buffer,
  string, ArrayBuffer, or ArrayBufferView). Otherwise, it
  defaults to `RType`.
- `Events` type mapping event names to the arguments emitted
  with that event, which extends `Minipass.Events`.

To declare types for custom events in subclasses, extend the
third parameter with your own event signatures. For example:

```ts
import { Minipass } from 'minipass'

// an NDJSON stream that emits 'jsonError' when it can't stringify
export interface Events extends Minipass.Events {
  jsonError: [e: Error]
}

export class NDJSONStream extends Minipass<string, any, Events> {
  constructor() {
    super({ objectMode: true })
  }

  // data is type `any` because that's WType
  write(data, encoding, cb) {
    try {
      const json = JSON.stringify(data)
      return super.write(json + '\n', encoding, cb)
    } catch (er) {
      if (!(er instanceof Error)) {
        er = Object.assign(new Error('json stringify failed'), {
          cause: er,
        })
      }
      // trying to emit with something OTHER than an error will
      // fail, because we declared the event arguments type.
      this.emit('jsonError', er)
    }
  }
}

const s = new NDJSONStream()
s.on('jsonError', e => {
  // here, TS knows that e is an Error
})
```

Emitting/handling events that aren't declared in this way is
fine, but the arguments will be typed as `unknown`.

## Differences from Node.js Streams

There are several things that make Minipass streams different
from (and in some ways superior to) Node.js core streams.

Please read these caveats if you are familiar with node-core
streams and intend to use Minipass streams in your programs.

You can avoid most of these differences entirely (for a very
small performance penalty) by setting `{async: true}` in the
constructor options.

### Timing

Minipass streams are designed to support synchronous use-cases.
Thus, data is emitted as soon as it is available, always. It is
buffered until read, but no longer. Another way to look at it is
that Minipass streams are exactly as synchronous as the logic
that writes into them.

This can be surprising if your code relies on
`PassThrough.write()` always providing data on the next tick
rather than the current one, or being able to call `resume()` and
not have the entire buffer disappear immediately.

However, without this synchronicity guarantee, there would be no
way for Minipass to achieve the speeds it does, or support the
synchronous use cases that it does. Simply put, waiting takes
time.

This non-deferring approach makes Minipass streams much easier to
reason about, especially in the context of Promises and other
flow-control mechanisms.

Example:

```js
// hybrid module, either works
import { Minipass } from 'minipass'
// or:
const { Minipass } = require('minipass')

const stream = new Minipass()
stream.on('data', () => console.log('data event'))
console.log('before write')
stream.write('hello')
console.log('after write')
// output:
// before write
// data event
// after write
```

### Exception: Async Opt-In

If you wish to have a Minipass stream with behavior that more
closely mimics Node.js core streams, you can set the stream in
async mode either by setting `async: true` in the constructor
options, or by setting `stream.async = true` later on.

```js
// hybrid module, either works
import { Minipass } from 'minipass'
// or:
const { Minipass } = require('minipass')

const asyncStream = new Minipass({ async: true })
asyncStream.on('data', () => console.log('data event'))
console.log('before write')
asyncStream.write('hello')
console.log('after write')
// output:
// before write
// after write
// data event <-- this is deferred until the next tick
```

Switching _out_ of async mode is unsafe, as it could cause data
corruption, and so is not enabled. Example:

```js
import { Minipass } from 'minipass'
const stream = new Minipass({ encoding: 'utf8' })
stream.on('data', chunk => console.log(chunk))
stream.async = true
console.log('before writes')
stream.write('hello')
setStreamSyncAgainSomehow(stream) // <-- this doesn't actually exist!
stream.write('world')
console.log('after writes')
// hypothetical output would be:
// before writes
// world
// after writes
// hello
// NOT GOOD!
```

To avoid this problem, once set into async mode, any attempt to
make the stream sync again will be ignored.

```js
const { Minipass } = require('minipass')
const stream = new Minipass({ encoding: 'utf8' })
stream.on('data', chunk => console.log(chunk))
stream.async = true
console.log('before writes')
stream.write('hello')
stream.async = false // <-- no-op, stream already async
stream.write('world')
console.log('after writes')
// actual output:
// before writes
// after writes
// hello
// world
```

### No High/Low Water Marks

Node.js core streams will optimistically fill up a buffer,
returning `true` on all writes until the limit is hit, even if
the data has nowhere to go. Then, they will not attempt to draw
more data in until the buffer size dips below a minimum value.

Minipass streams are much simpler. The `write()` method will
return `true` if the data has somewhere to go (which is to say,
given the timing guarantees, that the data is already there by
the time `write()` returns).

If the data has nowhere to go, then `write()` returns false, and
the data sits in a buffer, to be drained out immediately as soon
as anyone consumes it.

Since nothing is ever buffered unnecessarily, there is much less
copying data, and less bookkeeping about buffer capacity levels.
|
||||
|
### Hazards of Buffering (or: Why Minipass Is So Fast)

Since data written to a Minipass stream is immediately written
all the way through the pipeline, and `write()` always returns
true/false based on whether the data was fully flushed,
backpressure is communicated immediately to the upstream caller.
This minimizes buffering.

Consider this case:

```js
const { PassThrough } = require('stream')
const p1 = new PassThrough({ highWaterMark: 1024 })
const p2 = new PassThrough({ highWaterMark: 1024 })
const p3 = new PassThrough({ highWaterMark: 1024 })
const p4 = new PassThrough({ highWaterMark: 1024 })

p1.pipe(p2).pipe(p3).pipe(p4)
p4.on('data', () => console.log('made it through'))

// this returns false and buffers, then writes to p2 on next tick (1)
// p2 returns false and buffers, pausing p1, then writes to p3 on next tick (2)
// p3 returns false and buffers, pausing p2, then writes to p4 on next tick (3)
// p4 returns false and buffers, pausing p3, then emits 'data' and 'drain'
// on next tick (4)
// p3 sees p4's 'drain' event, and calls resume(), emitting 'resume' and
// 'drain' on next tick (5)
// p2 sees p3's 'drain', calls resume(), emits 'resume' and 'drain' on next tick (6)
// p1 sees p2's 'drain', calls resume(), emits 'resume' and 'drain' on next
// tick (7)

p1.write(Buffer.alloc(2048)) // returns false
```

Along the way, the data was buffered and deferred at each stage,
and multiple event deferrals happened, for an unblocked pipeline
where it was perfectly safe to write all the way through!

Furthermore, setting a `highWaterMark` of `1024` might lead
someone reading the code to think an advisory maximum of 1KiB is
being set for the pipeline. However, the actual advisory
buffering level is the _sum_ of `highWaterMark` values, since
each one has its own bucket.

Consider the Minipass case:

```js
const m1 = new Minipass()
const m2 = new Minipass()
const m3 = new Minipass()
const m4 = new Minipass()

m1.pipe(m2).pipe(m3).pipe(m4)
m4.on('data', () => console.log('made it through'))

// m1 is flowing, so it writes the data to m2 immediately
// m2 is flowing, so it writes the data to m3 immediately
// m3 is flowing, so it writes the data to m4 immediately
// m4 is flowing, so it fires the 'data' event immediately, returns true
// m4's write returned true, so m3 is still flowing, returns true
// m3's write returned true, so m2 is still flowing, returns true
// m2's write returned true, so m1 is still flowing, returns true
// No event deferrals or buffering along the way!

m1.write(Buffer.alloc(2048)) // returns true
```

It is extremely unlikely that you _don't_ want to buffer any data
written, or _ever_ buffer data that can be flushed all the way
through. Neither node-core streams nor Minipass ever fail to
buffer written data, but node-core streams do a lot of
unnecessary buffering and pausing.

As always, the faster implementation is the one that does less
stuff and waits less time to do it.

### Immediately emit `end` for empty streams (when not paused)

If a stream is not paused, and `end()` is called before writing
any data into it, then it will emit `end` immediately.

If you have logic that occurs on the `end` event which you don't
want to potentially happen immediately (for example, closing file
descriptors, moving on to the next entry in an archive parse
stream, etc.) then be sure to call `stream.pause()` on creation,
and then `stream.resume()` once you are ready to respond to the
`end` event.

However, this is _usually_ not a problem because:

### Emit `end` When Asked

One hazard of immediately emitting `'end'` is that you may not
yet have had a chance to add a listener. In order to avoid this
hazard, Minipass streams safely re-emit the `'end'` event if a
new listener is added after `'end'` has been emitted.

Ie, if you do `stream.on('end', someFunction)`, and the stream
has already emitted `end`, then it will call the handler right
away. (You can think of this somewhat like attaching a new
`.then(fn)` to a previously-resolved Promise.)

To prevent calling handlers multiple times when they would not
expect multiple ends to occur, all listeners are removed from the
`'end'` event whenever it is emitted.

### Emit `error` When Asked

The most recent error object passed to the `'error'` event is
stored on the stream. If a new `'error'` event handler is added,
and an error was previously emitted, then the event handler will
be called immediately (or on `process.nextTick` in the case of
async streams).

This makes it much more difficult to end up trying to interact
with a broken stream, if the error handler is added after an
error was previously emitted.

### Impact of "immediate flow" on Tee-streams

A "tee stream" is a stream piping to multiple destinations:

```js
const tee = new Minipass()
tee.pipe(dest1)
tee.pipe(dest2)
tee.write('foo') // goes to both destinations
```

Since Minipass streams _immediately_ process any pending data
through the pipeline when a new pipe destination is added, this
can have surprising effects, especially when a stream comes in
from some other function and may or may not have data in its
buffer.

```js
// WARNING! WILL LOSE DATA!
const src = new Minipass()
src.write('foo')
src.pipe(dest1) // 'foo' chunk flows to dest1 immediately, and is gone
src.pipe(dest2) // gets nothing!
```

One solution is to create a dedicated tee-stream junction that
pipes to both locations, and then pipe to _that_ instead.

```js
// Safe example: tee to both places
const src = new Minipass()
src.write('foo')
const tee = new Minipass()
tee.pipe(dest1)
tee.pipe(dest2)
src.pipe(tee) // tee gets 'foo', pipes to both locations
```

The same caveat applies to `on('data')` event listeners. The
first one added will _immediately_ receive all of the data,
leaving nothing for the second:

```js
// WARNING! WILL LOSE DATA!
const src = new Minipass()
src.write('foo')
src.on('data', handler1) // receives 'foo' right away
src.on('data', handler2) // nothing to see here!
```

A dedicated tee-stream can be used in this case as well:

```js
// Safe example: tee to both data handlers
const src = new Minipass()
src.write('foo')
const tee = new Minipass()
tee.on('data', handler1)
tee.on('data', handler2)
src.pipe(tee)
```

All of the hazards in this section are avoided by setting `{
async: true }` in the Minipass constructor, or by setting
`stream.async = true` afterwards. Note that this does add some
overhead, so it should only be done in cases where you are willing
to lose a bit of performance in order to avoid having to refactor
program logic.

## USAGE

It's a stream! Use it like a stream and it'll most likely do what
you want.

```js
import { Minipass } from 'minipass'
const mp = new Minipass(options) // options is optional
mp.write('foo')
mp.pipe(someOtherStream)
mp.end('bar')
```

### OPTIONS

- `encoding` How would you like the data coming _out_ of the
  stream to be encoded? Accepts any values that can be passed to
  `Buffer.toString()`.
- `objectMode` Emit data exactly as it comes in. This will be
  flipped on by default if you `write()` something other than a
  string or Buffer at any point. Setting `objectMode: true` will
  prevent setting any encoding value.
- `async` Defaults to `false`. Set to `true` to defer data
  emission until the next tick. This reduces performance slightly,
  but makes Minipass streams use timing behavior closer to Node
  core streams. See [Timing](#timing) for more details.
- `signal` An `AbortSignal` that will cause the stream to unhook
  itself from everything and become as inert as possible. Note
  that providing a `signal` parameter will make `'error'` events
  no longer throw if they are unhandled, but they will still be
  emitted to handlers if any are attached.

### API

Implements the user-facing portions of Node.js's `Readable` and
`Writable` streams.

### Methods

- `write(chunk, [encoding], [callback])` - Put data in. (Note
  that, in the base Minipass class, the same data will come out.)
  Returns `false` if the stream will buffer the next write, or
  `true` if it's still in "flowing" mode.
- `end([chunk, [encoding]], [callback])` - Signal that you have
  no more data to write. This will queue an `end` event to be
  fired when all the data has been consumed.
- `pause()` - No more data for a while, please. This also
  prevents `end` from being emitted for empty streams until the
  stream is resumed.
- `resume()` - Resume the stream. If there's data in the buffer,
  it is all discarded. Any buffered events are immediately
  emitted.
- `pipe(dest, [options])` - Send all output to the stream
  provided. When data is emitted, it is immediately written to
  any and all pipe destinations. (Or written on next tick in
  `async` mode.)
  - `options.end` - Boolean, end the destination stream when the
    source stream ends. Default `true`.
  - `options.proxyErrors` - Boolean, proxy `error` events from
    the source stream to the destination stream. Note that errors
    are _not_ proxied after the pipeline terminates, either due
    to the source emitting `'end'` or manually unpiping with
    `src.unpipe(dest)`. Default `false`.
- `unpipe(dest)` - Stop piping to the destination stream. This is
  immediate, meaning that any asynchronously queued data will
  _not_ make it to the destination when running in `async` mode.
- `on(ev, fn)`, `emit(ev, ...data)` - Minipass streams are
  EventEmitters. Some events are given special treatment,
  however. (See below under "events".)
- `promise()` - Returns a Promise that resolves when the stream
  emits `end`, or rejects if the stream emits `error`.
- `collect()` - Return a Promise that resolves on `end` with an
  array containing each chunk of data that was emitted, or
  rejects if the stream emits `error`. Note that this consumes
  the stream data.
- `concat()` - Same as `collect()`, but concatenates the data
  into a single Buffer object. Will reject the returned promise
  if the stream is in objectMode, or if it goes into objectMode
  by the end of the data.
- `read(n)` - Consume `n` bytes of data out of the buffer. If `n`
  is not provided, then consume all of it. If `n` bytes are not
  available, then it returns null. **Note** consuming streams in
  this way is less efficient, and can lead to unnecessary Buffer
  copying.
- `destroy([er])` - Destroy the stream. If an error is provided,
  then an `'error'` event is emitted. If the stream has a
  `close()` method, and has not emitted a `'close'` event yet,
  then `stream.close()` will be called. Any Promises returned by
  `.promise()`, `.collect()` or `.concat()` will be rejected.
  After being destroyed, writing to the stream will emit an
  error. No more data will be emitted if the stream is destroyed,
  even if it was previously buffered.

### Properties

- `bufferLength` Read-only. Total number of bytes buffered, or in
  the case of objectMode, the total number of objects.
- `encoding` Read-only. The encoding that has been set.
- `flowing` Read-only. Boolean indicating whether a chunk written
  to the stream will be immediately emitted.
- `emittedEnd` Read-only. Boolean indicating whether the end-ish
  events (ie, `end`, `prefinish`, `finish`) have been emitted.
  Note that listening on any end-ish event will immediately
  re-emit it if it has already been emitted.
- `writable` Whether the stream is writable. Default `true`. Set
  to `false` when `end()` is called.
- `readable` Whether the stream is readable. Default `true`.
- `pipes` An array of Pipe objects referencing streams that this
  stream is piping into.
- `destroyed` A getter that indicates whether the stream was
  destroyed.
- `paused` True if the stream has been explicitly paused,
  otherwise false.
- `objectMode` Indicates whether the stream is in `objectMode`.
- `aborted` Read-only property set when the `AbortSignal`
  dispatches an `abort` event.

### Events

- `data` Emitted when there's data to read. Argument is the data
  to read. This is never emitted while not flowing. If a listener
  is attached, that will resume the stream.
- `end` Emitted when there's no more data to read. This will be
  emitted immediately for empty streams when `end()` is called.
  If a listener is attached, and `end` was already emitted, then
  it will be emitted again. All listeners are removed when `end`
  is emitted.
- `prefinish` An end-ish event that follows the same logic as
  `end` and is emitted in the same conditions where `end` is
  emitted. Emitted after `'end'`.
- `finish` An end-ish event that follows the same logic as `end`
  and is emitted in the same conditions where `end` is emitted.
  Emitted after `'prefinish'`.
- `close` An indication that an underlying resource has been
  released. Minipass does not emit this event, but will defer it
  until after `end` has been emitted, since it throws off some
  stream libraries otherwise.
- `drain` Emitted when the internal buffer empties, and it is
  again suitable to `write()` into the stream.
- `readable` Emitted when data is buffered and ready to be read
  by a consumer.
- `resume` Emitted when stream changes state from buffering to
  flowing mode. (Ie, when `resume` is called, `pipe` is called,
  or a `data` event listener is added.)

### Static Methods

- `Minipass.isStream(stream)` Returns `true` if the argument is a
  stream, and `false` otherwise. To be considered a stream, the
  object must be either an instance of Minipass, or an
  EventEmitter that has either a `pipe()` method, or both
  `write()` and `end()` methods. (Pretty much any stream in
  node-land will return `true` for this.)

## EXAMPLES

Here are some examples of things you can do with Minipass
streams.

### simple "are you done yet" promise

```js
mp.promise().then(
  () => {
    // stream is finished
  },
  er => {
    // stream emitted an error
  }
)
```

### collecting

```js
mp.collect().then(all => {
  // all is an array of all the data emitted
  // encoding is supported in this case, so
  // the result will be a collection of strings if
  // an encoding is specified, or buffers/objects if not.
  //
  // In an async function, you may do
  // const data = await stream.collect()
})
```

### collecting into a single blob

This is a bit slower because it concatenates the data into one
chunk for you, but if you're going to do it yourself anyway, it's
convenient this way:

```js
mp.concat().then(onebigchunk => {
  // onebigchunk is a string if the stream
  // had an encoding set, or a buffer otherwise.
})
```

### iteration

You can iterate over streams synchronously or asynchronously in
platforms that support it.

Synchronous iteration will end when the currently available data
is consumed, even if the `end` event has not been reached. In
string and buffer mode, the data is concatenated, so unless
multiple writes are occurring in the same tick as the `read()`,
sync iteration loops will generally only have a single iteration.

To consume chunks in this way exactly as they have been written,
with no flattening, create the stream with the `{ objectMode:
true }` option.

```js
const mp = new Minipass({ objectMode: true })
mp.write('a')
mp.write('b')
for (let letter of mp) {
  console.log(letter) // a, b
}
mp.write('c')
mp.write('d')
for (let letter of mp) {
  console.log(letter) // c, d
}
mp.write('e')
mp.end()
for (let letter of mp) {
  console.log(letter) // e
}
for (let letter of mp) {
  console.log(letter) // nothing
}
```

Asynchronous iteration will continue until the end event is reached,
consuming all of the data.

```js
const mp = new Minipass({ encoding: 'utf8' })

// some source of some data
let i = 5
const inter = setInterval(() => {
  if (i-- > 0) mp.write(Buffer.from('foo\n', 'utf8'))
  else {
    mp.end()
    clearInterval(inter)
  }
}, 100)

// consume the data with asynchronous iteration
async function consume() {
  for await (let chunk of mp) {
    console.log(chunk)
  }
  return 'ok'
}

consume().then(res => console.log(res))
// logs `foo\n` 5 times, and then `ok`
```

### subclass that `console.log()`s everything written into it

```js
class Logger extends Minipass {
  write(chunk, encoding, callback) {
    console.log('WRITE', chunk, encoding)
    return super.write(chunk, encoding, callback)
  }
  end(chunk, encoding, callback) {
    console.log('END', chunk, encoding)
    return super.end(chunk, encoding, callback)
  }
}

someSource.pipe(new Logger()).pipe(someDest)
```

### same thing, but using an inline anonymous class

```js
// js classes are fun
someSource
  .pipe(
    new (class extends Minipass {
      emit(ev, ...data) {
        // let's also log events, because debugging some weird thing
        console.log('EMIT', ev)
        return super.emit(ev, ...data)
      }
      write(chunk, encoding, callback) {
        console.log('WRITE', chunk, encoding)
        return super.write(chunk, encoding, callback)
      }
      end(chunk, encoding, callback) {
        console.log('END', chunk, encoding)
        return super.end(chunk, encoding, callback)
      }
    })()
  )
  .pipe(someDest)
```

### subclass that defers 'end' for some reason

```js
class SlowEnd extends Minipass {
  emit(ev, ...args) {
    if (ev === 'end') {
      console.log('going to end, hold on a sec')
      setTimeout(() => {
        console.log('ok, ready to end now')
        super.emit('end', ...args)
      }, 100)
      return true
    } else {
      return super.emit(ev, ...args)
    }
  }
}
```

### transform that creates newline-delimited JSON

```js
class NDJSONEncode extends Minipass {
  write(obj, cb) {
    try {
      // JSON.stringify can throw, emit an error on that
      return super.write(JSON.stringify(obj) + '\n', 'utf8', cb)
    } catch (er) {
      this.emit('error', er)
    }
  }
  end(obj, cb) {
    if (typeof obj === 'function') {
      cb = obj
      obj = undefined
    }
    if (obj !== undefined) {
      this.write(obj)
    }
    return super.end(cb)
  }
}
```

### transform that parses newline-delimited JSON

```js
class NDJSONDecode extends Minipass {
  constructor(options) {
    // always be in object mode, as far as Minipass is concerned
    super({ objectMode: true })
    this._jsonBuffer = ''
  }
  write(chunk, encoding, cb) {
    if (
      typeof chunk === 'string' &&
      typeof encoding === 'string' &&
      encoding !== 'utf8'
    ) {
      chunk = Buffer.from(chunk, encoding).toString()
    } else if (Buffer.isBuffer(chunk)) {
      chunk = chunk.toString()
    }
    if (typeof encoding === 'function') {
      cb = encoding
    }
    const jsonData = (this._jsonBuffer + chunk).split('\n')
    this._jsonBuffer = jsonData.pop()
    for (let i = 0; i < jsonData.length; i++) {
      try {
        // JSON.parse can throw, emit an error on that
        super.write(JSON.parse(jsonData[i]))
      } catch (er) {
        this.emit('error', er)
        continue
      }
    }
    if (cb) cb()
  }
}
```

549
desktop-operator/node_modules/path-scurry/node_modules/minipass/dist/commonjs/index.d.ts
generated
vendored
Normal file
@@ -0,0 +1,549 @@
/// <reference types="node" />
import { EventEmitter } from 'node:events';
import { StringDecoder } from 'node:string_decoder';
/**
 * Same as StringDecoder, but exposing the `lastNeed` flag on the type
 */
type SD = StringDecoder & {
    lastNeed: boolean;
};
export type { SD, Pipe, PipeProxyErrors };
/**
 * Return true if the argument is a Minipass stream, Node stream, or something
 * else that Minipass can interact with.
 */
export declare const isStream: (s: any) => s is NodeJS.WriteStream | NodeJS.ReadStream | Minipass<any, any, any> | (NodeJS.ReadStream & {
    fd: number;
}) | (EventEmitter & {
    pause(): any;
    resume(): any;
    pipe(...destArgs: any[]): any;
}) | (NodeJS.WriteStream & {
    fd: number;
}) | (EventEmitter & {
    end(): any;
    write(chunk: any, ...args: any[]): any;
});
/**
 * Return true if the argument is a valid {@link Minipass.Readable}
 */
export declare const isReadable: (s: any) => s is Minipass.Readable;
/**
 * Return true if the argument is a valid {@link Minipass.Writable}
 */
export declare const isWritable: (s: any) => s is Minipass.Writable;
declare const EOF: unique symbol;
declare const MAYBE_EMIT_END: unique symbol;
declare const EMITTED_END: unique symbol;
declare const EMITTING_END: unique symbol;
declare const EMITTED_ERROR: unique symbol;
declare const CLOSED: unique symbol;
declare const READ: unique symbol;
declare const FLUSH: unique symbol;
declare const FLUSHCHUNK: unique symbol;
declare const ENCODING: unique symbol;
declare const DECODER: unique symbol;
declare const FLOWING: unique symbol;
declare const PAUSED: unique symbol;
declare const RESUME: unique symbol;
declare const BUFFER: unique symbol;
declare const PIPES: unique symbol;
declare const BUFFERLENGTH: unique symbol;
declare const BUFFERPUSH: unique symbol;
declare const BUFFERSHIFT: unique symbol;
declare const OBJECTMODE: unique symbol;
declare const DESTROYED: unique symbol;
declare const ERROR: unique symbol;
declare const EMITDATA: unique symbol;
declare const EMITEND: unique symbol;
declare const EMITEND2: unique symbol;
declare const ASYNC: unique symbol;
declare const ABORT: unique symbol;
declare const ABORTED: unique symbol;
declare const SIGNAL: unique symbol;
declare const DATALISTENERS: unique symbol;
declare const DISCARDED: unique symbol;
/**
 * Options that may be passed to stream.pipe()
 */
export interface PipeOptions {
    /**
     * end the destination stream when the source stream ends
     */
    end?: boolean;
    /**
     * proxy errors from the source stream to the destination stream
     */
    proxyErrors?: boolean;
}
/**
 * Internal class representing a pipe to a destination stream.
 *
 * @internal
 */
declare class Pipe<T extends unknown> {
    src: Minipass<T>;
    dest: Minipass<any, T>;
    opts: PipeOptions;
    ondrain: () => any;
    constructor(src: Minipass<T>, dest: Minipass.Writable, opts: PipeOptions);
    unpipe(): void;
    proxyErrors(_er: any): void;
    end(): void;
}
/**
 * Internal class representing a pipe to a destination stream where
 * errors are proxied.
 *
 * @internal
 */
declare class PipeProxyErrors<T> extends Pipe<T> {
    unpipe(): void;
    constructor(src: Minipass<T>, dest: Minipass.Writable, opts: PipeOptions);
}
export declare namespace Minipass {
    /**
     * Encoding used to create a stream that outputs strings rather than
     * Buffer objects.
     */
    export type Encoding = BufferEncoding | 'buffer' | null;
    /**
     * Any stream that Minipass can pipe into
     */
    export type Writable = Minipass<any, any, any> | NodeJS.WriteStream | (NodeJS.WriteStream & {
        fd: number;
    }) | (EventEmitter & {
        end(): any;
        write(chunk: any, ...args: any[]): any;
    });
    /**
     * Any stream that can be read from
     */
    export type Readable = Minipass<any, any, any> | NodeJS.ReadStream | (NodeJS.ReadStream & {
        fd: number;
    }) | (EventEmitter & {
        pause(): any;
        resume(): any;
        pipe(...destArgs: any[]): any;
    });
    /**
     * Utility type that can be iterated sync or async
     */
    export type DualIterable<T> = Iterable<T> & AsyncIterable<T>;
    type EventArguments = Record<string | symbol, unknown[]>;
    /**
     * The listing of events that a Minipass class can emit.
     * Extend this when extending the Minipass class, and pass as
     * the third template argument. The key is the name of the event,
     * and the value is the argument list.
     *
     * Any undeclared events will still be allowed, but the handler will get
     * arguments as `unknown[]`.
     */
    export interface Events<RType extends any = Buffer> extends EventArguments {
        readable: [];
        data: [chunk: RType];
        error: [er: unknown];
        abort: [reason: unknown];
        drain: [];
        resume: [];
        end: [];
        finish: [];
        prefinish: [];
        close: [];
        [DESTROYED]: [er?: unknown];
        [ERROR]: [er: unknown];
    }
    /**
     * String or buffer-like data that can be joined and sliced
     */
    export type ContiguousData = Buffer | ArrayBufferLike | ArrayBufferView | string;
    export type BufferOrString = Buffer | string;
    /**
     * Options passed to the Minipass constructor.
     */
    export type SharedOptions = {
        /**
         * Defer all data emission and other events until the end of the
         * current tick, similar to Node core streams
         */
        async?: boolean;
        /**
         * A signal which will abort the stream
         */
        signal?: AbortSignal;
        /**
         * Output string encoding. Set to `null` or `'buffer'` (or omit) to
         * emit Buffer objects rather than strings.
         *
         * Conflicts with `objectMode`
         */
        encoding?: BufferEncoding | null | 'buffer';
        /**
         * Output data exactly as it was written, supporting non-buffer/string
         * data (such as arbitrary objects, falsey values, etc.)
         *
         * Conflicts with `encoding`
         */
        objectMode?: boolean;
    };
    /**
     * Options for a string encoded output
     */
    export type EncodingOptions = SharedOptions & {
        encoding: BufferEncoding;
        objectMode?: false;
    };
    /**
     * Options for contiguous data buffer output
     */
    export type BufferOptions = SharedOptions & {
        encoding?: null | 'buffer';
        objectMode?: false;
    };
    /**
     * Options for objectMode arbitrary output
     */
    export type ObjectModeOptions = SharedOptions & {
        objectMode: true;
        encoding?: null;
    };
    /**
     * Utility type to determine allowed options based on read type
     */
    export type Options<T> = ObjectModeOptions | (T extends string ? EncodingOptions : T extends Buffer ? BufferOptions : SharedOptions);
    export {};
}
/**
 * Main export, the Minipass class
 *
 * `RType` is the type of data emitted, defaults to Buffer
 *
 * `WType` is the type of data to be written, if RType is buffer or string,
 * then any {@link Minipass.ContiguousData} is allowed.
 *
 * `Events` is the set of event handler signatures that this object
 * will emit, see {@link Minipass.Events}
 */
export declare class Minipass<RType extends unknown = Buffer, WType extends unknown = RType extends Minipass.BufferOrString ? Minipass.ContiguousData : RType, Events extends Minipass.Events<RType> = Minipass.Events<RType>> extends EventEmitter implements Minipass.DualIterable<RType> {
    [FLOWING]: boolean;
    [PAUSED]: boolean;
    [PIPES]: Pipe<RType>[];
    [BUFFER]: RType[];
    [OBJECTMODE]: boolean;
    [ENCODING]: BufferEncoding | null;
    [ASYNC]: boolean;
    [DECODER]: SD | null;
    [EOF]: boolean;
    [EMITTED_END]: boolean;
    [EMITTING_END]: boolean;
    [CLOSED]: boolean;
    [EMITTED_ERROR]: unknown;
    [BUFFERLENGTH]: number;
    [DESTROYED]: boolean;
    [SIGNAL]?: AbortSignal;
    [ABORTED]: boolean;
    [DATALISTENERS]: number;
    [DISCARDED]: boolean;
    /**
     * true if the stream can be written
     */
    writable: boolean;
    /**
     * true if the stream can be read
     */
    readable: boolean;
    /**
     * If `RType` is Buffer, then options do not need to be provided.
     * Otherwise, an options object must be provided to specify either
     * {@link Minipass.SharedOptions.objectMode} or
     * {@link Minipass.SharedOptions.encoding}, as appropriate.
     */
    constructor(...args: [Minipass.ObjectModeOptions] | (RType extends Buffer ? [] | [Minipass.Options<RType>] : [Minipass.Options<RType>]));
    /**
     * The amount of data stored in the buffer waiting to be read.
     *
     * For Buffer streams, this will be the total byte length.
     * For string encoding streams, this will be the string character length,
     * according to JavaScript's `string.length` logic.
     * For objectMode streams, this is a count of the items waiting to be
     * emitted.
     */
    get bufferLength(): number;
    /**
     * The `BufferEncoding` currently in use, or `null`
     */
    get encoding(): BufferEncoding | null;
    /**
     * @deprecated - This is a read only property
     */
    set encoding(_enc: BufferEncoding | null);
    /**
     * @deprecated - Encoding may only be set at instantiation time
     */
    setEncoding(_enc: Minipass.Encoding): void;
    /**
     * True if this is an objectMode stream
     */
    get objectMode(): boolean;
    /**
     * @deprecated - This is a read-only property
     */
    set objectMode(_om: boolean);
    /**
     * true if this is an async stream
     */
    get ['async'](): boolean;
    /**
     * Set to true to make this stream async.
     *
     * Once set, it cannot be unset, as this would potentially cause incorrect
     * behavior. Ie, a sync stream can be made async, but an async stream
     * cannot be safely made sync.
     */
    set ['async'](a: boolean);
    [ABORT](): void;
    /**
     * True if the stream has been aborted.
     */
    get aborted(): boolean;
    /**
     * No-op setter. Stream aborted status is set via the AbortSignal provided
     * in the constructor options.
     */
    set aborted(_: boolean);
|
||||
/**
|
||||
* Write data into the stream
|
||||
*
|
||||
* If the chunk written is a string, and encoding is not specified, then
|
||||
* `utf8` will be assumed. If the stream encoding matches the encoding of
|
||||
* a written string, and the state of the string decoder allows it, then
|
||||
* the string will be passed through to either the output or the internal
|
||||
* buffer without any processing. Otherwise, it will be turned into a
|
||||
* Buffer object for processing into the desired encoding.
|
||||
*
|
||||
* If provided, `cb` function is called immediately before return for
|
||||
* sync streams, or on next tick for async streams, because for this
|
||||
* base class, a chunk is considered "processed" once it is accepted
|
||||
* and either emitted or buffered. That is, the callback does not indicate
|
||||
* that the chunk has been eventually emitted, though of course child
|
||||
* classes can override this function to do whatever processing is required
|
||||
* and call `super.write(...)` only once processing is completed.
|
||||
*/
|
||||
    write(chunk: WType, cb?: () => void): boolean;
    write(chunk: WType, encoding?: Minipass.Encoding, cb?: () => void): boolean;
    /**
     * Low-level explicit read method.
     *
     * In objectMode, the argument is ignored, and one item is returned if
     * available.
     *
     * `n` is the number of bytes (or in the case of encoding streams,
     * characters) to consume. If `n` is not provided, then the entire buffer
     * is returned, or `null` is returned if no data is available.
     *
     * If `n` is greater than the amount of data in the internal buffer,
     * then `null` is returned.
     */
    read(n?: number | null): RType | null;
    [READ](n: number | null, chunk: RType): RType;
    /**
     * End the stream, optionally providing a final write.
     *
     * See {@link Minipass#write} for argument descriptions
     */
    end(cb?: () => void): this;
    end(chunk: WType, cb?: () => void): this;
    end(chunk: WType, encoding?: Minipass.Encoding, cb?: () => void): this;
    [RESUME](): void;
    /**
     * Resume the stream if it is currently in a paused state
     *
     * If called when there are no pipe destinations or `data` event listeners,
     * this will place the stream in a "discarded" state, where all data will
     * be thrown away. The discarded state is removed if a pipe destination or
     * data handler is added, if pause() is called, or if any synchronous or
     * asynchronous iteration is started.
     */
    resume(): void;
    /**
     * Pause the stream
     */
    pause(): void;
    /**
     * true if the stream has been forcibly destroyed
     */
    get destroyed(): boolean;
    /**
     * true if the stream is currently in a flowing state, meaning that
     * any writes will be immediately emitted.
     */
    get flowing(): boolean;
    /**
     * true if the stream is currently in a paused state
     */
    get paused(): boolean;
    [BUFFERPUSH](chunk: RType): void;
    [BUFFERSHIFT](): RType;
    [FLUSH](noDrain?: boolean): void;
    [FLUSHCHUNK](chunk: RType): boolean;
    /**
     * Pipe all data emitted by this stream into the destination provided.
     *
     * Triggers the flow of data.
     */
    pipe<W extends Minipass.Writable>(dest: W, opts?: PipeOptions): W;
    /**
     * Fully unhook a piped destination stream.
     *
     * If the destination stream was the only consumer of this stream (ie,
     * there are no other piped destinations or `'data'` event listeners)
     * then the flow of data will stop until there is another consumer or
     * {@link Minipass#resume} is explicitly called.
     */
    unpipe<W extends Minipass.Writable>(dest: W): void;
    /**
     * Alias for {@link Minipass#on}
     */
    addListener<Event extends keyof Events>(ev: Event, handler: (...args: Events[Event]) => any): this;
    /**
     * Mostly identical to `EventEmitter.on`, with the following
     * behavior differences to prevent data loss and unnecessary hangs:
     *
     * - Adding a 'data' event handler will trigger the flow of data
     *
     * - Adding a 'readable' event handler when there is data waiting to be read
     *   will cause 'readable' to be emitted immediately.
     *
     * - Adding an 'endish' event handler ('end', 'finish', etc.) which has
     *   already passed will cause the event to be emitted immediately and all
     *   handlers removed.
     *
     * - Adding an 'error' event handler after an error has been emitted will
     *   cause the event to be re-emitted immediately with the error previously
     *   raised.
     */
    on<Event extends keyof Events>(ev: Event, handler: (...args: Events[Event]) => any): this;
    /**
     * Alias for {@link Minipass#off}
     */
    removeListener<Event extends keyof Events>(ev: Event, handler: (...args: Events[Event]) => any): this;
    /**
     * Mostly identical to `EventEmitter.off`
     *
     * If a 'data' event handler is removed, and it was the last consumer
     * (ie, there are no pipe destinations or other 'data' event listeners),
     * then the flow of data will stop until there is another consumer or
     * {@link Minipass#resume} is explicitly called.
     */
    off<Event extends keyof Events>(ev: Event, handler: (...args: Events[Event]) => any): this;
    /**
     * Mostly identical to `EventEmitter.removeAllListeners`
     *
     * If all 'data' event handlers are removed, and they were the last consumer
     * (ie, there are no pipe destinations), then the flow of data will stop
     * until there is another consumer or {@link Minipass#resume} is explicitly
     * called.
     */
    removeAllListeners<Event extends keyof Events>(ev?: Event): this;
    /**
     * true if the 'end' event has been emitted
     */
    get emittedEnd(): boolean;
    [MAYBE_EMIT_END](): void;
    /**
     * Mostly identical to `EventEmitter.emit`, with the following
     * behavior differences to prevent data loss and unnecessary hangs:
     *
     * If the stream has been destroyed, and the event is something other
     * than 'close' or 'error', then `false` is returned and no handlers
     * are called.
     *
     * If the event is 'end', and has already been emitted, then the event
     * is ignored. If the stream is in a paused or non-flowing state, then
     * the event will be deferred until data flow resumes. If the stream is
     * async, then handlers will be called on the next tick rather than
     * immediately.
     *
     * If the event is 'close', and 'end' has not yet been emitted, then
     * the event will be deferred until after 'end' is emitted.
     *
     * If the event is 'error', and an AbortSignal was provided for the stream,
     * and there are no listeners, then the event is ignored, matching the
     * behavior of node core streams in the presence of an AbortSignal.
     *
     * If the event is 'finish' or 'prefinish', then all listeners will be
     * removed after emitting the event, to prevent double-firing.
     */
    emit<Event extends keyof Events>(ev: Event, ...args: Events[Event]): boolean;
    [EMITDATA](data: RType): boolean;
    [EMITEND](): boolean;
    [EMITEND2](): boolean;
    /**
     * Return a Promise that resolves to an array of all emitted data once
     * the stream ends.
     */
    collect(): Promise<RType[] & {
        dataLength: number;
    }>;
    /**
     * Return a Promise that resolves to the concatenation of all emitted data
     * once the stream ends.
     *
     * Not allowed on objectMode streams.
     */
    concat(): Promise<RType>;
    /**
     * Return a void Promise that resolves once the stream ends.
     */
    promise(): Promise<void>;
    /**
     * Asynchronous `for await of` iteration.
     *
     * This will continue emitting all chunks until the stream terminates.
     */
    [Symbol.asyncIterator](): AsyncGenerator<RType, void, void>;
    /**
     * Synchronous `for of` iteration.
     *
     * The iteration will terminate when the internal buffer runs out, even
     * if the stream has not yet terminated.
     */
    [Symbol.iterator](): Generator<RType, void, void>;
    /**
     * Destroy a stream, preventing it from being used for any further purpose.
     *
     * If the stream has a `close()` method, then it will be called on
     * destruction.
     *
     * After destruction, any attempt to write data, read data, or emit most
     * events will be ignored.
     *
     * If an error argument is provided, then it will be emitted in an
     * 'error' event.
     */
    destroy(er?: unknown): this;
    /**
     * Alias for {@link isStream}
     *
     * Former export location, maintained for backwards compatibility.
     *
     * @deprecated
     */
    static get isStream(): (s: any) => s is NodeJS.WriteStream | NodeJS.ReadStream | Minipass<any, any, any> | (NodeJS.ReadStream & {
        fd: number;
    }) | (EventEmitter & {
        pause(): any;
        resume(): any;
        pipe(...destArgs: any[]): any;
    }) | (NodeJS.WriteStream & {
        fd: number;
    }) | (EventEmitter & {
        end(): any;
        write(chunk: any, ...args: any[]): any;
    });
}
//# sourceMappingURL=index.d.ts.map
1
desktop-operator/node_modules/path-scurry/node_modules/minipass/dist/commonjs/index.d.ts.map
generated
vendored
Normal file
File diff suppressed because one or more lines are too long
1028
desktop-operator/node_modules/path-scurry/node_modules/minipass/dist/commonjs/index.js
generated
vendored
Normal file
File diff suppressed because it is too large
Load Diff
1
desktop-operator/node_modules/path-scurry/node_modules/minipass/dist/commonjs/index.js.map
generated
vendored
Normal file
File diff suppressed because one or more lines are too long
3
desktop-operator/node_modules/path-scurry/node_modules/minipass/dist/commonjs/package.json
generated
vendored
Normal file
@@ -0,0 +1,3 @@
{
  "type": "commonjs"
}
549
desktop-operator/node_modules/path-scurry/node_modules/minipass/dist/esm/index.d.ts
generated
vendored
Normal file
@@ -0,0 +1,549 @@
/// <reference types="node" resolution-mode="require"/>
/// <reference types="node" resolution-mode="require"/>
/// <reference types="node" resolution-mode="require"/>
/// <reference types="node" resolution-mode="require"/>
import { EventEmitter } from 'node:events';
import { StringDecoder } from 'node:string_decoder';
/**
 * Same as StringDecoder, but exposing the `lastNeed` flag on the type
 */
type SD = StringDecoder & {
    lastNeed: boolean;
};
export type { SD, Pipe, PipeProxyErrors };
/**
 * Return true if the argument is a Minipass stream, Node stream, or something
 * else that Minipass can interact with.
 */
export declare const isStream: (s: any) => s is NodeJS.WriteStream | NodeJS.ReadStream | Minipass<any, any, any> | (NodeJS.ReadStream & {
    fd: number;
}) | (EventEmitter & {
    pause(): any;
    resume(): any;
    pipe(...destArgs: any[]): any;
}) | (NodeJS.WriteStream & {
    fd: number;
}) | (EventEmitter & {
    end(): any;
    write(chunk: any, ...args: any[]): any;
});
/**
 * Return true if the argument is a valid {@link Minipass.Readable}
 */
export declare const isReadable: (s: any) => s is Minipass.Readable;
/**
 * Return true if the argument is a valid {@link Minipass.Writable}
 */
export declare const isWritable: (s: any) => s is Minipass.Readable;
declare const EOF: unique symbol;
declare const MAYBE_EMIT_END: unique symbol;
declare const EMITTED_END: unique symbol;
declare const EMITTING_END: unique symbol;
declare const EMITTED_ERROR: unique symbol;
declare const CLOSED: unique symbol;
declare const READ: unique symbol;
declare const FLUSH: unique symbol;
declare const FLUSHCHUNK: unique symbol;
declare const ENCODING: unique symbol;
declare const DECODER: unique symbol;
declare const FLOWING: unique symbol;
declare const PAUSED: unique symbol;
declare const RESUME: unique symbol;
declare const BUFFER: unique symbol;
declare const PIPES: unique symbol;
declare const BUFFERLENGTH: unique symbol;
declare const BUFFERPUSH: unique symbol;
declare const BUFFERSHIFT: unique symbol;
declare const OBJECTMODE: unique symbol;
declare const DESTROYED: unique symbol;
declare const ERROR: unique symbol;
declare const EMITDATA: unique symbol;
declare const EMITEND: unique symbol;
declare const EMITEND2: unique symbol;
declare const ASYNC: unique symbol;
declare const ABORT: unique symbol;
declare const ABORTED: unique symbol;
declare const SIGNAL: unique symbol;
declare const DATALISTENERS: unique symbol;
declare const DISCARDED: unique symbol;
/**
 * Options that may be passed to stream.pipe()
 */
export interface PipeOptions {
    /**
     * end the destination stream when the source stream ends
     */
    end?: boolean;
    /**
     * proxy errors from the source stream to the destination stream
     */
    proxyErrors?: boolean;
}
/**
 * Internal class representing a pipe to a destination stream.
 *
 * @internal
 */
declare class Pipe<T extends unknown> {
    src: Minipass<T>;
    dest: Minipass<any, T>;
    opts: PipeOptions;
    ondrain: () => any;
    constructor(src: Minipass<T>, dest: Minipass.Writable, opts: PipeOptions);
    unpipe(): void;
    proxyErrors(_er: any): void;
    end(): void;
}
/**
 * Internal class representing a pipe to a destination stream where
 * errors are proxied.
 *
 * @internal
 */
declare class PipeProxyErrors<T> extends Pipe<T> {
    unpipe(): void;
    constructor(src: Minipass<T>, dest: Minipass.Writable, opts: PipeOptions);
}
export declare namespace Minipass {
    /**
     * Encoding used to create a stream that outputs strings rather than
     * Buffer objects.
     */
    export type Encoding = BufferEncoding | 'buffer' | null;
    /**
     * Any stream that Minipass can pipe into
     */
    export type Writable = Minipass<any, any, any> | NodeJS.WriteStream | (NodeJS.WriteStream & {
        fd: number;
    }) | (EventEmitter & {
        end(): any;
        write(chunk: any, ...args: any[]): any;
    });
    /**
     * Any stream that can be read from
     */
    export type Readable = Minipass<any, any, any> | NodeJS.ReadStream | (NodeJS.ReadStream & {
        fd: number;
    }) | (EventEmitter & {
        pause(): any;
        resume(): any;
        pipe(...destArgs: any[]): any;
    });
    /**
     * Utility type that can be iterated sync or async
     */
    export type DualIterable<T> = Iterable<T> & AsyncIterable<T>;
    type EventArguments = Record<string | symbol, unknown[]>;
    /**
     * The listing of events that a Minipass class can emit.
     * Extend this when extending the Minipass class, and pass as
     * the third template argument. The key is the name of the event,
     * and the value is the argument list.
     *
     * Any undeclared events will still be allowed, but the handler will get
     * arguments as `unknown[]`.
     */
    export interface Events<RType extends any = Buffer> extends EventArguments {
        readable: [];
        data: [chunk: RType];
        error: [er: unknown];
        abort: [reason: unknown];
        drain: [];
        resume: [];
        end: [];
        finish: [];
        prefinish: [];
        close: [];
        [DESTROYED]: [er?: unknown];
        [ERROR]: [er: unknown];
    }
    /**
     * String or buffer-like data that can be joined and sliced
     */
    export type ContiguousData = Buffer | ArrayBufferLike | ArrayBufferView | string;
    export type BufferOrString = Buffer | string;
    /**
     * Options passed to the Minipass constructor.
     */
    export type SharedOptions = {
        /**
         * Defer all data emission and other events until the end of the
         * current tick, similar to Node core streams
         */
        async?: boolean;
        /**
         * A signal which will abort the stream
         */
        signal?: AbortSignal;
        /**
         * Output string encoding. Set to `null` or `'buffer'` (or omit) to
         * emit Buffer objects rather than strings.
         *
         * Conflicts with `objectMode`
         */
        encoding?: BufferEncoding | null | 'buffer';
        /**
         * Output data exactly as it was written, supporting non-buffer/string
         * data (such as arbitrary objects, falsey values, etc.)
         *
         * Conflicts with `encoding`
         */
        objectMode?: boolean;
    };
    /**
     * Options for a string encoded output
     */
    export type EncodingOptions = SharedOptions & {
        encoding: BufferEncoding;
        objectMode?: false;
    };
    /**
     * Options for contiguous data buffer output
     */
    export type BufferOptions = SharedOptions & {
        encoding?: null | 'buffer';
        objectMode?: false;
    };
    /**
     * Options for objectMode arbitrary output
     */
    export type ObjectModeOptions = SharedOptions & {
        objectMode: true;
        encoding?: null;
    };
    /**
     * Utility type to determine allowed options based on read type
     */
    export type Options<T> = ObjectModeOptions | (T extends string ? EncodingOptions : T extends Buffer ? BufferOptions : SharedOptions);
    export {};
}
/**
 * Main export, the Minipass class
 *
 * `RType` is the type of data emitted, defaults to Buffer
 *
 * `WType` is the type of data to be written, if RType is buffer or string,
 * then any {@link Minipass.ContiguousData} is allowed.
 *
 * `Events` is the set of event handler signatures that this object
 * will emit, see {@link Minipass.Events}
 */
export declare class Minipass<RType extends unknown = Buffer, WType extends unknown = RType extends Minipass.BufferOrString ? Minipass.ContiguousData : RType, Events extends Minipass.Events<RType> = Minipass.Events<RType>> extends EventEmitter implements Minipass.DualIterable<RType> {
    [FLOWING]: boolean;
    [PAUSED]: boolean;
    [PIPES]: Pipe<RType>[];
    [BUFFER]: RType[];
    [OBJECTMODE]: boolean;
    [ENCODING]: BufferEncoding | null;
    [ASYNC]: boolean;
    [DECODER]: SD | null;
    [EOF]: boolean;
    [EMITTED_END]: boolean;
    [EMITTING_END]: boolean;
    [CLOSED]: boolean;
    [EMITTED_ERROR]: unknown;
    [BUFFERLENGTH]: number;
    [DESTROYED]: boolean;
    [SIGNAL]?: AbortSignal;
    [ABORTED]: boolean;
    [DATALISTENERS]: number;
    [DISCARDED]: boolean;
    /**
     * true if the stream can be written
     */
    writable: boolean;
    /**
     * true if the stream can be read
     */
    readable: boolean;
    /**
     * If `RType` is Buffer, then options do not need to be provided.
     * Otherwise, an options object must be provided to specify either
     * {@link Minipass.SharedOptions.objectMode} or
     * {@link Minipass.SharedOptions.encoding}, as appropriate.
     */
    constructor(...args: [Minipass.ObjectModeOptions] | (RType extends Buffer ? [] | [Minipass.Options<RType>] : [Minipass.Options<RType>]));
    /**
     * The amount of data stored in the buffer waiting to be read.
     *
     * For Buffer streams, this will be the total byte length.
     * For string encoding streams, this will be the string character length,
     * according to JavaScript's `string.length` logic.
     * For objectMode streams, this is a count of the items waiting to be
     * emitted.
     */
    get bufferLength(): number;
    /**
     * The `BufferEncoding` currently in use, or `null`
     */
    get encoding(): BufferEncoding | null;
    /**
     * @deprecated - This is a read-only property
     */
    set encoding(_enc: BufferEncoding | null);
    /**
     * @deprecated - Encoding may only be set at instantiation time
     */
    setEncoding(_enc: Minipass.Encoding): void;
    /**
     * True if this is an objectMode stream
     */
    get objectMode(): boolean;
    /**
     * @deprecated - This is a read-only property
     */
    set objectMode(_om: boolean);
    /**
     * true if this is an async stream
     */
    get ['async'](): boolean;
    /**
     * Set to true to make this stream async.
     *
     * Once set, it cannot be unset, as this would potentially cause incorrect
     * behavior. Ie, a sync stream can be made async, but an async stream
     * cannot be safely made sync.
     */
    set ['async'](a: boolean);
    [ABORT](): void;
    /**
     * True if the stream has been aborted.
     */
    get aborted(): boolean;
    /**
     * No-op setter. Stream aborted status is set via the AbortSignal provided
     * in the constructor options.
     */
    set aborted(_: boolean);
    /**
     * Write data into the stream
     *
     * If the chunk written is a string, and encoding is not specified, then
     * `utf8` will be assumed. If the stream encoding matches the encoding of
     * a written string, and the state of the string decoder allows it, then
     * the string will be passed through to either the output or the internal
     * buffer without any processing. Otherwise, it will be turned into a
     * Buffer object for processing into the desired encoding.
     *
     * If provided, `cb` function is called immediately before return for
     * sync streams, or on next tick for async streams, because for this
     * base class, a chunk is considered "processed" once it is accepted
     * and either emitted or buffered. That is, the callback does not indicate
     * that the chunk has been eventually emitted, though of course child
     * classes can override this function to do whatever processing is required
     * and call `super.write(...)` only once processing is completed.
     */
    write(chunk: WType, cb?: () => void): boolean;
    write(chunk: WType, encoding?: Minipass.Encoding, cb?: () => void): boolean;
    /**
     * Low-level explicit read method.
     *
     * In objectMode, the argument is ignored, and one item is returned if
     * available.
     *
     * `n` is the number of bytes (or in the case of encoding streams,
     * characters) to consume. If `n` is not provided, then the entire buffer
     * is returned, or `null` is returned if no data is available.
     *
     * If `n` is greater than the amount of data in the internal buffer,
     * then `null` is returned.
     */
    read(n?: number | null): RType | null;
    [READ](n: number | null, chunk: RType): RType;
    /**
     * End the stream, optionally providing a final write.
     *
     * See {@link Minipass#write} for argument descriptions
     */
    end(cb?: () => void): this;
    end(chunk: WType, cb?: () => void): this;
    end(chunk: WType, encoding?: Minipass.Encoding, cb?: () => void): this;
    [RESUME](): void;
    /**
     * Resume the stream if it is currently in a paused state
     *
     * If called when there are no pipe destinations or `data` event listeners,
     * this will place the stream in a "discarded" state, where all data will
     * be thrown away. The discarded state is removed if a pipe destination or
     * data handler is added, if pause() is called, or if any synchronous or
     * asynchronous iteration is started.
     */
    resume(): void;
    /**
     * Pause the stream
     */
    pause(): void;
    /**
     * true if the stream has been forcibly destroyed
     */
    get destroyed(): boolean;
    /**
     * true if the stream is currently in a flowing state, meaning that
     * any writes will be immediately emitted.
     */
    get flowing(): boolean;
    /**
     * true if the stream is currently in a paused state
     */
    get paused(): boolean;
    [BUFFERPUSH](chunk: RType): void;
    [BUFFERSHIFT](): RType;
    [FLUSH](noDrain?: boolean): void;
    [FLUSHCHUNK](chunk: RType): boolean;
    /**
     * Pipe all data emitted by this stream into the destination provided.
     *
     * Triggers the flow of data.
     */
    pipe<W extends Minipass.Writable>(dest: W, opts?: PipeOptions): W;
    /**
     * Fully unhook a piped destination stream.
     *
     * If the destination stream was the only consumer of this stream (ie,
     * there are no other piped destinations or `'data'` event listeners)
     * then the flow of data will stop until there is another consumer or
     * {@link Minipass#resume} is explicitly called.
     */
    unpipe<W extends Minipass.Writable>(dest: W): void;
    /**
     * Alias for {@link Minipass#on}
     */
    addListener<Event extends keyof Events>(ev: Event, handler: (...args: Events[Event]) => any): this;
    /**
     * Mostly identical to `EventEmitter.on`, with the following
     * behavior differences to prevent data loss and unnecessary hangs:
     *
     * - Adding a 'data' event handler will trigger the flow of data
     *
     * - Adding a 'readable' event handler when there is data waiting to be read
     *   will cause 'readable' to be emitted immediately.
     *
     * - Adding an 'endish' event handler ('end', 'finish', etc.) which has
     *   already passed will cause the event to be emitted immediately and all
     *   handlers removed.
     *
     * - Adding an 'error' event handler after an error has been emitted will
     *   cause the event to be re-emitted immediately with the error previously
     *   raised.
     */
    on<Event extends keyof Events>(ev: Event, handler: (...args: Events[Event]) => any): this;
    /**
     * Alias for {@link Minipass#off}
     */
    removeListener<Event extends keyof Events>(ev: Event, handler: (...args: Events[Event]) => any): this;
    /**
     * Mostly identical to `EventEmitter.off`
     *
     * If a 'data' event handler is removed, and it was the last consumer
     * (ie, there are no pipe destinations or other 'data' event listeners),
     * then the flow of data will stop until there is another consumer or
     * {@link Minipass#resume} is explicitly called.
     */
    off<Event extends keyof Events>(ev: Event, handler: (...args: Events[Event]) => any): this;
    /**
     * Mostly identical to `EventEmitter.removeAllListeners`
     *
     * If all 'data' event handlers are removed, and they were the last consumer
     * (ie, there are no pipe destinations), then the flow of data will stop
     * until there is another consumer or {@link Minipass#resume} is explicitly
     * called.
     */
    removeAllListeners<Event extends keyof Events>(ev?: Event): this;
    /**
     * true if the 'end' event has been emitted
     */
    get emittedEnd(): boolean;
    [MAYBE_EMIT_END](): void;
    /**
     * Mostly identical to `EventEmitter.emit`, with the following
     * behavior differences to prevent data loss and unnecessary hangs:
     *
     * If the stream has been destroyed, and the event is something other
     * than 'close' or 'error', then `false` is returned and no handlers
     * are called.
     *
     * If the event is 'end', and has already been emitted, then the event
     * is ignored. If the stream is in a paused or non-flowing state, then
     * the event will be deferred until data flow resumes. If the stream is
     * async, then handlers will be called on the next tick rather than
     * immediately.
     *
     * If the event is 'close', and 'end' has not yet been emitted, then
     * the event will be deferred until after 'end' is emitted.
     *
     * If the event is 'error', and an AbortSignal was provided for the stream,
     * and there are no listeners, then the event is ignored, matching the
     * behavior of node core streams in the presence of an AbortSignal.
     *
     * If the event is 'finish' or 'prefinish', then all listeners will be
     * removed after emitting the event, to prevent double-firing.
     */
    emit<Event extends keyof Events>(ev: Event, ...args: Events[Event]): boolean;
    [EMITDATA](data: RType): boolean;
    [EMITEND](): boolean;
    [EMITEND2](): boolean;
    /**
     * Return a Promise that resolves to an array of all emitted data once
     * the stream ends.
     */
    collect(): Promise<RType[] & {
        dataLength: number;
    }>;
    /**
     * Return a Promise that resolves to the concatenation of all emitted data
     * once the stream ends.
     *
     * Not allowed on objectMode streams.
     */
    concat(): Promise<RType>;
    /**
     * Return a void Promise that resolves once the stream ends.
     */
    promise(): Promise<void>;
    /**
     * Asynchronous `for await of` iteration.
     *
     * This will continue emitting all chunks until the stream terminates.
     */
    [Symbol.asyncIterator](): AsyncGenerator<RType, void, void>;
    /**
     * Synchronous `for of` iteration.
     *
     * The iteration will terminate when the internal buffer runs out, even
     * if the stream has not yet terminated.
     */
    [Symbol.iterator](): Generator<RType, void, void>;
    /**
     * Destroy a stream, preventing it from being used for any further purpose.
     *
     * If the stream has a `close()` method, then it will be called on
     * destruction.
     *
     * After destruction, any attempt to write data, read data, or emit most
     * events will be ignored.
     *
     * If an error argument is provided, then it will be emitted in an
     * 'error' event.
     */
    destroy(er?: unknown): this;
    /**
     * Alias for {@link isStream}
     *
     * Former export location, maintained for backwards compatibility.
     *
     * @deprecated
     */
    static get isStream(): (s: any) => s is NodeJS.WriteStream | NodeJS.ReadStream | Minipass<any, any, any> | (NodeJS.ReadStream & {
        fd: number;
    }) | (EventEmitter & {
        pause(): any;
        resume(): any;
        pipe(...destArgs: any[]): any;
|
||||
}) | (NodeJS.WriteStream & {
|
||||
fd: number;
|
||||
}) | (EventEmitter & {
|
||||
end(): any;
|
||||
write(chunk: any, ...args: any[]): any;
|
||||
});
|
||||
}
|
||||
//# sourceMappingURL=index.d.ts.map
1 desktop-operator/node_modules/path-scurry/node_modules/minipass/dist/esm/index.d.ts.map generated vendored Normal file
File diff suppressed because one or more lines are too long
1018 desktop-operator/node_modules/path-scurry/node_modules/minipass/dist/esm/index.js generated vendored Normal file
File diff suppressed because it is too large
1 desktop-operator/node_modules/path-scurry/node_modules/minipass/dist/esm/index.js.map generated vendored Normal file
File diff suppressed because one or more lines are too long
3 desktop-operator/node_modules/path-scurry/node_modules/minipass/dist/esm/package.json generated vendored Normal file
@@ -0,0 +1,3 @@
{
  "type": "module"
}
82 desktop-operator/node_modules/path-scurry/node_modules/minipass/package.json generated vendored Normal file
@@ -0,0 +1,82 @@
{
  "name": "minipass",
  "version": "7.1.2",
  "description": "minimal implementation of a PassThrough stream",
  "main": "./dist/commonjs/index.js",
  "types": "./dist/commonjs/index.d.ts",
  "type": "module",
  "tshy": {
    "selfLink": false,
    "main": true,
    "exports": {
      "./package.json": "./package.json",
      ".": "./src/index.ts"
    }
  },
  "exports": {
    "./package.json": "./package.json",
    ".": {
      "import": {
        "types": "./dist/esm/index.d.ts",
        "default": "./dist/esm/index.js"
      },
      "require": {
        "types": "./dist/commonjs/index.d.ts",
        "default": "./dist/commonjs/index.js"
      }
    }
  },
  "files": [
    "dist"
  ],
  "scripts": {
    "preversion": "npm test",
    "postversion": "npm publish",
    "prepublishOnly": "git push origin --follow-tags",
    "prepare": "tshy",
    "pretest": "npm run prepare",
    "presnap": "npm run prepare",
    "test": "tap",
    "snap": "tap",
    "format": "prettier --write . --loglevel warn",
    "typedoc": "typedoc --tsconfig .tshy/esm.json ./src/*.ts"
  },
  "prettier": {
    "semi": false,
    "printWidth": 75,
    "tabWidth": 2,
    "useTabs": false,
    "singleQuote": true,
    "jsxSingleQuote": false,
    "bracketSameLine": true,
    "arrowParens": "avoid",
    "endOfLine": "lf"
  },
  "devDependencies": {
    "@types/end-of-stream": "^1.4.2",
    "@types/node": "^20.1.2",
    "end-of-stream": "^1.4.0",
    "node-abort-controller": "^3.1.1",
    "prettier": "^2.6.2",
    "tap": "^19.0.0",
    "through2": "^2.0.3",
    "tshy": "^1.14.0",
    "typedoc": "^0.25.1"
  },
  "repository": "https://github.com/isaacs/minipass",
  "keywords": [
    "passthrough",
    "stream"
  ],
  "author": "Isaac Z. Schlueter <i@izs.me> (http://blog.izs.me/)",
  "license": "ISC",
  "engines": {
    "node": ">=16 || 14 >=14.17"
  },
  "tap": {
    "typecheck": true,
    "include": [
      "test/*.ts"
    ]
  }
}
89 desktop-operator/node_modules/path-scurry/package.json generated vendored Normal file
@@ -0,0 +1,89 @@
{
  "name": "path-scurry",
  "version": "1.11.1",
  "description": "walk paths fast and efficiently",
  "author": "Isaac Z. Schlueter <i@izs.me> (https://blog.izs.me)",
  "main": "./dist/commonjs/index.js",
  "type": "module",
  "exports": {
    "./package.json": "./package.json",
    ".": {
      "import": {
        "types": "./dist/esm/index.d.ts",
        "default": "./dist/esm/index.js"
      },
      "require": {
        "types": "./dist/commonjs/index.d.ts",
        "default": "./dist/commonjs/index.js"
      }
    }
  },
  "files": [
    "dist"
  ],
  "license": "BlueOak-1.0.0",
  "scripts": {
    "preversion": "npm test",
    "postversion": "npm publish",
    "prepublishOnly": "git push origin --follow-tags",
    "prepare": "tshy",
    "pretest": "npm run prepare",
    "presnap": "npm run prepare",
    "test": "tap",
    "snap": "tap",
    "format": "prettier --write . --loglevel warn",
    "typedoc": "typedoc --tsconfig tsconfig-esm.json ./src/*.ts",
    "bench": "bash ./scripts/bench.sh"
  },
  "prettier": {
    "experimentalTernaries": true,
    "semi": false,
    "printWidth": 75,
    "tabWidth": 2,
    "useTabs": false,
    "singleQuote": true,
    "jsxSingleQuote": false,
    "bracketSameLine": true,
    "arrowParens": "avoid",
    "endOfLine": "lf"
  },
  "devDependencies": {
    "@nodelib/fs.walk": "^1.2.8",
    "@types/node": "^20.12.11",
    "c8": "^7.12.0",
    "eslint-config-prettier": "^8.6.0",
    "mkdirp": "^3.0.0",
    "prettier": "^3.2.5",
    "rimraf": "^5.0.1",
    "tap": "^18.7.2",
    "ts-node": "^10.9.2",
    "tshy": "^1.14.0",
    "typedoc": "^0.25.12",
    "typescript": "^5.4.3"
  },
  "tap": {
    "typecheck": true
  },
  "engines": {
    "node": ">=16 || 14 >=14.18"
  },
  "funding": {
    "url": "https://github.com/sponsors/isaacs"
  },
  "repository": {
    "type": "git",
    "url": "git+https://github.com/isaacs/path-scurry"
  },
  "dependencies": {
    "lru-cache": "^10.2.0",
    "minipass": "^5.0.0 || ^6.0.2 || ^7.0.0"
  },
  "tshy": {
    "selfLink": false,
    "exports": {
      "./package.json": "./package.json",
      ".": "./src/index.ts"
    }
  },
  "types": "./dist/commonjs/index.d.ts"
}