diff --git a/.github/workflows/mkdocs.yml b/.github/workflows/mkdocs.yml new file mode 100644 index 000000000..2770b5c5e --- /dev/null +++ b/.github/workflows/mkdocs.yml @@ -0,0 +1,23 @@ +name: "mkdocs" +on: + push: + branches: + - latest-release +permissions: + contents: write +jobs: + deploy: + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@v4 + - name: Configure Git Credentials + run: | + git config user.name github-actions[bot] + git config user.email 41898282+github-actions[bot]@users.noreply.github.com + - uses: actions/setup-python@v5 + with: + python-version: 3.x + - run: pip install -r requirements.txt + working-directory: ./mkdocs + - run: mkdocs gh-deploy --force + working-directory: ./mkdocs diff --git a/.gitignore b/.gitignore index 7a510743f..d9081db27 100644 --- a/.gitignore +++ b/.gitignore @@ -23,3 +23,8 @@ debian/changelog # RPM files rpmbuild/ + +# mkdocs files +mkdocs/env +mkdocs/.env +__pycache__/ diff --git a/mkdocs/README.md b/mkdocs/README.md new file mode 100644 index 000000000..4da62d53e --- /dev/null +++ b/mkdocs/README.md @@ -0,0 +1,15 @@ +# mkdocs + +## Getting started + +```bash +python3 -m venv env +source env/bin/activate +pip3 install -r requirements.txt +mkdocs serve +``` + +## References + +- https://squidfunk.github.io/mkdocs-material/ +- https://www.mkdocs.org/ diff --git a/mkdocs/docs/index.md b/mkdocs/docs/index.md new file mode 100644 index 000000000..95bfa86c4 --- /dev/null +++ b/mkdocs/docs/index.md @@ -0,0 +1,35 @@ +# mergerfs - a featureful union filesystem + +## DESCRIPTION + +**mergerfs** is a union filesystem geared towards simplifying storage +and management of files across numerous commodity storage devices. It +is similar to **mhddfs**, **unionfs**, and **aufs**. 
+
+## FEATURES
+
+- Configurable behaviors / file placement
+- Ability to add or remove filesystems at will
+- Resistance to individual filesystem failure
+- Support for extended attributes (xattrs)
+- Support for file attributes (chattr)
+- Runtime configurable (via xattrs)
+- Works with heterogeneous filesystem types
+- Moving of files when a filesystem runs out of space while writing
+- Ignore read-only filesystems when creating files
+- Turn read-only files into symlinks to underlying file
+- Hard link copy-on-write / CoW
+- Support for POSIX ACLs
+- Misc other things
+
+## SYNOPSIS
+
+`mergerfs -o<options> <branches> <mountpoint>`
+
+## DOCUMENTATION
+
+- [https://oregonpillow.github.io/](https://oregonpillow.github.io/)
+
+## TOOLS
+
+- [mergerfs tools](https://github.com/trapexit/mergerfs-tools)
diff --git a/mkdocs/docs/pages/documentation/basic_setup.md b/mkdocs/docs/pages/documentation/basic_setup.md
new file mode 100644
index 000000000..e5e7dbe84
--- /dev/null
+++ b/mkdocs/docs/pages/documentation/basic_setup.md
@@ -0,0 +1,54 @@
+# BASIC SETUP
+
+If you don't already know that you have a special use case then just
+start with one of the following option sets.
+
+#### You need `mmap` (used by rtorrent and many sqlite3-based programs)
+
+`cache.files=auto-full,dropcacheonclose=true,category.create=mfs`
+
+Alternatively, if you are on a Linux kernel >= 6.6.x, mergerfs will
+enable a mode that allows shared mmap when `cache.files=off`. To be
+sure of the best performance between `cache.files=off` and
+`cache.files=auto-full` you'll need to do your own benchmarking but
+often `off` is faster.
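Which of the two option sets applies can be scripted from the kernel version. A minimal sketch (the `suggest_opts` helper and its version parsing are illustrative, not part of mergerfs; only the 6.6.x threshold comes from the text above):

```
#!/bin/sh
# Hypothetical helper: print a starting option set based on the running
# kernel, per the >= 6.6.x shared-mmap note above.
suggest_opts() {
  kernel="$1"                           # e.g. "6.6.8-generic"
  major=$(echo "$kernel" | cut -d. -f1)
  minor=$(echo "$kernel" | cut -d. -f2)
  if [ "$major" -gt 6 ] || { [ "$major" -eq 6 ] && [ "$minor" -ge 6 ]; }; then
    # Kernel allows shared mmap with page caching disabled.
    echo "cache.files=off,dropcacheonclose=true,category.create=mfs"
  else
    echo "cache.files=auto-full,dropcacheonclose=true,category.create=mfs"
  fi
}

suggest_opts "$(uname -r)"
```

Benchmark both option sets regardless; the kernel check only picks a reasonable starting point.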
+
+#### You don't need `mmap`
+
+`cache.files=off,dropcacheonclose=true,category.create=mfs`
+
+### Command Line
+
+`mergerfs -o cache.files=auto-full,dropcacheonclose=true,category.create=mfs /mnt/hdd0:/mnt/hdd1 /media`
+
+### /etc/fstab
+
+`/mnt/hdd0:/mnt/hdd1 /media mergerfs cache.files=auto-full,dropcacheonclose=true,category.create=mfs 0 0`
+
+### systemd mount
+
+https://github.com/trapexit/mergerfs/wiki/systemd
+
+```
+[Unit]
+Description=mergerfs service
+
+[Service]
+Type=simple
+KillMode=none
+ExecStart=/usr/bin/mergerfs \
+  -f \
+  -o cache.files=auto-full \
+  -o dropcacheonclose=true \
+  -o category.create=mfs \
+  /mnt/hdd0:/mnt/hdd1 \
+  /media
+ExecStop=/bin/fusermount -uz /media
+Restart=on-failure
+
+[Install]
+WantedBy=default.target
+```
+
+See the mergerfs [wiki for real world
+deployments](https://github.com/trapexit/mergerfs/wiki/Real-World-Deployments)
+for comparisons / ideas.
diff --git a/mkdocs/docs/pages/documentation/benchmarking.md b/mkdocs/docs/pages/documentation/benchmarking.md
new file mode 100644
index 000000000..95749fe9e
--- /dev/null
+++ b/mkdocs/docs/pages/documentation/benchmarking.md
@@ -0,0 +1,80 @@
+# BENCHMARKING
+
+Filesystems are complicated. They do many things and many of those are
+interconnected. Additionally, the OS, drivers, hardware, etc. can all
+impact performance. Therefore, when benchmarking, it is **necessary**
+that the test focus as narrowly as possible.
+
+For most, throughput is the key benchmark. To test throughput `dd` is
+useful but **must** be used with the correct settings in order to
+ensure the filesystem or device is actually being tested. The OS can
+and will cache data. Without forcing synchronous reads and writes
+and/or disabling caching, the values returned will not be
+representative of the device's true performance.
+
+When benchmarking through mergerfs ensure you use only one branch to
+remove any possibility of the policies complicating the
+situation.
Benchmark the underlying filesystem first and then mount
+mergerfs over it and test again. If you're experiencing speeds below
+your expectation, you will need to narrow down precisely which
+component is leading to the slowdown. Preferably test the following in
+the order listed (but not combined).
+
+1. Enable `nullrw` mode with `nullrw=true`. This will effectively make
+   reads and writes no-ops, removing the underlying device /
+   filesystem from the equation. This will give us the top theoretical
+   speeds.
+2. Mount mergerfs over `tmpfs`. `tmpfs` is a RAM disk. Extremely high
+   speed and very low latency. This is a more realistic best case
+   scenario. Example: `mount -t tmpfs -o size=2G tmpfs /tmp/tmpfs`
+3. Mount mergerfs over a local device. NVMe, SSD, HDD, etc. If you
+   have more than one I'd suggest testing each of them as drives
+   and/or controllers (their drivers) could impact performance.
+4. Finally, if you intend to use mergerfs with a network filesystem,
+   either as the source of data or to combine with another through
+   mergerfs, test each of those alone as above.
+
+Once you find the component which has the performance issue you can do
+further testing with different options to see if they impact
+performance. For reads and writes the most relevant would be:
+`cache.files`, `async_read`. Less likely but relevant when using NFS
+or with certain filesystems would be `security_capability`, `xattr`,
+and `posix_acl`. If you find a specific system, device, filesystem,
+controller, etc. that performs poorly contact trapexit so he may
+investigate further.
+
+Sometimes the problem is really the application accessing or writing
+data through mergerfs. Some software uses small buffer sizes which can
+lead to more requests and therefore greater overhead. You can test
+this out yourself by replacing `bs=1M` in the examples below with `ibs`
+or `obs` and using a size of `512` instead of `1M`.
In one example
+test using `nullrw` the write speed dropped from 4.9GB/s to 69.7MB/s
+when moving from `1M` to `512`. Similar results were seen when testing
+reads. The overhead of small writes may be improved by leveraging a
+write cache but in casual tests little gain was found. More tests will
+need to be done before this feature would become available. If you
+have an app that appears slow with mergerfs it could be due to
+this. Contact trapexit so he may investigate further.
+
+### write benchmark
+
+```
+$ dd if=/dev/zero of=/mnt/mergerfs/1GB.file bs=1M count=1024 oflag=nocache conv=fdatasync status=progress
+```
+
+### read benchmark
+
+```
+$ dd if=/mnt/mergerfs/1GB.file of=/dev/null bs=1M count=1024 iflag=nocache conv=fdatasync status=progress
+```
+
+### other benchmarks
+
+If you are attempting to benchmark other behaviors you must ensure you
+clear kernel caches before runs. In fact, it would be a good idea to
+run the commands below before the read and write benchmarks as well,
+just in case.
+
+```
+sync
+echo 3 | sudo tee /proc/sys/vm/drop_caches
+```
diff --git a/mkdocs/docs/pages/documentation/build.md b/mkdocs/docs/pages/documentation/build.md
new file mode 100644
index 000000000..544b60017
--- /dev/null
+++ b/mkdocs/docs/pages/documentation/build.md
@@ -0,0 +1,30 @@
+# BUILD
+
+**NOTE:** Prebuilt packages, recommended for most users, can be found
+at: https://github.com/trapexit/mergerfs/releases
+
+**NOTE:** Only tagged releases are supported. `master` and other
+branches should be considered works in progress.
+
+First, get the code from
+[github](https://github.com/trapexit/mergerfs).
+ +``` +$ git clone https://github.com/trapexit/mergerfs.git +$ # or +$ wget https://github.com/trapexit/mergerfs/releases/download//mergerfs-.tar.gz +``` + +#### Debian / Ubuntu + +``` +$ cd mergerfs +$ sudo tools/install-build-pkgs +$ make deb +$ sudo dpkg -i ../mergerfs__.deb +``` + +#### RHEL / CentOS / Rocky / Fedora + +``` +$ su - +``` diff --git a/mkdocs/docs/pages/documentation/caching.md b/mkdocs/docs/pages/documentation/caching.md new file mode 100644 index 000000000..72b726789 --- /dev/null +++ b/mkdocs/docs/pages/documentation/caching.md @@ -0,0 +1,213 @@ +# CACHING + +#### page caching + +https://en.wikipedia.org/wiki/Page_cache + +- cache.files=off: Disables page caching. Underlying files cached, + mergerfs files are not. +- cache.files=partial: Enables page caching. Underlying files cached, + mergerfs files cached while open. +- cache.files=full: Enables page caching. Underlying files cached, + mergerfs files cached across opens. +- cache.files=auto-full: Enables page caching. Underlying files + cached, mergerfs files cached across opens if mtime and size are + unchanged since previous open. +- cache.files=libfuse: follow traditional libfuse `direct_io`, + `kernel_cache`, and `auto_cache` arguments. +- cache.files=per-process: Enable page caching (equivalent to + `cache.files=partial`) only for processes whose 'comm' name matches + one of the values defined in `cache.files.process-names`. If the + name does not match the file open is equivalent to + `cache.files=off`. + +FUSE, which mergerfs uses, offers a number of page caching modes. mergerfs tries to simplify their use via the `cache.files` +option. It can and should replace usage of `direct_io`, +`kernel_cache`, and `auto_cache`. + +Due to mergerfs using FUSE and therefore being a userland process +proxying existing filesystems the kernel will double cache the content +being read and written through mergerfs. 
Once from the underlying
+filesystem and once from mergerfs (it sees them as two separate
+entities). Using `cache.files=off` will keep the double caching from
+happening by disabling caching on the mergerfs side, but this has
+side effects: _all_ read and write calls will be passed to mergerfs,
+which may be slower than enabling caching; you lose shared `mmap`
+support, which can affect apps such as rtorrent; and no read-ahead
+will take place. The kernel will still cache the underlying filesystem
+data but that only helps so much given mergerfs will still process all
+requests.
+
+If you do enable file page caching,
+`cache.files=partial|full|auto-full`, you should also enable
+`dropcacheonclose` which will cause mergerfs to instruct the kernel to
+flush the underlying file's page cache when the file is closed. This
+behavior is the same as the rsync fadvise / drop cache patch and Feh's
+nocache project.
+
+If most files are read once through and closed (like media) it is best
+to enable `dropcacheonclose` regardless of caching mode in order to
+minimize buffer bloat.
+
+It is difficult to balance memory usage, cache bloat & duplication,
+and performance. Ideally, mergerfs would be able to disable caching for
+the files it reads/writes but allow page caching for itself. That
+would limit the FUSE overhead. However, there isn't a good way to
+achieve this. It would need to open all files with O_DIRECT, which
+places limitations on which underlying filesystems could be supported
+and complicates the code.
+
+kernel documentation: https://www.kernel.org/doc/Documentation/filesystems/fuse-io.txt
+
+#### entry & attribute caching
+
+Given the relatively high cost of FUSE due to the kernel <-> userspace
+round trips there are kernel side caches for file entries and
+attributes. The entry cache limits the `lookup` calls to mergerfs
+which ask if a file exists.
The attribute cache limits the need to +make `getattr` calls to mergerfs which provide file attributes (mode, +size, type, etc.). As with the page cache these should not be used if +the underlying filesystems are being manipulated at the same time as +it could lead to odd behavior or data corruption. The options for +setting these are `cache.entry` and `cache.negative_entry` for the +entry cache and `cache.attr` for the attributes +cache. `cache.negative_entry` refers to the timeout for negative +responses to lookups (non-existent files). + +#### writeback caching + +When `cache.files` is enabled the default is for it to perform +writethrough caching. This behavior won't help improve performance as +each write still goes one for one through the filesystem. By enabling +the FUSE writeback cache small writes may be aggregated by the kernel +and then sent to mergerfs as one larger request. This can greatly +improve the throughput for apps which write to files +inefficiently. The amount the kernel can aggregate is limited by the +size of a FUSE message. Read the `fuse_msg_size` section for more +details. + +There is a small side effect as a result of enabling writeback +caching. Underlying files won't ever be opened with O_APPEND or +O_WRONLY. The former because the kernel then manages append mode and +the latter because the kernel may request file data from mergerfs to +populate the write cache. The O_APPEND change means that if a file is +changed outside of mergerfs it could lead to corruption as the kernel +won't know the end of the file has changed. That said any time you use +caching you should keep from using the same file outside of mergerfs +at the same time. + +Note that if an application is properly sizing writes then writeback +caching will have little or no effect. It will only help with writes +of sizes below the FUSE message size (128K on older kernels, 1M on +newer). 
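As a sketch, writeback caching is requested with the `cache.writeback` option alongside a page-caching mode. An illustrative `/etc/fstab` entry (branch and mount paths are placeholders; confirm the option name against your mergerfs version):

```
/mnt/hdd0:/mnt/hdd1 /media mergerfs cache.files=partial,cache.writeback=true,dropcacheonclose=true 0 0
```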
+
+#### statfs caching
+
+Of the syscalls used by mergerfs in policies the `statfs` / `statvfs`
+call is perhaps the most expensive. It's used to find out the
+available space of a filesystem and whether it is mounted
+read-only. Depending on the setup and usage pattern these queries can
+be relatively costly. When `cache.statfs` is enabled all calls to
+`statfs` by a policy will be cached for the number of seconds it is
+set to.
+
+Example: If the create policy is `mfs` and the timeout is 60 then for
+those 60 seconds the same filesystem will be returned as the target
+for creates because the available space won't be updated for that
+time.
+
+#### symlink caching
+
+As of version 4.20, Linux supports symlink caching. Significant
+performance increases can be had in workloads which use a lot of
+symlinks. Setting `cache.symlinks=true` will result in requesting
+symlink caching from the kernel only if supported. As a result it's
+safe to enable it on systems prior to 4.20. That said it is disabled
+by default for now. You can see if caching is enabled by querying the
+xattr `user.mergerfs.cache.symlinks` but given it must be requested at
+startup you cannot change it at runtime.
+
+#### readdir caching
+
+As of version 4.20, Linux supports readdir caching. This can have a
+significant impact on directory traversal. Especially when combined
+with entry (`cache.entry`) and attribute (`cache.attr`)
+caching. Setting `cache.readdir=true` will result in requesting
+readdir caching from the kernel on each `opendir`. If the kernel
+doesn't support readdir caching setting the option to `true` has no
+effect. This option is configurable at runtime via xattr
+`user.mergerfs.cache.readdir`.
+
+#### tiered caching
+
+Some storage technologies support what some call "tiered" caching:
+the placing of usually smaller, faster storage as a transparent cache
+in front of larger, slower storage. NVMe, SSD, Optane in front of
+traditional HDDs for instance.
+ +mergerfs does not natively support any sort of tiered caching. Most +users have no use for such a feature and its inclusion would +complicate the code. However, there are a few situations where a cache +filesystem could help with a typical mergerfs setup. + +1. Fast network, slow filesystems, many readers: You've a 10+Gbps network + with many readers and your regular filesystems can't keep up. +2. Fast network, slow filesystems, small'ish bursty writes: You have a + 10+Gbps network and wish to transfer amounts of data less than your + cache filesystem but wish to do so quickly. + +With #1 it's arguable if you should be using mergerfs at all. RAID +would probably be the better solution. If you're going to use mergerfs +there are other tactics that may help: spreading the data across +filesystems (see the mergerfs.dup tool) and setting `func.open=rand`, +using `symlinkify`, or using dm-cache or a similar technology to add +tiered cache to the underlying device. + +With #2 one could use dm-cache as well but there is another solution +which requires only mergerfs and a cronjob. + +1. Create 2 mergerfs pools. One which includes just the slow devices + and one which has both the fast devices (SSD,NVME,etc.) and slow + devices. +2. The 'cache' pool should have the cache filesystems listed first. +3. The best `create` policies to use for the 'cache' pool would + probably be `ff`, `epff`, `lfs`, or `eplfs`. The latter two under + the assumption that the cache filesystem(s) are far smaller than the + backing filesystems. If using path preserving policies remember that + you'll need to manually create the core directories of those paths + you wish to be cached. Be sure the permissions are in sync. Use + `mergerfs.fsck` to check / correct them. You could also set the + slow filesystems mode to `NC` though that'd mean if the cache + filesystems fill you'd get "out of space" errors. +4. Enable `moveonenospc` and set `minfreespace` appropriately. 
To make
+   sure there is enough room on the "slow" pool you might want to set
+   `minfreespace` to at least as large as the size of the largest
+   cache filesystem if not larger. This way in the worst case the
+   whole of the cache filesystem(s) can be moved to the other drives.
+5. Set your programs to use the cache pool.
+6. Save one of the below scripts or create your own.
+7. Use `cron` (as root) to schedule the command at whatever frequency
+   is appropriate for your workflow.
+
+##### time based expiring
+
+Move files from cache to backing pool based only on the last time the
+file was accessed. Replace `-atime` with `-amin` if you want minutes
+rather than days. May want to use the `fadvise` / `--drop-cache`
+version of rsync or run rsync with the tool "nocache".
+
+_NOTE:_ The arguments to these scripts include the cache
+**filesystem** itself. Not the pool with the cache filesystem. You
+could have data loss if the source is the cache pool.
+
+[mergerfs.time-based-mover](https://raw.githubusercontent.com/trapexit/mergerfs/refs/heads/latest-release/tools/mergerfs.time-based-mover)
+
+##### percentage full expiring
+
+Move the oldest file from the cache to the backing pool. Continue
+until below the percentage threshold.
+
+_NOTE:_ The arguments to these scripts include the cache
+**filesystem** itself. Not the pool with the cache filesystem. You
+could have data loss if the source is the cache pool.
+
+[mergerfs.percent-full-mover](https://raw.githubusercontent.com/trapexit/mergerfs/refs/heads/latest-release/tools/mergerfs.percent-full-mover)
diff --git a/mkdocs/docs/pages/documentation/error_handling.md b/mkdocs/docs/pages/documentation/error_handling.md
new file mode 100644
index 000000000..e7c36072f
--- /dev/null
+++ b/mkdocs/docs/pages/documentation/error_handling.md
@@ -0,0 +1,37 @@
+# ERROR HANDLING
+
+POSIX filesystem functions offer a single return code, meaning that
+there is some complication in handling the multiple branches
+mergerfs manages.
It tries to handle errors in a way that would +generally return meaningful values for that particular function. + +### chmod, chown, removexattr, setxattr, truncate, utimens + +1. if no error: return 0 (success) +2. if no successes: return first error +3. if one of the files acted on was the same as the related search function: return its value +4. return 0 (success) + +While doing this increases the complexity and cost of error handling, +particularly step 3, this provides probably the most reasonable return +value. + +### unlink, rmdir + +1. if no errors: return 0 (success) +2. return first error + +Older versions of mergerfs would return success if any success occurred +but for unlink and rmdir there are downstream assumptions that, while +not impossible to occur, can confuse some software. + +### others + +For search functions, there is always a single thing acted on and as +such whatever return value that comes from the single function call is +returned. + +For create functions `mkdir`, `mknod`, and `symlink` which don't +return a file descriptor and therefore can have `all` or `epall` +policies it will return success if any of the calls succeed and an +error otherwise. diff --git a/mkdocs/docs/pages/documentation/functions_categories_and_policies.md b/mkdocs/docs/pages/documentation/functions_categories_and_policies.md new file mode 100644 index 000000000..8108677f4 --- /dev/null +++ b/mkdocs/docs/pages/documentation/functions_categories_and_policies.md @@ -0,0 +1,263 @@ +# FUNCTIONS, CATEGORIES and POLICIES + +The POSIX filesystem API is made up of a number of +functions. **creat**, **stat**, **chown**, etc. For ease of +configuration in mergerfs, most of the core functions are grouped into +3 categories: **action**, **create**, and **search**. These functions +and categories can be assigned a policy which dictates which branch is +chosen when performing that function. 
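For example, a policy can be assigned to a whole category or to an individual function at mount time, and changed at runtime through the `.mergerfs` control file's xattr interface (the branch and mount paths below are placeholders):

```
# Mount time: set the create category's policy, then override one function.
mergerfs -o category.create=mfs,func.mkdir=epmfs /mnt/hdd0:/mnt/hdd1 /media

# Runtime: read and change the create category via xattrs.
getfattr -n user.mergerfs.category.create /media/.mergerfs
setfattr -n user.mergerfs.category.create -v mfs /media/.mergerfs
```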
+ +Some functions, listed in the category `N/A` below, can not be +assigned the normal policies. These functions work with file handles, +rather than file paths, which were created by `open` or `create`. That +said many times the current FUSE kernel driver will not always provide +the file handle when a client calls `fgetattr`, `fchown`, `fchmod`, +`futimens`, `ftruncate`, etc. This means it will call the regular, +path based, versions. `statfs`'s behavior can be modified via other +options. + +When using policies which are based on a branch's available space the +base path provided is used. Not the full path to the file in +question. Meaning that mounts in the branch won't be considered in the +space calculations. The reason is that it doesn't really work for +non-path preserving policies and can lead to non-obvious behaviors. + +NOTE: While any policy can be assigned to a function or category, +some may not be very useful in practice. For instance: **rand** +(random) may be useful for file creation (create) but could lead to +very odd behavior if used for `chmod` if there were more than one copy +of the file. + +### Functions and their Category classifications + +| Category | FUSE Functions | +| -------- | -------------------------------------------------------------------------------------------------------------------------------------- | +| action | chmod, chown, link, removexattr, rename, rmdir, setxattr, truncate, unlink, utimens | +| create | create, mkdir, mknod, symlink | +| search | access, getattr, getxattr, ioctl (directories), listxattr, open, readlink | +| N/A | fchmod, fchown, futimens, ftruncate, fallocate, fgetattr, fsync, ioctl (files), read, readdir, release, statfs, write, copy_file_range | + +In cases where something may be searched for (such as a path to clone) +**getattr** will usually be used. + +### Policies + +A policy is the algorithm used to choose a branch or branches for a +function to work on or generally how the function behaves. 
+
+Any function in the `create` category will clone the relative path if
+needed. Some other functions (`rename`, `link`, `ioctl`) have special
+requirements or behaviors which you can read more about below.
+
+#### Filtering
+
+Most policies basically search branches and create a list of files / paths
+for functions to work on. The policy is responsible for filtering and
+sorting the branches. Filters include **minfreespace**, whether or not
+a branch is mounted read-only, and the branch tagging
+(RO,NC,RW). These filters are applied across most policies.
+
+- No **search** function policies filter.
+- All **action** function policies filter out branches which are
+  mounted **read-only** or tagged as **RO (read-only)**.
+- All **create** function policies filter out branches which are
+  mounted **read-only**, tagged **RO (read-only)** or **NC (no
+  create)**, or have available space less than `minfreespace`.
+
+Policies may have their own additional filtering such as those that
+require existing paths to be present.
+
+If all branches are filtered an error will be returned. Typically
+**EROFS** (read-only filesystem) or **ENOSPC** (no space left on
+device) depending on the most recent reason for filtering a
+branch. **ENOENT** will be returned if no eligible branch is found.
+
+If **create**, **mkdir**, **mknod**, or **symlink** fail with `EROFS`
+or other fundamental errors then mergerfs will mark any branch found
+to be read-only as such (i.e. set its mode to `RO`) and will rerun the
+policy and try again. This is mostly for `ext4` filesystems that can
+suddenly become read-only when they encounter an error.
+
+#### Path Preservation
+
+Policies, as described below, are of two basic classifications: `path
+preserving` and `non-path preserving`.
+
+All policies which start with `ep` (**epff**, **eplfs**, **eplus**,
+**epmfs**, **eprand**) are `path preserving`. `ep` stands for
+`existing path`.
+ +A path preserving policy will only consider branches where the relative +path being accessed already exists. + +When using non-path preserving policies paths will be cloned to target +branches as necessary. + +With the `msp` or `most shared path` policies they are defined as +`path preserving` for the purpose of controlling `link` and `rename`'s +behaviors since `ignorepponrename` is available to disable that +behavior. + +#### Policy descriptions + +A policy's behavior differs, as mentioned above, based on the function +it is used with. Sometimes it really might not make sense to even +offer certain policies because they are literally the same as others +but it makes things a bit more uniform. + +| Policy | Description | +| --------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| all | Search: For **mkdir**, **mknod**, and **symlink** it will apply to all branches. **create** works like **ff**. | +| epall (existing path, all) | For **mkdir**, **mknod**, and **symlink** it will apply to all found. **create** works like **epff** (but more expensive because it doesn't stop after finding a valid branch). | +| epff (existing path, first found) | Given the order of the branches, as defined at mount time or configured at runtime, act on the first one found where the relative path exists. | +| eplfs (existing path, least free space) | Of all the branches on which the relative path exists choose the branch with the least free space. | +| eplus (existing path, least used space) | Of all the branches on which the relative path exists choose the branch with the least used space. | +| epmfs (existing path, most free space) | Of all the branches on which the relative path exists choose the branch with the most free space. 
| +| eppfrd (existing path, percentage free random distribution) | Like **pfrd** but limited to existing paths. | +| eprand (existing path, random) | Calls **epall** and then randomizes. Returns 1. | +| ff (first found) | Given the order of the branches, as defined at mount time or configured at runtime, act on the first one found. | +| lfs (least free space) | Pick the branch with the least available free space. | +| lus (least used space) | Pick the branch with the least used space. | +| mfs (most free space) | Pick the branch with the most available free space. | +| msplfs (most shared path, least free space) | Like **eplfs** but if it fails to find a branch it will try again with the parent directory. Continues this pattern till finding one. | +| msplus (most shared path, least used space) | Like **eplus** but if it fails to find a branch it will try again with the parent directory. Continues this pattern till finding one. | +| mspmfs (most shared path, most free space) | Like **epmfs** but if it fails to find a branch it will try again with the parent directory. Continues this pattern till finding one. | +| msppfrd (most shared path, percentage free random distribution) | Like **eppfrd** but if it fails to find a branch it will try again with the parent directory. Continues this pattern till finding one. | +| newest | Pick the file / directory with the largest mtime. | +| pfrd (percentage free random distribution) | Chooses a branch at random with the likelihood of selection based on a branch's available space relative to the total. | +| rand (random) | Calls **all** and then randomizes. Returns 1 branch. | + +**NOTE:** If you are using an underlying filesystem that reserves +blocks such as ext2, ext3, or ext4 be aware that mergerfs respects the +reservation by using `f_bavail` (number of free blocks for +unprivileged users) rather than `f_bfree` (number of free blocks) in +policy calculations. 
**df** does NOT use `f_bavail`, it uses
+`f_bfree`, so direct comparisons between **df** output and mergerfs'
+policies are not appropriate.
+
+#### Defaults
+
+| Category | Policy |
+| -------- | ------ |
+| action   | epall  |
+| create   | epmfs  |
+| search   | ff     |
+
+#### func.readdir
+
+examples: `func.readdir=seq`, `func.readdir=cor:4`
+
+`readdir` has policies to control how it manages reading directory
+content.
+
+| Policy | Description |
+| ------ | ----------- |
+| seq | "sequential" : Iterate over branches in the order defined. This is the default and traditional behavior found prior to the readdir policy introduction. |
+| cosr | "concurrent open, sequential read" : Concurrently open branch directories using a thread pool and process them in order of definition. This keeps memory and CPU usage low while also reducing the time spent waiting on branches to respond. Number of threads defaults to the number of logical cores. Can be overwritten via the syntax `func.readdir=cosr:N` where `N` is the number of threads. |
+| cor | "concurrent open and read" : Concurrently open branch directories and immediately start reading their contents using a thread pool. This will result in slightly higher memory and CPU usage but reduced latency. Particularly when using higher latency / slower speed network filesystem branches. Unlike `seq` and `cosr` the order of files could change due to the async nature of the thread pool.
Number of threads defaults to the number of logical cores. Can be overwritten via the syntax `func.readdir=cor:N` where `N` is the number of threads. | + +Keep in mind that `readdir` mostly just provides a list of file names +in a directory and possibly some basic metadata about said files. To +know details about the files, as one would see from commands like +`find` or `ls`, it is required to call `stat` on the file which is +controlled by `fuse.getattr`. + +#### ioctl + +When `ioctl` is used with an open file then it will use the file +handle which was created at the original `open` call. However, when +using `ioctl` with a directory mergerfs will use the `open` policy to +find the directory to act on. + +#### rename and link + +**NOTE:** If you're receiving errors from software when files are +moved / renamed / linked then you should consider changing the create +policy to one which is **not** path preserving, enabling +`ignorepponrename`, or contacting the author of the offending software +and requesting that `EXDEV` (cross device / improper link) be properly +handled. + +`rename` and `link` are tricky functions in a union +filesystem. `rename` only works within a single filesystem or +device. If a rename can't be done atomically due to the source and +destination paths existing on different mount points it will return +**-1** with **errno = EXDEV** (cross device / improper link). So if a +`rename`'s source and target are on different filesystems within the pool +it creates an issue. + +Originally mergerfs would return EXDEV whenever a rename was requested +which was cross directory in any way. This made the code simple and +was technically compliant with POSIX requirements. However, many +applications fail to handle EXDEV at all and treat it as a normal +error or otherwise handle it poorly. Such apps include: gvfsd-fuse +v1.20.3 and prior, Finder / CIFS/SMB client in Apple OSX 10.9+, +NZBGet, Samba's recycling bin feature. 
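For application authors the expected handling is simple: attempt the `rename` and, on `EXDEV`, fall back to copy-and-unlink — essentially what `mv` does internally. A hedged Python sketch (the helper name is illustrative, not part of mergerfs):

```python
import errno
import os
import shutil
import tempfile

def move(src: str, dst: str) -> None:
    """Rename src to dst; on EXDEV fall back to copy + unlink."""
    try:
        os.rename(src, dst)
    except OSError as e:
        if e.errno != errno.EXDEV:
            raise
        # Cross-filesystem rename (or a path-preserving policy refused
        # it): copy the data and metadata, then remove the original.
        shutil.copy2(src, dst)
        os.unlink(src)

# Demonstrate on a throwaway file. Here both paths are on the same
# filesystem so the plain rename path is taken; the fallback branch
# only triggers when the kernel returns EXDEV.
workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "a.txt")
dst = os.path.join(workdir, "b.txt")
with open(src, "w") as f:
    f.write("data")
move(src, dst)
print(os.path.exists(src), os.path.exists(dst))  # → False True
```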
+ +As a result a compromise was made in order to get most software to +work while still obeying mergerfs' policies. Below is the basic logic. + +- If using a **create** policy which tries to preserve directory paths (epff,eplfs,eplus,epmfs) + - Using the **rename** policy get the list of files to rename + - For each file attempt rename: + - If failure with ENOENT (no such file or directory) run **create** policy + - If create policy returns the same branch as currently evaluating then clone the path + - Re-attempt rename + - If **any** of the renames succeed the higher level rename is considered a success + - If **no** renames succeed the first error encountered will be returned + - On success: + - Remove the target from all branches with no source file + - Remove the source from all branches which failed to rename +- If using a **create** policy which does **not** try to preserve directory paths + - Using the **rename** policy get the list of files to rename + - Using the **getattr** policy get the target path + - For each file attempt rename: + - If the source branch != target branch: + - Clone target path from target branch to source branch + - Rename + - If **any** of the renames succeed the higher level rename is considered a success + - If **no** renames succeed the first error encountered will be returned + - On success: + - Remove the target from all branches with no source file + - Remove the source from all branches which failed to rename + +The removals are subject to normal entitlement checks. + +The above behavior will help minimize the likelihood of EXDEV being +returned but it will still be possible. + +**link** uses the same strategy but without the removals. + +#### statfs / statvfs + +[statvfs](http://linux.die.net/man/2/statvfs) normalizes the source +filesystems based on the fragment size and sums the number of adjusted +blocks and inodes. This means you will see the combined space of all +sources. Total, used, and free. 
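The normalization can be modeled as scaling each branch's block counts to a common fragment size before summing. A simplified Python sketch (not mergerfs code; the tuples stand in for per-branch `statvfs` results):

```python
def combine(branches, target_frsize=4096):
    """Sum (frsize, blocks, bfree, bavail) tuples scaled to target_frsize."""
    total = free = avail = 0
    for frsize, blocks, bfree, bavail in branches:
        scale = frsize / target_frsize
        total += int(blocks * scale)
        free += int(bfree * scale)
        avail += int(bavail * scale)
    return total, free, avail

# Two branches with different fragment sizes: 4096-byte blocks on one,
# 512-byte blocks on the other, normalized before summing.
print(combine([(4096, 1000, 500, 400), (512, 8000, 4000, 3200)]))
# → (2000, 1000, 800)
```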
The sources, however, are dedupped based
+on the filesystem so multiple sources on the same drive will not result in
+double counting its space. Other filesystems mounted further down the tree
+of the branch will not be included when checking the mount's stats.
+
+The options `statfs` and `statfs_ignore` can be used to modify
+`statfs` behavior.
+
+#### flush-on-close
+
+https://lkml.kernel.org/linux-fsdevel/20211024132607.1636952-1-amir73il@gmail.com/T/
+
+By default, FUSE would issue a flush before the release of a file
+descriptor. This was considered a bit aggressive and a feature was added
+to give the FUSE server the ability to choose when that happens.
+
+Options:
+
+- always
+- never
+- opened-for-write
+
+For now it defaults to "opened-for-write" which is less aggressive
+than the behavior before this feature was added. It should not be a
+problem because the flush is really only relevant when a file is
+written to. Given flush is irrelevant for many filesystems, in the
+future a branch specific flag may be added so only files opened on a
+specific branch would be flushed on close.
diff --git a/mkdocs/docs/pages/documentation/how_it_works.md b/mkdocs/docs/pages/documentation/how_it_works.md
new file mode 100644
index 000000000..0c2b5a014
--- /dev/null
+++ b/mkdocs/docs/pages/documentation/how_it_works.md
@@ -0,0 +1,35 @@
+# HOW IT WORKS
+
+mergerfs logically merges multiple paths together. Think a union of
+sets. The files or directories acted on or presented through mergerfs
+are based on the policy chosen for that particular action. Read more
+about policies below.
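As a first approximation the presented namespace is simply the set union of each branch's paths. A toy Python model (ignoring policies, shadowing, and metadata; the file names mirror the diagram below):

```python
def union(*branches):
    """Merge directory listings from multiple branches into one view.
    Names present on several branches appear once in the result."""
    merged = set()
    for listing in branches:
        merged |= set(listing)
    return sorted(merged)

disk1 = ["dir1/file1", "dir2/file4", "file6"]
disk2 = ["dir1/file2", "dir1/file3", "dir3/file5"]
print(union(disk1, disk2))
# → ['dir1/file1', 'dir1/file2', 'dir1/file3', 'dir2/file4', 'dir3/file5', 'file6']
```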
+ +``` +A + B = C +/disk1 /disk2 /merged +| | | ++-- /dir1 +-- /dir1 +-- /dir1 +| | | | | | +| +-- file1 | +-- file2 | +-- file1 +| | +-- file3 | +-- file2 ++-- /dir2 | | +-- file3 +| | +-- /dir3 | +| +-- file4 | +-- /dir2 +| +-- file5 | | ++-- file6 | +-- file4 + | + +-- /dir3 + | | + | +-- file5 + | + +-- file6 +``` + +mergerfs does **not** support the copy-on-write (CoW) or whiteout +behaviors found in **aufs** and **overlayfs**. You can **not** mount a +read-only filesystem and write to it. However, mergerfs will ignore +read-only filesystems when creating new files so you can mix +read-write and read-only filesystems. It also does **not** split data +across filesystems. It is not RAID0 / striping. It is simply a union of +other filesystems. diff --git a/mkdocs/docs/pages/documentation/install.md b/mkdocs/docs/pages/documentation/install.md new file mode 100644 index 000000000..db3ca3d94 --- /dev/null +++ b/mkdocs/docs/pages/documentation/install.md @@ -0,0 +1,81 @@ +# INSTALL + +https://github.com/trapexit/mergerfs/releases + +If your distribution's package manager includes mergerfs check if the +version is up to date. If out of date it is recommended to use +the latest release found on the release page. Details for common +distros are below. + +#### Debian + +Most Debian installs are of a stable branch and therefore do not have +the most up to date software. While mergerfs is available via `apt` it +is suggested that users install the most recent version available from +the [releases page](https://github.com/trapexit/mergerfs/releases). + +#### prebuilt deb + +``` +wget https://github.com/trapexit/mergerfs/releases/download//mergerfs_.debian-_.deb +dpkg -i mergerfs_.debian-_.deb +``` + +#### apt + +``` +sudo apt install -y mergerfs +``` + +#### Ubuntu + +Most Ubuntu installs are of a stable branch and therefore do not have +the most up to date software. 
While mergerfs is available via `apt` it +is suggested that users install the most recent version available from +the [releases page](https://github.com/trapexit/mergerfs/releases). + +#### prebuilt deb + +``` +wget https://github.com/trapexit/mergerfs/releases/download//mergerfs_.ubuntu-_.deb +dpkg -i mergerfs_.ubuntu-_.deb +``` + +#### apt + +``` +sudo apt install -y mergerfs +``` + +#### Raspberry Pi OS + +Effectively the same as Debian or Ubuntu. + +#### Fedora + +``` +wget https://github.com/trapexit/mergerfs/releases/download//mergerfs-.fc..rpm +sudo rpm -i mergerfs-.fc..rpm +``` + +#### CentOS / Rocky + +``` +wget https://github.com/trapexit/mergerfs/releases/download//mergerfs-.el..rpm +sudo rpm -i mergerfs-.el..rpm +``` + +#### ArchLinux + +1. Setup AUR +2. Install `mergerfs` + +#### Other + +Static binaries are provided for situations where native packages are +unavailable. + +``` +wget https://github.com/trapexit/mergerfs/releases/download//mergerfs-static-linux_.tar.gz +sudo tar xvf mergerfs-static-linux_.tar.gz -C / +``` diff --git a/mkdocs/docs/pages/documentation/known_issues_bugs.md b/mkdocs/docs/pages/documentation/known_issues_bugs.md new file mode 100644 index 000000000..d166dd620 --- /dev/null +++ b/mkdocs/docs/pages/documentation/known_issues_bugs.md @@ -0,0 +1,195 @@ +# KNOWN ISSUES / BUGS + +#### kernel issues & bugs + +[https://github.com/trapexit/mergerfs/wiki/Kernel-Issues-&-Bugs](https://github.com/trapexit/mergerfs/wiki/Kernel-Issues-&-Bugs) + +#### directory mtime is not being updated + +Remember that the default policy for `getattr` is `ff`. The +information for the first directory found will be returned. If it +wasn't the directory which had been updated then it will appear +outdated. + +The reason this is the default is because any other policy would be +more expensive and for many applications it is unnecessary. 
To always
+return the directory with the most recent mtime or a faked value based
+on all found would require a scan of all filesystems.
+
+If you always want the directory information from the one with the
+most recent mtime then use the `newest` policy for `getattr`.
+
+#### 'mv /mnt/pool/foo /mnt/disk1/foo' removes 'foo'
+
+This is not a bug.
+
+Run in verbose mode to better understand what's happening:
+
+```
+$ mv -v /mnt/pool/foo /mnt/disk1/foo
+copied '/mnt/pool/foo' -> '/mnt/disk1/foo'
+removed '/mnt/pool/foo'
+$ ls /mnt/pool/foo
+ls: cannot access '/mnt/pool/foo': No such file or directory
+```
+
+`mv`, when working across devices, is copying the source to the target
+and then removing the source. Since the source **is** the target in this
+case, depending on the unlink policy, it will remove the just copied
+file and other files across the branches.
+
+If you want to move files to one filesystem just copy them there and
+use mergerfs.dedup to clean up the old paths or manually remove them
+from the branches directly.
+
+#### cached memory appears greater than it should be
+
+Use `cache.files=off` and/or `dropcacheonclose=true`. See the section
+on page caching.
+
+#### NFS clients returning ESTALE / Stale file handle
+
+NFS generally does not like out of band changes. Take a look at the
+section on NFS in [remote-filesystems](remote_filesystems.md) for
+more details.
+
+#### rtorrent fails with ENODEV (No such device)
+
+Be sure to set
+`cache.files=partial|full|auto-full|per-process`. rtorrent and some
+other applications use [mmap](http://linux.die.net/man/2/mmap) to read
+and write to files and offer no fallback to traditional methods. FUSE
+does not currently support mmap while using `direct_io`. There may be
+a performance penalty on writes with `direct_io` off as well as the
+problem of double caching but it's the only way to get such
+applications to work. If the performance loss is too high for other
+apps you can mount mergerfs twice.
Once with `direct_io` enabled and
+once without it. Be sure to set `dropcacheonclose=true` if not using
+`direct_io`.
+
+#### Plex doesn't work with mergerfs
+
+It does. If you're trying to put Plex's config / metadata / database
+on mergerfs you can't set `cache.files=off` because Plex is using
+sqlite3 with mmap enabled. Shared mmap is not supported by Linux's
+FUSE implementation when page caching is disabled. To fix this place
+the data elsewhere (preferable) or enable `cache.files` (with
+`dropcacheonclose=true`). Sqlite3 does not need mmap but the developer
+needs to fall back to standard IO if mmap fails.
+
+This applies to other software: Radarr, Sonarr, Lidarr, Jellyfin, etc.
+
+I would recommend reaching out to the developers of the software
+you're having trouble with and asking them to add a fallback to
+regular file IO when mmap is unavailable.
+
+If the issue is that scanning doesn't seem to pick up media then be
+sure to set `func.getattr=newest`, though generally, a full scan will
+pick up all media anyway.
+
+#### When a program tries to move or rename a file it fails
+
+Please read the section above regarding [rename and link](functions_categories_and_policies.md#rename-and-link).
+
+The problem is that many applications do not properly handle `EXDEV`
+errors which `rename` and `link` may return even though they are
+perfectly valid situations which do not indicate actual device,
+filesystem, or OS errors. The error will only be returned by mergerfs
+if using a path preserving policy as described in the policy section
+above. If you do not care about path preservation simply change the
+mergerfs policy to the non-path preserving version. For example: `-o
+category.create=mfs`. Ideally the offending software would be fixed and
+it is recommended that if you run into this problem you contact the
+software's author and request proper handling of `EXDEV` errors.
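As noted in the Plex example above, sqlite3 itself does not require mmap: memory-mapped IO is an optional optimization controlled by `PRAGMA mmap_size`, and setting it to 0 keeps the database on standard read/write IO. A minimal sketch of the fallback an application could ship:

```python
import sqlite3

con = sqlite3.connect(":memory:")  # a file path behaves the same way
# Disable memory-mapped IO; sqlite falls back to plain read()/write(),
# which works even where shared mmap is unsupported (e.g. FUSE with
# page caching disabled).
con.execute("PRAGMA mmap_size = 0")
con.execute("CREATE TABLE t (v TEXT)")
con.execute("INSERT INTO t VALUES ('ok')")
print(con.execute("SELECT v FROM t").fetchone()[0])  # → ok
```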
+
+#### my 32bit software has problems
+
+Some software has problems with 64bit inode values. The symptoms can
+include EOVERFLOW errors when trying to list files. You can address
+this by setting `inodecalc` to one of the 32bit based algos as
+described in the relevant section.
+
+#### Samba: Moving files / directories fails
+
+Workaround: Copy the file/directory and then remove the original
+rather than move.
+
+This isn't an issue with Samba but some SMB clients. GVFS-fuse v1.20.3
+and prior (found in Ubuntu 14.04 among others) failed to handle
+certain error codes correctly. Particularly **STATUS_NOT_SAME_DEVICE**
+which comes from the **EXDEV** returned by **rename** when
+the call is crossing mount points. When a program gets an **EXDEV** it
+needs to explicitly take an alternate action to accomplish its
+goal. In the case of **mv** or similar it tries **rename** and on
+**EXDEV** falls back to a manual copying of data between the two
+locations and unlinking the source. In these older versions of
+GVFS-fuse if it received **EXDEV** it would translate that into
+**EIO**. This would cause **mv** or most any application attempting to
+move files around on that SMB share to fail with an IO error.
+
+[GVFS-fuse v1.22.0](https://bugzilla.gnome.org/show_bug.cgi?id=734568)
+and above fixed this issue but a large number of systems use the older
+release. On Ubuntu, the version can be checked by issuing `apt-cache
+showpkg gvfs-fuse`. Most distros released in 2015 seem to have the
+updated release and will work fine but older systems may
+not. Upgrading gvfs-fuse or the distro in general will address the
+problem.
+
+In MacOSX 10.9 Apple replaced Samba (client and server) with
+their own product. It appears their new client does not handle
+**EXDEV** either and responds similarly to older releases of gvfs on
+Linux.
+
+#### Trashing files occasionally fails
+
+This is the same issue as with Samba.
`rename` returns `EXDEV` (in our +case that will really only happen with path preserving policies like +`epmfs`) and the software doesn't handle the situation well. This is +unfortunately a common failure of software which moves files +around. The standard indicates that an implementation `MAY` choose to +support non-user home directory trashing of files (which is a +`MUST`). The implementation `MAY` also support "top directory trashes" +which many probably do. + +To create a `$topdir/.Trash` directory as defined in the standard use +the [mergerfs-tools](https://github.com/trapexit/mergerfs-tools) tool +`mergerfs.mktrash`. + +#### Supplemental user groups + +Due to the overhead of +[getgroups/setgroups](http://linux.die.net/man/2/setgroups) mergerfs +utilizes a cache. This cache is opportunistic and per thread. Each +thread will query the supplemental groups for a user when that +particular thread needs to change credentials and will keep that data +for the lifetime of the thread. This means that if a user is added to +a group it may not be picked up without the restart of +mergerfs. However, since the high level FUSE API's (at least the +standard version) thread pool dynamically grows and shrinks it's +possible that over time a thread will be killed and later a new thread +with no cache will start and query the new data. + +The gid cache uses fixed storage to simplify the design and be +compatible with older systems which may not have C++11 +compilers. There is enough storage for 256 users' supplemental +groups. Each user is allowed up to 32 supplemental groups. Linux >= +2.6.3 allows up to 65535 groups per user but most other \*nixs allow +far less. NFS allows only 16. The system does handle overflow +gracefully. If the user has more than 32 supplemental groups only the +first 32 will be used. If more than 256 users are using the system +when an uncached user is found it will evict an existing user's cache +at random. 
So long as there aren't more than 256 active users this +should be fine. If either value is too low for your needs you will +have to modify `gidcache.hpp` to increase the values. Note that doing +so will increase the memory needed by each thread. + +While not a bug some users have found when using containers that +supplemental groups defined inside the container don't work properly +with regard to permissions. This is expected as mergerfs lives outside +the container and therefore is querying the host's group +database. There might be a hack to work around this (make mergerfs +read the /etc/group file in the container) but it is not yet +implemented and would be limited to Linux and the /etc/group +DB. Preferably users would mount in the host group file into the +containers or use a standard shared user & groups technology like NIS +or LDAP. diff --git a/mkdocs/docs/pages/documentation/links.md b/mkdocs/docs/pages/documentation/links.md new file mode 100644 index 000000000..068b6e9a6 --- /dev/null +++ b/mkdocs/docs/pages/documentation/links.md @@ -0,0 +1,8 @@ +# LINKS + +- https://spawn.link +- https://github.com/trapexit/mergerfs +- https://github.com/trapexit/mergerfs/wiki +- https://github.com/trapexit/mergerfs-tools +- https://github.com/trapexit/scorch +- https://github.com/trapexit/bbf diff --git a/mkdocs/docs/pages/documentation/mergerfs_versus_x.md b/mkdocs/docs/pages/documentation/mergerfs_versus_x.md new file mode 100644 index 000000000..3a48d6556 --- /dev/null +++ b/mkdocs/docs/pages/documentation/mergerfs_versus_x.md @@ -0,0 +1,100 @@ +# mergerfs versus X + +#### mhddfs + +mhddfs had not been maintained for some time and has some known +stability and security issues. mergerfs provides a superset of mhddfs' +features and should offer the same or better performance. + +Below is an example of mhddfs and mergerfs setup to work similarly. 
+
+`mhddfs -o mlimit=4G,allow_other /mnt/drive1,/mnt/drive2 /mnt/pool`
+
+`mergerfs -o minfreespace=4G,category.create=ff /mnt/drive1:/mnt/drive2 /mnt/pool`
+
+#### aufs
+
+aufs is mostly abandoned and no longer available in most Linux distros.
+
+While aufs can offer better peak performance mergerfs provides more
+configurability and is generally easier to use. mergerfs however does
+not offer the overlay / copy-on-write (CoW) features which aufs has.
+
+#### unionfs-fuse
+
+unionfs-fuse is more like aufs than mergerfs in that it offers overlay /
+copy-on-write (CoW) features. If you're just looking to create a union
+of filesystems and want flexibility in file/directory placement then
+mergerfs offers that whereas unionfs is more for overlaying read/write
+filesystems over read-only ones.
+
+#### overlayfs
+
+overlayfs is similar to aufs and unionfs-fuse in that it also is
+primarily used to layer a read/write filesystem over one or more
+read-only filesystems. It does not have the ability to spread
+files/directories across numerous filesystems.
+
+#### RAID0, JBOD, drive concatenation, striping
+
+With simple JBOD / drive concatenation / striping / RAID0 a single
+drive failure will result in full pool failure. mergerfs performs a
+similar function without the possibility of catastrophic failure and
+the difficulties in recovery. Drives may fail but all other
+filesystems and their data will continue to be accessible.
+
+The main practical difference with mergerfs is the fact you don't
+actually have contiguous space as large as if you used those other
+technologies. Meaning you can't create a 2TB file on a pool of two
+1TB filesystems.
+
+When combined with something like [SnapRaid](http://www.snapraid.it)
+and/or an offsite backup solution you can have the flexibility of JBOD
+without the single point of failure.
+
+#### UnRAID
+
+UnRAID is a full OS and its storage layer, as I understand, is
+proprietary and closed source.
Users who have experience with both
+have often said they prefer the flexibility offered by mergerfs and
+for some the fact it is open source is important.
+
+There are a number of UnRAID users who use mergerfs as well though I'm
+not entirely familiar with the use case.
+
+For semi-static data mergerfs + [SnapRaid](http://www.snapraid.it)
+provides a similar solution.
+
+#### ZFS
+
+mergerfs is very different from ZFS. mergerfs is intended to provide
+flexible pooling of arbitrary filesystems (local or remote) of
+arbitrary sizes. It is aimed at `write once, read many` usecases such
+as bulk media storage where data integrity and backup are managed in
+other ways. In those usecases ZFS can introduce a
+number of costs and limitations as described
+[here](http://louwrentius.com/the-hidden-cost-of-using-zfs-for-your-home-nas.html),
+[here](https://markmcb.com/2020/01/07/five-years-of-btrfs/), and
+[here](https://utcc.utoronto.ca/~cks/space/blog/solaris/ZFSWhyNoRealReshaping).
+
+#### StableBit's DrivePool
+
+DrivePool works only on Windows so it is not as common an alternative as
+the other Linux solutions. If you want to use Windows then DrivePool is a
+good option. Functionally the two projects work a bit
+differently. DrivePool always writes to the filesystem with the most
+free space and later rebalances. mergerfs does not offer rebalance but
+chooses a branch at file/directory create time. DrivePool's
+rebalancing can be done differently in any directory and has file
+pattern matching to further customize the behavior. mergerfs, not
+having rebalancing, does not have these features, but similar features
+are planned for mergerfs v3. DrivePool has builtin file duplication
+which mergerfs does not natively support (but can be done via an
+external script.)
+
+There are a lot of misc differences between the two projects but most
+features in DrivePool can be replicated with external tools in
+combination with mergerfs.
+ +Additionally, DrivePool is a closed source commercial product vs +mergerfs a ISC licensed OSS project. diff --git a/mkdocs/docs/pages/documentation/options.md b/mkdocs/docs/pages/documentation/options.md new file mode 100644 index 000000000..0b7042197 --- /dev/null +++ b/mkdocs/docs/pages/documentation/options.md @@ -0,0 +1,245 @@ +# OPTIONS + +These options are the same regardless of whether you use them with the +`mergerfs` commandline program, in fstab, or in a config file. + +### mount options + +- **config**: Path to a config file. Same arguments as below in + key=val / ini style format. +- **branches**: Colon delimited list of branches. +- **minfreespace=SIZE**: The minimum space value used for creation + policies. Can be overridden by branch specific option. Understands + 'K', 'M', and 'G' to represent kilobyte, megabyte, and gigabyte + respectively. (default: 4G) +- **moveonenospc=BOOL|POLICY**: When enabled if a **write** fails with + **ENOSPC** (no space left on device) or **EDQUOT** (disk quota + exceeded) the policy selected will run to find a new location for + the file. An attempt to move the file to that branch will occur + (keeping all metadata possible) and if successful the original is + unlinked and the write retried. (default: false, true = mfs) +- **inodecalc=passthrough|path-hash|devino-hash|hybrid-hash**: Selects + the inode calculation algorithm. (default: hybrid-hash) +- **dropcacheonclose=BOOL**: When a file is requested to be closed + call `posix_fadvise` on it first to instruct the kernel that we no + longer need the data and it can drop its cache. Recommended when + **cache.files=partial|full|auto-full|per-process** to limit double + caching. (default: false) +- **direct-io-allow-mmap=BOOL**: On newer kernels (>= 6.6) it is + possible to disable file page caching while still allowing for + shared mmap support. 
mergerfs will enable this feature if available + but an option is provided to turn it off for testing and debugging + purposes. (default: true) +- **symlinkify=BOOL**: When enabled and a file is not writable and its + mtime or ctime is older than **symlinkify_timeout** files will be + reported as symlinks to the original files. Please read more below + before using. (default: false) +- **symlinkify_timeout=UINT**: Time to wait, in seconds, to activate + the **symlinkify** behavior. (default: 3600) +- **nullrw=BOOL**: Turns reads and writes into no-ops. The request + will succeed but do nothing. Useful for benchmarking + mergerfs. (default: false) +- **lazy-umount-mountpoint=BOOL**: mergerfs will attempt to "lazy + umount" the mountpoint before mounting itself. Useful when + performing live upgrades of mergerfs. (default: false) +- **ignorepponrename=BOOL**: Ignore path preserving on + rename. Typically rename and link act differently depending on the + policy of `create` (read below). Enabling this will cause rename and + link to always use the non-path preserving behavior. This means + files, when renamed or linked, will stay on the same + filesystem. (default: false) +- **export-support=BOOL**: Sets a low-level FUSE feature intended to + indicate the filesystem can support being exported via + NFS. (default: true) +- **security_capability=BOOL**: If false return ENOATTR when xattr + security.capability is queried. (default: true) +- **xattr=passthrough|noattr|nosys**: Runtime control of + xattrs. Default is to passthrough xattr requests. 'noattr' will + short circuit as if nothing exists. 'nosys' will respond with ENOSYS + as if xattrs are not supported or disabled. (default: passthrough) +- **link_cow=BOOL**: When enabled if a regular file is opened which + has a link count > 1 it will copy the file to a temporary file and + rename over the original. Breaking the link and providing a basic + copy-on-write function similar to cow-shell. 
(default: false) +- **statfs=base|full**: Controls how statfs works. 'base' means it + will always use all branches in statfs calculations. 'full' is in + effect path preserving and only includes branches where the path + exists. (default: base) +- **statfs_ignore=none|ro|nc**: 'ro' will cause statfs calculations to + ignore available space for branches mounted or tagged as 'read-only' + or 'no create'. 'nc' will ignore available space for branches tagged + as 'no create'. (default: none) +- **nfsopenhack=off|git|all**: A workaround for exporting mergerfs + over NFS where there are issues with creating files for write while + setting the mode to read-only. (default: off) +- **branches-mount-timeout=UINT**: Number of seconds to wait at + startup for branches to be a mount other than the mountpoint's + filesystem. (default: 0) +- **follow-symlinks=never|directory|regular|all**: Turns symlinks into + what they point to. (default: never) +- **link-exdev=passthrough|rel-symlink|abs-base-symlink|abs-pool-symlink**: + When a link fails with EXDEV optionally create a symlink to the file + instead. +- **rename-exdev=passthrough|rel-symlink|abs-symlink**: When a rename + fails with EXDEV optionally move the file to a special directory and + symlink to it. +- **readahead=UINT**: Set readahead (in kilobytes) for mergerfs and + branches if greater than 0. (default: 0) +- **posix_acl=BOOL**: Enable POSIX ACL support (if supported by kernel + and underlying filesystem). (default: false) +- **async_read=BOOL**: Perform reads asynchronously. If disabled or + unavailable the kernel will ensure there is at most one pending read + request per file handle and will attempt to order requests by + offset. (default: true) +- **fuse_msg_size=UINT**: Set the max number of pages per FUSE + message. Only available on Linux >= 4.20 and ignored + otherwise. (min: 1; max: 256; default: 256) +- **threads=INT**: Number of threads to use. 
When used alone + (`process-thread-count=-1`) it sets the number of threads reading + and processing FUSE messages. When used together it sets the number + of threads reading from FUSE. When set to zero it will attempt to + discover and use the number of logical cores. If the thread count is + set negative it will look up the number of cores then divide by the + absolute value. ie. threads=-2 on an 8 core machine will result in 8 + / 2 = 4 threads. There will always be at least 1 thread. If set to + -1 in combination with `process-thread-count` then it will try to + pick reasonable values based on CPU thread count. NOTE: higher + number of threads increases parallelism but usually decreases + throughput. (default: 0) +- **read-thread-count=INT**: Alias for `threads`. +- **process-thread-count=INT**: Enables separate thread pool to + asynchronously process FUSE requests. In this mode + `read-thread-count` refers to the number of threads reading FUSE + messages which are dispatched to process threads. -1 means disabled + otherwise acts like `read-thread-count`. (default: -1) +- **process-thread-queue-depth=UINT**: Sets the number of requests any + single process thread can have queued up at one time. Meaning the + total memory usage of the queues is queue depth multiplied by the + number of process threads plus read thread count. 0 sets the depth + to the same as the process thread count. (default: 0) +- **pin-threads=STR**: Selects a strategy to pin threads to CPUs + (default: unset) +- **flush-on-close=never|always|opened-for-write**: Flush data cache + on file close. Mostly for when writeback is enabled or merging + network filesystems. (default: opened-for-write) +- **scheduling-priority=INT**: Set mergerfs' scheduling + priority. Valid values range from -20 to 19. See `setpriority` man + page for more details. (default: -10) +- **fsname=STR**: Sets the name of the filesystem as seen in + **mount**, **df**, etc. 
Defaults to a list of the source paths + concatenated together with the longest common prefix removed. +- **func.FUNC=POLICY**: Sets the specific FUSE function's policy. See + below for the list of value types. Example: **func.getattr=newest** +- **func.readdir=seq|cosr|cor|cosr:INT|cor:INT**: Sets `readdir` + policy. INT value sets the number of threads to use for + concurrency. (default: seq) +- **category.action=POLICY**: Sets policy of all FUSE functions in the + action category. (default: epall) +- **category.create=POLICY**: Sets policy of all FUSE functions in the + create category. (default: epmfs) +- **category.search=POLICY**: Sets policy of all FUSE functions in the + search category. (default: ff) +- **cache.open=UINT**: 'open' policy cache timeout in + seconds. (default: 0) +- **cache.statfs=UINT**: 'statfs' cache timeout in seconds. (default: 0) +- **cache.attr=UINT**: File attribute cache timeout in + seconds. (default: 1) +- **cache.entry=UINT**: File name lookup cache timeout in + seconds. (default: 1) +- **cache.negative_entry=UINT**: Negative file name lookup cache + timeout in seconds. (default: 0) +- **cache.files=libfuse|off|partial|full|auto-full|per-process**: File + page caching mode (default: libfuse) +- **cache.files.process-names=LIST**: A pipe | delimited list of + process [comm](https://man7.org/linux/man-pages/man5/proc.5.html) + names to enable page caching for when + `cache.files=per-process`. (default: "rtorrent|qbittorrent-nox") +- **cache.writeback=BOOL**: Enable kernel writeback caching (default: + false) +- **cache.symlinks=BOOL**: Cache symlinks (if supported by kernel) + (default: false) +- **cache.readdir=BOOL**: Cache readdir (if supported by kernel) + (default: false) +- **parallel-direct-writes=BOOL**: Allow the kernel to dispatch + multiple, parallel (non-extending) write requests for files opened + with `cache.files=per-process` (if the process is not in `process-names`) + or `cache.files=off`. 
(This requires kernel support, and was added in v6.2) +- **direct_io**: deprecated - Bypass page cache. Use `cache.files=off` + instead. (default: false) +- **kernel_cache**: deprecated - Do not invalidate data cache on file + open. Use `cache.files=full` instead. (default: false) +- **auto_cache**: deprecated - Invalidate data cache if file mtime or + size change. Use `cache.files=auto-full` instead. (default: false) +- **async_read**: deprecated - Perform reads asynchronously. Use + `async_read=true` instead. +- **sync_read**: deprecated - Perform reads synchronously. Use + `async_read=false` instead. +- **splice_read**: deprecated - Does nothing. +- **splice_write**: deprecated - Does nothing. +- **splice_move**: deprecated - Does nothing. +- **allow_other**: deprecated - mergerfs v2.35.0 and newer sets this FUSE option + automatically if running as root. +- **use_ino**: deprecated - mergerfs should always control inode + calculation so this is enabled all the time. + +**NOTE:** Options are evaluated in the order listed so if the options +are **func.rmdir=rand,category.action=ff** the **action** category +setting will override the **rmdir** setting. + +**NOTE:** Always look at the documentation for the version of mergerfs +you're using. Not all features are available in older releases. Use +`man mergerfs` or find the docs as linked in the release. + +#### Value Types + +- BOOL = 'true' | 'false' +- INT = [MIN_INT,MAX_INT] +- UINT = [0,MAX_INT] +- SIZE = 'NNM'; NN = INT, M = 'K' | 'M' | 'G' | 'T' +- STR = string (may refer to an enumerated value, see details of + argument) +- FUNC = filesystem function +- CATEGORY = function category +- POLICY = mergerfs function policy + +### branches + +The 'branches' argument is a colon (':') delimited list of paths to be +pooled together. It does not matter if the paths are on the same or +different filesystems nor does it matter the filesystem type (within +reason). 
Used and available space will not be duplicated for paths on the
same filesystem and any features which aren't supported by the
underlying filesystem (such as file attributes or extended attributes)
will return the appropriate errors.

Branches currently have two options which can be set: a type, which
impacts whether or not the branch is included in a policy calculation,
and an individual minfreespace value. The values are set by appending
an `=` to the end of a branch designation and using commas as
delimiters. Example: `/mnt/drive=RW,1234`

#### branch mode

- RW: (read/write) - Default behavior. Will be eligible in all policy
  categories.
- RO: (read-only) - Will be excluded from `create` and `action`
  policies. Same as a read-only mounted filesystem would be (though
  faster to process).
- NC: (no-create) - Will be excluded from `create` policies. You can't
  create on that branch but you can change or delete.

#### minfreespace

Same purpose and syntax as the global option but specific to the
branch. If not set the global value is used.

#### globbing

To make it easier to include multiple branches mergerfs supports
[globbing](http://linux.die.net/man/7/glob). **The globbing tokens
MUST be escaped when used via the shell else the shell will apply the
glob itself.** For example (illustrative paths):

```
$ mergerfs /mnt/hdd\*:/mnt/ssd /media
```
diff --git a/mkdocs/docs/pages/documentation/performance.md b/mkdocs/docs/pages/documentation/performance.md
new file mode 100644
index 000000000..5ecca36bf
--- /dev/null
+++ b/mkdocs/docs/pages/documentation/performance.md
@@ -0,0 +1,39 @@
# PERFORMANCE

mergerfs is at its core just a proxy and therefore its theoretical
max performance is that of the underlying devices. However, given it
is a FUSE filesystem working from userspace there is an increase in
overhead relative to kernel based solutions. That said the performance
can match the theoretical max but it depends greatly on the system's
configuration.
Especially when adding network filesystems into the mix there are
many variables which can impact performance: device speeds and
latency, network speeds and latency, general concurrency, read/write
sizes, etc. Unfortunately, given the number of variables, it has been
difficult to find a single set of settings which provides optimal
performance. If you're having performance issues please look over the
suggestions below (including the benchmarking section.)

NOTE: be sure to read about these features before changing them to
understand what behaviors they may impact.

- disable `security_capability` and/or `xattr`
- increase cache timeouts `cache.attr`, `cache.entry`, `cache.negative_entry`
- enable (or disable) page caching (`cache.files`)
- enable `parallel-direct-writes`
- enable `cache.writeback`
- enable `cache.statfs`
- enable `cache.symlinks`
- enable `cache.readdir`
- change the number of worker threads
- disable `posix_acl`
- disable `async_read`
- test theoretical performance using `nullrw` or mounting a ram disk
- use `symlinkify` if your data is largely static and read-only
- use tiered cache devices
- use LVM and LVM cache to place a SSD in front of your HDDs
- increase readahead: `readahead=1024`

If you come across a setting that significantly impacts performance
please contact trapexit so he may investigate further. Please test
against your normal setup, against a single branch, and with
`nullrw=true`.
diff --git a/mkdocs/docs/pages/documentation/remote_filesystems.md b/mkdocs/docs/pages/documentation/remote_filesystems.md
new file mode 100644
index 000000000..55772c574
--- /dev/null
+++ b/mkdocs/docs/pages/documentation/remote_filesystems.md
@@ -0,0 +1,114 @@
# Remote Filesystems

Many users ask about compatibility with remote filesystems. This
section describes any known issues or quirks when using mergerfs with
common remote filesystems.
Keep in mind that, like with caching, it is not a good idea to change
the contents of the remote filesystem
[out-of-band](https://en.wikipedia.org/wiki/Out-of-band). Meaning that
you really shouldn't change the contents of the underlying filesystems
or mergerfs on the server hosting the remote filesystem. Doing so can
lead to weird behavior, inconsistency, errors, and even data
corruption should multiple programs try to write or read the same data
at the same time. This isn't to say you can't do it or that data
corruption is likely but it _could_ happen. It is better to always use
the remote filesystem, even on the machine serving it.

## NFS

[NFS](https://en.wikipedia.org/wiki/Network_File_System) is a common
remote filesystem on Unix/POSIX systems. Due to how NFS works there
are some settings which need to be set in order for mergerfs to work
with it.

It should be noted that NFS and FUSE (the technology mergerfs uses) do
not work perfectly with one another due to certain design choices in
FUSE (and mergerfs.) Due to these issues, it is generally recommended
to use SMB when possible until the situation changes. That said
exporting mergerfs over NFS should generally work and issues
discovered should still be reported.

To ensure compatibility between mergerfs and NFS use the following
settings.

mergerfs settings:

- noforget
- inodecalc=path-hash

NFS export settings:

- fsid=UUID
- no_root_squash

`noforget` is needed because NFS uses the `name_to_handle_at` and
`open_by_handle_at` functions which allow a program to keep a
reference to a file without technically having it open in the typical
sense. The problem is that FUSE has no way to know that NFS has a
handle that it will later use to open the file again. As a result, it
is possible for the kernel to tell mergerfs to forget about the node
and should NFS ever ask for that node's details in the future it would
have nothing to respond with.
Keeping nodes around forever is not ideal but it is, at the moment,
the only way to manage the situation.

`inodecalc=path-hash` is needed because NFS is sensitive to
out-of-band changes. FUSE doesn't care if a file's inode value changes
but NFS, being stateful, does. So if you used the default inode
calculation algorithm then it is possible that, after a file or
directory changes out-of-band, the branch mergerfs selects for the
file will differ and therefore the inode would change. This isn't an
ideal solution and others are being considered but it works for most
situations.

`fsid=UUID` is needed because FUSE filesystems don't have distinct
`st_dev` values which can cause issues when exporting. The easiest
thing to do is set each mergerfs export `fsid` to some random
value. An easy way to generate a random value is to use the command
line tool `uuid` or `uuidgen` or a website such as
[uuidgenerator.net](https://www.uuidgenerator.net/).

`no_root_squash` is not strictly necessary but root squashing, if
enabled, can lead to confusing permission and ownership issues.

## SMB / CIFS

[SMB](https://en.wikipedia.org/wiki/Server_Message_Block) is a
protocol most associated with Microsoft Windows systems, used to share
files, printers, etc. However, due to the popularity of Windows, it is
also supported on many other platforms including Linux. The most
popular way of supporting SMB on Linux is via the software Samba.

[Samba](https://en.wikipedia.org/wiki/Samba_%28software%29), and other
ways of serving Linux filesystems via SMB, should work fine with
mergerfs. These services do not tend to use the same technologies
which NFS uses and therefore don't have the same issues. There should
be no special settings required to use mergerfs with Samba. However,
[CIFSD](https://en.wikipedia.org/wiki/CIFSD) and other programs have
not been extensively tested. If you use mergerfs with CIFSD or other
SMB servers please submit your experiences so these docs can be
updated.
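To make the SMB case concrete, here is a minimal, illustrative
`smb.conf` share definition for a mergerfs pool. The share name and
path are assumptions, not from the original document; the point is
that nothing mergerfs specific is required:

```ini
# Illustrative Samba share of a mergerfs pool (name and path are assumptions)
[pool]
   path = /mnt/mergerfs
   browseable = yes
   read only = no
```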
## SSHFS

[SSHFS](https://en.wikipedia.org/wiki/SSHFS) is a FUSE filesystem
leveraging SSH as the connection and transport layer. While often
simpler to set up when compared to NFS or Samba the performance can be
lacking and the project is very much in maintenance mode.

There are no known issues using sshfs with mergerfs. You may want to
use the following arguments to improve performance but your mileage
may vary.

- `-o Ciphers=arcfour`
- `-o Compression=no`

More info can be found
[here](https://ideatrash.net/2016/08/odds-and-ends-optimizing-sshfs-moving.html).

## Other

There are other remote filesystems but none popularly used to serve
mergerfs. If you use something not listed above feel free to reach out
and I will add it to the list.
diff --git a/mkdocs/docs/pages/documentation/runtime_interfaces.md b/mkdocs/docs/pages/documentation/runtime_interfaces.md
new file mode 100644
index 000000000..2dd6716fd
--- /dev/null
+++ b/mkdocs/docs/pages/documentation/runtime_interfaces.md
@@ -0,0 +1,111 @@
# RUNTIME INTERFACES

## RUNTIME CONFIG

#### .mergerfs pseudo file

```
/.mergerfs
```

There is a pseudo file available at the mount point which allows for
the runtime modification of certain **mergerfs** options. The file
will not show up in **readdir** but can be **stat**'ed and manipulated
via [{list,get,set}xattrs](http://linux.die.net/man/2/listxattr)
calls.

Any changes made at runtime are **not** persisted. If you wish for
values to persist they must be included as options wherever you
configure the mounting of mergerfs (/etc/fstab).

##### Keys

Use `getfattr -d /mountpoint/.mergerfs` or `xattr -l
/mountpoint/.mergerfs` to see all supported keys. Some are
informational and therefore read-only. `setxattr` will return EINVAL
(invalid argument) on read-only keys.

##### Values

Same as the command line.

###### user.mergerfs.branches

Used to query or modify the list of branches.
When modifying there are several shortcuts to ease manipulation of
the list.

| Value | Description |
| -------- | -------------------------- |
| [list] | set |
| +<[list] | prepend |
| +>[list] | append |
| -[list] | remove all values provided |
| -< | remove first in list |
| -> | remove last in list |

`xattr -w user.mergerfs.branches +`

- A `strace` of mergerfs while the program is trying to do whatever it
  is failing to do:
  - `strace -fvTtt -s 256 -p <mergerfs-pid> -o /tmp/mergerfs.strace.txt`
- **Precise** directions on replicating the issue. Do not leave
  **anything** out.
- Try to recreate the problem in the simplest way using standard
  programs: `ln`, `mv`, `cp`, `ls`, `dd`, etc.

#### Contact / Issue submission

- github.com: https://github.com/trapexit/mergerfs/issues
- discord: https://discord.gg/MpAr69V
- reddit: https://www.reddit.com/r/mergerfs

#### Donations

https://github.com/trapexit/support

Development and support of a project like mergerfs requires a
significant amount of time and effort. The software is released under
the very liberal ISC license and is therefore free to use for personal
or commercial uses.

If you are a personal user and find mergerfs and its support valuable
and would like to support the project financially it would be very
much appreciated.

If you are using mergerfs commercially please consider sponsoring the
project to ensure it continues to be maintained and receive
updates. If custom features are needed feel free to [contact me
directly](mailto:support@spawn.link).
diff --git a/mkdocs/docs/pages/documentation/terminology.md b/mkdocs/docs/pages/documentation/terminology.md
new file mode 100644
index 000000000..0ed792778
--- /dev/null
+++ b/mkdocs/docs/pages/documentation/terminology.md
@@ -0,0 +1,9 @@
# TERMINOLOGY

- branch: A base path used in the pool.
- pool: The mergerfs mount. The union of the branches.
- relative path: The path in the pool relative to the branch and mount.
- function: A filesystem call (open, unlink, create, getattr, rmdir, etc.)
- category: A collection of functions based on basic behavior (action, create, search).
- policy: The algorithm used to select a file when performing a function.
- path preservation: Aspect of some policies which includes checking the path for which a file would be created.
diff --git a/mkdocs/docs/pages/documentation/tips_notes.md b/mkdocs/docs/pages/documentation/tips_notes.md
new file mode 100644
index 000000000..976d47c1a
--- /dev/null
+++ b/mkdocs/docs/pages/documentation/tips_notes.md
@@ -0,0 +1,43 @@
# TIPS / NOTES

- This document is literal and thorough. If a suspected feature isn't
  mentioned it doesn't exist. If certain libfuse arguments aren't
  listed they probably shouldn't be used.
- Ensure you're using the latest version.
- Run mergerfs as `root`. mergerfs is designed and intended to be run
  as `root` and may exhibit incorrect behavior if run otherwise.
- If you don't see some directories and files you expect, policies
  seem to skip branches, you get strange permission errors, etc. be
  sure the underlying filesystems' permissions are all the same. Use
  `mergerfs.fsck` to audit the filesystem for out of sync permissions.
- If you still have permission issues be sure you are using POSIX ACL
  compliant filesystems. mergerfs doesn't generally make exceptions
  for FAT, NTFS, or other non-POSIX filesystems.
- Do **not** use `cache.files=off` if you expect applications (such as
  rtorrent) to use [mmap](http://linux.die.net/man/2/mmap)
  files. Shared mmap is not currently supported in FUSE w/ page
  caching disabled. Enabling `dropcacheonclose` is recommended when
  `cache.files=partial|full|auto-full`.
- [Kodi](http://kodi.tv), [Plex](http://plex.tv),
  [Subsonic](http://subsonic.org), etc. can use directory
  [mtime](http://linux.die.net/man/2/stat) to more efficiently
  determine whether to scan for new content rather than simply
  performing a full scan.
If using the default **getattr** policy of **ff** it's possible those
  programs will miss an update because **ff** returns the **stat**
  info of the first directory found while a directory on another
  branch may have the more recently updated **mtime**. To fix this you
  will want to set **func.getattr=newest**. Remember though that this
  is just **stat**. If the file is later **open**'ed or **unlink**'ed
  and the policy is different for those then a completely different
  file or directory could be acted on.
- Some policies mixed with some functions may result in strange
  behaviors. Not that some of these behaviors and race conditions
  couldn't happen outside **mergerfs** but they are far more likely to
  occur on account of the attempt to merge multiple sources of data
  which could be out of sync due to the different policies.
- For consistency it's generally best to set **category** wide
  policies rather than individual **func**'s. This will help limit the
  confusion of tools such as
  [rsync](http://linux.die.net/man/1/rsync). However, the flexibility
  is there if needed.
diff --git a/mkdocs/docs/pages/documentation/tooling.md b/mkdocs/docs/pages/documentation/tooling.md
new file mode 100644
index 000000000..76bf119a3
--- /dev/null
+++ b/mkdocs/docs/pages/documentation/tooling.md
@@ -0,0 +1,107 @@
# TOOLING

## preload.so

EXPERIMENTAL

For some time there has been work to enable passthrough IO in
FUSE. Passthrough IO would allow for near native performance with
regards to reads and writes (at the expense of certain mergerfs
features.) However, there have been several complications which have
kept the feature from making it into the mainline Linux kernel. Until
that feature is available there are two methods to provide similar
functionality. One method uses the LD_PRELOAD feature of the dynamic
linker. The other leverages ptrace to intercept syscalls. Each has its
disadvantages.
At the moment only a preload based tool is available. A ptrace based
tool may be developed later if there is a need.

`/usr/lib/mergerfs/preload.so`

This [preloadable
library](https://man7.org/linux/man-pages/man8/ld.so.8.html#ENVIRONMENT)
overrides the creation and opening of files in order to simulate
passthrough file IO. It catches the open/creat/fopen calls, has
mergerfs do the call, queries mergerfs for the branch the file exists
on, reopens the file on the underlying filesystem and returns that
instead. Meaning that you will get native read/write performance
because mergerfs is no longer part of the workflow. Keep in mind that
this also means certain mergerfs features that work by interrupting
the read/write workflow, such as `moveonenospc`, will no longer work.

Also, understand that this will only work on dynamically linked
software. Anything statically compiled will not work. Many Go and
Rust apps are statically compiled.

The library will not interfere with non-mergerfs filesystems. The
library is written to always fall back to returning the mergerfs
opened file on error.

While the library was written to account for a number of edge cases
there could be some not yet accounted for so please report any
oddities.

Thank you to
[nohajc](https://github.com/nohajc/mergerfs-io-passthrough) for
prototyping the idea.

### general usage

```sh
LD_PRELOAD=/usr/lib/mergerfs/preload.so touch /mnt/mergerfs/filename
```

### Docker usage

Assume `/mnt/fs0` and `/mnt/fs1` are pooled with mergerfs at `/media`.

All mergerfs branch paths _must_ be bind mounted into the container at
the same path as found on the host so the preload library can see
them.
+ +```sh +docker run \ + -e LD_PRELOAD=/usr/lib/mergerfs/preload.so \ + -v /usr/lib/mergerfs/preload.so:/usr/lib/mergerfs/preload.so:ro \ + -v /media:/data \ + -v /mnt:/mnt \ + ubuntu:latest \ + bash +``` + +or more explicitly + +```sh +docker run \ + -e LD_PRELOAD=/usr/lib/mergerfs/preload.so \ + -v /usr/lib/mergerfs/preload.so:/usr/lib/mergerfs/preload.so:ro \ + -v /media:/data \ + -v /mnt/fs0:/mnt/fs0 \ + -v /mnt/fs1:/mnt/fs1 \ + ubuntu:latest \ + bash +``` + +### systemd unit + +Use the `Environment` option to set the LD_PRELOAD variable. + +- https://www.freedesktop.org/software/systemd/man/latest/systemd.service.html#Command%20lines +- https://serverfault.com/questions/413397/how-to-set-environment-variable-in-systemd-service + +``` +[Service] +Environment=LD_PRELOAD=/usr/lib/mergerfs/preload.so +``` + +## Misc + +- https://github.com/trapexit/mergerfs-tools + - mergerfs.ctl: A tool to make it easier to query and configure mergerfs at runtime + - mergerfs.fsck: Provides permissions and ownership auditing and the ability to fix them + - mergerfs.dedup: Will help identify and optionally remove duplicate files + - mergerfs.dup: Ensure there are at least N copies of a file across the pool + - mergerfs.balance: Rebalance files across filesystems by moving them from the most filled to the least filled + - mergerfs.consolidate: move files within a single mergerfs directory to the filesystem with most free space +- https://github.com/trapexit/scorch + - scorch: A tool to help discover silent corruption of files and keep track of files +- https://github.com/trapexit/bbf + - bbf (bad block finder): a tool to scan for and 'fix' hard drive bad blocks and find the files using those blocks diff --git a/mkdocs/docs/pages/documentation/upgrade.md b/mkdocs/docs/pages/documentation/upgrade.md new file mode 100644 index 000000000..8e42e0743 --- /dev/null +++ b/mkdocs/docs/pages/documentation/upgrade.md @@ -0,0 +1,25 @@ +# UPGRADE + +mergerfs can be upgraded live by mounting on 
top of the previous instance. Simply install the new version of
mergerfs and follow the instructions below.

Run mergerfs again or if using `/etc/fstab` call for it to mount
again. Existing open files will continue to work fine though they
won't see runtime changes since any such change would apply only to
the new mount. If you plan on changing settings with the new mount you
should / could apply those before mounting the new version.

```
$ sudo mount /mnt/mergerfs
$ mount | grep mergerfs
media on /mnt/mergerfs type mergerfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other)
media on /mnt/mergerfs type mergerfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other)
```

A problem with this approach is that the underlying instance will
continue to run even if the software using it stops or is
restarted. To work around this you can use a "lazy umount". Before
mounting the new instance of mergerfs over the top of the mount point
issue: `umount -l <mountpoint>`. Or you can let mergerfs do it by
setting the option `lazy-umount-mountpoint=true`.
diff --git a/mkdocs/docs/pages/faq/compatibility_and_integration.md b/mkdocs/docs/pages/faq/compatibility_and_integration.md
new file mode 100644
index 000000000..5d9133b40
--- /dev/null
+++ b/mkdocs/docs/pages/faq/compatibility_and_integration.md
@@ -0,0 +1,39 @@
# Compatibility and Integration

## Can I use mergerfs without SnapRAID? SnapRAID without mergerfs?

Yes. They are completely unrelated pieces of software.

## Can mergerfs run via Docker, Podman, Kubernetes, etc.?

Yes. With Docker you'll need to include `--cap-add=SYS_ADMIN
--device=/dev/fuse --security-opt=apparmor:unconfined` or similar with
other container runtimes. You should also run it as root or grant it
sufficient capabilities to change user and group identity as well as
root-like filesystem permissions.

Keep in mind that you **MUST** consider identity when using
containers.
For example: supplemental groups will be picked up from +the container unless you properly manage users and groups by sharing +relevant /etc files or by using some other means to share identity +across containers. Similarly, if you use "rootless" containers and user +namespaces to do uid/gid translations you **MUST** consider that while +managing shared files. + +Also, as mentioned by [hotio](https://hotio.dev/containers/mergerfs), +with Docker you should probably be mounting with `bind-propagation` +set to `slave`. + +## Does mergerfs support CoW / copy-on-write / writes to read-only filesystems? + +Not in the sense of a filesystem like BTRFS or ZFS nor in the +overlayfs or aufs sense. It does offer a +[cow-shell](http://manpages.ubuntu.com/manpages/bionic/man1/cow-shell.1.html) +like hard link breaking (copy to temp file then rename over original) +which can be useful when wanting to save space by hardlinking +duplicate files but wish to treat each name as if it were a unique and +separate file. + +If you want to write to a read-only filesystem you should look at +overlayfs. You can always include the overlayfs mount into a mergerfs +pool. diff --git a/mkdocs/docs/pages/faq/configuration_and_policies.md b/mkdocs/docs/pages/faq/configuration_and_policies.md new file mode 100644 index 000000000..75dbd3aa6 --- /dev/null +++ b/mkdocs/docs/pages/faq/configuration_and_policies.md @@ -0,0 +1,76 @@ +# Configuration and Policies + +## What policies should I use? + +Unless you're doing something more niche the average user is probably +best off using `mfs` for `category.create`. It will spread files out +across your branches based on available space. Use `mspmfs` if you +want to try to colocate the data a bit more. You may want to use `lus` +if you prefer a slightly different distribution of data if you have a +mix of smaller and larger filesystems. Generally though `mfs`, `lus`, +or even `rand` are good for the general use case. 
If you are starting +with an imbalanced pool you can use the tool **mergerfs.balance** to +redistribute files across the pool. + +If you really wish to try to colocate files based on directory you can +set `func.create` to `epmfs` or similar and `func.mkdir` to `rand` or +`eprand` depending on if you just want to colocate generally or on +specific branches. Either way the _need_ to colocate is rare. For +instance: if you wish to remove the device regularly and want the data +to predictably be on that device or if you don't use backup at all and +don't wish to replace that data piecemeal. In which case using path +preservation can help but will require some manual +attention. Colocating after the fact can be accomplished using the +**mergerfs.consolidate** tool. If you don't need strict colocation +which the `ep` policies provide then you can use the `msp` based +policies which will walk back the path till finding a branch that +works. + +Ultimately there is no correct answer. It is a preference or based on +some particular need. mergerfs is very easy to test and experiment +with. I suggest creating a test setup and experimenting to get a sense +of what you want. + +`epmfs` is the default `category.create` policy because `ep` policies +are not going to change the general layout of the branches. It won't +place files/dirs on branches that don't already have the relative +branch. So it keeps the system in a known state. It's much easier to +stop using `epmfs` or redistribute files around the filesystem than it +is to consolidate them back. + +## What settings should I use? + +Depends on what features you want. Generally speaking, there are no +"wrong" settings. All settings are performance or feature related. The +best bet is to read over the available options and choose what fits +your situation. If something isn't clear from the documentation please +reach out and the documentation will be improved. 
+ +That said, for the average person, the following should be fine: + +`cache.files=off,dropcacheonclose=true,category.create=mfs` + +## Why are all my files ending up on 1 filesystem?! + +Did you start with empty filesystems? Did you explicitly configure a +`category.create` policy? Are you using an `existing path` / `path +preserving` policy? + +The default create policy is `epmfs`. That is a path preserving +algorithm. With such a policy for `mkdir` and `create` with a set of +empty filesystems it will select only 1 filesystem when the first +directory is created. Anything, files or directories, created in that +first directory will be placed on the same branch because it is +preserving paths. + +This catches a lot of new users off guard but changing the default +would break the setup for many existing users and this policy is the +safest policy as it will not change the general layout of the existing +filesystems. If you do not care about path preservation and wish your +files to be spread across all your filesystems change to `mfs` or +similar policy as described above. If you do want path preservation +you'll need to perform the manual act of creating paths on the +filesystems you want the data to land on before transferring your +data. Setting `func.mkdir=epall` can simplify managing path +preservation for `create`. Or use `func.mkdir=rand` if you're +interested in just grouping directory content by filesystem. diff --git a/mkdocs/docs/pages/faq/general_information_and_overview.md b/mkdocs/docs/pages/faq/general_information_and_overview.md new file mode 100644 index 000000000..ac3009838 --- /dev/null +++ b/mkdocs/docs/pages/faq/general_information_and_overview.md @@ -0,0 +1,64 @@ +# General Information and Overview + +## How well does mergerfs scale? Is it "production ready?" + +Users have reported running mergerfs on everything from a Raspberry Pi +to dual socket Xeon systems with >20 cores. 
I'm aware of at least a few companies which use mergerfs in
production. [Open Media Vault](https://www.openmediavault.org)
includes mergerfs as its sole solution for pooling filesystems. The
author of mergerfs had it running for over 300 days managing 16+
devices with reasonably heavy 24/7 read and write usage, stopping only
after the machine's power supply died.

Most serious issues (crashes or data corruption) have been due to
[kernel
bugs](https://github.com/trapexit/mergerfs/wiki/Kernel-Issues-&-Bugs),
all of which are fixed in stable releases.

## Why use FUSE? Why not a kernel based solution?

As with any solution to a problem, there are advantages and
disadvantages to each one.

A FUSE based solution has all the downsides of FUSE:

- Higher IO latency due to the trips in and out of kernel space
- Higher general overhead due to trips in and out of kernel space
- Double caching when using page caching
- Misc limitations due to FUSE's design

But FUSE also has a lot of upsides:

- Easier to offer a cross platform solution
- Easier forward and backward compatibility
- Easier updates for users
- Easier and faster release cadence
- Allows more flexibility in design and features
- Overall easier to write, secure, and maintain
- Much lower barrier to entry (getting code into the kernel takes a
  lot of time and effort initially)

FUSE was chosen because of all the advantages listed above. The
negatives of FUSE do not outweigh the positives.

## Is my OS's libfuse needed for mergerfs to work?

No. Normally `mount.fuse` is needed to get mergerfs (or any FUSE
filesystem) to mount using the `mount` command but in vendoring the
libfuse library the `mount.fuse` app has been renamed to
`mount.mergerfs` meaning the filesystem type in `fstab` can simply be
`mergerfs`. That said there should be no harm in having it installed
and continuing to use `fuse.mergerfs` as the type in `/etc/fstab`.
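For illustration, a hypothetical `/etc/fstab` entry using the plain
`mergerfs` type; the branch paths and options here are assumptions,
not from this document:

```
# <branches>          <mountpoint>  <type>    <options>                              <dump> <pass>
/mnt/hdd0:/mnt/hdd1   /media        mergerfs  cache.files=off,dropcacheonclose=true  0      0
```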
If `mergerfs` doesn't work as a type it could be due to how the
`mount.mergerfs` tool was installed. It must be in `/sbin/` with
proper permissions.

## Why was splice support removed?

After a lot of testing over the years, splicing always appeared to, at
best, provide equivalent performance and, in some cases, worse
performance. Splice is not supported on other platforms forcing a
traditional read/write fallback to be provided. The splice code was
removed to simplify the codebase.
diff --git a/mkdocs/docs/pages/faq/recommendations_and_warnings.md b/mkdocs/docs/pages/faq/recommendations_and_warnings.md
new file mode 100644
index 000000000..47daca619
--- /dev/null
+++ b/mkdocs/docs/pages/faq/recommendations_and_warnings.md
@@ -0,0 +1,48 @@
# Recommendations and Warnings

## What should mergerfs NOT be used for?

- databases: Even if the database stored data in separate files
  (mergerfs wouldn't offer much otherwise) the higher latency of the
  indirection will kill performance. If it is a lightly used SQLite
  database then it may be fine but you'll need to test.
- VM images: For the same reasons as databases. VM images are accessed
  very aggressively and mergerfs will introduce too much latency (if
  it works at all).
- As a replacement for RAID: mergerfs is just for pooling branches. If
  you need that kind of device performance aggregation or high
  availability you should stick with RAID.

## It's mentioned that there are some security issues with mhddfs. What are they? How does mergerfs address them?

[mhddfs](https://github.com/trapexit/mhddfs) manages running as
**root** by calling
[getuid()](https://github.com/trapexit/mhddfs/blob/cae96e6251dd91e2bdc24800b4a18a74044f6672/src/main.c#L319)
and if it returns **0** then it will
[chown](http://linux.die.net/man/1/chown) the file. Not only is that a
race condition but it doesn't handle other situations.
Rather than
+attempting to simulate POSIX ACL behavior, the proper way to manage
+this is to use [seteuid](http://linux.die.net/man/2/seteuid) and
+[setegid](http://linux.die.net/man/2/setegid) to, in effect, become
+the user making the original call and perform the action as them. This
+is what mergerfs does and why mergerfs should always run as root.
+
+In Linux, setreuid syscalls apply only to the calling thread. glibc
+hides this away by using realtime signals to inform all threads to
+change credentials. Taking after **Samba**, mergerfs uses
+**syscall(SYS_setreuid,...)** to set the caller's credentials for that
+thread only, jumping back to **root** as necessary should escalated
+privileges be needed (for instance: to clone paths between
+filesystems).
+
+For non-Linux systems, mergerfs uses a read-write lock and changes
+credentials only when necessary. If multiple threads are to be user X
+then only the first one will need to change the process's
+credentials. So long as the other threads need to be user X they will
+take a read lock, allowing multiple threads to share the
+credentials. Once a request comes in to run as user Y that thread will
+attempt a write lock and change to Y's credentials when it can. If the
+ability to give writers priority is supported then that flag will be
+used so threads trying to change credentials don't starve. This isn't
+the best solution but should work reasonably well assuming there are
+few users.
diff --git a/mkdocs/docs/pages/faq/technical_behavior_and_limitations.md b/mkdocs/docs/pages/faq/technical_behavior_and_limitations.md
new file mode 100644
index 000000000..a59ba5fa2
--- /dev/null
+++ b/mkdocs/docs/pages/faq/technical_behavior_and_limitations.md
@@ -0,0 +1,168 @@
+# Technical Behavior and Limitations
+
+## Do hardlinks work?
+
+Yes. See also the option `inodecalc` for how inode values are
+calculated.
+
+What mergerfs does not do is fake hard links across branches. Read
+the section "rename & link" for how it works.
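Hard-linked names share an inode number and bump the file's link
count, which can be verified with `stat`. A generic sketch on any
single local filesystem (the scratch directory is purely
illustrative):

```shell
# Create a scratch directory, hard link a file, and compare inodes
dir=$(mktemp -d)
echo data > "$dir/a"
ln "$dir/a" "$dir/b"       # second name for the same inode
stat -c '%i %h' "$dir/a"   # prints inode number and link count (2)
stat -c '%i %h' "$dir/b"   # same inode number, same link count
rm -r "$dir"
```

The same check can be run through a mergerfs mountpoint; the inode
values reported there depend on the `inodecalc` setting.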
+
+Remember that hardlinks will NOT work across devices. That includes
+between the original filesystem and a mergerfs pool, between two
+separate pools of the same underlying filesystems, or bind mounts of
+paths within the mergerfs pool. The latter is common when using Docker
+or Podman. Multiple volumes (bind mounts) to the same underlying
+filesystem are considered different devices. There is no way to link
+between them. You should mount in the highest directory in the
+mergerfs pool that includes all the paths you need if you want links
+to work.
+
+## How does mergerfs handle moving and copying of files?
+
+This is a _very_ common mistaken assumption regarding how filesystems
+work. There is no such thing as "move" or "copy." These concepts are
+high level behaviors made up of numerous independent steps and _not_
+individual filesystem functions.
+
+A "move" can include a "copy," so let's describe copy first.
+
+When an application copies a file from source to destination it can do
+so in a number of ways but the basics are the following.
+
+1. `open` the source file.
+2. `create` the destination file.
+3. `read` a chunk of data from source and `write` to
+   destination. Continue till it runs out of data to copy.
+4. Copy file metadata (`stat`) such as ownership (`chown`),
+   permissions (`chmod`), timestamps (`utimes`), extended attributes
+   (`getxattr`, `setxattr`), etc.
+5. `close` source and destination files.
+
+A "move" is typically a `rename(src,dst)` and if that errors with
+`EXDEV` (meaning the source and destination are on different
+filesystems) the application will "copy" the file as described above
+and then remove (`unlink`) the source.
+
+The `rename(src,dst)`, `open(src)`, `create(dst)`, data copying,
+metadata copying, `unlink(src)`, etc. are entirely distinct and
+separate events. There is really no practical way to know that what is
+ultimately occurring is the "copying" of a file or what the source
+file would be.
Since the source is not known there is no way to know +how large a created file is destined to become. This is why it is +impossible for mergerfs to choose the branch for a `create` based on +file size. The only context provided when a file is created, besides +the name, is the permissions, if it is to be read and/or written, and +some low level settings for the operating system. + +All of this means that mergerfs can not make decisions when a file is +created based on file size or the source of the data. That information +is simply not available. At best mergerfs could respond to files +reaching a certain size when writing data or when a file is closed. + +Related: if a user wished to have mergerfs perform certain activities +based on the name of a file it is common and even best practice for a +program to write to a temporary file first and then rename to its +final destination. That temporary file name will typically be random +and have no indication of the type of file being written. + +## Does FICLONE or FICLONERANGE work? + +Unfortunately not. FUSE, the technology mergerfs is based on, does not +support the `clone_file_range` feature needed for it to work. mergerfs +won't even know such a request is made. The kernel will simply return +an error back to the application making the request. + +Should FUSE gain the ability mergerfs will be updated to support it. + +## Why do I get an "out of space" / "no space left on device" / ENOSPC error even though there appears to be lots of space available? + +First make sure you've read the sections above about policies, path +preservation, branch filtering, and the options **minfreespace**, +**moveonenospc**, **statfs**, and **statfs_ignore**. + +mergerfs is simply presenting a union of the content within multiple +branches. The reported free space is an aggregate of space available +within the pool (behavior modified by **statfs** and +**statfs_ignore**). It does not represent a contiguous space. 
This is similar to how
+read-only filesystems, filesystems with quotas, or filesystems with
+reserved space report their full theoretical space available.
+
+Due to path preservation, branch tagging, read-only status, and
+**minfreespace** settings it is perfectly valid that `ENOSPC` / "out
+of space" / "no space left on device" be returned. It is doing what
+was asked of it: filtering possible branches due to those
+settings. Only one error can be returned and if one of the reasons for
+filtering a branch was **minfreespace** then it will be returned as
+such. **moveonenospc** is only relevant to writing a file which is too
+large for the filesystem it's currently on.
+
+It is also possible that the filesystem selected has run out of
+inodes. Use `df -i` to list the total and available inodes per
+filesystem.
+
+If you don't care about path preservation then simply change the
+`create` policy to one which isn't path preserving. `mfs` is probably
+what most are looking for. The reason it's not the default is that it
+was originally set to `epmfs` and changing it now would change
+people's setups. Such a setting change will likely occur in mergerfs 3.
+
+## Why does the total available space in mergerfs not match the space available outside the pool?
+
+Are you using ext2/3/4? With reserve for root? mergerfs uses available
+space for statfs calculations. If you've reserved space for root then
+it won't show up.
+
+You can remove the reserve by running: `tune2fs -m 0 <device>`
+
+## I notice massive slowdowns of writes when enabling cache.files.
+
+When file caching is enabled in any form (`cache.files!=off`) it will
+issue `getxattr` requests for `security.capability` prior to _every
+single write_. This will usually result in performance degradation,
+especially when using a network filesystem (such as NFS or SMB).
+Unfortunately, at this moment, the kernel is not caching the response.
+
+To work around this situation mergerfs offers a few solutions.
+
+1. Set `security_capability=false`. It will short circuit any call and
+   return `ENOATTR`.
This still means, though, that mergerfs will
+   receive the request before every write but at least it doesn't get
+   passed through to the underlying filesystem.
+2. Set `xattr=noattr`. Same as above but applies to _all_ calls to
+   getxattr, not just `security.capability`. This will not be cached
+   by the kernel either but mergerfs' runtime config system will still
+   function.
+3. Set `xattr=nosys`. Results in mergerfs returning `ENOSYS` which
+   _will_ be cached by the kernel. No future xattr calls will be
+   forwarded to mergerfs. The downside is that this also means the
+   xattr based config and query functionality won't work either.
+4. Disable file caching. If you aren't using applications which use
+   `mmap` it's probably simpler to just disable it altogether. The
+   kernel won't send the requests when caching is disabled.
+
+## Why can't I see my files / directories?
+
+It's almost always a permissions issue. Unlike mhddfs and
+unionfs-fuse, which run as root and attempt to access content as
+such, mergerfs always changes its credentials to those of the
+caller. This means that if the user does not have access to a file or
+directory then neither will mergerfs. However, because mergerfs is
+creating a union of paths it may be able to read some files and
+directories on one filesystem but not another, resulting in an
+incomplete set.
+
+Whenever you run into a split permission issue (seeing some but not
+all files) try using the
+[mergerfs.fsck](https://github.com/trapexit/mergerfs-tools) tool to
+check for and fix the mismatch. If you aren't seeing anything at all,
+be sure that the basic permissions are correct: that the user and
+group values are correct and that directories have their executable
+bit set. A common mistake by users new to Linux is to `chmod -R 644`
+when they should have used `chmod -R u=rwX,go=rX`.
+
+If using a network filesystem such as NFS or SMB (Samba) be sure to
+pay close attention to anything regarding permissioning and
+users.
Root squashing and user translation, for instance, have bitten a
+few mergerfs users. Some of these also affect the use of mergerfs from
+container platforms such as Docker.
diff --git a/mkdocs/docs/pages/faq/usage_and_functionality.md b/mkdocs/docs/pages/faq/usage_and_functionality.md
new file mode 100644
index 000000000..2ff84d03e
--- /dev/null
+++ b/mkdocs/docs/pages/faq/usage_and_functionality.md
@@ -0,0 +1,53 @@
+# Usage and Functionality
+
+## Can mergerfs be used with filesystems which already have data / are in use?
+
+Yes. mergerfs is really just a proxy and does **NOT** interfere with
+the normal form or function of the filesystems / mounts / paths it
+manages. It is just another userland application that is acting as a
+man-in-the-middle. It can't do anything that any other random piece of
+software can't do.
+
+mergerfs is **not** a traditional filesystem that takes control over
+the underlying block device. mergerfs is **not** RAID. It does **not**
+manipulate the data that passes through it. It does **not** shard data
+across filesystems. It merely shards some **behavior** and aggregates
+others.
+
+## Can drives/filesystems be removed from the pool at will?
+
+Yes. See the previous question's answer.
+
+## Can mergerfs be removed without affecting the data?
+
+Yes. See the previous question's answer.
+
+## Can drives/filesystems be moved to another pool?
+
+Yes. See the previous question's answer.
+
+## How do I migrate data into or out of the pool when adding/removing drives/filesystems?
+
+You don't need to. See the previous question's answer.
+
+## How do I remove a drive/filesystem but keep the data in the pool?
+
+Nothing special needs to be done. Remove the branch from mergerfs'
+config and copy (rsync) the data from the removed filesystem into the
+pool. It is effectively the same as transferring data from one
+filesystem to another.
+
+If you wish to continue using the pool while performing the transfer
+simply create another, temporary pool without the filesystem in
+question and then copy the data. It would probably be a good idea to
+set the branch to `RO` prior to doing this to ensure no new content is
+written to the filesystem while performing the copy.
+
+## Can filesystems be written to directly? Outside of mergerfs while pooled?
+
+Yes, however, it's not recommended to use the same file from within
+the pool and from without at the same time (particularly writing),
+especially if using caching of any kind (cache.files, cache.entry,
+cache.attr, cache.negative_entry, cache.symlinks, cache.readdir,
+etc.), as the cached data could conflict with what is actually on the
+underlying filesystems.
diff --git a/mkdocs/docs/pages/wiki/featured_media_and_publicity.md b/mkdocs/docs/pages/wiki/featured_media_and_publicity.md
new file mode 100644
index 000000000..e3ec32e18
--- /dev/null
+++ b/mkdocs/docs/pages/wiki/featured_media_and_publicity.md
@@ -0,0 +1,71 @@
+# Featured Media and Publicity
+
+## Tutorials / Articles
+
+- 2016-02-02 - [Linuxserver.io: The Perfect Media Server 2016](https://blog.linuxserver.io/2016/02/02/the-perfect-media-server-2016/)
+- 2016-08-31 - [ZackReed.me: Mergerfs – another good option to pool your SnapRAID disks](https://zackreed.me/mergerfs-another-good-option-to-pool-your-snapraid-disks/)
+- 2016-11-06 - [Linuxserver.io: Revisiting the HP ProLiant Gen8 G1610T Microserver](https://blog.linuxserver.io/2016/11/06/revisiting-the-hp-proliant-gen8-g1610t-microserver/)
+- 2017-01-17 - [Setting up mergerfs on JBOD's (or a poor mans storage array)](http://corywestropp.com/develop/articles/setting-up-mergerfs/)
+- 2017-06-24 - [Linuxserver.io: The Perfect Media Server 2017](https://blog.linuxserver.io/2017/06/24/the-perfect-media-server-2017/)
+- 2018-02-19 - [Teknophiles: Disk Pooling in Linux with
mergerFS](https://web.archive.org/web/20210324184857/https://www.teknophiles.com/2018/02/19/disk-pooling-in-linux-with-mergerfs/) +- 2018-02-20 - [Fortes.com: Using Rclone and MergerFS together across drives](https://fortes.com/2018/rclone-and-mergerfs/) +- 2019-02-10 - [Medium: Migrating from ZFS to MergerFS and SnapRAID at home](https://medium.com/@pascal.brokmeier/migrating-from-zfs-to-mergerfs-and-snapraid-at-home-89c45fd5db02) +- 2019-04-24 - [MichaelXander.com: DIY NAS with OMV, SnapRAID, MergerFS, and Disk Encryption](https://michaelxander.com/diy-nas/) +- 2019-07-16 - [Linuxserver.io: The Perfect Media Server - 2019 Edition](https://blog.linuxserver.io/2019/07/16/perfect-media-server-2019/) +- 2019-09-10 - [Rclone VFS and MergerFS Setup](https://docs.usbx.me/books/rclone/page/rclone-vfs-and-mergerfs-setup) +- 2019-12-20 - [NetworkShinobi.com: SnapRAID and MergerFS on OpenMediaVault](https://www.networkshinobi.com/snapraid-and-mergerfs-on-openmediavault/) +- 2020-01-14 - [Brandon Rozek's Blog](https://brandonrozek.com/blog/mergerfs/) +- 2020-02-14 - [SelfHostedHome.com: Combining Different Sized Drives with mergerfs and SnapRAID](https://selfhostedhome.com/combining-different-sized-drives-with-mergerfs-and-snapraid/) +- 2020-05-01 - [FedoraMagazine.org: Using mergerfs to increase your virtual storage](https://fedoramagazine.org/using-mergerfs-to-increase-your-virtual-storage/) +- 2020-08-20 - [Setting up Rclone, Mergerfs and Crontab for automated cloud storage](https://bytesized-hosting.com/pages/setting-up-rclone-mergerfs-and-crontab-for-automated-cloud-storage) +- 2020-11-22 - [Introducing… MergerFS – My FREE UNRAID alternative](https://supertechfreaks.com/introducing-mergerfs-free-unraid-alternative/) +- 2020-12-30 - [Perfect Media Server](https://perfectmediaserver.com) (a new site with docs fully fleshing out the 'Perfect Media Server' blog series) +- 2021-10-31 - [Better Home Storage: MergerFS + SnapRAID on 
OpenMediaVault](https://blog.sakuragawa.moe/better-home-storage-mergerfs-snapraid-on-openmediavault/) +- 2021-11-28 - [Linux Magazine: Come Together - Merging file systems for a simple NAS with MergerFS](https://www.linux-magazine.com/Issues/2022/254/MergerFS) +- 2022-06-04 - [MergerFS + SnapRaid Study](https://crashlaker.github.io/2022/06/04/mergerfs_+_snapraid_study.html) +- 2022-12-31 - [Merge Storages in CasaOS: A secret beta feature you know now](https://blog.casaos.io/blog/13.html) +- 2023-02-03 - [(MergerFS + SnapRAID) is the new RAID 5](https://thenomadcode.tech/mergerfs-snapraid-is-the-new-raid-5) +- 2024-02-07 - [Designing & Deploying MANS - A Hybrid NAS Approach with SnapRAID, MergerFS, and OpenZFS](https://blog.muffn.io/posts/part-3-mini-100tb-nas) + +## Videos + +- 2017-06-23 - [Alex Kretzschmar: Part 1 - Perfect Media Server 2017 - Introduction](https://www.youtube.com/watch?v=L5MH8q3lmmk) +- 2017-06-24 - [Alex Kretzschmar: Part 2 - Perfect Media server 2017 - Installing Debian 9 Stretch](https://www.youtube.com/watch?v=YpVVYRN_L_A) +- 2017-06-24 - [Alex Kretzschmar: Part 3 - Perfect Media Server 2017 - Install MergerFS and setting up your drives](https://www.youtube.com/watch?v=tbCMfm-jJ5Y) +- 2017-06-24 - [Alex Kretzschmar: Part 4 - Perfect Media Server 2017 - Installing Docker](https://www.youtube.com/watch?v=WYI32kx4hPE) +- 2017-06-24 - [Alex Kretzschmar: Part 5 - Perfect Media Server 2017 - Installing and Automating SnapRAID](https://www.youtube.com/watch?v=Ir5ZsUIbHXA) +- 2017-06-24 - [Alex Kretzschmar: Part 6 - Perfect Media Server 2017 -Turning your server into a NAS with Samba and NFS](https://www.youtube.com/watch?v=1hVdWq758ZQ) +- 2017-06-24 - [Alex Kretzschmar: Part 7 - Perfect Media Server 2017 - Managing your apps with docker-compose](https://www.youtube.com/watch?v=aI2rdw7_AmE) +- 2017-06-24 - [Alex Kretzschmar: Part 8 - Perfect Media Server 2017 - Manging your server using a web UI (Cockpit and 
Portainer)](https://www.youtube.com/watch?v=aLyTWdzDiCg) +- 2018-04-24 - [ElectronicsWizardy: How to setup a Linux Fileserver with Snapraid and Mergerfs Re-Export](https://www.youtube.com/watch?v=D2Klx-X7pFo) +- 2019-03-01 - [Snapraid and Unionfs: Advanced Array Options on Openmediavault (Better than ZFS and Unraid)](https://www.youtube.com/watch?v=FYkdPyCt5FU) +- 2019-03-22 - [Gamexplicit: The Perfect Plex Media Server 2019! Part 1 (Hardware)](https://www.youtube.com/watch?v=rJIRPhM2WcE) +- 2019-05-13 - [Gamexplicit: The Perfect Plex Media Server 2019! Part 2 (Ubuntu Server 18.04.2 LTS)](https://www.youtube.com/watch?v=aLyTWdzDiCg) +- 2019-05-20 - [Gamexplicit: The Perfect Plex Media Server 2019! Part 3 (SnapRaid, Samba, Plex)](https://www.youtube.com/watch?v=uW5y43XC-BI) +- 2020-08-23 - [Installing OpenMediaVault and SnapRAID and UnionFS (mergerfs)](https://www.youtube.com/watch?v=nDvzXM8UjAI) +- 2021-06-07 - [I ditched TrueNAS for MergerFS - Chia Plot Storage](https://www.youtube.com/watch?v=tpqFywkbZa4) +- 2021-08-03 - [Install and configure mergerfs to merge more than one folder in the same place](https://www.youtube.com/watch?v=69zcqEy1674) +- 2021-08-10 - [Instalando e configurando mergerfs para unir mais de uma pasta no mesmo lugar](https://www.youtube.com/watch?v=-RLxbBNBWhU) +- 2021-08-13 - [Let's Convert an Old Laptop to a NAS - What you should do](https://www.youtube.com/watch?v=F1v-TSbOymI) +- 2021-08-17 - [How to Combine Multiple Disks as One by using MergerFS | Ubuntu 20.04 LTS](https://www.youtube.com/watch?v=9e46pz5Seo4) +- 2021-08-20 - [Vamos converter um Laptop antigo para um NAS – O que você deve fazer](https://www.youtube.com/watch?v=q8EK9vWCRTc) +- 2021-10-29 - [Unlimited space for your Plex using Rclone to connect to your Cloud](https://www.youtube.com/watch?v=ghGconyrF3M) +- 2022-12-01 - [Make Your Home Server Go FAST! 
SSD Caching, 10Gbit Networking, etc.](https://www.youtube.com/watch?v=eRfqC_q3lkM&t=784s) +- 2022-12-07 - [Best RAID for mixed drive sizes. Unraid vs BTRFS vs Snapraid+Mergerfs vs Storage spaces.](https://www.youtube.com/watch?v=NQJkTiLXfgs) +- 2023-02-21 - [MergerFS + SnapRAID : Forget about RAID 5 in your Home Server !](https://www.youtube.com/watch?v=tX5MA-c6Qq4) +- 2023-06-26 - [How to install and setup MergerFS](https://www.youtube.com/watch?v=n7piuhTXeG4) +- 2023-07-31 - [How to recover a dead drive using Snapraid](https://www.youtube.com/watch?v=fmuiRLPcuJE) +- 2024-01-05 - [OpenMediaVault MergerFS Tutorial (Portuguese)](https://www.youtube.com/watch?v=V6Yw86dRUPQ) +- 2024-11-15 - [Meu servidor NAS - Parte 18: Recuperando um HD, recuperando o MergerFS e os próximos passos do NAS!](https://www.youtube.com/watch?v=5fy98kPzE3s) + +## Podcasts + +- 2019-11-04 - [Jupiter Extras: A Chat with mergerfs Developer Antonio Musumeci | Jupiter Extras 28](https://www.youtube.com/watch?v=VmJUAyyhSPk) +- 2019-11-07 - [Jupiter Broadcasting: ZFS Isn’t the Only Option | Self-Hosted 5](https://www.youtube.com/watch?v=JEW7UuKhMJ8) +- 2023-10-08 - [Self Hosted Episode 105 - Sleeper Storage Technology](https://selfhosted.show/105) + +## Social Media + +- [Reddit](https://www.reddit.com/search/?q=mergerfs&sort=new) +- [Twitter](https://twitter.com/search?q=mergerfs&src=spelling_expansion_revert_click&f=live) +- [YouTube](https://www.youtube.com/results?search_query=mergerfs&sp=CAI%253D) diff --git a/mkdocs/docs/pages/wiki/installing_mergerfs_on_a_synology_nas.md b/mkdocs/docs/pages/wiki/installing_mergerfs_on_a_synology_nas.md new file mode 100644 index 000000000..dc1fbdc8f --- /dev/null +++ b/mkdocs/docs/pages/wiki/installing_mergerfs_on_a_synology_nas.md @@ -0,0 +1,148 @@ +Originally from [Reddit](https://www.reddit.com/r/synology/comments/etz32q/instructions_on_how_to_install_mergerfs_on_a/). Copied and edited with permission. 
+
+A different version, which overcomes some problems with the method below, can be [found here](https://web.archive.org/web/20221205205446/https://daniellemarco.nl/wp/2022/01/01/adding-mergerfs-to-your-synology/).
+
+Install Entware
+
+1. SSH into your NAS and switch to the root user:
+
+```
+sudo su
+```
+
+2. Create a folder on your HDD (outside the rootfs):
+
+```
+mkdir -p /volume1/@Entware/opt
+```
+
+3. Remove `/opt` and mount the Entware folder.
+
+Make sure that the `/opt` folder is empty (i.e. Optware is not installed); we will remove the `/opt` folder and its contents in this step.
+
+```
+rm -rf /opt
+mkdir /opt
+mount -o bind "/volume1/@Entware/opt" /opt
+```
+
+4. Run the install script for your processor. Use `uname -m` to find out which one you have, then run the corresponding command.
+
+#### armv8 (aarch64) - Realtek RTD129x
+
+```
+wget -O - http://bin.entware.net/aarch64-k3.10/installer/generic.sh | /bin/sh
+```
+
+#### armv5
+
+```
+wget -O - http://bin.entware.net/armv5sf-k3.2/installer/generic.sh | /bin/sh
+```
+
+#### armv7
+
+```
+wget -O - http://bin.entware.net/armv7sf-k3.2/installer/generic.sh | /bin/sh
+```
+
+#### x64
+
+```
+wget -O - http://bin.entware.net/x64-k3.2/installer/generic.sh | /bin/sh
+```
+
+5. Create an Autostart Task on Synology
+
+Create a triggered user-defined task in Task Scheduler.
+
+- Go to: DSM > Control Panel > Task Scheduler
+- Create > Triggered Task > User Defined Script
+  - General
+    - Task: Entware
+    - User: root
+    - Event: Boot-up
+    - Pretask: none
+- Task Settings
+  - Run Command: Paste the below script in.
+
+```
+#!/bin/sh
+
+# Mount/Start Entware
+mkdir -p /opt
+mount -o bind "/volume1/@Entware/opt" /opt
+/opt/etc/init.d/rc.unslung start
+
+# Add Entware Profile in Global Profile
+if grep -qF '/opt/etc/profile' /etc/profile; then
+  echo "Confirmed: Entware Profile in Global Profile"
+else
+  echo "Adding: Entware Profile in Global Profile"
+cat >> /etc/profile <<"EOF"
+
+# Load Entware Profile
+[ -r "/opt/etc/profile" ] && . /opt/etc/profile
+EOF
+fi
+
+# Update Entware List
+/opt/bin/opkg update
+```
+
+6. Reboot your NAS.
+
+7. SSH back into your NAS.
+
+8. Install mergerfs with the following command.
+
+```
+sudo opkg install mergerfs
+```
+
+9. Make sure it's installed by running the following command. The mergerfs binary is expected to be listed there.
+
+```
+sudo ls /volume1/@Entware/opt/bin
+```
+
+Running the following should print mergerfs' usage help.
+
+```
+mergerfs --help
+```
+
+10. If you want the latest build of mergerfs, you can download `mergerfs-static-linux_$ARCH.tar.gz` from the [GitHub releases page](https://github.com/trapexit/mergerfs/releases/latest). Remember to replace `$ARCH` with your architecture, e.g. what `uname -m` tells you.
+
+Extract the `.tar.gz` archive and use its content to update the `mergerfs` and `mergerfs-fusermount` binaries in `/opt/bin/`.
+
+11. Configure mergerfs. Note: Change the file paths to match your setup.
+
+_My config is the following. (I don't know if it is the perfect setting, but it works in my testing.)_
+
+```
+mergerfs -o rw,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,dropcacheonclose=true /volume1/Media/TempMedia:/volume1/Media/GMedia /volume1/Media/FinalMedia
+```
+
+If mergerfs complains about existing files because the destination already has the Synology `@eaDir` directory, you can use the option `nonempty`.
+
+12. Create an Autostart Task on Synology for Mergerfs
+
+- Go to: DSM > Control Panel > Task Scheduler
+- Create > Triggered Task > User Defined Script
+  - General
+    - Task: Mergerfs
+    - User: root
+    - Event: Boot-up
+    - Pretask: Entware
+- Task Settings
+  - Run Command: Paste the below script in.
+
+```
+#!/bin/sh
+
+/opt/bin/mergerfs -o rw,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,dropcacheonclose=true /volume1/Media/TempMedia:/volume1/Media/GMedia /volume1/Media/FinalMedia
+
+```
+
+13.
Profit
diff --git a/mkdocs/docs/pages/wiki/kernel_issues_and_bugs.md b/mkdocs/docs/pages/wiki/kernel_issues_and_bugs.md
new file mode 100644
index 000000000..1056dfe8e
--- /dev/null
+++ b/mkdocs/docs/pages/wiki/kernel_issues_and_bugs.md
@@ -0,0 +1,45 @@
+There have been a number of kernel issues / bugs over the years which mergerfs has run into. Here is a list of them for reference and posterity.
+
+## NFS and EIO errors
+
+https://lore.kernel.org/linux-fsdevel/20240228160213.1988854-1-mszeredi@redhat.com/T/
+
+Over the years some users have reported that while exporting mergerfs via NFS, after significant filesystem activity, not only would the NFS client start returning ESTALE and EIO errors but mergerfs itself would start returning EIO errors. The problem was that no one could reliably reproduce the issue. After a string of reports in late 2023 and early 2024 more investigation was done.
+
+In Linux 5.14 new validation was put into FUSE which caught a few invalid situations and would tag a FUSE node as invalid if a check failed. Such checks include an invalid file type, the type changing from one request to another, a size greater than 63 bits, and the generation of an inode changing while in use.
+
+What happened was that mergerfs was using a fixed, non-zero value for the generation of all nodes as it was suggested that unique inode + generation pairs are needed for proper integration with NFS. That non-zero value was being sent back to the kernel when a lookup request was made for root. The reason this was hard to track down was because NFS almost uniquely uses an API which can lead to a lookup of the root node that simply won't happen under normal workloads and usage. And that lookup will only happen if child nodes of the root were forgotten but NFS still had a handle to that node and later asked for details about it. It would trigger a set of requests to look up info on those nodes.
+
+This wasn't a bug in FUSE but in mergerfs.
However, the incorrect behavior of mergerfs led FUSE to behave in an unexpected and incorrect manner. It would issue a lookup of the "parent of a child of the root" and mergerfs would send the invalid generation value. As a result the kernel would mark the root node as "bad," which would then trigger the kernel to issue a "forget root" message. In between those it would issue a request for the parent of the root... which doesn't exist.
+
+So the kernel was doing two invalid things: requesting the parent of the root and then, when that failed, issuing a forget for the root. These led to chasing after the wrong possible causes.
+
+The proposed change is for FUSE to revert the marking of the root node as bad if the generation is non-zero and warn about it. It will mark the node bad but not unhash/forget/remove it.
+
+mergerfs in v2.40.1 ensures that the generation for root is always 0 on lookup, which should work across any kernel version.
+
+## Truncated files
+
+This was a bug with mmap and FUSE on 32-bit platforms. It should be fixed in all LTS releases.
+
+- https://marc.info/?l=linux-fsdevel&m=155550785230874&w=2
+
+## Crashing on OpenVZ
+
+There is/was a bug in the OpenVZ kernel with regard to how it handles ioctl calls. It was making invalid requests which would lead to crashes due to mergerfs not expecting them.
+
+- https://bugs.openvz.org/browse/OVZ-7145
+- https://www.mail-archive.com/devel@openvz.org/msg37096.html
+
+## Really bad mmap performance
+
+There is/was a bug in caching which affects overall performance of mmap through FUSE in Linux 4.x kernels. It is fixed in 4.4.10 and 4.5.4.
+
+- https://lkml.org/lkml/2016/3/16/260
+- https://lkml.org/lkml/2016/5/11/59
+
+## Heavy load and memory pressure lead to kernel panic
+
+- https://lkml.org/lkml/2016/9/14/527
+- https://lkml.org/lkml/2016/10/4/1
+- https://www.theregister.com/2016/10/05/linus_torvalds_admits_buggy_crap_made_it_into_linux_48/
diff --git a/mkdocs/docs/pages/wiki/limit_drive_spinup.md b/mkdocs/docs/pages/wiki/limit_drive_spinup.md
new file mode 100644
index 000000000..c13b472ee
--- /dev/null
+++ b/mkdocs/docs/pages/wiki/limit_drive_spinup.md
@@ -0,0 +1,27 @@
+TL;DR: You really can't. Not through mergerfs alone.
+
+mergerfs is a proxy, not a cache. It proxies calls between client software and underlying filesystems. If a client does an `open`, `readdir`, `stat`, etc. it must translate that into something that makes sense across N filesystems. For `readdir` that means running the call against all branches and aggregating the output. For `open` that means finding the file to open and doing so. The only way to find the file to open is to scan across all branches, sort the results, and pick one. There is no practical way to do otherwise, especially given that so many mergerfs users expect out of band changes to "just work."
+
+The best way to limit spinup of drives is to limit their usage at the client level, meaning keeping software from interacting with the filesystem altogether.
+
+### What if you assume no out of band changes and cache everything?
+
+This would require a significant rewrite of mergerfs. Everything is done on the fly right now and all those calls to underlying filesystems can cause a spinup. To work around that a database of some sort would have to be used to store ALL metadata about the underlying filesystems, with everything scanned and stored on startup. From then on it would have to carefully update all the same data the filesystems do. It couldn't be kept in RAM because it would take up too much space so it'd have to be on an SSD or other storage device.
If anything changed out of band it would break things in weird ways. It could rescan on occasion but that would require spinning up everything. It could put file watches on every single directory but that probably won't scale (there are millions of directories in my system for example) and the open files might keep the drives from spinning down. Something as "simple" as keeping the current available free space on each filesystem isn't as easy as one might think given reflinks, snapshots, and other block level dedup technologies.
+
+Even if all metadata (including xattrs) is cached, some software will open files (media like videos and audio) to check their metadata. Granted, a Plex or Jellyfin scan which may do that is different from a random directory listing, but it is still something to consider. Those "deep" scans can't be kept from waking drives.
+
+### What if you only query already active drives?
+
+Even assuming that were plausible (it isn't, because some drives will actually spin up if you ask whether they are spun down... yes... really), you would have to either cache all the metadata on the filesystem or treat the filesystem like it doesn't exist. The former has all the problems mentioned prior and the latter would break a lot of things.
+
+### Is there anything that can be done where mergerfs is involved?
+
+Yes, but whether it works for you depends on your tolerance for the complexity.
+
+1. Cleanly separate writing, storing, and consuming the data.
+   1. Use an SSD or a dedicated and limited pool of drives for downloads / torrents.
+   2. When downloaded, move the files to the primary storage pool.
+   3. When setting up software like Plex, Jellyfin, etc. point it at the underlying filesystems, not mergerfs.
+2. Add a bunch of bcache, lvmcache, or similar block level cache to your setup. After a bit of use, assuming sufficient storage space, you can limit the likelihood of the underlying spinning disks needing to be hit.
+
+Remember too that, while it may be a tradeoff you're willing to live with, there is decent evidence that spinning drives down increases wear on them and can lead to their failing earlier than they otherwise would.
diff --git a/mkdocs/docs/pages/wiki/links.md b/mkdocs/docs/pages/wiki/links.md
new file mode 100644
index 000000000..9fe87f433
--- /dev/null
+++ b/mkdocs/docs/pages/wiki/links.md
@@ -0,0 +1,4 @@
+# Links
+
+- [Another way installing MergerFS on Synology and overcoming problems](https://mjanssen.nl/2022/01/01/adding-mergerfs-to-your-synology/)
+- [fstab]()
diff --git a/mkdocs/docs/pages/wiki/projects_using_mergerfs.md b/mkdocs/docs/pages/wiki/projects_using_mergerfs.md
new file mode 100644
index 000000000..44b09a52a
--- /dev/null
+++ b/mkdocs/docs/pages/wiki/projects_using_mergerfs.md
@@ -0,0 +1,34 @@
+# Projects incorporating mergerfs directly in some way
+
+- [Lakka.tv](https://lakka.tv/): A turnkey software emulation Linux distribution. Used to pool user and local storage. Also includes my other project [Opera](https://retroarch.com/), a 3DO emulator.
+- [OpenMediaVault](https://www.openmediavault.org): A network attached storage (NAS) solution based on Debian Linux. They provide plugins to manage mergerfs.
+- [CasaOS](https://casaos.io): "A simple, easy to use, elegant open source home cloud system." Has added initial integration with mergerfs to create pools from existing filesystems.
+- [ZimaOS](https://github.com/IceWhaleTech/zimaos-rauc): A more commercially focused NAS OS by the authors of CasaOS at [Ice Whale](https://www.zimaboard.com/).
+
+# Software and services commonly used with mergerfs
+
+- [snapraid](https://www.snapraid.it/)
+- [rclone](https://rclone.org/)
+    - rclone's [union](https://rclone.org/union/) feature is based on mergerfs policies
+- [ZFS](https://openzfs.org/): It is common to use ZFS alongside mergerfs.
+- [UnRAID](https://unraid.net): While UnRAID has its own union filesystem, it isn't uncommon to see UnRAID users leverage mergerfs given the differences in the technologies.
+- For a time there were a number of Chia miners recommending mergerfs.
+- [cloudboxes.io](https://cloudboxes.io/wiki/how-to/apps/set-up-mergerfs-using-ssh)
+
+# Distributions including mergerfs
+
+mergerfs can be found in the [repositories](https://pkgs.org/download/mergerfs) of [many Linux](https://repology.org/project/mergerfs/versions) (and maybe FreeBSD) distributions.
+
+Note: Any non-rolling-release distro is likely to have out-of-date versions.
+
+- [Debian](https://packages.debian.org/bullseye/mergerfs)
+- [Ubuntu](https://launchpad.net/ubuntu/+source/mergerfs)
+- [Fedora](https://rpmsphere.github.io/)
+- [T2](https://t2sde.org/packages/mergerfs)
+- [Alpine](https://pkgs.alpinelinux.org/packages?name=mergerfs&branch=edge&repo=&arch=&maintainer=)
+- [Gentoo](https://packages.gentoo.org/packages/sys-fs/mergerfs)
+- [Arch (AUR)](https://aur.archlinux.org/packages/mergerfs)
+- [Void](https://voidlinux.org/packages/?arch=x86_64&q=mergerfs)
+- [NixOS](https://search.nixos.org/packages?channel=22.11&show=mergerfs&from=0&size=50&sort=relevance&type=packages&query=mergerfs)
+- [Guix]()
+- [Slackware](https://slackbuilds.org/repository/15.0/system/mergerfs/?search=mergerfs)
diff --git a/mkdocs/docs/pages/wiki/real_world_deployments.md b/mkdocs/docs/pages/wiki/real_world_deployments.md
new file mode 100644
index 000000000..0dbbe0bed
--- /dev/null
+++ b/mkdocs/docs/pages/wiki/real_world_deployments.md
@@ -0,0 +1,96 @@
+# trapexit's (mergerfs' author) setups
+
+## Current setup
+
+- SilverStone Technology CS380B-X V2.0 case
+- 
Intel Core i7-4790S
+- 32GB DDR3 RAM
+- LSI SAS 9201-16e
+    - 8 SATA connections to the CS380B backplane
+    - 8 SATA connections to a generic 8-bay enclosure (similar to a Sans Digital 8-bay enclosure)
+    - Connections via SAS to SATA breakout cables fished through the back. Not elegant but cost effective. SAS SCSI cutout boards are difficult to find and would add $50 to $100 to the cost.
+- Marvell 88SE9230 PCIe SATA 6Gb/s Controller on motherboard
+    - 4 SATA connections to a [StarTech SATSASBP425](https://www.amazon.com/StarTech-com-4-Bay-Mobile-Backplane-Drives/dp/B00X7B3CUE)
+- ASMedia ASM1062 SATA Controller on motherboard
+    - 4 SATA connections to a second [StarTech SATSASBP425](https://www.amazon.com/StarTech-com-4-Bay-Mobile-Backplane-Drives/dp/B00X7B3CUE)
+    - 1 MSATA connection on the motherboard
+- NVidia Quadro P2000 (for hardware transcoding in Plex, Jellyfin, etc.)
+- Mix of 3.5" SATA HDD: 8TB - 14TB
+- Mix of 2.5" SATA HDD: 2TB - 5TB
+- Mix of 2.5" SATA SSD:
+    - primary boot drive, backup boot drive, application specific caches
+    - Some of the SSDs are used enterprise drives, which can often be found for a reasonable price on eBay
+- Mix of 2.5" U.2 NVMe: 3x 2TB Intel P4510, 1x 3.84TB Dell P5500
+    - Connected via a [Ceacent ANU28PE16 NVMe SSD Riser SFF8643 to SFF8639](https://www.aliexpress.us/item/2255800570129198.html)
+    - Have an [IcyDock MB931U-1VB](https://www.icydock.com/goods.php?id=363) for using U.2 NVMe drives externally
+- All drives formatted with EXT4 to make recovery easier in case of failure
+    - `mkfs.ext4 -L DRIVE_SERIAL_NUMBER -m 0 /dev/DEV`
+- HDDs, some SSDs, some NVMes merged together in a single mergerfs mount
+    - branches-mount-timeout=300
+    - cache.attr=120
+    - cache.entry=120
+    - cache.files=per-process
+    - cache.readdir=true
+    - cache.statfs=10
+    - category.create=pfrd
+    - dropcacheonclose=true
+    - fsname=media
+    - lazy-umount-mountpoint=true
+    - link_cow=true
+    - readahead=2048
+- some SSDs/NVMes used for bespoke purposes such as 
main storage for Docker/container config storage and caching (Plex transcoding, etc.)
+- Filesystem labels are set to the serial number of the drive for easy identification
+- Drives mounted to:
+    - /mnt/hdd/SIZE-LABEL
+    - /mnt/ssd/SIZE-LABEL
+    - /mnt/nvme/SIZE-LABEL
+    - ex: /mnt/hdd/8TB-ABCDEF
+- Total drives in main mergerfs pool: 24
+- Total storage combined in main mergerfs pool: 155TB
+- RAM usage by mergerfs under load: 512MB - 1GB of resident memory
+
+## Old setup
+
+- Core i7 3770s
+- 16GB RAM
+- 4 Port ASMedia Technology 106x eSATA PCIE 4x card
+- 4x [ICYCube MB561U3S-4SB R1 Quad Bay enclosure](https://www.icydock.com/goods.php?id=219)
+
+NOTES: The eSATA enclosure setup was easier to manage physically, as the enclosures are smaller, but the LSI SAS HBA & generic enclosure setup is more reliable/stable, more performant, and actually cost less. Port multipliers tend to behave poorly with different brand controllers (if they work at all). They can also perform poorly when a drive goes bad, leading to the other drives acting as if they have issues and requiring a full hard reset of the computer and enclosure to 'fix'. Port multiplier enclosures over USB tend not to support hot swapping, and the drives will all be reset if a drive is swapped.
+
+---
+
+# (´・ω・`)
+
+- 2x Intel(R) Xeon(R) CPU X5690 @ 3.47GHz
+- 64G RAM
+- Chassis: 847E16-R1K28LPB
+    - 36 bays + 2 system bays
+    - SAS2008
+    - X8DTH
+- Drives
+    - 26x 8TB Data
+        - luks
+        - btrfs: `mount -ospace_cache=v2,noatime,rw`
+    - 6x 8TB Parity
+        - luks
+        - ext4: `mkfs.ext4 -J size=4 -m 0 -i 67108864 -L