Releases: ArweaveTeam/arweave

Release 2.9.0-early-adopter

13 Dec 00:38

Arweave 2.9.0-Early-Adopter Release Notes

This Arweave node implementation proposes a hard fork that activates at height 1602350, approximately 2025-02-03 14:00 UTC. This software was prepared by the Digital History Association, in cooperation with the wider Arweave ecosystem. Additionally, this release was audited by NCC Group.

This 2.9.0 release is an early adopter release. If you do not plan to benchmark and test the new data format, you do not need to upgrade for the 2.9 hard fork yet.

Note: with 2.9.0, when enabling the randomx_large_pages option, you will need to configure 5,000 HugePages rather than the 3,500 required by earlier releases.
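
On most Linux systems the HugePages pool can be raised with sysctl; a minimal sketch (the sysctl.d file name is illustrative, and persistence mechanics vary by distribution):

# Raise the HugePages pool to the 5,000 pages needed with randomx_large_pages:
sudo sysctl -w vm.nr_hugepages=5000
# Persist the setting across reboots:
echo "vm.nr_hugepages = 5000" | sudo tee /etc/sysctl.d/99-arweave-hugepages.conf
# Verify the allocation:
grep HugePages_Total /proc/meminfo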

Replica 2.9 Packing Format

The Arweave 2.9.0-early-adopter release introduces a new data preparation (‘packing’) format. Starting with this release, you can begin testing the new format, which brings significant improvements to all of the core metrics of data preparation.

To understand the details, please read the full paper here: https://github.com/ArweaveTeam/arweave/blob/release/N.2.9.0-early-adopter/papers/Arweave2_9.pdf

Additionally, an audit of this mechanism was performed by NCC Group and is available to read here (the findings highlighted in this audit have since been remediated): https://github.com/ArweaveTeam/arweave/blob/release/N.2.9.0-early-adopter/papers/NCC_Group_ForwardResearch_E020578_Report_2024-12-06_v1.0.pdf

Arweave 2.9’s format enables:

  • Miners can read from their drives at a rate of 5 MiB/s (the equivalent of difficulty 10 in Arweave 2.8) without adversely affecting the security of the network. This represents a 90% decrease from Arweave 2.8 and a 97.5% decrease from Arweave 2.7.x, allowing miners to use the most cost-efficient drives while also lowering pressure on disk I/O during mining operations.
  • A ~96.9% decrease in the compute necessary to pack Arweave data compared to 2.8 composite.1 and spora_2_6. This decrease also scales approximately linearly at higher packing difficulties. For example, miners that would have packed with Arweave 2.8 to the difficulty necessary to reach a 5 MiB/s read speed (composite.10) will need ~99.56% less energy and time with Arweave 2.9. This represents an efficiency improvement of 32x against 2.7.x and 2.8 composite.1, and ~229x against composite.10.

Replica 2.9 Benchmark Tool

If you'd like to benchmark the performance of the new Replica 2.9 packing format on your own machine, you can use the new ./bin/benchmark-2.9 tool. It has two modes:

  • Entropy generation, which generates and then discards entropy. This allows you to benchmark the time it takes your CPU to perform the work component of packing, ignoring any IO-related effects.
    • To use the entropy generation benchmark, run the tool without any dir flags.
  • Packing, which generates entropy, packs some random data, and then writes it to disk. This provides a more complete benchmark of the time it might take your server to pack data. Note: this benchmark does not include unpacking or reading data (and the associated disk seek times).
    • To use the packing benchmark mode, specify one or more output directories using the multi-use dir flag.
Usage: benchmark-2.9 [format replica_2_9|composite|spora_2_6] [threads N] [mib N] [dir path1 dir path2 dir path3 ...]

format: format to pack. replica_2_9, composite.1, composite.10, or spora_2_6. Default: replica_2_9.
threads: number of threads to run. Default: 1.
mib: total amount of data to pack in MiB. Default: 1024.
     Will be divided evenly between threads, so the final number may be
     lower than specified to ensure balanced threads.
dir: directories to pack data to. If left off, benchmark will just simulate
     entropy generation without writing to disk.
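
For example, two illustrative invocations built from the usage above (thread counts, sizes, and paths are hypothetical):

# Entropy-only benchmark: 4 threads over 1 GiB, no dir flags:
./bin/benchmark-2.9 format replica_2_9 threads 4 mib 1024
# Full packing benchmark, spreading writes across two disks:
./bin/benchmark-2.9 format replica_2_9 threads 4 mib 4096 dir /mnt/disk1 dir /mnt/disk2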

Repacking to Replica 2.9

In addition to benchmarking, the 2.9.0-early-adopter release allows you to pack data to the 2.9 format. The packing pipeline has not yet been fully optimized and tuned for the new entropy distribution scheme; it is included in this build for validation purposes. In our tests we have observed consistent >=75% reductions in computation requirements (>4x faster packing speeds), and future releases will continue to improve this towards the performance of the benchmarking tool.

To test this functionality, run a node with storage modules configured to use the <address>.replica.2.9 packing format. repack_in_place is not yet supported.
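
A minimal launch sketch (the partition number, data_dir path, and mining address are hypothetical):

./bin/start data_dir /opt/data mining_addr <address> storage_module 12,<address>.replica.2.9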

Composite Packing Format Deprecated

The new packing format was discovered while researching an issue (one not endangering data, tokens, or consensus) that affects higher-difficulty packs of the 2.8 composite scheme. Given this, and the availability of the significantly improved 2.9 packing format, data packed to any level of the composite packing format will no longer produce valid block solutions as of block height 1642850 (roughly 2025-04-04 14:00 UTC).

Note: This is an "Early Adopter" release. It implements significant new protocol improvements, but is still in validation. This release is intended for members of the community to try out and benchmark the new data preparation mechanism. Unless you are interested in testing these features, you will not need to update your node for 2.9 until shortly before the hard fork height at 1602350 (approximately February 3, 2025). As this release is intended for validation purposes, please be aware that data encoded using its new preparation scheme may need to be repacked before 2.9 activates. The first ‘mainline’ releases for Arweave 2.9 will follow in the coming weeks, after community validation has been completed.

Full Changelog: N.2.8.3...N.2.9.0-early-adopter

Release 2.8.3

30 Nov 01:49

This is a minor update. This software was prepared by the Digital History Association, in cooperation with the wider Arweave ecosystem.

Bug fixes

  • Fix a performance issue which could cause very low read rates when multiple storage modules were stored on a single disk. The bug had a significant impact on SATA read speeds and hash rates, and a noticeable, but smaller, impact on SAS disks.
  • Fix a bug which caused the Mining Performance Report to report incorrectly for some miners, notably 0s in the Ideal and Data Size columns.
  • Fix a bug which could cause the verify tool to get stuck when encountering an invalid_iterator error.
  • Fix a bug which caused the verify tool to fail to launch with the error reward_history_not_found.
  • Fix a performance issue which could cause a node to get backed up during periods of high network transaction volume.
  • Add the packing_difficulty of a storage module to the /metrics endpoint.

Community involvement

A huge thank you to all the Mining community members who contributed to this release by identifying and investigating bugs, sharing debug logs and node metrics, and providing guidance on performance tuning!

Discord users (alphabetical order):

  • bigbang
  • BloodHunter
  • Butcher_
  • dzeto
  • edzo
  • foozoolsanjj
  • heavyarms1912
  • JF
  • MCB
  • Methistos
  • Mastermind
  • Qwinn
  • Thaseus
  • Vidiot
  • a8_ar
  • jimmyjoe7768
  • lawso2517
  • qq87237850
  • smash
  • sumimi
  • T777
  • tashilo
  • thekitty
  • wybiacx

Full Changelog: N.2.8.2...N.2.8.3

Release 2.8.2

13 Nov 20:17

Fixes an issue with peer history validation upon re-joining the network.

Full Changelog: N.2.8.1...N.2.8.2

Release 2.8.1

11 Nov 13:38

This is a minor update. This software was prepared by the Digital History Association, in cooperation with the wider Arweave ecosystem.

Bug Fix: OOM when setting mining_server_chunk_cache_size_limit

2.8.1 deprecates the mining_server_chunk_cache_size_limit flag and replaces it with the mining_cache_size_mb flag. Miners who wish to increase or decrease the amount of memory allocated to the mining cache can specify the target cache size (in MiB) using the mining_cache_size_mb NUM flag.
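
For example, a sketch that allocates a 4 GiB mining cache (the cache size and other flags are illustrative):

./bin/start mining_addr <addr> storage_module 10,<addr> mining_cache_size_mb 4096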

Feature: verify mode

The release includes a new verify mode. When set, the node will run a series of checks on all listed storage_modules. If the node discovers any inconsistencies (e.g. missing proofs, inconsistent indices) it will flag the chunks so that they can be resynced and repacked later. Once the verification completes, you can restart the node in normal mode and it should re-sync and re-pack any flagged chunks.

Note: When running in verify mode several flags will be forced on and several flags are disallowed. See the node output for details.

An example launch command:

./bin/start verify data_dir /opt/data storage_module 10,unpacked storage_module 20,En2eqsVJARnTVOSh723PBXAKGmKgrGSjQ2YIGwE_ZRI.1

Community involvement

A huge thank you to all the Mining community members who contributed to this release by identifying and investigating bugs, sharing debug logs and node metrics, and providing guidance on performance tuning!

Discord users (alphabetical order):

  • BloodHunter
  • Butcher_
  • JF
  • MCB
  • Mastermind
  • Qwinn
  • Thaseus
  • Vidiot
  • a8_ar
  • jimmyjoe7768
  • lawso2517
  • smash
  • thekitty

Full Changelog: N.2.8.0...N.2.8.1

Release 2.8.0

17 Oct 15:27

This Arweave node implementation proposes a hard fork that activates at height 1547120, approximately 2024-11-13 14:00 UTC. This software was prepared by the Digital History Association, in cooperation with the wider Arweave ecosystem.

Note: with 2.8.0, when enabling the randomx_large_pages option, you will need to configure 3,500 HugePages rather than the 1,000 required by earlier releases. More information below.

Composite Packing

The biggest change in 2.8.0 is the introduction of a new packing format referred to as "composite". Composite packing allows miners in the Arweave network to have slower access to the dataset over time (and thus, mine on larger hard drives at the same bandwidth). The packing format used from version 2.6.0 through 2.7.4 will be referred to as spora_2_6 going forward. spora_2_6 will continue to be supported by the software without change for roughly 4 years.

The composite packing format allows node operators to provide a difficulty setting varying from 1 to 32. Higher difficulties take longer to pack data, but have proportionately lower read requirements while mining. For example, the read speeds for a variety of difficulties are as follows:

| Packing format | Example storage_module configuration | Example storage_modules directory name | Time to pack (relative to spora_2_6) | Disk read rate per partition when mining against a full replica |
|----------------|--------------------------------------|----------------------------------------|--------------------------------------|------------------------------------------------------------------|
| spora_2_6      | 12,addr                              | storage_module_12_addr                 | 1x                                   | 200 MiB/s                                                        |
| composite.1    | 12,addr.1                            | storage_module_12_addr.1               | 1x                                   | 50 MiB/s                                                         |
| composite.2    | 12,addr.2                            | storage_module_12_addr.2               | 2x                                   | 25 MiB/s                                                         |
| composite.3    | 12,addr.3                            | storage_module_12_addr.3               | 3x                                   | 16.6667 MiB/s                                                    |
| composite.4    | 12,addr.4                            | storage_module_12_addr.4               | 4x                                   | 12.5 MiB/s                                                       |
| ...            | ...                                  | ...                                    | ...                                  | ...                                                              |
| composite.32   | 12,addr.32                           | storage_module_12_addr.32              | 32x                                  | 1.5625 MiB/s                                                     |

The effective hashrate for a full replica packed to any of the supported packing formats is the same. A miner who has packed a full replica to spora_2_6 or composite.1 or composite.32 can expect to find the same number of blocks on average, but with the higher difficulty miner reading fewer chunks from their storage per second. This allows the miner to use larger hard drives in their setup, without increasing the necessary bandwidth between disk and CPU.

Each composite-packed chunk is divided into 32 sub-chunks and then packed with increasing rounds of the RandomX packing function. Each sub-chunk at difficulty 1 is packed with 10 RandomX rounds. This value was selected to roughly match the time it takes to pack a chunk using spora_2_6. At difficulty 2 each sub-chunk is packed with 20 RandomX rounds - this will take roughly twice as long to pack a chunk as it does with difficulty 1 or spora_2_6. At difficulty 3, 30 rounds, and so on.
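
To make the scaling concrete, a small shell sketch of the relationships described above (10 RandomX rounds per sub-chunk per difficulty level, and the per-partition read rates from the table):

DIFF=4                                      # composite packing difficulty
ROUNDS=$((10 * DIFF))                       # RandomX rounds per sub-chunk
READ=$(awk "BEGIN { print 50 / $DIFF }")    # per-partition read rate in MiB/s
echo "composite.$DIFF: $ROUNDS rounds per sub-chunk, $READ MiB/s read rate"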

Composite packing also uses a slightly different version of the RandomX packing function with further improvements to its ASIC resistance properties. As a result, when running Arweave 2.8 with the randomx_large_pages option you will need to allocate 3,500 HugePages rather than the 1,000 needed by earlier node implementations. If you're unable to immediately increase your HugePages value, we recommend restarting your server and trying again: if your node has been running for a while, the memory space may simply be too fragmented to allocate the needed HugePages, and a reboot should alleviate the issue.

When mining, all storage modules within the same replica must be packed to the same packing format and difficulty level. For example, a single miner will not be able to build a solution involving chunks from storage_module_1_addr.1 and storage_module_2_addr.2 even if the packing address is the same.

To use composite packing, miners can modify their storage_module configuration. E.g. if you previously used storage_module 12,addr and had a storage module directory named storage_module_12_addr, you would now use storage_module 12,addr.1 and create a directory named storage_module_12_addr.1. Syncing, packing, repacking, and repacking in place are handled the same as before, just with the addition of the new packing formats.

While you can begin packing data to the composite format immediately, you will not be able to mine the data until the 2.8 hard fork activates at block height 1547120.

Implications of Composite Packing

By enabling lower read rates the new packing format provides greater flexibility when selecting hard drives. For example, it is now possible to mine 4 partitions off a single 16TB hard drive. Whether you need to pack to composite difficulty 1 or 2 in order to optimally mine 4 partitions on a 16TB drive will depend on the specific performance characteristics of your setup.

CPU and RAM requirements while mining will be lower for composite packing versus spora_2_6, and will continue to decrease as the packing difficulty increases. The degree of these efficiency gains has yet to be confirmed by extensive benchmarking, but with the lower read rate comes a lower volume of data that needs to be hashed (CPU) and a lower volume of data that needs to be held in memory (RAM).

Block Header Format

The following block header fields have been added or changed:

  • packing_difficulty: the packing difficulty of the chunks used in the block solution. Both reward_address and packing_difficulty are needed to unpack and validate the solution chunk. packing_difficulty is 0 for spora_2_6 chunks.
  • poa1->chunk and poa2->chunk: under spora_2_6 the full packed chunk is provided; under composite only a packed sub-chunk is included. A sub-chunk is 1/32 of a packed chunk.
  • poa1->unpacked_chunk and poa2->unpacked_chunk: this field is omitted for spora_2_6, and includes the complete unpacked chunk for all composite blocks.
  • unpacked_chunk_hash and unpacked_chunk_hash2: these fields are omitted under spora_2_6 and contain the hashes of the full unpacked chunks for composite blocks.
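
An illustrative fragment using the field names above (the exact JSON layout of a composite block header is not specified here and may differ):

{
  "packing_difficulty": 2,
  "poa1": {
    "chunk": "<packed sub-chunk, 1/32 of a packed chunk>",
    "unpacked_chunk": "<complete unpacked chunk>"
  },
  "unpacked_chunk_hash": "<hash of the full poa1 unpacked chunk>",
  "unpacked_chunk_hash2": "<hash of the full poa2 unpacked chunk>"
}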

Other Fixes and Improvements

  • Protocol change: the current protocol (implemented prior to the 2.8 hard fork) will begin transitioning upload pricing to a trustless oracle at block height 1551470. 2.8 introduces a slight change: 3 months of blockchain history rather than 1 month will be used to calculate the upload price.
  • Bug fix: several updates have been made to the RocksDB handling which should reduce the frequency of RocksDB corruption, particularly corruption that may previously have occurred during a hard node shutdown.
    • Note: with these changes the repair_rocksdb option has been removed.
  • Optimization: blockchain syncing (e.g. block and transaction headers) has been optimized to reduce the time it takes to sync the full blockchain.
  • Bug fix: GET /data_sync_record no longer reports chunks that have been purged from the disk pool.

Community involvement

A huge thank you to all the Mining community members who contributed to this release by identifying and investigating bugs and providing guidance on performance tuning!

Discord users (alphabetical order):

  • BloodHunter
  • Butcher_
  • dzeto
  • edzo
  • heavyarms1912
  • lawso2517
  • ldp
  • MaSTeRMinD
  • MCB
  • Methistos
  • qq87237850
  • Qwinn
  • sk
  • smash
  • sumimi
  • tashilo
  • Thaseus
  • thekitty
  • Vidiot
  • Wednesday


Full Changelog: N.2.7.4...N.2.8.0

Release 2.7.4

01 Aug 01:36

Arweave 2.7.4 Release Notes

If you were previously running the 2.7.4 pre-release we recommend you update to this release. This release includes all changes from the pre-release, plus some additional fixes and features.

Mining Performance Improvements

This release includes a number of mining performance improvements, and is the first release for which we've seen a single-node miner successfully mine a full replica at almost the full expected hashrate (56 partitions mined at 95% efficiency at the time of the test). If your miner previously saw a loss of hashrate at higher partition counts despite low CPU utilization, it might be worth retesting.

Erlang VM arguments

Adjusting the arguments provided to the Erlang VM can sometimes improve mining hashrate. In particular, we found that on some high-core-count CPUs, restricting the number of threads available to Erlang actually improved performance. You'll want to test these options for yourself, as behavior varies dramatically from system to system.

This release introduces a new command-line separator: --

All arguments before the -- separator are passed to the Erlang VM, all arguments after it are passed to Arweave. If the -- is omitted, all arguments are passed to Arweave.

For example, to restrict the number of scheduler threads available to the Erlang VM to 24, you would build a command like:

./bin/start +S 24:24 -- <regular arweave command line flags>

Faster Node Shutdown

Unrelated to the above changes, this release includes a couple of fixes that should reduce the time it takes for a node to shut down following the ./bin/stop command.

Solution recovery

This release includes several features and bug fixes intended to increase the chance that a valid solution results in a confirmed block.

Rebasing

When two or more miners post blocks at the same height, the block that is adopted by a majority of the network first will be added to the blockchain and the other blocks will be orphaned. Miners of orphaned blocks do not receive block rewards for those blocks.

This release introduces the ability for orphaned blocks to be rebased. If a miner detects that their block has been orphaned, but the block solution is still valid, the miner will take that solution and build a new block with it. When a block is rebased, a rebasing_block message will be printed to the logs.

Last minute proof fetching

After finding a valid solution a miner goes through several steps as they build a block. One of those steps involves loading the selected chunk proofs from disk. Occasionally those proofs might be missing or corrupt. Prior to this release when that happened, the solution would be rejected and the miner would return to hashing. With this release the miner will reach out to several peers and request the missing proofs - if successful the miner can continue building and publishing the block.

last_step_checkpoints recovery

This release provides more robust logic for generating the last_step_checkpoints field in mined blocks. Prior to this release there were some scenarios where a miner would unnecessarily reject a solution due to missing last_step_checkpoints.

VDF Server Improvements

In addition to a number of VDF server/client bug fixes and performance improvements, this release includes two new VDF server configurations.

VDF Forwarding

You can now set up a node as a VDF forwarder. If a node specifies both the vdf_server_trusted_peer and vdf_client_peer flags, it will receive its VDF from the specified VDF servers and provide it to the specified VDF clients. The push/pull behavior remains unchanged: any of the server/client relationships can be configured to push VDF updates or to pull them.
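
A forwarding-node sketch (the peer addresses are hypothetical):

./bin/start vdf_server_trusted_peer vdf-upstream.example.com vdf_client_peer 203.0.113.10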

Public VDF

If a VDF server is started with enable public_vdf_server, it will provide VDF to any peer that requests it, without needing to first whitelist that peer via the vdf_client_peer flag.

/recent endpoint

This release adds a new /recent endpoint which returns a list of recent forks the node has detected, as well as the last 18 blocks the node has received and the timestamps at which they arrived.
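
For example, assuming a node listening on the default port:

curl http://127.0.0.1:1984/recent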

Webhooks

This release adds additional webhook support. When webhooks are configured, a node will POST data to a provided URL (aka a webhook) when certain events are triggered.

Node webhooks can only be configured via a JSON config_file. For example:

{
  "webhooks": [
    {
      "events": ["transaction", "block"],
      "url": "https://example.com/block_or_tx",
      "headers": {
        "Authorization": "Bearer 123"
      }
    },
    {
      "events": ["transaction_data"],
      "url": "http://127.0.0.1:1985/tx_data"
    }
  ]
}
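
The node can then be launched pointing at that file (the path is illustrative):

./bin/start config_file /opt/arweave/webhooks.json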

The supported events are:

  • transaction : POSTS
    • the transaction header whenever this node accepts and validates a new transaction
  • transaction_data : POSTS
    • { "event": "transaction_data_synced", "txid": <TXID> } once this node has received all the chunks belonging to the transaction TXID
    • { "event": "transaction_orphaned", "txid": <TXID> } when this node detects that TXID has been orphaned
    • { "event": "transaction_data_removed", "txid": <TXID> } when this node detects that at least one chunk has been removed from a previously synced transaction
  • block : POSTS
    • the block header whenever this node accepts and validates a new block

In all cases the POST payload is JSON-encoded.

Benchmarking and data utilities

  • ./bin/benchmark-hash prints benchmark data on H0 and H1/H2 hashing performance.
  • Fix for ./bin/data-doctor bench: it should now correctly report storage module read performance.
  • data-doctor dump dumps all block headers and transactions.

Miscellaneous Bug Fixes and additions

  1. Several coordinated mining and mining pool bug fixes
  2. /metrics was incorrect if mining address included a _
  3. Fix bug in start_from_block and start_from_latest_state
  4. Add CORS header to /metrics so it can be queried from an in-browser app
  5. Blacklist handling optimizations

Pre-Release 2.7.4

31 May 13:01

This is a pre-release and has not gone through full release validation; please install with that in mind.

Note: In order to test the VDF client/server fixes please make sure to set your VDF server to vdf-server-4.arweave.xyz. We will keep vdf-server-3.arweave.xyz running an older version of the software (without the fixes) in case there are issues with this release.

Summary of changes in this release:

  • Fixes for several VDF client/server communication issues.
  • Fixes for some pool mining bugs.
  • Solution rebasing to lower the orphan rate.
  • Last-minute proof fetching when proofs can't be found locally.
  • More support for webhooks.
  • Performance improvements for syncing and blacklist processing.

Release 2.7.3

25 Mar 15:09

Arweave 2.7.3 Release Notes

2.7.3 is a minor release containing:

Re-packing in place

You can now repack a storage module from one packing address to another without needing any extra storage space. The repacking happens "in-place", replacing the original data with the repacked data.

See the storage_module section in the Arweave help (./bin/start help) for more information.

Packing bug fixes and performance improvements

This release contains several packing performance improvements and bug fixes.

Coordinated Mining performance improvement

This release implements an improvement in how nodes process H1 batches received from their Coordinated Mining peers. As a result, the cm_in_batch_timeout flag is no longer needed and has been deprecated.

Release 2.7.2

01 Mar 14:22

This release introduces a hard fork that activates at height 1391330, approximately 2024-03-26 14:00 UTC.

Coordinated Mining

When coordinated mining is configured, multiple nodes can cooperate to find mining solutions for the same mining address without the risk of losing reserved rewards or having the mining address blacklisted. Without coordinated mining, if two nodes publish blocks at the same height with the same mining address, they may lose their reserved rewards and have their mining address blacklisted (see the Mining Guide for more information). Coordinated mining allows multiple nodes which each store a disjoint subset of the weave to reap the hashrate benefits of more two-chunk solutions.

Basic System

In a coordinated mining cluster there are 2 roles:

  1. Exit Node
  2. Miners

All nodes in the cluster share the same mining address. Each Miner generates H1 hashes for the partitions they store. Occasionally a Miner will need an H2 for a packed partition it doesn't store. In this case, it can find another Miner in the cluster which does store the required partition packed with the required address, send it the H1, and ask it to calculate the H2. When a valid solution is found (either one- or two-chunk), the solution is sent to the Exit Node. Since the Exit Node is the only node in the cluster which publishes blocks, there is no risk of slashing. This can be further enforced by ensuring only the Exit Node stores the mining address private key (and therefore only the Exit Node can sign blocks for that mining address).

Every node in the coordinated mining cluster is free to peer with any other nodes on the network as normal.

Single-Miner One Chunk Flow

[Diagram: single-miner one-chunk solution flow]

Note: the single-miner two-chunk flow (where Miner1 stores both the H1 and H2 partitions) is very similar.

Coordinated Two Chunk Flow

[Diagram: coordinated two-chunk solution flow]

Configuration

  1. All nodes in the Coordinated Mining cluster must specify the coordinated_mining parameter.
  2. All nodes in the cluster must specify the same secret via the cm_api_secret parameter. The secret can be a string of any length.
  3. All miners in the cluster should identify all other miners using the multi-use cm_peer parameter.
    • Note: an exit node can also optionally mine, in which case it is also considered a miner and should be identified by the cm_peer parameter.
  4. All miners (excluding the exit node) should identify the exit node via the cm_exit_peer parameter.
    • Note: the exit node should not include the cm_exit_peer parameter.
  5. All miners in the cluster can be configured as normal, but they should all specify the same mining_addr.

There is one additional parameter which can be used to tune performance:

  • cm_out_batch_timeout: the frequency, in milliseconds, at which a node sends other nodes in the coordinated mining setup a batch of H1 values to hash. A higher value reduces network traffic; a lower value reduces hashing latency. Default: 20.
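
Putting these parameters together, a sketch of one (non-exit) miner's launch command (the secret, peer addresses, partition, and mining address are hypothetical):

./bin/start coordinated_mining cm_api_secret <shared-secret> cm_exit_peer 10.0.0.1:1984 cm_peer 10.0.0.2:1984 cm_peer 10.0.0.3:1984 mining_addr <addr> storage_module 12,<addr>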

Native Support for Pooled Mining

The Arweave node now has built-in support for pooled mining.

New configuration parameters (see the Arweave node help for descriptions):

  • is_pool_server
  • is_pool_client
  • pool_api_key
  • pool_server_address
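
For instance, a pool client might be launched as follows (a sketch only; the key and address values are hypothetical, and exact parameter usage is described in the node help):

./bin/start is_pool_client pool_api_key <key> pool_server_address <pool-host> mining_addr <addr>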

Mining Performance Improvements

Implemented several optimizations and bug fixes to enable more miners to achieve their maximal hashrate, particularly at higher partition counts.

A summary of changes:

  • Increase the degree of horizontal distribution used by the mining processes to remove performance bottlenecks at higher partition counts.
  • Optimize the Erlang VM memory allocation, management, and garbage collection.
  • Fix several out-of-memory errors that could occur at higher partition counts.
  • Fix a bug which could cause valid chunks to be discarded before being hashed.

Updated Mining Performance Report:

=========================================== Mining Performance Report ============================================

VDF Speed:  3.00 s
H1 Solutions:     0
H2 Solutions:     3
Confirmed Blocks: 0

Local mining stats:
+-----------+-----------+----------+-------------+-------------+---------------+------------+------------+--------------+
| Partition | Data Size | % of Max |  Read (Cur) |  Read (Avg) |  Read (Ideal) | Hash (Cur) | Hash (Avg) | Hash (Ideal) |
+-----------+-----------+----------+-------------+-------------+---------------+------------+------------+--------------+
|     Total |   2.0 TiB |      5 % |   1.3 MiB/s |   1.3 MiB/s |    21.2 MiB/s |      5 h/s |      5 h/s |       84 h/s |
|         1 |   1.2 TiB |     34 % |   0.8 MiB/s |   0.8 MiB/s |    12.4 MiB/s |      3 h/s |      3 h/s |       49 h/s |
|         2 |   0.8 TiB |     25 % |   0.5 MiB/s |   0.5 MiB/s |     8.8 MiB/s |      2 h/s |      2 h/s |       35 h/s |
|         3 |   0.0 TiB |      0 % |   0.0 MiB/s |   0.0 MiB/s |     0.0 MiB/s |      0 h/s |      0 h/s |        0 h/s |
+-----------+-----------+----------+-------------+-------------+---------------+------------+------------+--------------+

(All values are reset when a node launches)

  • H1 Solutions / H2 Solutions display the number of each solution type discovered.
  • Confirmed Blocks displays the number of blocks that were mined by this node and accepted by the network.
  • Cur values refer to the most recent value (e.g. the average over the last ~10 seconds).
  • Avg values refer to the all-time running average.
  • Ideal refers to the optimal rate given the VDF speed and the amount of data currently packed.
  • % of Max refers to how much of the given partition (or the whole weave) is packed.

Protocol Changes

The 2.7.2 Hard Fork is scheduled for block 1391330 (or roughly 2024-03-26 14:00 UTC), at which time the following protocol changes will activate:

  • The difficulty of a one-chunk solution increases by 100x to better incentivize full-weave replicas.
  • An additional pricing transition phase is scheduled to start in November 2024.
  • A pricing cap of 340 Winston per GiB per minute is in place until the November pricing transition.
  • The checkpoint depth is reduced from 50 blocks to 18.
  • Unnecessary poa2 chunks are rejected early to prevent a low-impact spam attack. Even in the worst case this attack would add only minimal bloat to the blockchain and thus wasn't a practical exploit; the vector is closed as a matter of good hygiene.

Additional Bug Fixes and Improvements

  • Enable Randomx support for OSX and arm/aarch64
  • Simplified TLS protocol support
    • See new configuration parameters tls_cert_file and tls_key_file to configure TLS
  • Add several more prometheus metrics:
    • debug-only metrics to track memory performance and processor utilization
    • mining performance metrics
    • coordinated mining metrics
    • metrics to track network characteristics (e.g. partitions covered in blocks, current/scheduled price, chunks per block)
  • Introduce a bin/data-doctor utility
    • data-doctor merge can merge multiple storage modules into 1
    • data-doctor bench runs a series of read rate benchmarks
  • Introduce a new bin/benchmark-packing utility to benchmark a node's packing peformance
    • The utility will generate input files if necessary and will process as close to 1GiB of data as possible while still allowing each core to process the same number of whole chunks.
    • Results are written to a csv and printed to console
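
A TLS-enabled launch sketch, as referenced above (certificate and key paths are hypothetical):

./bin/start tls_cert_file /etc/arweave/fullchain.pem tls_key_file /etc/arweave/privkey.pem <other flags>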

Release 2.7.1

20 Nov 19:03

This release introduces a hard fork that activates at height 1316410, approximately 2023-12-05 14:00 UTC.

Note: if you are running your own VDF servers, update the server nodes first, then the client nodes.

Bug fixes

Address Occasional Block Validation Failures on VDF Clients

This release fixes an error that would occasionally cause VDF Clients to fail to validate valid blocks. This could occur following a VDF Difficulty Retarget if the VDF client had cached a stale VDF session with steps computed at the prior difficulty. With this change VDF sessions are refreshed whenever the difficulty retargets.

Stabilize VDF Difficulty Oscillation

This release fixes an error that caused unnecessary oscillation when retargeting VDF difficulty. With this patch the VDF difficulty will adjust smoothly towards a difficulty that will yield a network average VDF speed of 1 second.

Ensure VDF Clients Process Updates from All Configured VDF Servers

This release makes an update to the VDF Client code so that it processes all updates from all configured VDF Servers. Prior to this change a VDF Client would only switch VDF Servers when the active server became non-responsive - this could cause a VDF Client to get "stuck" on one VDF Server even if an alternate server provided better data.

Delay the pricing transition

This release introduces a patch that extends the transition period before the activation of Arweave 2.6’s trustless price oracle, in order to give miners additional time to onboard packed data to the network. The release delays the onset of the transition window to roughly February 20, 2024.

The release comes with prebuilt binaries for the Linux x86_64 platform.

If you want to run the miner from the existing Git folder, execute the following command:
git fetch --all --tags && git checkout -f N.2.7.1

See the Mining Guide for further instructions.

If you have any issues upgrading or would like to know more about the release, feel free to reach out to us in the Arweave Miners Discord (https://discord.gg/GHB4fxVv8B) or email us at team@arweave.org.