All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
## v2.3.1 - 2023-12-14
- Improve snapshot synchronization diagnostic logging PR#216
## v2.3.0 - 2023-06-19
- Rollback deletes only tagged events after the given sequence number #202, PR#203
- `CassandraPersistenceQueries` can find the highest sequence number even if a deleted partition exists #207, PR#209
- `CassandraPersistenceQueries.currentEventsBefore` skips deleted partitions #208, PR#209
- Rollback preparation fails if required data have been deleted #206, PR#210
## v2.2.0 - 2022-12-26
- Add diagnostic logs PR#164, PR#176, PR#177, PR#196
- Add a function extracting the shard id from an entity id to `lerna.akka.entityreplication.typed.ClusterReplication` PR#172
- Add a function for disabling Raft actors PR#173, PR#188, PR#189
- Add a rollback tool for Raft shards PR#187
- Enhance the leader's replication response handling PR#160
- Change the event sourcing log level to debug PR#163
- `ReplicationRegionRaftActorStarter` uses its FQCN as its logger name PR#178
- Add diagnostic info to logs of sending replication results PR#179
- Persist `EntitySnapshot` as an event PR#184
- Allow only specific actors defined as a sticky leader to become a leader PR#186, PR#189
- `RaftActor` might delete committed entries #152, #165, PR#151, PR#166
  ⚠️ This fix adds a new persistent event type. It doesn't allow downgrading after being updated.
- An entity on a follower could stick at `WaitForReplication` if the entity has a `ProcessCommand` in its mailbox #157, PR#158
- Leader cannot reply to an entity with a `ReplicationFailed` message in some cases #153, PR#161, #170, PR#171
- An entity could stick at `WaitForReplication` when a Raft log entry is truncated by conflict #155, PR#162
- A `RaftActor` (Leader) could mis-deliver a `ReplicationSucceeded` message to a different entity #156, PR#162
- Snapshot synchronization could remove committed log entries that are not included in snapshots #167, PR#168
- `SnapshotStore` doesn't reply with `SnapshotNotFound` sometimes #182, PR#183
## v2.1.0 - 2022-03-24
- Efficient recovery of the commit log store, which is on the query side #112
  This change improves the performance of recovery on the query side. You should migrate the settings described in the Migration Guide.
- Raft actors start automatically after an initialization of `ClusterReplication` #118
  This feature is enabled only by using `typed.ClusterReplication`. It is highly recommended that you switch to the typed API since the classic API was deprecated.
- Raft actors track the progress of the event sourcing #136, PR#137, PR#142
  This feature ensures that:
  - Event Sourcing won't halt even if the event-sourcing store is unavailable for a long period. After the event-sourcing store recovers, Event Sourcing will work again automatically.
  - Compaction won't delete committed events that are not yet persisted to the event-sourcing store.
  It adds the following new settings (for more details, please see `reference.conf`):
  - `lerna.akka.entityreplication.raft.eventsourced.committed-log-entries-check-interval`
  - `lerna.akka.entityreplication.raft.eventsourced.max-append-committed-entries-size`
  - `lerna.akka.entityreplication.raft.eventsourced.max-append-committed-entries-batch-size`
  It deletes the following settings:
  - `lerna.akka.entityreplication.raft.eventsourced.commit-log-store.retry.attempts`
  - `lerna.akka.entityreplication.raft.eventsourced.commit-log-store.retry.delay`
  It requires that `lerna.akka.entityreplication.raft.compaction.preserve-log-size` is less than `lerna.akka.entityreplication.raft.compaction.log-size-threshold`.
- Compaction warns if it might not delete enough entries PR#142
- Bump up Akka version to 2.6.17 PR#98
  This change may show deserialization warnings during a rolling update; they are safe to ignore. For more details, see the Akka 2.6.16 release notes.
- TestKit throws a "Shard received unexpected message" exception after the entity is passivated PR#100
- `ReplicatedEntity` can produce an illegal snapshot if compaction and receiving a new event occur at the same time #111
- Starting a follower member after the leader completes a compaction may break the `ReplicatedLog` of the follower #105
- The Raft leader uses the same previous `LogEntryIndex` and `Term` for all batched `AppendEntries` messages #123
- Raft actors don't accept a `RequestVote(lastLogIndex < log.lastLogIndex, lastLogTerm > log.lastLogTerm)` message #125
- A new event is created even though all past events have not been applied #130
- `InstallSnapshot` can miss snapshots to copy PR#128
  ⚠️ This change adds a new persistent event. It might not allow downgrading after upgrading.
- Moving a leader during snapshot synchronization can delete committed log entries #133
  ⚠️ This change adds a new persistent event. It might not allow downgrading after upgrading.
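
Taken together, the v2.1.0 setting changes above could look like the following `application.conf` fragment. This is only a sketch: every value shown is an illustrative assumption, not a recommended default, so check `reference.conf` for the actual defaults before copying anything.

```hocon
lerna.akka.entityreplication.raft {
  compaction {
    # v2.1.0 requires preserve-log-size < log-size-threshold
    log-size-threshold = 50000  # illustrative value
    preserve-log-size = 10000   # illustrative value
  }
  eventsourced {
    # New settings introduced in v2.1.0 (illustrative values)
    committed-log-entries-check-interval = 100ms
    max-append-committed-entries-size = 100
    max-append-committed-entries-batch-size = 5
    # Settings deleted in v2.1.0; remove them from your config:
    # commit-log-store.retry.attempts
    # commit-log-store.retry.delay
  }
}
```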
## v2.0.0 - 2021-07-16
- Change the shard-distribution-strategy to distribute shards (`RaftActor`) more evenly PR#82
  ⚠️ This change does not allow rolling updates. You have to update your system by stopping the whole cluster.
- Made internal APIs private
  If you are only using the APIs mentioned in the implementation guide, this change does not affect your application. Otherwise, some APIs may be unavailable. Please see PR#47 to check which APIs are no longer available.
- Java 11 support
- Add a new typed API based on Akka Typed PR#79
  This API reduces runtime errors and increases productivity.
- The untyped (classic) API has been deprecated PR#96
  ⚠️ This API will be removed in the next major version release.
## v1.0.0 - 2021-03-29
- GA release 🚀
## v0.1.0 - 2021-01-12
- Initial release (under development)