Releases · FluxML/Optimisers.jl
Optimisers v0.3.3
Merged pull requests:
- add missing SignDecay doc reference (#168) (@CarloLucibello)
- fix broken doc links (#170) (@CarloLucibello)
- add `trainables` (#171) (@CarloLucibello)
- fix broken documentation (#172) (@CarloLucibello)
- add `path` option to `trainables` (#174) (@CarloLucibello)
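For reference, a minimal sketch of the new `trainables` function and its `path` keyword, assuming the interface described in the Optimisers.jl documentation (the example model is hypothetical):

```julia
using Optimisers

# A nested model structure; only the arrays inside are trainable.
model = (layers = (weight = rand(3, 3), bias = zeros(3)), scale = [2.0])

# Collect all trainable arrays as a flat vector.
ps = trainables(model)

# With `path = true` (added in #174), each array is paired with the
# KeyPath locating it inside the model.
for (kp, p) in trainables(model, path = true)
    println(kp, " => ", size(p))
end
```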
Closed issues:
- Documenter CI is failing (#169)
Optimisers v0.3.2
Optimisers v0.3.1
Closed issues:
- Error in `update!` for Metal arrays and Adam optimiser (#150)
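The fix concerns the core update path. A minimal sketch of that usage, with plain CPU arrays standing in for Metal arrays:

```julia
using Optimisers

model = (weight = rand(Float32, 3, 3),)
state = Optimisers.setup(Adam(1f-3), model)

# A stand-in gradient with the same structure as the model.
grad = (weight = ones(Float32, 3, 3),)

# update! returns the new optimiser state and the updated model.
state, model = Optimisers.update!(state, model, grad)
```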
Optimisers v0.3.0
The major change is #151, which removes type parameterisation from the structs. This should not break straightforward user code, but may break loading via BSON etc. It also adds errors on negative learning rates, and will in some cases change the default regulator from `eps(Float32)` to `1e-8`.
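A small illustration of the new argument checking, assuming it applies when a rule is constructed:

```julia
using Optimisers

Adam(0.001)   # accepted, as before
Adam(-0.001)  # as of v0.3.0 this throws instead of silently
              # accepting a negative learning rate
```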
Optimisers v0.2.20
Closed issues:
- Implement Lion, up to 5x faster than Adam, and more accurate (#156)
Merged pull requests:
- Add Lion to docs (#157) (@ToucheSir)
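A minimal sketch of the new rule, assuming the documented `Lion(η, β)` signature:

```julia
using Optimisers

# Lion keeps a single momentum buffer and steps with the sign of an
# interpolated gradient, hence its speed and memory advantage over Adam.
opt = Lion(1e-4, (0.9, 0.99))

model = (weight = rand(Float32, 4, 4),)
state = Optimisers.setup(opt, model)
```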
Optimisers v0.2.19
Closed issues:
- `OptimiserChain(..., ClipNorm)` fails on GPU (#127)
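The failing pattern, sketched here with a concrete chain (the issue title elides the other members):

```julia
using Optimisers

# Clip each gradient to norm ≤ 1, then apply Adam.
opt = OptimiserChain(ClipNorm(1.0), Adam(1e-3))

model = (weight = rand(Float32, 3, 3),)
state = Optimisers.setup(opt, model)
```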
Optimisers v0.2.18
Closed issues:
- Interface for gradient accumulation (#130)
- Optimisers.update fails with gradient of type `CUDA.CUSPARSE.CuSparseMatrixCSC` (#141)
Merged pull requests:
- Rule for gradient accumulation (#137) (@CarloLucibello)
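The accumulation rule added here is, to the best of my knowledge, `AccumGrad`; a minimal sketch of composing it into a chain:

```julia
using Optimisers

# Accumulate gradients over 4 calls to update!, then take one Adam step
# with their mean; the other 3 calls leave the model unchanged.
opt = OptimiserChain(AccumGrad(4), Adam(1e-3))

model = (weight = rand(Float32, 3, 3),)
state = Optimisers.setup(opt, model)
```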
Optimisers v0.2.17
Optimisers v0.2.16
Merged pull requests:
- fix anonymous walk deprecation (#125) (@CarloLucibello)
- fix typo (#132) (@bicycle1885)
Optimisers v0.2.15
Merged pull requests: