Commit 2bf68df by gdalle, Feb 29, 2024 (parent: fa01990)
1 changed file: `paper/paper.md` (1 addition, 1 deletion)
```diff
@@ -90,7 +90,7 @@ The reason for this low-dimensional choice is to spend most of the time in the g
 The data consists of $50$ independent sequences of length $100$ each, with a number of states varying from $2$ to $10$, to which we apply all inference algorithms (with Baum-Welch performing $5$ iterations).
 
 All benchmarks were run from Julia with `BenchmarkTools.jl` [@chenRobustBenchmarkingNoisy2016], calling Python with `PythonCall.jl` [@rowleyPythonCallJlPython2022], and plotting results with `CairoMakie.jl` [@danischMakieJlFlexible2021].
-The comparison code imports `HiddenMarkovModels.jl` version 0.5.0, and it is accessible in the `libs/HMMComparison/` subfolder of our GitHub repository.
+The comparison code imports `HiddenMarkovModels.jl` version 0.5.0 (commit [9e0b7ab](https://github.com/gdalle/HiddenMarkovModels.jl/commit/9e0b7ab955866523551efbc42813adc4d2b1e3dd)), and it is accessible in the [`libs/HMMComparison/`](https://github.com/gdalle/HiddenMarkovModels.jl/tree/9e0b7ab955866523551efbc42813adc4d2b1e3dd/libs/HMMComparison) subfolder of our GitHub repository.
 We tried to minimize parallelism effects by running everything on a single thread, and made the assumption that the Julia-to-Python overhead is negligible compared to the algorithm runtime.
 
 ![Benchmark of HMM packages](images/benchmark.svg)
```
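As a rough illustration of the benchmarking setup the paper describes (single-threaded timing of HMM inference with `BenchmarkTools.jl`), here is a minimal Julia sketch. The toy HMM construction, the `rand(hmm, T)` sampling call, and the use of `logdensityof` for the log-likelihood are assumptions about the `HiddenMarkovModels.jl` v0.5 API, not code taken from the commit itself:

```julia
using BenchmarkTools
using Distributions
using HiddenMarkovModels

# Hypothetical toy 2-state Gaussian HMM; constructor argument order
# (initial distribution, transition matrix, emission distributions)
# is an assumption about the HiddenMarkovModels.jl API.
init = [0.6, 0.4]
trans = [0.9 0.1; 0.2 0.8]
dists = [Normal(-1.0, 1.0), Normal(1.0, 1.0)]
hmm = HMM(init, trans, dists)

# Simulate one observation sequence of length 100, matching the
# sequence length used in the paper's benchmark.
obs_seq = rand(hmm, 100).obs_seq

# Time the log-likelihood computation; with a single Julia thread this
# mirrors the paper's attempt to minimize parallelism effects.
@benchmark logdensityof($hmm, $obs_seq)
```

In an actual comparison against Python libraries, the same quantity would be timed through `PythonCall.jl`, under the paper's stated assumption that the Julia-to-Python call overhead is negligible relative to the algorithm runtime.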
