Hi,

I am reposting an open issue from the ldbc_snb_implementations repo here.
I am trying to use the Cypher implementation of the benchmark to evaluate the performance of Neo4j under different configurations. I set operation_count=2500 and ran the interactive-benchmark.sh script multiple times. However, I got three different final operation counts (2473, 2532, 2584) across the three runs. Is this the expected result?
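For reference, these are the two driver properties involved; the key names are how I understand them from the driver's sample properties files, and the dissipation value below is just a placeholder rather than my actual setting:

```properties
# total number of operations the driver schedules for one run
operation_count=2500

# dissipation factor of the short-read random walk; I believe the full key is
# prefixed like this in recent driver versions, but please correct me if not
ldbc.snb.interactive.short_read_dissipation=0.2
```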
If I understand short_read_dissipation correctly, it is the delta (decay) in the random-walk model: a larger short_read_dissipation means a shorter walk, and in the extreme case where short_read_dissipation=1 there should be no short reads after a complex read. Is this the reason why the final number of operations can differ across runs?
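To check that I am reading the parameter correctly, this is the toy model I have in mind. It is only my mental picture of the mechanism, not the driver's actual scheduling code, and the initial probability of 1 - dissipation is my assumption:

```java
import java.util.Random;

// Toy model of the short-read random walk as I understand it: after each
// complex read, keep adding short reads while a random draw succeeds, and
// shrink the success probability by the dissipation factor after every step.
// With dissipation=1 the walk never starts, i.e. no short reads at all.
public class ShortReadWalkSketch {
    static int shortReadsAfterComplexRead(Random rng, double dissipation) {
        double p = 1.0 - dissipation; // assumed initial probability
        int shortReads = 0;
        while (rng.nextDouble() < p) {
            shortReads++;
            p *= (1.0 - dissipation); // walk dies out geometrically
        }
        return shortReads;
    }

    public static void main(String[] args) {
        Random rng = new Random(); // unseeded, so every run walks differently
        for (double d : new double[] {0.1, 0.5, 1.0}) {
            System.out.println("dissipation=" + d + " -> "
                    + shortReadsAfterComplexRead(rng, d) + " short reads");
        }
    }
}
```

If the walk lengths come from an unseeded random source like this, I would expect the total number of executed operations to wobble around operation_count from run to run, which matches what I am seeing.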
If the above is true, is there a way to set the random seed in the test driver so that the workload of a particular benchmark run can be replayed exactly?
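In other words, what I am hoping for is something along these lines; this is purely illustrative, and I do not know whether the driver exposes such an option:

```java
import java.util.Random;

// Purely illustrative: if the driver accepted a fixed seed, two runs with the
// same seed would draw identical random sequences and therefore schedule the
// same short-read walks (and end up with the same total operation count).
public class SeedSketch {
    public static void main(String[] args) {
        Random runA = new Random(42L); // hypothetical fixed seed
        Random runB = new Random(42L); // same seed, same sequence
        for (int i = 0; i < 3; i++) {
            System.out.println(runA.nextDouble() + " == " + runB.nextDouble());
        }
    }
}
```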
Thanks for any help in advance!

Here is my configuration

> I was getting three different final operation counts (2473, 2532, 2584) across 3 different runs. Is this the expected result?
I will discuss this with the task force when we next talk.
I've just run the cypher implementation a few times with your configuration and can reproduce the issue. Which scale factor are you using to generate the data?