* Optimize collision checking by always comparing a long at a time.
* Scan for the delimiter a long at a time (see the sketch after this commit list).
* Minor tuning. Now below 0.80s on Intel i9-13900K.
* Add number parsing code from Quan Anh Mai. Fix name length issue.
* Include suggestion from Alfonso Peterssen for another 1.5%.
* Optimize the hash-collision equality check for a ~4% gain.
* Add perf stats based on latest version.
* reset the JDK to the default (21.0.1-open) when no prepare script is provided
* leaderboard improvements - sorting and content
* run sdk install once at the beginning of the script for all the SDKs detected in any of the evaluated prepare scripts
* remove unnecessary code and tweak doc comments
* one more nit
* Don't print ranking values when only 1 fork is being evaluated
* It's been a few hours, so I now have some more rate limit :)
---------
Co-authored-by: Jason Nochlin <hundredwatt@users.noreply.github.com>
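The long-at-a-time delimiter scan mentioned above is the classic SWAR zero-byte trick applied to the ';' byte. A minimal, hedged sketch of the idea in plain Java; the class, method, and byte[] input are illustrative, not the entry's actual code:

```java
import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;
import java.nio.ByteOrder;

class SwarScan {
    private static final VarHandle LONG_VIEW =
            MethodHandles.byteArrayViewVarHandle(long[].class, ByteOrder.LITTLE_ENDIAN);

    // Returns the index of the first ';' in buf at or after pos, or -1 if there is none.
    static int findSemicolon(byte[] buf, int pos) {
        int i = pos;
        // Fast path: examine 8 bytes per iteration.
        for (; i + Long.BYTES <= buf.length; i += Long.BYTES) {
            long word = (long) LONG_VIEW.get(buf, i);
            long xor = word ^ 0x3B3B3B3B3B3B3B3BL;                       // ';' bytes become 0x00
            long hit = (xor - 0x0101010101010101L) & ~xor & 0x8080808080808080L;
            if (hit != 0) {
                return i + (Long.numberOfTrailingZeros(hit) >>> 3);      // byte offset of first match
            }
        }
        // Tail: finish byte by byte.
        for (; i < buf.length; i++) {
            if (buf[i] == ';') return i;
        }
        return -1;
    }
}
```

The same zero-byte mask also gives a cheap way to compare a stored key against the input a long at a time during the collision check, which is presumably what the first commit above refers to.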
* create new version of evaluate.sh using hyperfine + jq
* output the raw times for each command
* nit: s/command/fork/
* update evaluate2.sh for new fork file structure
* review changes
* use numactl on linux
* 1 warmup
* verify output
* leaderboard
* do not early exit on hyperfine error
* check if SMT and turbo boost are disabled
* fix bug
---------
Co-authored-by: Jason Nochlin <hundredwatt@users.noreply.github.com>
* isolgpus: fix chunk sizing when not at 8 threads
use as many cores as are available
don't buffer the station name, only use it when we need it.
get rid of the main branch
move variables inside the loop
* isolgpus: optimistically assume we can read a whole int for the station name, but roll back if we get it wrong (see the sketch after this commit list). This should be very beneficial on a dataset where station names are mostly over 4 chars
---------
Co-authored-by: Jamie Stansfield <jalstansfield@gmail.com>
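A hedged sketch of the optimistic whole-int read described in the last commit: grab 4 bytes of the station name at once, and fall back to byte-by-byte handling only when the ';' delimiter turns out to sit inside those 4 bytes. The buffer type, hash mixing, and method names are assumptions for illustration, not the actual isolgpus code.

```java
import java.nio.ByteBuffer;

class OptimisticNameScan {
    // Scans the station name starting at 'start', accumulating a simple hash,
    // and returns the position just past the ';' delimiter.
    // Assumes 'buf' was configured as buf.order(ByteOrder.LITTLE_ENDIAN).
    static int scanName(ByteBuffer buf, int start, int[] hashOut) {
        int hash = 0;
        int i = start;
        // Optimistic fast path: consume a whole int while it contains no ';'.
        while (i + Integer.BYTES <= buf.limit()) {
            int word = buf.getInt(i);
            if (containsSemicolon(word)) {
                break;                           // "roll back": the delimiter is within these 4 bytes
            }
            hash = hash * 31 + word;
            i += Integer.BYTES;
        }
        // Slow path: at most a few trailing bytes of the name remain.
        byte b;
        while ((b = buf.get(i)) != ';') {
            hash = hash * 31 + b;
            i++;
        }
        hashOut[0] = hash;
        return i + 1;                            // position just after ';'
    }

    private static boolean containsSemicolon(int word) {
        int xor = word ^ 0x3B3B3B3B;                              // ';' bytes become 0x00
        return ((xor - 0x01010101) & ~xor & 0x80808080) != 0;
    }
}
```

This pays off exactly when names are usually 4 bytes or longer, which matches the reasoning in the commit message.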
Runs with standard JDK 21.
On my computer (i5 13500, 20 cores, 32 GB RAM), my best run is (file fully in the page cache):
49.78user 0.69system 0:02.81elapsed 1795%CPU
A slightly older version of the code on a Mac Pro M1 with 32 GB:
real 0m2.867s
user 0m23.956s
sys 0m1.329s
As I wrote in the comments in the code, I round a few values differently than the reference implementation. I have seen that there is an issue about that, but no specific rule yet.
Main points:
- use MemorySegment, it's faster than ByteBuffer
- split the work into many chunks and distribute them to a thread pool (see the sketch right after this list)
- fast measurement parser using a lot of domain knowledge (see the parsing sketch further below)
- very low allocation
- visit each byte only once
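A minimal sketch of the first two points, assuming JDK 21 with the (then preview) foreign memory API: map the file as one MemorySegment, cut it into chunks, push each boundary forward to the next newline so no line straddles two workers, and submit the chunks to a thread pool. The file name, chunk count, and processChunk are placeholders, not this entry's actual code.

```java
import java.lang.foreign.Arena;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.ValueLayout;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class ChunkedReader {
    public static void main(String[] args) throws Exception {
        Path file = Path.of("measurements.txt");                        // assumed input file
        int chunkCount = Runtime.getRuntime().availableProcessors() * 8;

        try (FileChannel channel = FileChannel.open(file, StandardOpenOption.READ);
             Arena arena = Arena.ofShared()) {
            long size = channel.size();
            MemorySegment data = channel.map(FileChannel.MapMode.READ_ONLY, 0, size, arena);

            // Cut the file into chunks, moving each boundary to the next '\n'
            // so that no line is split between two workers.
            List<long[]> chunks = new ArrayList<>();
            long chunkSize = Math.max(1, size / chunkCount);
            long start = 0;
            while (start < size) {
                long end = Math.min(start + chunkSize, size);
                while (end < size && data.get(ValueLayout.JAVA_BYTE, end) != '\n') {
                    end++;
                }
                if (end < size) end++;                                  // include the newline
                chunks.add(new long[] { start, end });
                start = end;
            }

            try (ExecutorService pool = Executors.newFixedThreadPool(
                    Runtime.getRuntime().availableProcessors())) {
                for (long[] chunk : chunks) {
                    pool.submit(() -> processChunk(data, chunk[0], chunk[1]));
                }
            } // ExecutorService.close() waits for the submitted tasks to finish
        }
    }

    // Placeholder for the per-chunk parse/aggregate loop.
    static void processChunk(MemorySegment data, long from, long to) {
        // walk the bytes in [from, to), parsing "station;value\n" records ...
    }
}
```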
Things I tried that were in fact pessimizations:
- use some internal JDK code to vectorize the hashCode computation
- use a MemorySegment to represent the keys instead of byte[], to avoid copying
Hope I won't have a bad surprise when running on the target server 😱
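On the "fast measurement parser" point: every value in the challenge input has at most two integer digits and exactly one fractional digit, so there is no need for Double.parseDouble. A hedged sketch of that style of parser, returning the value in tenths of a degree as an int; the method name and byte[] input are illustrative:

```java
class MeasurementParser {
    // Parses a temperature such as "-12.3" or "4.5" starting at 'pos' and returns it
    // in tenths of a degree (-123, 45). Relies on the 1BRC format: an optional '-',
    // one or two integer digits, a '.', and exactly one fractional digit.
    static int parseTenths(byte[] line, int pos) {
        boolean negative = line[pos] == '-';
        if (negative) pos++;

        int value = line[pos++] - '0';                    // first integer digit
        if (line[pos] != '.') {                           // optional second integer digit
            value = value * 10 + (line[pos++] - '0');
        }
        pos++;                                            // skip '.'
        value = value * 10 + (line[pos] - '0');           // single fractional digit

        return negative ? -value : value;
    }
}
```

Keeping the value as an integer number of tenths also avoids floating-point accumulation error; converting back to a decimal only has to happen once, when the final mean is formatted.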
* start
* slower
* still bad
* finally faster than baseline :)
* starting to go fast
* improve
* we ball
* fix race condition and newline handling
* change threadpool
* ~18sec on my machine
* single thread memory mapped file reader, pool of processors
* cleanup of inner classes of MetricProcessor
* doubles are parsed without external functions, strings are lazily created from byte arrays
* remove the MappedByteBuffer.load() call that pre-loaded the mapping into memory
* fixed handling of newline
* fix a bug & correct locale used
* MappedByteBuffer size set to 1MB
* fixed rounding
* Do not use ArrayBlockingQueue.offer, since it drops elements when the queue is full (see the sketch below)
* MappedByteBuffer size = 32 MB
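On the ArrayBlockingQueue commit above: offer returns false immediately when the queue is full, so a producer that ignores the return value silently loses elements, while put blocks until space is available. A tiny illustration; the capacity and payloads are made up:

```java
import java.util.concurrent.ArrayBlockingQueue;

class QueueDemo {
    public static void main(String[] args) throws InterruptedException {
        ArrayBlockingQueue<String> queue = new ArrayBlockingQueue<>(1);

        queue.offer("first");                         // accepted, queue is now full
        boolean accepted = queue.offer("second");
        System.out.println(accepted);                 // false: "second" was silently dropped

        // put() blocks instead of dropping, until a consumer makes room:
        new Thread(() -> {
            try {
                System.out.println("took " + queue.take());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }).start();
        queue.put("third");                           // waits for the take above, never drops
    }
}
```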