* initial commit
* use loop
- use mutable object to store results
* get rid of regex
* Do not allocate measurement objects
* MMap + custom double parsing ~ 1:30 (down from ~ 2:05)
* HashMap for accumulation and only sort at the end - 1:05
* MMap the whole file
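A minimal sketch of what mapping the whole file can look like with the FFM API (preview in JDK 21, hence the `--enable-preview` mentioned below); the class name and file path are illustrative, not taken from the actual solution:

```java
import java.lang.foreign.Arena;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.ValueLayout;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MapWholeFile {
    public static void main(String[] args) throws Exception {
        try (FileChannel ch = FileChannel.open(Path.of("measurements.txt"), StandardOpenOption.READ);
             Arena arena = Arena.ofShared()) {
            // One mapping for the entire file; no 2 GB MappedByteBuffer limit.
            MemorySegment file = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size(), arena);
            byte first = file.get(ValueLayout.JAVA_BYTE, 0); // zero-copy byte access
            System.out.println("mapped " + file.byteSize() + " bytes, first byte = " + first);
        }
    }
}
```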
* Use graal
* no GC
* Store results in an array list to avoid double map lookup
* Adjust max buf size
* Manual parsing number to long
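For illustration, manually parsing the challenge's fixed-format temperatures (one fractional digit, optional sign) into tenths of a degree might look like this; the method name is hypothetical:

```java
// Parse a temperature of the form [-]d[d].d (e.g. "-12.3") into tenths
// of a degree, avoiding Double.parseDouble entirely.
static int parseTemperature(byte[] line, int pos) {
    boolean negative = line[pos] == '-';
    if (negative) pos++;
    int value = line[pos++] - '0';        // first integer digit
    if (line[pos] != '.') {               // optional second integer digit
        value = value * 10 + (line[pos++] - '0');
    }
    pos++;                                // skip '.'
    value = value * 10 + (line[pos] - '0');
    return negative ? -value : value;     // "-12.3" -> -123
}
```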
* Add --enable-preview
* remove buffer size check (has no effect on performance)
* fix min & max initialization
* do not check for \r
* Revert "do not check for \r"
This reverts commit 9da1f574bf6261ea49c353488d3b4673cad3ce6e.
* Optimise parsing. Now completes in 31 sec down from ~43
* trying to parse numbers faster
* use open address hash table instead of the standard HashMap
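The idea, roughly (a sketch with illustrative names and sizes, not the committed code): a flat array sized to a power of two with linear probing on collision, so lookups avoid HashMap's node allocations and rehashing.

```java
import java.util.Arrays;

// Open-addressing table with linear probing, keyed by station name.
final class StationTable {
    private static final int SIZE = 1 << 16;     // power of two, > number of stations
    private final byte[][] keys = new byte[SIZE][];
    private final long[] sums = new long[SIZE];
    private final int[] counts = new int[SIZE];

    void add(byte[] name, int hash, int temperature) {
        int slot = hash & (SIZE - 1);
        while (keys[slot] != null && !Arrays.equals(keys[slot], name)) {
            slot = (slot + 1) & (SIZE - 1);      // probe the next slot on collision
        }
        if (keys[slot] == null) keys[slot] = name;
        sums[slot] += temperature;
        counts[slot]++;
    }
}
```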
* formatting
* Rename the script to match github username (change underscores to slashes)
* Enable transparent huge pages, seems to improve by ~2 sec
* Revert "formatting"
This reverts commit 4e90797d2a729ed7385c9000c85cc7e87d935f96.
* Revert "use open address hash table instead of the standard HashMap"
This reverts commit c784b55f61e48f548b2623e5c8958c9b283cae14.
* add prepare_roman-r-m.sh
* SWAR tricks to find semicolon (-2 seconds of run time)
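The usual SWAR trick for this (a sketch; the constant and method names are mine): broadcast ';' across a long, XOR so matching bytes become zero, then flag the zero bytes with the classic has-zero bit-twiddle.

```java
// Returns a long with the high bit set in each byte that equals ';' (0x3B).
private static final long SEMIS = 0x3B3B3B3B3B3B3B3BL;

static long semicolonMatches(long word) {
    long diff = word ^ SEMIS;                       // matching bytes become 0x00
    return (diff - 0x0101010101010101L) & ~diff & 0x8080808080808080L;
}

// With little-endian loads, the first match's byte offset is
// Long.numberOfTrailingZeros(semicolonMatches(word)) >>> 3.
```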
* remove time call
* fix test
* Parallel version (~6.5 seconds)
* Add multithreaded variant to generate measurements
* Remove an existing measurements.txt file if it exists (for usability reasons)
Fix bug in the number of lines generated
* Also fix the case of less than the assumed chunk size (10M entries) per thread
#### Check List:
- [x] Tests pass (`./test.sh MeanderingProgrammer` shows no differences between expected and actual outputs)
- [x] All formatting changes by the build are committed
- [x] Your launch script is named `calculate_average_MeanderingProgrammer.sh` (make sure to match casing of your GH user name) and is executable
- [x] Output matches that of `calculate_average_baseline.sh`
* Execution time: `00:04.668`
* Execution time of reference implementation: `02:40.597`
* System: Apple M2 Max, 12 cores, 64 GB
* Implementation CalculateAverage_japplis of 1BRC from Anthony Goubard (japplis).
Local performance (7-year-old desktop, i7-6700K, 8 cores, 16 GB): 26 seconds. For reference, Jamie Stansfield (isolgpus) is at 23 seconds on my machine and 11 s in your results.
I've added nbactions.xml to the .gitignore file. NetBeans creates this file when you add options like --enable-preview to actions such as 'debug file' or 'run file'.
Second commit: removed BufferedInputStream and replaced Measurement with IntSummaryStatistics (thanks davecom); still 23" but cleaner code.
* Initial solution by raipc
* Implemented custom hash map with open addressing
* Small optimizations to task splitting and range check disabling
* Fixed off-by-one error in merge
* Run with EpsilonGC. Borrowed VM params from Shipilev
* Make script executable
* Add a license
* First working version.
* Small adjustments.
* Correct number of threads.
* Sync
* Some fixes. Switch to LF instead of CRLF.
* Parallel reading and processing.
* Update CreateMeasurements.java
* Update CalculateAverage.java
* Small fix for bug in switching buffers.
* Update calculate_average_arjenvaneerde.sh
---------
Co-authored-by: Gunnar Morling <gunnar.morling@googlemail.com>
* initial commit
* first attempt: segment the file and process it in parallel
* remove commented stuff
* custom parseDouble for this simple case
* fixed some issues and improved parsing
* format
* Update calculate_average_AbstractKamen.sh
---------
Co-authored-by: Gunnar Morling <gunnar.morling@googlemail.com>
This commit introduces a new Java class, CalculateAverage_couragelee, and a shell script for calculating averages. The Java class uses NIO memory mapping and parallel computation to perform the calculations. These changes should improve the efficiency and speed of the average calculation.
* feat(flippingbits): Improve parsing of measurement and few cleanups
* feat(flippingbits): Reduce chunk size to 10MB
* feat(flippingbits): Improve parsing of station names
* chore(flippingbits): Remove obsolete import
* chore(flippingbits): Few cleanups
* Optimize the hash-collision check by always comparing a long at a time (see the sketch below).
* Use a long at a time scanning for delimiter.
* Minor tuning. Now below 0.80s on Intel i9-13900K.
* Add number parsing code from Quan Anh Mai. Fix name length issue.
* Include suggestion from Alfonso Peterssen for another 1.5%.
* Optimize hash collision check compare for ~4% gain.
* Add perf stats based on latest version.
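One way to read the long-at-a-time collision check mentioned above (an illustrative sketch, assuming keys are stored padded to whole longs):

```java
// Compare a candidate key against a stored key eight bytes at a time,
// accumulating differences so the loop stays branch-free.
static boolean sameKey(long[] stored, long[] candidate, int words) {
    long diff = 0;
    for (int i = 0; i < words; i++) {
        diff |= stored[i] ^ candidate[i];
    }
    return diff == 0;   // zero iff every long (and hence every byte) matched
}
```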
* isolgpus: fix chunk sizing when not at 8 threads
- use as many cores as are available
- don't buffer the station name; only use it when we need it
- get rid of the main branch
- move variables inside the loop
* isolgpus: optimistically assume we can read a whole int for the station name, but roll back if we get it wrong (sketched below). This should be very beneficial on a dataset where station names are mostly over 4 chars.
---------
Co-authored-by: Jamie Stansfield <jalstansfield@gmail.com>
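A simplified sketch of that optimistic read (checking before consuming rather than rolling back; the ByteBuffer access and names are illustrative): scan the name four bytes at a time, and only fall back to byte-wise scanning once the terminating ';' is known to sit in the current four bytes.

```java
import java.nio.ByteBuffer;

class OptimisticName {
    // True if any byte of the int equals ';' (0x3B).
    static boolean containsSemicolon(int quad) {
        int diff = quad ^ 0x3B3B3B3B;
        return ((diff - 0x01010101) & ~diff & 0x80808080) != 0;
    }

    // Returns the index of the ';' terminating the station name.
    static int findNameEnd(ByteBuffer buf, int pos) {
        while (pos + 4 <= buf.limit() && !containsSemicolon(buf.getInt(pos))) {
            pos += 4;                        // fast path: a whole int at a time
        }
        while (buf.get(pos) != ';') {        // slow path for the last 0-3 bytes
            pos++;
        }
        return pos;
    }
}
```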
Runs with standard JDK 21.
On my computer (i5-13500, 20 cores, 32 GB RAM), my best run (file fully in the page cache) is:
49.78user 0.69system 0:02.81elapsed 1795%CPU
A slightly older version of the code on an M1 Mac with 32 GB:
real 0m2.867s
user 0m23.956s
sys 0m1.329s
As I wrote in the comments in the code, I have a few roundings that differ from the reference implementation. I have seen that there is an issue about that, but no specific rule yet.
Main points:
- use MemorySegment; it's faster than ByteBuffer
- split the work into many chunks and distribute them to a thread pool (sketched below)
- fast measurement parsing using a lot of domain knowledge
- very low allocation
- visit each byte only once
Things I tried that were in fact pessimizations:
- use some internal JDK code to vectorize the hashCode computation
- use a MemorySegment instead of byte[] to represent the keys, to avoid copying
Hope I won't have a bad surprise when running on the target server 😱
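A sketch of the chunking step listed in the main points above (class and method names are illustrative; error handling omitted): cut the mapped file into roughly equal slices, then extend each boundary to the next newline so no line straddles two chunks.

```java
import java.lang.foreign.MemorySegment;
import java.lang.foreign.ValueLayout;
import java.util.ArrayList;
import java.util.List;

class Chunker {
    // Slice `file` into at most `count` pieces whose boundaries fall on '\n'.
    static List<MemorySegment> chunks(MemorySegment file, int count) {
        List<MemorySegment> slices = new ArrayList<>();
        long size = file.byteSize();
        long start = 0;
        for (int i = 1; i <= count && start < size; i++) {
            long end = (i == count) ? size : Math.min(size, Math.max(start, i * (size / count)));
            while (end < size && file.get(ValueLayout.JAVA_BYTE, end) != '\n') {
                end++;                       // never split a line across chunks
            }
            if (end < size) end++;           // keep the newline in this chunk
            slices.add(file.asSlice(start, end - start));
            start = end;
        }
        return slices;
    }
}
```

Each slice can then be submitted to a thread pool, with the per-thread tables merged at the end.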
* start
* slower
* still bad
* finally faster than baseline :)
* starting to go fast
* improve
* we ball
* fix race condition and newline handling
* change threadpool
* ~18sec on my machine