* First Version
First draft; stole chunking but it's bad
Forgot my changes
No regex building
Clean & optim
I was not benchmarking myself T_T
Faaaster
* Update calculate_average_samuelyvon.sh
Co-authored-by: Gunnar Morling <gunnar.morling@googlemail.com>
* Add prepare script
* Fix rounding
* Fix format
* Fixing casing
* Formats of sorts?
* Rename
---------
Co-authored-by: Gunnar Morling <gunnar.morling@googlemail.com>
* calculate_average_mtopolnik
* short hash (just first 8 bytes of name)
* Remove unneeded checks
* Remove archiving classes
* 2x larger hashtable
* Add "set" to setters
* Simplify parsing temperature, remove newline search
* Reduce the size of the name slot
* Store name length and use to detect collision
* Reduce memory loads in parseTemperature
* Use short for min/max
* Extract constant for semicolon
* Fix script header
* Explicit bash shell in shebang
* Inline usage of broadcast semicolon
* Try vectorization
* Remove vectorization
* Go Unsafe
* Use SWAR temperature parsing by merykitty (see the sketch after this list)
* Inline some things
* Remove commented-out MemorySegment usage
* Inline namesMem.asSlice() invocation
* Try out JVM JIT flags
* Implement strcmp
* Remove unused instance variables
* Optimize hashing
* Put station name into hashtable
* Reorder method
* Remove usage of MemorySegment.getUtf8String
Replace with UNSAFE.copyMemory() and new String()
* Fix hashing bug
* Remove outdated comments
* Fix informative constants
* Use broadcastByte() more
* Improve method naming
* More hashing
* Revert more hashing
* Add commented-out code to hash 16 bytes
* Slight cleanup
* Align hashtable at cacheline boundary
* Add Graal Native image
* Revert Graal Native image
This reverts commit d916a42326d89bd1a841bbbecfae185adb8679d7.
* Simplify shell script (no SDK selection)
* Move a constant, zero out hashtable on start
* Better name comparison
* Add prepare_mtopolnik.sh
* Cleaner idiom in name comparison
* AND instead of MOD for hashtable indexing
* Improve word masking code
* Fix formatting
* Reduce memory loads
* Remove endianness checks
* Avoid hash == 0 problem
* Fix subtle bug
* MergeSort of parallel results
* Touch up perf
* Touch up perf
* Remove -Xmx256m
* Extract result printing method
* Print allocation details on OOME
* Single mmap
* Use global allocation arena
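merykitty's SWAR temperature parse referenced in the list above is a well-known trick from the challenge. A minimal sketch, assuming the eight bytes starting at the temperature (e.g. "12.3\n...") have been loaded into a little-endian long; the helper and test values are illustrative:

```java
public class SwarTemperature {

    // Parses "-99.9".."99.9" into tenths of a degree, branch-free.
    static int parseTemperature(long word) {
        // Bit 4 is set in ASCII digits but clear in '.', so the inverted word
        // pinpoints the decimal dot (it can only sit in byte 1, 2 or 3).
        int dotPos = Long.numberOfTrailingZeros(~word & 0x10101000L);
        long signed = (~word << 59) >> 63;       // all ones if the value starts with '-'
        long removeSignMask = ~(signed & 0xFF);  // blanks out the '-' byte
        // Keep only the digit nibbles, aligned identically for all four layouts.
        long digits = ((word & removeSignMask) << (28 - dotPos)) & 0x0F000F0F00L;
        // One multiply gathers the three digits into bits 32..41.
        long absValue = ((digits * 0x640a0001L) >>> 32) & 0x3FF;
        return (int) ((absValue ^ signed) - signed); // re-apply the sign
    }

    // Illustrative helper: load up to 8 ASCII bytes as a little-endian long.
    static long littleEndianWord(String s) {
        long word = 0;
        byte[] b = s.getBytes(java.nio.charset.StandardCharsets.US_ASCII);
        for (int i = 0; i < Math.min(8, b.length); i++) {
            word |= (b[i] & 0xFFL) << (8 * i);
        }
        return word;
    }

    public static void main(String[] args) {
        System.out.println(parseTemperature(littleEndianWord("12.3\n")));  // 123
        System.out.println(parseTemperature(littleEndianWord("-5.7\n")));  // -57
    }
}
```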
* initial commit
* - use loop
- use mutable object to store results
* get rid of regex
* Do not allocate measurement objects
* MMap + custom double parsing ~ 1:30 (down from ~ 2:05)
* HashMap for accumulation and only sort at the end - 1:05
* MMap the whole file
* Use graal
* no GC
* Store results in an array list to avoid double map lookup
* Adjust max buf size
* Manual parsing number to long
* Add --enable-preview
* remove buffer size check (has no effect on performance)
* fix min & max initialization
* do not check for \r
* Revert "do not check for \r"
This reverts commit 9da1f574bf6261ea49c353488d3b4673cad3ce6e.
* Optimise parsing. Now completes in 31 sec down from ~43
* trying to parse numbers faster
* use open address hash table instead of the standard HashMap
* formatting
* Rename the script to match github username (change underscores to dashes)
Enable transparent huge pages, seems to improve by ~2 sec
* Revert "formatting"
This reverts commit 4e90797d2a729ed7385c9000c85cc7e87d935f96.
* Revert "use open address hash table instead of the standard HashMap"
This reverts commit c784b55f61e48f548b2623e5c8958c9b283cae14.
* add prepare_roman-r-m.sh
* SWAR tricks to find semicolon (-2 seconds to run time; see the sketch after this list)
* remove time call
* fix test
* Parallel version (~6.5 seconds)
* Add multithreaded variant to generate measurements
* Remove an existing measurements.txt file if it exists (for usability reasons)
Fix bug in the number of lines generated
* Also fix the case of less than the assumed chunk size (10M entries) per thread
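For context, the semicolon SWAR trick mentioned above is the classic zero-byte bit hack applied to ';'. A sketch, assuming the input bytes are loaded into a little-endian long:

```java
public class SwarSemicolon {

    // Returns the byte index of the first ';' in the word, or 8 if absent.
    static int semicolonIndex(long word) {
        long match = word ^ 0x3B3B3B3B3B3B3B3BL; // ';' = 0x3B; matching bytes become 0x00
        long found = (match - 0x0101010101010101L) & ~match & 0x8080808080808080L;
        return Long.numberOfTrailingZeros(found) >>> 3;
    }

    public static void main(String[] args) {
        // "Hamburg;" loaded little-endian: the ';' sits in byte 7
        long word = 0x3B677275626D6148L;
        System.out.println(semicolonIndex(word)); // 7
    }
}
```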
#### Check List:
- [x] Tests pass (`./test.sh MeanderingProgrammer` shows no differences between expected and actual outputs)
- [x] All formatting changes by the build are committed
- [x] Your launch script is named `calculate_average_MeanderingProgrammer.sh` (make sure to match casing of your GH user name) and is executable
- [x] Output matches that of `calculate_average_baseline.sh`
* Execution time: `00:04.668`
* Execution time of reference implementation: `02:40.597`
* System: Apple M2 Max, 12 cores, 64 GB
* Implementation CalculateAverage_japplis of 1BRC from Anthony Goubard (japplis).
Local performance (7-year-old desktop, i7-6700K, 8 cores, 16GB) 26 seconds. For reference, Jamie Stansfield (isolgpus) is 23 seconds on my machine and 11s in your results.
I've added the nbactions.xml to the .gitignore file. When you add in NetBeans options like --enable-preview to actions like debug file or run file, it creates this file.
second commit: Removed BufferedInputStream and replaced Measurement with IntSummaryStatistics (thanks davecom): still 23" but cleaner code
* Initial solution by raipc
* Implemented custom hash map with open addressing
* Small optimizations to task splitting and range check disabling
* Fixed off-by-one error in merge
* Run with EpsilonGC. Borrowed VM params from Shipilev
* Make script executable
* Add a license
* First working version.
* Small adjustments.
* Correct number of threads.
* Sync
* Some fixes. Switch to LF instead of CRLF.
* Parallel reading and processing.
* Update CreateMeasurements.java
* Update CalculateAverage.java
* Small fix for bug in switching buffers.
* Update calculate_average_arjenvaneerde.sh
---------
Co-authored-by: Gunnar Morling <gunnar.morling@googlemail.com>
* initial commit
* first attempt: segment the file and process it in parallel
* remove commented stuff
* custom parseDouble for this simple case
* fixed some issues and improved parsing
* format
* Update calculate_average_AbstractKamen.sh
---------
Co-authored-by: Gunnar Morling <gunnar.morling@googlemail.com>
This commit introduces a new Java class, CalculateAverage_couragelee, and a shell script for calculating averages. The Java class uses NIO's memory mapping and parallel computation to perform the calculations. These changes should improve the efficiency and speed of the average calculation.
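A minimal sketch of that memory-mapping + parallel-chunk pattern (file name, probe length, and chunk-alignment policy are illustrative assumptions, not the submission's actual code):

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.stream.IntStream;

public class MmapChunks {
    public static void main(String[] args) throws IOException {
        try (FileChannel ch = FileChannel.open(Path.of("measurements.txt"), StandardOpenOption.READ)) {
            long size = ch.size();
            int chunks = Runtime.getRuntime().availableProcessors();
            long[] starts = new long[chunks + 1];
            starts[chunks] = size;
            for (int i = 1; i < chunks; i++) {
                // Align each chunk start to the byte after a newline so no record is
                // split (assumes every line is shorter than the 256-byte probe).
                long guess = size * i / chunks;
                MappedByteBuffer probe = ch.map(FileChannel.MapMode.READ_ONLY, guess, Math.min(256, size - guess));
                int skip = 0;
                while (probe.get(skip) != '\n') skip++;
                starts[i] = guess + skip + 1;
            }
            IntStream.range(0, chunks).parallel().forEach(i -> {
                try {
                    MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, starts[i], starts[i + 1] - starts[i]);
                    // process(buf): scan "station;temperature" records, merge per-thread maps at the end
                } catch (IOException e) {
                    throw new RuntimeException(e);
                }
            });
        }
    }
}
```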
* feat(flippingbits): Improve parsing of measurement and few cleanups
* feat(flippingbits): Reduce chunk size to 10MB
* feat(flippingbits): Improve parsing of station names
* chore(flippingbits): Remove obsolete import
* chore(flippingbits): Few cleanups
* Optimize collision checking by always comparing a long at a time.
* Scan for the delimiter a long at a time.
* Minor tuning. Now below 0.80s on Intel i9-13900K.
* Add number parsing code from Quan Anh Mai. Fix name length issue.
* Include suggestion from Alfonso Peterssen for another 1.5%.
* Optimize hash collision check compare for ~4% gain.
* Add perf stats based on latest version.
* isolgpus: fix chunk sizing when not at 8 threads
use as many cores as are available
don't buffer the station name, only use it when we need it.
get rid of the main branch
move variables inside the loop
* isolgpus: optimistically assume we can read a whole int for the station name, but roll back if we get it wrong. This should be very beneficial on a dataset where station names are mostly over 4 chars (sketched below)
---------
Co-authored-by: Jamie Stansfield <jalstansfield@gmail.com>
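A hedged sketch of that optimistic-read idea (buffer handling and names are illustrative, not the submission's code): consume the name four bytes at a time, and only fall back to a byte-wise scan once a ';' shows up inside the int.

```java
import java.nio.ByteBuffer;

public class OptimisticName {

    // Returns the index of the ';' terminating the station name.
    static int findSemicolon(ByteBuffer buf, int pos) {
        // Optimistically consume 4 bytes per step; pays off when names are mostly > 4 chars.
        while (pos + 4 <= buf.limit()) {
            int word = buf.getInt(pos);
            if ((word >>> 24) == ';' || ((word >>> 16) & 0xFF) == ';'
                    || ((word >>> 8) & 0xFF) == ';' || (word & 0xFF) == ';') {
                break; // guessed wrong: the ';' is inside this int, roll back
            }
            pos += 4;
        }
        while (buf.get(pos) != ';') pos++; // exact scan over at most the last few bytes
        return pos;
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.wrap("Hamburg;12.0\n".getBytes());
        System.out.println(findSemicolon(buf, 0)); // 7
    }
}
```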
Runs with standard JDK 21.
On my computers (i5 13500, 20 cores, 32 GB RAM) my best run is (file fully in page cache):
49.78user 0.69system 0:02.81elapsed 1795%CPU
A bit older version of the code on Mac pro M1 32 GB:
real 0m2.867s
user 0m23.956s
sys 0m1.329s
As I wrote in comments in the code, I have a few roundings that differ from the reference implementation. I have seen that there is an issue about that, but no specific rule yet.
Main points:
- use MemorySegment, it's faster than ByteBuffer
- split the work in a lot of chunks and distribute to a thread pool
- fast measurement parser by using a lot of domain knowledge
- very low allocation
- visit each byte only once
Things I tried that were in fact pessimizations:
- use some internal JDK code to vectorize the hashCode computation
- use a MemorySegment to represent the keys instead of byte[], to avoid
copying
Hope I won't have a bad surprise when running on the target server 😱
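Not the author's code, but a minimal sketch of the MemorySegment point above (JDK 21, where java.lang.foreign still requires --enable-preview): map the file into one segment and visit each byte once.

```java
import java.lang.foreign.Arena;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.ValueLayout;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class SegmentScan {
    public static void main(String[] args) throws Exception {
        try (FileChannel ch = FileChannel.open(Path.of("measurements.txt"), StandardOpenOption.READ);
             Arena arena = Arena.ofShared()) {
            // One mapping for the whole file; chunks would be slices of this segment.
            MemorySegment seg = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size(), arena);
            long lines = 0;
            for (long i = 0; i < seg.byteSize(); i++) {  // visit each byte only once
                if (seg.get(ValueLayout.JAVA_BYTE, i) == '\n') lines++;
            }
            System.out.println(lines + " records");
        }
    }
}
```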
* start
* slower
* still bad
* finally faster than baseline :)
* starting to go fast
* improve
* we ball
* fix race condition and newline handling
* change threadpool
* ~18sec on my machine
* single thread memory mapped file reader, pool of processors
* cleanup of inner classes of MetricProcessor
* doubles are parsed without external functions, strings are lazily created from byte arrays
* remove load() MappedByteBuffer in memory
* fixed handling of newline
* fix a bug & correct locale used
* MappedByteBuffer size set to 1MB
* fixed rounding
* Do not use ArrayBlockingQueue.offer since it drops elements when queue is full
* MappedByteBuffer size = 32 MB
* Adding kgeri's solution
* parallelizing CalculatorAverage_kgeri
* fixing aggregation bugs, chunk size calc for small files
* removed GC logging
Co-authored-by: Gunnar Morling <gunnar.morling@googlemail.com>
* fix for when there's no newline at end of input
* fix for when the final record ends on the chunk boundary
---------
Co-authored-by: Gunnar Morling <gunnar.morling@googlemail.com>
* improve double reading by eliminating string parsing in between; do the calculations on integers instead of doubles and parse into a double only once at the end (see the fixed-point sketch after this list)
* more improvements, sharing a single StringBuilder to build all toStrings, minor performance gain.
* micro optimizations on reading temperature
* a small skip for redundant traverses, micro optimization
* micro optimization, eliminate some if cases, saves 0.5 seconds more
* micro optimization, calculating the key hash ahead eliminates one more loop, saves 0.5 seconds more :)
* optimize key equals and handle the case when a region is larger than the max integer size
---------
Co-authored-by: Yavuz Tas <yavuz.tas@ing.com>
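A small sketch of the integer idea in the first commit above: since temperatures always have exactly one decimal digit, they can be kept as tenths in an int/long and converted to a double only once, when printing. Class and method names are illustrative:

```java
import java.util.Locale;

public class FixedPointStats {
    int min = Integer.MAX_VALUE, max = Integer.MIN_VALUE, count;
    long sum;

    void add(int tenths) { // e.g. "12.3" parsed as 123
        min = Math.min(min, tenths);
        max = Math.max(max, tenths);
        sum += tenths;
        count++;
    }

    @Override
    public String toString() { // parse into doubles only here, at the very end
        return String.format(Locale.ROOT, "%.1f/%.1f/%.1f",
                min / 10.0, Math.round((double) sum / count) / 10.0, max / 10.0);
    }

    public static void main(String[] args) {
        FixedPointStats stats = new FixedPointStats();
        stats.add(123);
        stats.add(-57);
        System.out.println(stats); // -5.7/3.3/12.3
    }
}
```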
* Initial version with multiple ideas
* Added virtual thread implementation based on certain task size
* Removed evaluate file
* Fixed test issues
* Added a custom input split
* first try
* format
* Update calculate_average_imrafaelmerino.sh
Co-authored-by: Gunnar Morling <gunnar.morling@googlemail.com>
* Update src/main/java/dev/morling/onebrc/CalculateAverage_imrafaelmerino.java
Co-authored-by: Gunnar Morling <gunnar.morling@googlemail.com>
---------
Co-authored-by: Rafael Merino García <imrafaelmerino@gmail.com>
Co-authored-by: Gunnar Morling <gunnar.morling@googlemail.com>
* Implementation of 1brc - felix19350
* Added license header
* Fixed failing tests
* Replaced parsing of doubles with a custom parser and integer arithmetic
---------
Co-authored-by: Bruno Felix <bruno.felix@klarna.com>
* Initial version.
* Make PGO feature optional off-by-default. Needs PGO_MODE environment
variable to be set. Add -O3 -march=native tuning flags for better
performance.
* Adjust script to be more quiet.
* Adjust max city length. Fix an issue when accumulating results.
* Tune thomaswue submission.
mmap the entire file, use Unsafe directly instead of ByteBuffer, avoid byte[] copies (see the sketch after this list).
These tricks give a ~30% speedup over an already fast implementation.
* Optimize parsing of numbers based on specific given constraints.
* Fix for segment calculation for case of very small input.
* Minor shell script fixes.
* Separate out build step into file additional_build_step_thomaswue.sh,
simplify run script and remove PGO option for now.
* Minor corrections to the run script.
---------
Co-authored-by: Alfonso² Peterssen <alfonso.peterssen@oracle.com>
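A hedged sketch of the Unsafe part of that commit (here a heap array stands in for the mmapped region; the real code reads from raw off-heap addresses):

```java
import java.lang.reflect.Field;
import sun.misc.Unsafe;

public class UnsafeScan {
    static final Unsafe UNSAFE = load();

    static Unsafe load() {
        try {
            Field f = Unsafe.class.getDeclaredField("theUnsafe");
            f.setAccessible(true);
            return (Unsafe) f.get(null);
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    public static void main(String[] args) {
        byte[] data = "Hamburg;12.0\n".getBytes();
        long base = Unsafe.ARRAY_BYTE_BASE_OFFSET;
        // Each getByte is a raw load with no bounds check, which is where the
        // speedup over ByteBuffer.get() comes from.
        long i = base;
        while (UNSAFE.getByte(data, i) != ';') i++;
        System.out.println("semicolon at " + (i - base)); // 7
    }
}
```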
* A solution with Actor Model concurrency and MappedByteBuffer
* fix test cases
* revert back the file name to original
* cache String hashCode calculation by composing with a Key object (see the sketch after this list)
* fix wrong key caching and eliminate duplicate String creation between actors
* update possible char count in a line, fix calculate_average.sh
* increase possible line length to 256 bytes, much safer to cover 100 chars I hope
---------
Co-authored-by: Yavuz Tas <yavuz.tas@ing.com>
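The Key-composition trick mentioned above might look roughly like this (a sketch, not the submission's actual class): compute the hash once and reuse it for every map lookup.

```java
import java.util.Arrays;

final class Key {
    final byte[] name;
    final int hash; // computed once, reused by every lookup passed between actors

    Key(byte[] name) {
        this.name = name;
        this.hash = Arrays.hashCode(name);
    }

    @Override public int hashCode() { return hash; }

    @Override public boolean equals(Object o) {
        return o instanceof Key k && Arrays.equals(name, k.name);
    }
}
```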
* artpar's attempt
* artpar's attempt
* remove int -> Integer conversions, custom parsing for measurements
* remove allocations by caching station names
also remove Integer and use int instead to remove valueOf calls
* Fix result mismatch errors
* parse int instead of double
* reduce time spent reading the mapped buffer
* cleanup unused memory
* less is faster? vector addition doesn't look worth it
* backout from virtual threads as well
* Fix breaking tests
Somewhat mixed collection of multiple ideas, mostly based initially
on using the new JDK Vector API for extracting offsets of newlines
and semicolons.
Runs locally in just under 11 seconds on 1B rows of input on a
2020 M1 Macbook Air.
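A sketch of what extracting ';' and newline offsets with the incubating Vector API can look like (SPECIES_128 and the demo input are illustrative assumptions; run with --add-modules jdk.incubator.vector):

```java
import jdk.incubator.vector.ByteVector;
import jdk.incubator.vector.VectorOperators;
import jdk.incubator.vector.VectorSpecies;

public class VectorScan {
    // 128-bit species so even this tiny demo input fills a whole vector
    static final VectorSpecies<Byte> SPECIES = ByteVector.SPECIES_128;

    public static void main(String[] args) {
        byte[] data = "Hamburg;12.0\nMunich;8.9\n".getBytes();
        for (int i = 0; i < SPECIES.loopBound(data.length); i += SPECIES.length()) {
            ByteVector v = ByteVector.fromArray(SPECIES, data, i);
            // One comparison per delimiter; each mask bit marks a matching lane.
            long hits = v.compare(VectorOperators.EQ, (byte) ';').toLong()
                      | v.compare(VectorOperators.EQ, (byte) '\n').toLong();
            for (long m = hits; m != 0; m &= m - 1) { // iterate set bits
                int offset = i + Long.numberOfTrailingZeros(m);
                System.out.println((data[offset] == ';' ? "';'" : "EOL") + " at " + offset);
            }
        }
        // A scalar tail loop would handle the last data.length - loopBound bytes.
    }
}
```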
* isolgpus: submission 1
* isolgpus: fix min value bug (breaks if a negative temperature never appears)
* isolgpus: remove unused collector
* isolgpus: fix split on chunk bug
* isolgpus: change name equality algo to a cheaper check.
* isolgpus: fix chunking state to cope with last byte of last chunk
* isolgpus: hash as we go, instead of at the end
* isolgpus: adjust thread count to core count
* isolgpus: change cores to 8 statically
---------
Co-authored-by: Jamie Stansfield <jalstansfield@gmail.com>
* First performance tweaks
* further tweaks
* collect into a treemap
* Tweak JVM options
* Inline rounding into collector
* reduce some operations
* oops, add missing braces
* tweak JVM options
* small fixes
* add min and max to processing
* fix min
* remove compact strings
* replace sumWithCompensation with naive sum implementation
* use UseShenandoahGC
* integrate mmap
* integrate mmap
* Fix messed up array logic
* Set jdk version
* Use Integer calculation instead of double, add unit-test
* Bring back StationIdent optimization
Originally, StationIdent was using byte[] to store names, so the extra
String allocation could be avoided. However, that produced incorrect
sorting.
Sorting is now moved to the result merging step. Here, names are
converted to Strings.
* Implement readStationName with SIMD 256bit
* Rebase and cleanup test code, now that it's in the project
* Fix seijikun formatting
* Fix test failure in specific jobCnt edge-cases
* Also switch to graalvm
In case of a key collision, a broken implementation will likely attribute
measurements to the wrong key, so it is better to have non-zero values
that end up producing a wrong average value.
When all measurements are zero, the averages are also zero even
when attributed to the wrong keys.
Updates #91
* Added tests for endianness calculations (had these in a different class, perhaps handy for others to see as well)
Inlined the hash function; runs locally in 2.4 sec now, hopefully fixing the endianness issues
Added equals to support any city name up to 1024 in length, don't rely on hash
* For clarity I've updated the code so endianness doesn't change the hashes, easier to debug.
* Fixing bug in array check
Simple is faster
* Also spotted the diff, not just the big exception
Fixed buffer limit issue
Input created via
```sh
bash -c 'for i in {1..10000} ; do echo "id$i;0.0" ; done' >./src/test/resources/samples/measurements-10000-unique-keys.txt
```
and output via baseline implementation.
Keys are short and very similar, which improves the chances of collision
and hence makes them good for testing.
Fixes #91
* Use open-addressing scheme to deal with hash table collisions. Reduce concurrency from 16 to 8. Use bit mask rather than mod operator to confine hash code to table range (see the sketch below).
* Properly handle file partitions that reside entirely within a line.
* Reorder statements in doProcessBuffer.
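A minimal sketch of that combination, assuming a power-of-two table size so the bit mask can replace the mod operator:

```java
import java.util.Arrays;

public class OpenAddressingTable {
    static final int SIZE = 1 << 14;      // power of two, so '&' can replace '%'
    static final int MASK = SIZE - 1;
    final byte[][] keys = new byte[SIZE][];

    // Returns the slot holding the key, or the first free slot for it.
    int slotFor(byte[] key, int hash) {
        int slot = hash & MASK;           // bit mask instead of mod
        while (keys[slot] != null && !Arrays.equals(keys[slot], key)) {
            slot = (slot + 1) & MASK;     // linear probing on collision
        }
        return slot;
    }
}
```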
Adds test samples that can be used for unit tests or to verify
implementations via:
```bash
for sample in src/test/resources/samples/*.txt
do
echo "Validating $sample"
rm -f measurements.txt
ln -s "$sample" measurements.txt
diff <(./calculate_average.sh) "${sample%.txt}.out"
done
rm measurements.txt
```
For #61
Added SWAR (SIMD Within A Register) code to increase ByteBuffer processing/throughput
Delaying the creation of the String by comparing hash, segmenting like spullara, improved EOL finding
Co-authored-by: Gunnar Morling <gunnar.morling@googlemail.com>
* Initial implementation using Shenandoah GC and parallel iteration
* Use memory mapped files
* Iterate the buffer once and use BigDecimal parsing instead of Double.parseDouble
* Add information about Graal
* Add sdk use to calculate script
* Memory mapped file, single-pass parsing, custom hash map, fixed thread pool
The threading was a hasty addition and needs work
* Used ArrayList instead of TreeMap to reduce a little overhead
We only need it sorted for output, so only construct a TreeMap for output
* Attempt to speed up double conversion (a sketch of this style of parser follows the benchmark numbers below)
* Cap core count for low-core systems
* Fix wrong exponent
* Accumulate measurement value in double, seems marginally faster
```
Benchmark                                                           Mode  Cnt    Score    Error   Units
DoubleParsingBenchmark.ourToDouble                                 thrpt   10  569.771 ±  7.065  ops/us
DoubleParsingBenchmark.ourToDoubleAccumulateInToDouble             thrpt   10  648.026 ±  7.741  ops/us
DoubleParsingBenchmark.ourToDoubleDivideInsteadOfMultiply          thrpt   10  570.412 ±  9.329  ops/us
DoubleParsingBenchmark.ourToDoubleNegative                         thrpt   10  512.618 ±  8.580  ops/us
DoubleParsingBenchmark.ourToDoubleNegativeAccumulateInToDouble     thrpt   10  565.043 ± 18.137  ops/us
DoubleParsingBenchmark.ourToDoubleNegativeDivideInsteadOfMultiply  thrpt   10  511.228 ± 13.967  ops/us
DoubleParsingBenchmark.stringToDouble                              thrpt   10   52.310 ±  1.351  ops/us
DoubleParsingBenchmark.stringToDoubleNegative                      thrpt   10   50.785 ±  1.252  ops/us
```
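The benchmarked parsers themselves aren't shown here, but a plausible shape for this kind of hand-rolled parser (optional sign, exactly one fractional digit, an order of magnitude faster than Double.parseDouble in the table above) is:

```java
public class CustomDoubleParse {

    // Parses "[-]digits.digit" from b[from..to) without allocating a String.
    static double toDouble(byte[] b, int from, int to) {
        int i = from;
        boolean negative = b[i] == '-';
        if (negative) i++;
        long value = 0;
        for (; i < to; i++) {
            if (b[i] != '.') value = value * 10 + (b[i] - '0'); // accumulate digits
        }
        double result = value / 10.0; // exactly one fractional digit
        return negative ? -result : result;
    }

    public static void main(String[] args) {
        byte[] line = "-12.3".getBytes();
        System.out.println(toDouble(line, 0, line.length)); // -12.3
    }
}
```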