* Solution without unsafe
* Solution without unsafe
* Solution without unsafe, remove the usage of bytebuffer, passes the create_measurements3 test
* bug fix for the 10k test; also update CreateMeasurements3.java to use '\n' as the newline instead of the OS value (if it runs on Windows it uses CRLF and "breaks" the file format)
* new version that should perform way better than the previous one
* removed prepare script for giovannicuccu
* removed some comments
---------
Co-authored-by: Giovanni Cuccu <gcuccu@imolainformatica.it>
* improve speed, thanks to the following improvements:
- loop unrolling and eliminating extra calculations
- eliminating instance-level variable access
- quicker equals check, comparing long-by-long chunks instead of byte by byte (see the sketch after this list)
- update GraalVM version to the latest
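A minimal sketch of that long-by-long comparison (hypothetical helper, not the submission's actual code; assumes on-heap byte arrays):

```java
import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;
import java.nio.ByteOrder;

class NameCompare {
    private static final VarHandle LONGS =
            MethodHandles.byteArrayViewVarHandle(long[].class, ByteOrder.LITTLE_ENDIAN);

    // Compares len bytes of a and b, 8 bytes per step, falling back to a
    // byte loop only for the 0-7 trailing bytes.
    static boolean equalsLongwise(byte[] a, byte[] b, int len) {
        int i = 0;
        for (; i + Long.BYTES <= len; i += Long.BYTES) {
            if ((long) LONGS.get(a, i) != (long) LONGS.get(b, i)) {
                return false;
            }
        }
        for (; i < len; i++) {
            if (a[i] != b[i]) {
                return false;
            }
        }
        return true;
    }
}
```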
* faster equals check
* fix equals bug in 10K, more optimizations in equals and hash calculation
* New solution optimized for Linux/AMD hardware
* Optimize solution, try to fix 10K bug on native
* Optimize solution, move records to a local field
* test timing
* revert back accidentally pushed code
---------
Co-authored-by: Yavuz Tas <yavuz.tas@ing.com>
* calculate_average_mtopolnik
* short hash (just first 8 bytes of name)
* Remove unneeded checks
* Remove archiving classes
* 2x larger hashtable
* Add "set" to setters
* Simplify parsing temperature, remove newline search
* Reduce the size of the name slot
* Store name length and use to detect collision
* Reduce memory loads in parseTemperature
* Use short for min/max
* Extract constant for semicolon
* Fix script header
* Explicit bash shell in shebang
* Inline usage of broadcast semicolon
* Try vectorization
* Remove vectorization
* Go Unsafe
* Use SWAR temperature parsing by merykitty
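The SWAR technique referenced here parses a reading such as `-12.3` from a single little-endian long, branch-free. Roughly, as merykitty published it (a reconstructed sketch; the submission's surrounding code differs):

```java
// Returns the temperature in tenths of a degree (e.g. "-12.3" -> -123),
// given the 8 bytes at the cursor as one little-endian long.
static int parseTemperatureSwar(long word) {
    // Digit bytes (0x30-0x39) have bit 4 set; '.' (0x2E) does not, so the
    // lowest set bit of ~word & DOT_BITS marks the decimal point.
    final long DOT_BITS = 0x10101000L;
    final long MAGIC_MULTIPLIER = 100 * 0x1000000L + 10 * 0x10000L + 1;
    int decimalSepPos = Long.numberOfTrailingZeros(~word & DOT_BITS);
    long signed = (~word << 59) >> 63;      // all ones iff the first byte is '-'
    long designMask = ~(signed & 0xFF);     // zeroes out the '-' byte if present
    // Align the digits into fixed lanes and keep only their low nibbles.
    long digits = ((word & designMask) << (28 - decimalSepPos)) & 0x0F000F0F00L;
    // One multiply sums hundreds*100 + tens*10 + ones into bits 32..41.
    long absValue = ((digits * MAGIC_MULTIPLIER) >>> 32) & 0x3FF;
    return (int) ((absValue ^ signed) - signed); // conditional negation
}
```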
* Inline some things
* Remove commented-out MemorySegment usage
* Inline namesMem.asSlice() invocation
* Try out JVM JIT flags
* Implement strcmp
* Remove unused instance variables
* Optimize hashing
* Put station name into hashtable
* Reorder method
* Remove usage of MemorySegment.getUtf8String
Replace with UNSAFE.copyMemory() and new String()
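A hedged sketch of that replacement (the Unsafe bootstrap is the usual reflection idiom; names are illustrative):

```java
import java.lang.reflect.Field;
import java.nio.charset.StandardCharsets;
import sun.misc.Unsafe;

class Names {
    private static final Unsafe UNSAFE = loadUnsafe();

    private static Unsafe loadUnsafe() {
        try {
            Field f = Unsafe.class.getDeclaredField("theUnsafe");
            f.setAccessible(true);
            return (Unsafe) f.get(null);
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    // Materialize a String from len UTF-8 bytes at a native address.
    static String stringAt(long address, int len) {
        byte[] bytes = new byte[len];
        UNSAFE.copyMemory(null, address, bytes, Unsafe.ARRAY_BYTE_BASE_OFFSET, len);
        return new String(bytes, StandardCharsets.UTF_8);
    }
}
```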
* Fix hashing bug
* Remove outdated comments
* Fix informative constants
* Use broadcastByte() more
* Improve method naming
* More hashing
* Revert more hashing
* Add commented-out code to hash 16 bytes
* Slight cleanup
* Align hashtable at cacheline boundary
* Add Graal Native image
* Revert Graal Native image
This reverts commit d916a42326d89bd1a841bbbecfae185adb8679d7.
* Simplify shell script (no SDK selection)
* Move a constant, zero out hashtable on start
* Better name comparison
* Add prepare_mtopolnik.sh
* Cleaner idiom in name comparison
* AND instead of MOD for hashtable indexing
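The trick in question, as a minimal sketch (the actual table size in the submission may differ):

```java
class Slots {
    // With a power-of-two table size, the reduction to a slot index is a
    // single AND instead of an integer division, and never goes negative.
    static final int TABLE_SIZE = 1 << 17;        // must remain a power of two
    static final int TABLE_MASK = TABLE_SIZE - 1;

    static int slotOf(long hash) {
        return (int) (hash & TABLE_MASK);
    }
}
```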
* Improve word masking code
* Fix formatting
* Reduce memory loads
* Remove endianness checks
* Avoid hash == 0 problem
* Fix subtle bug
* MergeSort of parallel results
* Touch up perf
* Touch up perf
* Remove -Xmx256m
* Extract result printing method
* Print allocation details on OOME
* Single mmap
* Use global allocation arena
* Add commented-out -Xmx64m -XX:MaxDirectMemorySize=1g
* withinSafeZone
* Update cursor earlier
* Better assert
* Fix bug in addrOfSemicolonSafe
* Move declaration lower
* Simplify code
* Add rounding error test case
* Fix DANGER_ZONE_LEN
* Deoptimize parseTemperatureSimple()
* Inline parseTemperatureAndAdvanceCursor()
* Skip masking until the last load
* Conditionally fetch name words
* Cleanup
* Use native image
* Use subprocess
* Simpler code
* Cleanup
* Avoid extra condition on hot path
* added code
* Fixed pointers bugs
* removed my own benchmark
* added comment on how I handle hash collisions
* executed mvn clean verify
* made scripts executable & fixed rounding issues
* Fixed way of dealing with hash collisions
* changed method name sameNameBytes to isSameNameBytes
* changed script from sh to bash
* fixed chunking bug
* Fixed bug in chunking when file size is too small
* added Runtime.getRuntime().availableProcessors
- faster merge by ignoring empty entries in the map (see the sketch after this list)
- enable CDS for faster startup (added `prepare_serkan-ozal.sh` to generate CDS archive in advance)
- some tweaks with JVM options
- optimized result printing
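A sketch of the faster-merge idea above (skip each worker's empty hash slots while folding into a sorted map; `Stats` is an illustrative stand-in for the real record):

```java
import java.util.List;
import java.util.TreeMap;

record Stats(String name, int min, int max, long sum, long count) {
    static Stats combine(Stats a, Stats b) {
        return new Stats(a.name(), Math.min(a.min(), b.min()),
                Math.max(a.max(), b.max()), a.sum() + b.sum(), a.count() + b.count());
    }
}

class Merge {
    // Each worker owns a flat open-addressing table; most slots stay null.
    static TreeMap<String, Stats> merge(List<Stats[]> partials) {
        TreeMap<String, Stats> result = new TreeMap<>();
        for (Stats[] table : partials) {
            for (Stats s : table) {
                if (s == null) {
                    continue;               // empty slot: nothing to merge
                }
                result.merge(s.name(), s, Stats::combine);
            }
        }
        return result;
    }
}
```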
* Read file with multiple virtual threads and process chunks of file data in parallel.
* Updated logic to bucket every chunk of aggs into a vector and merge them into a TreeMap for printing.
* Virtual Thread / File Channels Impl.
* Renamed files with GHUsername.
* Added statement to get vals before updating.
* Added executable permission to the files.
* Latest snapshot (#1)
preparing initial version
* Improved performance to 20 seconds (-9 seconds from the previous version) (#2)
improved performance a bit
* Improved performance to 14 seconds (-6 seconds) (#3)
improved performance to 14 seconds
* sync branches (#4)
* initial commit
* some refactoring of methods
* some fixes for partitioning
* some fixes for partitioning
* fixed hacky getcode for utf8 bytes
* simplified getcode for partitioning
* temp solution with syncing
* temp solution with syncing
* new stream processing
* new stream processing
* some improvements
* cleaned stuff
* run configuration
* round buffer for the stream to pages
* not using compute since it's slower than straightforward get/put. using own byte array equals.
* using parallel gc
* avoid copying bytes when creating a station object
* formatting
* Copy less arrays. Improved performance to 12.7 seconds (-2 seconds) (#5)
* some tuning to increase performance
* avoid copying data; fast hashCode with slightly more collisions
* cleanup (#6)
* tidy up
* Initial submission for jonathan_aotearoa
* Fixing typos
* Adding hyphens to prepare and calculate shell scripts so that they're aligned with my GitHub username.
* Making chunk processing more robust in attempt to fix the cause of the build error.
* Fixing typo.
* Fixed the handling of files less than 8 bytes in length.
* Additional assertion, comment improvements.
* Refactoring to improve testability. Additional assertion and comments.
* Updating collision checking to include checking if the station name is equal.
* Minor refactoring to make param ordering consistent.
* Adding a custom toString method for the results map.
* Fixing collision checking bug
* Fixing rounding bug.
* Fixing collision bug.
---------
Co-authored-by: jonathan <jonathan@example.com>
* CalculateAverage_pdrakatos
* Rename to be valid with rules
* Changes on scripts execution
* Fixing bugs causing scripts not to be executed
* Changes on prepare make it compatible
* Fixing passing all tests
* Increase direct memory allocation buffer
* Fixing memory problem causes heap space exception
* Initial impl
* Fix bad file descriptor error in the `calculate_average_serkan-ozal.sh`
* Disable Epsilon GC and rely on the default GC, because apparently JIT and Epsilon GC don't play well together on the eval machine for short-lived Vector API `ByteVector` objects
* Take care of byte order before processing key length with bit shift operators
* Fix key equality check for long keys
/**
 * Solution based on thomaswue solution, commit:
 * commit d0a28599c2
 * Author: Thomas Wuerthinger <thomas.wuerthinger@oracle.com>
 * Date:   Sun Jan 21 20:13:48 2024 +0100
 *
 * Changes:
 * 1) Use LinkedBlockingQueue to store partial results, which
 *    will then be merged into the final map later.
 *    As different chunks finish at different times, this allows
 *    processing them as they finish, instead of joining the
 *    threads sequentially.
 *    This change seems more useful for the 10k dataset, as the
 *    runtime difference between chunks is greater.
 * 2) Use only 4 threads if the file is >= 14 GB.
 *    This showed much better results in my local test, but I only
 *    ran with 200 million rows (because of limited RAM), and I have
 *    no idea how it will perform on the 1brc HW.
 */
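A hedged sketch of change (1) above (`Stats`/`combine` as in the earlier merge sketch; worker start-up omitted):

```java
import java.util.Map;
import java.util.TreeMap;
import java.util.concurrent.LinkedBlockingQueue;

class MergeAsTheyFinish {
    // Each worker puts its partial map on the queue the moment its chunk is
    // done, so merging overlaps with the still-running (slower) chunks.
    static Map<String, Stats> merge(int chunks,
            LinkedBlockingQueue<Map<String, Stats>> queue) throws InterruptedException {
        TreeMap<String, Stats> merged = new TreeMap<>();
        for (int i = 0; i < chunks; i++) {
            Map<String, Stats> partial = queue.take();  // blocks for next finisher
            partial.forEach((name, s) -> merged.merge(name, s, Stats::combine));
        }
        return merged;
    }
}
```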
* fix test rounding, pass 10K station names
* improved integer conversion, delayed string creation.
* new hash algorithm, use ConcurrentHashMap
* fix rounding test
* added the length of the string in the hash initialization.
* fix hash code collisions
* cleanup prepare script
* native image options
* fix quadratic probing (no change to perf)
* mask to get the last chunk of the name
* extract hash functions
* tweak the probing loop (-100ms)
* fiddle with native image options
* Reorder conditions in hope it makes branch predictor happier
* extracted constant
* Improve hash function
* remove limit on number of cores
* fix calculation of boundaries between chunks
* fix IOOBE
---------
Co-authored-by: Jason Nochlin <hundredwatt@users.noreply.github.com>
* Contribution by albertoventurini
* Shave off a couple hundred milliseconds by making an assumption about temperature readings
* Parse reading without loop, inspired by other solutions
* Use all cores
* Small improvements, only allocate 247 positions instead of 256
---------
Co-authored-by: Alberto Venturini <alberto.venturini@accso.de>
* Update with Rounding Bugfix
* Simplification of Merging Results
* More Plain Java Code for Value Storage
* Improve Performance by Stupid Hash
Drop around 3 seconds on my machine by
simplifying the hash to be ridiculously stupid,
but faster.
* Fix outdated comment
* Dmitry challenge
* Dmitry submit 2.
Use MemorySegment of FileChannel and Unsafe
to read bytes from disk. 4 seconds speedup in local test,
from 20s to 16s.
* tonivade improved by not using HashMap
* use java 21.0.2
* same hash same station
* remove unused parameter in sameStation
* use length too
* refactor parallelization
* use parallel GC
* refactor
* refactor
1. Use Unsafe
2. Fit hashtable in L2 cache (see the sizing sketch after this list).
3. If we can find a good hash function, it can even fit in L1 cache.
4. Improve temperature parsing by using a lookup table
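A rough sketch of the sizing argument behind point 2 (slot count and record layout are illustrative, not the submission's actual numbers):

```java
class Table {
    // Open addressing in one flat array: a 32-byte record per slot
    // (e.g. name address, packed length/min/max, count, sum).
    static final int SLOTS = 1 << 14;         // power of two, mask-indexable
    static final int LONGS_PER_SLOT = 4;      // 4 x 8 B = 32 B per record
    static final long[] TABLE = new long[SLOTS * LONGS_PER_SLOT];
    // 16 Ki slots x 32 B = 512 KiB, small enough for a typical L2 cache.
}
```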
* Go implementation by AlexanderYastrebov
This is a proof of concept to demonstrate a non-Java submission.
It requires Docker with BuildKit plugin to build and export binary.
Updates
* #67
* #253
* Use collision-free id lookup
* Use number of buckets greater than max number of keys
* Init Push
* organize imports
* Add OpenMap
* Best outcome yet
* Create prepare script and calculate script for native image, also add comments on calculation
* Remove extra hashing, and need for the set array
* Commit formatting changes from build
* Remove unneeded device information
* Make shell scripts executable, add hash collision double check for equality
* Add hash collision double check for equality
* Skip multithreading for small files to improve small file performance
* final commit
changed to using MappedByteBuffer
changes before using unsafe address
using unsafe
* using GraalVM, correct unsafe mem implementation
---------
Co-authored-by: Karthikeyans <karthikeyan.sn@zohocorp.com>
* Inline parsing name and station to avoid constantly updating the offset field (-100ms)
* Remove Worker class, inline the logic into lambda
* Accumulate results in an int matrix instead of using result row (-50ms)
* Use native image
* Deploy v2 for parkertimmins
Main changes:
- fix hash which masked incorrectly
- do station equality check in SIMD
- make station array length multiple of 32
- search for newline rather than semicolon
* Fix bug - entries were being skipped between batches
At the boundary between two batches, the first batch would stop after
crossing a limit with a padding of 200 characters applied. The next
batch should then start looking for the first full entry after the
padding. This padding logic had been removed when starting a batch. For
this reason, entries starting in the 200 character padding between
batches were skipped.
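The fix amounts to deriving every batch boundary by the same rule; a minimal sketch (hypothetical helper; assumes every entry ends in '\n'):

```java
import java.nio.ByteBuffer;

class Chunking {
    // A batch nominally covers [start, end). Its true start is the first
    // byte that follows a '\n' at or past `start`; the batch also finishes
    // the entry straddling `end`. Applying the same rule on both sides means
    // no entry is skipped or processed twice.
    static int alignedStart(ByteBuffer buf, int start) {
        if (start == 0) {
            return 0;                  // the first batch starts at the file head
        }
        int pos = start;
        while (buf.get(pos - 1) != '\n') {
            pos++;                     // step past the tail of the previous entry
        }
        return pos;
    }
}
```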
* fast-path for keys < 16 bytes
* fix off-by-one error
the mask is wrong for the 2nd word when len == 16
* fewer chunks per thread
seems like compact code wins, on my test box anyway
* Some clean up, small-scale tuning, and reduce complexity when handling longer names.
* Do actual work in worker subprocess. Main process returns immediately
and OS cleanup of the mmap continues in the subprocess.
* Update minor Graal version after CPU release.
* Turn GC back to epsilon GC (although it does not seem to make a
difference).
* Minor tuning for another +1%.
- Avoids creating unnecessary String objects and handles station names via their djb2 hashes instead
- Initializes hashmaps with capacity and load factor
- Adds -XX:+AlwaysPreTouch
* Version 3
* Use SWAR algorithm from netty for finding a symbol in a string
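That routine finds the first occurrence of a byte in a little-endian long with no per-byte branches; roughly:

```java
class Swar {
    // Index (0-7) of the first ';' in the little-endian word, or 8 if none.
    // Classic SWAR zero-byte detection applied to word XOR broadcast(';').
    static int firstSemicolon(long word) {
        long input = word ^ 0x3B3B3B3B3B3B3B3BL;  // ';' broadcast to all lanes
        long tmp = (input - 0x0101010101010101L) & ~input & 0x8080808080808080L;
        return Long.numberOfTrailingZeros(tmp) >>> 3; // tzcnt(0) = 64 -> 8
    }
}
```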
* Faster equals - store the remainder in a long field (- 0.5s)
* optimise parsing numbers - prep
* Keep tweaking parsing logic
* Rewrote number parsing
may be a tiny bit faster, if at all
* Epsilon GC
* 1brc challenge entry, but one that will run on JDK 8 without Unsafe and still do reasonably well.
* Better hashtable
* the fastest GC is no GC
* cleanups
* increased hash size
* removed Playground.java
* collision-handling, allocation-free hashmap
* formatting
On automatic closing of ByteBuffers: previously, a straggler could hold
up closing the ByteBuffers.
Also:
- Improve tracing code
- Parametrize additional options to aid in tuning
Our previous PR was surprising; parallelizing munmap() call did not
yield anywhere near the performance gain I expected. Local machine had
10% gain while testing machine only showed 2% gain. I am still not clear
why it happened and the two best theories I have are
1) Variance due to stragglers (that this change addresses)
2) munmap() is either too fast or too slow relative to the other
instructions compared to our local machine. I don't know which. We'll
have to use adaptive tuning, but that's in a different change.
* Version 3
* trying to optimize memory access (-0.2s)
- use smaller segments confined to thread
- unload in parallel
* Only call MemorySegment.address() once (~200ms)
* Squashing a bunch of commits together.
Commit #2: Uplift of 7% using native byte order from ByteBuffer.
Commit #1: Minor changes to formatting.
* Commit #4: Parallelize munmap() and reduce completion time further by
10%. As the JVM exits with the exit(0) syscall, the kernel reclaims the
memory mappings via munmap(). Prior to this change, all the munmap()
calls were happening right at the end as the JVM exited. This led to
serial execution of about 350 ms out of 2500 ms right at the end after
each shard completed its work. We can parallelize it by exposing the
Cleaner from MappedByteBuffer and then ensure that it is truly parallel
execution of munmap() by using a non-blocking lock (SeqLock). The
optimal strategy for when each thread must call munmap() is an
interesting math problem with an exact solution, and this code roughly
reflects it.
Commit #3: Tried out reading a long at a time from the bytebuffer and
checking for the presence of ';' - it was slower compared to just reading int().
Removed the code for reading longs; just retaining the
hasSemicolonByte(..) check code.
Commit #2: Introduce processLineSlow() and processRangeSlow() for the
tail part.
Commit #1: Create a separate tail piece of work for the last few lines to be
processed separately from the main loop. This allows the main loop to
read past its allocated range (by a 'long', if we reserve at least 8 bytes
for the tail piece of work).
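A sketch of Commit #1's range split (method names echo the commit message; bodies here are simplified placeholders):

```java
import java.nio.ByteBuffer;

class TailSplit {
    // Lines are short (name <= 100 bytes plus ";-99.9\n"), so reserving a bit
    // more keeps every unchecked 8-byte load of the fast loop inside the range.
    static final int TAIL_RESERVE = 128;

    static void processRange(ByteBuffer buf, int start, int end, boolean lastShard) {
        // Only the final shard must stop early; earlier shards may safely
        // over-read into the next shard's bytes.
        int fastEnd = lastShard ? Math.max(start, end - TAIL_RESERVE) : end;
        int pos = start;
        while (pos < fastEnd) {
            pos = processLineFast(buf, pos);  // may load a full long past fastEnd
        }
        while (pos < end) {
            pos = processLineSlow(buf, pos);  // strictly bounds-checked tail path
        }
    }

    static int processLineFast(ByteBuffer buf, int pos) {
        return processLineSlow(buf, pos);     // stand-in; real code uses long loads
    }

    static int processLineSlow(ByteBuffer buf, int pos) {
        while (buf.get(pos) != '\n') {
            pos++;                            // placeholder: just skip the line
        }
        return pos + 1;
    }
}
```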
* Golang implementation
* Speed up by avoiding copying the lines
* Memory mapping
* Add script for testing
* Now passing most of the tests
* Refactor to composed method
* Now using integer math throughout
* Now using a state machine for parsing!
* Refactoring state names
* Enabling profiling
* Running in parallel!
* Fully parallel!
* Refactor
* Improve type safety of methods
* The rounding problem is due to a difference between Java's and Go's printf implementations
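For context: Java's `String.format("%.1f", …)` rounds half-up, while Go's `fmt` rounds ties to even, so tie cases diverge. The challenge's expected output uses roundTowardPositive semantics for ties, which in Java is:

```java
class Rounding {
    // Round to one fractional digit with ties toward positive infinity,
    // e.g. round1(-1.25) == -1.2 and round1(1.25) == 1.3.
    static double round1(double value) {
        return Math.round(value * 10.0) / 10.0;
    }
}
```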
* Converting my solution to Java
* Merging results
* Splitting the file in several buffers
* Made it parallel!
* Removed test file
* Removed go implementation
* Removed unused files
* Add header to .sh file
---------
Co-authored-by: Matteo Vaccari <mvaccari@thoughtworks.com>
* Modify baseline version to improve performance
- Consume and process stream in parallel with memory map buffers, parsing it directly
- Use int instead of float/double to store values
- Use Epsilon GC and graal
* Update src/main/java/dev/morling/onebrc/CalculateAverage_adriacabeza.java
* Update calculate_average_adriacabeza.sh
---------
Co-authored-by: Gunnar Morling <gunnar.morling@googlemail.com>
* - Read file in multiple threads if available: 17s -> 15s locally
- Changed String to BytesText with cache: 12s locally
* - Fixed bug
- BytesText to Text
- More checks when reading the file
* - Combining measurements should be thread-safe
- More readability changes
* Initial version
* Small result merge optimisation
* Switched from reading bytes to longs
* Reading into internal buffer, test fixes
* Licence and minor string creation optimisation
* Hash collision fix
* Initial commit with custom implementation, 2:40
* Initial file-channel based version, 1:27
* Individual maps for executors, 0:54
* Use better-suited map: 0:34
* Verified correct, skip CharBuffer, 0:37
* Minor improvements and cleanup, 0:24
* String to byte[], 0:21
* Additional cleanup, use GraalVM, 0:17
* Faster number handling, 0:11
* Faster buffer reading, 0:08
* Prepare for environment with variable RAM and CPU, 0:08
* Fix bug causing issues with certain buffer sizes
* Larger overhead to not miss long station names that overlap buffers
* Reorder scripts and fix off-by-one bug
Implementation that uses the Vector API for the following:
- scan for separators
- calculate hash
- n-way lookup in hash table
- parse digits
* fix queue size
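A minimal sketch of the separator scan above (requires `--add-modules jdk.incubator.vector`; species and buffer layout are illustrative):

```java
import jdk.incubator.vector.ByteVector;
import jdk.incubator.vector.VectorMask;
import jdk.incubator.vector.VectorOperators;
import jdk.incubator.vector.VectorSpecies;

class SeparatorScan {
    static final VectorSpecies<Byte> SPECIES = ByteVector.SPECIES_PREFERRED;

    // Index of the first ';' at or after `from`, or -1 if absent.
    static int findSemicolon(byte[] buf, int from) {
        int upper = from + SPECIES.loopBound(buf.length - from);
        for (int i = from; i < upper; i += SPECIES.length()) {
            ByteVector chunk = ByteVector.fromArray(SPECIES, buf, i);
            VectorMask<Byte> hits = chunk.compare(VectorOperators.EQ, (byte) ';');
            if (hits.anyTrue()) {
                return i + hits.firstTrue();
            }
        }
        for (int i = upper; i < buf.length; i++) {  // scalar tail
            if (buf[i] == ';') {
                return i;
            }
        }
        return -1;
    }
}
```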
* feat(flippingbits): Improve parsing of station names
* chore(flippingbits): Remove obsolete import
* feat(flippingbits): Use custom hash map
* feat(flippingbits): Use UNSAFE
* fix(flippingbits): Support very small files
* chore(flippingbits): Few cleanups
* chore(flippingbits): Align names
* fix(flippingbits): Initialize hash with first byte
* fix(flippingbits): Fix initialization of hash value
* Update create_measurements.py
Added license header to the python script to avoid breaking the build.
* Update src/main/python/create_measurements.py
---------
Co-authored-by: Gunnar Morling <gunnar.morling@googlemail.com>
* added python script to build test data
* moved create_measurements.py to src/main/python and updated paths for file I/O
* Updated README to include a blurb about the Python script to generate measurements