* Initial impl
* Fix bad file descriptor error in the `calculate_average_serkan-ozal.sh`
* Disable Epsilon GC and rely on the default GC, because apparently JIT and Epsilon GC don't play well together on the eval machine for short-lived Vector API `ByteVector` objects
* Take care of byte order before processing key length with bit shift operators
* Fix key equality check for long keys
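The byte-order and long-key-equality points above both come down to loading the tail of a key as one long and trimming it with shifts, where the shift direction depends on endianness. A minimal sketch (the helper name `maskTail` is mine, not the submission's):

```java
import java.nio.ByteOrder;

public class TailMask {
    // Keep only the first `len` bytes of a word read in native byte order,
    // zeroing the rest before hashing/comparison. On little-endian machines
    // the first bytes in memory are the low bits of the long; on big-endian
    // they are the high bits, so the mask must flip.
    static long maskTail(long word, int len) {
        long mask = len == 8 ? -1L : (1L << (len * 8)) - 1; // low `len` bytes
        if (ByteOrder.nativeOrder() == ByteOrder.BIG_ENDIAN && len != 8) {
            mask <<= (8 - len) * 8; // first bytes sit in the high bits instead
        }
        return word & mask;
    }
}
```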
/**
* Solution based on thomaswue solution, commit:
* commit d0a28599c2
* Author: Thomas Wuerthinger <thomas.wuerthinger@oracle.com>
* Date: Sun Jan 21 20:13:48 2024 +0100
*
* Changes:
* 1) Use LinkedBlockingQueue to store partial results, that
* will then be merged into the final map later.
* As different chunks finish at different times, this allows
* processing them as they finish, instead of joining the
* threads sequentially.
* This change seems more useful for the 10k dataset, as the
* runtime difference of each chunk is greater.
* 2) Use only 4 threads if the file is >= 14GB.
* This showed much better results on my local test, but I only
* ran with 200 million rows (because of limited RAM), and I have
* no idea how it will perform on the 1brc HW.
*/
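Change (1) above can be sketched as follows; the class and variable names are illustrative, not the submission's actual ones:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.*;

public class MergeAsFinished {
    // Workers drop their partial maps into a LinkedBlockingQueue and the
    // main thread merges them in completion order, instead of joining the
    // worker threads one by one in submission order.
    static Map<String, Long> process(List<Callable<Map<String, Long>>> chunks) {
        BlockingQueue<Map<String, Long>> results = new LinkedBlockingQueue<>();
        Map<String, Long> merged = new HashMap<>();
        try (ExecutorService pool = Executors.newFixedThreadPool(4)) {
            for (Callable<Map<String, Long>> chunk : chunks) {
                pool.submit(() -> results.add(chunk.call()));
            }
            for (int i = 0; i < chunks.size(); i++) {
                // take() returns whichever chunk happened to finish first
                results.take().forEach((k, v) -> merged.merge(k, v, Long::sum));
            }
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return merged;
    }
}
```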
* fix test rounding, pass 10K station names
* improved integer conversion, delayed string creation.
* new hash algorithm, use ConcurrentHashMap
* fix rounding test
* added the length of the string in the hash initialization.
* fix hash code collisions
* cleanup prepare script
* native image options
* fix quadratic probing (no change to perf)
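For context on the collision and probing fixes above, a minimal open-addressing table with triangular (quadratic) probing and an equality double-check on hash hits might look like this; it is an illustrative sketch, not the submission's code:

```java
public class QuadraticProbe {
    // Table size must be a power of two so the triangular-number probe
    // sequence (offsets 1, 3, 6, 10, ...) visits every slot exactly once.
    final String[] keys = new String[16];
    final int[] values = new int[16];

    void put(String key, int value) {
        int mask = keys.length - 1;
        int slot = key.hashCode() & mask;
        // Probe until an empty slot or the same key; the equals() check is
        // what guards against two keys landing on the same hash.
        for (int step = 1; keys[slot] != null && !keys[slot].equals(key); step++) {
            slot = (slot + step) & mask;
        }
        keys[slot] = key;
        values[slot] = value;
    }

    int get(String key) {
        int mask = keys.length - 1;
        int slot = key.hashCode() & mask;
        for (int step = 1; keys[slot] != null; step++) {
            if (keys[slot].equals(key)) return values[slot];
            slot = (slot + step) & mask;
        }
        return -1; // not present
    }
}
```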
* mask to get the last chunk of the name
* extract hash functions
* tweak the probing loop (-100ms)
* fiddle with native image options
* Reorder conditions in hope it makes branch predictor happier
* extracted constant
* Improve hash function
* remove limit on number of cores
* fix calculation of boundaries between chunks
* fix IOOBE
---------
Co-authored-by: Jason Nochlin <hundredwatt@users.noreply.github.com>
* Contribution by albertoventurini
* Shave off a couple hundred milliseconds by making an assumption on temperature readings
* Parse reading without loop, inspired by other solutions
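One common shape of such a loop-free reading parse, assuming the challenge's guaranteed `-?d?d.d` format with exactly one decimal digit (a sketch, not this submission's exact code):

```java
public class ParseTemp {
    // Parse a reading like "12.3" or "-5.7" into tenths of a degree.
    // `end` is exclusive. No loop: the two possible lengths ("d.d" and
    // "dd.d") are handled by a single branch instead of digit-by-digit.
    static int parseTenths(byte[] b, int off, int end) {
        int sign = 1;
        if (b[off] == '-') { sign = -1; off++; }
        int value;
        if (end - off == 3) {             // "d.d"
            value = (b[off] - '0') * 10 + (b[off + 2] - '0');
        } else {                          // "dd.d"
            value = (b[off] - '0') * 100 + (b[off + 1] - '0') * 10 + (b[off + 3] - '0');
        }
        return sign * value;
    }
}
```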
* Use all cores
* Small improvements, only allocate 247 positions instead of 256
---------
Co-authored-by: Alberto Venturini <alberto.venturini@accso.de>
* Update with Rounding Bugfix
* Simplification of Merging Results
* More Plain Java Code for Value Storage
* Improve Performance by Stupid Hash
Drop around 3 seconds on my machine by
simplifying the hash to be ridiculously stupid,
but faster.
* Fix outdated comment
* Dmitry challenge
* Dmitry submit 2.
Use MemorySegment from FileChannel and Unsafe
to read bytes from disk. 4 seconds speedup in local test
from 20s to 16s.
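The mapping step referenced above can be sketched with the long-stable MappedByteBuffer API; the submission reads via MemorySegment/Unsafe, for which the bounds-checked `get()` below stands in:

```java
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MapFile {
    // Map the whole file (or, in the real solutions, a per-thread slice)
    // into memory once, then read it without per-call I/O. Summing bytes is
    // just a stand-in workload for the per-byte processing loop.
    static long sumBytes(Path path) {
        try (FileChannel ch = FileChannel.open(path, StandardOpenOption.READ)) {
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
            long sum = 0;
            while (buf.hasRemaining()) {
                sum += buf.get();
            }
            return sum;
        } catch (java.io.IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```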
* tonivade: improved by not using HashMap
* use java 21.0.2
* same hash same station
* remove unused parameter in sameSation
* use length too
* refactor parallelization
* use parallel GC
* refactor
* refactor
1. Use Unsafe
2. Fit hashtable in L2 cache.
3. If we can find a good hash function, it can fit in L1 cache even.
4. Improve temperature parsing by using a lookup table
* Go implementation by AlexanderYastrebov
This is a proof-of-concept to demonstrate non-java submission.
It requires Docker with BuildKit plugin to build and export binary.
Updates
* #67
* #253
* Use collision-free id lookup
* Use number of buckets greater than max number of keys
* Init Push
* organize imports
* Add OpenMap
* Best outcome yet
* Create prepare script and calculate script for native image, also add comments on calculation
* Remove extra hashing, and need for the set array
* Commit formatting changes from build
* Remove unneeded device information
* Make shell scripts executable, add hash collision double check for equality
* Add hash collision double check for equality
* Skip multithreading for small files to improve small file performance
* final commit
changed to using MappedByteBuffer
changes before using unsafe address
using Unsafe
* using GraalVM, correct unsafe mem implementation
---------
Co-authored-by: Karthikeyans <karthikeyan.sn@zohocorp.com>
* Inline parsing name and station to avoid constantly updating the offset field (-100ms)
* Remove Worker class, inline the logic into lambda
* Accumulate results in an int matrix instead of using result row (-50ms)
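The accumulate-in-an-int-matrix change might look like the following sketch; the row layout ([min, max, count, sum]) and all names are my assumptions:

```java
public class IntMatrixAcc {
    // Per-station stats live in a flat int matrix instead of per-station
    // result objects, so updates are plain array stores with no allocation.
    static final int MIN = 0, MAX = 1, CNT = 2, SUM = 3;

    // `tenths` is the reading in tenths of a degree; the sum fits an int
    // for this sketch (real solutions may need a long for a billion rows).
    static void add(int[][] stats, int station, int tenths) {
        int[] row = stats[station];
        row[MIN] = Math.min(row[MIN], tenths);
        row[MAX] = Math.max(row[MAX], tenths);
        row[CNT]++;
        row[SUM] += tenths;
    }
}
```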
* Use native image
* Deploy v2 for parkertimmins
Main changes:
- fix hash which masked incorrectly
- do station equality check in simd
- make station array length multiple of 32
- search for newline rather than semicolon
* Fix bug - entries were being skipped between batches
At the boundary between two batches, the first batch would stop after
crossing a limit with a padding of 200 characters applied. The next
batch should then start looking for the first full entry after the
padding. This padding logic had been removed when starting a batch. For
this reason, entries starting in the 200 character padding between
batches were skipped.
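The fix described above amounts to each batch after the first skipping forward to the byte just past the next newline, so an entry straddling the boundary is processed exactly once (by the previous batch, which is allowed to read past its limit). A sketch, with a hypothetical helper name:

```java
public class ChunkStart {
    // Return the offset of the first full entry at or after `chunkStart`.
    // The first chunk starts at 0; every other chunk skips the partial
    // entry the previous chunk already consumed.
    static int firstEntryOffset(byte[] data, int chunkStart) {
        if (chunkStart == 0) return 0;
        int i = chunkStart;
        while (i < data.length && data[i - 1] != '\n') i++;
        return i;
    }
}
```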
* fast-path for keys<16 bytes
* fix off by one error
the mask is wrong for the 2nd word when len == 16
* fewer chunks per thread
seems like compact code wins, on my test box anyway.
* Some clean up, small-scale tuning, and reduce complexity when handling longer names.
* Do actual work in worker subprocess. Main process returns immediately
and OS clean up of the mmap continues in the subprocess.
* Update minor Graal version after CPU release.
* Turn GC back to epsilon GC (although it does not seem to make a
difference).
* Minor tuning for another +1%.
- Avoids creating unnecessary String objects and handles station names via their djb2 hashes instead
- Initializes hashmaps with capacity and load factor
- Adds -XX:+AlwaysPreTouch
* Version 3
* Use SWAR algorithm from netty for finding a symbol in a string
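The SWAR byte-search mentioned above (the same trick Netty uses, from the Hacker's Delight zero-byte detector) can be written as:

```java
public class Swar {
    // Find the index (0-7, counted from the least significant byte) of the
    // first occurrence of `target` in a long, or -1 if absent. XOR zeroes
    // the matching byte; (x - 0x0101..) & ~x & 0x8080.. then sets the high
    // bit of every zero byte, with no false positives.
    static int firstInstance(long word, byte target) {
        long pattern = (target & 0xFFL) * 0x0101010101010101L;
        long x = word ^ pattern;
        long tmp = (x - 0x0101010101010101L) & ~x & 0x8080808080808080L;
        // On a little-endian load the LSB is the first byte in memory.
        return tmp == 0 ? -1 : Long.numberOfTrailingZeros(tmp) >>> 3;
    }
}
```

This checks eight bytes per iteration instead of one, which is why the semicolon/newline scans in these solutions are so much faster than a byte loop.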
* Faster equals - store the remainder in a long field (- 0.5s)
* optimise parsing numbers - prep
* Keep tweaking parsing logic
* Rewrote number parsing
maybe a tiny bit faster, if at all
* Epsilon GC
* 1brc challenge, but one that will run on JDK 8 without Unsafe and still do reasonably well.
* Better hashtable
* the fastest GC is no GC
* cleanups
* increased hash size
* removed Playground.java
* collision-handling allocation free hashmap
* formatting
On automatic closing of ByteBuffers: previously, a straggler could hold
up closing the ByteBuffers.
Also
- Improve Tracing code
- Parametrize additional options to aid in tuning
Our previous PR was surprising; parallelizing munmap() call did not
yield anywhere near the performance gain I expected. Local machine had
10% gain while testing machine only showed 2% gain. I am still not clear
why it happened and the two best theories I have are
1) Variance due to stragglers (that this change addresses)
2) munmap() is either too fast or too slow relative to the other
instructions compared to our local machine. I don't know which. We'll
have to use adaptive tuning, but that's in a different change.
* Version 3
* trying to optimize memory access (-0.2s)
- use smaller segments confined to thread
- unload in parallel
* Only call MemorySegment.address() once (~200ms)
* Squashing a bunch of commits together.
Commit #2: 7% uplift from using native byte order with ByteBuffer.
Commit #1: Minor changes to formatting.
* Commit #4: Parallelize munmap() and reduce completion time further by
10%. As the jvm exits with exit(0) syscall, the kernel reclaims the
memory mappings via munmap(). Prior to this change, all the munmap()
calls were happening right at the end as the JVM exited. This led to
serial execution of about 350ms out of 2500 ms right at the end after
each shard completed its work. We can parallelize it by exposing the
Cleaner from MappedByteBuffer and then ensure that it is truly parallel
execution of munmap() by using a non-blocking lock (SeqLock). The
optimal strategy for when each thread must call unmap() is an interesting
math problem with an exact solution, and this code roughly reflects it.
Commit #3: Tried out reading long at a time from bytebuffer and
checking for presence of ';'.. it was slower compared to just reading int().
Removed the code for reading longs; just retaining the
hasSemicolonByte(..) check code
Commit #2: Introduce processLineSlow() and processRangeSlow() for the
tail part.
Commit #1: Create a separate tail piece of work for the last few lines to be
processed separately from the main loop. This allows the main loop to
read past its allocated range (by a 'long' if we reserve at least 8 bytes
for the tail piece of work.)
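Commit #4's eager-unmap idea can be sketched with the unsupported sun.misc.Unsafe.invokeCleaner hook; this is an assumption about the mechanism (the SeqLock coordination from the commit message is omitted), not the submission's exact code:

```java
import java.lang.reflect.Field;
import java.lang.reflect.Method;
import java.nio.ByteBuffer;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class Unmap {
    // Rather than letting every munmap() run serially as the JVM exits,
    // each worker frees its own mapping as soon as it finishes, via the
    // unsupported sun.misc.Unsafe.invokeCleaner entry point.
    static void unmap(MappedByteBuffer buffer) {
        try {
            Field f = Class.forName("sun.misc.Unsafe").getDeclaredField("theUnsafe");
            f.setAccessible(true);
            Object unsafe = f.get(null);
            Method cleaner = unsafe.getClass().getMethod("invokeCleaner", ByteBuffer.class);
            cleaner.invoke(unsafe, buffer);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    // Self-contained demo: map a one-byte file, read it, then unmap eagerly.
    static boolean demo() {
        try {
            Path p = Files.createTempFile("unmap", ".bin");
            Files.write(p, new byte[]{ 42 });
            try (FileChannel ch = FileChannel.open(p, StandardOpenOption.READ)) {
                MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, 0, 1);
                boolean ok = buf.get(0) == 42;
                unmap(buf); // the mapping is released now, not at process exit
                return ok;
            }
        } catch (java.io.IOException e) {
            return false;
        }
    }
}
```

The buffer must not be touched after `unmap()`; doing so is undefined behavior, which is why the real change needed careful coordination between threads.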
* Golang implementation
* Speed up by avoiding copying the lines
* Memory mapping
* Add script for testing
* Now passing most of the tests
* Refactor to composed method
* Now using integer math throughout
* Now using a state machine for parsing!
* Refactoring state names
* Enabling profiling
* Running in parallel!
* Fully parallel!
* Refactor
* Improve type safety of methods
* The rounding problem is due to a difference between Java's and Go's printf implementations
* Converting my solution to Java
* Merging results
* Splitting the file in several buffers
* Made it parallel!
* Removed test file
* Removed go implementation
* Removed unused files
* Add header to .sh file
---------
Co-authored-by: Matteo Vaccari <mvaccari@thoughtworks.com>
* Modify baseline version to improve performance
- Consume and process stream in parallel with memory map buffers, parsing it directly
- Use int instead of float/double to store values
- Use Epsilon GC and graal
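Epsilon GC (no collection at all, per "the fastest GC is no GC" elsewhere in this log) is enabled via experimental HotSpot flags. A hypothetical launcher sketch; the flags are real options, but the exact set and class name in the submission's calculate_average_*.sh may differ:

```shell
#!/bin/sh
# -XX:+UseEpsilonGC: allocate-only GC, never collects (needs the unlock flag).
# -XX:+AlwaysPreTouch: commit heap pages up front instead of on first touch.
JAVA_OPTS="-XX:+UnlockExperimentalVMOptions -XX:+UseEpsilonGC -XX:+AlwaysPreTouch"
java $JAVA_OPTS CalculateAverage_adriacabeza
```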
* Update src/main/java/dev/morling/onebrc/CalculateAverage_adriacabeza.java
* Update calculate_average_adriacabeza.sh
---------
Co-authored-by: Gunnar Morling <gunnar.morling@googlemail.com>
* - Read file in multiple threads if available: 17" -> 15" locally
- Changed String to BytesText with cache: 12" locally
* - Fixed bug
- BytesText to Text
- More checks when reading the file
* - Combining measurements should be thread safe
- More readability changes