instead of writing the result line by line, implemented random.choices to randomise multiple stations at once and write large batches to the disk; also, instead of round(), just using :.1f formatting, which is probably quicker at large scale because it's not a mathematical function
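The actual change above is in the Python measurements generator (random.choices plus ':.1f' formatting); purely as an illustration of the same choose-in-bulk, write-in-batches idea, here is a rough Java analogue with made-up station names and batch sizes:

import java.io.BufferedWriter;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.Locale;
import java.util.concurrent.ThreadLocalRandom;

public class BatchedMeasurementsSketch {
    public static void main(String[] args) throws Exception {
        List<String> stations = List.of("Hamburg", "Bulawayo", "Palembang"); // placeholder names
        int batchSize = 10_000;                                              // assumed batch size
        StringBuilder batch = new StringBuilder(batchSize * 32);
        try (BufferedWriter out = Files.newBufferedWriter(Path.of("measurements.txt"))) {
            for (int i = 1; i <= 1_000_000; i++) {
                String station = stations.get(ThreadLocalRandom.current().nextInt(stations.size()));
                double temp = ThreadLocalRandom.current().nextDouble(-99.9, 99.9);
                // format to one decimal place instead of rounding numerically
                batch.append(station).append(';')
                     .append(String.format(Locale.ROOT, "%.1f", temp)).append('\n');
                if (i % batchSize == 0) {
                    out.write(batch.toString()); // flush one large batch to disk
                    batch.setLength(0);
                }
            }
            out.write(batch.toString()); // write any remainder
        }
    }
}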
* added code
* Fixed pointer bugs
* removed my own benchmark
* added comment on how I handle hash collisions
* executed mvn clean verify
* made scripts executable & fixed rounding issues
* Fixed way of dealing with hash collisions
* changed method name sameNameBytes to isSameNameBytes
* changed script from sh to bash
* fixed chunking bug
* Fixed bug in chunking when file size is too small
* added Runtime.getRuntime().availableProcessors
* added improvements on string copying, calculation of the next Map index in case of collision & improved string comparison
* Some cleanup, fine tuning, removing unsupported options; added credits section and additional comments.
* Put license header year back to 2023 to pass checks.
* Remove static linking (as it requires some more setup on the target
machine).
- split big regions into smaller shared tasks, so workers that complete their own tasks can pick up remaining ones instead of leaving their cores idle (see the sketch below)
- reduce number of executed instructions in the hot path
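A minimal sketch of that shared smaller-tasks idea, with illustrative names only; the real chunk planning and processing live elsewhere. Workers claim chunks from a shared atomic counter, so a thread that finishes early simply takes the next unclaimed chunk.

import java.util.concurrent.atomic.AtomicInteger;

class SharedChunkQueue {
    private final long[] chunkOffsets;           // assumed precomputed chunk start offsets
    private final AtomicInteger next = new AtomicInteger();

    SharedChunkQueue(long[] chunkOffsets) {
        this.chunkOffsets = chunkOffsets;
    }

    /** Returns the next unclaimed chunk offset, or -1 when all chunks are taken. */
    long claim() {
        int idx = next.getAndIncrement();
        return idx < chunkOffsets.length ? chunkOffsets[idx] : -1;
    }
}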
/**
* Solution based on thomaswue solution, commit:
* commit d0a28599c2
* Author: Thomas Wuerthinger
* Date: Sun Jan 21 20:13:48 2024 +0100
*
* The goal here was to try to improve the runtime of his 10k
* solution of: 00:04.516
*
* With Thomas' latest changes, his time is probably much better
* already, and maybe even 1st place for the 10k too.
* See: https://github.com/gunnarmorling/1brc/pull/606
*
* But as I was already coding something, I'll submit just to
* see if it will be faster than his *previous* 10k time of
* 00:04.516
*
* Changes:
* It's a similar idea to my previous solution: if you split
* the chunks evenly, some threads might finish much faster and
* stay idle, so:
* 1) Create more chunks than threads, so the ones that finish first
* can do something;
* 2) Decrease chunk sizes as we get closer to the end of the file.
*/
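A small sketch of the split described in the header above, under assumed constants (minimum chunk size, shrink factor); note that the real code also has to align chunk boundaries to newline characters, which is omitted here.

import java.util.ArrayList;
import java.util.List;

class ChunkPlanSketch {
    // Cut the file into chunks whose size shrinks as the remaining bytes shrink,
    // so the tail of the file never leaves one big piece for a single core.
    static long[] planChunkSizes(long fileSize, int threads) {
        long minChunk = 1L << 20;                // 1 MiB floor, an assumed value
        List<Long> sizes = new ArrayList<>();
        long remaining = fileSize;
        while (remaining > 0) {
            long size = Math.max(minChunk, remaining / (threads * 2L));
            size = Math.min(size, remaining);    // never overshoot the end of the file
            sizes.add(size);
            remaining -= size;
        }
        return sizes.stream().mapToLong(Long::longValue).toArray();
    }
}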
* CalculateAverage_pdrakatos
* Rename to be valid with rules
* CalculateAverage_pdrakatos
* Rename to be valid with rules
* Changes on scripts execution
* Fixing bugs causing scripts not to be executed
* Changes on prepare script to make it compatible
* Fixing passing all tests
* Increase direct memory allocation buffer
* Fixing memory problem causing heap space exception
* Fresh solution to optimize performance of the execution
* Solution without unsafe
* Solution without unsafe
* Solution without unsafe, remove the usage of bytebuffer, passes the create_measurements3 test
* bug fix for the 10k test; also update CreateMeasurements3.java to use '\n' as the newline instead of the OS value (on Windows it uses CRLF and "breaks" the file format)
* new version that should perform way better than the previous one
* removed prepare script for giovannicuccu
* removed some comments
---------
Co-authored-by: Giovanni Cuccu <gcuccu@imolainformatica.it>
* improve speed, thanks to the following improvements:
- loop unrolling and eliminating extra calculations
- eliminating instance-level variable access
- quicker equals check, comparing long-by-long chunks instead of single bytes (see the sketch after this list)
- update GraalVM version to the latest
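A safe-Java sketch of the long-by-long equals check mentioned above; the actual solution may read the longs differently (for example via Unsafe), so this is only the shape of the idea.

import java.nio.ByteBuffer;

class LongwiseEquals {
    // Compare two station names eight bytes at a time, falling back to
    // single-byte comparison only for the tail.
    static boolean equalsLongwise(byte[] a, byte[] b) {
        if (a.length != b.length) {
            return false;
        }
        ByteBuffer wa = ByteBuffer.wrap(a);
        ByteBuffer wb = ByteBuffer.wrap(b);
        int i = 0;
        for (; i + Long.BYTES <= a.length; i += Long.BYTES) {
            if (wa.getLong(i) != wb.getLong(i)) {
                return false;
            }
        }
        for (; i < a.length; i++) {
            if (a[i] != b[i]) {
                return false;
            }
        }
        return true;
    }
}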
* faster equals check
* fix equals bug in 10K, more optimizations on equals and calculate hash parts
* New solution optimized for Linux/AMD hardware
* Optimize solution, try to fix 10K bug on native
* Optimize solution, move records to a local field
* test timing
* revert back accidentally pushed code
---------
Co-authored-by: Yavuz Tas <yavuz.tas@ing.com>
* calculate_average_mtopolnik
* short hash (just first 8 bytes of name)
* Remove unneeded checks
* Remove archiving classes
* 2x larger hashtable
* Add "set" to setters
* Simplify parsing temperature, remove newline search
* Reduce the size of the name slot
* Store name length and use to detect collision
* Reduce memory loads in parseTemperature
* Use short for min/max
* Extract constant for semicolon
* Fix script header
* Explicit bash shell in shebang
* Inline usage of broadcast semicolon
* Try vectorization
* Remove vectorization
* Go Unsafe
* Use SWAR temperature parsing by merykitty
* Inline some things
* Remove commented-out MemorySegment usage
* Inline namesMem.asSlice() invocation
* Try out JVM JIT flags
* Implement strcmp
* Remove unused instance variables
* Optimize hashing
* Put station name into hashtable
* Reorder method
* Remove usage of MemorySegment.getUtf8String
Replace with UNSAFE.copyMemory() and new String()
* Fix hashing bug
* Remove outdated comments
* Fix informative constants
* Use broadcastByte() more
* Improve method naming
* More hashing
* Revert more hashing
* Add commented-out code to hash 16 bytes
* Slight cleanup
* Align hashtable at cacheline boundary
* Add Graal Native image
* Revert Graal Native image
This reverts commit d916a42326d89bd1a841bbbecfae185adb8679d7.
* Simplify shell script (no SDK selection)
* Move a constant, zero out hashtable on start
* Better name comparison
* Add prepare_mtopolnik.sh
* Cleaner idiom in name comparison
* AND instead of MOD for hashtable indexing
* Improve word masking code
* Fix formatting
* Reduce memory loads
* Remove endianness checks
* Avoid hash == 0 problem
* Fix subtle bug
* MergeSort of parallel results
* Touch up perf
* Touch up perf
* Remove -Xmx256m
* Extract result printing method
* Print allocation details on OOME
* Single mmap
* Use global allocation arena
* Add commented-out Xmx64m XXMaxDirectMemorySize=1g
* withinSafeZone
* Update cursor earlier
* Better assert
* Fix bug in addrOfSemicolonSafe
* Move declaration lower
* Simplify code
* Add rounding error test case
* Fix DANGER_ZONE_LEN
* Deoptimize parseTemperatureSimple()
* Inline parseTemperatureAndAdvanceCursor()
* Skip masking until the last load
* Conditionally fetch name words
* Cleanup
* Use native image
* Use subprocess
* Simpler code
* Cleanup
* Avoid extra condition on hot path
- faster merge by ignoring empty entries in the map (see the sketch after this list)
- enable CDS for faster startup (added `prepare_serkan-ozal.sh` to generate CDS archive in advance)
- some tweaks with JVM options
- optimized result printing
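A rough sketch of the "ignore empty entries" merge, with hypothetical types; the worker-side table layout is assumed here, not taken from the actual solution.

import java.util.Map;

class MergeSketch {
    record Agg(long count, long sum, int min, int max) {
        Agg combine(Agg o) {
            return new Agg(count + o.count, sum + o.sum,
                    Math.min(min, o.min), Math.max(max, o.max));
        }
    }

    // Each worker keeps an open-addressing table; unused slots stay null,
    // so the merge walks the array and skips them outright.
    static void mergeInto(Map<String, Agg> global, String[] names, Agg[] workerTable) {
        for (int i = 0; i < workerTable.length; i++) {
            if (workerTable[i] == null) {
                continue; // empty slot, nothing to merge
            }
            global.merge(names[i], workerTable[i], Agg::combine);
        }
    }
}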
* Read file with multiple virtual threads and process chunks of file data in parallel.
* Updated logic to bucket every chunk of aggs into a vector and merge them into a TreeMap for printing.
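A condensed sketch of that flow, with processChunk left as a placeholder and all names illustrative: each chunk is aggregated on its own virtual thread, the per-chunk maps are collected, and everything is merged into a TreeMap so stations come out sorted for printing.

import java.util.Map;
import java.util.TreeMap;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Executors;

class VirtualThreadMergeSketch {
    // Placeholder: aggregate one file chunk into its own map of
    // station -> {min, max, sum, count}; the real chunk processing is elsewhere.
    static Map<String, double[]> processChunk(long offset, long length) {
        return Map.of();
    }

    static Map<String, double[]> run(long[][] chunks) {
        var perChunk = new ConcurrentLinkedQueue<Map<String, double[]>>();
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (long[] c : chunks) {
                executor.submit(() -> { perChunk.add(processChunk(c[0], c[1])); });
            }
        } // close() waits for all submitted tasks to finish

        var merged = new TreeMap<String, double[]>();
        for (Map<String, double[]> m : perChunk) {
            m.forEach((station, s) -> merged.merge(station, s, (a, b) -> new double[]{
                    Math.min(a[0], b[0]), Math.max(a[1], b[1]), a[2] + b[2], a[3] + b[3]
            }));
        }
        return merged;
    }
}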
* Virtual Thread / File Channels Impl.
* Renamed files with GHUsername.
* Added statement to get vals before updating.
* Added executable permission to the files.
* Latest snapshot (#1)
preparing initial version
* Improved performance to 20 seconds (-9 seconds from the previous version) (#2)
improved performance a bit
* Improved performance to 14 seconds (-6 seconds) (#3)
improved performance to 14 seconds
* sync branches (#4)
* initial commit
* some refactoring of methods
* some fixes for partitioning
* some fixes for partitioning
* fixed hacky getcode for utf8 bytes
* simplified getcode for partitioning
* temp solution with syncing
* temp solution with syncing
* new stream processing
* new stream processing
* some improvements
* cleaned stuff
* run configuration
* round buffer for the stream to pages
* not using compute() since it's slower than a straightforward get/put; using own byte array equals (see the sketch after this list)
* using parallel gc
* avoid copying bytes when creating a station object
* formatting
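A hedged sketch of the get/put-instead-of-compute point from the list above, with a simplified stats holder; one lookup with get(), and put() only on a miss, avoiding the capturing lambda that compute() would need on the hot path.

import java.util.HashMap;

class GetPutSketch {
    static final class Stats {
        long count;
        double sum;
        double min = Double.POSITIVE_INFINITY;
        double max = Double.NEGATIVE_INFINITY;
    }

    static void record(HashMap<String, Stats> map, String station, double value) {
        Stats s = map.get(station);
        if (s == null) {
            s = new Stats();
            map.put(station, s); // only on first sight of this station
        }
        s.count++;
        s.sum += value;
        s.min = Math.min(s.min, value);
        s.max = Math.max(s.max, value);
    }
}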
* Copy less arrays. Improved performance to 12.7 seconds (-2 seconds) (#5)
* some tuning to increase performance
* some tuning to increase performance
* avoid copying data; fast hashCode with slightly more collisions
* avoid copying data; fast hashCode with slightly more collisions
* cleanup (#6)
* tidy up