Java 8 does not properly flush buffers to disk on OSX #153

Open
jameshilliard opened this issue May 7, 2020 · 16 comments

@jameshilliard
Contributor

jameshilliard commented May 7, 2020

While investigating a performance regression in Java 9 and newer, it was discovered that Java 8 does not properly flush the buffers to disk on OSX. So we should probably see if we can fix this on Java 8 and find a workaround for the performance issue on Java 9 and newer on OSX. See #152 and this for details.
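
Roughly, the write path in question looks like the sketch below. This is not the actual binlog code, just an illustration of a FileChannel write followed by force(), the call expected to push the data to the physical device; the OSX-specific difference between Java 8 and 9+ is presumably in how force() maps to the underlying sync call.

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class ForceExample {
    public static void main(String[] args) throws IOException {
        // hypothetical path, not the binlog's real file layout
        try (FileChannel ch = FileChannel.open(Paths.get("/tmp/force-example.dat"),
                StandardOpenOption.CREATE, StandardOpenOption.WRITE, StandardOpenOption.APPEND)) {
            ch.write(ByteBuffer.wrap("record".getBytes()));
            // force(false) asks the OS to push the file's data (not metadata) to the
            // storage device; whether that actually reaches the disk on OSX/Java 8
            // is the behavior in question in this issue
            ch.force(false);
        }
    }
}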

@alcarraz
Contributor

Does anyone have this same performance problem with Java 8 on Linux?

I believe I have never (or at least not for a long time) been able to complete a full test run on Linux without interrupting it, because of the binlog module; maybe Java 8 on Linux did always flush to disk.

@jameshilliard
Contributor Author

I believe I have never (or at least not for a long time) been able to complete a full test run on Linux without interrupting it, because of the binlog module

Is this an issue you are seeing on all Java versions, or just Java 8?

maybe Java 8 on Linux did always flush to disk

That was always my assumption; the change between Java 8 and Java 9 only seems to be an issue on OSX.

@alcarraz
Contributor

Is this an issue you are seeing on all Java versions, or just Java 8?

On Java 11 too, and it also takes Travis a while:

> Task :modules:binlog:test
Finished generating test XML results (0.002 secs) into: /home/travis/build/jpos/jPOS-EE/modules/binlog/build/test-results/test
Generating HTML test report...
Finished generating test html results (0.02 secs) into: /home/travis/build/jpos/jPOS-EE/modules/binlog/build/reports/tests/test
Stored cache entry for task ':modules:binlog:test' with cache key f211c7caf11e0727c8dfc2d69a79c412
:modules:binlog:test (Thread[Execution worker for ':',5,main]) completed. Took 3 mins 15.599 secs.

On my machine it takes a lot longer; I don't know whether Travis runs the test on an in-memory filesystem or an SSD:

JAVA_HOME=/usr/lib/jvm/java-1.11.0-openjdk-amd64/ gradle test
Starting a Gradle Daemon, 2 stopped Daemons could not be reused, use --status for details

> Configure project :modules:qi-core
Using Gradle Vaadin Plugin 2.0.0.beta2
<===========--> 85% EXECUTING [6m 51s]
> :modules:binlog:test > 0 tests completed
> :modules:binlog:test > Executing test org.jpos.binlog.BinLogTest

7 minutes and counting ...

@jameshilliard
Contributor Author

Hmm, what type of hard disk is your Linux system using? Also, which distribution and kernel version is it?

@alcarraz
Contributor

It's a standard magnetic hard drive. The kernel and system programs are on an SSD.

The kernel version is 5.4.0, and the distribution is Ubuntu 20.04.

Still running:

<===========--> 85% EXECUTING [54m 7s]
> :modules:binlog:test > 0 tests completed
> :modules:binlog:test > Executing test org.jpos.binlog.BinLogTest

@alcarraz
Contributor

I tried debugging it once; it was not stuck, just advancing really slowly.

@jameshilliard
Contributor Author

Hmm, what does sudo hdparm -I /dev/sda show, with /dev/sda being the magnetic disk path?

@alcarraz
Contributor

Not sure which part of the output you are interested in; I don't have any other problems with this setup.

@jameshilliard
Contributor Author

Looking for the output of that (the model might be enough) to rule out it being a bad drive.

@alcarraz
Contributor

I really don't think it's the hard drive; I could not be using this machine for development and as a daily driver if that were the case, but since you insist:

 sudo hdparm -I /dev/sda

/dev/sda:

ATA device, with non-removable media
	Model Number:       ST1000LM035-1RK172                      
	Serial Number:      WDERAMBJ
	Firmware Revision:  LCM2    
	Transport:          Serial, ATA8-AST, SATA 1.0a, SATA II Extensions, SATA Rev 2.5, SATA Rev 2.6, SATA Rev 3.0

I'm starting to think the problem is some kind of deadlock with the ext4 filesystem; when building on the SSD disk the same thing happens. It's been happening for at least the last year, so there have been several kernel and OS upgrades.

I will have to create a partition to test with another FS; I would love to hear from someone running Linux with an ext4 FS who can run this test without problems.

@alcarraz
Contributor

My bad, in fact it's using the same tmp dir in both cases 🤦, but it's just writing very slowly to the tmp file.

/tmp/binlog-5932872186016472376/00000001.dat

@jameshilliard
Contributor Author

I really don't think it's the hard drive; I could not be using this machine for development and as a daily driver if that were the case, but since you insist

Most daily development doesn't involve forced flushing of dirty pages to the disk, so your page cache will compensate for poor drive performance if you have enough RAM.

Model Number: ST1000LM035-1RK172

Yeah, this is a DM-SMR drive (according to this at least), which means it will have terrible random write performance, to the point where, say, rebuilding a ZFS array would likely fail because the drive is so slow that the filesystem driver would think it died. From my understanding, this test forces a flush of dirty pages to the disk on Linux, while most writes in general just end up in the page cache and are written to disk periodically. A number of storage vendors recently got caught swapping out CMR drives for DM-SMR drives without telling people.

My bad, in fact it's using the same tmp dir in both cases 🤦, but it's just writing very slowly to the tmp file.

Weird, I suspect the fsync operation is somehow hitting the DM-SMR drive and slowing down the test; maybe the way the call is being made forces a full fsync for all disks, including those the file isn't being written to? We should probably check the code paths called by FileChannel.force on Linux, or try seeing what's going on in strace.
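
For example, something like the hypothetical micro-test below (not part of jPOS-EE) could be run under strace -f -e trace=fsync,fdatasync to see which syscalls force() actually issues and how long each call takes on that drive:

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

// Hypothetical micro-test: time a write+force loop so it can be profiled or straced
public class ForceTiming {
    public static void main(String[] args) throws IOException {
        try (FileChannel ch = FileChannel.open(Paths.get("/tmp/force-timing.dat"),
                StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
            ByteBuffer buf = ByteBuffer.wrap(new byte[4096]);
            for (int i = 0; i < 1000; i++) {
                buf.rewind();
                ch.write(buf);
                long t0 = System.nanoTime();
                ch.force(false);                          // the flush call under suspicion
                long micros = (System.nanoTime() - t0) / 1_000;
                System.out.println("force() took " + micros + " us");
            }
        }
    }
}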

@alcarraz
Contributor

Yes, I didn't realize this was related to random access; clearly a magnetic drive is not good for that.

In that case there is no point in running this test on developers' machines, since not everyone has tmp mounted on an SSD drive.

I ran it on my Raspberry Pi, which has tmp on the SD card, and at least it finished (in 22m).

I would suggest disabling this test in the normal run and only running it on purpose; a possible way to gate it is sketched below.

But more importantly, we need to determine whether this is the expected behavior of the module on a magnetic drive or whether we need to fine-tune it, and keep the test out until we do that.
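
Something like the following could work, assuming the suite is on JUnit 5 (the class and property names here are just placeholders, not existing jPOS-EE code):

import static org.junit.jupiter.api.Assumptions.assumeTrue;
import org.junit.jupiter.api.BeforeAll;

// Hypothetical gate: skip the slow binlog test unless it is requested explicitly,
// e.g. with "gradle test -DrunBinLogTest=true"
public class BinLogTestGateExample {
    @BeforeAll
    static void onlyWhenRequested() {
        assumeTrue(Boolean.getBoolean("runBinLogTest"),
                "binlog stress test skipped; pass -DrunBinLogTest=true to run it");
    }
}

The property would also have to be forwarded to the test JVM, e.g. with a systemProperty entry in the Gradle test task.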

@jameshilliard
Contributor Author

clearly a magnetic drive is not good for that

Well, it's more than just that; DM-SMR drives are far worse than even a typical low-end magnetic drive from 10 years ago. It's kind of crazy that vendors are putting them in normal laptops/desktops when they are really only suitable as non-RAID, infrequent-access file archive drives.

But more importantly, we need to determine whether this is the expected behavior of the module on a magnetic drive or whether we need to fine-tune it, and keep the test out until we do that

Well, I'm assuming the flush-to-disk behavior is intentional to ensure data integrity.

@jameshilliard
Contributor Author

jameshilliard commented Jun 21, 2020

I refactored binlog to use the new NIO file APIs in #160. I'm assuming it probably won't make much difference for this particular issue, but it might be worth trying out, as it should in theory have some performance advantages.
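
As a rough illustration of the kind of API change involved (not the actual #160 diff), java.nio.file allows things like scanning the binlog directory for .dat segments with a glob instead of java.io.File listing plus manual filtering:

import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class NioListExample {
    public static void main(String[] args) throws IOException {
        Path dir = Paths.get("/tmp");                 // hypothetical binlog directory
        // glob-based directory scan from the java.nio.file API
        try (DirectoryStream<Path> stream = Files.newDirectoryStream(dir, "*.dat")) {
            for (Path p : stream) {
                System.out.println(p.getFileName() + " " + Files.size(p) + " bytes");
            }
        }
    }
}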

@jameshilliard
Contributor Author

Updated #160 again with some more changes to use the AsynchronousFileChannel API instead of the regular FileChannel API.
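
For reference, a minimal sketch of the AsynchronousFileChannel style (again, not the actual #160 code): every write takes an explicit position and returns a Future instead of blocking the calling thread, while flushing still goes through force():

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousFileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;

public class AsyncWriteExample {
    public static void main(String[] args) throws IOException, ExecutionException, InterruptedException {
        try (AsynchronousFileChannel ch = AsynchronousFileChannel.open(
                Paths.get("/tmp/async-example.dat"),       // hypothetical path
                StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
            Future<Integer> pending = ch.write(ByteBuffer.wrap("record".getBytes()), 0L);
            int written = pending.get();                    // wait for the write to complete
            ch.force(false);                                // flushing still uses force()
            System.out.println("wrote " + written + " bytes");
        }
    }
}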
