Performance decrease 30x when running on .NET 8 #933
I think the problem is related to the locking; @NightOwl888 added this workaround to solve other locking problems. The question is how long we want to support the older frameworks.
Not quite as severe a reduction on Windows/Intel (older CPU):
Seems like .NET 8.0 is considerably slower than .NET 7.0 here. I don't have the time now to dig further, but you can get dotTrace outputs by attaching the DotTraceDiagnoser. Obviously you need access to dotTrace to open the traces from there, but it could be useful. @AddictedCS considering your results are from an M2, it might be interesting to add the trace logger; if you don't have dotTrace you can upload the files here instead, I guess, and then others may have a look. The magnitude of difference makes me wonder if it's the same pattern or a different one. Edit: After a bit of further digging, I end up in the "flushing" of file streams. At this point things seem to begin to diverge between .NET 7.0 and .NET 8.0, but I can't make out why. It also spends significantly more time in pieces I can't figure out how to dig deeper into (stack traces without user methods). I have not succeeded in digging out more there, but I am by no means an expert in dotTrace.
@AddictedCS Thank you for the report. I can reproduce a significant discrepancy on an M1 Pro MacBook Pro with beta 16:
For me, the ratio is ~30x, not 12,000x, but that's still unacceptable. I tried it against master with similar results (note: I dropped N to 25 just to speed up the test run, but similar ratio):
I'll dig into this on my end as well and will update here if I find anything.
@AddictedCS and @paulirwin Could either of you, or both (since you have different chips), run with the DotTraceDiagnoser attached? If you don't have dotTrace, feel free to zip the reports; then I (and/or perhaps others) can dig into them a little. Seeing as my findings show we do end up at the IO level, there are obviously different implementations due to platforms, but going from 300ms -> 10000ms is a HUGE factor compared to Windows/Intel, where the factor is down to about 4x (not to mention the 12,000x, which is insanity). So I am wondering if it's the same areas or if other areas light up.
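For reference, attaching the diagnoser is a one-attribute change. A minimal sketch, assuming the BenchmarkDotNet.Diagnostics.dotTrace package is installed; the class and method names are illustrative placeholders for the repro in this issue:

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Diagnostics.dotTrace;
using BenchmarkDotNet.Running;

// Illustrative: [DotTraceDiagnoser] makes BenchmarkDotNet collect dotTrace
// snapshots alongside the timing results.
[DotTraceDiagnoser]
public class IndexDocumentsBenchmark
{
    [Benchmark]
    public void IndexDocuments()
    {
        // ... the indexing + MaybeRefresh loop from the issue's repro ...
    }
}

public static class Program
{
    public static void Main() => BenchmarkRunner.Run<IndexDocumentsBenchmark>();
}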
My findings are reproducible outside of Lucene.NET, so it might be something we need to work around or bring to Microsoft. The findings are on writing and flushing small buffers to FileStreams. The problem goes away for larger files. However, the test case here almost solely ends up producing files in the 1–4 KB range, so I wondered.
Could you try that as well:
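A minimal sketch of the idea, assuming BenchmarkDotNet (sizes and file names are illustrative; the hard Flush(flushToDisk: true) is the operation under suspicion):

using System;
using System.IO;
using BenchmarkDotNet.Attributes;

// Sketch: write a small buffer to a FileStream and force a hard flush to
// disk, the pattern that appears to have regressed on .NET 8.
public class SmallBufferFlushBenchmark
{
    private byte[] buffer;

    [Params(1024, 2048, 4096)]
    public int Size;

    [GlobalSetup]
    public void Setup()
    {
        Directory.CreateDirectory("test_dir");
        buffer = new byte[Size];
        new Random(42).NextBytes(buffer);
    }

    [Benchmark]
    public int WriteAndFlush()
    {
        using var stream = new FileStream(
            path: Path.Combine("test_dir", $"test.{Size}.bin"),
            mode: FileMode.OpenOrCreate,
            access: FileAccess.Write,
            share: FileShare.ReadWrite,
            bufferSize: 16384);
        stream.Write(buffer, 0, buffer.Length);
        stream.Flush(flushToDisk: true); // the hard flush under suspicion
        return buffer.Length;
    }
}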
I will probably keep expanding the above for different options.
From my analysis, the time is consumed in its Flush method, which goes down to FileStream.Flush(), so as @jeme said, this is probably a regression Microsoft should investigate.
@jeme Awesome find, thanks! I ran the benchmarks on my M1 Pro and sure enough there's a large discrepancy, up to 179x slower on .NET 8:
After some thinking, I believe this is a design bug. At the end of the day, this method performs an actual write to disk via fsync(), which is expensive. It should have a debounce that limits the number of calls per timeframe.
@eladmarg I am curious how you ran your trace; it's Dispose that has the hotspot when I attach the profiler to the test, and we can't really avoid paying that tax when disposing the FSIndexOutput. (Although calling flush(true) explicitly is probably unnecessary, as the same thing would happen on dispose.) Now, I can't say that there aren't other situations where flush might be called too frequently. However, in regards to this particular sample, it calls flush via MaybeRefresh.
I would have assumed that the same rule of thumb applies here even though it's a SearcherTaxonomyManager, even though the docs do not explicitly mention this. I am not too familiar with the Taxonomy stuff, but since this is solely for small files, I am hoping the regression will be smaller as an index grows, unless we do flush(true) for small amounts of data in other cases as well. I am currently executing a bigger test to see where the break-even point is (how many bytes before it equalizes).
It's not related to the index / code. Just to validate this, I changed the FSDirectory file stream flags, and the problem is gone:

file = new FileStream(
    path: Path.Combine(parent.m_directory.FullName, name),
    mode: FileMode.OpenOrCreate,
    access: FileAccess.Write,
    share: FileShare.ReadWrite,
    bufferSize: CHUNK_SIZE,
    options: FileOptions.Asynchronous | FileOptions.RandomAccess | FileOptions.WriteThrough);

Any chance to let Microsoft know about it?
Hey guys, thanks for helping. I've rerun the benchmark multiple times now, and here are the results:
I can't get the 12,000x ratio now as I'm running it at home, and I concur with @paulirwin that it is "only" 30x, which is still unacceptable. I'm quite confident I got the 12,000x degradation originally, though I had it on a slightly modified benchmark, which I simplified when opening the GitHub issue. I didn't stash the changes, so now they are gone. If I manage to get it again, I will post it. @jeme thanks for the deeper benchmark; here are my results:
The results corroborate @jeme's findings: a 150x performance degradation. @eladmarg I respectfully disagree that this is a design bug. The pattern used in the initial benchmark is quite common among those who use Lucene, who call it a soft commit. This paradigm is used in Solr and other custom implementations. The index is not flushed to disk; by refreshing the searcher after an index-writer update, the changes are reflected in new searches, which is important when near-real-time updates are required. In any case, @jeme's benchmark points to a framework performance issue.
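For reference, the soft-commit pattern in question looks roughly like this (a sketch against Lucene.NET 4.8 APIs; the writer and manager setup is assumed to match the benchmark attached to this issue):

using Lucene.Net.Documents;
using Lucene.Net.Facet.Taxonomy;
using Lucene.Net.Index;
using Lucene.Net.Search;

// Sketch of the NRT "soft commit" pattern: update, refresh the searcher,
// and search the fresh data without a durable Commit().
public static class SoftCommitExample
{
    public static void UpdateAndSearch(
        IndexWriter indexWriter,
        SearcherTaxonomyManager searcherManager,
        string id,
        Document doc)
    {
        indexWriter.UpdateDocument(new Term("_id", id), doc);
        searcherManager.MaybeRefresh(); // reopen the searcher; no hard commit

        var instance = searcherManager.Acquire();
        try
        {
            // The update above is already visible here, before any Commit().
            var hits = instance.Searcher.Search(new TermQuery(new Term("_id", id)), 1);
        }
        finally
        {
            searcherManager.Release(instance);
        }
        // indexWriter.Commit() only happens later, for durability.
    }
}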
Ohh mo***** - now I have to run the tests again with different variations on those FileOptions :D... I will probably attempt to submit this once I have run those more detailed benchmarks to get a deeper picture.
@AddictedCS - I agree with you totally; hence, there should be a debounce mechanism in the library, which somehow isn't happening. Of course, this is another discussion and isn't related to this .NET Core framework bug, which shouldn't happen in the first place.
@eladmarg At least such a debounce can actually be implemented on the caller side; it doesn't exist in the Java version (if I am to judge from the documentation), so I guess the general stance would be that Lucene.NET would not add such a thing. Just found this at random: https://github.com/coddicat/DebounceThrottle/blob/master/DebounceThrottle/DebounceDispatcherGeneric.cs — maybe that can be of inspiration for people seeking this.
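A minimal sketch of such a caller-side debouncer (illustrative only, not part of Lucene.NET; the interval and the wrapped action, e.g. a MaybeRefresh call, are up to the caller):

using System;
using System.Threading;
using System.Threading.Tasks;

// Trailing debounce: coalesces bursts of Invoke() calls so the wrapped
// action runs at most once per quiet interval.
public class Debouncer
{
    private readonly TimeSpan _interval;
    private readonly Action _action;
    private CancellationTokenSource _cts;

    public Debouncer(TimeSpan interval, Action action)
    {
        _interval = interval;
        _action = action;
    }

    public void Invoke()
    {
        var cts = new CancellationTokenSource();
        // Cancel the previous pending run, restarting the quiet-period timer.
        var previous = Interlocked.Exchange(ref _cts, cts);
        previous?.Cancel();
        Task.Delay(_interval, cts.Token).ContinueWith(
            t => _action(), // only runs if no newer Invoke() canceled the delay
            CancellationToken.None,
            TaskContinuationOptions.OnlyOnRanToCompletion,
            TaskScheduler.Default);
    }
}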
Just for curiosity (after I changed the FileStream, the problem was gone on .NET 8):

BenchmarkDotNet v0.13.12, Windows 11 (10.0.22631.3296/23H2/2023Update/SunValley3)
AMD Ryzen Threadripper 3970X, 1 CPU, 16 logical and 16 physical cores
.NET SDK 9.0.100-preview.2.24157.14
[Host] : .NET 8.0.3 (8.0.324.11423), X64 RyuJIT AVX2
.NET 8.0 : .NET 8.0.3 (8.0.324.11423), X64 RyuJIT AVX2
.NET 9.0 : .NET 9.0.0 (9.0.24.12805), X64 RyuJIT AVX2
Server=True
| Method | Job | Runtime | Mean | Error | StdDev | Median | Ratio | RatioSD |
|------------ |--------- |--------- |-----------:|-----------:|-----------:|-----------:|------:|--------:|
| Write1Kb | .NET 8.0 | .NET 8.0 | 4.970 ms | 0.3795 ms | 1.1130 ms | 4.857 ms | 1.00 | 0.00 |
| Write1Kb | .NET 9.0 | .NET 9.0 | 10.373 ms | 0.7447 ms | 2.1839 ms | 10.273 ms | 2.20 | 0.70 |
| | | | | | | | | |
| Write2Kb | .NET 8.0 | .NET 8.0 | 4.566 ms | 0.3104 ms | 0.9105 ms | 4.601 ms | 1.00 | 0.00 |
| Write2Kb | .NET 9.0 | .NET 9.0 | 9.956 ms | 1.2714 ms | 3.7486 ms | 8.913 ms | 2.21 | 0.82 |
| | | | | | | | | |
| Write4Kb | .NET 8.0 | .NET 8.0 | 4.340 ms | 0.3761 ms | 1.0970 ms | 4.229 ms | 1.00 | 0.00 |
| Write4Kb | .NET 9.0 | .NET 9.0 | 12.500 ms | 0.8261 ms | 2.3835 ms | 12.313 ms | 3.09 | 0.98 |
| | | | | | | | | |
| Write512Kb | .NET 8.0 | .NET 8.0 | 11.235 ms | 0.7612 ms | 2.2205 ms | 10.852 ms | 1.00 | 0.00 |
| Write512Kb | .NET 9.0 | .NET 9.0 | 40.424 ms | 3.9548 ms | 11.6608 ms | 36.904 ms | 3.90 | 1.77 |
| | | | | | | | | |
| Write1024Kb | .NET 8.0 | .NET 8.0 | 14.783 ms | 0.6947 ms | 2.0484 ms | 14.620 ms | 1.00 | 0.00 |
| Write1024Kb | .NET 9.0 | .NET 9.0 | 52.843 ms | 1.4089 ms | 4.1321 ms | 53.069 ms | 3.64 | 0.60 |
| | | | | | | | | |
| Write16Mb | .NET 8.0 | .NET 8.0 | 724.461 ms | 14.1549 ms | 20.7481 ms | 726.672 ms | 1.00 | 0.00 |
| Write16Mb | .NET 9.0 | .NET 9.0 | 529.327 ms | 10.2676 ms | 15.9854 ms | 528.721 ms | 0.73 | 0.03 |

So, after all, this isn't a Lucene.NET issue; it's purely a Microsoft bug. This issue can be closed.
Since I can't remember the exact benchmark data setup which generated the 12,000x degradation, I will edit the issue to reflect the reproducible 30x degradation. I agree, it seems to be a Microsoft issue, exacerbated when the code executes on a macOS ARM chip. The 150x degradation in Flush is reproducible on my machine as well. BTW, I've executed @jeme's benchmark with the added flags in the FileStream constructor:
@jeme let me know if you intend to open an issue in the dotnet/core repository. I will then close this issue.
And here are the results for different sets of parameters. The params options are set as follows:

[Params(FileOptions.Asynchronous | FileOptions.RandomAccess | FileOptions.WriteThrough,
        FileOptions.Asynchronous | FileOptions.RandomAccess,
        FileOptions.Asynchronous | FileOptions.WriteThrough,
        FileOptions.RandomAccess | FileOptions.WriteThrough,
        FileOptions.Asynchronous,
        FileOptions.RandomAccess,
        FileOptions.WriteThrough)]
public FileOptions Options;

The degradation varies between 120x and 180x.
Would one of you like to submit an issue to the .NET team on this? Feel free to use my benchmark results as well, and please tag me in it. Thanks!
Closing this issue, moving discussion to dotnet/runtime#100229 |
Unfortunately, when lining up the data, this is likely a bug in .NET 7 rather than 8, which means the performance will stay as it is going forward (disabling the buffers brings the numbers up to the .NET 8 numbers, not down to the .NET 7 numbers).
Yeah, can we then revive the discussion on soft commit?
BTW, if we analyze #77374 and #77373, neither the bug report nor the fix could provide a test case to reproduce the issue.
The decision to remove the if statement may not be that obvious, as the original writer may have put it there on purpose as an optimization. It's suspicious that the bug existed for years and nobody reported a data corruption issue.
Does anyone know if this behaves the same in Java Lucene?
@paulirwin On this level I guess it becomes hard to say, as you would need to know how Java's internal IO libraries work. It's not a 1-to-1 for sure, but since Java and .NET have somewhat different IO layers, there can be reasoning behind the differences. @AddictedCS in most cases data will get flushed correctly anyway on dispose; it's however something that might be left to the OS or even the actual disk/disk drivers to manage (caches on the disk itself help speed up IO), which means you would find yourself in a rather special case if that were to happen. Depending on the software, I don't think you would get to the position where you would hit this bug anyway, but if you are writing something that should be ACID compliant, you would need it.
IMO, if Java also flushes in this scenario and matches .NET 8, then I'm inclined to keep the behavior the same (at least for now...) and instead better document performance considerations. If Java matches .NET <= 7, then I think we can explore a solution here for the 4.8 release, although given the part of code this is in, this is a very risky change that would need to be tested sufficiently.
@paulirwin As far as the hard flush goes, one can get rid of it as follows:

public class CustomMMapDirectory : MMapDirectory
{
public CustomMMapDirectory(DirectoryInfo path, LockFactory lockFactory) : base(path, lockFactory) { }
public CustomMMapDirectory(DirectoryInfo path) : base(path) { }
public CustomMMapDirectory(DirectoryInfo path, LockFactory lockFactory, int maxChunkSize) : base(path, lockFactory, maxChunkSize) { }
public CustomMMapDirectory(string path, LockFactory lockFactory) : base(path, lockFactory) { }
public CustomMMapDirectory(string path) : base(path) { }
public CustomMMapDirectory(string path, LockFactory lockFactory, int maxChunkSize) : base(path, lockFactory, maxChunkSize) { }
public override IndexOutput CreateOutput(string name, IOContext context)
{
EnsureOpen();
EnsureCanWrite(name);
return new CustomFSIndexOutput(this, name);
}
protected class CustomFSIndexOutput : IndexOutput
{
private readonly CustomMMapDirectory directory;
public const int DEFAULT_BUFFER_SIZE = 16384;
private const int CHUNK_SIZE = DEFAULT_BUFFER_SIZE;
internal readonly string name;
private readonly CRC32 crc = new CRC32();
private readonly FileStream file;
private volatile bool isOpen; // remember if the file is open, so that we don't try to close it more than once
public override long Length => file.Length;
public override long Position => file.Position;
public override long Checksum => crc.Value;
public CustomFSIndexOutput(CustomMMapDirectory directory, string name)
{
this.directory = directory;
this.name = name;
file = new FileStream(
path: Path.Combine(this.directory.m_directory.FullName, name),
mode: FileMode.OpenOrCreate,
access: FileAccess.Write,
share: FileShare.ReadWrite,
bufferSize: CHUNK_SIZE);
isOpen = true;
}
public override void WriteByte(byte b)
{
CheckDisposed();
crc.Update(b);
file.WriteByte(b);
}
public override void WriteBytes(byte[] b, int offset, int length)
{
CheckDisposed();
crc.Update(b, offset, length);
file.Write(b, offset, length);
}
public override void Flush()
{
CheckDisposed();
file.Flush();
}
protected override void Dispose(bool disposing)
{
if (!disposing) return;
//Method is empty
//parent.OnIndexOutputClosed(this);
if (!isOpen) return;
Exception priorE = null;
try
{
file.Flush(flushToDisk: false);
}
catch (Exception ioe) when (ioe is IOException or UnauthorizedAccessException or ObjectDisposedException)
{
priorE = ioe;
}
finally
{
isOpen = false;
IOUtils.DisposeWhileHandlingException(priorE, file);
}
}
public override void Seek(long pos)
{
CheckDisposed();
file.Seek(pos, SeekOrigin.Begin);
}
private void CheckDisposed()
{
if (!isOpen)
throw new ObjectDisposedException("");
}
}
}

But you take on some of the responsibility instead of leaving it to the core library, and you also need to accept that you're leaving the code more open to failure; so if your index is your only storage, then perhaps it's best not to do this, even though an error here would be considered very rare. But as a short-term fix, that should do. However, that does not really change the behavior as @AddictedCS requests, and since we don't really want to diverge too much from the Java source, that is a bit more problematic. I did look into how easy/hard it might be to extend the index writer and maybe just override the GetReader part to avoid flushing; as it stands, that's a much more involved task, unfortunately.
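Hypothetical usage of the custom directory above; it drops in wherever MMapDirectory was constructed before (the path and analyzer choice are illustrative):

using Lucene.Net.Analysis.Standard;
using Lucene.Net.Index;
using Lucene.Net.Util;

// Swap the custom directory into an otherwise ordinary indexing setup.
var analyzer = new StandardAnalyzer(LuceneVersion.LUCENE_48);
var config = new IndexWriterConfig(LuceneVersion.LUCENE_48, analyzer);
using var directory = new CustomMMapDirectory("test_index");
using var writer = new IndexWriter(directory, config);
// ... index documents as usual; flushes now use Flush(flushToDisk: false) ...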
I created a reproduction of the original benchmark using jmh. Forgive my quick and dirty Java port 😄 Please review and let me know if I made any mistakes. Make a project using the jmh mvn archetype, replace the benchmark code with the code below, and add the mvn dependencies as below.

package org.example;
import org.apache.commons.io.FileUtils;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.document.TextField;
import org.apache.lucene.facet.FacetField;
import org.apache.lucene.facet.FacetsConfig;
import org.apache.lucene.facet.taxonomy.SearcherTaxonomyManager;
import org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyWriter;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.SearcherFactory;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.MMapDirectory;
import org.apache.lucene.util.Version;
import org.openjdk.jmh.annotations.*;
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.concurrent.ThreadLocalRandom;
@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)
public class MyBenchmark {
private DirectoryTaxonomyWriter taxonomyWriter;
private IndexWriter indexWriter;
private Document[] documents;
private FacetsConfig facetsConfig;
private SearcherTaxonomyManager searcherManager;
@Setup
public void setup() {
if (Files.exists(Paths.get("test_index"))) {
try {
FileUtils.deleteDirectory(new File("test_index"));
} catch (IOException e) {
e.printStackTrace();
}
}
if (Files.exists(Paths.get("test_facets"))) {
try {
FileUtils.deleteDirectory(new File("test_facets"));
} catch (IOException e) {
e.printStackTrace();
}
}
Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_48);
try {
Directory luceneDirectory = new MMapDirectory(new File("test_index"));
indexWriter = new IndexWriter(luceneDirectory, new IndexWriterConfig(Version.LUCENE_48, analyzer));
taxonomyWriter = new DirectoryTaxonomyWriter(new MMapDirectory(new File("test_facets")));
searcherManager = new SearcherTaxonomyManager(indexWriter, true, new SearcherFactory(), taxonomyWriter);
} catch (IOException e) {
throw new RuntimeException(e);
}
facetsConfig = new FacetsConfig();
facetsConfig.setRequireDimCount("track_id", true);
documents = new Document[N];
for (int i = 0; i < N; i++)
{
String facet = generateRandomString(5);
documents[i] = new Document();
documents[i].add(new StringField("_id", Integer.toString(i), Field.Store.YES));
documents[i].add(new TextField("content", generateRandomString(10), Field.Store.YES));
documents[i].add(new FacetField("track_id", facet));
}
}
@Param({"25"})
public int N;
@Benchmark
public void indexDocumentsBenchmark() {
for (int i = 0; i < documents.length; ++i)
{
try {
Document taxonomyDocument = facetsConfig.build(taxonomyWriter, documents[i]);
indexWriter.updateDocument(new Term("_id", Integer.toString(i)), taxonomyDocument);
searcherManager.maybeRefresh(); // maybe refresh causing dramatic performance drop on .NET 8.0
} catch (IOException e) {
throw new RuntimeException(e);
}
}
}
private static String generateRandomString(int length)
{
// more spaces added on purpose
final String chars = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789 ";
char[] stringChars = new char[length];
for (int i = 0; i < length; i++)
{
stringChars[i] = chars.charAt(ThreadLocalRandom.current().nextInt(chars.length()));
}
return new String(stringChars);
}
}

Additional mvn deps:

<dependency>
<groupId>org.apache.lucene</groupId>
<artifactId>lucene-core</artifactId>
<version>4.8.0</version>
</dependency>
<dependency>
<groupId>org.apache.lucene</groupId>
<artifactId>lucene-facet</artifactId>
<version>4.8.0</version>
</dependency>
<dependency>
<groupId>org.apache.lucene</groupId>
<artifactId>lucene-analyzers-common</artifactId>
<version>4.8.0</version>
</dependency>
<dependency>
<groupId>commons-io</groupId>
<artifactId>commons-io</artifactId>
<version>2.15.1</version>
</dependency>

I ran it against JDK 11, 17, and 21 by specifying a manual path to the JVM to run it on. Java 21:
Java 17:
Java 11:
The results, on the order of ~60-70ms (see my N=25 results above from master), with no significant difference between Java versions, imply to me that Java behaves similarly to .NET <= 7, so we should consider a core library fix for this that is Lucene.NET-specific. Unless I've done something wrong above...
Changing the benchmark to skip the explicit hard flush makes the problem go away. Just to add as an example: the vast majority of developers use the Dispose method to ensure the stream is closed and flushed, and by default that does not force a flush to the physical disk. Here is the modified benchmark (I've removed the explicit hard flush; note the commented-out line):

private static int WriteData(byte[] buffer, int bufferSize)
{
using var stream =
new FileStream(
path: Path.Combine("test_dir", $"test.{buffer.Length}.bin"),
mode: FileMode.OpenOrCreate,
access: FileAccess.Write,
share: FileShare.ReadWrite,
bufferSize: bufferSize);
stream.Write(buffer, 0, buffer.Length);
// stream.Flush(flushToDisk: true);
return buffer.Length;
}

The results are so much faster now, with both .NET 7 and 8 showing great performance.
I think I agree with that. If this is supposed to semantically represent a soft commit, I don't think we should have to do a hard flush to disk.
Looks like we could add back the commented-out fsync code in lucenenet/src/Lucene.Net/Store/FSDirectory.cs, lines 589 to 595 at 444e6d0.
Interestingly, the blame shows the most recent commit of that block to be a revert of a revert of a commit 😄 cc @NightOwl888. Also note this from the Java Lucene 4.8.0 release notes:
I'm going to reopen this issue until we come up with a solution. My instinct is to support fsync.
@paulirwin - I agree. If it is possible to mimic an fsync in .NET, we should also do that. This may be the root cause of #894.

As for the history, this goes back to this conversation: https://lists.apache.org/list?dev@lucenenet.apache.org:2017-1 and anything from Vincent after that point. He helped pull what we had in the Store namespace together and make it more stable and efficient. It was around this time when we nixed fsync, but at that point .NET Core support was still in the works and we had no way to test on anything but Windows. So, the benefit of supporting fsync seems clearer now. Whether providing fsync is feasible in .NET is another question. We had to drastically alter the way file locking works in order to provide support in .NET, and I am not sure whether that is also relevant to this conversation. I haven't gone over the benchmarks in detail.

Based on the LUCENE-5588 comment, it also seems that supporting that JRE crash test (ACID) on Windows is probably not possible. It would probably be worth a try to disable that test on Windows (and remove the [AwaitsFix] attribute).
I just wanted to add that
Thanks, @jeme, for the example!
Just dropping this here as it may be worth knowing: https://ayende.com/blog/164673/the-difference-between-fsync-write-through-through-the-os-eyes What bugs me a little is that I thought the whole idea of getting the reader from the IndexWriter was to enable NRT, where updates would be made available prior to a commit. I implicitly assumed that they also weren't necessarily flushed to storage (disk), specifically to speed things up, but that is clearly not the case (when we look at the Java source), so that seems like a huge thing to leave on the table.
Any fixes coming out anytime soon?
This is probably by design and not a Lucene-specific issue. @NightOwl888, what do you think?
I have done a bit of research on the fsync issue. The following info is helpful:
Here is the fsync approach in Lucene 4.8.0: it supports either a file name or a directory name (a string). Correct me if I am wrong, but it seems that .NET only supports fsync on individual files, not on all of the files in a directory. So, if that assumption is true, the only way to do fsync on a directory is to use native code. And indeed that is what is done in ravendb. It also looks like the fsync call needs to be synchronized with any writes or flushes to the file buffer, but maybe Lucene already does external synchronization. Do note that our file locking approach depends on this as well.

Directory Level Fsync

Ravendb has a method that accepts a directory path and will do the fsync on the whole directory, so this appears to be a suitable replacement for the directory case. Unfortunately, it seems that ravendb isn't covered under one of the licenses that are acceptable by Apache. They were GNU back on version 3.5 and prior, but it doesn't look like any of those versions have this set up yet on non-Windows OSes.

File Level Fsync

In Lucene, "stale files" were tracked in a synchronized collection. I looked at the source, and it appears that a better option might be to use the same low-level native calls that Ravendb uses at the file level as well.

Synchronization

This is the tricky bit. In Java, access to the stale files collection is synchronized. It also appears that the need to block makes it impossible (or at least extremely difficult) to use async APIs here.

It would be great to get some feedback on this to see if anyone has any ideas that we could integrate. And of course setting up the project for deploying the native bits via NuGet on the various operating systems that .NET Core supports is not something I have looked into. If someone with experience doing this could provide some guidance, that would be great.
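To make the native-code route concrete, a rough POSIX-only sketch of a directory-level fsync via P/Invoke might look like the following (illustrative only; a real implementation needs Windows handling and proper errno mapping):

using System.Runtime.InteropServices;

// Sketch: fsync a directory on Linux/macOS by opening it read-only and
// calling fsync on the descriptor. These are standard libc calls.
internal static class PosixFsync
{
    [DllImport("libc", SetLastError = true)]
    private static extern int open(string pathname, int flags);

    [DllImport("libc", SetLastError = true)]
    private static extern int fsync(int fd);

    [DllImport("libc", SetLastError = true)]
    private static extern int close(int fd);

    private const int O_RDONLY = 0; // same value on Linux and macOS

    public static void FsyncDirectory(string path)
    {
        int fd = open(path, O_RDONLY);
        if (fd == -1)
            throw new System.IO.IOException($"open({path}) failed: errno {Marshal.GetLastWin32Error()}");
        try
        {
            if (fsync(fd) == -1)
                throw new System.IO.IOException($"fsync({path}) failed: errno {Marshal.GetLastWin32Error()}");
        }
        finally
        {
            close(fd);
        }
    }
}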
* Restore fsync behavior in FSDirectory via P/Invoke. This restores the commented-out fsync behavior in FSDirectory to help mitigate a performance regression in .NET 8.
* Use System.IO.Directory.Exists to avoid caching exists status
* Add unit test for ConcurrentHashSet.ExceptWith
* Improve errors thrown by CreateFileW
* Change FileSystemInfo use to string in IOUtils.Fsync
* Change Debug.Assert to Debugging use
* Lucene.Net.Index.TestIndexWriterOnJRECrash::TestNRTThreads_Mem(): Removed AwaitsFix attribute. The FSync implementation should fix this test.
* Make ExceptWith atomic
* Improve error handling if directory not found on Linux/macOS
* Refactor interop methods into separate partial class files
* Lucene.Net.Index.TestIndexWriterOnJRECrash::TestNRTThreads_Mem(): Added [Repeat(25)] attribute.
* Lucene.Net.Index.TestIndexWriterOnJRECrash: Instead of using a temp file to pass the process ID to kill back to the original test process, open a socket and listen for the process ID to be written.
* Synchronize access to stale files collection. This is necessary to prevent race conditions, even though this code is not in the upstream Java code. A thread could try to add an item to the collection after it has been synced in `Sync` but before it is removed from the collection; then the file is removed from the collection, resulting in a missed sync.
* Rename syncLock to m_syncLock
* Lucene.Net.Index.TestIndexWriterOnJRECrash: Added try/finally block and refactored to ensure the TcpListener and Process are cleaned up at the end of each test iteration. This makes it run ~20% faster.
* Refactor: rename namespace to Lucene.Net.Native
* Mark JRE crash test as [AwaitsFix]

Co-authored-by: Shad Storhaug <shad@shadstorhaug.com>
Is there an existing issue for this?
Describe the bug
After upgrading to .NET 8 I've noticed a dramatic performance decrease. I've localized the issue to the SearcherTaxonomyManager.maybeRefresh method. I don't know the internals of this method though, so any help will be of immense value. Below is a benchmark that reproduces the issue. The benchmark simulates document updates, using maybeRefresh to refresh the indexer. This code has been running absolutely fine on net5, net6, and net7, but is now completely unusable on .NET 8. The results are attached below.
Attaching project details, for those who want to reproduce it locally:
Any help on this will be greatly appreciated, as I'm right now blocked from migrating toward .NET 8 because of this issue.
I've noticed a somewhat similar issue, #929, and the fix (using ServerGC) didn't help.
Thanks!
Expected Behavior
Better performance.
Steps To Reproduce
Described in the attached benchmark.
Exceptions (if any)
No exceptions
Lucene.NET Version
4.8.0-beta00016
.NET Version
.NET8
Operating System
MacOS ARM
Anything else?
No response