Performance issue when having large object set to save #7514
Comments
hi @jcgouveia, yes, this is a good point. We rarely have really big transactions, but the case exists, and it can be optimized easily: move the collection outright when the target collection is empty, and otherwise check the sizes and merge the smaller collection into the bigger one, with a swap if necessary. None of this code needs to be multi-thread aware, so it is easy to change. In any case, for a few more versions OrientDB will need to keep the whole transaction in memory, so the list has to exist. Thanks for the detailed report; I will work on some optimization soon. Feel free to give more suggestions or propose a solution. Regards
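A minimal sketch of that strategy, for illustration only (not the actual fix; `DirtySetMerger` is a made-up name):

```java
import java.util.Set;
import com.orientechnologies.orient.core.record.ORecord;

// Illustrative sketch of the strategy described above: steal the incoming
// set when ours is empty, otherwise fold the smaller set into the larger
// one and keep the larger one, instead of always copying element by element.
final class DirtySetMerger {
  static Set<ORecord> merge(Set<ORecord> target, Set<ORecord> incoming) {
    if (incoming == null || incoming.isEmpty())
      return target;                 // nothing to merge
    if (target == null || target.isEmpty())
      return incoming;               // move: no per-element copy at all
    if (target.size() < incoming.size()) {
      incoming.addAll(target);       // swap: small set folded into the big one
      return incoming;
    }
    target.addAll(incoming);         // usual case: small incoming, big target
    return target;
  }
}
```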
@tglman @lvca, any update on this issue? I am having a similar performance issue right now: importing around 370,000 records into OrientDB (DB v2.2.25 with the default configuration, Windows 10 x64, JDK 1.8). The first 2,000 records import really quickly, but after that it slows to 1 record every 3 seconds, then stabilizes at 1 record every 4 seconds. The script:
Have you created indexes on all the fields you're looking up in the WHERE conditions?
I did some optimization for your case that will be released in 2.2.27. @HuangKBAaron, for your case I agree with Luca's suggestion to introduce indexes on the upsert/WHERE fields. Regards
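For reference, something along these lines creates such an index (the class and field names `Person`/`name` are hypothetical, chosen only for illustration; the snippet uses the 2.2-era document API):

```java
import com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx;
import com.orientechnologies.orient.core.sql.OCommandSQL;

// Hypothetical class/field names ("Person", "name") for illustration only.
public class CreateLookupIndex {
  public static void main(String[] args) {
    ODatabaseDocumentTx db = new ODatabaseDocumentTx("plocal:./databases/demo");
    db.open("admin", "admin");
    try {
      // Index the field used in the WHERE/upsert condition so each lookup
      // is an index hit instead of a scan over everything inserted so far.
      db.command(new OCommandSQL(
          "CREATE INDEX Person.name ON Person (name) NOTUNIQUE")).execute();
    } finally {
      db.close();
    }
  }
}
```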
Hi, my fix should have solved this problem, so I am closing this issue. Regards
OrientDB Version: 2.2.20
Java Version: 1.8
OS: Windows 8
I've found a performance issue when there is a large set of objects pending to save.
The problem occurs when setting a value on a POJO to another object that is already attached to the database (an OIdentifiable).
Consider a situation with 10,000 objects pending to save.
In the following code, the call this.newRecords.addAll(newRecords) copies 10,000 objects (and the number keeps growing) every time a setter is used on a POJO. This copying takes almost all of the processing time of creating the object set.
```java
public void merge(ODirtyManager toMerge) {
  if (isSame(toMerge))
    return;
  final Set<ORecord> newRecords = toMerge.getNewRecords();
  if (newRecords != null) {
    if (this.newRecords == null)
      this.newRecords = Collections.newSetFromMap(new IdentityHashMap<ORecord, Boolean>(newRecords.size()));
    // Re-copies every pending new record, element by element, on each merge.
    this.newRecords.addAll(newRecords);
  }
  final Set<ORecord> updateRecords = toMerge.getUpdateRecords();
  if (updateRecords != null) {
    if (this.updateRecords == null)
      this.updateRecords = Collections.newSetFromMap(new IdentityHashMap<ORecord, Boolean>(updateRecords.size()));
    // Same element-by-element copy for updated records.
    this.updateRecords.addAll(updateRecords);
  }
  // ...
```
Using a large set of pending objects may not be the most recommended approach, but I was not expecting this behaviour on every setter. I suppose it can be optimized or implemented in another way.
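To make the cost concrete, here is a stand-alone toy (plain java.util only, not OrientDB code) that mimics the pattern: each "setter" triggers a merge that re-adds every record accumulated so far, so the total work grows quadratically with the number of pending objects.

```java
import java.util.Collections;
import java.util.IdentityHashMap;
import java.util.Set;

// Toy reproduction of the pattern, independent of OrientDB: every "setter"
// merges the full accumulated set again, as ODirtyManager.merge() does.
public class MergeCost {
  public static void main(String[] args) {
    Set<Object> accumulated = Collections.newSetFromMap(new IdentityHashMap<>());
    long copies = 0;
    for (int i = 0; i < 10_000; i++) {
      Set<Object> incoming = Collections.newSetFromMap(new IdentityHashMap<>());
      incoming.add(new Object());     // the record touched by this setter
      incoming.addAll(accumulated);   // merge copies all earlier records too
      copies += incoming.size();      // work done by addAll in this merge
      accumulated = incoming;
    }
    // Roughly 50 million element copies for 10,000 records: O(n^2) overall.
    System.out.println("total element copies: " + copies);
  }
}
```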