Description
I prepared an alternative to savejson for the specific case of saving directly to a file: savejson_fastfile. I'm also including a tiny fix in savejson so that it consistently uses "FileName" in the options struct (right now it mixes "filename" and "FileName", which makes it crash).
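For reference, a minimal sketch of the kind of fix described, assuming the lookup goes through JSONLab's jsonopt helper (the exact lines in the actual patch may differ):

```matlab
% Before: one code path reads the option with a lowercase key,
%   filename = jsonopt('filename', '', opt);
% while callers set the field as 'FileName', so the lookup returns
% the empty default and the later fopen call crashes.

% After: use the same capitalization everywhere.
filename = jsonopt('FileName', '', opt);
```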
I'm also adding a simple Benchmark.m, along with representative data that I use often and that helped identify this speed issue in savejson. You really need very large datasets with thousands of cell entries containing all sorts of mixed structures to see it.
Running on my MacBook Pro as I write this, the test takes 220.17 s with savejson vs 62.3 s with savejson_fastfile, an average speedup of 3.5x. This test uses 1000 cell entries; you can inspect the data to get an idea of the kind of mixed entries I use. I regularly work with datasets of 30,000 or more such cell entries, so you can imagine why I was motivated to speed it up a little.
The larger the data collection, the more pronounced the difference becomes, since concatenating so many strings causes a lot of dynamic memory reallocation. With 2000 entries of my typical data, savejson requires 824.9 s vs 112.58 s with savejson_fastfile, a 7.3x speedup. Writing directly to the file keeps the cost linear, whereas with string concatenation the cost appears to be at least quadratic.
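To illustrate the point, here is a self-contained sketch (not part of the patch) comparing the two strategies: growing a string in a loop copies the entire accumulated buffer on every pass, roughly quadratic work overall, while fprintf to an open file handle only appends each chunk:

```matlab
n = 20000;
chunk = repmat('x', 1, 64);     % stand-in for one serialized JSON fragment

% Quadratic: each concatenation reallocates and copies the whole string.
tic;
s = '';
for i = 1:n
    s = [s chunk];              %#ok<AGROW> copies ~i*64 bytes on pass i
end
tConcat = toc;

% Linear: append each chunk directly to the file.
tic;
fid = fopen(tempname, 'w');
for i = 1:n
    fprintf(fid, '%s', chunk);
end
fclose(fid);
tFile = toc;

fprintf('concat: %.2f s, file: %.2f s\n', tConcat, tFile);
```

On large inputs the concatenation timing grows much faster than the file timing, which matches the 3.5x vs 7.3x speedups above as the dataset doubles.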
Thanks for sharing your library; it has been very useful for my work. The JSON it produces imports into other systems (such as .NET) with no issues.