## Description
This PR makes it possible to cut intermediate files generated in the
analytics indexer based on size. For certain data types, such as
objects, record sizes vary widely: an object can range from a few bytes
to many kilobytes. With the current approach of cutting files after a
fixed number of checkpoints, files can end up very large or very small
depending on the config. It'd be nice to be able to generate files of
consistent size.
With this PR, we allow file writers to return the current intermediate
file size, which the analytics processor uses to decide when to cut.
While this does not seem easy to do in the parquet writer (since it
doesn't serialize records to an intermediate file until flush is
invoked), it is certainly possible in the csv writers (and can be
leveraged for bq data stores).
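The idea above can be sketched roughly as follows. This is a minimal, hypothetical illustration, not the actual indexer code: the trait name `AnalyticsWriter`, the `file_size` method, and the `should_cut` helper are all assumptions made up for this sketch; a writer that cannot report its size (e.g. a parquet writer before flush) simply returns `None` and falls back to checkpoint-based cutting.

```rust
// Hypothetical writer trait: writers that serialize records eagerly
// (like csv) can report how many bytes the intermediate file holds.
trait AnalyticsWriter {
    fn write_record(&mut self, record: &str);
    // Current intermediate file size in bytes, if the writer can
    // determine it (a parquet writer would return None before flush).
    fn file_size(&self) -> Option<u64>;
}

// Toy csv-like writer that buffers serialized bytes in memory.
struct CsvWriter {
    buf: Vec<u8>,
}

impl AnalyticsWriter for CsvWriter {
    fn write_record(&mut self, record: &str) {
        self.buf.extend_from_slice(record.as_bytes());
        self.buf.push(b'\n');
    }
    fn file_size(&self) -> Option<u64> {
        Some(self.buf.len() as u64)
    }
}

// Processor-side check: cut the file once the reported size reaches
// the threshold; writers that report no size never trigger a cut here.
fn should_cut(writer: &dyn AnalyticsWriter, max_bytes: u64) -> bool {
    writer.file_size().map_or(false, |size| size >= max_bytes)
}

fn main() {
    let mut writer = CsvWriter { buf: Vec::new() };
    writer.write_record("a,b,c");
    println!("cut after 1 record? {}", should_cut(&writer, 100));
    for _ in 0..20 {
        writer.write_record("some,longer,record");
    }
    println!("cut after 21 records? {}", should_cut(&writer, 100));
}
```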
## Test Plan
Ran locally and verified file sizes.