
Large Chapter Save Recommendation #8

Closed
@shakenbake15

Description

Your chapter save method is suboptimal on books with large chapters. I'd recommend changing the save method to batch the WAV file merging. Currently, you're loading the "combined" file just to append a smaller file. Once that combined file gets large, this slows the process down considerably: loading a one-minute WAV to add ten seconds is no big deal, but loading an hour-long WAV to add ten seconds takes a while, and it only gets worse at two or three hours. I hope that explanation makes sense. I'd suggest setting a batch limit of 256 files, then combining the batches for each chapter. It's a minor improvement, but it will speed things up when saving large chapter files.

This is how ChatGPT recommends doing the update. It seems reasonable that this would work, but I'm using the program right now, so I can't test it at the moment.

from pydub import AudioSegment

def combine_wav_files(chapter_files, output_path, batch_size=256):
    # Start with an empty audio segment for the final result
    combined_audio = AudioSegment.empty()

    # Process the chapter files in batches
    for i in range(0, len(chapter_files), batch_size):
        batch_files = chapter_files[i:i + batch_size]
        batch_audio = AudioSegment.empty()  # empty segment for this batch

        # Sequentially append each file in the current batch to batch_audio
        for chapter_file in batch_files:
            audio_segment = AudioSegment.from_wav(chapter_file)
            batch_audio += audio_segment

        # Merge the finished batch into the overall combined audio
        combined_audio += batch_audio

    # Export the combined audio to the output file path
    combined_audio.export(output_path, format='wav')
    print(f"Combined audio saved to {output_path}")
