
Conversation

@ktrzcinx
Member

This value is unused in the whole source code, so it only makes
the code more messy.

Signed-off-by: Karol Trzcinski <karolx.trzcinski@linux.intel.com>
@ktrzcinx ktrzcinx force-pushed the remove-buffer-fmt branch from 97754c5 to fb04ed1 Compare July 14, 2020 12:41
@ktrzcinx ktrzcinx changed the title buffer: Remove buffer_fmt field [RFC] buffer: Remove buffer_fmt field Jul 14, 2020
@lgirdwood lgirdwood added this to the ABI-3.17 milestone Jul 15, 2020
@lgirdwood lgirdwood changed the title [RFC] buffer: Remove buffer_fmt field [Do Not Merge] buffer: Remove buffer_fmt field Jul 15, 2020
@kv2019i
Collaborator

kv2019i commented Jul 17, 2020

Hmm, why the "[Do Not Merge]" in summary? This seems good to go and is a good test case for the relaxed ABI process in the sense that when kernel upgrades to 3.17 ABI, the header needs to be updated then.

@ktrzcinx
Member Author

@lgirdwood can we go forward with this PR?

@ktrzcinx
Member Author

SOFCI TEST


@lgirdwood lgirdwood left a comment


@ktrzcinx @mrajwa @dbaluta sorry for the delay here - I'm wondering if we should keep this and add a SOF_IPC_BUFFER_COMPRESSED type for MP3 etc. This would save the ABI change and would mean this could be used and propagated across any pipeline that decodes compressed audio.

@plbossart
Member

> @ktrzcinx @mrajwa @dbaluta sorry for the delay here - I'm wondering if we should keep this and add a SOF_IPC_BUFFER_COMPRESSED type for MP3 etc. This would save the ABI change and would mean this could be used and propagated across any pipeline that decodes compressed audio.

That doesn't seem quite right to me. We want to have bytes in a buffer, plus metadata that tells us what to do with the buffer. Strong-typing the buffer isn't a good idea. Are we also going to have BUFFER_IEC61937? That seems like the wrong direction to me.

@dbaluta
Collaborator

dbaluta commented Aug 5, 2020

@plbossart @lgirdwood it is not yet clear to me whether compressed buffers have anything special compared to PCM buffers; I think they are just bytes passed around.
It looks like the original options for buffer_fmt were:

/* stream buffer format */
enum sof_ipc_buffer_format {
	SOF_IPC_BUFFER_INTERLEAVED,
	SOF_IPC_BUFFER_NONINTERLEAVED,
	/* other formats here */
};

Maybe this is useful for DAIs? It would be nice to understand the original purpose of this field. I'm not very fond of removing it just now.

@plbossart
Member

> @plbossart @lgirdwood it is not yet clear to me whether compressed buffers have anything special compared to PCM buffers; I think they are just bytes passed around.
> It looks like the original options for buffer_fmt were:
>
> /* stream buffer format */
> enum sof_ipc_buffer_format {
> 	SOF_IPC_BUFFER_INTERLEAVED,
> 	SOF_IPC_BUFFER_NONINTERLEAVED,
> 	/* other formats here */
> };
>
> Maybe this is useful for DAIs? It would be nice to understand the original purpose of this field. I'm not very fond of removing it just now.

Yeah, this is the sort of meta-data that needs to be passed all the way to the DAIs. This may also be used by processing elements in case they do channel-based processing that is more efficient when non-interleaved.

@lgirdwood
Member

@dbaluta I've only come across SOF_IPC_BUFFER_NONINTERLEAVED data once, but it's part of the ALSA spec, so it was carried forward into IPC.
@plbossart I think "strong typing" would be SOF_IPC_BUFFER_MP3 or SOF_IPC_BUFFER_AAC. Just saying it's compressed would cater for all compressed data and would allow the compressed type to be held in a compressed container (used by decoders/encoders only), whilst sof_ipc_buffer_format could still be used by PCM-based components, which could reject any attempt to render SOF_IPC_BUFFER_COMPRESSED in a simple switch() without having to care about the compressed types.

@plbossart
Member

plbossart commented Aug 6, 2020


Well, my point is that 'COMPRESSED' means nothing. Most uses of the 'compressed API' are actually for PCM data for wake-on-voice. We use the compressed stream for probe extraction, again with PCM data muxed in a custom way. Likewise, the SPDIF support relies on compressed data formatted to meet the PCM bitrate and transmitted as PCM, but on that PCM you don't want to do ANY volume control, for example.

That's why I prefer a 'byte buffer' that just contains bytes. What to do with the buffer needs to come from metadata; a 'compressed' flag provides at the same time too much and too little information.

@dbaluta
Collaborator

dbaluta commented Aug 7, 2020

@plbossart @lgirdwood @ktrzcinx Let's reach a conclusion here.

My proposal is not to remove this field until the compress implementation is ready. At that point we can say for sure whether this field can be removed.

@lgirdwood
Member

> @plbossart @lgirdwood @ktrzcinx Let's reach a conclusion here.
>
> My proposal is not to remove this field until the compress implementation is ready. At that point we can say for sure whether this field can be removed.

OK, let's close this now and reopen once we know the compressed implementation. @ktrzcinx @dbaluta this will need to be on your radars.

@lgirdwood lgirdwood closed this Aug 7, 2020
@dbaluta
Collaborator

dbaluta commented Aug 11, 2020

@ktrzcinx @lgirdwood One notable difference between PCM buffers and COMPR buffers is their params and the way the buffer size is computed.

--- a/src/include/ipc/stream.h
+++ b/src/include/ipc/stream.h
@@ -61,6 +61,7 @@ enum sof_ipc_frame {
 enum sof_ipc_buffer_format {
        SOF_IPC_BUFFER_INTERLEAVED,
        SOF_IPC_BUFFER_NONINTERLEAVED,
+       SOF_IPC_BUFFER_COMPRESSED,
        /* other formats here */
 };
 
diff --git a/src/include/sof/audio/audio_stream.h b/src/include/sof/audio/audio_stream.h
index ab1d68c04..d11944cb5 100644
--- a/src/include/sof/audio/audio_stream.h
+++ b/src/include/sof/audio/audio_stream.h
@@ -56,6 +56,10 @@ struct audio_stream {
        enum sof_ipc_frame frame_fmt;   /**< Sample data format */
        uint32_t rate;          /**< Number of data frames per second [Hz] */
        uint16_t channels;      /**< Number of samples in each frame */
+       
+       /* compressed stream params */
+       uint32_t fragments;
+       uint32_t fragment_size;

A pipeline will look like this:

[pipeline diagram image]

I'm having trouble figuring out who will drive the data copying because compr buffer size and PCM buffer size are most likely very different.

@paulstelian97
Collaborator

@dbaluta The DAI will still do the scheduling, but maybe the host component could have some sort of hysteresis so it does not attempt a copy unless, say, 25% of the buffer is free (otherwise it will keep trying to keep the buffer full even though it takes several seconds, or however long, to exhaust it).

@dbaluta
Collaborator

dbaluta commented Aug 12, 2020

@paulstelian97 I'm thinking of initially using timer-based scheduling. The important thing to consider is how to choose the period length so that we avoid:

  • starving the DAI
  • overflowing the DAI
  • starving the Process component

Another important thing to note is that the Process component can return 'NOT ENOUGH DATA', which means we need a way to quickly feed it more data.

@paulstelian97
Collaborator

Well, we still need to ensure the DAI is scheduled often enough based on the size of the DMA buffer. And you can't skip the host component without special provisions when scheduling the DAI, so either you make huge buffers for the DAI (which will cause latency issues and instability) or you schedule the pipeline more often.

I would like a way to have two pipelines that are scheduled differently, with one feeding into the other. One has the host component and its buffer and is scheduled more rarely; the other has the DAI component and the decoder and is scheduled often.

@dbaluta
Collaborator

dbaluta commented Aug 12, 2020

I like the idea of having two pipelines; that way we could easily handle the fact that they need to be scheduled differently. The current SOF implementation assumes some fixed endpoints, like Host and DAI. How would we connect our pseudo-pipelines now?

Host -> compr buff -> Process   |
                                | ------> pcm buff ----> DAI

@paulstelian97
Collaborator

We gotta figure out how multiple pipeline setups work. I think we can look at any topology with a mixer or a mux in it.

@lgirdwood
Member

@dbaluta @paulstelian97 yes, there are 2 options for scheduling - the DAI or timer. I would imagine that downstream (PCM data) of the decoder will work as it does today. The upstream parts of the decoder (e.g. MP3) will only process data when the downstream PCM buffer has enough free space for the next uncompressed fragment (and this should be reflected in the size of the buffer that connects the decoder to downstream).
We probably want a buffer/stream flag that says "I'm variable rate so don't complain if I don't process any data during this copy()", which would mean a small code change in host.c

@paulstelian97
Collaborator

> @dbaluta @paulstelian97 yes, there are 2 options for scheduling - the DAI or timer. I would imagine that downstream (PCM data) of the decoder will work as it does today. The upstream parts of the decoder (e.g. MP3) will only process data when the downstream PCM buffer has enough free space for the next uncompressed fragment (and this should be reflected in the size of the buffer that connects the decoder to downstream).
> We probably want a buffer/stream flag that says "I'm variable rate so don't complain if I don't process any data during this copy()", which would mean a small code change in host.c

Agreed, the flag makes sense.

We also have another issue for implementing that. The decoder algorithm needs to be explicitly told when the stream is stopping and we're getting the last data. Given the variable-rate decoder, we cannot know for certain that the data is truly over, can we?

Our intent is a low-power solution in which a good chunk of the input buffer is taken from host RAM, and then the host RAM can be suspended for a while (many seconds, as in 20 seconds or more) while the DSP decodes and plays back the decoded data without waking up the host. Given my current understanding of the Host component, I'm not sure how that will work either.

@lgirdwood
Member

> We also have another issue for implementing that. The decoder algorithm needs to be explicitly told when the stream is stopping and we're getting the last data. Given the variable-rate decoder, we cannot know for certain that the data is truly over, can we?

> Our intent is a low-power solution in which a good chunk of the input buffer is taken from host RAM, and then the host RAM can be suspended for a while (many seconds, as in 20 seconds or more) while the DSP decodes and plays back the decoded data without waking up the host. Given my current understanding of the Host component, I'm not sure how that will work either.

Maybe another flag for host.c that says "wake up the host by sending an IPC when my source buffer watermark is X bytes or lower"?

@paulstelian97
Collaborator

> We also have another issue for implementing that. The decoder algorithm needs to be explicitly told when the stream is stopping and we're getting the last data. Given the variable-rate decoder, we cannot know for certain that the data is truly over, can we?

We still need to figure that out; maybe the ALSA interface itself isn't perfect, and when we implement ALSA Compress we might add a few extra changes. It needs some thought still.

> Our intent is a low-power solution in which a good chunk of the input buffer is taken from host RAM, and then the host RAM can be suspended for a while (many seconds, as in 20 seconds or more) while the DSP decodes and plays back the decoded data without waking up the host. Given my current understanding of the Host component, I'm not sure how that will work either.

> Maybe another flag for host.c that says "wake up the host by sending an IPC when my source buffer watermark is X bytes or lower"?

That sounds good, although I'm actually talking about the buffer in SOF. We want to really decode from that buffer without waking up the host (i.e. not even doing Host DMA transfers) for a while. The DRAM itself can get suspended (clocks turned off) in that sleep state.

@dbaluta
Collaborator

dbaluta commented Aug 13, 2020

@paulstelian97 Let's not discuss Low Power Audio now; it will complicate the discussion.

We need to concentrate on the basic usecase:

$ cplay sample.mp3

For now, we have the following topology:

Host -> Compr Buffer -> Process -> PCM Buffer -> DAI

We will be using a timer-driven pipeline because it is easier to visualize the flow (I'm not sure a DAI-driven pipeline even works).

Things to clarify & initial assumptions:

  1. How do we define a period in this scenario, and what is its size?
     Possible answers:
     • a period is the timeframe in which our pipeline copies some data to the DAI
     • a period is the timeframe between two pipeline timer interrupts
  2. The Process component library has:
     • one input buffer, usually 2048 bytes in size
     • one output buffer, usually 4096 bytes in size

Buffer sizes are fixed by the decoder library and cannot be changed at runtime (maybe only at initialization time).

The PROCESS component MIGHT signal during decoding that its input buffer data is not enough, and can return an error code, namely 'NEEDS MORE DATA'.

Because of this, our pipeline can encounter the following scenario:

Host -> Compr buffer -> [(internal input buffer) PROCESS (internal output buffer)] -> PCM Buffer -> DAI

  • The Process component does not produce any output and requires more data. In this case we will call the produce function on the PCM buffer with a size of 0 (this will just issue a warning), but the DAI will not receive any data in the current period. This can produce an underrun.

I think with proper buffer sizes this can work just fine, assuming that passing over a buffer without consuming/producing any data is not a concern.

@plbossart
Member

@paulstelian97 for the compressed API, you have the drain and partial-drain/next-track mechanisms to deal with the last parts of the stream.
This is actually a problem we have with SOF: we don't support drain, so even with PCM, dealing with the last bytes is broken (thesofproject/linux#66).


Labels: ABI (ABI change involved)

7 participants