[U-Boot] [PATCH] nvme: add more cache flushes

J. William Campbell jwilliamcampbell at comcast.net
Thu Oct 17 15:46:17 UTC 2019


Hi All,
On 10/17/2019 12:12 AM, Patrick Wildt wrote:
> On Thu, Oct 17, 2019 at 03:08:59PM +0800, Bin Meng wrote:
>> Hi Patrick,
>>
>> On Thu, Oct 17, 2019 at 2:44 PM Patrick Wildt <patrick at blueri.se> wrote:
>>> On Thu, Oct 17, 2019 at 10:55:11AM +0800, Bin Meng wrote:
>>>> Hi Patrick,
>>>>
>>>> On Wed, Oct 16, 2019 at 11:35 PM Patrick Wildt <patrick at blueri.se> wrote:
>>>>> On Wed, Oct 16, 2019 at 06:11:23PM +0800, Bin Meng wrote:
>>>>>> On Mon, Oct 14, 2019 at 7:11 PM Patrick Wildt <patrick at blueri.se> wrote:
>>>>>>> On an i.MX8MQ our nvme driver doesn't completely work since we are
>>>>>>> missing a few cache flushes.  One is the prp list, which is an extra
>>>>>>> buffer that we need to flush before handing it to the hardware.  Also
>>>>>>> the block read/write operations needs more cache flushes on this SoC.
>>>>>>>
>>>>>>> Signed-off-by: Patrick Wildt <patrick at blueri.se>
>>>>>>> ---
>>>>>>>   drivers/nvme/nvme.c | 15 +++++++++------
>>>>>>>   1 file changed, 9 insertions(+), 6 deletions(-)
>>>>>>>
>>>>>>> diff --git a/drivers/nvme/nvme.c b/drivers/nvme/nvme.c
>>>>>>> index 2444e0270f..69d5e3eedc 100644
>>>>>>> --- a/drivers/nvme/nvme.c
>>>>>>> +++ b/drivers/nvme/nvme.c
>>>>>>> @@ -123,6 +123,9 @@ static int nvme_setup_prps(struct nvme_dev *dev, u64 *prp2,
>>>>>>>          }
>>>>>>>          *prp2 = (ulong)dev->prp_pool;
>>>>>>>
>>>>>>> +       flush_dcache_range((ulong)dev->prp_pool, (ulong)dev->prp_pool +
>>>>>>> +                          dev->prp_entry_num * sizeof(u64));
>>>>>>> +
>>>>>>>          return 0;
>>>>>>>   }
>>>>>>>
>>>>>>> @@ -717,9 +720,10 @@ static ulong nvme_blk_rw(struct udevice *udev, lbaint_t blknr,
>>>>>>>          u16 lbas = 1 << (dev->max_transfer_shift - ns->lba_shift);
>>>>>>>          u64 total_lbas = blkcnt;
>>>>>>>
>>>>>>> -       if (!read)
>>>>>>> -               flush_dcache_range((unsigned long)buffer,
>>>>>>> -                                  (unsigned long)buffer + total_len);
>>>>>>> +       flush_dcache_range((unsigned long)buffer,
>>>>>>> +                          (unsigned long)buffer + total_len);
>>>>>> Why we need this for read?
>>>>> I'm no processor engineer, but I have read (and "seen") the following
>>>>> on ARMs.  If I'm wrong, please correct me.
>>>>>
>>>>> Since the buffer is usually allocated cache-aligned on the stack,
>>>>> it is very possible that this memory was recently used for what it
>>>>> is normally used for: as stack.  Thus, the cache lines covering it
>>>>> can still be filled and dirty, not yet evicted/flushed to memory.
>>>>> It is then possible that, between the DMA access from the device
>>>>> and our cache invalidation, the CPU cache writes its contents back
>>>>> to memory, overwriting whatever the NVMe just gave us.
>>>> OK, this makes sense. So if we allocate the buffer from the heap, we
>>>> should only care about flush on write, right? Can you test this?
>>> If you're talking about having a bounce buffer: you'd need to flush
>>> it once upon allocation, because that part of the heap could have
>>> also been used earlier by someone else.
>>>
>> I was not talking about a bounce buffer. I mean a buffer allocated
>> from the heap and used directly for DMA.
>>
>> Regards,
>> Bin
> If you allocate a buffer from the heap, you still need to make sure
> to flush it once, since there's still the chance that you have used
> it shortly earlier.  But it's less of an issue than on the stack.

The "rules" for cache management of DMA buffers for non-cache-coherent 
CPUs are immutable. It doesn't matter where the memory came from 
(static, heap, or stack). It may be in cache, and it may be dirty. You 
can't know  it is for sure "clean". It is assumed that the DMA buffers 
are allocated to begin on a cache line boundary and are a multiple of a 
cache line in length. If this is not the case, references by the CPU to 
locations outside the buffer can make the cache state of the buffer 
dirty, which is an error. It is also required that there be no accesses 
to the DMA buffer by the CPU while DMA is in progress. This is normally 
true by default, and if it isn't true, it is an error. The rules are 
then as follows:

On write, before DMA is started, the cache lines corresponding to the
DMA buffer addresses MUST be flushed, to ensure the desired content is
transferred from cache to RAM before the write DMA begins.

On read, before DMA is started, the cache lines corresponding to the
DMA buffer addresses MUST be either invalidated or flushed, to ensure
that no cache-initiated writes to RAM can occur while the read DMA is
in progress. Normally, invalidate is preferred because it is faster.
However, for programming simplicity some drivers choose to flush before
both read and write DMA. If that is the case, it is good practice to
note that choice in a comment.

On write, after DMA is completed, no additional cache actions are required.

On read, after DMA has completed, the cache lines corresponding to the
DMA buffer addresses MUST be invalidated. This is necessary to ensure
that stale data in the cache is not used instead of the newly read data
in RAM. If one elected to invalidate the cache before the read DMA
started, one may wonder why a second invalidate is necessary: since the
program is not allowed to reference the buffer in the interim, the
cache should not contain any data from the DMA buffer area. The reason
is that modern CPUs may load data from the DMA buffer into the cache
while DMA is in progress, as a result of hardware "optimizations" such
as speculative prefetch. That data may be stale and must not be used;
the invalidate after DMA completes makes sure it isn't. If you
absolutely, for sure, know your CPU doesn't do this, you could in
theory omit the second invalidate. IMHO, it isn't worth the risk: if
the code is reused on a newer CPU, it may stop working.
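
Putting the read-side rules together, a minimal sketch of a read
transfer (start_read_dma() and wait_read_dma() are hypothetical
placeholders for however the transfer is actually started and waited
on; buffer/total_len as in the code under discussion):

        /* Before DMA: ensure no dirty lines can be written back over
         * the buffer while the device fills it. */
        invalidate_dcache_range((unsigned long)buffer,
                                (unsigned long)buffer + total_len);

        start_read_dma(buffer, total_len);      /* hypothetical */
        wait_read_dma();                        /* hypothetical */

        /* After DMA: discard anything the CPU speculatively pulled
         * into the cache while the transfer was in flight, so the new
         * data is read from RAM rather than from stale lines. */
        invalidate_dcache_range((unsigned long)buffer,
                                (unsigned long)buffer + total_len);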

These are the things you must do. You don't need to do anything more,
and doing less will cause problems. I believe that some CPUs do not
provide a plain cache invalidate operation, only a combined
flush-and-invalidate. That is fine: the flush is gratuitous but not
harmful.
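
As an aside on the alignment requirement above: in U-Boot I would
expect something like the memalign.h helpers to be used to satisfy it,
roughly as below. This is a sketch only; buffer and total_len stand in
for whatever the driver actually uses.

        /*
         * The buffer starts on a cache line boundary and is padded to
         * a whole number of cache lines, so no unrelated data shares
         * its lines (ALLOC_CACHE_ALIGN_BUFFER is from memalign.h).
         */
        ALLOC_CACHE_ALIGN_BUFFER(u8, buffer, total_len);

        /*
         * Keep the maintenance range cache-line aligned as well. A
         * combined clean+invalidate here would be gratuitous but not
         * harmful, as noted above.
         */
        flush_dcache_range((unsigned long)buffer,
                           (unsigned long)buffer +
                           ALIGN(total_len, ARCH_DMA_MINALIGN));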

From these rules, the code in question would be correct as:

        if (read)
                invalidate_dcache_range((unsigned long)buffer,
                                        (unsigned long)buffer + total_len);
        else
                flush_dcache_range((unsigned long)buffer,
                                   (unsigned long)buffer + total_len);

or, equally:
        /*
         * Update the DMA buffer RAM and ensure no cache-initiated
         * writes to the DMA buffer RAM can occur after this point.
         */
        flush_dcache_range((unsigned long)buffer,
                           (unsigned long)buffer + total_len);
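
Either way, the read path still needs its second invalidate once the
transfer has completed. In nvme_blk_rw that would sit after the last
nvme_submit_sync_cmd() call has returned, roughly as follows (a sketch,
not a tested patch):

        /*
         * After the device has finished writing to RAM: drop any
         * lines the CPU may have pulled into the cache while the DMA
         * was in progress.
         */
        if (read)
                invalidate_dcache_range((unsigned long)buffer,
                                        (unsigned long)buffer + total_len);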

FWIW, I would also caution that testing is NOT a good way to ensure
the cache management is correct. It must be correct by design: inspect
the code to verify it follows the rules. The problems caused by
incorrect cache management may only show up for certain memory address
reference patterns. Everything may appear to work fine until somebody
makes a change in a totally unrelated area, which moves the DMA buffer
address, and suddenly you have problems.

Regards,
Bill

>
> Regards,
> Patrick