[U-Boot] [PATCH] nvme: add more cache flushes
Bin Meng
bmeng.cn at gmail.com
Thu Oct 17 07:08:59 UTC 2019
Hi Patrick,
On Thu, Oct 17, 2019 at 2:44 PM Patrick Wildt <patrick at blueri.se> wrote:
>
> On Thu, Oct 17, 2019 at 10:55:11AM +0800, Bin Meng wrote:
> > Hi Patrick,
> >
> > On Wed, Oct 16, 2019 at 11:35 PM Patrick Wildt <patrick at blueri.se> wrote:
> > >
> > > On Wed, Oct 16, 2019 at 06:11:23PM +0800, Bin Meng wrote:
> > > > On Mon, Oct 14, 2019 at 7:11 PM Patrick Wildt <patrick at blueri.se> wrote:
> > > > >
> > > > > On an i.MX8MQ our nvme driver doesn't work completely, since we are
> > > > > missing a few cache flushes. One is for the prp list, an extra
> > > > > buffer that we need to flush before handing it to the hardware. The
> > > > > block read/write operations also need more cache flushes on this SoC.
> > > > >
> > > > > Signed-off-by: Patrick Wildt <patrick at blueri.se>
> > > > > ---
> > > > > drivers/nvme/nvme.c | 15 +++++++++------
> > > > > 1 file changed, 9 insertions(+), 6 deletions(-)
> > > > >
> > > > > diff --git a/drivers/nvme/nvme.c b/drivers/nvme/nvme.c
> > > > > index 2444e0270f..69d5e3eedc 100644
> > > > > --- a/drivers/nvme/nvme.c
> > > > > +++ b/drivers/nvme/nvme.c
> > > > > @@ -123,6 +123,9 @@ static int nvme_setup_prps(struct nvme_dev *dev, u64 *prp2,
> > > > > }
> > > > > *prp2 = (ulong)dev->prp_pool;
> > > > >
> > > > > + flush_dcache_range((ulong)dev->prp_pool, (ulong)dev->prp_pool +
> > > > > + dev->prp_entry_num * sizeof(u64));
> > > > > +
> > > > > return 0;
> > > > > }
> > > > >
> > > > > @@ -717,9 +720,10 @@ static ulong nvme_blk_rw(struct udevice *udev, lbaint_t blknr,
> > > > > u16 lbas = 1 << (dev->max_transfer_shift - ns->lba_shift);
> > > > > u64 total_lbas = blkcnt;
> > > > >
> > > > > - if (!read)
> > > > > - flush_dcache_range((unsigned long)buffer,
> > > > > - (unsigned long)buffer + total_len);
> > > > > + flush_dcache_range((unsigned long)buffer,
> > > > > + (unsigned long)buffer + total_len);
> > > >
> > > > Why do we need this for read?
> > >
> > > I'm no processor engineer, but I have read (and "seen") the following
> > > on ARMs. If I'm wrong, please correct me.
> > >
> > > Since the buffer is usually allocated cache-aligned on the stack,
> > > it is quite possible that it was recently used for what a stack is
> > > normally used for, so the cache lines covering it can still hold
> > > dirty data that has not yet been evicted/flushed to memory. It is
> > > then possible that, between the DMA write from the device and our
> > > cache invalidation, the CPU writes those dirty lines back to memory,
> > > overwriting whatever the NVMe just gave us.
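
For reference, a minimal sketch of the pattern being described here, using
U-Boot's flush_dcache_range()/invalidate_dcache_range() helpers;
dma_read_from_device() is a hypothetical placeholder for the device
transfer, and buf is assumed to be cache-line (ARCH_DMA_MINALIGN) aligned:

#include <common.h>	/* flush_dcache_range(), invalidate_dcache_range() */

static void dma_read_buffer(void *buf, size_t len)
{
	/*
	 * Clean first: write back any dirty lines covering the buffer so
	 * a later eviction cannot overwrite the data the device is about
	 * to DMA into it.
	 */
	flush_dcache_range((ulong)buf, (ulong)buf + len);

	dma_read_from_device(buf, len);	/* hypothetical DMA transfer */

	/*
	 * Invalidate afterwards: drop the stale cached copies so the CPU
	 * reads the freshly DMA'd data from memory.
	 */
	invalidate_dcache_range((ulong)buf, (ulong)buf + len);
}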
> >
> > OK, this makes sense. So if we allocate the buffer from the heap, we
> > should only need the flush for writes, right? Can you test this?
>
> If you're talking about having a bounce buffer: you'd need to flush
> it once upon allocation, because that part of the heap could also have
> been used earlier by someone else.
>
I was not talking about a bounce buffer. I meant allocating the buffer
from the heap and using it directly for DMA.
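
Roughly, the idea would be something like the sketch below, assuming a
cache-line aligned buffer from memalign() handed straight to the block
read/write paths; nvme_dma_read()/nvme_dma_write() are hypothetical
stand-ins for those paths:

#include <common.h>
#include <malloc.h>

/* hypothetical stand-ins for the driver's block read/write DMA paths */
int nvme_dma_read(void *buf, size_t len);
int nvme_dma_write(const void *buf, size_t len);

static void heap_buffer_io(size_t len)
{
	/* whole cache lines, so cache maintenance never clips a neighbour */
	void *buf = memalign(ARCH_DMA_MINALIGN, ALIGN(len, ARCH_DMA_MINALIGN));

	if (!buf)
		return;

	/* write: flush so the device fetches the CPU's data from memory */
	flush_dcache_range((ulong)buf, (ulong)buf + len);
	nvme_dma_write(buf, len);

	/* read: invalidate afterwards so the CPU re-reads from memory */
	nvme_dma_read(buf, len);
	invalidate_dcache_range((ulong)buf, (ulong)buf + len);

	free(buf);
}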
Regards,
Bin