[U-Boot-Users] [PATCH v2] spi: Kill spi_chipsel table and introduce spi_setup()

Haavard Skinnemoen hskinnemoen at atmel.com
Wed Feb 6 16:39:08 CET 2008


On Wed, 06 Feb 2008 10:18:51 -0500
Ben Warren <biggerbadderben at gmail.com> wrote:
  
> If there really is a need to turn off the controller, or change the 
> transfer rate on the fly, then this is good. OTOH, this is a bootloader, 
> not an OS, and probably the vast majority of use cases would just be to 
> initialize the controller to a speed that all devices can handle, 
> transfer some data to/from one or more devices, then boot an OS. Maybe 
> some people need to do more, I don't know.

The transfer rate is one thing. We can probably do without per-device
rate configuration, even though it means that whoever writes the board
code has to look at all the connected devices and pick the lowest rate
they all tolerate, instead of having each driver set it automatically.
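
To make that concrete, the board code would end up hard-coding the
slowest common rate by hand, something like this (the device names and
the config symbol are made up for illustration):

/*
 * Hypothetical example: say the board has a dataflash good for
 * 20 MHz and a sensor limited to 10 MHz. Without per-slave rates
 * in the API, the board has to pick the lowest common denominator
 * for everything:
 */
#define CFG_SPI_CLOCK	10000000	/* Hz, limited by the sensor */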

SPI transfer modes (i.e. clock polarity and clock phase) are a
different issue: if you get them wrong, you simply won't be able to
talk to the SPI slave at all, and I don't think it's safe to assume
that the transfer mode is the same for all SPI slaves on a given board.

And again, the SPI device driver should know what mode the slave
requires, so setting it from the driver is the natural thing to do.
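
For reference, the mode bits could be encoded the way the Linux SPI
layer does it (these names are lifted from Linux, not necessarily what
we'd end up with in U-Boot):

#define SPI_CPHA	0x01	/* clock phase: sample on second edge */
#define SPI_CPOL	0x02	/* clock polarity: clock idles high */
#define SPI_MODE_0	(0|0)			/* original MicroWire */
#define SPI_MODE_1	(0|SPI_CPHA)
#define SPI_MODE_2	(SPI_CPOL|0)
#define SPI_MODE_3	(SPI_CPOL|SPI_CPHA)

A slave driver that knows its chip wants, say, mode 3 would just pass
SPI_MODE_3 to spi_setup() and let the controller driver program
CPOL/CPHA accordingly.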

If we can do proper power management and catch bus contention through
the same interface, that ought to be a good thing, no?

> > Btw, the master driver is currently controlling the chip selects from
> > spi_xfer(). I suspect we need to give clients greater control of the
> > chip selects eventually.
> >
> >   
> Decoupling chip select from spi_xfer() is a good idea, since spi_xfer() 
> is all about sending and receiving streams of bits from the master point 
> of view and is slave-agnostic. We may want to add a wrapper function so 
> that the user doesn't have to remember too much. Something like:
> 
> int spi_send_receive(int cs, int bitlen, char *inbuf, char *outbuf)
> {
> 	int ret;
> 
> 	spi_cs_activate(cs);
> 	ret = spi_xfer(bitlen, inbuf, outbuf);
> 	spi_cs_deactivate(cs);
> 
> 	return ret;
> }

I don't know...it would be a bit counter-productive with controllers
that can do automatic chip select generation.

How about adding a "flags" parameter to spi_xfer() to control chip
select toggling? That way, you can do multiple transfers without
releasing the chip select like this:

spi_xfer(cs, bitlen, inbuf, outbuf, SPI_XFER_BEGIN);
spi_xfer(cs, bitlen, inbuf, outbuf, 0);
spi_xfer(cs, bitlen, inbuf, outbuf, SPI_XFER_END);

Of course, we can still add the wrapper, doing something like this:

int spi_send_receive(int cs, int bitlen, char *inbuf, char *outbuf)
{
	return spi_xfer(cs, bitlen, inbuf, outbuf, SPI_XFER_BEGIN | SPI_XFER_END);
}

Then again, we could simply ignore the automatic CS capability of those
controllers and just use GPIO anyway.
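
To make the flags idea concrete, a GPIO-based controller driver could
handle them roughly like this (just a sketch; do_bitbang_xfer() is a
made-up helper that shifts the bits out and in):

#define SPI_XFER_BEGIN	0x01	/* assert chip select before transfer */
#define SPI_XFER_END	0x02	/* deassert chip select after transfer */

int spi_xfer(int cs, int bitlen, char *inbuf, char *outbuf,
	     unsigned long flags)
{
	int ret;

	if (flags & SPI_XFER_BEGIN)
		spi_cs_activate(cs);

	ret = do_bitbang_xfer(bitlen, inbuf, outbuf);

	if (flags & SPI_XFER_END)
		spi_cs_deactivate(cs);

	return ret;
}

A controller with automatic chip select generation would instead map
the flags onto its own CS control, e.g. keep CS asserted across calls
until it sees SPI_XFER_END.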

> >> also, what's the deal with spi_xfer() taking a length in # of bits ?  is it
> >> realistic to transmit anything less than 8 bits ?  the Linux kernel driver
> >> does not support this, so it can't be a big need ...
> >>     
> >
> > I don't know. That's unchanged from the original API. But I certainly
> > wouldn't object if we turned it into a length in bytes.
> >
> >   
> I seem to remember working with a Broadcom device where some of the 
> transfers were odd numbers of nibbles (e.g. 12 bits). Not necessarily a 
> reason to keep bit granularity, but I don't see a reason to artificially 
> limit things either.

Right...it's just that handling odd bit sizes correctly in a general
way may complicate controller drivers a lot. Maybe we could split it up
into "number of bits per transfer" / "number of transfers", which
should be easier to handle?
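
Signature-wise, something like this (pure sketch):

int spi_xfer(int cs, uint bits_per_word, uint num_words,
	     char *inbuf, char *outbuf, unsigned long flags);

/* e.g. Ben's 12-bit case: three 12-bit words in one transfer */
spi_xfer(cs, 12, 3, inbuf, outbuf, SPI_XFER_BEGIN | SPI_XFER_END);

That way a controller driver only has to deal with one odd word size
at a time instead of an arbitrary bit stream.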

Haavard



