[U-Boot] [RFC] Timer API (again!)

J. William Campbell jwilliamcampbell at comcast.net
Sat Sep 24 23:47:37 CEST 2011


On 9/16/2011 4:53 AM, Graeme Russ wrote:
 > Hi All,
 >
 > Well, here we are again, a Timer API discussion
 >
 > All things considered, I don't think the Linux approach is right for U-Boot
 > - It is designed to cater for way more use-cases than U-Boot will ever need
 > to deal with (time queues and call-backs in particular)
Hi Graeme,
        Glad you are tackling this again.
 >
 > To summarize, in a nutshell, what _I_ think U-Boot needs from a Timer API
 >
 >   1. A reliable udelay() available as early as possible that is OK to delay
 >      longer than requested, but not shorter - Accuracy is not implied or
 >      assumed
I don't think you really mean this. SOME attempt at accuracy is to be
expected. It does not have to be perfect, but it cannot be totally bogus -
that is worse than nothing. Longer by 20% or so is possibly ok; 20000%
isn't.
 >   2. A method of obtaining a number of 'time intervals' which have elapsed
 >      since some random epoch (more info below)
 >   3. A nice way of making A->B time checks simple within the code
 >
 > OK, some details:
 >
 > 1. I'm starting to think this should be a function pointer or a flag in gd.
 > Why oh why would you do such a thing I hear you ask... udelay() is needed
 > _EARLY_ - It may well be needed for SDRAM or other hardware initialisation
 > prior to any reliable timer sub-system being available. But early udelay()
 > is often inaccurate and when the timer sub-system gets fully initialised,
 > it can be used for an accurate udelay(). x86 used to have a global data
 > flag that got set when the timer sub-system got initialised. If the flag was
 > not set, udelay() would use one implementation, but if it was set, udelay()
 > would use a more accurate implementation. In the case of the eNET board, it
 > had an FPGA implementation of a microsecond timer which was even more
 > accurate than the CPU, so it had its own implementation that should have
 > duplicated the 'if (gd->flags & TIMER_INIT)' but never did (this was OK
 > because udelay() was never needed before then)
 >
 > I think a function pointer in gd would be a much neater way of handling the
 > fact that more accurate (and efficient) implementations of udelay() may
 > present themselves throughout the boot process
I think this won't be popular, but I am not against it on the face of it.
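To make sure we are picturing the same thing, here is a rough sketch of
the function-pointer approach as I understand it. None of these names
are existing U-Boot symbols - they are made up purely for illustration:

/*
 * Sketch only - structure, field and function names are hypothetical,
 * not existing U-Boot symbols.
 */
#define LOOPS_PER_US_GUESS	100UL	/* rough, board-specific guess */

/* Provided by the arch/board once the timer sub-system is alive */
extern unsigned long read_timer_us(void);

struct timer_gd {
	void (*udelay)(unsigned long usec);	/* best udelay available so far */
};

static struct timer_gd gd_timer;

/* Early, rough delay: uncalibrated busy loop, may be well off */
static void udelay_early(unsigned long usec)
{
	volatile unsigned long n = usec * LOOPS_PER_US_GUESS;

	while (n--)
		;
}

/* Accurate delay once the timer sub-system is up */
static void udelay_timer(unsigned long usec)
{
	unsigned long start = read_timer_us();

	/* unsigned subtraction copes with counter wrap-around */
	while (read_timer_us() - start < usec)
		;
}

/* The one udelay() the rest of the code calls */
void udelay(unsigned long usec)
{
	gd_timer.udelay(usec);
}

/* Very early init: all we have is the busy loop */
void timer_early_init(void)
{
	gd_timer.udelay = udelay_early;
}

/* Called when the real timer is initialised - upgrade in place */
void timer_init_done(void)
{
	gd_timer.udelay = udelay_timer;
}

The nice property is that callers never know or care which
implementation they got; a board with a better timer (like the eNET
FPGA one) just installs its own function at the appropriate point.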
 >
 > Another option would be to have the gd flag and make udelay() a lib
 > function which performs the test - If the arch timer has better than 1us
 > resolution it can set the flag and udelay() will use the timer API
 >
 > 2a (random epoch) - The timer sub-system should not rely on a particular
 > epoch (1970, 1901, 0, 1, since boot, since timer init, etc...) - By using
 > the full range of an unsigned variable, the epoch does not matter
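Agreed. As long as every A->B check is done with unsigned subtraction,
the counter can start anywhere and wrap freely. A typical check would
look something like this (get_time_ms() is just a stand-in for whatever
the final API provides):

/* Sketch only - get_time_ms() is a hypothetical name */
extern unsigned long get_time_ms(void);

/* Wait up to timeout_ms for any bit in 'mask' to be set in *reg */
int wait_for_bit(volatile unsigned long *reg, unsigned long mask,
		 unsigned long timeout_ms)
{
	unsigned long start = get_time_ms();

	/*
	 * (now - start) is correct modulo the counter width even if the
	 * counter wraps between the two reads, so the epoch is irrelevant.
	 */
	while (get_time_ms() - start < timeout_ms) {
		if (*reg & mask)
			return 0;	/* ready in time */
	}

	return -1;			/* timed out */
}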
 >
 > 2b (time interval) - We need to pick a base resolution for the timer API -
 > Linux uses nano-seconds and I believe this is a good choice. Big Fat Note:
 > The underlying hardware DOES NOT need to have nano-second resolution. The
 > only issue with nano-seconds is that it mandates a 64-bit timer - A few
 > people are against this idea - let's discuss. A 32-bit microsecond timer
 > provides ~4300 seconds (~1.2 hours) which _might_ be long enough, but
 > stand-alone applications doing burn-in tests may need longer. Going
 > milli-seconds means we cannot piggy-back udelay() on the timer API.
Well, as you probably know, I am "one of those" against a 64 bit timer
API to figure out if my disk is ready, especially if my hardware
timebase does not support nanosecond resolution anyway. I think having a
time base in milliseconds like we have now is perfectly adequate for
"sane" bootloader usage. The requirement for the timer API would then be
to provide the "current time" in millisecond resolution, and the time in
as close to microsecond resolution as possible. The latter would be a
"helper" function for udelay(). udelay() could then always be implemented
by converting microseconds to ticks and spinning until the delta in
ticks from the target value has elapsed.
If you can provide the first function, you can provide the second. They
can usually be the same code with different constants involved. The gd->
call may actually be inside udelay(), with the timer init replacing the
pointer used by udelay() to get the current time in "microseconds". The
helper function for udelay() should not be called by other parts of
U-Boot, to prevent abuse of this capability. Note that a 32 bit
millisecond counter covers about 49 days before it wraps, so that should
be plenty. I think it must be kept in mind that U-Boot is used on lots
of "small" systems. Using up lots of resources for 64 bit arithmetic
that we don't need elsewhere is, IMHO, not a good idea. If it really did
something that we need, all well and good, I am for it. But in this
case, I don't think it really helps us any. But it does make things
harder for some CPUs, and that is not a good thing.
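To make that concrete, something like the following is the shape I have
in mind. tick_read(), ticks_per_sec() and the rest are made-up names
standing in for whatever the platform actually provides:

/*
 * Sketch only, assuming a free-running hardware tick counter.
 * None of these names are existing U-Boot functions.
 */
extern unsigned long long tick_read(void);	/* free-running counter */
extern unsigned long ticks_per_sec(void);	/* its rate in Hz */

/* General-purpose time base: 32-bit milliseconds, wraps after ~49 days */
unsigned long get_time_ms(void)
{
	/* assumes the tick rate is at least 1 kHz; scale the other way if not */
	return (unsigned long)(tick_read() / (ticks_per_sec() / 1000));
}

/*
 * udelay() works directly in ticks - the same arithmetic as above with
 * a different constant. Rounding up and adding one tick means we may
 * delay slightly longer than requested, but never shorter.
 */
void udelay(unsigned long usec)
{
	unsigned long long delta =
		((unsigned long long)usec * ticks_per_sec() + 999999ULL) / 1000000ULL + 1;
	unsigned long long start = tick_read();

	while (tick_read() - start < delta)
		;
}

Exactly how the tick counter is read and scaled is of course hardware
specific; the point is only that the millisecond path and the delay
path are the same few lines of tick arithmetic with different constants.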
 >
 > My preference is a 64-bit nano-second timer with udelay() piggy-backed on
 > the timer API (unless, like NIOS, the timer sub-system cannot provide
 > micro-second resolution, in which case the gd flag or function pointer does
 > not get set to use the timer API for udelay())
 >
 > I really want to get the Timer API sorted this time around
Good luck! I share your desire to see this resolved. Lots of cleanup has
already happened, which is, I think, a good thing.

Best Regards,
Bill Campbell
 >
 > Regards,
 >
 > Graeme

