[U-Boot] [RFC] Review of U-Boot timer API

J. William Campbell jwilliamcampbell at comcast.net
Wed May 25 04:53:58 CEST 2011


On 5/24/2011 5:17 PM, Graeme Russ wrote:
> On Wed, May 25, 2011 at 5:19 AM, Wolfgang Denk<wd at denx.de>  wrote:
>> Dear Graeme Russ,
>>
>> In message<4DDBE22D.6050806 at gmail.com>  you wrote:
>>>>> Why must get_timer() be used to perform "meaningful time measurement?"
>>>> Excellent question!  It was never intended to be used as such.
>>> Because get_timer() as it currently stands can be, as it is assumed to
>>> return milliseconds
>> Yes, but without any guarantee for accuracy or resolution.
>> This is good enough for timeouts, but nothing for time measurements.
> Out of curiosity, are there any platforms that do not use their most
> accurate source(*) as the timebase for get_timer()? If a platform is using
> its most accurate, commonly available, source for get_timer() then the
> whole accuracy argument is moot - You can't get any better anyway so
> why sweat the details.
Hi All,
        Well, it is not quite that simple. The "accuracy" of the 1 ms 
interrupt rate is controlled in all cases I know about by the resolution 
of the programmable divider used to produce it. It appears that the x86 
uses a 1.19318 MHz crystal oscillator to produce the nominal 1 ms timer 
tick. (There is a typo in line 30 of arch/x86/lib/pcat_timer.c that says 
1.9318. I couldn't make any of the numbers work until I figured this 
out). The tick is produced by dividing the 1.19318 rate999.313 by 1194, 
which produces an interrupt rate of 999.3 Hz, or about 0.068% error. 
However, the performance counter on an x86 is as exact as the crystal 
frequency of the CPU is. FWIW, you can read the performance counter with 
rdtsc on a 386/486 and the CYCLES and CYCLES2 registers on later 
Intel/AMD chips. So yes, there is at least one example of a cpu that 
does not use it's most accurate (or highest resolution) time source.
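
For illustration, here is a minimal sketch of that arithmetic (the
constants are the ones quoted above; this is not code from the U-Boot
tree):

    #include <stdio.h>

    #define PIT_CLK_HZ   1193180.0  /* ~1.19318 MHz PIT input clock */
    #define PIT_DIVISOR  1194       /* divisor used for the nominal 1 kHz tick */

    int main(void)
    {
        double tick_hz = PIT_CLK_HZ / PIT_DIVISOR;           /* ~999.313 Hz */
        double error_pct = (1000.0 - tick_hz) / 1000.0 * 100.0;

        printf("tick rate = %.3f Hz, error vs. 1 kHz = %.3f%%\n",
               tick_hz, error_pct);
        /* The nearest integer divisor, 1193, would give ~1000.15 Hz,
         * i.e. about 0.015% error - still not exact, because the input
         * clock is not an integer multiple of 1000 Hz. */
        return 0;
    }
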
> (*)I'm actually referring to what is commonly available for that platform,
> and not where a board has a high precision/accuracy source in addition to
> the common source.
>
> As a followup question, how many platforms use two completely independent
> sources for udelay() and get_timer() - x86 does, but I plan to change this
> so the interrupt kicks the new prescaler which can be done at a >> 1ms period
> and udelay() and get_timer() will use the same tick source and therefore
> have equivalent accuracy.
Are you sure of this? From what I see in arch/x86/lib/pcat_timer.c, 
timer 0 is programmed to produce the 1 kHz timer tick and is also 
read repeatedly in __udelay to produce the delay value. They even 
preserve the 1194 inaccuracy, for some strange reason. I see that the 
sc520 does appear to use different timers for the interrupt source, and 
it would appear that it may be "exact", but I don't know what the input 
to the prescaler is, so I can't be sure. Is the input to the prescaler 
really 8.3 MHz exactly? Also, is the same crystal used for the input to 
the prescaler counter and the "software timer millisecond count"? If 
not, then we may have different accuracies in this case as well.
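
For context, this is roughly the kind of delay loop being described -
a sketch of the general PIT-polling technique only, not the actual
pcat_timer.c code (inb()/outb() are assumed to be the usual x86 port
accessors, value-then-port as in U-Boot):

    #include <stdint.h>

    #define PIT_CH0     0x40        /* counter 0 data port */
    #define PIT_CMD     0x43        /* mode/command port */
    #define PIT_HZ      1193180u    /* ~1.19318 MHz input clock */
    #define PIT_RELOAD  1194        /* reload value for the ~1 kHz tick */

    extern uint8_t inb(uint16_t port);              /* assumed accessors */
    extern void outb(uint8_t val, uint16_t port);

    /* Latch and read the current 16-bit count of PIT channel 0. */
    static uint16_t pit_read_ch0(void)
    {
        uint16_t lo, hi;

        outb(0x00, PIT_CMD);        /* latch counter 0 */
        lo = inb(PIT_CH0);
        hi = inb(PIT_CH0);
        return (hi << 8) | lo;
    }

    /* Busy-wait roughly 'usec' microseconds by accumulating the downward
     * steps of the counter, handling its wrap at the reload value. */
    void pit_udelay(unsigned long usec)
    {
        unsigned long need = (unsigned long)(((uint64_t)usec * PIT_HZ) / 1000000u);
        unsigned long elapsed = 0;
        uint16_t last = pit_read_ch0();

        while (elapsed < need) {
            uint16_t now = pit_read_ch0();

            if (now <= last)
                elapsed += last - now;
            else
                elapsed += last + (PIT_RELOAD - now);
            last = now;
        }
    }

Since both the interrupt tick and a polling loop like this are driven by
the same PIT channel, the two share whatever error the programmed divisor
introduces.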

Also of note, it appears that in pcat_timer.c, udelay is not available 
until interrupts are enabled. That is technically non-compliant, 
although it obviously seems not to matter.
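
For reference, one way the prescaler() stub quoted below could turn a
wrapping tick counter into milliseconds - a sketch only, and the field
names (gd->last_ticks, gd->tick_remainder) are assumed here, not existing
U-Boot fields:

    void prescaler(u32 ticks, u32 tick_frequency)
    {
        u32 ticks_per_ms = tick_frequency / 1000;
        u32 delta = ticks - gd->last_ticks;  /* unsigned wrap-around is safe
                                                as long as we are called at
                                                least once per counter wrap */

        gd->last_ticks = ticks;
        gd->tick_remainder += delta;

        /* Move whole milliseconds into the running count, keep the rest. */
        gd->timer_in_ms += gd->tick_remainder / ticks_per_ms;
        gd->tick_remainder %= ticks_per_ms;
    }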

Best Regards,
Bill Campbell
>>> OK, let's wind back - My original suggestion made no claim towards changing
>>> what the API is used for, or how it looks to those who use it (for all
>>> practical intents and purposes). I suggested:
>>>   - Removing set_timer() and reset_timer()
>>>   - Implement get_timer() as a platform independent function
>> Trying to remember what I have read in this thread I believe we have
>> an agreement on these.
>>
>>> Exposing ticks and tick_frequency to everyone via a 'tick' HAL
>> I skip this.  I don't even read it.
> Hmmm, I think it is worthwhile at least comparing the two - What is the
> lesser of two evils
>
>   1. Exposing 'ticks' through a HAL for the prescaler
>   2. Duplicating a function with identical code 50+ times across the source
>      tree
>
> I personally think #2 is way worse - The massive redundant duplication and
> blind copying of code is what has got us into this (and many other) messes
>
>>> =======================
>>> Not exposing ticks and tick_frequency to everyone
>>>
>>> In /lib/timer.c
>>>
>>> void prescaler(u32 ticks, u32 tick_frequency)
>>> {
>>>        u32 current_ms;
>>>
>>>        /* Bill's algorithm */
>>>
>>>        /* result stored in gd->timer_in_ms; */
>>> }
>>>
>>> In /arch/cpu/soc/timer.c or /arch/cpu/timer.c or /board/<board>/timer.c
>>>
>>> static u32 get_ticks(void)
>> Currently we have unsigned long long get_ticks(void)  which is better
>> as it matches existing hardware.
> Matches PPC - Does it match every other platform? I know it does not match
> the sc520 which has a 16-bit millisecond and a 16-bit microsecond counter
> (which only counts to 999 before resetting to 0)
>
> Don't assume every platform can implement a 64-bit tick counter. But yes,
> we should cater for those platforms that can
>
>> Note that we also have void wait_ticks(u32) as needed for udelay().
>>
>>> static u32 get_tick_frequency(void)
>>> {
>>>        u32 tick_frequency;
>>>
>>>        /* Determine tick frequency */
>>>
>>>        return tick_frequency;
>>> }
>> Note that we also have u32 usec2ticks(u32 usec) and u32 ticks2usec(u32 ticks).
> Yes, they are better names
>
> Regards,
>
> Graeme
>
>


