[U-Boot] [RFC][Timer API] Revised Specification - Implementation details

Graeme Russ graeme.russ at gmail.com
Mon May 30 13:47:31 CEST 2011


Hi Wolfgang,

On 30/05/11 20:57, Wolfgang Denk wrote:
> Dear Simon Glass,
> 
> In message <BANLkTim-JZyMmzw_tWHhHQQYoVmK9mZyVw at mail.gmail.com> you wrote:
>>
>> Sure, if you are tracking the timer, and wait for it to increment, and
>> then wait for it to increment a second time, you can be confident that
>> the time between the first and second increments is 10ms.
> 
> OK.  Good.
> 
>> But in general it is possible that your first read of the timer
>> happens at 9.999ms after the timer last incremented, just because you
>> are unlucky. Then perhaps two successive reads of the timer only 1us
>> apart will see a difference of 10ms in time.
> 
> Agreed.
> 
>> I believe this resolution problem could/should be solved by a function
>> which returns a time not less than n ms in the future. It would work
>> by returning something like current_time_ms + (delay_ms +
>> resolution_ms * 2 - 1) / resolution_ms * resolution_ms, where
>> resolution_ms is 10 in this case. I think someone else mentioned that
>> too.
>>
>> When the timer reaches that time then it is guaranteed that at least
>> delay_ms ms has passed, although it might be up to delay_ms + 10.
> 
> We (well, Detlev and me) discussed this.  We think it is important to
> realize (and to document) that the timing information provided by
> get_timer() is inherently subject to the principles of interval
> arithmetic with an implementation-dependent interval width.

Agree - I fully intend to collate all of this information into an API
document in the source tree.
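
As an aside, the 'time not less than n ms in the future' helper Simon
describes above might look roughly like this (just a sketch - the
function and parameter names are illustrative, not a proposed API):

	/*
	 * Return a get_timer() value guaranteed to be at least delay_ms
	 * in the future, for a timer that only advances in steps of
	 * resolution_ms (e.g. 10 on Nios, 1 elsewhere).
	 */
	static u32 future_time_ms(u32 delay_ms, u32 resolution_ms)
	{
		u32 rounded = (delay_ms + resolution_ms * 2 - 1)
				/ resolution_ms * resolution_ms;

		return get_timer(0) + rounded;
	}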

> So far, most (all ?) of the code ignored this property, or silently
> assumed that the interval width was equal to or better than 1
> millisecond, which is the timing unit used by get_timer().

Well, we were until I tried to clean up CFI and found out about the Nios
limitation - since then, I think we have all been acutely aware that it
was the 'elephant in the room'.

> I think it is important to realize the (most important) use cases of
> get_timer(): 1) [longish] timeouts and 2) other timing
> computations.  For completeness, I also include a third situation
> here:  0) [short] timeouts:
> 
> 0) [short] timeouts:
> 
>    Short timeouts are all timeouts shorter than one millisecond -
>    get_timer() obviously cannot be used for these, so instead we use
>    udelay() based delay loops.
> 
>    Note that this method also can be used for longer timeouts,
>    especially when we really have to wait for some event to happen,
>    i. e. when we cannot do "something useful" while waiting.
> 
>    A typical implementation to wait for some event with a timeout of
>    5 seconds might look like this:
> 
>    	int loops = 10;	/* timeout after 10 * 500 milliseconds */
> 
> 	while (test_for_event() == 0) {
> 		if (--loops <= 0)
> 			handle_timeout();
> 
> 		udelay(500000);	/* delay 500 milliseconds */
> 	}
> 
>    Note that this implementation has the disadvantage that it may
>    introduce an unnecessary delay of up to 500 milliseconds if the
>    event happens quickly after the call to test_for_event(), so
>    typically it is more efficient to use a large number of short loops
>    instead - due to the loop overhead these are less accurate, but for
>    timeouts this usually does not matter, and at least they insert
>    much shorter extra delays.  Example:
> 
>    	int loops = 50000; /* timeout after 50,000 * 100 microseconds */
> 
> 	while (test_for_event() == 0) {
> 		if (--loops <= 0)
> 			handle_timeout();
> 
> 		udelay(100);
> 	}
> 
>    Here we lose a maximum of 100 us in the delay call.
>  
> 
>    The inherent disadvantage of these delay based timeout loops is the
>    low accuracy - depending on the loop overhead (for example the time
>    spent in the test_for_event() call) and on the size of the argument
>    to the udelay() call, the total execution time of the loop is always
>    bigger than the assumed duration, especially for short delays.
>    Usually this is not a problem in such contexts; also it is possible
>    to trade accuracy for additional delays after the event happens -
>    see first code example above.

Some platforms are _way_ worse than this - I am sure I have seen udelay()
implemented with millisecond granularity, so udelay(100) could be closer
to udelay(1000) and the 5 second timeout above could take as long as 50
seconds!

> 1) [longish] timeouts:
> 
>    Timeouts on the order of several milliseconds or longer are
>    frequently implemented using get_timer(). The 5 second timeout
>    above could be implemented like this:
> 
>    	u32 start, now;
> 	u32 timeout = 5 * CONFIG_SYS_HZ; /* timeout after 5 seconds */
> 
>    	start = get_timer(0);
> 
> 	while (test_for_event() == 0) {
> 		now = get_timer(0);
> 		if ((now - start) > timeout)
> 			handle_timeout();
> 
> 		udelay(100);
> 	}
> 
>    Here the inaccuracy caused by the loop overhead (the short delay
>    may result in a high number of calls to test_for_event()) is
>    eliminated, because the time reference is independent of the delay.

I personally think _any_ timeout greater than 1s (maybe even >500ms)
should be done this way.

> 2) Other timing computations:
> 
>    These are usually used to measure the time needed to perform some
>    specific actions, for example like this:
> 
>    	u32 start, now;
> 
> 	start = get_timer(0);
> 
> 	do_something_complicated();
> 
> 	now = get_timer(0);
> 
> 	printf("execution time: %u ms\n", now - start);

Currently fails spectacularly if do_something_complicated() involves a
delay loop which calls reset_timer()

I think this may become more popular for performance tuning once the API
has been established. The desire for boot profiling was what initially
made me investigate the API, given that the current use of reset_timer()
prohibits profiling across a timeout loop.
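
For example (a contrived sketch - assume do_something_complicated()
internally runs a legacy timeout loop which calls reset_timer()):

	u32 start, now;

	start = get_timer(0);

	do_something_complicated();	/* calls reset_timer() internally */

	now = get_timer(0);

	/*
	 * 'now' counts from the last reset_timer() call inside the
	 * helper, not from the same base as 'start', so the subtraction
	 * below underflows and prints garbage.
	 */
	printf("execution time: %u ms\n", now - start);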

And you missed one:

3) Raw Delay

   When you know an operation takes AT LEAST 'x' ms/us and there is no
   way to detect whether the operation has completed before then:

	do_something();
	udelay(500);
	/* Operation has finished - Keep going  */

	OR

	u32 start;

	do_something();

	start = get_timer(0);
	while (get_timer(start) < 500)
		;
	/* Operation has finished - Keep going  */
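
Note that the second variant is itself subject to the resolution problem
under discussion - on a platform with a 10ms interval the loop could
exit up to 10ms early. With the delta_timer() helper you propose below,
a guaranteed-minimum version might read (sketch only):

	start = get_timer(0);
	while (delta_timer(start, get_timer(0)) < 500)
		;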

> Neither 1) nor 2) take into account that both "start" and "now" time
> stamps depend on the resolution of the underlying, implementation
> dependent timer.  For large delays this is not critical, but for short
> delays (few milliseconds) this introduces large errors, and can even
> be fatal.
> 
> 
> One solution to the problem could be to take the interval length into
> consideration. However, it seems unwise to make such a low level,
> implementation specific detail visible to normal "application" code.

Agree - the user should never have to consider individual platform 'quirks'.

> Instead, we suggest to introduce a new function delta_timer() which
> hides this implementation detail from the user.  delta_timer() will
> compute the difference between two time stamps from get_timer(), and
> will return the number of milliseconds that are guaranteed to have
> passed AT LEAST between these two moments:
> 
> 	/*
> 	 * Everybody uses a 1 millisecond interval, except Nios2,
> 	 * which only provides a 10 millisecond resolution.
> 	 */
> 	#ifdef CONFIG_NIOS2
> 	static inline u32 timer_interval_size(void) { return 10; }
> 	#else
> 	static inline u32 timer_interval_size(void) { return 1; }
> 	#endif
> 
> 	u32 delta_timer(u32 from, u32 to)
> 	{
> 		u32 delta = to - from;
> 
> 		if (delta < timer_interval_size())
> 			return 0;
> 
> 		return delta - timer_interval_size();
> 	}
> 
> 
> We could also design a more complicated API like this one, but I doubt
> this is needed:
> 
> 	/*
> 	 * round - used to control rounding:
> 	 * <0 : round down, return time that has passed AT LEAST
> 	 * =0 : don't round, provide raw time difference
> 	 * >0 : round up, return time that has passed AT MOST
> 	 */
> 	 u32 delta_timer(u32 from, u32 to, int round)
> 	 {
> 		u32 delta = to - from;
> 
> 		if (round == 0)	/* raw result, no rounding */
> 			return delta;
> 
> 		if (round > 0)	/* round up */
> 			return delta + timer_interval_size();
> 
> 		/* round down */
> 		if (delta < timer_interval_size())
> 			return 0;
> 
> 		return delta - timer_interval_size();
> 	}
> 
> 
> So our timeout from case 1) above would now be written like this:
> 
>    	u32 start, now;
> 	u32 timeout = 5 * CONFIG_SYS_HZ; /* timeout after 5 seconds */
> 
>    	start = get_timer(0);
> 
> 	while (test_for_event() == 0) {
> 		now = get_timer(0);
> 
> 		if (delta_timer(start, now) > timeout)
> 			handle_timeout();
> 
> 		udelay(100);
> 	}
> 
> and would be guaranteed never to terminate early.
> 
> 
> 
> Comments?
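
A worked example might be useful for the API document: on Nios2, with a
tick at every 10ms boundary, suppose 'start' is read at a true time of
19.999ms (get_timer() returns 10) and 'now' at 20.001ms (get_timer()
returns 20). The raw delta is 10 although only 0.002ms really elapsed;
delta_timer() returns 10 - 10 = 0, so the AT LEAST guarantee holds.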

I figured we would need another API function - I was thinking along the
lines of:

	u32 start = get_current_ms();
	u32 timeout = 5 * 1000;	/* timeout after 5 seconds, in ms */

	while (test_for_event() == 0) {
		if (time_elapsed(start, timeout))
			handle_timeout();

		udelay(100);
	}

I don't like the 'time_elapsed' function name, though.

Both approaches are essentially identical.
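
For what it's worth, time_elapsed() would be little more than a wrapper
around your delta_timer() - something like this sketch (assuming
get_current_ms() returns the raw millisecond counter):

	static int time_elapsed(u32 start, u32 timeout)
	{
		return delta_timer(start, get_current_ms()) > timeout;
	}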

Regards,

Graeme

