[U-Boot] [PATCH v1 (WIP) 00/16] [Timer]API Rewrite
J. William Campbell
jwilliamcampbell at comcast.net
Fri Jul 15 01:52:07 CEST 2011
On 7/14/2011 12:41 PM, Wolfgang Denk wrote:
> Dear "J. William Campbell",
>
> In message <4E1CF2E0.1030702 at comcast.net> you wrote:
>> Yes, this is true. However, the time_elapsed_since routine can do
>> this dynamically (i.e. add twice the timer resolution). I think you had
> That would IMHO be a very bad idea. We have a number of places where
> we have to deal with pretty long timeouts (usually because of protocol
> specifications that require this - often in the order of several
> seconds), where the normal path is very fast. The typical approach is
> to break the timeout into a large number of very short loops.
> Sometimes we use udelay() for this, other places use get_timer().
>
> So assume we have a timeout of 5 seconds, and implement this as 50,000
> loops of 100 microseconds. If you silently turn each of these into 20
> milliseconds on NIOS, the timeout would become 1,000 seconds instead
> of 5 - users would return boards as broken and report "it just
> freezes" because nobody expects that it will wake up again after some
> 16 minutes.
Hi All,
       If such a condition existed, that is indeed what would
happen. However, at present, no such code is in use on NIOS. We
know this because the current "work-around" of resetting the timer at
the start of any timeout operation already extends the timeout to a
minimum of 10 milliseconds. So we would be waiting 8 minutes currently,
not 16, and I am pretty sure that is long enough for someone to notice.
I would be interested in seeing an example of the code you refer to.
Could you point me to one? It seems to me that the only reason to code
such a delay is that for some reason the author didn't want to check
the termination condition too often, and an equivalent operation can be
produced by a fairly simple re-coding of the loop, sketched below. In
any case, NIOS does not have this problem at present, so the suggested
new work-around would be no worse than the present situation.
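For concreteness, here is a minimal sketch of the re-coding I have in
mind. It assumes the usual U-Boot convention of get_timer() returning
millisecond ticks; ready_condition() is a stand-in for whatever the
driver actually polls, not an existing function:

	/*
	 * Instead of counting 50,000 iterations of udelay(100), poll
	 * the termination condition as fast as possible and bound the
	 * wait by elapsed time.  The loop then behaves the same
	 * whether the timer ticks every millisecond or every 10.
	 */
	static int wait_for_ready(void)
	{
		ulong start = get_timer(0);	/* reference point */

		while (!ready_condition()) {
			if (get_timer(start) > 5 * 1000) /* 5 s budget */
				return -1;		/* timed out */
		}
		return 0;	/* condition met within the timeout */
	}

The termination condition is checked as often as the hardware allows,
so nothing is lost by dropping the fixed 100 microsecond spacing.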
        It is also true that the hardware timer cannot be used in a
reasonable version of udelay, as many of the desired delays are very
short relative to the timer period. A calibrated timing loop, sketched
below, would be the best approach to the udelay problem.
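A rough sketch of what I mean by a calibrated loop, assuming
loops_per_us has been measured once at startup against the hardware
timer (loops_per_us and delay_loop are illustrative names, not
existing symbols):

	/* Filled in by a one-time calibration pass at boot. */
	static unsigned long loops_per_us;

	static void delay_loop(unsigned long loops)
	{
		/* Empty asm keeps the compiler from deleting the loop. */
		while (loops--)
			__asm__ __volatile__("" ::: "memory");
	}

	void udelay(unsigned long usec)
	{
		delay_loop(usec * loops_per_us);
	}

Calibration would time a known number of delay_loop() iterations
across one or more full timer ticks, which is accurate enough for
udelay() even when the tick itself is 10 milliseconds.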
>> another function name (at_least) involved, but you can define
>> time_elapsed_since as always compensating for the resolution. That will
>> fix any resolution questions in a processor-specific way. It is either
>> that or the ifdefs. One way or another, the resolution must be
>> addressed. Up to now, the implicit resolution has been 1 ms, but we now
>> know that is not general enough.
> It's not as simple as this. You have to change a lot of code to make
> this work for such slow clock systems.
In general, that is true. There may be a few cases where a delay
shorter than the timer resolution is essential to make something work,
and there are probably many more cases where the restriction on
resolution can easily be removed. The first kind we cannot fix, no
matter what we do, to work with a lower resolution timer. The second
kind we can and probably should fix, because they are coded in an
overly restrictive manner. In any case, we don't absolutely have to
fix them until somebody decides to use the code on a CPU with a
low-resolution timer. Practically speaking, the suggested solution
will therefore work in all extant cases.
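To make the compensation idea from earlier in the thread concrete, the
timeout check could look like this; timer_resolution_ms() is an
assumed per-architecture hook, not existing code:

	/*
	 * Declare a timeout only after the apparent elapsed time
	 * exceeds the limit by two tick periods, since the readings at
	 * both ends of the interval can each be off by up to one tick.
	 * On NIOS (10 ms ticks) this turns a 100 microsecond timeout
	 * into at least 20 ms, exactly the behaviour discussed above.
	 */
	static int timed_out(ulong start, ulong limit_ms)
	{
		return get_timer(start) >=
			limit_ms + 2 * timer_resolution_ms();
	}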
Best Regards,
Bill Campbell
>
> Best regards,
>
> Wolfgang Denk
>