[U-Boot] [PATCH] eth_receive(): Do not assume that caller always wants full packet.
Marcel Moolenaar
xcllnt at mac.com
Thu Jul 16 17:08:50 CEST 2009
On Jul 16, 2009, at 5:39 AM, Wolfgang Denk wrote:
> Dear Piotr Zięcik,
>
> In message <200907161151.59353.kosmo at semihalf.com> you wrote:
>>
>>>> This patch fixes the above problem by allowing partial packet reads.
>>>
>>> Seems like it could easily introduce incorrect behavior in existing
>>> applications. The code also sounds a bit risky ... your change would
>>> mean people could read the leading part, but the rest is lost?
>>
>> discarded, if the buffer is too small. This behaviour is similar to
>> the Linux recv()
>
> But recv() is on another level. Here we are dealing with receiving raw
> ethernet frames.
Yes. As such, truncation and other protocol errors need
to be checked for, including the reception of packets
you're not waiting for.
If an application prepares a 100-byte buffer, then it does
so under the assumption that what it's waiting for is not
larger than 100 bytes. This is a fair assumption, because
applications wait for specific packets. In the trigger case:
an ARP response.
This is also the crux of the problem. The application
waits for a specific response, but there's no guarantee
(due to broadcast or multicast packets on the LAN) that
the next packet to arrive on the interface is the one
that we're waiting for.
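To make the filtering concrete: below is a minimal sketch of a
receive loop that discards frames it isn't waiting for. It is
written against the eth_receive(buffer, length) call under
discussion (declaration paraphrased here); the wait_for_type()
helper and the ETH_HDR_LEN constant are illustrative assumptions,
not part of the patch.

    #include <stdint.h>

    #define ETHERTYPE_ARP	0x0806	/* EtherType of ARP frames */
    #define ETH_HDR_LEN		14	/* dst(6) + src(6) + type(2) */

    /* Paraphrased from the thread's context: copies the next received
     * frame into 'packet', at most 'length' bytes, and returns the
     * number of bytes stored, or a negative value on error. */
    extern int eth_receive(void *packet, int length);

    /* Hypothetical helper: poll until a frame with the expected
     * EtherType arrives, discarding the unrelated broadcast/multicast
     * traffic that may land on the interface first. A real caller
     * would also bound this loop with a timeout. */
    static int wait_for_type(uint16_t want, void *buf, int buflen)
    {
    	uint8_t *p = buf;

    	for (;;) {
    		int len = eth_receive(buf, buflen);

    		if (len < ETH_HDR_LEN)
    			continue;	/* error or runt frame: keep waiting */

    		/* The EtherType sits at offset 12, big-endian on the wire. */
    		if (((p[12] << 8) | p[13]) == want)
    			return len;	/* the frame we were waiting for */
    	}
    }

A caller waiting for an ARP reply would then size the buffer for
that reply and pass ETHERTYPE_ARP as the expected type.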
Now, one approach would be to ignore packets that don't
fit the buffer. This seems inflexible, because it makes
it impossible to employ flexible buffer allocation in
the application. What's left? The only thing left is to
return whatever arrived on the interface, truncated to
the buffer size. That way the application can discard
the packet and call eth_receive() again if the headers
don't match (as in the loop above), or it can allocate
a bigger buffer and retry.
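To illustrate the allocate-and-retry half of that, here is a rough
sketch. It assumes the patched eth_receive() returns the number of
bytes actually stored, so a return value equal to the buffer size
hints at truncation; that is an assumption about the patch, not
something stated in this thread, and receive_grow() with its
ownership convention is hypothetical.

    #include <stdint.h>
    #include <stdlib.h>

    extern int eth_receive(void *packet, int length);

    /* Sketch of the discard-and-retry pattern: '*bufp' must point to a
     * heap-allocated buffer of '*lenp' bytes. When a frame appears to
     * have been cut short (return value equal to the buffer size), the
     * buffer is doubled and we wait for the packet to be sent again. */
    static int receive_grow(uint8_t **bufp, int *lenp)
    {
    	for (;;) {
    		int got = eth_receive(*bufp, *lenp);

    		if (got < 0)
    			return got;	/* propagate receive errors */

    		if (got < *lenp)
    			return got;	/* the whole frame fit: done */

    		/* Possibly truncated; the tail of this frame is gone,
    		 * so grow the buffer and wait for a retransmission. */
    		uint8_t *bigger = realloc(*bufp, *lenp * 2);
    		if (bigger == NULL)
    			return -1;
    		*bufp = bigger;
    		*lenp *= 2;
    	}
    }

Whether the retry is meaningful depends on the protocol: an ARP
reply can simply be re-requested, while a one-shot datagram is lost
once truncated.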
>
>> function. I do not see why we have to force the application to
>> prepare a 1.5 kB buffer for received packets when, for example, it
>> is waiting for an ARP reply.
>
> Come on - what exactly is the extra effort you have to spend to
> prepare a bigger buffer?
The problem with this approach is that theoretically
an application needs to use a buffer that is as large
as the maximum size of a packet that can appear on the
interface. For example, with jumbo frames this means
that any application, module or function that wants to
receive even the smallest amount of data (say an ARP
response) needs to allocate a 9K buffer.
The question is not one of effort -- there's virtually none.
The question is whether this is good engineering. Worst-case
buffer allocation strikes me as neither portable nor
reasonable.
My $0.02.
--
Marcel Moolenaar
xcllnt at mac.com