[U-Boot] [PATCH] test/py: Add basic QSPI testing
Stephen Warren
swarren at wwwdotorg.org
Tue May 17 21:27:10 CEST 2016
On 05/17/2016 11:55 AM, Michal Simek wrote:
> On 17.5.2016 19:30, Stephen Warren wrote:
>> On 05/17/2016 08:05 AM, Michal Simek wrote:
>>> This is a first attempt at testing QSPI.
>>> Detect the SPI flash size. Read it all, read a random size, erase every
>>> block, write random data with a random size and read it back,
>>> erase the whole QSPI and, at the end, load a kernel image via TFTP
>>> and save it to memory.
...
>> I would rather this test were explicitly parametrized via the boardenv:
>>
>> * The boardenv should specify whether sf testing is expected to occur.
>> The current test attempts to detect sf, and if U-Boot can't, it just
>> skips the test. I'd rather the boardenv say whether sf detection should
>> work, and if it fails when boardenv says it should work, the test should
>> fail rather than be skipped.
>
> ok. Good idea. It can be added as separate patch on the top of this.
My naive expectation is that the patch would change so much that you may
as well fold it all into a revised patch.
>> * Some devices might store useful data in their SPI flash (e.g. some
>> Tegra devices boot from SPI) and we should not destroy that data in this
>> test. As such, the boardenv should indicate whether the test should
>> erase anything at all, and specify what part(s) of the flash can be
>> erased if so (e.g. offset/size of a writable region, or a flag to
>> indicate whether the entire flash erase can be tested).
>>
>
> OK, good idea - add protection. Is this just space from the start?
> Or does it make sense to design it in a more generic way - start-end
> regions which you can't rewrite?
It's probably best to have the boardenv explicitly list the region(s)
which can be written, rather than acting as an exclusion list. That way
it's directly specified where writes will occur. I'd suggest an
offset/length for the/each writeable region.
Perhaps something like:
env__sf_tests = (
{
"id": "0", # "sf probe" parameter
"offset": 0x1000,
"size": 0x1000,
"writeable": True, # Trigger erase/write/read test
},
{
"id": "1:2", # "sf probe" parameter
"offset": 0x10000,
"size": 0x1000,
# Lack of writeable:True triggers just a read test
# The following is then used to validate the read data
"read_crc32": 0x1234,
},
)
?
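As a rough illustration, a test could split such a structure into read-only and erase/write/read cases roughly like this (a minimal sketch, not working test/py code; the function name and field names are hypothetical, following the example structure above):

```python
def get_sf_test_entries(env):
    """Pull the hypothetical env__sf_tests entries out of a boardenv dict,
    splitting them into erase/write/read and read-only cases."""
    entries = env.get('env__sf_tests', ())
    writeable = [t for t in entries if t.get('writeable', False)]
    readonly = [t for t in entries if not t.get('writeable', False)]
    # Read-only entries need a known CRC to validate the read data against
    for t in readonly:
        assert 'read_crc32' in t, 'read-only sf test entry lacks read_crc32'
    return writeable, readonly
```

A test body would then run "sf probe" with each entry's "id" and perform the erase/write/read or read-only checks per entry, skipping (or failing, per the discussion above) when the list is absent.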
>> Simon Glass has requested that all strings use ' not ", except perhaps
>> where there's an embedded ' in the string. This comment applies
>> elsewhere in the file too.
>
> What do you mean by this?
Python allows you to use ' or " to delimit strings; 'abc' and "abc" mean
the same thing. Simon requested we standardize on using ' not ".
>>> + # TODO maybe add alignment and different start for pages
>>> + for size in random.randint(4, page_size), random.randint(4, total_size), total_size:
>>
>> I'm not sure it's a good idea for tests to use random numbers and hence
>> operate differently each time. This can lead to intermittent failures
>> that are hard to track down. Rather, if there are known corner-cases,
>> the test should always test them all. Take a look at the DFU test (and
>> its example boardenv content) for how that's handled there.
>
> I chose to use random sizes to have more variants, because it can
> discover problems.
> I am not saying that there shouldn't be tests for fixed sizes, but I
> still think that some sort of randomization is good for testing because
> it can discover errors which you didn't cover.
Yes, randomization can discover additional problems. This means that
repeatedly running the same test on the same SW/HW can see different
results. Personally, I'd rather avoid such intermittency, and use
test/py as a regression test that covers an explicit set of conditions
that are known to work.
I think we can achieve both our desires by having the test code NOT
perform any randomization, and instead take the list of tests to perform
from the boardenv (via a data structure similar to what I quoted above).
Since the boardenv is itself Python code, I can write "0x1000" for the
test size, whereas you could import the random module into your boardenv
and write "random.randint(4, 0x1000)" in one entry in the list,
"random.randint(4, 0x100000)" in another, and so on.
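For instance (a hypothetical boardenv fragment, reusing the field names from the example structure above; single quotes per Simon's request):

```python
# Hypothetical boardenv fragment: since the boardenv is plain Python,
# one entry can use a fixed, fully repeatable size while another opts
# in to a randomized size, per the board maintainer's choice.
import random

env__sf_tests = (
    {
        'id': '0',
        'offset': 0x1000,
        'size': 0x1000,                       # fixed size: deterministic
        'writeable': True,
    },
    {
        'id': '0',
        'offset': 0x1000,
        'size': random.randint(4, 0x100000),  # randomized size, opt-in
        'writeable': True,
    },
)
```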
>>> + # FIXME using 0 is failing for me
>>> + output = u_boot_console.run_command('sf read %x 0 %x' % (addr + total_size, size))
>>
>> "using 0" for what fails? That's confusing since the second parameter is
>> "0" so I wonder if this test fails or passes for you? Perhaps that
>> comment applies to the address parameter? If so, explicitly mentioning
>> the parameter name would be useful to remove the ambiguity. Are you
>> investigating the problem?
>
> Size 0 was causing a problem. That's why it starts from 4. Not sure
> what the standard sf behavior is if you want to save a 0 size.
Ah. I would expect a size of 0 to be rejected, since it implies
reading/writing nothing. It sounds like that comment applies more to the
randint() calls in the for loop than to the run_command() function, then?
>>> + assert expected_read in output
>>> + output = u_boot_console.run_command('crc32 %x %x' % (addr + total_size, size))
>>> + m = re.search('==> (.+?)', output)
>>> + if not m:
>>> + pytest.fail("CRC32 failed")
>>> + expected_crc32 = m.group(1)
>>> + output = u_boot_console.run_command('sf read %x 0 %x' % (addr + total_size + 10, size))
>>> + assert expected_read in output
>>> + output = u_boot_console.run_command('crc32 %x %x' % (addr + total_size + 10, size))
>>> + assert expected_crc32 in output
>>
>> That just proves that the SPI read wrote the same data into memory both
>> times, or failed to write anything both times thus leaving the same
>> pre-existing content in memory each time. It would be better to at least
>> clear the memory before each sf read (via mw I imagine) to ensure that
>> sf read actually wrote something to memory, and even better to compare
>> against some known crc32 rather than just assuming that anything read
>> from SPI is correct. The latter could be achieved by either:
>
> That's why there is offset 10. It means it is not rewriting the same data.
Ah right, I hadn't noticed that part. Still, I'd suggest that just
clearing the memory before the read would be just as good, plus it would
validate the entire written data.
>> a) In boardenv, specify the offset/length/crc32 of a region to be
>> verified by read. This would make sense if at least part of the flash
>> content is invariant across test runs.
>
> What should the transfer mechanism be to get data to the board, to be
> able to say it has this CRC? For Ethernet this is easy; for QSPI I'm not sure.
For a read-only test of flash content, I would expect the user to
pre-configure the flash content before running test/py. This would be
the equivalent of ensuring that a TFTP server's data directory contained
the filename that the test attempts to download.
>> b) Do a write-then-read test of known data. Something like:
>>
>> 1) erase the region
>> 2) read the region, validate crc mismatches (i.e. the erase worked)
>
> This is a good one. I should check for sure that the flash was erased.
>
>> 3) write the region
>> 4) read the region, validate crc matches written data
>>
>> Make sure that in steps 2 and 4, the memory buffer used is cleared
>> before the sf read to guarantee that sf read actually read the data,
>> rather than simply leaving existing RAM content in place.
>
> Is this really needed? I do use a small offset so that data is not
> saved to the same address space.
Perhaps not, but with the erase, you can validate the entire data. That
said, I suppose it's fairly unlikely a bug could exist that caused a
problem only in the first 10 bytes, so perhaps it's fine as-is.
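The write-then-read sequence sketched in steps 1-4 above could look roughly like this (hypothetical code, not the patch under review; the `u_boot_console` fixture is assumed from test/py, and the 8-digit lowercase-hex CRC output format is an assumption based on the quoted patch):

```python
import zlib

def crc32_hex(data):
    """Host-side CRC32 of `data`, in the 8-digit lowercase hex form that
    U-Boot's crc32 command is assumed to print after '==>'."""
    return '%08x' % (zlib.crc32(data) & 0xffffffff)

def sf_write_read(u_boot_console, addr, offset, size, data_crc):
    """Erase, verify the erase took effect, write known data (assumed to
    already be in RAM at addr), read it back and validate the CRC.
    `data_crc` is crc32_hex() of the known data, computed host-side."""
    # Step 1: erase the region
    u_boot_console.run_command('sf erase %x %x' % (offset, size))
    # Step 2: read back into a scratch buffer; CRC must NOT match the data
    scratch = addr + size
    u_boot_console.run_command('mw.b %x 00 %x' % (scratch, size))
    u_boot_console.run_command('sf read %x %x %x' % (scratch, offset, size))
    output = u_boot_console.run_command('crc32 %x %x' % (scratch, size))
    assert data_crc not in output
    # Step 3: write the known data from RAM into flash
    u_boot_console.run_command('sf write %x %x %x' % (addr, offset, size))
    # Step 4: clear the scratch buffer again so a failed read cannot pass,
    # then read back and check the CRC matches the written data
    u_boot_console.run_command('mw.b %x 00 %x' % (scratch, size))
    u_boot_console.run_command('sf read %x %x %x' % (scratch, offset, size))
    output = u_boot_console.run_command('crc32 %x %x' % (scratch, size))
    assert data_crc in output
```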
>>> +@pytest.mark.buildconfigspec('cmd_sf')
>>> +def test_qspi_erase_all(u_boot_console):
>>> + qspi_pre_commands(u_boot_console)
>>> +
>>> + if not qspi_detected:
>>> + pytest.skip('QSPI not detected')
>>> +
>>> + expected_erase = "Erased: OK"
>>> + start = 0
>>> + output = u_boot_console.run_command('sf erase 0 ' + str(hex(total_size)))
>>> + assert expected_erase in output
>>
>> That should probably read the region and validate that the erase
>> actually happened?
>
> Agree. Read back and check should be there.
> Is using mw addr val size, crc over it, erase, read back, crc, compare
> good for you?
I don't think you need to CRC after the mw, since the mw only exists to
ensure that the memory content doesn't match what you expect to read,
and you know exactly what you expect to read (all 0xff), so you can mw
something else. So, I think an erase test would be:
mw addr val size # val anything other than 0xff
erase
read back to addr
CRC addr size
Calculate CRC of that many 0xff's in Python only
Compare the two CRCs.
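Those steps could be sketched as follows (again hypothetical, assuming the test/py `u_boot_console` fixture and the "Erased: OK" / crc32 output formats shown in the quoted patch):

```python
import zlib

def erased_crc32_hex(size):
    """Host-side CRC32 of `size` bytes of 0xff, i.e. what a freshly
    erased flash region should read back as, in 8-digit lowercase hex."""
    return '%08x' % (zlib.crc32(b'\xff' * size) & 0xffffffff)

def sf_erase_check(u_boot_console, addr, offset, size):
    # mw a value other than 0xff so a read that does nothing is detectable
    u_boot_console.run_command('mw.b %x aa %x' % (addr, size))
    output = u_boot_console.run_command('sf erase %x %x' % (offset, size))
    assert 'Erased: OK' in output
    # Read the erased region back over the poisoned buffer and CRC it
    u_boot_console.run_command('sf read %x %x %x' % (addr, offset, size))
    output = u_boot_console.run_command('crc32 %x %x' % (addr, size))
    # Compare against the all-0xff CRC computed purely in Python
    assert erased_crc32_hex(size) in output
```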