[PATCH] tools: copyfile: use 64k instead of 512 buffer
Dragan Simic
dsimic at manjaro.org
Wed Mar 20 16:59:52 CET 2024
Hello Ahelenia,
Please see my comments below.
On 2024-03-20 14:08, Ahelenia Ziemiańska wrote:
> This is an incredible pessimisation:
s/pessimisation/optimization/
> mkimage took >200ms (and 49489 writes (of which 49456 512)),
> now it takes 110ms (and 419 writes (of which 386 64k)).
>
> sendfile is much more appropriate for this and is done in one syscall,
> but doesn't bring any significant speedups over 64k r/w
> at the 13M size ranges, so there's no need to introduce
> #if __linux__
> while((size = sendfile(fd_dst, fd_src, NULL, 128 * 1024 * 1024)) > 0)
> ;
> if(size != -1) {
> ret = 0;
> goto out;
> }
> #endif
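Agreed that the plain read()/write() loop is the better choice here,
since it stays portable.  Just as a side note, if a sendfile(2) path
were ever added later, it would also need <sys/sendfile.h> near the
top of the file and should fall back to the read()/write() loop when
sendfile() fails, roughly like this (untested sketch, reusing the
fd_src, fd_dst, size and ret variables that copyfile() already has):

  #ifdef __linux__
  #include <sys/sendfile.h>
  #endif

  #ifdef __linux__
  	/* Let the kernel copy the data directly between the two fds */
  	while ((size = sendfile(fd_dst, fd_src, NULL, 128 * 1024 * 1024)) > 0)
  		;
  	if (size != -1) {
  		ret = 0;
  		goto out;
  	}
  	/* sendfile() failed, fall through to the read()/write() loop */
  #endif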
>
> Signed-off-by: Ahelenia Ziemiańska <nabijaczleweli at nabijaczleweli.xyz>
Looks good to me.  With the small nitpick above and a suggestion
below,
Reviewed-by: Dragan Simic <dsimic at manjaro.org>
> ---
> tools/fit_common.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/tools/fit_common.c b/tools/fit_common.c
> index 2d417d47..373fab6a 100644
> --- a/tools/fit_common.c
> +++ b/tools/fit_common.c
> @@ -145,14 +145,14 @@ int copyfile(const char *src, const char *dst)
> goto out;
> }
>
> - buf = calloc(1, 512);
> + buf = calloc(1, 64 * 1024);
> if (!buf) {
> printf("Can't allocate buffer to copy file\n");
> goto out;
> }
>
> while (1) {
> - size = read(fd_src, buf, 512);
> + size = read(fd_src, buf, 64 * 1024);
Perhaps this would be a good opportunity to introduce a #define for
the new 64 * 1024 buffer size, so the magic number isn't repeated in
both places.
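Something along these lines, for example (just a sketch, and the
macro name is only a suggestion):

  /* Buffer size used by copyfile(); name is just a suggestion */
  #define COPYFILE_BUFSIZE	(64 * 1024)

and then:

  	buf = calloc(1, COPYFILE_BUFSIZE);
  	...
  		size = read(fd_src, buf, COPYFILE_BUFSIZE);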
> if (size < 0) {
> printf("Can't read file %s\n", src);
> goto out;