
My starting point is this question. aria2 is downloading the files, but not as fast as I expected (I am running this on a MacBook Pro with an i7, a gigabit connection, and AC Wi-Fi; I am most definitely not maxing out any link in that chain).

I use aria2 with these switches:

-x 16 -s 1

Since the files are small, I see no reason to open several connections per download (-s 1). However, parallel downloading of several files (-x 16; 16 is the max, isn't it?) should increase the overall speed, shouldn't it?

Nevertheless, when I read the output log, the downloads don't seem to be parallel. Am I missing something in how one uses aria2?

Or is the feed of URLs into aria2 the bottleneck? (I use find . -type f and then build each URL by string concatenation before feeding it to aria2, as sketched below.)
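
Roughly, the pipeline looks like this (a simplified sketch of the loop; the URL prefix is illustrative):

find . -type f | while read -r f; do
    # one aria2c invocation per file, started only after the previous one finishes
    aria2c -x 16 -s 1 "https://web.archive.org/save/https://${f#./}"
done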

d-b

1 Answer


Opening multiple connections will not help much when you send only one download at a time to aria2c and the files are small. But you can easily run multiple aria2c commands in parallel using xargs -P <num>.

Make sure the -P value is not higher than the number of simultaneous connections the server allows, otherwise the server will return errors.


find . -type f -printf '%P\n' \
    | xargs -I{} -P6 aria2c -x 1 -s 1 "https://web.archive.org/save/https://{}"
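
Note that -printf is a GNU find extension; the stock BSD find on macOS lacks it. An equivalent that strips the leading ./ with sed works there (assuming your paths map to URLs the same way):

find . -type f | sed 's|^\./||' \
    | xargs -I{} -P6 aria2c -x 1 -s 1 "https://web.archive.org/save/https://{}"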

Or let xargs add all URLs to just one aria2c call:

find . -type f -printf 'https://web.archive.org/save/https://%P\n' \
    | xargs aria2c -Z -x 16 -s 1
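
Without -Z (--force-sequential), aria2c treats all URIs given on one command line as mirrors of a single file; with it, each URI becomes a separate download.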

But I think the best option would be to skip the pipes and xargs entirely and let aria2 read the URL list from a file descriptor created from find's output, via process substitution:

aria2c -x 16 -s 1 -i <(find . -type f -printf 'https://web.archive.org/save/https://%P\n')
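
Note that <(...) is bash/zsh process substitution, so this needs one of those shells. With an input list, aria2 runs up to five downloads at a time by default; to get more cross-file parallelism, raise -j (--max-concurrent-downloads). A sketch, with -j 6 chosen only to mirror the -P6 above:

aria2c -j 6 -x 1 -s 1 -i <(find . -type f -printf 'https://web.archive.org/save/https://%P\n')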
pLumo