I would have instinctively agreed with Satō Katsura's answer; it makes sense. However, it's easy enough to test. I tested writing a million lines to the screen, writing to a file, and redirecting to /dev/null. I tested each of these in turn, then did five replicates. These are the commands I used.
$ time (for i in {1..1000000}; do echo foo; done)
$ time (for i in {1..1000000}; do echo foo; done > /tmp/file.log)
$ time (for i in {1..1000000}; do echo foo; done > /dev/null)
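For reference, a wrapper along the following lines could automate the five replicates. This is a sketch of my own rather than the exact script I ran; the log file names and the TIMEFORMAT setting are assumptions, and it simply appends the real time of each run to a per-test log.

#!/bin/bash
# Sketch: run each test five times and record the elapsed (real) time.
TIMEFORMAT='%R'    # make bash's time keyword print only the real time, in seconds
for rep in {1..5}; do
    { time (for i in {1..1000000}; do echo foo; done); }                 2>> /tmp/times_screen.log
    { time (for i in {1..1000000}; do echo foo; done > /tmp/file.log); } 2>> /tmp/times_file.log
    { time (for i in {1..1000000}; do echo foo; done > /dev/null); }     2>> /tmp/times_null.log
done

The { time ...; } 2>> file idiom is what lets the timing report, which the time keyword writes to stderr, be captured in a file while the test itself runs unchanged.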
I then plotted the total times below.

As you can see, Satō Katsura's presumptions were correct. I also doubt that the output will be the limiting factor, so the choice of output is unlikely to have a substantial effect on the overall speed of the script.
FWIW, my original answer used different code, which had the file append and the /dev/null redirect inside the loop.
$ rm /tmp/file.log; touch /tmp/file.log; time (for i in {1..1000000}; do echo foo >> /tmp/file.log; done)
$ time (for i in {1..1000000}; do echo foo > /dev/null; done)
As John Kugelman pointed out in the comments, this adds a lot of overhead. As the question stands, this is not really the right way to test it, but I'll leave it here as it clearly shows the cost of repeatedly re-opening a file from within the script itself.
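If per-command redirection inside the loop is genuinely needed, one way to avoid re-opening the file on every iteration (a sketch of my own, not something I benchmarked above) is to open a file descriptor once with exec and write to it from the loop:

$ time (exec 3>>/tmp/file.log; for i in {1..1000000}; do echo foo >&3; done; exec 3>&-)

Here the file is opened once for appending on descriptor 3, each echo reuses that descriptor, and the descriptor is closed when the loop finishes.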

In this case, the results are reversed.