
I know I can use:

$ my_program > output.txt

to redirect the output to a file. The problem is that once the file reaches 64 KB nothing more is written to it, and I lose all the information that comes after that point...

What can I do?

OiciTrap
  • I cannot reproduce such a problem on my systems. Please provide more details: what OS, which shell, connected via ssh? –  Jun 18 '17 at 16:35
  • @Arrow What OS?: CentOS release 6.8 (Final), Which shell?: -bash, connected via ssh? I tried both via SSH and directly in the machine. – OiciTrap Jun 18 '17 at 16:41
  • 1
    Run under strace, and see where an exception occurs. – Paul_Pedant Jun 22 '22 at 16:05

2 Answers

0

Check the output of ulimit -f. You may be hitting the shell's file-size limit, which is reported in 512-byte blocks: a value of 128 is exactly 65536 bytes, which would explain writes stopping at 64 KB. If so, you can remove the limit by running ulimit -f unlimited.
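In practice the check and the fix look like this (a sketch; the block-to-byte conversion assumes the usual 512-byte units of ulimit -f):

```shell
# Show the current file-size limit: "unlimited" means no cap, and a
# number is a cap in 512-byte blocks (128 blocks = 65536 bytes = 64 KB).
ulimit -f

# Lift the limit for this shell and its children, then rerun the program:
ulimit -f unlimited
```

Note that ulimit changes apply only to the current shell and the processes it starts afterwards, so run it in the same shell session before launching my_program.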

0

The shell enforces some internal limits. In bash (the shell you are using), the ulimit builtin lists them (-a for all):

$ ulimit -a

However, even though the "pipe size" reported by ulimit (-p) is small -- 8 on this Linux system -- that value does not cap the amount of data that can be sent through a redirect (>) or a pipe (|). A stream of more than 10 million bytes goes through without trouble:

$ printf '%0*d' 11000111 0 | wc -c
11000111
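As an aside, the value reported by ulimit -p is expressed in 512-byte blocks, so the 8 shown on this system corresponds to 8 * 512 = 4096 bytes, one page on most Linux systems:

```shell
# ulimit -p reports the pipe size in 512-byte blocks (bash builtin);
# convert it to bytes to compare with the other limits discussed here.
ulimit -p                        # typically prints 8
echo $(( $(ulimit -p) * 512 ))   # typically prints 4096
```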

The other limit, the one that actually applies to a pipe's buffer, is read from:

$ cat /proc/sys/fs/pipe-max-size
65536

And it is set by writing to the same file. Note that sudo echo ... > file does not work, because the redirection is performed by your unprivileged shell before sudo runs; use tee instead:

$ echo $((4 * 1024)) | sudo tee /proc/sys/fs/pipe-max-size

However, even the rather small value set above does not limit the total size of a stream sent through a pipe: the command above, with its more than 10 million bytes, still works.
The reason is that the limit applies to how much a single write can place in the pipe at once, not to the stream as a whole.

To observe that per-write limit you need a tool like this script, which mixes Perl and bash to issue writes of a single block:

$ ./pipesize 128 1
write size:        128; bytes successfully before error: 4096

That shows the real limit for a single write to a pipe (the same limit as with a redirect).