Testing how long it would take to pass 50,000 arguments from PHP to a bash script, it turns out that I cannot pass even 1,000 arguments at once. Unless I can?
PHP:

```php
$array = fetch_results_from_working_pool_temp_table();
$outputfile = "/var/www/html/outputFile";
$pidfile = "/var/www/html/pidFile";
$id = "";
$array_check = array();

// Build one space-separated string of all ids.
foreach ($array as $row => $column) {
    $id .= $column['id'];
    $id .= " ";
}

// Launch the script in the background, append stdout/stderr to the
// output file, and append the PID of the launched process to the pid file.
$cmd = "sudo /bin/bash /var/www/html/statistics/pass_all.sh {$id}";
exec(sprintf("%s >> %s 2>&1 & echo $! >> %s", $cmd, $outputfile, $pidfile));
```
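In case unescaped id values are part of the problem, the same command could also be built with each id quoted via `escapeshellarg` (a sketch reusing the variables above, not something I have tested):

```php
// Quote each id individually so the shell sees each one as a clean argument.
$ids = array();
foreach ($array as $row => $column) {
    $ids[] = escapeshellarg($column['id']);
}
$cmd = "sudo /bin/bash /var/www/html/statistics/pass_all.sh " . implode(" ", $ids);
```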
bash:

```bash
#!/bin/bash
# Print each argument on its own line.
for ip in "$@"
do
    echo "${ip}"
done
```
So my PHP passes the arguments to the bash script, the script prints them to the output file along with any errors, and the pid file holds the PID of the process launched by this exec. But the command is not even being executed: I see no process launched. Is there a limit on the number of arguments passed through exec, either in PHP or in the Linux shell? I am running PHP 5.4 on Red Hat Linux 7.

I want to run the processes using GNU parallel, but PHP is single-threaded (there are libraries to work around this, but I would prefer to avoid them). Maybe I could write the ids to a text file instead and exec a script that pulls from that text file? Help!
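For the text-file idea, a minimal sketch of what I mean (the idFile path and the pass_all_from_file.sh name are made up for illustration):

```php
// Write the ids to a file, one per line, instead of onto the command line.
$idfile = "/var/www/html/idFile"; // hypothetical path
$ids = array();
foreach ($array as $row => $column) {
    $ids[] = $column['id'];
}
file_put_contents($idfile, implode("\n", $ids) . "\n");

// Only the file name is passed as an argument, so the id count no longer matters.
$cmd = "sudo /bin/bash /var/www/html/statistics/pass_all_from_file.sh " . escapeshellarg($idfile);
exec(sprintf("%s >> %s 2>&1 & echo $! >> %s", $cmd, $outputfile, $pidfile));
```

The matching bash script would then read from the file instead of from `"$@"`:

```bash
#!/bin/bash
# Read ids one per line from the file named by the first argument.
while IFS= read -r ip
do
    echo "${ip}"
done < "$1"
```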
**Update: my machine limits:**

```
# getconf ARG_MAX
2097152
# ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 256634
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 4096
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
```
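With ARG_MAX at 2097152 bytes, 50,000 ids of roughly 10 characters each plus separators come to about 550 KB, so on paper the list should fit (ARG_MAX also has to cover the environment, so the effective limit is somewhat lower). A quick sanity check, assuming `$cmd` is built as in the code above:

```php
// Compare the actual command-line size against the kernel limit.
printf("command length: %d bytes, ARG_MAX: %s\n",
    strlen($cmd), trim(shell_exec("getconf ARG_MAX")));
```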