You can do it using only `bash`, `readlink`, and `find`:
```shell
#!/bin/bash
while IFS= read -d '' -r g ; do
    dir="$(readlink -f "$g/..")"
    # -C is a global git option, so it goes before the subcommand
    git -C "$dir" pull origin master &
done < <(find "$@" -name '.git' -print0)
```
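A quick dry run over a fabricated tree shows what the loop does. Here `echo` stands in for the real `git -C ... pull`, and the `proj1`/`proj2` repo names are made up for illustration:

```shell
#!/bin/bash
# Fabricate two "repos" in a scratch directory (illustrative names only).
tmp=$(mktemp -d)
mkdir -p "$tmp/proj1/.git" "$tmp/proj2/.git"

while IFS= read -d '' -r g ; do
    dir="$(readlink -f "$g/..")"
    echo "git -C $dir pull origin master"   # dry run: print instead of pull
done < <(find "$tmp" -name '.git' -print0)

rm -rf "$tmp"
```

Each `.git` hit resolves (via `readlink -f "$g/.."`) to its containing repository directory, which is what gets handed to `git -C`.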
You can make it run a bit faster with careful use of find's `-maxdepth` option. For example, if you know that all `.git` directories will be found only in the top-level sub-directories of the current directory, `-maxdepth 2` will stop it recursing into any lower sub-directories (`./sub/.git` is at depth 2).
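A dry run over a scratch tree illustrates the effect. The `repo1` and nested `vendor/dep` checkout names are made up for this example:

```shell
#!/bin/bash
tmp=$(mktemp -d)
mkdir -p "$tmp/repo1/.git" "$tmp/repo1/vendor/dep/.git"

# With -maxdepth 2, only the top-level repo's .git (depth 2) is found;
# the nested vendor/dep/.git (depth 4) is never visited.
find "$tmp" -maxdepth 2 -name '.git'

rm -rf "$tmp"
```

This prints only `.../repo1/.git`, skipping the deeper checkout entirely.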
I've used `"$@"` with the `find` command rather than the unquoted `$1` you used, so that not only is a directory argument optional, you can also use multiple directory args and/or add whatever `find` options you might need to the command line.
If the version of `find` on your work computer doesn't do `-print0`, you can use `\n` as the input separator, but the script will break if any directory names contain a newline (which is uncommon but perfectly valid):
```shell
while IFS= read -r g ; do
    dir="$(readlink -f "$g/..")"
    git -C "$dir" pull origin master &
done < <(find "$@" -name '.git')
```
Is `findutils` on your work computer? Or something else that provides `xargs`? If so, you can write a version of your gitpull.sh that pipes the sed output into a shell function that does something like `xargs -I{} git -C {} pull origin master &`. – cas Jan 04 '18 at 08:40
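The `xargs` approach from that comment can be sketched as a dry run, with `echo` standing in for `git` and made-up repo names `a` and `b`:

```shell
#!/bin/bash
tmp=$(mktemp -d)
mkdir -p "$tmp/a/.git" "$tmp/b/.git"

# Turn each .git path into its parent repo directory, then hand each
# directory to a git invocation (echoed here instead of executed).
find "$tmp" -name '.git' -print0 |
    xargs -0 -I{} dirname {} |
    xargs -I{} echo git -C {} pull origin master

rm -rf "$tmp"
```

Note the second `xargs` reads newline-separated input, so like the `read -r` loop above it would break on directory names containing newlines.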