
The man page doesn't give me much hope, but I'm hoping it's an undocumented (and/or GNU-specific) feature.

Hank Gay

6 Answers

31

The moreutils package from Ubuntu (and also Debian) has a program called sponge, which more or less solves your problem.

From man sponge:

sponge reads standard input and writes it out to the specified file. Unlike a shell redirect, sponge soaks up all its input before opening the output file. This allows constructing pipelines that read from and write to the same file.

Which would let you do something like:

cut -d <delim> -f <fields> somefile | sponge somefile
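
sponge is not installed by default on Debian or Ubuntu; it ships with the moreutils package:

sudo apt-get install moreutils
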
Kjetil Jorgensen
19

You can't. Either use ed or GNU sed or perl, or do what they do behind the scenes, which is to create a new file for the contents.

ed, portable:

ed foo <<EOF
1,$s/^\([^,]*\),\([^,]*\),\([^,]*\).*/\1,\3/
w
q
EOF
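
If a here-document is inconvenient (say, inside a larger script), the same ed script can be fed through a pipe instead; the -s flag suppresses ed's byte-count output:

printf '%s\n' '1,$s/^\([^,]*\),\([^,]*\),\([^,]*\).*/\1,\3/' w q | ed -s foo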

GNU sed:

sed -i -e 's/^\([^,]*\),\([^,]*\),\([^,]*\).*/\1,\3/' foo

Perl:

perl -i -l -F, -ape '$_ = join ",", @F[0,2]' foo

(@F is zero-indexed, so the input's fields 1 and 3 are @F[0,2]; -p then prints the modified $_ automatically.)

cut, creating a new file (recommended, because if your script is interrupted, you can just run it again):

cut -d , -f 1,3 <foo >foo.new &&
mv -f foo.new foo

cut, replacing the file in place (retains the ownership and permissions of foo, but needs protection against interruptions):

cp -f foo foo.old &&
cut -d , -f 1,3 <foo.old >foo &&
rm foo.old

I recommend using one of the cut-based methods. That way you don't depend on any non-standard tool, you can use the best tool for the job, and you control the behavior on interrupt.

  • Better than the .old method for in-place changes: echo "$(cut -d , -f 1,3 <foo)" > foo – GypsyCosmonaut Oct 26 '18 at 03:10
  • @GypsyCosmonaut No, this is not “better”. It's more fragile. Its only benefit is that it's shorter to type. The main problem with your method is that if an error happens while processing the input file or writing the output, the data is lost. It also doesn't work with binary data: the output will be truncated at the first null byte. Even with text files, it removes empty lines from the end of the file. With large files, it may fail because the data has to be stored as a string in the shell's memory (and remember, if this happens, the data is lost). – Gilles 'SO- stop being evil' Oct 26 '18 at 06:48
  • o.O Thanks, I didn't know there might be problems with null bytes – GypsyCosmonaut Oct 26 '18 at 07:06
  • I think this is simpler: cut -d , -f 1,3 foo > foo.new; rm foo; mv foo.new foo – LoMaPh Dec 21 '18 at 01:13
  • @LoMaPh Indeed, I don't know why I renamed the old file: it doesn't have any advantages over renaming the new file. It's also simpler because you don't need the step rm foo. And you shouldn't call rm foo, because mv foo.new foo is atomic: it removes the old version and puts the new version in place at the same time. – Gilles 'SO- stop being evil' Dec 21 '18 at 07:41
  • What about cp -f foo foo.new && cut -d , -f 1,3 <foo >foo.new && mv -f foo.new foo to retain perms/ownership AND protect against interruptions? – stackprotector Aug 17 '21 at 10:46
  • @stackprotector That works too, provided that the file isn't so large that you'd mind temporarily duplicating the data. – Gilles 'SO- stop being evil' Aug 17 '21 at 11:08
  • In that case, one could then use cut -d , -f 1,3 <foo >foo.new && chmod --reference foo foo.new && chown --reference foo foo.new && mv -f foo.new foo to retain perms/ownership and protect against interruptions. Thx for clarification! – stackprotector Aug 17 '21 at 11:34
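
Putting that last suggestion together as one command (a sketch that assumes GNU chmod/chown for --reference and sufficient privileges to change the owner):

cut -d , -f 1,3 <foo >foo.new &&
chmod --reference=foo foo.new &&
chown --reference=foo foo.new &&
mv -f foo.new foo
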
9

I don't think that's possible using cut alone; I couldn't find anything to that effect in the man or info pages. You can do something like

mytemp=$(mktemp) && cut -d" " -f1 file > "$mytemp" && mv "$mytemp" file

mktemp creates a relatively safe temporary file that you can redirect the cut output into.
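
If you also want the temporary file cleaned up when the script is interrupted, a minimal sketch (exact signal behavior varies slightly between shells):

mytemp=$(mktemp) || exit 1
trap 'rm -f "$mytemp"' EXIT
cut -d" " -f1 file > "$mytemp" && mv "$mytemp" file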

Steven D
1

Try vim-way:

$ ex -s +'%!cut -c 1-10' -cxa file.txt

This will edit the file in-place (so make a backup first).
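
The same filter can also be run from an interactive vim session, which lets you inspect the result before writing it out:

:%!cut -c 1-10
:x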

Alternatively, use grep, sed or gawk.

kenorb
0

You can use the slurp trick with POSIX awk:

cut -b1 file | awk 'BEGIN{RS="";getline<"-";print>ARGV[1]}' file
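
A commented sketch of what that one-liner does (note that RS="" is awk's paragraph mode, so input containing blank lines would be truncated at the first blank line; also, treating "-" as standard input for getline is widely supported but not strictly guaranteed by POSIX):

cut -b1 file | awk '
  BEGIN {
    RS = ""          # paragraph mode: one getline reads a whole block
    getline < "-"    # soak up cut's output from the pipe into $0
    print > ARGV[1]  # only now open and overwrite the original file
  }' file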


Zombo
0

Well, since cut never produces more output than it has read, you can do:

cut -c1 < file 1<> file

That is, make its stdin the file open in read-only mode and its stdout the file open in read+write mode without truncation (<>).

That way, cut will overwrite the file with its own output. However, anything past the end of that output is left untouched. For instance, if file contains:

foo
bar

The output will become:

f
b
bar

The f\nb\n have replaced foo\n, but bar is still there. You'd need to truncate the file after cut has finished.
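
To reproduce that concretely in a scratch directory:

printf 'foo\nbar\n' > file
cut -c1 < file 1<> file
cat file    # prints f, b, bar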

With ksh93, you can do it with its <>; operator, which acts like <> except that, if the command succeeds, ftruncate() is called on the file descriptor. So:

cut -c1 < file 1<>; file

With other shells, you'd need to do the ftruncate() via some other means like:

{ cut -c1 < file; perl -e 'truncate STDOUT, tell STDOUT';} 1<> file

though invoking perl just for that is a bit of overkill here, especially considering that perl can easily do cut's job by itself:

perl -pi -e '$_ = substr($_, 0, 1)' file
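
If you'd rather avoid perl and have GNU coreutils, another option is to compute the output size up front and truncate afterwards; a sketch (it reads file twice, so it assumes nothing else modifies the file in between):

size=$(cut -c1 < file | wc -c) &&
cut -c1 < file 1<> file &&
truncate -s "$size" file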

Beware that with all methods that involve actual in-place rewriting, if the operation is interrupted midway, you'll end up with a corrupted file. Using a temporary second file avoids this problem.