
I'm looking for a way to delete the smaller of two files. I have found a way to find the duplicate files in my Movie Library, but now I need to delete the smaller file.

bash - Find All Files with Same Name Regardless of Extension

A few "gotchas" of course... The current output of the list has no extensions.

Eg:

root@fs:ls * |  awk '!/.srt/'  | sed 's/.\{4\}$//' | sort | uniq -d
Captain Fantastic (2016)
The Hurt Locker (2008)
The Warriors (1979)

I need a way to go back and compare both files with the name Captain Fantastic (2016).*, because they will have different extensions. (All files are in the same folder.)

This is a bit beyond the scope of the question:

I also want to check whether a FILENAME.srt file exists; if it does, I want to do nothing and save the filename to a log for manual verification. (I need to figure out which file the srt syncs up with.)
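
For the .srt part, I'm imagining something along these lines (untested sketch; srt-check.log is just a name I made up):

# Untested sketch: if a matching .srt exists for a title, log the name
# for manual verification and skip it; otherwise it is safe to go on
# and compare/delete the video files.
prefix='Captain Fantastic (2016)'                 # example title
if [ -e "$prefix.srt" ]; then
        printf '%s\n' "$prefix" >> srt-check.log  # verify the sync by hand later
else
        : # compare sizes and delete the smaller file here
fi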

2 Answers


Suggestion:

#!/bin/sh

# Look at filenames in current directory and generate list with filename
# suffixes removed (a filename suffix is anything after the last dot in
# the file name). We assume filenames do not contain newlines.
# Only unique prefixes will be generated.
for name in ./*; do
        [ ! -f "$name" ] && continue # skip non-regular files
        printf '%s\n' "${name%.*}"
done | sort -u |
while IFS= read -r prefix; do
        # Set the positional parameters to the names matching a particular prefix.
        set -- "$prefix"*

        if [ "$#" -ne 2 ]; then
                printf 'Not exactly two files having prefix "%s"\n' "$prefix" >&2
                continue
        fi

        # Check file sizes and remove smallest.
        if [ "$( stat -c '%s' "$1" )" -lt "$( stat -c '%s' "$2" )" ]; then
                # First file is smaller
                printf 'Would remove "%s"\n' "$1"
                echo rm "$1"
        else
                # Second file is smaller, or same size
                printf 'Would remove "%s"\n' "$2"
                echo rm "$2"
        fi
done

This assumes GNU stat.
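
On BSD or macOS the equivalent is stat -f '%z'. If neither is available, here is an untested sketch of a portable size comparison built on wc -c (the helper name filesize is just something I picked):

# Untested sketch: portable file-size comparison without GNU stat.
# wc -c prints the byte count of the file; the arithmetic expansion
# strips the leading whitespace some wc implementations emit.
filesize () {
        echo "$(( $(wc -c < "$1") ))"
}

if [ "$( filesize "$1" )" -lt "$( filesize "$2" )" ]; then
        printf 'Would remove "%s"\n' "$1"
else
        printf 'Would remove "%s"\n' "$2"
fi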

Kusalananda

Well, this really answers my question specifically. Regarding Kusalananda's comment: if my script finds more than two files, it will screw everything up and delete the wrong files. This script is tailored for what I need, but it could be adapted for other purposes.

#!/bin/bash

#Create Log with Single Entry for Each Duplicate Without File Extension
duplog='dupes.log'
ls * | awk '!/\.srt$/' | sed 's/.\{4\}$//' | sort | uniq -d > "$duplog"

#Testing!
cat "$duplog"

#List Each Matching File in the Log, Starting with the Largest File
log='tmp.log'
while IFS= read -r p; do

#More Testing!
du -k "$p".*

ls -1S  "$p".* >> "$log"
done < "$duplog"

#Testing!
cat "$log"

#Remove the Larger File's Entry for Each Pair via Sed
#Note: This relies on exactly two variations being found, or it will delete the wrong lines in the file (a guard against this is sketched after the output below)
sed -i '1~2d' "$log"

#Testing!
cat "$log"

#Delete Smaller File
while IFS= read -r p; do
  echo "Deleting $p"
  rm "$p"
done <"$log"

#Delete Log
rm "$log"

Output:

root@fs:/Movies# du -k tk.m*
4       tk.mkv
0       tk.mp4
root@fs:/Movies# ./test.sh
tk
4       tk.mkv
0       tk.mp4
tk.mkv
tk.mp4
tk.mp4
Deleting tk.mp4
root@fs:/Movies#
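
To guard against that caveat (more than two files matching a title), something like this could go inside the first loop, right after reading $p (untested; skipped.log is just a name I picked):

# Untested sketch: skip any title that does not expand to exactly two
# files, so the every-other-line sed trick cannot remove the wrong lines.
set -- "$p".*
if [ "$#" -ne 2 ]; then
        echo "Skipping $p: expected 2 files, found $#" >> skipped.log
        continue
fi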

PS: I'm sure this is 'hackish', but it works for what I need and it's another step in the learning process :)