I have a text file containing a multi-line string of text. I'd like to scan the file and remove every instance of that multi-line string, including any duplicates.
Example file contents:
recursive-test yes;
test-limit{
tests 10;
};
location "testLoc" {
type test;
};
location "testLoc2"{
type test;
file "/etc/var/test.sql";
};
include "/etc/var/test.conf";
};
recursive-test yes;
test-limit{
tests 10;
};
location "testLoc" {
type test;
};
location "testLoc2"{
type test;
file "/etc/var/test.sql";
};
include "/etc/var/test.conf";
};
otherTestTextHere
123
321
recursive-test yes;
test-limit{
tests 10;
};
location "testLoc" {
type test;
};
location "testLoc2"{
type test;
file "/etc/var/test.sql";
};
include "/etc/var/test.conf";
};
As you can see, the repeated block in the text file is identical every time, from its first line to its last:
recursive-test yes;
test-limit{
tests 10;
};
location "testLoc" {
type test;
};
location "testLoc2"{
type test;
file "/etc/var/test.sql";
};
include "/etc/var/test.conf";
};
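One way to keep the exact block available for later matching is to write it to a separate pattern file with a quoted heredoc, so the shell copies it byte-for-byte with no expansion. The file name block.txt is just a placeholder:

```shell
# Quoted 'EOF' prevents any shell expansion, so the block is stored verbatim.
cat > block.txt <<'EOF'
recursive-test yes;
test-limit{
tests 10;
};
location "testLoc" {
type test;
};
location "testLoc2"{
type test;
file "/etc/var/test.sql";
};
include "/etc/var/test.conf";
};
EOF

# Sanity check: the pattern file should contain the 13 lines of the block.
wc -l < block.txt
```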
The multi-line string shouldn't normally be duplicated, but as a failsafe I'm also looking for a method that scans for all instances and removes each one entirely, in case another application writing to the file ever duplicates it.
Using sed, I can only figure out how to delete one line at a time. That won't work for me, because some of the words on some lines of this multi-line string also appear in other, similar multi-line strings that I want to keep. I'm really just trying to search for exact matches of this multi-line string, from its first line to its last.
Ideally I'd like to keep it to a single, optimized command line.