I have the following content in a file.
$ cat file.txt
code-coverage-api
jsch
cloudbees-folder
apache-httpcomponents-client-4-api
apache-httpcomponents-client-4-api
jsch
apache-httpcomponents-client-4-api
jsch
apache-httpcomponents-client-4-api
jackson2-api
apache-httpcomponents-client-4-api
workflow-api
echarts-api
workflow-api
envinject-api
workflow-durable-task-step
apache-httpcomponents-client-4-api
My expected output is:
code-coverage-api
jsch
cloudbees-folder
apache-httpcomponents-client-4-api
jackson2-api
workflow-api
echarts-api
envinject-api
workflow-durable-task-step
At the moment, I am sorting the content as shown below and then removing the duplicates (keeping one copy of each) by hand.
$ sort file.txt
Is there a way to keep only one copy of each duplicated element and remove the remaining duplicates from the list? Also, keep in mind that some elements don't have any duplicates at all.
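For reference, one way I have seen this sketched (assuming standard awk is available) keeps the first occurrence of each line while preserving the original input order, which matches my expected output above:

```shell
# Sample input standing in for file.txt (for demonstration only)
printf '%s\n' jsch apache-httpcomponents-client-4-api jsch \
    cloudbees-folder apache-httpcomponents-client-4-api > /tmp/file.txt

# seen[$0]++ counts how many times each whole line has appeared;
# the pattern !seen[$0]++ is true only on the first occurrence,
# so awk prints each distinct line once, in input order.
awk '!seen[$0]++' /tmp/file.txt
```

If the output order does not matter, `sort -u file.txt` (or `sort file.txt | uniq`) would also collapse the duplicates.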
awk's way: https://unix.stackexchange.com/questions/159695/how-does-awk-a0-work/159697#159697 – Archemar Sep 06 '20 at 10:53