I really enjoy using Ctrl+R
to reverse-search my command history. I've found a few good options I like to use with it:
# ignore duplicate commands, ignore commands starting with a space
export HISTCONTROL=erasedups:ignorespace
# keep the last 5000 entries
export HISTSIZE=5000
# append to the history instead of overwriting (good for multiple connections)
shopt -s histappend
The only problem for me is that erasedups
only erases sequential duplicates, so with this sequence of commands:
ls
cd ~
ls
The ls
command will actually be recorded twice. I've thought about periodically running this with cron:
cat .bash_history | sort | uniq > temp.txt
mv temp.txt .bash_history
This would remove the duplicates, but unfortunately the order would not be preserved. Since uniq
only collapses adjacent duplicate lines, I don't believe it
can work properly unless the file is sorted first.
How can I remove duplicates in my .bash_history, preserving order?
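For what it's worth, the closest thing to an order-preserving dedup I've come across is an awk one-liner that prints only the first occurrence of each line (a sketch I haven't fully vetted, reusing the same temp.txt scratch file as above):

```shell
# Print each line only the first time it appears; awk's
# associative array 'seen' counts prior occurrences of the line.
awk '!seen[$0]++' ~/.bash_history > temp.txt
mv temp.txt ~/.bash_history
```

With the ls / cd ~ / ls example above, this would keep the first ls and drop the second, without sorting the file.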
Extra Credit:
Are there any problems with overwriting the .bash_history
file via a script? For example, if you remove an Apache log file, I think you need to send Apache a HUP signal with kill
to have it flush its handle to the file. If that is the case with the .bash_history
file, perhaps I could somehow use ps
to check and make sure there are no connected sessions before the filtering script is run?
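Sketching that last idea: rather than parsing ps output, something like pgrep (which I believe can match by user and exact process name) might gate the cleanup. Hypothetical and untested:

```shell
#!/bin/sh
# Hypothetical guard: only rewrite the history file when no bash
# processes are running for the current user. pgrep exits 0 when
# it finds at least one matching process.
if pgrep -u "$(id -un)" -x bash > /dev/null; then
    echo "bash sessions still open; skipping history cleanup" >&2
else
    echo "no bash sessions; safe to rewrite ~/.bash_history"
fi
```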