You probably have an app/process that still has that file open. When you delete a file that is held open by a process, the directory entry goes away but the filesystem keeps the file's blocks allocated until the last file descriptor on it is closed, so df keeps counting the space as used.
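If you want to see this in action, here is a quick throwaway sketch (the /tmp location and file name are just examples):

cd /tmp
dd if=/dev/zero of=held.bin bs=1M count=100   # create a 100 MB file
sleep 600 < held.bin &                        # hold it open on a background process's stdin
rm held.bin                                   # delete it while sleep still has it open
df -h /tmp                                    # the 100 MB is still counted as used
kill %1                                       # close the last descriptor
df -h /tmp                                    # now the space is actually released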
Here is some documentation I wrote for a co-worker that should get you what you need.
Truncate large open files
You have deleted files to free space, but the space was not freed afterward.
Now df -lah and du -lah report different usage.
Use lsof to find deleted but still-held files
lsof | grep deleted
This will show all files that have been deleted but are still held open by a process.
java 2943 gateway 410w REG 253,3 50482102 139274 /opt/span/app/node/default/var/attachments/att180368_0.part (deleted)
java 2943 gateway 411w REG 253,3 46217973 139284 /opt/span/app/node/default/var/attachments/att182230_0.part (deleted)
java 2943 gateway 412w REG 253,3 50483894 139280 /opt/span/app/node/default/var/attachments/att181920_0.part (deleted)
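If you just want a rough total of how much space these pinned files are holding, you can sum the SIZE/OFF column (field 7); treat it as an estimate, since lsof prints an offset rather than a size for some descriptor types:

lsof | grep deleted | awk '{sum += $7} END {printf "%.1f GB held by deleted files\n", sum/1024/1024/1024}'

Depending on your lsof version, lsof +L1 (open files with a link count below 1) lists the same unlinked files without the grep.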
You can either restart the app to release the file descriptors, or truncate the files in place.
To truncate the files, look at the output above to get the PID (second column) and the fd number (fourth column; the trailing letter, like the w in 410w, is just the access mode and is not part of the number).
Truncate the file with
echo > /proc/PID/fd/fd_number
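Side note: echo writes a trailing newline, which is why the files show a size of 1 afterward rather than 0. If you want them at exactly zero bytes, either of these does the same job:

: > /proc/PID/fd/fd_number            # shell no-op plus redirection, truncates to 0
truncate -s 0 /proc/PID/fd/fd_number  # coreutils truncate, same effect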
Example: to zero the file size of the three files listed above, you would issue the following
echo > /proc/2943/fd/410
echo > /proc/2943/fd/411
echo > /proc/2943/fd/412
If you have many files to truncate, bash to the rescue (no quotes around the command, or bash will try to run the whole string as a single command name):
for n in {410..412}; do echo > /proc/2943/fd/$n; done
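If you would rather not copy fd numbers by hand, you can also walk /proc directly. A sketch, assuming PID 2943 as above (careful: it truncates every deleted file the process holds, so check the readlink output before running it, and run as root or the owning user):

for fd in /proc/2943/fd/*; do
    if readlink "$fd" | grep -q ' (deleted)$'; then
        : > "$fd"   # truncate the underlying deleted file to 0 bytes
    fi
done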
df -lah should show the space as free now
BUT
The files will still show under lsof | grep deleted, but with a size of 1
java 2943 gateway 410w REG 253,3 1 139274 /opt/span/app/node/default/var/attachments/att180368_0.part (deleted)
java 2943 gateway 411w REG 253,3 1 139284 /opt/span/app/node/default/var/attachments/att182230_0.part (deleted)
java 2943 gateway 412w REG 253,3 1 139280 /opt/span/app/node/default/var/attachments/att181920_0.part (deleted)
The file descriptors will be released on the next reboot or when the app that opened the files is restarted/reloaded.
Of course, you will have to adjust the commands to match the output for your own locked files.