Our outsourced IT service provider runs the following script via cron to clean up Oracle core and trace files. It clearly is not a well-written script, but my question for the Server Fault crowd is whether there is an error or boundary condition that could cause it to remove directories other than the core dumps it targets, such as these:
/ora/admin/SCRM01P/bdump /ora/admin/SCRM01P/cdump /ora/admin/SCRM01P/pfile /ora/admin/SCRM01P/udump
We recently had these directories deleted on a production system, which crashed Oracle. Please have a look at the code below; your insight is appreciated, as I am not very good with Korn shell.
#!/usr/bin/ksh
#This script checks the utilization of the location "/ora/admin/SCRM01P"
#and if it exceeds the threshold of 75%, it attempts to remove all of the
#core dump files, which are "core_*" and "cdmp_*"
#Otherwise, it removes only those core dumps that are older than 7 days
THRESHOLD=75
MTIME=7
TOP_DIR=/ora/admin/SCRM01P
cd ${TOP_DIR}
USED=$(df -k ${TOP_DIR} |tail -1|awk '{print $5}'|grep \%|sed 's/%//')
[ ${USED} -gt ${THRESHOLD} ] && MTIME=-1
find ${TOP_DIR}/* -mtime +${MTIME} -type d \( -name "core_*" -o -name "cdmp_*" \) 2>/dev/null|while read DIRTOREMOVE
do
rm -rf $DIRTOREMOVE
#Due to a known Solaris issue, the directory may not be removed by the command above
rmdir $DIRTOREMOVE >/dev/null 2>&1
done
find ${TOP_DIR}/* -mtime +${MTIME} -name "*.trc" -size +2000 2>/dev/null|while read TRACE_FILE
do
cp /dev/null ${TRACE_FILE}
done