
I've inherited a php project from an offshore firm and I'm pretty sure that ~50% of the files are no longer used. I want to weed out the unused files so I was thinking of just tracking the number of times each file is requested or included while I do the next round of dev work, then trimming the fat. Is there an easy way to track this? It's running on a pretty vanilla LAMP stack.

doub1ejack

4 Answers


You could use a web stats solution to track hits across your entire site. However, this won't help with included PHP files.

Another option would be to create a small script to include in every file that would write the filename to a log every time it was parsed.
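A minimal sketch of such a logging include, with the log path as an assumption. Rather than pasting a call into every file by hand, one prepended file (e.g. via PHP's auto_prepend_file directive) can log everything a request actually loaded, since get_included_files() also reports include/require'd files:

    <?php
    // track_usage.php -- prepend to every request, e.g. with
    //   php_value auto_prepend_file /path/to/track_usage.php
    // in .htaccess. The log path below is an assumption; use any
    // location the web server user can write to.
    register_shutdown_function(function () {
        file_put_contents(
            '/var/log/php_used_files.log',
            implode(PHP_EOL, get_included_files()) . PHP_EOL,
            FILE_APPEND | LOCK_EX
        );
    });

After the next round of dev work, sort -u the log and diff it against a full file listing; whatever never appears is a candidate for trimming.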

sreimer

Hmmm... if your backup solution preserves files' last access times, then the solution is simple: just run find at the top of the directory structure(s) and check for unaccessed files. For example:

 find [dir1] [dir2] ... [dirn] -type f -atime +180 -print

would find files in the named directories that have not been accessed in 180 days. (This only works if the filesystem actually records access times, i.e. it isn't mounted noatime.)
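If the backups don't preserve atimes, you can open your own observation window instead: touch a marker file now, do the next round of dev work, then list everything never read since the marker. A sketch, with the docroot and marker paths as assumed examples:

    touch /tmp/atime-baseline                      # start of the window
    # ... run the site / do the next round of dev work ...
    find /var/www -type f ! -anewer /tmp/atime-baseline

find's -anewer compares each file's access time to the marker's modification time, so the negated form prints only files that were never read after the baseline.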

mdpc

I think inotify is the best solution; try inotifywait to monitor ACCESS/OPEN operations in the specified directory:

# inotifywait -e access -e open -m -r /home/jamzed/

Now all 'access' & 'open' operations in /home/jamzed/* will be monitored.

For example, if I do '$ cat examples.desktop', then inotifywait writes to STDOUT:

/home/jamzed/ OPEN examples.desktop
/home/jamzed/ ACCESS examples.desktop

You can redirect STDOUT to a file ( >> file_to_analyze ) to make it easier to analyze later which files weren't used.
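For example (all paths here are assumptions), inotifywait's --format '%w%f' prints one full path per event, which makes the log easy to compare against a complete file listing:

    # collect accesses in the background while you exercise the site
    inotifywait -m -r -e access -e open --format '%w%f' /var/www >> /tmp/accessed.log &
    # ... later, anything in the full listing but not in the log was never touched
    sort -u /tmp/accessed.log > /tmp/accessed.txt
    find /var/www -type f | sort > /tmp/all.txt
    comm -23 /tmp/all.txt /tmp/accessed.txt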

jamzed

You can use auditd to monitor file access at the kernel level, then use ausearch to search/grep the audit logs. This should catch everything, no matter what process, user, or subshell is touching the files. If this is a busy server, obviously do some sanity checks before setting up a large number of watches.
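A minimal sketch, assuming the docroot is /var/www and an arbitrary key name: add a read watch on the tree, then pull the matching events back out of the audit log:

    # watch the docroot for reads, tagged with a key for easy searching
    auditctl -w /var/www/ -p r -k php-usage
    # ... later, list the recorded accesses in human-readable form
    ausearch -k php-usage --interpret

The key (-k php-usage) is just a label of your choosing; it's what lets ausearch pick these events out of the rest of the audit stream.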

hurfdurf