In part, I agree with D.W.: prevention beats detection, but only when it works. Once your machine is compromised, you'll kick yourself for not having better (or any!) detection mechanisms in place. In general, I try to split my focus between the two, and at some points detection may get a little more attention.
Someone with your risk profile (a small business owner on a basic shared hosting provider, etc.) usually doesn't have the time, money, or leverage to make the changes required for a full-blown security solution. I would recommend a three-pronged approach consisting of prevention, detection, and recovery.
Note, this is not an exhaustive discussion of each of these 'prongs'; the points below are just here to get your thinking juices going. I would love to hear what you decide to implement, along with anyone else's ideas!
Prevention
I won't spend too much time on prevention; there is just too much to cover. D.W. pointed out some starting points in his post, and there are a ton of resources online regarding secure coding. I suggest you start with the OWASP and Mozilla guides:
https://www.owasp.org/index.php/Category:OWASP_Guide_Project
https://www.owasp.org/index.php/Cheat_Sheets
http://code.google.com/p/owasp-development-guide/wiki/Introduction
https://wiki.mozilla.org/WebAppSec/Secure_Coding_Guidelines
Detection
Some modern malware (and intruders) can root themselves so deeply into the innards of the victim OS that verifiable removal approaches impossible. Even so, I would not suggest putting all your eggs in one basket, i.e., focusing only on prevention. Doing so wastes the post-compromise window, which could have been spent gathering artifacts and evidence.
Shared hosting environments are not known for their customizability when it comes to what a security initiative needs, so a lot depends on the provider (see How to keep a shared web hosting server secure?).
Your incident was most likely part of a mass compromise, e.g., SQL injection of a site on your shared host that led to OS compromise. The attacker's script then searches for *.php files to inject itself into, and once infected, the worm/attacker moves on to the next host(s). While this may not be exactly what happened to you, it's a common case, and it demonstrates an attack pattern that is hardly sophisticated, nor impossible to detect and clean up.
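A post-hoc scan for that kind of injection can be surprisingly simple. As a rough sketch (the signature list below is illustrative only, not an authoritative ruleset, and `scan_php_tree` is just a name I picked):

```python
import re
from pathlib import Path

# Illustrative patterns often seen in mass PHP injections.
# This is NOT a complete ruleset -- tune it to what you actually find.
SIGNATURES = [
    re.compile(rb"eval\s*\(\s*base64_decode"),
    re.compile(rb"gzinflate\s*\(\s*base64_decode"),
    re.compile(rb"preg_replace\s*\(.*/e"),  # the deprecated /e modifier
]

def scan_php_tree(root):
    """Return (path, pattern) hits for every *.php file under root."""
    hits = []
    for path in Path(root).rglob("*.php"):
        data = path.read_bytes()
        for sig in SIGNATURES:
            if sig.search(data):
                hits.append((str(path), sig.pattern.decode()))
    return hits

if __name__ == "__main__":
    import sys
    for path, pattern in scan_php_tree(sys.argv[1] if len(sys.argv) > 1 else "."):
        print(f"SUSPECT: {path} matched {pattern}")
```

Run it from cron against your web root; anything it flags deserves a manual look, since legitimate code occasionally matches these patterns too.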
Depending on what you are able to deploy locally, your best bet may be to use a 3rd party service that looks for signs of infection, e.g.,
http://www.qualys.com/products/qg_suite/malware_detection/
http://www.stopthehacker.com/
Another option is a simple script that compares checksums of all your files and emails you when something differs. You could then use git hooks to update the script's config with the current 'good' checksums. I've had good luck with simple scripts like this for other things as well, e.g., SSL cert checking, verification of headers, etc.
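For illustration, a minimal version of such a checksum script might look like the following. The function names and the JSON baseline file are my own choices, and wiring the output up to email (via cron plus your host's mail command) is left out:

```python
import hashlib
import json
import sys
from pathlib import Path

def hash_tree(root):
    """Map each file under root to its SHA-256 hex digest."""
    root = Path(root)
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*")) if p.is_file()
    }

def compare(baseline, current):
    """Return files added, removed, or modified since the baseline."""
    added = sorted(set(current) - set(baseline))
    removed = sorted(set(baseline) - set(current))
    changed = sorted(f for f in baseline.keys() & current.keys()
                     if baseline[f] != current[f])
    return added, removed, changed

if __name__ == "__main__":
    # Usage: checkfiles.py <webroot> <baseline.json>
    root, baseline_file = sys.argv[1], Path(sys.argv[2])
    current = hash_tree(root)
    if not baseline_file.exists():
        # First run (or after a git-hook reset): record the 'good' state.
        baseline_file.write_text(json.dumps(current, indent=2))
        print(f"baseline written to {baseline_file}")
    else:
        baseline = json.loads(baseline_file.read_text())
        for label, files in zip(("added", "removed", "changed"),
                                compare(baseline, current)):
            for f in files:
                print(f"{label}: {f}")
```

A git post-checkout or post-merge hook can simply delete the baseline file so the next run re-records the new 'good' state.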
Recovery
Now that you have a detection mechanism in place, what happens when you get that dreaded email at 3am? Having a clear process in place will greatly reduce downtime and grey hairs.
As you mentioned git in your question, I will assume you are a current user. There are a few good posts on how to use git to manage a website; I would recommend a process similar to the ones detailed in:
http://danielmiessler.com/study/git/#website
https://stackoverflow.com/a/2129286/85663
http://feed.nixweb.com/2008/11/24/using-git-to-sync-a-website/
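As a rough sketch of the approach those posts describe (paths, branch name, and hostname below are placeholders, and your provider must allow SSH access with git installed server-side):

```shell
# On the server: a bare repo plus a hook that deploys each push.
git init --bare ~/site.git

cat > ~/site.git/hooks/post-receive <<'EOF'
#!/bin/sh
# Check the pushed code out into the web root.
GIT_WORK_TREE=$HOME/public_html git checkout -f master
EOF
chmod +x ~/site.git/hooks/post-receive

# From your workstation:
git remote add live ssh://user@yourhost/~/site.git
git push live master
```

The nice side effect for recovery: if your detection script fires, redeploying a known-good state is just another `git push` (or a `git checkout -f` on the server).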
Remember, that only covers your code, not necessarily all your data, e.g., databases and uploaded content.
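For the data side, a couple of cron entries can get you a nightly backup. The database name, credentials, and paths below are placeholders, and `--single-transaction` assumes InnoDB tables (note the escaped `\%` required inside crontab):

```shell
# m h dom mon dow  command
0 3 * * * mysqldump --single-transaction -u dbuser -p'secret' mydb | gzip > "$HOME/backups/mydb-$(date +\%F).sql.gz"
5 3 * * * tar czf "$HOME/backups/uploads-$(date +\%F).tar.gz" -C "$HOME/public_html" uploads
```

Ideally, also copy those archives off the shared host; backups living on a compromised machine are backups the attacker controls too.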