Whilst the answer given by AaronS gets to the gist of it (rather harshly, if I may say so), I think the general principle that applies to any secure solution is defence in depth: using several layers of security to protect against attack, compromise and data leakage.
I'm not going to get into the debate about what percentage of attacks/leaks are internal, but whatever that figure is, it shouldn't be forgotten or ignored either. You might follow AaronS's suggestion to 'never open your LAN to the internet' and put the server in a DMZ, and an employee might still be able to dump your database onto a memory stick and walk out the door with it.
So if you're looking for buzzwords, I would start with the following:

- Put network protection in place first (e.g. firewall, intrusion detection/prevention).
- Apply all the necessary patches and updates to your software and OS (hardening).
- Use network segregation to separate your applications so that data flow is tightly controlled. A DMZ is one common example, but other network segmentation models also work.
- Make sure the web service is developed securely, tested and code reviewed. Even with the best firewall and DMZ in the world, if the application itself is vulnerable, most of that protection will have limited effect. Since the external-facing web service will likely consume data from, or otherwise communicate with, internal components, make sure the same methodology applies to them too.
- Make sure authentication and authorisation are used to control access to data (see the sketch below).
- Make sure you have solid monitoring and logging in place.

The list goes on. This is by no means a substitute for a proper security architecture and analysis, which the security people in your organisation should be able to perform.
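To make the authentication/authorisation point a little more concrete, here is a minimal sketch (Python/Flask, with a hypothetical hard-coded token-to-role mapping standing in for whatever identity provider you actually use) of a service endpoint that refuses to return data unless the caller is both authenticated and authorised:

```python
from flask import Flask, request, abort, jsonify

app = Flask(__name__)

# Hypothetical token store -- in practice this would be your identity
# provider (LDAP/AD, OAuth2 token introspection, etc.), not a dict.
TOKENS = {
    "alice-token": {"user": "alice", "roles": {"reports:read"}},
    "bob-token":   {"user": "bob",   "roles": set()},
}

def current_principal():
    """Authenticate: map the bearer token to a known principal, or reject."""
    auth = request.headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        abort(401)  # no credentials presented
    principal = TOKENS.get(auth.removeprefix("Bearer "))
    if principal is None:
        abort(401)  # unknown/expired token
    return principal

@app.route("/reports/<int:report_id>")
def get_report(report_id):
    principal = current_principal()
    # Authorise: being authenticated is not enough -- check the role too.
    if "reports:read" not in principal["roles"]:
        abort(403)  # authenticated, but not allowed to see this data
    return jsonify({"report_id": report_id, "owner": principal["user"]})

if __name__ == "__main__":
    app.run()
```

The point is that both checks live in the service itself, so even traffic that has already made it past the firewall and the proxy is still subject to them.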
As for the 'high security proxy' you were asking about: there are a few products that provide application-layer protection; see Application Firewall on Wikipedia. Those can be added to the mix as well, but I would not rely solely on them for your security.
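Purely to illustrate what 'application-layer protection' means in practice (a toy sketch, not a replacement for a real product such as ModSecurity), here is a naive WSGI middleware that could sit in front of the service above and reject requests matching a couple of obviously hostile patterns:

```python
import re
from urllib.parse import unquote

# Hypothetical, deliberately tiny rule set -- real application firewalls ship
# with large, maintained signature sets plus anomaly scoring.
BLOCKED_PATTERNS = [
    re.compile(r"(?i)\bunion\b.+\bselect\b"),  # crude SQL injection signature
    re.compile(r"(?i)<script\b"),              # crude XSS signature
]

class NaiveApplicationFirewall:
    """WSGI middleware that rejects requests matching known-bad patterns."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        # Inspect the decoded path and query string before the app sees them.
        probe = unquote(environ.get("PATH_INFO", "") + "?" + environ.get("QUERY_STRING", ""))
        if any(p.search(probe) for p in BLOCKED_PATTERNS):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Request blocked by application firewall\n"]
        return self.app(environ, start_response)

# Wrapping the Flask app from the previous sketch:
# app.wsgi_app = NaiveApplicationFirewall(app.wsgi_app)
```

A real application firewall does far more than this (maintained rule sets, anomaly scoring, virtual patching), which is exactly why it belongs in the mix alongside, not instead of, the other layers above.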