Squid dates back to when a company or enterprise with a 1.5 Mbps T1 connection (or less) to the Internet would run a proxy server for many users on its network. This had the following benefits:
- It prevented duplicate requests for the same resource from going out over the relatively slow WAN link (compared to 10/100/1000 LAN speeds).
- Since all Internet-facing HTTP requests had to go through Squid, logging and filtering were easy to do.
- IIRC Squid has authentication support, so allowing only certain users to have Internet access is possible, as is tracking accesses per user.
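To illustrate the authentication point, here is a minimal `squid.conf` sketch using basic authentication backed by a password file. The helper and file paths (`/usr/lib/squid/basic_ncsa_auth`, `/etc/squid/passwd`) are typical of Debian-style packages and are assumptions; check your distribution's layout.

```
# Assumed paths - adjust for your install.
auth_param basic program /usr/lib/squid/basic_ncsa_auth /etc/squid/passwd
auth_param basic realm Proxy

# Only authenticated users may browse; everything else is denied.
acl authenticated proxy_auth REQUIRED
http_access allow authenticated
http_access deny all
```

With this in place, Squid's access log records the authenticated username on each request, which gives you the per-user tracking mentioned above.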
Squid is most beneficial when you have multiple users/systems on a network that will pull from its cache. Browsers have had some level of caching of their own for a long time.
The benefit might be smaller these days because:
- WAN speeds are much higher than they were in T1 days.
- Most websites serve dynamic content.
- Websites that rely heavily on AJAX/XHR/WebSockets for interactivity - very common now - don't generate much cacheable traffic.
- Media-heavy sites would consume a lot of cache space, and many deliberately discourage caching due to copyright concerns.
- HTTPS traffic is not cacheable without a MITM setup, which requires deploying the proxy's certificate to every browser that will use it.
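For the HTTPS point, the MITM approach corresponds to Squid's SSL-Bump feature. A rough sketch of the relevant `squid.conf` directives is below; it assumes a locally generated CA at `/etc/squid/myCA.pem` (you must import this CA into every client's trust store) and a modern Squid built with SSL support - directive names and helper paths vary by version and distribution.

```
# Assumed CA cert/key bundle; clients must trust this CA.
http_port 3128 ssl-bump generate-host-certificates=on cert=/etc/squid/myCA.pem

# Helper that mints per-site certificates on the fly (path is an assumption).
sslcrtd_program /usr/lib/squid/security_file_certgen -s /var/lib/squid/ssl_db -M 4MB

# Decrypt ("bump") all TLS connections so responses become cacheable/loggable.
ssl_bump peek all
ssl_bump bump all
```

This is exactly the certificate-deployment burden described above, which is why few sites bother with an HTTPS-caching proxy for ordinary use.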
For a single user, Squid is overkill unless you want its logging facilities, or you deliberately want to deny a system direct Internet access while still letting it reach the web in some way.
This is really brilliant information, thank you! I was wondering about the HTTPS part: these days it's obligatory in certain EU countries like Germany to provide HTTPS in business contexts (even for simple websites), so this really casts doubt on the usefulness of a web cache for the average use case. – frhd – 2018-11-17T22:03:30.267