
The only time I've used my browser's proxy settings is when setting up Burp Pro, which makes me wonder:

What was the original use-case for these settings? Other than testing / debugging, what was this feature designed for? Do people actually use this in the wild for corporate content filtering, access control, etc?

(Note: I'm less interested in Tor or other anonymization technologies; I'm more interested in traditional infrastructure / original design intent)

For reference, I'm talking about these settings:

Firefox proxy settings page

Mike Ounsworth
  • I used to use it to bypass my school's firewall when I was a kid to look at porn in the library – DotNetRussell Jul 06 '18 at 15:06
  • Personally I've found it convenient the few times I've wanted to proxy one browser to a development system but not another browser. No idea what the original use-case was though. – AndrolGenhald Jul 06 '18 at 15:09
  • My company uses proxies for content filtering so yeah, they're used in the wild. – BgrWorker Jul 06 '18 at 15:09
  • Used them in a student house where only one computer was connected to (dialup) internet and several of us wanted to access the internet - set up the connected one as an HTTP proxy, and everyone else could browse at the same time. – Matthew Jul 06 '18 at 15:12
  • Are you asking why one would want to browse through a proxy, or are you asking why there are program-specific settings in the browser (instead of just using the system proxy)? – Bergi Jul 06 '18 at 16:24
  • @Bergi Yes to all of the above. Why does this feature exist (at either the browser or system level)? Maybe it's an artifact of me being under 30, but I have never used this feature ever in my life, so I'm wondering why it exists. – Mike Ounsworth Jul 06 '18 at 16:26
  • You may want to tweak the wording; the way it's presented seems to oppose Tor to non-nefarious uses, compounding the problem of people viewing privacy preservation as an inherently bad thing. – user36303 Jul 08 '18 at 13:16
  • @user36303 Wow, people are really touchy about that, eh? To avoid the argument, I've removed the word. – Mike Ounsworth Jul 08 '18 at 15:03
  • Thank you kindly. And yes, some people are very much aware of the constant normalization of the "privacy is wrong" meme. Touchy if you want. I find it helps if, for the sake of argument, you replace "privacy" with, say, another freedom you like more, and think about what you'd think if people were couching that freedom in derogatory terms. This is when you come to realize that it is worth pointing these cases out, since people often do not realize they are falling into that normalization trap. – user36303 Jul 08 '18 at 15:11
  • Oh I'm aware of the privacy concerns, but I explicitly put the words "non-nefarious" in there to avoid answers like "Proxies are for getting netflix content from other countries" (and even so, there is one deleted answer like that below). You have to admit that while not all uses of anonymization tech are nefarious, there certainly are a lot of nefarious uses. – Mike Ounsworth Jul 08 '18 at 15:19
  • This question seems quite broad, since I see proxies being used for tons of reasons. One not yet mentioned is to control routing in an international office, that is you select the proxy of a branch office to see the internet as if you were using their connection. – PlasmaHH Jul 09 '18 at 09:47
  • Yes, I will readily admit people with nefarious intent also like privacy :) – user36303 Jul 09 '18 at 14:25
  • There's hundreds of use cases for utilizing proxies, from utilizing corporate networks, UTM routers, tunneling over SSH, DNS crypt / secure DNS services, etc. – JW0914 Jul 09 '18 at 19:34
  • @JW0914 I'm sure there are hundreds of applications, but in my 25 years of being "a computers guy" I've never encountered any of them, hence the question :) feel free to post an answer. – Mike Ounsworth Jul 09 '18 at 22:54
  • @MikeOunsworth I listed some in my comment... Comments aren't meant to be full-fledged answers to questions. There's several users that posted well written answers that expanded on reasons, else search engine of choice would also be a great outlet ([DuckDuckGo](https://duckduckgo.com/?q=purpose+of+browser+proxies&atb=v117-1&ia=web) / [Google](https://www.google.com/search?source=hp&ei=1RNEW5GdK4ngjwTwq7vICA&q=purpose+of+browser+proxies&oq=purpose+of+browser+proxies&gs_l=psy-ab.3...5985.5985.0.6536.1.1.0.0.0.0.0.0..0.0....0...1.1.64.psy-ab..1.0.0....0.y9H3dKsOfuo)) – JW0914 Jul 10 '18 at 02:04
  • @JW0914 Thank you for constructive links rather than sass. – Mike Ounsworth Jul 10 '18 at 02:44

6 Answers


One of the early uses of HTTP proxies was as caching proxies, to make better use of expensive bandwidth and to speed up browsing by keeping heavily used content near the user. I remember a time when ISPs employed explicit, mandatory proxies for their users. This was at a time when most content on the internet was static and not user-specific, so caching was very effective. Even many years later, transparent proxies (i.e. requiring no explicit configuration) were still used in mobile networks.

Another major and early use case is proxies in companies. These are used to restrict access and are also still used for caching content. While many perimeter firewalls employ transparent proxies that need no configuration, classic secure web gateways usually have at least the ability to require explicit (non-transparent) proxy access. Usually not all traffic is sent through the proxy: internal traffic is typically excluded, either with explicit configuration or via a PAC file which selects the proxy depending on the target URL. A commonly used proxy in this area is the free Squid proxy, which provides extensive ACLs, distributed caching, authentication, content inspection, SSL interception, etc.
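A PAC file is just a JavaScript function the browser calls for every request. A minimal sketch (the hostnames and proxy address below are hypothetical examples, not from any real deployment):

```javascript
// Minimal proxy auto-config (PAC) sketch. The browser calls
// FindProxyForURL(url, host) for each request and uses the returned
// proxy directive. Hostnames and proxy address are hypothetical.
function FindProxyForURL(url, host) {
  // Internal traffic bypasses the proxy entirely.
  if (host.endsWith(".corp.example.com") || host === "localhost") {
    return "DIRECT";
  }
  // Everything else goes through the corporate proxy,
  // falling back to a direct connection if the proxy is down.
  return "PROXY proxy.example.com:3128; DIRECT";
}
```

Real PAC files often use browser-provided helpers such as `dnsDomainIs()` and `shExpMatch()`; plain string methods are used here so the sketch is self-contained.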

Using an explicit proxy for filtering has the advantage that in-band authentication against the proxy can be required, i.e. identifying the user by username instead of source IP address. Another advantage is that the connection from the client ends at the proxy; the proxy then forwards the request to the server named in the HTTP proxy request only if the ACL check passes, possibly after rewriting parts of the request (like making sure that the Host header actually matches the target host). By contrast, an inline IDS/IPS (the basic technology in many NGFWs) forwards the client's connection setup straight to the final server and does its ACL checks on the Host header, which might or might not match the IP address the client is connecting to. Some malware C2 channels exploit exactly this to bypass blocking or detection: they claim a whitelisted host in the Host header while actually connecting to a different IP address.
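The wire format makes this difference visible: a client talking to an explicit proxy puts the full URL in the request line (absolute form), so the proxy sees the real target before any server connection is opened. A small sketch that builds such a request (the target URL and header set are illustrative only):

```javascript
// Sketch of the absolute-form request an explicit-proxy client sends.
// Unlike a normal origin-form request ("GET /index.html HTTP/1.1"),
// the request line carries the full target URL, and the TCP connection
// goes to the proxy, not to the target server.
function buildProxyRequest(targetUrl) {
  const u = new URL(targetUrl);
  return [
    `GET ${u.href} HTTP/1.1`,        // absolute-form request line
    `Host: ${u.host}`,               // a strict proxy checks this matches the URL
    "Proxy-Connection: keep-alive",
    "", "",                          // blank line ends the header block
  ].join("\r\n");
}
```

A proxy that validates the Host header against the request-line URL closes exactly the mismatch trick described above.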

Steffen Ullrich
  • @MikeOunsworth It may be worth mentioning that there are a couple browser extensions (Foxy Proxy, Switchy Omega, etc.) that let you set rule-based proxy settings. I can set private hostnames (`*.lan.example.com`) to run through a proxy to their network while other connections use the default settings. – Michael Jul 06 '18 at 16:12
  • Back in the day we used proxies to slow down LAN connections from the desktop to the server so designers could experience the web sites they were building at modem speed. This was back when 56.6 was considered high speed. No designers ever took this very seriously. – Gaius Jul 08 '18 at 11:39
  • @Michael: For that matter, you can do that without an extension if you know enough JS to write a basic [proxy auto-config](https://en.wikipedia.org/wiki/Proxy_auto-config) file. – Ilmari Karonen Jul 09 '18 at 13:40
  • @IlmariKaronen Oh, neat. I'd never heard of that. It seems very straight forward. – Michael Jul 09 '18 at 14:01

Some examples as follows:

  • To enable a firewall rule like 'proxy server to any destination on 80, 443' instead of 'any internal to any external'
  • To monitor all websites visited through logs
  • To control, limit, filter websites visited through enforcing rules - these could be lists of approved sites, blacklisted sites, content categories etc
  • To enforce user authentication to use the internet - e.g. limiting to domain users, certificate holders
  • You could have separate egress points for different contexts if you're on VPN, in local office, on workstation, on a server
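Several of the rules above map directly onto a proxy configuration. As an illustrative sketch only (the directive names are real Squid ones, but the networks, file paths, and policy are hypothetical, and an auth_param scheme would also need to be configured for the proxy_auth ACL to work):

```
# Sketch of a Squid configuration enforcing rules like those above.
# Network ranges, domains, and paths are hypothetical examples.
http_port 3128

# Only the internal network may use the proxy at all.
acl localnet src 10.0.0.0/8
http_access deny !localnet

# Blocked sites (one domain per line in the file).
acl blocked dstdomain "/etc/squid/blocked-domains.txt"
http_access deny blocked

# Require authenticated users for everything else
# (needs a matching auth_param scheme configured as well).
acl authed proxy_auth REQUIRED
http_access allow authed
http_access deny all

# Log every site visited, per user.
access_log /var/log/squid/access.log
```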
AndyMac
  • Sounds like a swiss-army-knife feature. Is the reason that I've never had to use it in my lifetime because other technologies have taken its place - like more powerful router configs, getting redirected to a wifi login page, etc ? – Mike Ounsworth Jul 06 '18 at 15:19
  • Usually network traffic will be routed via the proxy rather than needing local browser configuration. A disadvantage with local config is you have to change it depending on the environment you're in. Usually now everything is invisible to you. However, if you install Tor, you can use a local browser proxy config to route traffic through it, so there are still legitimate current uses. – AndyMac Jul 06 '18 at 15:53

For the "original" reason, think back to 1994, when Netscape 0.9 was released. It had a "Mail and Proxies" Options dialog (per a copy of the manual). At that time, most Internet links were 56-kbit or fractional T1 lines between university campuses and government sites. There were several ways an HTTP proxy could help, or even be required:

  • The web-browser might be on a TCP/IP LAN with no (IP-level) routing to the Internet, but with a host dual-homed on that LAN and the Internet. The dual-homed host could run an HTTP proxy service and a DNS proxy service, allowing clients on the LAN to access the Web. (Note that the RFCs describing the standard private-address ranges and NAT, RFC 1597 and RFC 1631, were not published until March and May of 1994.)
  • Even if the LAN had routable addresses, or even after NAT was deployed, the off-site bandwidth was probably a lot less than the local bandwidth between clients and a potential proxy location. As long as the clients were browsing a significant amount of the same, static or slowly-changing, content, the proxy made things better for the clients (by returning cached content quickly) as well as the network operator (by freeing up bandwidth for other network functions, or reducing charges for data usage when billing was per-packet or per-byte).
  • If enough end users were behind proxies, it took the edge off what would 10 years later be called the "Slashdot effect": origin servers for worldwide-popular content would only have to serve it to each proxy, not to each end user.

Of course, sending all HTTP traffic through a designated process also makes that process a good control-point for applying security policy: filtering by keywords in the (decoded) request or response body; only allowing access to users who authenticate to the proxy; logging.

Yes, there are organizations that "push" a proxy policy to end-user devices they control, and enforce it by a restrictive policy at the border routers.

Also note that even if you think you're browsing on a "clean" network, your traffic may be going through a proxy; see for example the Transparent Proxy with Linux and Squid mini-HOWTO.

But it's true that an explicitly configured proxy may give little advantage, or even make browsing much worse, on today's Internet. Popular websites use CDNs, most content is dynamic, and even content that seems cacheable (like Google Maps tiles and YouTube video data) is varied based on browser version, OS, screen size, or even a random cookie meant to make it uncacheable, so a cache near the end user saves little bandwidth (although origin servers often have caches in front of them). For the uncacheable content, another cache adds RTT to every request, making browsing slower.

david
  • Note however that HTTP caches were rarely enough to avoid the so-called "/. effect"; they could only slightly diminish it. Note also that at the time *extremely* few non-finance websites used HTTPS (even webmail was mostly plain HTTP), allowing almost all web accesses to be intercepted. – curiousguy Jul 09 '18 at 01:34

Yes, they're frequently used in the corporate world. Perhaps less so now than in the past, but years ago they were commonly the only gateway from a local network to the Internet. In many cases only some domain users were even authorized to access the Internet, and a proxy server like ISA would be configured on the network edge, and users would have to authenticate in order to traverse it.

Beyond simply restricting who could access the Internet, this was indeed used for content filtering, content inspection, and reporting on who was spending all their work time looking for new jobs on Monster.com. There were other non-security functions as well, such as caching. 20 years ago it was not uncommon to have hundreds or even several thousand people in a facility connected to the Internet over a relatively small pipe like a fractional T-3, or even a T-1. It was very useful to be able to cache content at the local network edge so the very limited bandwidth to the outside world wasn't saturated with repeated requests for the same resources.

Xander

Academic journal access is often restricted, and sometimes part of the restriction is by IP address. By using my university's proxy server (which required a valid login) I could access journals while working from home (past tense only because I haven't tried it for a couple of years). I could either set it only for journal publishers' websites or use a proxy-switching extension in Firefox.

In theory I could have used the university VPN, but the one here requires a client program that doesn't run on Linux (let alone on my Chromebook). A previous VPN required setting a lot of rules in a clumsy UI to get basic things working (like any email not run by them).

Chris H

Also, there are use cases for controlling non-browser HTTP traffic:

Automatic-update and "phoning home" functionality in desktop applications can be controlled to an extent by letting actual, meatware-driven web browsers use a proxy that is less restrictive than what the network infrastructure allows by default.

In the same way, servers can be policed: while there are some processes on a server that create legitimate HTTP egress traffic, allowing all HTTP egress is most likely to help someone attempting to hack or sabotage the server. It also often makes sense to document all server egress traffic, which a proxy can do.

Also, low-bandwidth networks (e.g. satphone or UMTS) that need to conserve bandwidth often use proxies to re-encode large images and other multimedia content whose high-resolution, high-traffic format would not be useful, especially on mobile devices.

rackandboneman