
For years the press has been writing about the problem that there are now very few IPv4 addresses available. But on the other hand, I'm using a server hosting company which gladly gives out public IPv4 addresses for a small amount of money. And my private internet connection comes with a public IPv4 address.

How is that possible? Is the problem as bad as the press wants us to believe?

oz1cz
  • Pre-purchased IPv4 blocks, and you can also still purchase IPv4 addresses at what I assume would be high rates from IPv4 broker companies. – Daniel Cazares Jan 28 '18 at 14:06
    Some companies still have lots of IPv4 addresses on hand. Others have very little. I have to think very carefully about using up an IPv4 address; as a result I have quite a few IPv6-only machines. – Michael Hampton Jan 28 '18 at 14:25
    It also gives you some perspective to the amount of pain ISPs are willing to cause other people just to avoid having to deploy IPv6. – user253751 Jan 28 '18 at 22:34
    I wouldn't call it *evil*, but it certainly is a pain. That said, most *consumers* probably wouldn't care they're behind a nat, assuming facebook and whatsapp work ._. – Journeyman Geek Jan 29 '18 at 02:57
  • My previous employer has class B address space though they only use hundreds of PCs. They have such a surplus of IPv4 addresses that they literally have an IP scheme to identify PCs by their room. For example, the 3rd PC in room number 103 could have an IP of `a.b.103.3`. There were more than 255 rooms, but they had some tricks for that, such as adding 100 for certain rooms, so the second PC in room 305 could be `a.b.5.102`. If you knew the scheme and remembered the formula for cases like the above, you could mentally map back and forth. Still, they own about 65,000 addresses that sit unused. – Loduwijk Jan 29 '18 at 21:09
    @JourneymanGeek Well, average consumers don't really care about anything they don't understand. There are ideas for distributed social media, for example (because that makes it very difficult to censor), but nobody cares about such things until *after* they've taken off the ground, which they can't because of NAT. I daresay NAT is one of the reasons we've ended up with a centralized Web, because it's basically impossible to host your own website without paying someone. – user253751 Jan 29 '18 at 23:19
    (To summarize, just because average consumers don't care about an issue doesn't mean it doesn't severely impact them) – user253751 Jan 29 '18 at 23:20
    As @Azendale pointed out, game server hosting is a big one. Why can't I just run minecraft_server.exe and give my friends my address? Because of NAT. "Consumers" most certainly do want to run game servers sometimes. – user253751 Jan 29 '18 at 23:25
  • Where I work, the network has been migrated to 10/8. Only specific legacy systems still have routable addresses. – RonJohn Jan 30 '18 at 00:34
    On a related note, Hetzner have been using NAT for their virtual server offerings for some time now. See https://serverfault.com/q/725255/190981 – Anthony Geoghegan Jan 30 '18 at 19:06
    @immibis You can't host your own webserver b/c your ISP consumer connection "forbids" it. They're selling you a lot of bandwidth cheap on the hope you don't use much of it, but if you have your own web server and it gets popular that screws it up. I'm sure if you paid for a business line you could host whatever you wanted. – Andy Jan 30 '18 at 23:25
    @Andy I've *never* heard of an ISP enforcing that as long as you don't use too much bandwidth, possibly because it would drive away business. (And they also try to prevent you using excessive bandwidth via any means) But I think game servers are a better example; many people want to play games with their friends and they have relatively fixed bandwidth requirements. – user253751 Jan 30 '18 at 23:31
  • @immibis When I had Comcast I know they did. – Andy Jan 30 '18 at 23:43
  • @immibis Comcast should be nuked from orbit, but we're starting to digress a bit. – Andy Jan 30 '18 at 23:56
  • @Andy Also note that they're presumably applying the consumer definition of "server" under that clause. A web server is a server; an FTP server is a server; a game server may be a server depending on the circumstances. A peer-to-peer system is not a server even if it works by listening on a port (however, p2p systems are probably prohibited under a different clause). A system like Dropbox is not going to be classified as a server by an ISP, even if it were to start doing peer-to-peer file transfers, even if it works the same as an FTP server under the hood. – user253751 Jan 31 '18 at 00:08
  • So the "no servers" restriction of ISPs doesn't make IPv6 any less useful. – user253751 Jan 31 '18 at 00:11
    @immibis They defined it in their contract, I don't remember the exact wording. At the end of the day, they decide if you're violating your TOS. My point though is that even with IPv6 I don't think we'd see everyone running their own webservers (ISPs would still have "no server" clauses), and NAT isn't responsible for the "centralized" web. – Andy Jan 31 '18 at 00:19
  • It's not "responsible" but it is definitely a contributing factor. – user253751 Jan 31 '18 at 02:07
  • People are getting complacent because we were all told it would be a crisis around 2012 or 2013, and it's 2018 and it still doesn't feel like a crisis, so people have this false sense that there's nothing to worry about after all. But the "crisis" is already happening; it's just not very visible yet, at least to end clients (e.g. customers of hosting companies or ISPs). It's affecting companies higher up the chain. – thomasrutter Jan 31 '18 at 05:43
    The fact that your ISP has enough public IPv4 addresses to kick around its customer-base on a whim is arguably part of the problem - early ISPs got given far too many so there's basically nothing left for other high-level entities to snap up (even though you may not witness this at the customer level). It's not like ISPs are keen to "hand back" their allocated IPs to help with the global shortage. – Lightness Races in Orbit Feb 01 '18 at 15:06
    @LightnessRacesinOrbit as far as I'm concerned, some people hoarding IPv4s is a Good Thing if it helps the rest of the Internet get off their ass to IPv6. If people hadn't handed back their addresses this transition might've been done by now instead of being kicked down the road several years. – user253751 Feb 02 '18 at 04:17

10 Answers


It's very bad. Here are some examples, from first-hand experience, of what consumer ISPs have been doing to fight the shortage of IPv4 addresses:

  • Repeatedly shuffling around IPv4 blocks between cities causing brief outages and connection resets for customers.
  • Shortening DHCP lease times from days to minutes.
  • Allowing users to choose whether they want network address translation (NAT) on the Customer Premises Equipment (CPE), then retroactively turning it on for everybody anyway.
  • Enabling NAT on CPE for customers who already used the opportunity to opt out of NAT.
  • Reducing the cap on the number of concurrently active media access control (MAC) addresses enforced by CPE. (Each device that requests its own DHCP lease from the ISP consumes a public IPv4 address, so capping MAC addresses caps how many addresses one customer can consume.)
  • Deploying carrier-grade NAT (CGN) for customers who had a real IP address when they signed up for the service.

All of these reduce the quality of the product the ISP is selling to its customers. The only sensible explanation for why they would do this to their customers is a shortage of IPv4 addresses.

The shortage of IPv4 addresses has led to fragmentation of the address space, which brings multiple shortcomings of its own.

Without NAT there is no way we could get by today with the 3700 million routable IPv4 addresses. But NAT is a brittle solution that gives you less reliable connectivity and problems that are difficult to debug. The more layers of NAT, the worse it gets. Two decades of hard work have made a single layer of NAT mostly work, but we have already passed the point where a single layer of NAT was sufficient to work around the shortage of IPv4 addresses.
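The "3700 million" figure can be sanity-checked by subtracting the major reserved ranges from the 2^32 total. A rough sketch (the list of reserved blocks below is not exhaustive, just the big ones):

```python
import ipaddress

# Major reserved / non-globally-routable IPv4 ranges (not an exhaustive list)
reserved = [
    "0.0.0.0/8",        # "this network"
    "10.0.0.0/8",       # private
    "100.64.0.0/10",    # shared address space for CGN
    "127.0.0.0/8",      # loopback
    "169.254.0.0/16",   # link-local
    "172.16.0.0/12",    # private
    "192.168.0.0/16",   # private
    "224.0.0.0/4",      # multicast
    "240.0.0.0/4",      # reserved for future use
]

total = 2 ** 32
unusable = sum(ipaddress.ip_network(n).num_addresses for n in reserved)
routable = total - unusable
print(routable)  # roughly 3.7 billion
```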

kasperd
    One thing to add is that NAT also leads to malicious users impacting normal users and generally makes IP unreliable as a user-differentiation mechanism. For example, Wikipedia [blocking almost every Qatari user](https://en.wikinews.org/wiki/Qatari_proxy_IP_address_temporarily_blocked_on_Wikipedia) due to one or a few users' vandalism. – IllusiveBrian Jan 28 '18 at 17:55
    @IllusiveBrian makes a valid point. I inherited ad-targeting software that used IP addresses as a primary identifier. This is nowhere near sufficient nowadays and has had to be extensively modified to keep it reliable. India and Greece seem to be two of the worst affected countries. I can see an ad being hit 100+ times per day from the same IPv4, but each hit can be a different user, determined by other tracking methods – Darren H Jan 28 '18 at 19:46
    The shortage is also dependent on the region of the world where the ISP is based. Some regions have set aside a tiny bit of space for new ISPs to start with, other regions don't. Some regions allow trading/transferring of addresses to/from other regions, some don't. Some have reserved IPv4 space for IPv6 transition mechanisms etc. – Sander Steffann Jan 28 '18 at 20:02
  • What's bugging me about this is that no large tech company has pushed to IPv6, like "All our traffic will be re-routed to IPv6, and we're gonna shut down IPv4 for good by the end of 2020", because once the movement is there, the change will also come. I mean the technology is here (I guess the infrastructure too), why rely on NAT/subnetting when we can advance? – Filnor Jan 29 '18 at 09:56
    I heard that NAT protects outdated vulnerable devices from many network attacks. – Dmitriy Sintsov Jan 29 '18 at 13:21
    @DmitriySintsov no more than a simple stateful firewall would. If an edge device can do NAT, it can do stateful firewalling. – mfinni Jan 29 '18 at 14:05
    @DmitriySintsov NAT changes things so that instead of every computer being accessible (unless you have a firewall), every computer is inaccessible (and there's nothing you can do about it, in the case of CG NAT). Sure, the latter is better for security, but by that reasoning you should unplug your computer, lock it in a safe and throw away the key - because it's more secure. – user253751 Jan 29 '18 at 23:24
    @DarrenH: Arguably from the customer standpoint that's an **advantage** of NAT/CGN. – R.. GitHub STOP HELPING ICE Jan 30 '18 at 01:18
  • @R.. I completely agree. Although the question is more about the extent of the issue, rather than its effects – Darren H Jan 30 '18 at 04:09
    ISPs are greedy bastards. They can easily force users behind NAT just to be able to charge for not being behind NAT. – n0rd Jan 30 '18 at 21:25
    @DarrenH "ad-targeting software that used IP addresses as a primary identifier... and has had to be extensively modified to keep it reliable. " Well that reason alone is enough to keep NAT. – Andy Jan 30 '18 at 23:35
  • @andy your tone suggests disagreement with me but I'm unsure what you disagree about. The question doesn't ask if NAT is good or bad and my comment doesn't suggest. I only mention the extent of it, not whether or not it is desirable – Darren H Jan 30 '18 at 23:52
    @DarrenH Its just a comment about not liking ad software, whatever tone you're feeling is in your own head. – Andy Jan 30 '18 at 23:55
    @Andy our personal likes and dislikes are off topic – Darren H Jan 30 '18 at 23:58
    @chade_ it's not possible for a company to adopt IPv6 and "shut down IPv4". The two aren't compatible so IPv4 has to be kept alive alongside IPv6, until some distant future point after 1) *every other ISP also supports IPv6* and 2) *every website or server can be reached on IPv6* and 3) *all software on servers, routers and devices has been updated to support IPv6*. and 4) *it becomes generally agreed that IPv4 is now optional*. Even the transitional technologies require all or some of these pre-requisites to have been satisfied. – thomasrutter Jan 31 '18 at 05:49
  • @immibis NAT devices are not unplugged, they just have their inbound connectivity limited. Yes it is true that in some cases having direct IP is charged additionally. Some outdated devices (hardware boxes) also have limited support for IPv6. But perhaps inbound NAT should be easier to configure and more established approach. – Dmitriy Sintsov Jan 31 '18 at 08:25
  • Perhaps inbound NAT should be defined flexibly at application level, not as the simple inbound port to private address routing. But that would require to re-design stacks and protocols. – Dmitriy Sintsov Jan 31 '18 at 08:28
    @thomasrutter : It is *possible*. It would just harm their bottom line (although if, say, Facebook told everyone to support IPV6 it might well happen). – Martin Bonner supports Monica Feb 01 '18 at 17:30
    can you expand some of these initialisms/acronyms? I know what most of them are but not "CPE" and "CAM" – strugee Feb 02 '18 at 01:23
  • Why does the IPv4 shortage cause "Reducing the cap on number of concurrently active MAC addresses enforced by CPE."? – Qsigma Feb 02 '18 at 12:47
  • @Qsigma Usually each device will request one IPv4 address through DHCP. The more devices request IPv4 addresses from the ISP's DHCP server, the more IPv4 addresses will be used. By enforcing a cap on the number of MAC addresses, the ISP can reduce the number of devices consuming IPv4 addresses. I have first hand experience with three different ISPs enforcing such a cap when the CPE isn't doing NAT. Once there is a NAT in front I don't know of any justification for such a cap, and I haven't seen such a cap applied behind a NAT. – kasperd Feb 02 '18 at 23:39
  • Interesting, thanks. Pls consider summarising that explanation in your answer. – Qsigma Feb 03 '18 at 08:20
  • Out of curiosity, where does "*(s)hortening DHCP lease times from days to minutes*" happen? Do I -- as a consumer customer of Cox Communications -- get a 24 hour lease on a routable address solely because Cox grabbed a huge chunk 20 years ago and is good at managing its network? – RonJohn Feb 05 '18 at 09:01
  • @RonJohn I have seen one ISP in Denmark where DHCP leases have to be refreshed every 15 minutes. – kasperd Feb 06 '18 at 09:22
  • Maybe it's time reclaim the 13 /8 ranges from the DoD. And also allocate the 16 /8 ranges in the class E range. – Calmarius May 09 '19 at 16:35

Before we started to run out of IPv4 addresses, we didn't (widely) use NAT. Every internet-connected computer would have its own globally unique address. When NAT was first introduced, it was to move from giving an ISP's customers one real address per device the customer used/owned to giving one customer one real address. That fixed the problem for a while (years) while we were supposed to be switching to IPv6. Instead of switching to IPv6, (mostly) everybody waited for everybody else to switch, and so (mostly) nobody rolled out IPv6. Now we are hitting the same problem again, but this time a second layer of NAT is being deployed (CGN) so that ISPs can share one real address between multiple customers.

IP address exhaustion would not be a big deal if NAT were not terrible, including in the case where the end user has no control over it (carrier-grade NAT, or CGN).

But I would argue that NAT is terrible, especially in the case where the end user does not have control over it. And (as a person whose job is network engineering/administration but has a software engineering degree) I would argue that by deploying NAT instead of IPv6, network administrators have shifted the weight of solving the address exhaustion out of their field and on to end users and application developers.

So (in my opinion), why is NAT a terrible, evil thing that should be avoided?

Let's see if I can do it justice in explaining what it breaks (and what issues it causes that we've become so accustomed to that we don't even realize it could be better):

  • Network layer independence
  • Peer to peer connections
  • Consistent naming and location of resources
  • Optimal routing of traffic, hosts knowing their real address
  • Tracking the source of malicious traffic
  • Network protocols that separate data and control into separate connections

Let's see if I can explain each of those items.

Network layer independence

ISPs are supposed to just pass around layer 3 packets and not care what is in the layers above that. Whether you are passing around TCP, UDP, or something better/more exotic (SCTP maybe? or even some other protocol that is better than TCP/UDP, but is obscure because of a lack of NAT support), your ISP is not supposed to care; it's all supposed to just look like data to them.

But it doesn't -- not when they are implementing the "second wave" of NAT, "Carrier Grade" NAT. Then they necessarily have to look at, and support, the layer 4 protocols you want to use. Right now, that practically means you can only use TCP and UDP. Other protocols would either just be blocked/dropped (vast majority of the cases in my experience) or just forwarded to the last host "inside" the NAT that used that protocol (I've seen 1 implementation that does this). Even forwarding to the last host that used that protocol isn't a real fix -- as soon as two hosts use it, it breaks.
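The reason the NAT must understand layer 4 can be sketched as a translation table: outbound flows are keyed on the layer-4 port number, and a protocol without ports gives the NAT nothing to demultiplex returning packets with. A toy model (the external address and port range are made-up values for illustration):

```python
# Toy NAPT model: each outbound flow (inside ip, inside port, protocol) is
# mapped to (external ip, allocated external port). Without a port number
# there is no key, so at most one inside host per protocol could be served.
EXTERNAL_IP = "203.0.113.1"   # hypothetical external address of the NAT

nat_table = {}                # (src_ip, src_port, proto) -> (ext_ip, ext_port)
next_port = 40000             # hypothetical start of the allocation range

def translate_outbound(src_ip, src_port, proto):
    global next_port
    key = (src_ip, src_port, proto)
    if key not in nat_table:
        nat_table[key] = (EXTERNAL_IP, next_port)
        next_port += 1
    return nat_table[key]

# Two inside hosts can share one external address only because the
# allocated external ports differ:
a = translate_outbound("192.168.0.23", 51000, "tcp")
b = translate_outbound("192.168.0.42", 51000, "tcp")
```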

I imagine there are some replacement protocols for TCP & UDP out there that are currently untested and unused just because of this issue. Don't get me wrong, TCP & UDP were impressively well designed and it is amazing how both of them have been able to scale up to the way we use the internet today. But who knows what we've missed out on? I've read about SCTP and it sounds good, but never used it because it was impractical because of NAT.

Peer to Peer connections

This is a big one. Actually, the biggest in my opinion. If you have two end users, both behind their own NAT, no matter which one tries to connect first, the other user's NAT will drop their packet and the connection will not succeed.

This affects games, voice/video chat (like Skype), hosting your own servers, etc.

There are workarounds. The problem is that those workarounds cost developer time, end user time and inconvenience, or service infrastructure costs. And they aren't foolproof and sometimes break. (See other users' comments about the outage suffered by Skype.)

One workaround is port forwarding, where you program the NAT device to forward a specific incoming port to a specific computer behind the NAT device. There are entire websites devoted to how to do this for all the different NAT devices there are out there. See https://portforward.com/. This typically costs end user time and frustration.
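On a Linux box acting as the NAT device, such a port forward boils down to a couple of rules like the following (a sketch; the interface name `eth0` and the inside address `192.168.0.23` are assumptions about the setup):

```shell
# Rewrite inbound TCP port 80 on the external interface to the inside web server
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
    -j DNAT --to-destination 192.168.0.23:80
# Allow the forwarded traffic through the filter table
iptables -A FORWARD -p tcp -d 192.168.0.23 --dport 80 \
    -m conntrack --ctstate NEW,ESTABLISHED,RELATED -j ACCEPT
```

Consumer routers hide this behind a web form, which is exactly the configuration step that sites like portforward.com walk end users through.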

Another workaround is to add support for things like hole punching to applications, and to maintain server infrastructure that is not behind a NAT to introduce two NATed clients. This usually costs development time, and puts developers in a position of maintaining server infrastructure where it would not previously have been required.
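The core trick of UDP hole punching is that both sides transmit first, so each NAT installs an outbound mapping before the peer's packet arrives. A minimal sketch with two sockets (run here on loopback, so no real NATs are involved, and the rendezvous server that would exchange the peers' public endpoints is omitted):

```python
import socket

def make_peer():
    # In reality each peer would sit behind its own NAT; here both bind locally.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("127.0.0.1", 0))
    s.settimeout(2)
    return s

a, b = make_peer(), make_peer()
addr_a, addr_b = a.getsockname(), b.getsockname()
# A real rendezvous server would tell each peer the other's *public*
# (NAT-mapped) endpoint; on loopback we just use the local addresses.

# Both peers send first: each send would open an outbound NAT mapping
# that the other peer's packet can then traverse.
a.sendto(b"punch", addr_b)
b.sendto(b"punch", addr_a)

msg_at_a, _ = a.recvfrom(64)
msg_at_b, _ = b.recvfrom(64)
```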

(Remember what I said about deploying NAT instead of IPv6 shifting the weight of the issue from network administrators to end users and application developers?)

Consistent naming/location of network resources

Because a different address space is used on the inside of a NAT than on the outside, any service offered by a device inside a NAT has multiple addresses to reach it by, and the correct one to use depends on where the client is accessing it from. (This is still a problem even after you get port forwarding working.)

If you have a web server inside a NAT, say at 192.168.0.23 port 80, and your NAT device (router/gateway) has an external address of 35.72.216.228, and you set up port forwarding for TCP port 80, your web server can now be reached either at 192.168.0.23 port 80 or at 35.72.216.228 port 80. The one you should use depends on whether you are inside or outside of the NAT. If you are outside of the NAT and use the 192.168.0.23 address, you will not get where you are expecting. If you are inside the NAT and use the external address 35.72.216.228, you might get where you want to, if your NAT implementation is an advanced one that supports hairpin, but then the web server serving your request will see the request as coming from your NAT device. This means that all traffic must go through the NAT device, even if there is a shorter path in the network behind the NAT, and it means that logs on the web server become much less useful because they all list the NAT device as the source of the connection. If your NAT implementation doesn't support hairpin, then you will not get where you were expecting to go.

And this problem gets worse as soon as you use DNS. Suddenly, if you want everything to work properly for something hosted behind NAT, you will want to give different answers for the address of the service, based on who is asking (AKA split-horizon DNS, IIRC). Yuck.
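In pseudo-resolver form, split-horizon DNS means the answer depends on who is asking. A toy sketch reusing the example addresses above (the zone data and the hostname are hypothetical):

```python
import ipaddress

INSIDE_NET = ipaddress.ip_network("192.168.0.0/24")  # the network behind the NAT

def resolve(name, client_ip):
    """Toy split-horizon resolver: internal clients get the inside address,
    everyone else gets the NAT device's external address."""
    records = {
        "www.example.com": {
            "inside": "192.168.0.23",
            "outside": "35.72.216.228",
        }
    }
    view = "inside" if ipaddress.ip_address(client_ip) in INSIDE_NET else "outside"
    return records[name][view]

print(resolve("www.example.com", "192.168.0.50"))  # 192.168.0.23
print(resolve("www.example.com", "198.51.100.7"))  # 35.72.216.228
```

Real deployments do the same thing with, e.g., BIND "views", but the extra moving part is the point: the same name must resolve differently depending on the querier.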

And that is all assuming you have someone knowledgeable about port forwarding and hairpin NAT and split horizon DNS. What about end users? What are their chances of getting this all set up right when they buy a consumer router and some IP security camera and want it to "just work"?

And that leads me to:

Optimal routing of traffic, hosts knowing their real address

As we have seen, even with advanced hairpin NAT, traffic doesn't always flow through the optimal path. That is even in the case where a knowledgeable administrator sets up a server and has hairpin NAT. (Granted, split-horizon DNS can lead to optimal routing of internal traffic in the hands of a network administrator.)

What happens when an application developer creates a program like Dropbox and distributes it to end users who don't specialize in configuring network equipment? Specifically, what happens when I put a 4GB file in my shared folder, and then try to access it on the next computer over? Does it transfer directly between the machines, or do I have to wait for it to upload to a cloud server through a slow WAN connection, and then wait a second time for it to download through the same slow WAN connection?

For a naive implementation, it would be uploaded and then downloaded, using Dropbox's server infrastructure that is not behind a NAT as a mediator. But if the two machines could only realize that they are on the same network, then they could just directly transfer the file much faster. So for our first less-naive implementation try, we might ask the OS what IP(v4) addresses the machine has, and then check that against other machines registered on the same Dropbox account. If it's in the same range as us, just directly transfer the file. That might work in a lot of cases. But even then there is a problem: NAT only works because we can re-use addresses. So what if the 192.168.0.23 address and the 192.168.0.42 address registered on the same Dropbox account are actually on different networks (like your home network and your work network)? Now you have to fall back to using the Dropbox server infrastructure to mediate. (In the end, Dropbox tried to solve the problem by having each Dropbox client broadcast on the local network in hopes of finding other clients. But those broadcasts do not cross any routers you might have behind the NAT, meaning it is not a full solution, especially in the case of CGN.)
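That "same range as us" heuristic, and the way it fails, can be shown directly (a sketch; the /24 prefix length is itself a guess the client would have to make, which is part of the problem):

```python
import ipaddress

def maybe_same_lan(my_addr, peer_addr, prefix_len=24):
    """Guess whether a peer is on our LAN by comparing private address ranges.
    This is only a guess: RFC 1918 addresses are reused on countless networks."""
    me, peer = ipaddress.ip_address(my_addr), ipaddress.ip_address(peer_addr)
    if not (me.is_private and peer.is_private):
        return False
    return peer in ipaddress.ip_network(f"{my_addr}/{prefix_len}", strict=False)

# Looks like the same LAN...
print(maybe_same_lan("192.168.0.23", "192.168.0.42"))  # True
# ...but 192.168.0.42 might equally be a machine on a completely different
# network (home vs. work) that happens to reuse the same private range,
# in which case a direct connection attempt goes to the wrong host entirely.
```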

Static IPs

Additionally, since the first shortage (and first wave of NAT) happened when many consumer connections were not always-on (dialup, for example), ISPs could make better use of their addresses by only allocating public/external IP addresses while you were actually connected. That meant that when you connected, you got whatever address was available, instead of always getting the same one. That makes running your own server that much harder, and it makes developing peer-to-peer applications harder because they need to deal with peers moving around instead of being at fixed addresses.

Obfuscation of the source of malicious traffic

Because NAT re-writes outgoing connections to be as if they are coming from the NAT device itself, all of the behavior, good or bad, is rolled into one external IP address. I have not seen any NAT device that logs each outgoing connection by default. This means that by default, the source of past malicious traffic can only be traced to the NAT device it went through. While the more enterprise or carrier class equipment can be configured to log each outgoing connection, I have not seen any consumer routers that do it. I certainly think it will be interesting to see if (and for how long) ISPs will keep a log of all TCP and UDP connections made through CGNs as they roll them out. Such records would be needed to deal with abuse complaints and DMCA complaints.

Some people think that NAT increases security. If it does, it does so through obscurity. The default drop of incoming traffic that NAT makes mandatory is the same as having a stateful firewall. It is my understanding that any hardware capable of doing the connection tracking needed for NAT should be able to run a stateful firewall, so NAT doesn't really deserve any points there.
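The security property NAT gets credit for (dropping unsolicited inbound traffic) is exactly what a two-rule stateful firewall provides, with no address rewriting at all. A sketch using iptables (`wan0` is a hypothetical external interface name):

```shell
# Allow replies belonging to connections initiated from inside
iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# Drop anything arriving on the external interface that isn't such a reply
iptables -A FORWARD -i wan0 -j DROP
```

The conntrack state table these rules consult is the same machinery NAT needs anyway, which is why the "NAT as security" argument earns no extra points.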

Protocols that use a second connection

Protocols like FTP and SIP (VoIP) tend to use separate connections for control and actual data content. Each protocol that does this must have helper software called an ALG (application layer gateway) on each NAT device it passes through, or work around the issue with some kind of mediator or hole punching. In my experience, ALGs are rarely if ever updated and have been the cause of at least a couple of issues I have dealt with involving SIP. Any time I hear someone report that VoIP didn't work for them because audio only worked one way, I instantly suspect that somewhere, there is a NAT gateway dropping UDP packets it can't figure out what to do with.
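FTP is a concrete example of why ALGs exist: the server's passive-mode (PASV) reply embeds an IP address and port *inside the application payload*, which plain NAT never rewrites. Parsing one such reply shows the private address leaking out (the reply text below is a made-up example using the addresses from earlier):

```python
import re

# A passive-mode reply from an FTP server behind NAT. The six numbers are
# the four address octets plus the data port split into high and low bytes.
pasv_reply = "227 Entering Passive Mode (192,168,0,23,195,80)"

nums = [int(n) for n in
        re.search(r"\((\d+(?:,\d+){5})\)", pasv_reply).group(1).split(",")]
data_ip = ".".join(str(n) for n in nums[:4])
data_port = nums[4] * 256 + nums[5]

print(data_ip, data_port)  # 192.168.0.23 50000
```

An outside client is told to connect to 192.168.0.23, which is unreachable from its side of the NAT, so either an ALG rewrites the payload in flight or the transfer fails.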

In summary, NAT tends to break:

  • alternative protocols to TCP or UDP
  • peer-to-peer systems
  • accessing something hosted behind the NAT
  • things like SIP and FTP. ALGs to work around this still cause random and weird problems today, especially with SIP.

At the core, the layered approach that the network stack takes is relatively simple and elegant. Try to explain it to someone new to networking, and they inevitably assume their home network is probably a good, simple network to try to understand. I've seen this lead in a couple of cases to some pretty interesting (excessively complicated) ideas about how routing works because of confusion between external and internal addresses.

I suspect that without NAT, VoIP would be ubiquitous and integrated with the PSTN, and that making calls from a cell phone or computer would be free (except for the internet you already paid for). After all, why would I pay for phone when you and I can just open a 64K VoIP stream and it works just as well as the PSTN? It seems like today, the number 1 issue with deploying VoIP is going through NAT devices.

I suspect we don't usually realize how much simpler many things could be if we had the end-to-end connectivity that NAT broke. People still email (or Dropbox) themselves files because of the core problem of needing a mediator when two clients are behind NAT.

Azendale
  • While NAT may not be the ideal solution, a hierarchical model for world-wide addressing and routing seems better to me than trying to use a "flat" addressing space, which is what IPv6 would impose. – supercat Jan 29 '18 at 19:34
    @supercat IPv6 addresses are globally unique, [but not flat](https://www.ietf.org/rfc/rfc3513.txt?number=3513) (to support routing, which needs to be hierarchical). Seems to me that if we want any two Internet-connected hosts to theoretically be able to communicate, globally unique addresses in some form are necessary. – Jakob Jan 29 '18 at 20:43
  • @Jakob: A hierarchical addressing format will allow for addresses to be much longer in extreme cases than they would need to be for simple cases. A fixed 128-bit address doesn't seem very hierarchical to me unless all components are required to be very short. – supercat Jan 29 '18 at 21:15
    @supercat It's hierarchical. Seems like you're talking about variable-length addresses. That's an interesting idea, but probably not necessary given the abundance of IPv6 addresses. – Jakob Jan 29 '18 at 21:34
    @supercat It's unfortunately a persistent myth that IPv6 still doesn't have enough space for everyone. You could give a /48 to everyone on earth and still have vast amounts of space left over. To exhaust the currently allocated `2000::/3` you would have to repeat that exercise over 4,000 times! or give everyone a /34. But a /48 is good enough for virtually everyone, and those who need more can easily get it. Even if that weren't enough, there's still `4000::/3`, `6000::/3`, etc., available. We have a LOT of room; it's time to use it. See also [RFC 6177](https://tools.ietf.org/html/rfc6177). – Michael Hampton Jan 29 '18 at 23:15
  • @MichaelHampton I think the point is the hierarchical routing introduces inefficiencies in the address usage, which might lead to exhaustion of easily routable prefixes. Say, if we were to use 3 bits for the "global address" prefix, 9 bits for a country code, 12 bits for an area within a country, 24 bits for an organization number, 16 bits for subnet within the organization, and 64 bits for each subnet. - then when an area exceeds 2^24 organizations it'll have to start borrowing from other areas and still have a mess. (Granted not on the same scale as IPv4) – user253751 Jan 29 '18 at 23:33
  • Re last paragraph: I wonder what the killer app for end-to-end connectivity (again) will be. My bass-ackwards ISP *still* doesn't offer IPv6 to consumers. – user253751 Jan 29 '18 at 23:35
  • @immibis The old POTS telephone network is routed something like that. Beyond the country code, every region has something different. It's not easy to determine in advance if you even have a complete phone number! It's only like that because all the phone networks were developed separately and then interconnected much later. Fortunately we don't have to repeat such mistakes with the Internet. – Michael Hampton Jan 29 '18 at 23:37
  • @MichaelHampton Personally the restriction that most concerns me is 2^16 subnets within an organization (for a /48). It's a lot, sure, but it still seems low enough to limit flexibility. Or if the organization gets a /32 then they have plenty of subnets, but then they're not really being a good Internet citizen by conserving addresses (that's equivalent to ~8 IPv4 addresses). Or they could subdivide their /48 (or /64) into 2^32 /80 (or /96) subnets, but they will have a headache dealing with all sorts of existing software and hardware that assumes subnets can't be smaller than /64. – user253751 Jan 29 '18 at 23:40
  • That's why I think it would be better to have either 192-bit addresses (/64 global routing, /64 local routing, /64 subnet address) or to relax the /64 minimum subnet restriction. Still IPv6 is "good enough". – user253751 Jan 29 '18 at 23:43
    @immibis You seem to have missed something. Organizations are not limited to getting either a /48 or a /32. They can get virtually any size block. It could be a /44 or a /40 or /39 or /47 or whatever. You also should read RFC 6177. – Michael Hampton Jan 29 '18 at 23:52
  • @MichaelHampton: In what I would call a "truly" hierarchical system [e.g. DNS lookup], there isn't a fixed number of levels. Instead, each node can represent an endpoint or contain other nodes (each of which can be an endpoint or contain other nodes, etc.). When IPv4 was being implemented, four-byte fixed-sized addresses were much easier to handle than variable-sized addresses, but even IPv4 packets could contain variable-sized options fields. – supercat Jan 30 '18 at 00:05
  • @MichaelHampton Sure, I'm just concerned that everyone will have to work out the trade-off between not wasting addresses, and leaving room for future re-subnetting - whereas with slightly more address bits (for the subnet address), they could have both. – user253751 Jan 30 '18 at 00:55
  • Hopefully in the future things will stop assuming 64-bit subnets, and then there will be no such problem. – user253751 Jan 30 '18 at 00:57
  • 1
    @immibis As far as I can tell you are worrying over nothing. There are plenty of subnets for everyone to get an allocation now that will last them a decade or more. – Michael Hampton Jan 30 '18 at 01:19
  • @MichaelHampton Not if 4 billion people (or groups) each want 32 bits of subnetting to play with. – user253751 Jan 30 '18 at 01:59
  • 4
    Unfortunately, many people have started to use NAT as a crappy form of security, and many devices like Chromecasts and IoT gadgets assume any device that is able to connect to them is trusted. As a result, every consumer router I have seen will drop incoming connections to IPv6 devices as well, and some I have seen have no way to disable this, only the regular port forwarding. – Qwertie Jan 30 '18 at 02:17
  • @supercat "When IPv4 was being implemented, four-byte fixed-sized addresses were much easier to handle than variable-sized addresses" and the same is still true today and I don't see any reason for this to ever change. Variable sized addresses are simply much more complicated to deal with for hardware. That increased complexity means the hardware is slower, more expensive and needs more power. – Voo Jan 30 '18 at 16:20
  • Relevant video: https://youtu.be/v26BAlfWBm8 – Martin Schröder Jan 30 '18 at 16:38
  • 1
    @Voo: Is there any reason that complete persistent addresses should need to be handled directly by hardware? Having packets contain a DNS-formatted destination address, and having a means by which a device that sends a packet can ask that it have attached to it a chain of possibly-ephemeral gateway addresses, each of which is expected to be understood for at least a while by the preceding gateway (even if it's meaningless to anything else), hardware could regard everything beyond a fixed-size piece of the address as "payload". – supercat Jan 30 '18 at 17:27
  • @supercat Oh routing itself is really not my expertise, I wouldn't know. I'm just saying that from a hardware point of view knowing how large your "instructions" are, is very advantageous. The whole topic of routing on large scales is pretty fascinating, I remember reading [this arstechnica article](https://arstechnica.com/information-technology/2008/01/internet-routing-growing-pains/) (also ouch, is that really ten years old?). – Voo Jan 30 '18 at 17:47
  • 4 billion people don't need or want a /32. Again, go read RFC 6177. – Michael Hampton Jan 30 '18 at 17:49
  • 14
    ... Ok I hate NAT now; how do I switch to IPv6? – Adam Barnes Jan 30 '18 at 17:57
  • 1
    @AdamBarnes First check you're not using it already. Then ask your ISP. – user253751 Jan 30 '18 at 21:40
  • See also RIPE-690 for their recommended end-user assignment policy. – Alnitak Jan 31 '18 at 15:02
  • @supercat Having addresses that are less than complete (context dependent) means that all routing decisions can only decide how to route stuff for that context/network area. That means that things like BGP can only route around problems by taking paths in that network area/context. That could mean broken links could cause an outage when it could have been routed around. – Azendale Jan 31 '18 at 20:13
  • 2
    @AdamBarnes Asking your ISP about it is a good first step, even if just to tell them you care about it. If they don't have it, and you have a network device that can do the right kind of tunnel (protocol 41 tunnels, aka IPv6 in IP tunnels), you can get IPv6 tunnels and an address allocation to go with it from Hurricane Electric (a large, multi-country IP transit provider) for free at tunnelbroker.net. They also have great forums about everything IPv6. I learned networking there with IPv6 first, then later wrapped my head around NAT. – Azendale Jan 31 '18 at 20:17
  • @immibis It is currently possible to use a smaller than /64 subnet -- you just can't use SLAAC (stateless address auto config). I have tested (and it worked, with windows 7 as a client) a DHCPv6 setup that handed out addresses in a /96 IPv6 subnet. I just don't complain about the minimum 64 bits allocated to me by my ISP by default -- way better than the other way (having to beg for more address space). I see it as a great way to never have to ask for more. (My ISP uses a /64 for the link to the customer, and delegates a /56 automatically if requested with DHCPv6 prefix delegation.) – Azendale Jan 31 '18 at 20:21
  • @Azendale: As I've been thinking about things a bit more, I think the fundamental problem is with the concept of stateless packet delivery. That model was simple and cheap to implement, and worked well on the kinds of network that were in use when it was implemented. Fixed-length identifiers are easier to work with than variable-length, and if connections are identified using src_addr, src_port, dest_addr, and dest_port, making the length any of those parts variable would make the length of the connection ID variable as well. On the other hand, if some other means... – supercat Jan 31 '18 at 20:47
  • ...were used to identify connections, that wouldn't be an issue. The normal pattern for Internet traffic is that a client performs a DNS lookup and then uses the address returned thereby to establish a connection. If connection requests could be sent using DNS-style addressing rather than a fixed-length address, and the machines through which a packet passes build up a chain of nodes via which a reply may be sent, subsequent communications could use the received node chains for routing without needing globally-unique numeric addresses. – supercat Jan 31 '18 at 21:03
  • @supercat you might look at MPLS, my understanding is that the sender writes out the path they think it should take, and then each MPLS router reads one step in the delivery plan (1 label) and hands it off. End points decide where it should go instead of routers in between. (I could be wrong, I haven't used MPLS.) Alas, there is no "MPLS internet" AFAIK. Which goes back to the IPv6 thing: IPv6 didn't change that much from IPv4, but look how long and hard it has been to even get traction to 10% usage. Network effects are extreme in world-wide networks. (Let's re-do email next :-P ) – Azendale Jan 31 '18 at 21:38
  • @Azendale: If a NAT router which sits between an IPV4 local subnet and the Internet were to handle DNS lookups by generating a new address in the 17.xx.xx.xx block for each outside-world address, objects within the local subnet wouldn't need to know or care about addresses used in the outside world unless they (the objects within that single subnet) collectively needed to access more than 16,000,000 different hosts. IPv6 may make sense within the broader Internet, but most devices will only ever need to exist within subnets that are connected via very predictable routes... – supercat Jan 31 '18 at 21:59
  • ...i.e. either one net is contained within the other, or each can only access the other by going through the Internet. What fraction of IP-connected devices will ever need to exist in any context other than a subnet that accesses less than 16,000,000 hosts? If a device won't need to exist outside such subnets, why should it need the extra expense of IPv6? I think the reason IPv6 hasn't gotten a whole lot of traction is that it having IPv4 subnets connected via IPv6 makes more sense than imposing the expense of IPv6 onto everything. – supercat Jan 31 '18 at 22:02
  • @supercat The reason IPv6 hasn't gotten traction is nobody wants to spend the effort to deploy it until they actually need to. (such as when they have *zero* addresses left and still need to sign up customers - that seems to be what it took for US mobile networks to switch). It's certainly not because anyone is holding out for fake-IPv4-over-DNS. – user253751 Jan 31 '18 at 22:32
  • @immibis: If everything in a subnet will use the same router to communicate with everything outside, and nothing needs to accept direct connections from outside, what disadvantage would there be to having the router connect to everything inside the subnet using IPv4 indefinitely? IPv6 may be necessary for equipment that has multiple independent paths to the outside world, but there's no reason such details should need to be exposed at the endpoint level. – supercat Jan 31 '18 at 22:59
  • @supercat Complexity in the router... Nobody wants to implement such a router any more than they want to deploy IPv6. I'm not saying it *couldn't work*, just that nobody is interested in doing it except for you. – user253751 Jan 31 '18 at 23:27
  • @immibis: So replacing all existing IPv4 devices is somehow easier than having a few network engineers design IPv4<-->IPv6 bridges? I'd think having smart bridges should be a lot cheaper and easier, especially since the address translation logic wouldn't even have to be physically located at the point of the IPv4-to-IPv6 interface; that interface could simply wrap IPv4 packets within IPv6 packets sent to a bridge located elsewhere, and that bridge could in turn receive IPv6 packets on behalf of its IPv4 clients and wrap IPv4 equivalents and send them via IPv6 to the physical IPv4 gateway. – supercat Feb 01 '18 at 00:38
  • 3
    This post distorts the history of NAT to some extent. Reducing the IPv4 usage was only one reason. The main reason users adopted it actually was that back then, ISPs would often charge separately for every IP address. I recall that mine wanted around $7/month for each additional IP, and that was back in the 1990s. That was before NAT was widespread; I actually had to buy a software NAT driver for Windows 95. We have the same problem with some IPv6 providers today, too. – Kevin Keane Feb 01 '18 at 01:05
  • @supercat "what disadvantage would there be to having the router connect to everything inside the subnet using IPv4 indefinitely" -- other than the NAT issues, you end up with the devices on the subnet not being able to address any device on a network that doesn't have an IPv4 address, which may not be *much* of an issue now, but it is likely to become one in future, with more and more network connected devices being shared by IPv6 addresses that aren't addressable through IPv4. E.g. in my home, I have a security webcam that I can hook up to through an IPv6 address, but can't through IPv4. – Jules Feb 01 '18 at 22:03
  • ... I *could* configure NAT to allow connections, but thats only really an option because (1) I've taken the time to learn how to do this, which people shouldn't really need to do, and (2) my IPv4 address is supplied by an ISP that isn't using CGNAT, which is getting rarer. Also why bother when I'm likely to have IPv6 connectivity in most places I'd want to use it from anyway? Your suggestion breaks that use case. – Jules Feb 01 '18 at 22:05
  • 1
    @KevinKeane I would argue that now, if they do that, you should demand a /64. Then you have the option to use DHCPv6 and a longer subnet mask (say, a /96) and you should still have plenty of space. Or shop around for a better ISP. Also, I will say that the $7/IP was probably a measure to put economic pressure on consumers to conserve space, but now, it's probably just a nickle and dime exercise that will soon stabilize at not costing anything extra, considering the supply side of the equation. – Azendale Feb 02 '18 at 00:26
  • In my experience, the primary problem with VoIP is hosting the registration/mediator servers. No-one is willing to pay for the infrastructure that everyone else will be using for free. Skype managed to become the first truly successful VoIP software specifically because it found a solution -- to secretly host servers on capable user machines. – ivan_pozdeev Feb 02 '18 at 06:18
  • 1
    @Azendale - You are assuming that the ISP will give you a subnet in the first place. Sometimes, they'll run SLAAC (or DHCPv6) on *their* network. Cellular networks are notorious here. Sorry, no more tethering. And in IPv6, subnetting a /64 is a violation of various RFCs. quite a few things break if you try. Not just SLAAC, but also router discovery and a couple other things. Exception: a /127 for point-to-point connections is encouraged, and works because you can always assign IP addresses and routes statically. – Kevin Keane Feb 02 '18 at 07:04
  • I always thought the movement to NATs on home networks was just good networking practice finally making its way to people's homes. isn't it preferable to have a NAT at home (and even work) rather than every device being "plugged directly into" the internet? I guess NATs are different from firewalls, so every device having a public IP doesn't mean they aren't behind a firewall. – Dave Cousineau Feb 04 '18 at 05:36
  • @DaveCousineau Good networking practice would be having a stateful firewall. Stateful firewalls (by default) drop incoming connections unless they are explicitly allowed, and by default allow outgoing connections. NAT tends to imply stateful firewall like behavior because it makes where to forward incoming connections ambiguous, making the only sane default to drop them unless told otherwise (aka: port forwarding). Any hardware that can NAT SHOULD be able to just do the stateful firewall without (some) of the disadvantages of NAT. – Azendale Feb 08 '18 at 16:32
22

One big symptom of IPv4 exhaustion I didn't see mentioned in other answers is that some mobile service providers started going IPv6-only several years ago. There's a chance you've been using IPv6 for years and didn't even know it. Mobile providers are newer to the Internet game, and don't necessarily have huge pre-existing IPv4 allocations to draw from. They also require more addresses than cable/DSL/fiber, because your phone can't share a public IP address with other members of your household.

My guess is IaaS and PaaS providers will be next, because their growth isn't tied to customers' physical addresses. I wouldn't be surprised to see IaaS providers offering IPv6-only service at a discount soon.

Karl Bielefeldt
  • 341
  • 2
  • 5
14

The major RIRs ran out of space for normal allocations a while ago. For most providers, therefore, the only sources of IPv4 addresses are their own stockpiles and the transfer markets.

There are scenarios in which it is preferable to have a dedicated public IPv4 address, but it's not absolutely essential. There are also a number of public IPv4 addresses that are allocated but not currently in use on the public internet (they may be in use on private networks, or not in use at all). Finally, there are older networks with addresses allocated far more loosely than they need to be.

The three largest RIRs now allow addresses to be sold both between their own members and to each other's members. So we have a market between organizations that either have addresses they are not using, or could free some up at a cost, on one side, and organizations that really need more IP addresses on the other.

What is difficult to predict is how much supply and demand there will be at each price point, and therefore what the market price will do in the future. So far the price per IP seems to have remained surprisingly low.

Peter Green
  • 4,056
  • 10
  • 29
  • AfriNIC has less than a /8 worth of addresses still available, and I've seen lots of examples of orgs outside Africa grabbing these up. – Michael Hampton Jan 28 '18 at 20:05
7

Ideally, every host on the internet should be able to obtain a global-scope IP address. However, IPv4 address exhaustion is real; in fact, ARIN has already run out of addresses in its free pool.

The reason everyone can still access internet services just fine is Network Address Translation (NAT), which allows multiple hosts to share public IP addresses. However, this doesn't come without problems.
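
The address sharing described above relies on the RFC 1918 private ranges: any number of hosts can reuse these addresses internally while a NAT device maps their traffic onto one public address. A minimal sketch using Python's standard `ipaddress` module (the sample addresses are arbitrary illustrations):

```python
import ipaddress

# The three address blocks RFC 1918 reserves for private use; these can be
# reused in any number of networks because they are never routed publicly.
PRIVATE_BLOCKS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1918(addr: str) -> bool:
    """True if addr is an RFC 1918 private address, i.e. one that must sit behind NAT."""
    ip = ipaddress.ip_address(addr)
    return any(ip in block for block in PRIVATE_BLOCKS)

print(is_rfc1918("192.168.1.10"))  # True  - typical home LAN address
print(is_rfc1918("8.8.8.8"))       # False - globally routable
```

(Python also offers the broader `ip.is_private` flag, which additionally covers loopback and link-local space.)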

Torin
  • 442
  • 1
  • 3
  • 7
  • 20
    I don't want to know how many man-hours, resources, and millions have been wasted between Napster, Gnutella, Gossip, Kazaa, BitTorrent, Kademlia, FastTrack, eDonkey, Freenet, Grokster, Skype, Threema, Spotify, and so on, developing NAT-piercing techniques. – Jörg W Mittag Jan 28 '18 at 17:16
  • @JörgWMittag Not to mention how spectacular it failed for Skype in December 2010. – kasperd Jan 28 '18 at 21:40
  • 4
    And the fact that you have to use NAT-piercing techniques in the first place. If machine X and machine Y are both on ordinary connections they can't talk to each other without a mediator. Annoying for things like file synchronization tasks. – Loren Pechtel Jan 29 '18 at 02:36
  • 1
    @kasperd Could you elaborate on this "failed for Skype in December 2010"? I could find that [a large number of supernodes failed at once, for some unspecified reason](http://www.disruptivetelephony.com/2010/12/index.html). And fail to see how that is relevant to IPv4 address exhaustion. – ivan_pozdeev Feb 02 '18 at 07:04
  • 6
    @ivan_pozdeev Supernodes is a workaround for problems caused by NAT. NAT itself is a workaround for the shortage of IPv4 addresses. Thus the need for Skype to use supernodes in the first place was entirely driven by shortage of IPv4 addresses. Had the internet been upgraded to IPv6 at a more reasonable pace Skype would not have needed supernodes, and that particular outage would not have happened. – kasperd Feb 02 '18 at 23:44
6

You already got many excellent answers, but I would like to add something that hasn't been mentioned yet.

Yes, IPv4 address exhaustion is bad, depending on how you measure it. Some companies still have a huge supply of IPv4 addresses, but we are starting to see workarounds like carrier-grade NAT.

But many of the answers are wrong when they veer off into IPv6.

Here is a list of technologies that can help deal with the IPv4 address shortage. Each has its own advantages and drawbacks.

  • IPv6

    • Advantage: standardized and available in most operating systems.
    • Drawback: despite frequent statements to the contrary, it comes with serious security problems. As far back as 2005, US-CERT warned of security issues caused by IPv6's global addressing. IPv6 can be secured properly, but given the state of consumer routers, it often doesn't happen.
    • Drawback: migrating takes time, money and expertise.
    • Drawback: many consumer-grade devices are seriously flawed. For instance, a number of D-Link routers support IPv6 by simply forwarding all traffic without offering any firewalling.

Another consideration: even if IPv6 caught on completely today, it would still take another 20 years or so to phase out IPv4, due to legacy equipment that people will be using for a very long time (I still see Windows 2003 servers and Windows XP workstations occasionally! Not to mention all the printers and cameras and IoT gadgets that don't support IPv6).

  • CGNAT:
    • Advantage: works without changes on customer premises.
    • Drawback: only supports outbound connections.
    • Drawback: may not support a few protocols.

Eventually, CGNAT won't be enough. Maybe IPv6 will catch on, but it's also quite possible that we'll end up seeing country-grade NAT, or something along those lines.

Currently, as a consultant, I often have to point out to my customers that they are exposed on IPv6 (often thanks to Teredo). The next question will invariably be: "how much does it cost to fix that?" and then "How much does it cost to block it? What do we lose if we turn it off?" Guess what the decision will be every time.

Bottom line: to answer your question, yes, IPv4 exhaustion is real, and we will see quite a few mechanisms for coping with it. IPv6 may or may not end up being part of the equation.

To be clear: I'm not saying that I like this situation. I would like for IPv6 to succeed (and I would like to see a number of improvements to IPv6). I'm just looking at the situation as it is on the ground right now.

Kevin Keane
  • 860
  • 1
  • 8
  • 13
  • 5
    CGN, like any NAT, only works with TCP, UDP, and ICMP, and not other transport protocols. It also breaks many application-layer protocols. NAT is an ugly solution to try to extend IPv4, and it has really outlived its usefulness. – Ron Maupin Feb 01 '18 at 02:40
  • @RonMaupin: What real need is there to have most 32-bit IP addresses be globally unique? For devices that connect to the Internet through a single point, what problems would exist with keeping a table of DNS lookups that have been performed through that connection and having the first lookup return 10.0.0.1, then 10.0.0.2, etc. and then mapping outgoing packet addressed to 10.0.0.1 to whatever the outside-world address was associated with the first DNS lookup, etc.? – supercat Feb 01 '18 at 21:23
  • 4
    @supercat, IP packets do not have DNS names. That would be a different protocol. Only TCP, UDP, and ICMP transport protocols work with NAPT, others do not. Many applications and application-layer protocols do not work with NAPT, and they require ugly hacks on top of the ugly NAPT hack. The premise of IP is that every end-device has a unique address, and many protocols were designed around that. IPv6 solves that problem, as well as some IPv4 shortcomings. – Ron Maupin Feb 01 '18 at 21:32
  • @RonMaupin: If a machine on my local subnet issues a DNS request to my router for `example.com`, my router could respond to that with 10.0.0.1 while observing that the IPv6 address for `example.com` is 1234:5678:ABCD:EF90. My router could then take packets that my host sends to 10.0.0.1 and forward them to the aforementioned IPv6 address without the local device that issued the DNS request having to know or care that the outside-world device only has an IPv6 address. – supercat Feb 01 '18 at 22:02
  • @RonMaupin: Obviously my router would have to use addresses that aren't used for any local machines, but in many cases that could be accommodated by using 192.168.x.x for local machines. The only problem I could see would be that devices whose route to the Internet changes would have to flush their DNS cache (and abandon existing connections, but that would be an issue under any NAT scheme anyway). – supercat Feb 01 '18 at 22:04
  • @supercat, I guess you missed the part about IPv4 and IPv6 being completely different protocols. There is much more to it than addressing. – Ron Maupin Feb 01 '18 at 22:05
  • 3
    @supercat, if it is really that simple, there would have been no reason for the huge installed base of IPX networks to convert to IPv4. You could do the same type of thing between IPX and IPv4, and it was done for a while, but it is just a kludge. – Ron Maupin Feb 01 '18 at 22:07
  • @supercat - what about packets that are sent directly to an address without a preceding DNS lookup? This is common in P2P applications, but can also happen with web browsers when opening URLs with embedded IP addresses. How would your router deal with your PC attempting to open https://[2607:f8b0:4000:816::2004]/ ? (Although I see stackexchange isn't happy with it, and won't let me make a link to that address...) – Jules Feb 01 '18 at 22:20
  • @Jules: Format the address in a way that convinces the device that it needs to do a DNS lookup [e.g. format that address as 2607.f8b0.4000.816.2004.ipv6], then have the gateway assign an internal-use IPv4 address which it will map to that remote address, as it would with a named remote host. Devices which are limited to connecting to a numerical IPv4 address without any means of specifying a name would only be usable if the gateway were manually configured to map a particular local IPv4 address to a remote IPv6 address, but adding DNS support may be cheaper than IPv6. – supercat Feb 01 '18 at 22:44
  • 1
    @supercat - so in order to support such a network, we need to abandon existing standards, and rewrite all existing applications that connect directly to addresses? That doesn't sound like a good approach to me. – Jules Feb 01 '18 at 22:58
  • @Jules: For applications that are running on devices that only understand IPv4, what approach would be better? – supercat Feb 01 '18 at 23:29
  • @RonMaupin Good point about not supporting some protocols; I added that to my answer. Other than that, though, you are basically repeating exactly why I posted my answer in the first place. You may be right in theory, but when the market decides, that and $5 buys me a cup of coffee at Starbucks. As an aside, NAT hasn't outlived its usefulness; it has many other use cases besides alleviating address exhaustion. The fact that most IPv6 routers actually implement NAT nowadays despite the lack of an RFC shows that there still is a need for it. – Kevin Keane Feb 02 '18 at 00:21
  • I was actually responding to someone else. I don't know of any business routers that implement IPv6 NAPT. The experimental RFC for IPv6 NAT is a one-to-one NAT, not NAPT that is most commonly used. The other legitimate purpose for NAT is as a temporary solution for merging companies that have overlapping addresses, but that is really an effect of the address shortage, too. That is all NAPT is really useful for. I keep seeing that it has something to do with security, but that is simply not true because security is from a firewall, not NAPT. – Ron Maupin Feb 02 '18 at 00:34
  • @RonMaupin - I have seen one-to-many NAT on Fortigate and Sonicwall devices, both business-grade routers. OpenWRT also has it, and I was told that Cisco does it, too. NAT has many other use cases. Multihoming and ease of moving from one ISP to another comes to mind. And the security advantage comes from the fact that you cannot have NAPT without a default-closed packet filter. I've seen quite a few consumer-grade routers that "support IPv6" but do not even offer firewall functionality at all. – Kevin Keane Feb 02 '18 at 07:12
  • All of the consumer-grade devices that support IPv6 have the same level of firewall as IPv4 built in. Multihoming and ease of moving ISPs is built in, because anyone can easily get provider-independent IPv6 address space (there is plenty of it, no NAT necessary), and a business would be foolish not to do that. There is nothing security-related in NAPT. Disable the firewall on your router and see how many seconds it is before your entire network is compromised, even with NAPT. Once your router is compromised, it has full access to the private addressing on your network. – Ron Maupin Feb 02 '18 at 12:14
  • @RonMaupin I challenge you to show me how to enable the IPv6 firewall on a D-Link DIR-601 router (reportedly, many other D-Links have this problem, but this is one I personally owned, and still widely used). Here's the manual: ftp://ftp2.dlink.com/PRODUCTS/DIR-601/REVA/DIR-601_REVA_MANUAL_1.00_EN.PDF. I heard that the lack of an IPv6 firewall has been an issue with D-Link for many years. They may have resolved it by now. – Kevin Keane Feb 02 '18 at 17:37
  • @RonMaupin ARIN now has strict rules for provider-independent IP address space, so those requests will generally be denied. Here are the conditions: https://www.arin.net/resources/request.html . And I *really* don't want to tell a 20 person law firm that they have to hire somebody to run BGP, or that changing ISPs means reconfiguring every printer in the building. – Kevin Keane Feb 02 '18 at 17:44
  • 2
    @KevinKeane I'm not terribly surprised that an ancient consumer router from 2010 has IPv6 problems. A 30 second browse of Google search results indicates they solved that problem years ago. – Michael Hampton Feb 02 '18 at 19:05
  • @MichaelHampton for routers, 2010 is not particularly old. Outside the tech communities, most people buy their router once and never replace it until it actually fails. More importantly, it disproves the assumption that all consumer-grade routers will have firewalls. Who's to say that we won't again see $99 802.11ac routers that cut cost and boost performance by saving a few bytes of firmware space or RAM by recompiling the Linux kernel without ip6tables? Ordinary consumers won't know the difference. – Kevin Keane Feb 02 '18 at 20:39
5

ISPs used to give out blocks of 256 IP addresses to companies. Now, ISPs are stingy and give you (a company) something like 5. Back in the day (2003), every PC and connected device in your home had its own internet IP address. Now, the cable/DSL/FiOS router has one IP address and gives out 10.0.0.x IP addresses to all the PCs in your home. Summary: ISPs used to waste IP addresses, and now they're not wasting them any more.
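
The shift the answer describes is easy to quantify with Python's standard `ipaddress` module: compare a legacy /24 allocation against the 10.0.0.0/8 private space that a single NAT'd public address can front (203.0.113.0/24 is a documentation-only example block, not a real assignment):

```python
import ipaddress

legacy_block = ipaddress.ip_network("203.0.113.0/24")  # the kind of /24 an ISP once handed out
home_lan = ipaddress.ip_network("10.0.0.0/8")          # private space usable behind one public IP

print(legacy_block.num_addresses)  # 256
print(home_lan.num_addresses)      # 16777216
```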

  • 6
    "*Back in the day (2003), every PC and connected device in your home had its own internet IP address.*" Only if you paid for the 2nd, 3rd, 4th, etc. – RonJohn Jan 30 '18 at 00:30
  • 2
    RonJohn is correct. I was one of the early adopters of broadband when cable internet came to my area in 1997. I paid $50 (US) per month for it, and I distinctly remember that they offered a second IP address for an additional $20 per month. Even though I wanted one, I wasn't willing to pay for it. The following year, my problem was solved when I discovered NAT devices. They didn't have many features (such as port-forwarding for incoming connections) but the one I got solved my immediate need. – Charles Burge Jan 31 '18 at 01:15
  • @CharlesBurge I also remember that. And we are seeing some providers try to do the same thing with IPv6 now, too. – Kevin Keane Feb 01 '18 at 01:10
  • @CharlesBurge: This depended on your ISP. I had a friend on cable in Phoenix, AZ around the same time, and he got a fully routed subnet, a /29 block, with 8 addresses, 5 usable. We ran a Linux server on it with gated (by accident on our part), and the cable network actually shared full BGP routing information with it. That and people putting their Windows PCs and printers with fully open shares on the network made life interesting. – Zan Lynx Feb 01 '18 at 18:19
  • Oh yeah I do remember the network visibility. Everyone else on my loop was visible in "Network Neighborhood", and I could browse any shares that they had. – Charles Burge Feb 01 '18 at 19:32
  • @ZanLynx Of course, if you accidentally peered with your ISP's routers these days, you'd probably get a lifetime in prison or something like that (Never mind that it's your ISP's fault for not implementing security). – user253751 Feb 02 '18 at 04:22
-1

NAT is what happened when IPv6 was an idea, before it was a reality: IPv4 address allocation was becoming a real issue (anyone remember when they were handing out Class Cs basically for the asking?) and the real world needed a solution in the meantime.

NAT is not sufficient for IoT. If IoT is going to happen, it's going to happen with IPv6. The nature of IoT is more closely aligned with how the dial-up world worked, except that there will be several orders of magnitude more devices connected at the same time.
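
To put "several orders of magnitude" into numbers, here is a quick comparison of the raw address spaces (plain arithmetic, no protocol details assumed):

```python
# IPv4 addresses are 32 bits, IPv6 addresses are 128 bits.
ipv4_total = 2 ** 32   # about 4.3 billion: fewer than one per person alive
ipv6_total = 2 ** 128  # about 3.4 * 10**38

# For every single IPv4 address there are 2**96 IPv6 addresses.
print(ipv4_total)                # 4294967296
print(ipv6_total // ipv4_total)  # 79228162514264337593543950336
```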

Xavier
  • 9
  • 2
    From a quick search, NAT appears to have been originally defined by RFC 1631 in May 1994. IPv6 is defined in RFC 1883, published December 1995 as a *proposed standard* (which is pretty far along the standards track). I don't know where you draw the line between "an idea" and "reality", but *mostly working* IPv6 code almost certainly existed in testbeds well before RFC 1883 was published. Compare this to the often-referenced RFC 1918, which was published in February 1996, a few months *after* the initial IPv6 RFC. – user Jan 30 '18 at 21:38
  • @MichaelKjörling Some of the history there, including protocol proposals that failed, can be found in [RFC 1752](https://tools.ietf.org/html/rfc1752). Among them were protocols named TUBA and CATNIP.... And, "The IETF started its effort to select a successor to IPv4 in late 1990..." – Michael Hampton Jan 30 '18 at 23:26
  • 2
    Standards are useless without implementation, and an implementation that consumers or businesses are willing to pay for, at that. Testbeds and proofs of concept don't count in the market. My point about NAT is that working implementations reached market (and therefore gained traction) because the existing hardware (and there was a lot of it by that time) all spoke IPv4. So it was more a matter of "problem solved, let's work on more pressing issues now". – Xavier Jan 31 '18 at 02:43
  • Can you explain why "NAT is not sufficient for IoT?" Even if you have plenty of devices, just use a /16 private network, and you can put thousands of devices behind it. – Kevin Keane Feb 01 '18 at 01:10
  • Part of the problem, I think, is that people keep equating "device" with "computer/PC" or "handheld phone". When you consider the number of different things that can be connected to the Internet, you realize that 64K simply isn't workable. Drones, traffic cameras, cars (a larger problem as we move towards L5 automation), your refrigerator, the UPS delivery truck, ad infinitum. Many of these things are in motion, and what then of that /16? IoT is not (just) about computers, and it's a mistake to assume it is. – Xavier Feb 01 '18 at 08:48
  • 2
    @Xavier: 64K is an upper limit a NAT device can't even reach. For one, all the low ports under 1024 are restricted. And most NAT limits itself to a high port range of about 20K ports. And of course there's the memory issue: even today we have routers falling over and resetting because somebody tried to open 10,000 TCP connections at the same time. Looking at you, Google Home. – Zan Lynx Feb 01 '18 at 18:24
  • 2
    @KevinKeane - because part of the draw to IOT is being able to connect in to your devices from externally. At the moment, because configuring NAT is a pain that device manufacturers don't want to inflict on consumers, we're often doing this via external "hookup" services provided by device manufacturers *but this isn't sustainable long term*. All it needs is for a high profile manufacturer to go out of business and suddenly everyone will be wary of relying on their devices continuing to work. The only way this is going to carry on working in the long term is if most people have IPv6. – Jules Feb 01 '18 at 22:35
  • @Jules: External hookup services have a number of technical advantages; I'm not sure why they would be unsustainable if device makers standardized their "hook-up" protocols and allowed configuration of the mediator address. – supercat Feb 01 '18 at 22:47
  • 1
    @supercat - perhaps, but so far that seems to be even less likely to happen than universal IPv6 availability... – Jules Feb 01 '18 at 22:56
  • @ZanLynx Those port limitations aren't inherent, though. Routers optimized for this type of use can use every single port. – Kevin Keane Feb 02 '18 at 00:26
  • 1
    @Jules - I would argue with that reasoning. For one, external hookup services exist for a business reason: they create vendor lock-in, as well as generate huge amounts of valuable data for the vendors. For another, have you ever heard of the Mirai botnet? *The last thing you'd want is for every IoT device to be directly accessible externally.* I had actually hoped that Mirai would finally kill off this myth that global routability is a good thing, but it does not seem to have worked. – Kevin Keane Feb 02 '18 at 00:32
  • @MichaelKjörling *proposed standard* isn't "pretty far along the standards track", actually; it's as far as most of them ever go. You know that WorldWideWeb thingy we sometimes use, sometimes known as HTTP/1.1, from 1997? Defined in RFC2068, it is also a "proposed standard", and that won't ever change; that's not how RFCs work. To cut a long story short: the current IPv6 standard is RFC8200, and it is a fully fledged INTERNET STANDARD (STD86) that has been out of the "proposed standard" stage for almost two decades. – Matija Nalis Feb 02 '18 at 03:28
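Zan Lynx's port-count point earlier in this thread can be checked with quick arithmetic. A minimal Python sketch (the ~20,000-port pool is the rough figure cited in the comment, not a standard):

```python
# How many concurrent flows can one public IPv4 address carry behind NAT?
# The NAT gateway distinguishes flows by rewritten source port, so the
# 16-bit port field is the hard ceiling.
total_ports = 2 ** 16            # 65536 possible port numbers
well_known = 1024                # ports below 1024 are typically not handed out
usable = total_ports - well_known
print(usable)                    # 64512 -- the theoretical ceiling

# Many consumer routers allocate from a far smaller high-port pool
# (the ~20K figure mentioned in the comment above):
typical_pool = 20_000
print(round(typical_pool / usable, 2))   # ~0.31 of the theoretical range
```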
-3

The whole IPv4 address issue is rather convoluted. You may find one article reporting that the pool is exhausted, and another talking about large numbers of surplus (never-used) addresses being sold from one party to another. The question is: why are these not available to those who are short of them (emerging regions, and rural areas of developed countries)?

Below is the result of a study that we ventured into almost by accident. It uses nothing more than the original IPv4 protocol (RFC 791) and the long-reserved yet hardly-used 240/4 address block to expand the IPv4 pool 256M-fold. We have submitted a draft proposal called EzIP (phonetic for Easy IPv4) to the IETF:

https://datatracker.ietf.org/doc/html/draft-chen-ati-adaptive-ipv4-address-space-03

Basically, the EzIP approach would not only resolve the IPv4 address shortage, but also largely mitigate the root cause of many cyber-security vulnerabilities, and open up new possibilities for the Internet, all within the confines of the IPv4 domain. In fact, the scheme may be deployed "stealthily" in isolated regions where needed. This should relieve the urgency of deploying IPv6 for an appreciable length of time, and undercut the market for trading IPv4 addresses.
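The 256M-fold figure is easy to sanity-check: it is simply the number of addresses in the 240/4 block, each of which the draft would use to front a further address space. A minimal Python sketch of the arithmetic:

```python
# 240/4 covers every address whose first 4 bits are 1111, leaving
# 32 - 4 = 28 free bits.
addresses_in_240_4 = 2 ** (32 - 4)
print(addresses_in_240_4)                     # 268435456
# That is exactly 256M in binary units (256 * 2^20), hence "256M-fold".
print(addresses_in_240_4 == 256 * 2 ** 20)    # True
```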

Any thought or comment will be much appreciated.

Abe (2018-07-15 17:29)

-4

Honestly, I think it is not as bad as people think. Maybe in some places, but not so much because there aren't enough addresses; it's because they are all owned. Maybe it's my location or something, but I've done IT work for a bunch of small to medium-sized businesses over the last seven years or so, and all the things you are talking about are usually just standard setup. Pretty easy, unless you have a crappy device or a badly designed network that needs to be sorted out first.

Personally, I'm fine with NAT. It's an added layer of protection, generally speaking: an attacker either has to get through an extra device or find a way to indirectly hijack my connection. As for running servers, that's generally outside of and/or considered a breach of contract with your ISP unless you're paying for it. Sure, you can do it, and they probably won't bug you about it, but they could.

Port-forwarding and all that is not exactly complicated. Now, maybe some devices are not easy to configure, but that's not because of IPv4. It still offers the most compatibility simply because it is ubiquitous.

Nobody actually needs to email themselves, and sending something to Dropbox, Google Drive, or a million other similar services isn't exactly rocket science, nor slow, these days. I mean, everything syncs; you drop it in a folder. Unless you're nerdy like me and do everything through ssh/sftp (okay, not everything). And if you have some reason you really want to run your own server, cloud hosting is cheap-- I've got a dedicated virtual server running Linux on an SSD. The bandwidth is crazy fast, it boots faster than I can type an up arrow and hit Enter, and it's scalable. The whole setup costs between 5 and 10 bucks a month, with free backups and no electric bill.

I don't really need a peer-network solution. Even most multi-user games these days are set up to interact through an intervening server, all preconfigured. On the other hand, if what I'm reading in this post is all true, IT will be overcrowded and cheap if/when IPv6 takes off. Even cellphones are approaching fiber-like speeds. Or at least cable.

If you do run an in-house server and need to hit it with the same domain name inside or outside your network, you can always spoof its address using a Linux-based router and dnsmasq (or whatever) with custom entries in the hosts file to redirect you to the local address when you're on the inside.
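As a concrete illustration of that trick, a one-line dnsmasq override is enough; `www.example.com` and `192.168.1.50` here are placeholders for your domain and the server's LAN address:

```
# /etc/dnsmasq.conf on the LAN router (hypothetical names/addresses).
# LAN clients resolving www.example.com get the internal address;
# everyone outside still gets the public one from normal DNS.
address=/www.example.com/192.168.1.50
```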

Really, I don't think it is actually desirable to have every device with its own address floating open on the 'net. If someone wants to obfuscate themselves while attacking you, it's going to happen regardless. But you're a sitting duck if you're just sitting there wide open in the breeze. Nah, I'll take my IPv4 and my NAT any day. But it's good that IPv6 is there.

Anyway, falling asleep now... probably more to say, but I'll check in tomorrow in case I missed something. I'm sure there's more.

jdmayfield
  • 271
  • 2
  • 11
  • 12
    Uhm, it is actually desirable, because of stabler connections, faster speeds, and cheaper internet (ISPs not having to maintain their NAT servers, IP block allocations per region/city, and shuffling things around to get by during specific peak hours). Do you know how confusing it is for websockets when a user on mobile hops from one cell tower to another and gets a new IP? There's a lot of compensation code, effort and energy required to keep it running. Your answer reads like: this tower might be missing its foundation, but it hasn't toppled yet, so it's fine. – Tschallacka Jan 29 '18 at 13:11
  • 11
    You have some misconceptions about NAT and security. Please read [RFC 4864](https://tools.ietf.org/html/rfc4864). – Karl Bielefeldt Jan 29 '18 at 16:09
  • @Tschallacka: This is not exactly what I mean. Mobile is potentially an exception, and to a large degree. Yes, I'm dealing with websockets quite a lot these days. A good point. Websockets are still a budding technology in relative infancy. Personally, I think it's a pretty awesome one at that, allowing better functionality and efficiency, and opening up a host of new possibilities. Mobile is a general exception that has a whole different set of needs than non-mobile devices. I did say I think it's good IPv6 is there. I think much of the world is not ready for the change, and money is... – jdmayfield Jan 29 '18 at 16:24
  • ...money is a key factor either way you look at it. There is a lot to gain for certain companies in pushing for the change, and also a lot to lose for many individuals as well as established companies. You cannot force the world to comply, even if you are correct. It will take time for things to catch up, both people _and_ machines. I _am_ saying some of the reasons above are not the key factors. _Because it's easier_ is not a good reason. There are always going to be problems. IPv6 is no panacea-- it will open up new vulnerabilities. Likely the change will take a whole generation. – jdmayfield Jan 29 '18 at 16:36
  • There are entire countries that are lucky to have connectivity at all, and it will be difficult, if even possible, for them to adapt any time soon. – jdmayfield Jan 29 '18 at 16:39
  • Thanks for the juicy tidbit, @Karl. I'll check that out. I do think IPv6 has great promise. But we live in a world where many businesses are completely dependent on software that has rightfully moved under the 'legacy' umbrella. I mean, Windows 7 is still the most common desktop OS I run into for larger businesses, and XP is still a thing-- yikes! There are even people still running their operations on NT! Scary, but it happens. Unfortunately, businesses sometimes get entrenched in a situation where they can't afford to change, or their owners/controllers are just simply afraid. – jdmayfield Jan 29 '18 at 17:25
  • Just for the record, I don't think NAT is especially secure-- more akin to having locks on your front door. They keep out casual intruders. But it won't stop someone intent on gaining entry-- they'll just break the door or go through a window. What needs to happen, and _will happen_, is people will gradually change over, and as more and more people establish a sense of comfort and _feel secure_ with it, the pace of that change will quicken. – jdmayfield Jan 29 '18 at 17:36
  • 4
    At this rate it'll be more than a generation. IPv6 is _20 years old_ this year. – Michael Hampton Jan 29 '18 at 19:37
  • Wow! @Michael. That I did not know. Is that from the first draft proposal? When was it instituted as an accepted solution in the industry? It does seem like a long time, but things like this tend to accelerate exponentially as more players join the game, so to speak. Really, I know, or know of, a number of people already making it a thing in the business world wherever they can do so without breaking existing infrastructure. Eventually it'll be common enough that it's just everywhere, and regular people will begin to be more comfortable with it. That will be the turning point. – jdmayfield Jan 29 '18 at 19:56
  • 1
    I mean it's already supported by the vast majority of consumer devices on the market. Sure most people have no idea what IPv4 or IPv6 are, except maybe that one uses numbers they are familiar with. That, I think, is a big part of the stumbling block. Personally, I've been using hex since I was a kid. But that's my background. Most people weren't exposed and have never had a reason to be even a tiny bit used to the concept of other numbering systems, except maybe Roman numerals! ;) – jdmayfield Jan 29 '18 at 20:01
  • 4
    [RFC 2460](https://tools.ietf.org/html/rfc2460) was published in December 1998. Several parts of it had been published prior to this point and there had been various testbeds up. IPv6 in roughly its current form was proposed in [RFC 1883](https://tools.ietf.org/html/rfc1883) which dates to December 1995. So you could say that IPv6 is even older than 20 years. But everyone regards RFC 2460 as the point where IPv6 was mature enough to implement. – Michael Hampton Jan 29 '18 at 21:28
  • 6
    BTW, while I'm on the subject, you should be aware that there are already IPv6-only gaming platforms, such as Xbox One. An Xbox One with IPv4 and not IPv6 connectivity [sets up its own Teredo tunnel in order to reach the IPv6 Internet](http://download.microsoft.com/download/A/C/4/AC4484B8-AA16-446F-86F8-BDFC498F8732/Xbox%20One%20Technical%20Details.docx), which of course brings with it a penalty in latency and reliability. IPv4 is in pretty sad shape when a Teredo tunnel is considered less unreliable than a typical consumer IPv4 connection. – Michael Hampton Jan 29 '18 at 23:29
  • @jdmayfield IPv6 is really old at this point. I remember, as a teen compiling kernels for Linux, being presented with the "include IPv6 support" option. That was more than 20 years ago, when it was still in draft status. IPv6 is supported in Windows ME and later; Windows 98 can work with IPv6 via a proxy server that is dual-stack. The only thing holding back IPv6 has been ISPs, whether broadband providers or server hosters. Even back then I didn't fully grasp why we wouldn't move on to the bigger, newer thing with plenty of address space. Money talks. – Tschallacka Jan 30 '18 at 15:57
  • Yes, these are the two big problems with IPv6. It's not because it's a problem itself, not at all. It's because the infrastructure is just not there-- and not because it can't be, at least in the more developed parts of the world. It's a money thing. It's important to remember that, although we tend to think of the Internet as this free, wide-open space, the machines it runs on are owned by somebody-- a lot of somebodies. And guaranteed, most of them are more concerned about money going into their wallets than the problems with IPv4. Think back to Y2K. It's like a joke, even then... – jdmayfield Jan 30 '18 at 19:00
  • ...even then it seemed like one, but there were real issues, a few of which did make it past the deadline, fortunately mostly minor glitches. It could have been bad, and not just the tech-wise were onto it. The people with the money knew this was going to be a problem for them, especially because what it really would have affected was their stability and their income. They are going to do what they always do: hold off until it's starting to be a problem. There will come a turning point, though, when they foresee losing more than gaining. Probably not as far out as it seems. – jdmayfield Jan 30 '18 at 19:06
  • Oddly, this reminds me of the annoyance I have with the numerical color schemes in CSS. I prefer the #FF double-hex form myself, except for one critical drawback: you can't access the alpha channel (yet). So you have to use the decimal rgba form, which oddly is harder for me to think in, because it represents an 8-bit integer that doesn't translate cleanly to decimal. I mean, I can count powers of 2 out to at least 16 bits in my sleep, but it's harder to gauge the color (yeah, I know, color pickers). Anyway, I think this is similar for a lot of people, only in the opposite direction. You? – jdmayfield Jan 30 '18 at 19:22
  • Concerning hexadecimal vs. decimal CSS, both are okay for me, but that's also because I do 3D programming etc. and always have to mix and match: 255 is white, 0 is black, and the rest is in between, just as FF is white, 00 is black, and everything in between. Money is the biggest problem, and a standard/technology will never improve significantly when nobody uses it, because then bugs are not found. The "money" switch has been triggered; look at https://www.google.com/intl/nl/ipv6/statistics.html When the west uses it, the rest will follow because of money-- they don't want to miss out. – Tschallacka Jan 31 '18 at 10:10
  • Nice, @Tschallacka. Very true. – jdmayfield Jan 31 '18 at 20:52
  • 1
    Aha, so *you're* part of the reason we're still struggling to get to the coexistence stage! Yeah, sure, many of the problems created by NAT have annoying and painful workarounds. Wouldn't it be better if the problems didn't exist in the first place?! And have you heard of firewalls? – user253751 Feb 02 '18 at 04:27
  • My university's CS department has a public IP for every computer (they got them before IPv4 exhaustion was much of a concern). Unless I'm on the network, I can still only access into the servers that the firewall is configured to allow me to access. If I'm already on the network I can access any computer. This is the exact same level of security that NAT gives you. – user253751 Feb 02 '18 at 04:32
  • 1
    @MichaelHampton Teredo done right is more reliable than the average CGN deployment. I have seen CGN done so poorly that running Teredo through the CGN improves reliability. But neither comes close to the reliability of proper native dual stack. – kasperd Feb 04 '18 at 13:24
  • I have to admit, I find it surprising that ISPs and backbone providers haven't quite caught up, considering pretty much every consumer and small-business device supports both versions these days. Probably it will be the cell companies that pave the way, considering the sheer number of units they need to support. Regarding the websocket thing mentioned earlier-- I make extensive use of websockets on mobile, though I haven't noticed any issues switching between towers. On the user end, are you talking about messages dropping or arriving more out of sequence than usual? – jdmayfield Feb 07 '18 at 00:50