So you wanna buy a used IP address block?

For the past 27 years, I have owned a class C block of IPv4 addresses. I don’t recall what prompted me to apply to Jon Postel for my block: I didn’t really have any way to run a network online, and the Internet was just catching on. Postel was in the unique position of personally attending to the care and growth of the Internet.

Earlier this year I got a call from the editor of the Internet Protocol Journal asking me to write about the used address marketplace, and I remembered that I still owned this block. Not only would he pay me to write the article, but I could make some quick cash by selling my block.

It was a good block, perhaps a perfect block: in all the time that I owned it, I had never set up any computers using any of the 256 IP addresses associated with it. In used car terms, it was in mint condition. Virgin cyberspace territory. So began my journey into the used marketplace, just before the start of the new year.

If you want to know more about the historical context of how addresses were assigned in those early days and how they are assigned today, you’ll have to wait for my article to come out. If you don’t understand the difference between IPv4 and IPv6, you probably just want to skip this column. But for those of you who want to know more, let me give you a couple of pointers, just in case you want to do this yourself or for your company. Beware that it isn’t easy or quick money by any means. It will take a lot of work and a lot of your time.

First you will want to get your ownership documents in order. In my case, I was fortunate to have old corporate tax returns documenting that I owned the business listed on the ownership records since the 1990s. It also helped that I was the same person who had been communicating with ARIN, the regional Internet registry now responsible for the block. Then I had to transfer ownership to my current corporation (yes, you have to be a business, and fortunately I have had my own sub-S corps to handle this) before I could sell the block to any potential buyer or renter. This was a very cumbersome process, and I get why: ARIN wants to ensure that I am not some address scammer, and that the goods being sold are legitimate. But during the entire process my existing point of contact on the block, someone who was never part of my business yet had been listed on my record since the 1990s, was never contacted about his legitimacy. I found that curious.

That brings up my next point, which is whether to rent out or sell a block outright. It isn’t like deciding whether to buy or lease a car. In that marketplace, there are some generally accepted guidelines as to which way to go. But in the used IP address marketplace, you are pretty much on your own. If you are a buyer, how long do you need the new block – days, months, or forever? Can you migrate your legacy equipment to use IPv6 addresses eventually (in which case you probably won’t need the used v4 addresses very long), or do you have legacy equipment that has to remain running on IPv4 for the foreseeable future?

If you want to dispose of a block that you own, do you want to make some cash for this year’s balance sheet, or are you looking for a steady income stream for the future? What makes this complicated is trying to have a discussion with your CFO about how this will work, and I doubt that many CFOs understand the various subtleties of IP address assignments. So be prepared to do a lot of education here.

Part of the choice of whether to rent or sell should be based on the size of the block involved. Some brokers specialize in larger blocks; some won’t sell or lease anything smaller than a /24, for example. “If you are selling a large block (say a /16 or larger) you would need to use a broker who can be an effective intermediary with the larger buyers,” said Geoff Huston, who has written extensively on the used IP address marketplace.
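If the prefix notation is unfamiliar, block sizes are easy to compute: each bit shorter in the prefix doubles the address count. Here is a minimal sketch using Python’s standard ipaddress module (the prefixes shown are documentation and private ranges, not real marketplace blocks):

```python
import ipaddress

# A /24 (the old "class C") holds 256 addresses; a /23 holds
# twice that; a /16 holds 65,536. These example networks are
# reserved documentation/private ranges, used only to illustrate.
for prefix in ("203.0.113.0/24", "198.51.100.0/23", "10.0.0.0/16"):
    net = ipaddress.ip_network(prefix)
    print(f"{prefix}: {net.num_addresses} addresses")
```

This is also why brokers segment the market by prefix length: a /16 seller is moving 256 times the inventory of a /24 seller.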

Why use a broker? When you think about it, it makes sense. I have bought and sold many houses, all of which were done through real estate brokers. You want someone whom both buyer and seller can trust, who can referee and resolve issues, and (eventually) close the deal. Having this mediator can also help with the escrow of funds while the transfer is completed, much like a title company. The broker can also work with the regional registry staff and help prepare all the supporting ownership documentation. Brokers do charge a commission, which can vary from several hundred to several thousand dollars, depending on the size of the block and other circumstances. One big difference between IP address and real estate brokers is that you don’t know what the fees are before you select the broker – which prevents you from shopping based on price.

So now I had to find an address broker. ARIN maintains a list of brokers who have registered with it. The list shows 29 different brokers, along with contact names, phone numbers, and the date each broker registered with ARIN. Note that this is not a recommendation of any of these businesses: there is no vetting of whether they are still in business, or whether they are conducting themselves in an honorable fashion. As the old saying goes, on the Internet, no one knows you’re a dog.

Vetting a broker could easily be the subject of another column (and indeed, I go into these details in my upcoming article for IPJ). The problem is that there are no rules, no overall supervision, and no general agreement on what constitutes block quality or condition. IPv4MarketGroup has a list of questions to ask a potential broker, including whether they will represent only one side of the transaction (most handle both buyer and seller) and whether they have appropriate legal and insurance coverage. I found that a useful starting point.

I picked Hilco’s IPv4.Global brokerage to sell my block. They came recommended, and I liked that they list all their auctions right on their home page, so you can spot pricing trends easily. For example, last month other /24 blocks were selling for $20-24 per IP address. Rental prices varied from 20 cents to US$1.20 per month per address, which means that compared with an outright sale, renting takes about two years at best and about ten years at worst to pay back the equivalent sale price. I decided to sell my block at $23 per address: I wanted the cash, and I didn’t like the idea of being a landlord of my block any more than I liked being a physical landlord of an apartment that I once owned. It took several weeks to sell my block, and about ten weeks overall from when I first began the process to when the funds from the sale were finally wired to my bank account.
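The sell-versus-rent arithmetic is simple enough to sketch. The figures below are just the prices quoted above (my $23 asking price and the observed rental range), not a live market feed:

```python
# Rough payback period: how many months of rental income it takes
# to match an outright sale, per address.
sale_price = 23.00                  # USD per address (my asking price)
rent_high, rent_low = 1.20, 0.20    # USD per address per month

for rent in (rent_high, rent_low):
    months = sale_price / rent
    print(f"At ${rent:.2f}/month, payback takes about {months:.0f} months "
          f"({months / 12:.1f} years)")
```

At the high rental rate the payback is roughly 19 months; at the low rate it stretches to well over nine years, which is where the two-year and ten-year figures above come from.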

If all that seems like a lot of work to you, then perhaps you just want to steer clear of the used marketplace for now. But if you like the challenge of doing the research, you could be a hero at your company for taking this task on.

Sometimes, the tin-foil hat types are right

A recent story in the NY Times caught my attention. It is about a neighborhood in the Cleveland area where some wireless car key fobs and garage door openers stopped working. The neighborhood is near a NASA research facility, so that was an obvious first suspect. But it wasn’t the cause. The actual source of the problem turned out to be an inventor who was flooding the radio spectrum at the same frequency the fobs use: 315 MHz. The inventor’s signal was so strong that it prevented anything else from transmitting on that frequency. Once the radio emitter was turned off, the fobs and garage doors started working again. He had no idea that he was the source of the interference.

This story reminds me of an experience that I had back in 1991 or so. At the time, I was the editor-in-chief of Network Computing magazine for what was then called CMP. It was a fun and challenging job, and one day I got a call from one of my readers, the IT manager for the American Red Cross headquarters in DC. This was Jon Arnold, who spent a long career in IT and sadly died of a heart attack several years ago. It turns out they had a chapter in Norfolk, Va. that was having networking issues. Their office was a small one, with about 25 or so staffers as I recall. Every day their network would start slowing down and then eventually go kaput for several hours. It happened at a different time each day, so it wasn’t the Cleaning Person Problem (I will get to that in a moment). It would come back online sometimes by itself, sometimes with a server reboot. The IT manager asked if I would be willing to lend a hand, and the first person I thought of to help me was Bill Alderson.

I first met Bill when he was a young engineer for Network General, which made the fabulous Sniffer protocol analyzer. Many of you who are not from that generation may not realize what this tool was or how big a deal it was to have a device that could record packet traffic and examine it bit by bit. Today we have open source tools that do the same thing for free, but back then the Sniffer cost four or five figures and came with a great deal of training. Bill cut his teeth on this product and now has his own company, HopZero, which has an interesting way to protect your servers by restricting their hop count.

That first meeting was back in 1989, when I worked at PC Week and we wanted to test the first local area network topologies. We set up three networks, running Ethernet, Token Ring, and Arcnet, in a networked classroom at UCLA during spring break. All were connected to the same Novell Netware server. Ethernet won the day (as you can see in this copy of the story), and the other topologies died of natural causes. But I digress.

Jon, Bill, and I flew to Norfolk and spent a day with the Red Cross staff trying to figure out what was happening with their Novell network. We did all sorts of packet captures that weren’t conclusive. Our first thought was that it had to be something wrong with the server, but we didn’t see anything wrong. Our second thought was more insidious. Being in Norfolk, we were directly down the road from the naval base (you could say that about much of the town; it is a big base). We actually managed to get through to the base commander to find out if their radar was active when ships were coming into port. Imagine making that phone call in our post-9/11 world! Anyway, the answer we got was negative. Eventually, after hours of shooting down various theories, we figured out that the cause of the problem was a wonky network adapter card in someone’s PC. It operated just well enough that it didn’t interfere with the network most of the time. Once we replaced the card, everything went swimmingly, and we could put away our tin-foil hats.

Okay, so what is the Cleaning Person Problem? This sounds like folklore, but another reader told me about a problem he had on his network years ago. The reader was periodically disconnected from his network at the same time every night. He was one of the few people online at that hour in his office, so it wasn’t as if there was heavy traffic on the network. Eventually, after several evenings, he figured out the problem: the cleaning crew was vacuuming the rug in the server room, and the network cable to the server was being run over by the vacuum. Because the cable wasn’t properly crimped, and because it was run under the carpet (who knows why this was done), it was shaken just loose enough to disconnect it from the server. When the crew was finished, the cable would operate just fine. In the end, they made a better cable and ran it where no one could step on it.

The Cleveland folks who had their car fobs disabled actually had it easy: the fault was a single deliberate emitter that, while initially difficult to trace, was a very binary cause. Their challenge was that not every car fob and garage door was affected. The two scenarios that I mention here were not so cut-and-dried, which made troubleshooting them more difficult. So keep these stories in mind when you are troubleshooting your next computer or networking problem, and don’t be so quick to blame user error. The cause could be something far less obvious, such as a stray radio transmission.

Why your networking future shouldn’t include NAT

This post is taken from a recent issue of the Internet Protocol Journal and reused with permission. It is written by Leroy Harvey, a data network architect.

The networking world seems to be losing sight of the fact that NAT is a crutch of sorts, a way of dealing with the primary problem of a lack of IPv4 addresses. An earlier article in IPJ stated that NAT provides a firewall function. I think NAT and firewalling are distinct functions, even if they are found on the same networking device. NAT doesn’t by itself provide any natural protection from the host on the other side of a protection point; the two can operate independently.

NAT does present real-world problems with a number of products, such as Microsoft AD Replication and IBM’s Virtual Tape Library. Passing through a NAT breaks the application’s intended communication model and requires compensating mechanisms.
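The breakage comes from applications that embed their own addresses in the payload, which the NAT never sees. Here is a deliberately toy sketch of the failure mode (a dictionary standing in for a packet, not how any of the products above actually implement their protocols):

```python
# A toy NAT: it rewrites the source address in the packet "header"
# but has no idea the application also wrote its private address
# into the payload. The far end then sees an unreachable private
# address inside the data it received.
def nat_translate(packet, private_ip, public_ip):
    translated = dict(packet)
    if translated["src"] == private_ip:
        translated["src"] = public_ip  # header is rewritten...
    return translated                  # ...but payload is untouched

packet = {
    "src": "10.1.2.3",
    "dst": "203.0.113.50",
    "payload": "CONNECT-BACK-TO 10.1.2.3",  # app embeds its own address
}
out = nat_translate(packet, "10.1.2.3", "198.51.100.7")
print(out["src"])      # the public address the far end replies to
print(out["payload"])  # still names the unreachable private address
```

Compensating mechanisms such as application-layer gateways exist precisely to rewrite those embedded addresses, which is the extra machinery the paragraph above alludes to.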

We are asking the wrong question when we ask, “Should I deploy IPv6 now?” Someone once told me that IPv6 was here to stay. To my way of thinking, it has not yet arrived after 25 years.

Let’s look at the situation where we want to merge two large company networks that both make extensive use of NAT. This becomes far more complex than it would be if the two networks had originally used a valid replacement for IPv4, and sadly, that protocol doesn’t exist. While I agree with the notion that the Internet can’t be completely stateless, this doesn’t justify using NAT as middleware. Justifying NAT for the sake of IPv4 life support is nonsensical.
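One concrete reason such mergers are painful: both companies have typically numbered their internal networks out of the same RFC 1918 private ranges, so the address plans collide and one side must renumber or be double-NATted. The collision check itself is trivial; a sketch with Python’s ipaddress module and two hypothetical address plans:

```python
import ipaddress

# Hypothetical internal subnets of two merging companies. Both drew
# from the same 10.0.0.0/8 private space, as almost everyone does.
company_a = [ipaddress.ip_network(n) for n in ("10.0.0.0/16", "10.1.0.0/16")]
company_b = [ipaddress.ip_network(n) for n in ("10.1.0.0/17", "172.16.0.0/20")]

# Every overlapping pair is a subnet someone will have to renumber.
collisions = [(a, b) for a in company_a for b in company_b if a.overlaps(b)]
for a, b in collisions:
    print(f"collision: {a} overlaps {b}")
```

Finding the overlaps is the easy part; renumbering the hosts, firewall rules, and hard-coded addresses behind them is where the real cost lies.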

We should appreciate NAT for its role as a tactical compensating mechanism for IPv4 address depletion, not as a strategic, future-proofing scalability mechanism for IPv4. What many are really saying about NAT amounts to putting lipstick on the IPv4 pig. Unfortunately, in IT there is nothing more permanent than a temporary solution. Let us not fall victim to this easy psychological trap only because we seem to have collectively painted ourselves into a corner of sorts.

Document your network

Over the weekend, I had an interesting experience. Normally, I don’t go into my office then; it is across the street from my apartment. But yesterday the cable guy was coming to try to fix my Internet connection. During the past week my cable modem would suddenly “forget” its connection. It was odd because all the idiot lights were solidly illuminated, and there seemed to be no physical event associated with the break. After I power cycled the modem, my connection would come back up.

I was lucky: I got a very knowledgeable cable guy, and he worked hard to figure out my issue. I will spare you most of the details and just tell you that he ended up replacing a video splitter that was causing my connection to drop. Cable Internet uses a shared connection, so my problem could have had multiple causes, such as a chatty neighbor or a misbehaving modem down the block. But once we replaced the splitter, I was good to go.

Now, I have been in my office for several years, and indeed built it out from unfinished space when I first moved in. I designed the cable plant and told my contractor where to pull all the wires and terminate them. But that was years ago. I didn’t document any of this, or if I did, I have misplaced that information. The cable tech took the time to make up for my oversight: he tracked down my misbehaving video splitter, which was buried inside one of my wall plates. And that is one of the morals of this story: always be documenting your infrastructure. It costs you less to do it contemporaneously, when you are building it, than to come back after the fact and try to remember where your key network parts are located or how they are configured.

Part of this story was that I was using an Evenroute IQrouter, an interesting wireless router that can optimize for latency. I was able to bring up this graph that showed me the last several minutes’ connection details so I knew it wasn’t my imagination.

 

Now my network is puny compared to most companies’, to be sure. But I have been in some larger businesses that don’t do any better job of keeping track of their gear. Oh, the wiring closets that I have been in, let me tell you! They look more like spaghetti. For example, here I am in the offices of Checkpoint in Israel in January 2016. Granted, this was in one of their test labs, but still, it shouldn’t look like this (I am standing next to Ed Covert, a very smart infosec analyst):

 

Compare this with how they should look. This was taken in a demonstration area at Panduit’s offices. Granted, it was set up to show how neat and organized their cabling could be.

Documentation isn’t just about making pretty color-coded cables nice and neat, although you can start there. The problem is that when you have to change something, you need to keep track of what you did. This means being diligent when you add a new piece of gear, change your IP address range, or add a new set of protocols or applications. So many times you hear about network administrators who opened a particular port and didn’t remember to close it once the reason for the request was satisfied. Or a user account that was still active months or years after the user had left the company. I had an email address on Infoworld’s server for years after I no longer wrote for them, and I tried to get it turned off to no avail.
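Auditing is easier to keep up with when it is scripted. As a minimal sketch of the forgotten-port problem, here is a check of which TCP ports on a host actually accept connections, using only Python’s standard library (the host and port list are hypothetical; compare the result against what your documentation says should be open):

```python
import socket

def open_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on a successful connection
            if s.connect_ex((host, port)) == 0:
                found.append(port)
    return found

# Anything answering that isn't in your documentation deserves
# an explanation -- or a closed port.
print(open_ports("127.0.0.1", [22, 80, 443, 8080]))
```

Run periodically and diffed against the documented baseline, even a crude script like this catches the port that was opened for a one-off request and never closed.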

So take the time and document everything. Otherwise you will end up like me, with a $5 part inside one of your walls that is causing you downtime and aggravation.

StateTech magazine: 4 Steps to Prepare for an IoT Deployment

As the Internet of Things (IoT) becomes more popular, state and local government IT agencies need to play more of a leadership role in understanding the transformation of their departments and their networks. Embarking on any IoT-based journey requires governments and agencies to go through four key phases, which should be developed in the context of creating strategic partnerships between business lines and IT organizations. Here is more detail on these steps, published in StateTech Magazine this month.

SecurityIntelligence.com: Protecting Your Network Through Understanding DNS Requests

Most of us know that the Domain Name System (DNS) is a critical piece of our network infrastructure, and most of us have at least one tool to keep DNS requests current and clear of potential abuses. Sometimes a little common sense and knowledge of your system log files, and the DNS requests contained therein, can go a long way toward understanding when your enterprise network infrastructure has been breached. In my latest blog post for SecurityIntelligence.com, I relate a tale from the Cisco Talos blog about how they used just this sort of common-sense research.

Coping with Mixed Operating Systems: Strategies for Supporting Enterprise Heterogeneity

Back in October 1993, I wrote a story for Computerworld about how IT shops were dealing with supporting a mixture of OS’s. Back then, we didn’t have Chrome OS, or BYOD, or even a commonly deployed TCP/IP stack to connect disparate systems. I wrote then:

When it comes to supporting enterprise networks, heterogeneity has become a fact of life, and this is especially true when it comes to supporting operating systems. For better or worse, the networks of today have become a real mixed bag.

How very true. For a look back in time, check out the link above. And for a more modern story, I was interviewed on this topic for NewEgg’s B2B site, in this story: Support Chromebooks in a Windows Domain. This article links to some modern tools that can be used to administer mixed OS’s.

iBoss blog: The IoT Can Be a Potent Insider Threat

Insider threats can come from the most unexpected places. Earlier this year, the hacker Andrew Auernheimer created a script that would scan the Internet to find printers that had port 9100 open. The script then printed out racist documents on printers across the globe.

You can read my post here about the threat of Internet-connected printers.

The evolution of today’s enterprise applications

Enterprises are changing the way they deliver their services, build their enterprise IT architectures and select and deploy their computing systems. These changes are needed, not just to stay current with technology, but also to enable businesses to innovate and grow and surpass their competitors.

In the old days, corporate IT departments built networks and data centers that supported computing monocultures of servers, desktops, and routers, all of which were owned, specified, and maintained by the company. Those days are over, and now how you deploy your technologies is critical, in what one writer calls “the post-cloud future.” Now we have companies that deliver their IT infrastructure completely from the cloud and don’t own much of anything. IT has moved to being more of a renter than a real estate baron. The raised-floor data center has given way to just a pipe connecting a corporation to the Internet. At the same time, the typical endpoint computing device has gone from a desktop or laptop computer to a tablet or smartphone, often purchased by the end user, who expects his or her IT department to support this choice. The actual device itself has become almost irrelevant, whatever its operating system and form factor.

At the same time, the typical enterprise application has evolved from something that was tested and assembled by an IT department to something that can readily be downloaded and installed at will. This frees IT departments from having to invest time in the “nanny state” approach of tracking which users are running which applications on which endpoints. Instead, those staffers can work on improving apps that benefit the business directly. The days when users had to wait for their IT departments to finish a requirements analysis study or go through a lengthy approvals process are firmly in the past. Today, users want their apps here and now. Forget about months: minutes count!

There are big implications for today’s IT departments. To make this new era of on-demand IT work, businesses have to change the way they deliver IT services. They need to make use of some if not all of the following elements:

  • Applications now have Web front ends and can be accessed anywhere with a smartphone and a browser. This also means acknowledging that the workday is now 24×7, and users will work with whatever device, whenever and wherever they feel most productive.
  • Applications have intuitive interfaces: no manuals or training should be necessary. Users don’t want to wait on their IT department for their apps to be activated, on-boarded, installed, or supported.
  • Network latency matters a lot. Users need the fastest possible response times and are going to be running their apps across the globe. IT has to design their Internet access accordingly.
  • Security is built into each app, rather than by defining and protecting a network perimeter.
  • IT staffs will have to evolve away from installing servers and towards managing integrations, provisioning services and negotiating vendor relationships. They will have to examine business processes from a wider lens and understand how their collection of apps will play in this new arena.

 

Network World: Slow Internet links got you down? Try Dyn’s Internet Intelligence

As businesses extend their reach to more corners of the world, wouldn’t it be nice if you could monitor any Internet service provider from any location? Thankfully, Dyn, which sells DNS management tools, acquired Renesys earlier this year and extended the features of the Renesys Internet Intelligence product.

You can read the full review in Network World here.