Over the weekend, I had an interesting experience. Normally, I don’t go into my office then, which is across the street from my apartment. But yesterday the cable guy was coming to try to fix my Internet connection. During the past week, my cable modem would suddenly “forget” its connection. It was odd because all the idiot lights were solidly illuminated, and there seemed to be no physical event associated with the break. After I power-cycled the modem, my connection would come back up.
I was lucky: I got a very knowledgeable cable guy, and he worked hard to figure out my issue. I will save you a lot of the description here and just tell you that he ended up replacing a video splitter that was causing my connection to drop. Cable Internet uses a shared connection, so my problem could have had multiple causes, such as a chatty neighbor or a misbehaving modem down the block. But once we replaced the splitter, I was good to go.
Now I have been in my office for several years, and indeed built it out from unfinished space when I first moved in. I designed the cable plant and decided where I wanted my contractor to pull all the wires and terminate them. But that was years ago. I didn’t document any of this, or if I did, I have since misplaced that information. But the cable tech took the time to make up for my oversight: he tracked down my misbehaving video splitter, which was buried inside one of my wall plates. And that is one of the morals of this story: always be documenting your infrastructure. It costs you less to do that contemporaneously, when you are building it, than to come back after the fact and try to remember where your key network parts are located or how they are configured.
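One lightweight way to document as you build is to keep a machine-readable inventory alongside the wiring diagram. Here is a minimal sketch in Python; the component names, locations, and dates are made up for illustration, not taken from my actual cable plant:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class PlantItem:
    """One physical component in the cable plant."""
    component: str   # e.g. "video splitter"
    location: str    # e.g. "wall plate, NE office corner"
    installed: str   # ISO date; your best recollection if built years ago
    notes: str = ""

# Hypothetical inventory entries, for illustration only.
inventory = [
    PlantItem("video splitter", "wall plate, NE office corner", "2009-05-01",
              "splits the cable drop between modem and TV"),
    PlantItem("cable modem", "desk, under monitor", "2014-02-10"),
]

# Dump to a JSON file you can keep with the wiring diagram and update
# every time something changes.
with open("cable-plant.json", "w") as f:
    json.dump([asdict(item) for item in inventory], f, indent=2)
```

Even a file this simple, kept current, would have told me there was a splitter hiding behind that wall plate.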
Part of this story was that I was using an Evenroute IQrouter, an interesting wireless router that can optimize for latency. I was able to bring up this graph that showed me the last several minutes’ connection details so I knew it wasn’t my imagination.
Now my network is puny compared to most companies’, to be sure. But I have been in some larger businesses that don’t do any better job of keeping track of their gear. Oh, the wiring closets I have been in, let me tell you! They look more like spaghetti than structured cabling. For example, here I am in the offices of Check Point in Israel in January 2016. Granted, this was in one of their test labs, but it still shouldn’t look like this (I am standing next to Ed Covert, a very smart infosec analyst):
Compare this with how they should look. This was taken in a demonstration area at Panduit’s offices. Granted, it was set up to show how neat and organized their cabling could be.
Documentation isn’t just about making pretty color-coded cables nice and neat, although you can start there. The real problem is change: when you modify something, you need to keep track of what you did. This means being diligent when you add a new piece of gear, change your IP address range, or add a new series of protocols or applications. So many times you hear about network administrators who opened a particular port and didn’t remember to close it once the reason for the request was satisfied. Or a user account that was still active months or years after the user had left the company. I had an email address on Infoworld’s server for years after I no longer wrote for them, and I tried to get it turned off to no avail.
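One way to avoid the forgotten-open-port problem is to record an owner and an expiry date with every temporary change request, then periodically flag anything past its date. A minimal sketch, with hypothetical rule entries:

```python
from datetime import date

# Hypothetical change log: every temporary firewall opening gets an
# owner and an expiry date when it is recorded.
port_requests = [
    {"port": 8443, "owner": "vendor demo", "expires": date(2015, 3, 1)},
    {"port": 2222, "owner": "contractor SSH", "expires": date(2099, 1, 1)},
]

def stale_rules(rules, today=None):
    """Return rules whose expiry date has passed."""
    today = today or date.today()
    return [r for r in rules if r["expires"] < today]

for rule in stale_rules(port_requests):
    print(f"close port {rule['port']} (was for: {rule['owner']})")
```

The same pattern works for user accounts: record a review date when an account is created, and sweep for anything unreviewed after a departure.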
So take the time and document everything. Otherwise you will end up like me, with a $5 part inside one of your walls that is causing you downtime and aggravation.
As the Internet of Things (IoT) becomes more popular, state and local government IT agencies need to play more of a leadership role in understanding the transformation of their departments and their networks. Embarking on any IoT-based journey requires governments and agencies to go through four key phases, which should be developed in the context of creating strategic partnerships between business lines and IT organizations. Here is more detail on these steps, published in StateTech Magazine this month.
Most of us know that the Domain Name System (DNS) is a critical piece of our network infrastructure, and most of us have at least one tool to keep DNS requests current and clear of potential abuses. Sometimes a little common sense, plus knowledge of your system log files and the DNS requests contained therein, can go a long way toward understanding when your enterprise network infrastructure has been breached. In my latest post for SecurityIntelligence.com, I recount a tale from the Cisco Talos blog about how they used just such common-sense research.
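One common-sense check anyone can run against their own resolver logs is to look for long, random-looking hostnames, which often indicate domain-generation-algorithm malware or DNS tunneling. A rough sketch using Shannon entropy; the threshold and sample domains are illustrative assumptions, not details from the Talos post:

```python
import math
from collections import Counter

def entropy(label: str) -> float:
    """Shannon entropy of the characters in a DNS label."""
    counts = Counter(label)
    total = len(label)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def suspicious(domains, threshold=3.5, min_len=12):
    """Flag long, random-looking leftmost labels."""
    flagged = []
    for d in domains:
        label = d.split(".")[0]
        if len(label) >= min_len and entropy(label) > threshold:
            flagged.append(d)
    return flagged

# Hypothetical log excerpt; real entries would come from your resolver logs.
queries = ["www.example.com", "mail.google.com",
           "kq3vz7x9mw1pd8ty.badsite.net"]
print(suspicious(queries))  # → ['kq3vz7x9mw1pd8ty.badsite.net']
```

A check this crude produces false positives (CDN hostnames can look random too), but it is exactly the kind of cheap first pass that turns log files into leads.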
Back in October 1993, I wrote a story for Computerworld about how IT shops were dealing with supporting a mixture of operating systems. Back then, we didn’t have Chrome OS, or BYOD, or even TCP/IP as a common protocol in wide enough use to connect disparate systems. I wrote then:
When it comes to supporting enterprise networks, heterogeneity has become a fact of life, and this is especially true when it comes to supporting operating systems. For better or worse, the networks of today have become a real mixed bag.
How very true. For a look back in time, check out the link above. And for a more modern story, I was interviewed on this topic for NewEgg’s B2B site, in this story: Support Chromebooks in a Windows Domain. This article links to some modern tools that can be used to administer mixed operating systems.
You can read my post here about the threat of Internet-connected printers.
Enterprises are changing the way they deliver their services, build their enterprise IT architectures and select and deploy their computing systems. These changes are needed, not just to stay current with technology, but also to enable businesses to innovate and grow and surpass their competitors.
In the old days, corporate IT departments built networks and data centers that supported computing monocultures of servers, desktops and routers, all of which were owned, specified, and maintained by the company. Those days are over, and now how you deploy your technologies is critical, what one writer calls “the post-cloud future.” Now we have companies that deliver their IT infrastructure completely from the cloud and don’t own much of anything. IT has moved to being more of a renter than a real estate baron. The raised-floor data center has given way to just a pipe connecting a corporation to the Internet. At the same time, the typical endpoint computing device has gone from a desktop or laptop computer to a tablet or smartphone, often purchased by the end user, who expects his or her IT department to support this choice. The actual device itself has become almost irrelevant, whatever its operating system and form factor.
At the same time, the typical enterprise application has evolved from something that was tested and assembled by an IT department to something that can readily be downloaded and installed at will. This frees IT departments from having to invest time in their “nanny state” approach in tracking which users are running what applications on which endpoints. Instead, they can use these staffers to improve their apps and benefit their business directly. The days when users had to wait on their IT departments to finish a requirements analysis study or go through a lengthy approvals process are firmly in the past. Today, users want their apps here and now. Forget about months: minutes count!
There are big implications for today’s IT departments. To make this new era of on-demand IT work, businesses have to change the way they deliver IT services. They need to make use of some if not all of the following elements:
- Applications now have Web front ends and can be accessed from anywhere with a smartphone and a browser. This also means acknowledging that the workday is now 24×7, and users will work with whatever device, whenever and wherever they feel the most productive.
- Applications have intuitive interfaces: no manuals or training should be necessary. Users don’t want to wait on their IT department for their apps to be activated, on-boarded, installed, or supported.
- Network latency matters a lot. Users need the fastest possible response times and are going to be running their apps across the globe. IT has to design their Internet access accordingly.
- Security is built into each app, rather than by defining and protecting a network perimeter.
- IT staffs will have to evolve away from installing servers and towards managing integrations, provisioning services and negotiating vendor relationships. They will have to examine business processes from a wider lens and understand how their collection of apps will play in this new arena.
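Designing Internet access around latency, as the list above suggests, starts with measuring it from where your users actually sit. A small sketch that times TCP connections to a given host; the endpoint shown is a placeholder, not a recommended target:

```python
import socket
import time

def connect_latency(host: str, port: int = 443, samples: int = 3) -> float:
    """Median TCP connect time to host:port, in milliseconds."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        # create_connection resolves the name and completes the handshake.
        with socket.create_connection((host, port), timeout=5):
            pass
        times.append((time.perf_counter() - start) * 1000)
    return sorted(times)[len(times) // 2]

# Example usage (requires network access):
#   print(f"{connect_latency('www.example.com'):.1f} ms")
```

Run from each region where you have users and you get a crude but honest baseline for deciding where apps, caches, or direct Internet breakouts belong.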
As businesses extend their reach to more corners of the world, wouldn’t it be nice if you could monitor any Internet service provider from any location? Thankfully, Dyn, which sells DNS management tools, acquired Renesys earlier this year and extended the features of Renesys’ Internet Intelligence product.
Snap can be used in a wide variety of monitoring situations, such as to track servers, virtual machines, applications, databases, network and storage devices.
We tested version 7.1 of Snap on a network where it quickly discovered our Windows, Mac and Linux machines in February 2014. It is free and fully functional to monitor up to 30 devices, with a paid version for larger networks.
One Hybrid Cloud can migrate both physical and virtual servers to the cloud using simple but powerful methods that can preserve an entire application with its networks and services.
One Hybrid Cloud
Supports various Linux and Windows servers and Amazon Web Services
Pricing starts at $15,000/year for a basic application license that can migrate up to 50 servers, based on app complexity and critical services.
The Cisco ASA has better application granularity, a more flexible means of policy creation, easier-to-use controls, and more powerful reports than its predecessors. We tested the ASA 5525-X in January 2013 and found a much improved user interface and lots of content-aware features.
Pricing starts at $13,500 for hardware and software subscriptions.