Party like the Internet is 1994

BMW has this very funny ad where Katie Couric and Bryant Gumbel discuss the makeup of an Internet email address back in 1994.

To say that the Internet wasn’t mainstream enough for the Today show hosts is an understatement. Back then, few people had any idea of what it was, how email was used, or what the punctuation in the email address signified. Looking at the Today show this morning, things certainly have changed: live Tweeting of the snowstorm, Carson Daly and his magic touch screen surfing social media, and even some of the hosts reading off their laptops on air. We have come a long way.

But let’s go back to what we were all doing 20-some years ago. Back then it was hard to get online. We had dial-up modems: no Wifi, no broadband, no iPhones. PCs had PCMCIA cards, the precursor to USB ports. And other than Unix, no desktop operating system came with built-in support for the IP protocols.

Now it is hard to find a computer that includes a dial-up modem, or one without Wifi support. Even the last desktop PC that I bought came with a Wifi adapter.

The communications software was crude and finicky: it was hard to run connections that supported Ethernet (or Token Ring, remember that?) on the local office network and then switch to remote IP connections when you went on the road. I was using Fetch for file transfer (I still like that program, it is so dirt simple to use) and Mosaic, the first Web browser, which came out of the Illinois campus where a young Marc Andreessen was studying before he made it rich at Netscape. Companies such as Netmanage and Spry were packaging all the various programs that you needed to get online into an “Internet in a Box.” This was a product that was a bit different from the one described in “The IT Crowd” TV show a few years later.

Back in 1994, I had a column in InfoWorld where I mentioned that configuring TCP/IP was “an exercise in learning Greek taught by an Italian.” My frustration was high after trying a series of products, each of which took several days’ worth of tech support calls and testing of various combinations of software and OS drivers to make them work. Remember NDIS and the protocol.ini file? You had to be familiar with both if you did a lot of communicating, because that is where you had to debug your DOS and early Windows communications strings. And when the products did work, it was only with particular modems.

Finding an Internet service provider wasn’t easy. There were a few hardy souls who tried to keep track of which providers offered service, through a combination of mailing lists and other online documents. Of course, the Web was just getting started. Getting a dot com domain name was free – you merely requested one and a few seconds later it was yours. Before I had strom.com, I was using Radiomail and MCIMail as two options for Internet-accessible email addresses.

Indeed, mobility often meant using different modems with different software tools. When I traveled, I took four programs with me: cc:Mail (to correspond with my readers and to file my columns with the editors), Smartcom (to pick up messages on MCI Mail and other services that I connected to from time to time), Eudora (for reading my Internet mail), and Versaterm AdminSlip (for connecting to my Internet service provider). That was a lot of gear and software to keep track of.

With all of these modems, if you can imagine, the telephone network was our primary means of connection when we were on the road. Of course, back then we were paying for long distance phone calls, and we tried to minimize this by finding collections of “modem pools” to dial into that were a local call away. Back then I was paying $100 a month for dial up! Then ISDN came along and I was paying $100 for 128 kbps! Now I pay $40 a month for broadband access. I guess things have improved somewhat.

The uneven history of collaboration

Those of you who have been loyal readers of my work over the years will recognize a consistent theme where I talk about how we use computers to collaborate on projects. I came to work this morning to hear several complaints via IM from a friend who has an evil co-worker who was berating her about her ideas. “What a way to say good morning,” said my friend to me. That got me thinking.

We have come a long way on the route towards better collaboration, with fancier tools, higher-speed networks, and near ubiquitous Internet access, but the people part of the equation is still somewhat lacking. Let’s take a trip down memory lane and I’ll point out some of the things that I have seen over the years.

My most recent column on collaboration describes the British crypto program Colossus and how thousands of people worked over several years to decode German dispatches during World War II. That was collaboration with a capital C. The equipment was barely functional, and the workflows to get the job done were brutally complex. But it worked, because everyone pulled together and did their part and respected the individual roles that each had to play. So this is noteworthy because of the time period and the level of actual collaboration that took place.

But Colossus happened with everyone working in very close quarters, dictated by the wartime requirements and how the machine was constructed. In the modern era, we are more concerned about incorporating collaboration when we have distributed work teams. I had to deal with this early on, when in 1990 I founded Network Computing magazine and we hired the best editors from all around the country. Back then it was a challenge, especially as we had to build trust and help foster a sense of purpose with people who had never met face-to-face.

This is still an issue today. In this recent post for WPCurve, Don Virgillito talks about what he has seen from the best collaborative teams that were distributed geographically. He suggests that you should “consider existing remote work experience as an essential skill.” And you’ll also “have to grow as a team to continue being a team, which would involve meetings, team training, retreats to boost morale, and real-life meetings when possible to build long lasting relationships.” That is exactly what we did back in the early 1990s, and it worked so well that the magazine continued using some of these practices long after I left its helm.

As far back as 2008, I was writing about this people side of collaboration:

So, as PC processors get faster, disks get bigger, and our social networks get larger, we still don’t have the perfect collaboration solution. We still think of the data on our hard disks as our own, not our employer’s. Sharing is still for sissies. Until that attitude changes, the headphones will stay firmly stuck in our ears, blocking out the rest of the world around us.

How many of us are still stuck in a similar situation? And as more “bullpen” style office configurations are created, this only worsens the scenario.

If we look more closely at the people relationships than at the technology, we’ll find a lot more insights into how collaboration happens. That was the subject of this post from 2009, where a survey of collaboration habits found that a key staffer way down the food chain was the glue that held things together. He worked closely with all the different stakeholders and pushed for better collaboration between departments, because the bosses of each department didn’t know how to talk to each other. Another part of this survey found that companies should focus on their least communicative employees: if they can make them even slightly better communicators, they will greatly improve productivity.

Let’s turn to the tools used for collaboration.

In 1990, we didn’t have the Internet, and setting up email meant running our own system that dialed into each of our remote offices to route messages around. IM didn’t exist and cell phones were still rare, so remote editors had to find payphones and RJ11 jacks for their modems. We spent a lot of time sending large graphics files around on our network because there wasn’t any other way to share them. In many ways, it was the dark ages of collaboration tools.

But as email became successful, it also brought overuse, something that we still wrestle with today. By 2009 we were focused more on cutting down on back-and-forth emails and improving remote access. In an article that I wrote for an IT publication at the time, I talked about the various tools that were available, some of which are still being used today. (A chart that I prepared for my website is woefully outdated, but it will give you a picture of what things I was looking at.)

And in this story that I wrote for Baseline magazine in 2011, I looked at other ways that collaboration happens, using screen sharing, document management and workflow management tools. For example, at one aftermarket auto parts chain, installers can get the schematics of the car they are working on at the moment they need them, thanks to a collaboration tool that integrates into their workflow.

Since then, we have all sorts of fancy stuff to use. A review I wrote last year for Computerworld looks at three of them: Glip, Slingshot, and Flow. While none of the three is perfect, we have come a long way since the early, pre-Web days. And if you click on that link above for the WPCurve article, you will see dozens of other suggestions of what the latest startups use for their collaboration technologies.

So yes, the history of collaboration has been one of fits and starts, making some of the same mistakes over again and not really considering the historical context. I hope you find the journey as interesting as I have.

Avoiding the IT rust belt

Living in the Midwest, I consider what life is like in the Rust Belt, where aging manufacturing companies have come and gone. When I first moved to St. Louis, all three of the domestic car makers had plants here; now we are down to just one of them. I was talking to a friend of mine who mentioned that she is seeing something similar: some enterprises are still stuck with their “rust belt” version of IT services and have yet to make their move. Whether their businesses will suffer the same fate as some of the manufacturing industry is hard to say.

So I thought I would talk to another friend of mine, who is the IT director at a Midwest steel mill and get his perspective. He has a small IT shop of six people but they are doing some interesting things to keep their plant competitive. “We can’t really innovate on our product line, but we have to do things from an IT perspective to serve our customers better and become more productive and cost effective,” he told me. Here is how he does it:

Get rid of aging equipment. While he still has a few Windows XP machines around, most of his IT gear is the latest and greatest. He is in the process of upgrading his Cisco network infrastructure, for example. Some of his most ancient computers that run his plant are connected to logic controllers, just because they don’t require much horsepower and they are on the plant floor where the environment is brutal. “Eventually, we will replace these when they break or migrate one of our older office PCs to there.”

Beware of legacy systems. “We were able to leapfrog other steel companies with our technology, because when we started the company we didn’t have many legacy systems.” Now he has three developers who build custom apps that are used for handling their steel inventory and steel operations, along with a customer portal. For example, his team built a custom automated scheduling system when one of the two people in that department retired. Now it takes the remaining staffer just part of the day to do the scheduling, and they have cut wasted product and improved their steel yields tremendously, saving money too.

Make use of the cloud and virtualization where it makes sense. “When I started with the company, we had a single PC server that was running everything. Now we have virtual servers and are looking at converged storage infrastructure too.” One of his first decisions was to employ Google Apps for their enterprise email. “That was a slam-dunk and one of the best decisions I ever made.” He is looking at acquiring other cloud-based apps as well.

Vendor flexibility. Part of moving to the cloud means making use of services that don’t require vendor lock-in. “I want to have options to migrate away from them if things don’t work out. That is why I haven’t gotten involved in SAP or Salesforce yet; there isn’t any easy way to walk away from them.”

Segregate production and office networks. As the stories about the German steel mill attack in December testify, keeping the office and mill networks totally separated is critical. Hackers there used an infected email to get inside and literally trash one of the mills. “We have different networks and VLANs so this won’t happen to us.”

Your own rust belt conditions might differ: for example, you might still be using Systems Network Architecture on your mainframes, or connecting via Frame Relay networks. So it might be a good time to reconsider these and other aging technologies, and start investing in your future. (The building pictured above is an old automotive plant that has been renovated into condos, by the way.)

Network World: Uptime simplifies system and server monitoring

Server and systems management tools have long been too expensive and too complex to actually use without spending thousands of dollars on consultants. Uptime Software has been trying to change that with a product that can be installed and useful within an hour. I tested the latest version (7.3.1) on a small network of Windows physical and virtual machines in a review that was published on Network World today.

The screenshot above shows an example of Uptime’s dashboard, which is filled with all sorts of actionable information and is completely customizable.

Masergy blog: How Cloud Computing Changes the Role of IT

We are experiencing a significant shift in how corporate IT departments deliver services to the business in this era of cloud computing. IT staffs are evolving beyond installing servers and software towards provisioning services, negotiating vendor relationships and collaborating with business users on application service delivery requirements.

I have written up some thoughts for Masergy, a broadband communications management vendor, on what this means for IT staffs, skills and the resulting networking mix.

You can read my post on their blog here.

Network World: Six Unified Threat Management Units Reviewed

The world of unified threat management appliances continues to evolve. In my 2013 UTM review, I looked at units from Check Point Software (which topped the ratings), Dell/Sonicwall, Elitecore Technologies’ Cyberoam, Fortinet, Juniper Networks, Kerio Technologies, Sophos, and Watchguard Technologies.

This year I reviewed the Calyptix AccessEnforcer AE800, Check Point Software’s 620, Dell/Sonicwall’s NSA 220 Wireless-N, Fortinet’s FortiWiFi-92D, Sophos’ UTM SG125 and Watchguard Technologies’ Firebox T10-W (pictured below). With the exception of Calyptix, the other five are all in Gartner’s “leader” quadrant of their latest UTM report. We contacted other vendors including Cisco, Juniper and Netgear, but they declined to participate. In addition, Sophos has purchased the Cyberoam line and will combine its features with its existing UTM products sometime next year.

Overall, the market has evolved slowly rather than undergoing any big revolutionary changes. Products are getting better in terms of features and price/performance. All six of these units will do fine for securing small offices of 25 people.

You can read the review here, check out a slideshow of the screenshots of typical features here, and watch a short (two minute) screencast video summarizing the major points of the review here.

Why you need to review your stats regularly

I admit it; I have fallen out of the habit of reviewing my various stats on my websites and other content-oriented places. For many years I dutifully kept track of how my posts were doing, who was commenting, where backlinks were coming from, and so forth.

For some reason, I stopped doing this in the past year. Maybe I was just being lazy, maybe I had gotten very busy with a lot of very interesting assignments. Maybe it was just old age: I have been writing stuff for more than 25 years, after all.

Well, all of those (and others) aren’t valid excuses. You need to check your stats, and check them regularly. There are lots of interesting things hidden in them that you might not realize, and some of them can help you deliver better content, target new audiences, or figure out what you are doing right (and do it more often) or wrong (and avoid or improve it).

WordPress’ Jetpack delivers an annual email summary of your blog and its posts: this is a very useful reminder that you need to dive in deeper and see what is going on with your blog (or blogs, in my case). Slideshare.net also has some great analytics, and the service is a wonderful place to post PowerPoints of my presentations. Had I been looking at these analytics regularly, I would have found out:

  • Influence can be found in odd places. A post that I wrote for SoftwareAdvice.com about real-time retail store tracking was picked up by a blogger for the point-of-sale vendor Vend.com, which brought a bunch of visitors to my site back in the spring when I was quoted. That could have been an opportunity to talk more about the subject.
  • Don’t knock the long tail. I am still the leading expert on a very obscure Windows error message: if you Google “Windows Media Player error c00d11b1” you will see my post in the first ten or so results. The post has received more than 380,000 views in the more than eight years since I wrote it, and it is still getting comments on my blog and links in the Microsoft forums too. Why is this important? All this traffic on a very specific subject can help raise your Google ranking, and also provide an entry point into your content ecosystem if you manage it properly.
  • My influence beyond North American borders is somewhat quirky. The second most common country of origin for my Slideshare.net viewers is Ukraine, with about half as many views as the US over the past year. Again attesting to the very long tail, a good chunk of these views came from a presentation that I posted five years ago on how to set up your first blog and business email. (That kind of makes sense.) For my blog, other popular countries of origin for my visitors were India and Brazil. Don’t forget the rest of the world when you are posting your content, and widen your perspective to engage more of these readers.
  • Twitter and Facebook were both important traffic drivers for my blog over the past year. This emphasizes how critical your own social media accounts are and how you need to cross-link posts among them. Combined, the two were equal to the traffic brought in from Google organic searches, which is another important element in referring traffic. Don’t just post blog entries on your blog: I have begun cross-posting my content on LinkedIn Pulse and Medium, and they are getting a fair share of views there too. The analytics for those sites could be better though: for example, Medium only allows you to look at month-long intervals at a time, and Pulse will only send you static results in regular email summaries.

There are lots of Twitter analytics tools, some of them quite pricey. One that I like, and that has a free version, is TwitterCounter, which tracks who followed and unfollowed you over time. For example, I got excited this week to see that the actor Taye Diggs followed me (he has been following my wife’s Tweets for some time) but our local mayor dropped me (oh well). You can see the kind of graph it produces below, which to me indicates a fairly steady stream of new followers replacing the drop-offs, with slow overall growth:

[TwitterCounter follower graph]

Happy new year and may your stats encourage you to deliver better content in 2015!

The sad ironies of the Sony affair


I have been spending time studying up on what actually happened at Sony over the past month. There has been a tremendous amount of inaccurate reporting, and a dearth of factual information. Let’s try to set that record a bit straighter. From where I sit, the attack on Sony’s network and the activity around the movie’s release were two separate events, probably caused by at least two separate entities. Assigning blame for both of them to the same actor is ludicrous. (And Dr. Evil has a few funny things to say about the whole situation too.)

First, the sad irony of a company that deliberately injected malware into its products being hacked yet again. While many, including President Obama, were quick to assign blame to the North Koreans, the actual initial breach appears to be the work of a Sony insider who could guide the hackers toward specific servers and IP addresses. Certainly, this level of detail could have been sussed out with lots of clever hacking, but the simpler explanation is a dissatisfied former employee, of which there are many.

Second, the sad irony of the press becoming so enthralled with the sordid details of the leaked content that they forgot their actual duty of telling the story of what happened. They share the blame with the hackers, who knew exactly how to manipulate them and feed our hunger for celebrity gossip.

The third irony is that Sony’s security should have been better: this isn’t their first rodeo and certainly won’t be their last. Storing passwords as plain text, using the word “password” or other commonly guessed words, and having no mechanism to monitor the exfiltrated data were all shameful practices. What is doubly wrong is that they have had numerous opportunities to improve their IT procedures, and haven’t.

Ironic that their passwords were so poor that a security researcher was able to inject a fake Sony SSL certificate by guessing one of them. Thankfully, this wasn’t a deliberate hack, just a demonstration of how easily Sony’s procedures could be circumvented.

Ironic too are all the calls for posting the movie on various online streaming services to counter the cancelled Christmas Day release. So the way to combat censorship, even when self-imposed, is to take your content to the cloud so that more people can see your movie. Wasn’t this what many private citizens were asking the MPAA to let them do when they posted movies online?

Also ironic are the stories about how the MPAA and Sony were using denial-of-service methods to try to keep people from seeing their movies, including The Interview. See irony #1 about injecting malware, etc. And how ironic was it that the peer-to-peer file sharing services actually worked in cooperation with the movie studios to take down the leaked content, including some copies of pre-released movies, quickly once the hackers uploaded them?

Also ironic is how one of the first things our government did was ask for a joint effort with the Chinese to cooperate in controlling this hack: perhaps the same unit within the Chinese government that we recently indicted for cyber espionage could be used? Granted, there is a line between espionage and criminal hacking, or at least there used to be one.

Finally, while not ironic, it is sad that the film’s creators insisted on using the actual name of a living head of state in their film. While this is not the first time that has happened, they could have scored their satire points by going the Chaplin/Great Dictator route, which never actually names Hitler but in every other way goes about pillorying him. Certainly, you can’t blame the North Koreans on this point: had someone used a similar plot line with our president, chances are even our bumbling Secret Service would have been all over it too.

If you want to read a very solid collection of the various events of the past month, the folks at Risk Based Security (a Virginia security VAR) are worth your time and clicks. They continue to add to their coverage as new events unfold.

So what are some action items you can take? Here are a few:

  • Understand that all it takes is an unhappy employee with a thumb drive and basic file copying skills. You should think about your HR and data leak prevention policies accordingly.
  • Get thy passwords in order, puh-leeze. This isn’t something that will cost megabucks; see the minimal sketch after this list.
  • It is way past time to encrypt your email, especially if you communicate with global brands. And even if you don’t, that is all the more reason to start.
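
On the password point, moving stored credentials off plain text doesn’t require an expensive product. Here is a minimal sketch using only Python’s standard library; the 200,000 PBKDF2 iterations are my own illustrative placeholder, not a figure from any particular standard:

```python
import hashlib, hmac, os

def hash_password(password: str, iterations: int = 200_000):
    """Store a random salt plus a PBKDF2 hash instead of the plain-text password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes, iterations: int = 200_000) -> bool:
    """Recompute the hash and compare it in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)

# Even if this table leaks, the attacker gets salts and hashes, not passwords.
salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))   # True
print(verify_password("password", salt, digest))                       # False
```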

And to wrap up, I want to quote S. Cobb’s blog where he says:

“Rather than berate those who are being realistic about our current weaknesses, let’s put our anger and our energy into demanding companies and governments do a better job of securing our digital assets and defending the digital world.”

Stepping up to better authentication

The days of multifactor security tokens may be numbered, just as they are moving beyond hardware form factors. While they are clever solutions, users don’t always like to use them, in whatever guise. Tokens get in the way of the actual transaction itself. IT staffs tolerate tokens, but they do require a fair amount of programming effort to integrate into existing systems. Tokens also have their limitations and typically only address a single access threat vector. For example, some authentication methods are great at protecting e-commerce connections but don’t handle remote connections to in-house systems or pre-paid debit card exploits.

What is catching on instead is what is called risk-based authentication, also known as context-aware or adaptive access control. The idea is to base any access decision on a dynamic series of circumstances. These circumstances count as the additional authentication factor, rather than relying on a particular set of tokens or pieces of smartphone software. Access to a particular business application goes through a series of trust hurdles, with riskier applications requiring more security, and users don’t necessarily even know that their logins are being vetted more carefully. Moreover, this all happens in real time, just like the typical multifactor methods.

What are the typical ways that this works? Logins to your account are scored based on a series of metrics, including the role you have (such as a network admin), whether you are connecting from a particular country (just as the credit card companies examine their transactions), and whether your transaction or spending patterns have changed. If a user is doing something that doesn’t match his or her history, it becomes a riskier transaction, and the login can be challenged with an additional authentication measure. Challenging unusual login or transaction patterns creates a barrier that a hacker or fraudster cannot easily circumvent, while not doing the customer the disservice of demanding such authentication in a blanket manner.
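
To make this concrete, here is a minimal sketch of that kind of scoring in Python. The metrics, weights and threshold are purely illustrative assumptions of mine, not any vendor’s actual model:

```python
from dataclasses import dataclass

@dataclass
class LoginContext:
    role: str                  # e.g. "network_admin" vs. an ordinary "employee"
    country: str               # country resolved from the source IP address
    usual_countries: set       # countries this user normally logs in from
    amount: float              # value of the transaction being attempted
    usual_max_amount: float    # largest amount seen in this user's history

def risk_score(ctx: LoginContext) -> int:
    """Add up simple risk points; a higher score means a riskier login."""
    score = 0
    if ctx.role == "network_admin":
        score += 30                          # privileged roles carry more risk
    if ctx.country not in ctx.usual_countries:
        score += 40                          # unfamiliar country of origin
    if ctx.amount > 2 * ctx.usual_max_amount:
        score += 30                          # out-of-character spending pattern
    return score

def requires_step_up(ctx: LoginContext, threshold: int = 50) -> bool:
    """Challenge the login with an extra factor only when the score crosses the threshold."""
    return risk_score(ctx) >= threshold

# A routine login sails through; an unusual one gets an extra challenge.
routine = LoginContext("employee", "US", {"US"}, 100.0, 500.0)
unusual = LoginContext("network_admin", "RO", {"US"}, 2500.0, 500.0)
print(requires_step_up(routine))   # False
print(requires_step_up(unusual))   # True
```

In a real product the metrics and weights would be tuned continuously against each user’s history; the point is simply that the extra factor only appears when the risk warrants it.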

Or you could have a system that detects improbable geographic jumps across a series of logins (such as one from a China-based IP address followed by another from Canada a few minutes later).
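
A geo-velocity check like that takes only a few lines. Here is a hedged sketch: the haversine distance calculation is standard, but the 900 km/h cutoff (roughly airliner speed) is my own illustrative assumption:

```python
from math import radians, sin, cos, asin, sqrt

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, via the haversine formula."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))          # Earth radius of ~6371 km

def impossible_travel(prev_login, new_login, max_kmh=900):
    """Flag a pair of logins whose implied travel speed exceeds an airliner's."""
    (lat1, lon1, t1), (lat2, lon2, t2) = prev_login, new_login
    hours = max((t2 - t1) / 3600.0, 1e-6)    # timestamps are in seconds
    return distance_km(lat1, lon1, lat2, lon2) / hours > max_kmh

# A login from Shanghai followed ten minutes later by one from Toronto gets flagged.
print(impossible_travel((31.2, 121.5, 0), (43.7, -79.4, 600)))   # True
```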

Firewalls and intrusion prevention products have had similar step-up, risk-based rules for years to analyze and block particular network behavior. But now a number of vendors are building risk-based authentication into their security tools, including Symantec’s VIP service, Vasco, RSA, SecureAuth and CA. Expect to see more of them in the near future as the notion gains traction. I have begun to review these tools on SearchSecurity.com for a series on multifactor authentication.

Finally, if you are interested in having me write or speak on this topic, let me know.