SearchSecurity.com: A closer look at ‘good enough’ security

As calls for breach accountability across industries grow louder, and the government introduces new cybersecurity initiatives, frustrated security experts say change will only occur when lawsuits from shareholders hold C-level executives and boardrooms accountable for lax security practices.

While agreement on what “good enough security” entails is hard to come by, chief information security officers can take action to mitigate the security and risk tradeoffs that result from business decisions and make their organizations less vulnerable to threats.

You can read my article for SearchSecurity here.

If you own a Lenovo PC, read this ASAP!

Lenovo has been shipping its PCs with built-in malware that reaches a new level of insidiousness. Before I explain what it does, if you have a Lenovo machine, or know someone who does, go now to this site and see what it says.

What is going on? It turns out that Lenovo, either by design or by sheer stupidity, has included software from a company called Superfish that installs its own root certificate. Now, if you aren’t a computer expert, this is probably meaningless to you. So let me break it down. With this Superfish certificate, every site that you visit in your browser using the HTTPS protocol is subject to being exploited by some bad guys. Chances are it won’t happen to you, but the exposure is there.

In any case, you want to remove this thing pronto. Here are the instructions from Lenovo.

Back in those innocent days of the early Web, we used to say “add the S for security” when you were browsing. That S forces an encrypted connection between you and the website you are visiting, so your traffic over the Internet can’t be captured and exploited.

But having a bad certificate turns this completely around: with it, an attacker can decrypt this traffic; indeed, they can manipulate the web browsing session in such a way that you might not even realize you are going to ThievesRUs.com instead of your trusted BankofWhatever.com. While no one has yet reported that this has happened, it is only a matter of time. There is a great article explaining this exploit on ArsTechnica here.
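To make this concrete, here is a minimal Python sketch (my own illustration, not anything from Lenovo or Superfish) of what an HTTPS client actually checks when you connect, using only the standard ssl module and example.com as a stand-in for your bank. The connection is trusted only if the server’s certificate chains back to a root in your machine’s trust store, which is exactly why planting a rogue root like Superfish’s lets a man-in-the-middle pass the very same check.

```python
import socket
import ssl

# Minimal sketch of what an HTTPS client verifies; example.com is just a
# stand-in host, the same logic applies to your bank or webmail provider.
hostname = "example.com"

# create_default_context() loads the roots your operating system trusts,
# which on an affected Lenovo machine would include the Superfish root.
context = ssl.create_default_context()

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()
        # Verification only requires the chain to end at *some* trusted
        # root, so a forged certificate signed by a rogue root passes too.
        issuer = dict(pair for rdn in cert["issuer"] for pair in rdn)
        print("Verified connection; certificate issued by:",
              issuer.get("organizationName", "unknown"))
```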

Certificates are the basic underpinnings of secure infrastructure; they are used in numerous other situations where you want to make sure that someone is who they say they are. By using a bad certificate, such as the one from Superfish, you throw all that infrastructure into disarray.

To get an idea of how many certs you use in your daily life, open up your browser’s preferences page and click over to the Certificates section, where you will see dozens if not hundreds of suppliers (see screenshot at left). Do you really trust all of them? You have probably never heard of most of them. On my list, there are certs from the governments of Japan and China, among hundreds of others. You really have no way of knowing which of these are fishy, or even superfishy.
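If you would rather check from a script than click through browser menus, here is a quick sketch along the same lines that asks Python for the root certificates your system exposes and flags any that mention Superfish. It uses only the standard ssl module; how many roots it actually sees depends on your operating system and how Python was built, so treat it as a rough sanity check rather than a definitive audit.

```python
import ssl

# Rough check of the trusted root certificates this system exposes to
# Python. Coverage varies by operating system and Python build.
ctx = ssl.create_default_context()
roots = ctx.get_ca_certs()

suspicious = []
for cert in roots:
    # 'subject' is a tuple of relative distinguished names, each of which
    # is itself a tuple of (field, value) pairs.
    subject = {field: value
               for rdn in cert.get("subject", ())
               for field, value in rdn}
    name = (subject.get("commonName", "") + " " +
            subject.get("organizationName", "")).strip()
    if "superfish" in name.lower():
        suspicious.append(name)

print(len(roots), "trusted roots visible to Python")
print("Superfish entries:", suspicious if suspicious else "none found")
```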

This isn’t the first time that bad certs have popped up on the Intertubes. There have been other situations where malware authors have signed their code with legit certs, which kind of defeats their whole purpose. Back in 2012, Microsoft certificates were used to sign the Flame malware; the software vendor had to issue emergency instructions on how to revoke the certs. And in 2011, the Comodo Group issued bogus certs that could have compromised some common Web destinations.

It is getting harder to keep track of stuff and stay ahead of the bad guys, even when they don’t have the auspices of a major PC manufacturer behind them.

Check your Google Account security settings now, please

I feel almost embarrassed writing this column, but I figured if it can happen to me, it can happen to you. Google is running a cute promotion this week where you can tack on another 2 GB of storage to your account. The only thing you have to do is run through a series of security settings on your account. It will take about two minutes at the most. Go to this page to read more, and then navigate over to your account. Go ahead, I will wait until you come back.

Nice, huh? Well, not so nice for moi. I found out that someone using a Windows computer in Kentucky had signed in as me last week. I quickly changed my password, and then forced everyone else to log out of my account. Borderline creepy, right? What happened? I have no idea. I guess that is one of the reasons why the promotion is so useful to them: they can tighten up everyone’s credentials quickly, and the extra storage costs them close to nothing.

Part of the security assessment is to see what connected apps are signing into your account. It is always a good idea to bring up the corresponding screens in other Web services to make sure that you know what is happening. I call this an “app audit” and I mention how to do it for LinkedIn, Twitter and Facebook (but curiously, forgot about Google) in this post from several years ago. That will take you another few minutes.

Please, for your own protection, run through these checks now.

The cyber femme fatales in the Syrian civil war

It is almost a cliche, but the femme fatale — the allure of a female spy who gets the lonely male soldier to give up military secrets — is still very much alive and well in the current Syrian civil war. But instead of using actual people, today’s take on Mata Hari has more to do with social networks, phishing, and clever use of a variety of keylogging programs.

A report this week by FireEye has tracked this trend in Syria and makes for interesting reading. Hackers operated between November 2013 and January 2014 to collect battle plans and specific operational details from the opposition forces’ computers. The haul was substantial: FireEye found more than seven GB of data spanning thousands of Skype conversations and 12,000 contact records. So much was taken from the soldiers and insurgents that FireEye was able to assemble profiles of several of them for its report.


What is astounding is how easily the various Syrians fell for some pretty old-fashioned social engineering. Skype contact requests would be sent to the fighters from unknown and seemingly female correspondents. Once they were engaged in text chats, the hackers would ask what kind of computer they were on, and then send them a “better photo” of themselves that, surprise, surprise, turned out to contain malware. Then the data extraction began, and they moved on to others in the target’s contacts.

It isn’t just that loose lips sink ships. It is that lonely guys are so easily manipulated. Back in WWII days, we needed a lot more human infrastructure to collect data and track enemy movements. Nowadays, all it takes is a female avatar, some sympathetic IM patter and a few pieces of code, and the gigabytes roll in.

The hackers were thorough. FireEye found “whole sets of files pertaining to upcoming large-scale military operations. These included correspondence, rosters, annotated satellite images, battle maps, orders of battle, geographic coordinates for attacks, and lists of weapons from a range of fighting groups.” In addition to using the fake female avatars on Facebook and Skype, they also set up a bogus pro-opposition website that would infect visitors with malware. The whole effort was aided by the fact that soldiers often shared computers, so once an infection landed on one PC it could collect multiple identities quite easily.

Finally, the hackers focused on Android phones as well as Windows PCs and had malware created for both environments.

Figuring out who was behind this massive data collection effort isn’t easy, of course. FireEye thinks there are ties to Lebanese or other pro-Syrian groups, and it has tracked the command servers to locations outside of Syria. That could be almost anyone these days. Still, the report is quite chilling in showing what a determined hacking group can accomplish during wartime.

Party like the Internet is 1994

BMW has this very funny ad where Katie Couric and Bryant Gumbel discuss the makeup of an Internet email address back in 1994.

To say that the Internet wasn’t mainstream enough for the Today show hosts is an understatement. Back then, few people had any idea of what it was, how email was used, or what the punctuation in the email address signified. Watching the Today show this morning, I could see that things certainly have changed: live Tweeting of the snowstorm, Carson Daly and his magic touch screen surfing social media, and even some of the hosts reading off their laptops on air. We have come a long way.

But let’s go back to what we were all doing 20-some years ago. Back then it was hard to get online. We had dial-up modems: no Wifi, no broadband, no iPhones. PCs had PCMCIA cards, the precursor to USB ports. Other than Unix, none of the desktop operating systems came with built-in support for IP protocols.

Now it is hard to find a computer that includes a dial-up modem, or one without any Wifi support. Even the last desktop PC that I bought came with a Wifi adapter.

The communications software was crude and finicky: it was hard to run connections that supported both Ethernet (or Token Ring, remember that?) on the local office network and then switch to remote IP connections when you went on the road. I was using Fetch for file transfer (I still like that program, it is so dirt simple to use) and Mosaic, the first widely used Web browser, which came out of the Illinois campus where a young Marc Andreessen was studying before he made it rich at Netscape. Companies such as Netmanage and Spry were packaging all the various programs that you needed to get online with an “Internet in a Box.” This product was a bit different from the one described in “The IT Crowd” TV show a few years later.

Back in 1994, I had a column in Infoworld where I mentioned that configuring TCP/IP was “an exercise in learning Greek taught by an Italian.” My frustration was high after trying a series of products, each of which took several days’ worth of tech support calls and testing various configurations of software and OS drivers to make them work. Remember NDIS and the protocol.ini file? You had to be familiar with them if you did a lot of communicating, because that is where you had to debug your DOS and early Windows communications settings. And when these products did work, it was often only with particular modems.

Finding an Internet service provider wasn’t easy. There were a few hardy souls who tried to keep track of which providers offered service, through a combination of mailing lists and other online documents. Of course, the Web was just getting started. Getting a dot com domain name was free – you merely requested one and a few seconds later it was yours. Before I had strom.com, I was using Radiomail and MCI Mail as two options for Internet-accessible email addresses.

Indeed, mobility often meant using different modems with different software tools. When I traveled, I took four programs with me: cc:Mail (to correspond with my readers and to file my columns with the editors), Smartcom (to pick up messages on MCI Mail and other services that I connected to from time to time), Eudora (for reading my Internet mail), and Versaterm AdminSlip (for connecting to my Internet service provider). That was a lot of gear and software to keep track of.

With all of these modems, if you can imagine, the telephone network was our primary means of connection when we were on the road. Of course, back then we were paying for long distance phone calls, and we tried to minimize this by finding “modem pools” to dial into that were a local call away. Back then I was paying $100 a month for dial-up! Then ISDN came along and I was paying $100 for 128 kbps! Now I pay $40 a month for broadband access. I guess things have improved somewhat.

The uneven history of collaboration

Those of you who have been loyal readers of my work over the years will recognize a consistent theme: how we use computers to collaborate on projects. I came to work this morning to hear several complaints via IM from a friend whose evil co-worker was berating her about her ideas. “What a way to say good morning,” said my friend to me. That got me thinking.

We have come a long way on the route towards better collaboration, with fancier tools, higher-speed networks, and near ubiquitous Internet access, but the people part of the equation is still somewhat lacking. Let’s take a trip down memory lane and I’ll point out some of the things that I have seen over the years.

My most recent column on collaboration describes the British crypto program Colossus and how thousands of people worked over several years to decode German dispatches during World War II. That was collaboration with a capital C. The equipment was barely functional, and the workflows to get the job done were brutally complex. But it worked, because everyone pulled together and did their part and respected the individual roles that each had to play. So this is noteworthy because of the time period and the level of actual collaboration that took place.

But Colossus happened with everyone working in very close quarters, dictated by the wartime requirements and how the machine was constructed. In the modern era, we are more concerned about incorporating collaboration when we have distributed work teams. I had to deal with this early on, when in 1990 I founded Network Computing magazine and we hired the best editors from all around the country. Back then it was a challenge, especially as we had to build trust and foster a sense of purpose among people who had never met face-to-face.

This is still an issue today. In this recent post for WPCurve, Don Virgillito talks about what he has seen from the best collaborative teams that were distributed geographically. He suggests that you should “consider existing remote work experience as an essential skill.” And you’ll also “have to grow as a team to continue being a team, which would involve meetings, team training, retreats to boost morale, and real-life meetings when possible to build long lasting relationships.” That is exactly what we did back in the early 1990s, and it worked so well that the magazine continued using some of these practices long after I left its helm.

As far back as 2008, I was writing about this people side of collaboration:

So, as PC processors get faster, disks get bigger, and our social networks get larger, we still don’t have the perfect collaboration solution. We still think of the data on our hard disks as our own, not our employer’s. Sharing is still for sissies. Until that attitude changes, the headphones will stay firmly stuck in our ears, blocking out the rest of the world around us.

How many of us are still stuck in a similar situation? And as more “bullpen” style office configurations are created, this only worsens the scenario.

If we look more closely at the people relationships than at the technology, we’ll gain a lot more insight into how collaboration happens. That was the subject of this post from 2009, where a survey of collaboration habits found that a key staffer way down the food chain was the glue that held things together. He worked closely with all the different stakeholders and pushed for better collaboration between departments, because the bosses of each department didn’t know how to talk to each other. Another part of this survey found that companies should focus on their least communicative employees: if they can make them even slightly better communicators, they will greatly improve productivity.

Let’s turn to the tools used for collaboration.

In 1990, we didn’t have the Internet, and setting up email meant running our own system that dialed into each of our remote offices to route messages around. IM and cell phones didn’t exist yet, so remote editors had to find payphones and RJ11 jacks for their modems. We spent a lot of time sending large graphics files around on our network because there wasn’t any other way to share them. In many ways, it was the dark ages of collaboration tools.

But as email became successful, it also brought overuse, something that we still wrestle with today. By 2009 we were focused more on cutting down on back-and-forth emails and improving remote access. In an article that I wrote for an IT publication at the time, I talked about the various tools that were available, some of which are still being used today. (A chart that I prepared for my website is woefully outdated, but it will give you a picture of what I was looking at.)

And in this story that I wrote for Baseline magazine in 2011, I looked at other ways that collaboration happens, using screen sharing, document management and workflow management tools. For example, at one after-market auto parts chain, installers can get the schematics of the car they are working on at the moment they need them, thanks to a collaboration tool that integrates into their workflow.

Since then, we have all sorts of fancy stuff to use. A review last year for Computerworld looks at three of them: Glip, Slingshot, and Flow. While none of the three is perfect, we have come a long way since the early pre-Web days. And if you click on the link above for the WPCurve article, you will see dozens of other suggestions of what the latest startups use for their collaboration technologies.

So yes, the history of collaboration has been one of fits and starts, making some of the same mistakes over again and not really considering the historical context. I hope you find the journey as interesting as I have.

Avoiding the IT rust belt

Living in the Midwest, I think about what life is like in the Rust Belt, where aging manufacturing companies have come and gone. When I first moved to St. Louis, all three of the domestic car makers had plants here; now we are down to just one of them. I was talking to a friend of mine who mentioned that she is seeing something similar: some enterprises are still stuck with their “rust belt” version of IT services and have yet to make their move. Whether their businesses will suffer the same fate as some of the manufacturing industry is hard to say.

So I thought I would talk to another friend of mine, who is the IT director at a Midwest steel mill, and get his perspective. He has a small IT shop of six people, but they are doing some interesting things to keep their plant competitive. “We can’t really innovate on our product line, but we have to do things from an IT perspective to serve our customers better and become more productive and cost effective,” he told me. Here is how he does it:

Get rid of aging equipment. While he still has a few Windows XP machines around, most of his IT gear is the latest and greatest. He is in the process of upgrading his Cisco network infrastructure, for example. Some of his most ancient computers that run his plant are connected to logic controllers, just because they don’t require much horsepower and they are on the plant floor where the environment is brutal. “Eventually, we will replace these when they break or migrate one of our older office PCs to there.”

Beware of legacy systems. “We were able to leapfrog other steel companies with our technology, because when we started the company we didn’t have many legacy systems.” Now he has three developers who build custom apps that are used for handling their steel inventory and operations, along with a customer portal. For example, his team built a custom automated scheduling system when one of the two people in that department retired. Now it takes the remaining staffer just part of the day to do the scheduling, and they have cut wasted product and improved their steel yields tremendously, saving them money too.

Make use of the cloud and virtualization where it makes sense. “When I started with the company, we had a single PC server that was running everything. Now we have virtual servers and are looking at converged storage infrastructure too.” One of his first decisions was to employ Google Apps for their enterprise email. “That was a slam-dunk and one of the best decisions I ever made.” He is looking at acquiring other cloud-based apps as well.

Vendor flexibility. Part of moving to the cloud means making use of services that don’t require vendor lock-in. “I want to have options to migrate away from them if things don’t work out. That is why I haven’t gotten involved in SAP or Salesforce yet; there isn’t any easy way to walk away from them.”

Segregate production and office networks. As the stories from the German steel mill in December can testify, having totally separate networks for the office and the mill is critical. Hackers used an infected email to literally trash one of its mills. “We have different networks and VLANs so this won’t happen to us.”

Your own rust belt conditions might differ; for example, you might still be using Systems Network Architecture on your mainframes, or connecting via Frame Relay networks. So it might be a good time to reconsider these and other technologies, and start investing in your future. (The building pictured above is an old automotive plant that has been renovated into condos, by the way.)

Network World: Uptime simplifies system and server monitoring

Server and systems management tools have long been too expensive and too complex to actually use without spending thousands of dollars on consultants. Uptime Software has been trying to change that with a product that can be installed and useful within an hour. I tested the latest version (7.3.1) on a small network of Windows physical and virtual machines in a review that was published on Network World today.

The screenshot above shows an example of Uptime’s dashboard, which is filled with all sorts of actionable information and is completely customizable.

Masergy blog: How Cloud Computing Changes the Role of IT

We are experiencing a significant shift in how corporate IT departments deliver services to the business in this era of cloud computing. IT staffs are evolving beyond installing servers and software towards provisioning services, negotiating vendor relationships and collaborating with business users on application service delivery requirements.

I have some thoughts on what this means for IT staffs, skills and the resulting networking mix for Masergy, a broadband communications management vendor.

You can read my post on their blog here.

Network World: Six Unified Threat Management Units Reviewed

The world of unified threat management appliances continues to evolve. In my 2013 UTM review, I looked at units from Check Point Software (which topped the ratings), Dell/Sonicwall, Elitecore Technologies’ Cyberoam, Fortinet, Juniper Networks, Kerio Technologies, Sophos, and Watchguard Technologies.

This year I reviewed the Calyptix AccessEnforcer AE800, Check Point Software’s 620, Dell/Sonicwall’s NSA 220 Wireless-N, Fortinet’s FortiWiFi-92D, Sophos’ UTM SG125 and Watchguard Technologies’ Firebox T10-W (pictured below). With the exception of Calyptix, the other five are all in Gartner’s “leader” quadrant of their latest UTM report. We contacted other vendors including Cisco, Juniper and Netgear, but they declined to participate. In addition, Sophos has purchased the Cyberoam line and will combine its features with its existing UTM products sometime next year.

Overall, the market has evolved slowly rather than undergone any big revolutionary changes. Products are getting better in terms of features and price/performance. All six of these units will do fine for securing small offices of 25 people.

You can read the review here, check out a slideshow of the screenshots of typical features here, and watch a short (two minute) screencast video summarizing the major points of the review here.