SiliconANGLE: The changing economics of open-source software

The world of open-source software is about to go through another tectonic change. But unlike earlier changes brought about by corporate acquisitions, this time it’s thanks to the growing wave of tech layoffs. The layoffs will certainly change the balance of power between large and small software vendors and between free and commercial software versions, and the role played by OSS in enterprise software applications could change as well.

In this post for SiliconANGLE, I talk about why these changes are important and what enterprise software managers should take away from the situation.

 

25 years of ecommerce

In today’s post, I look back on the development of ecommerce and my role in covering this technology. I was recently reminded of this history after writing last week about PayPal, which motivated one of you to recall events from the early 2000s, back when the “internet bubble” was rising and then bursting.

I last took a long look back at ecommerce in 2014 with this blog post. In it I highlighted a series of other works:

While the web came of age in the 1990s, it took a while for ecommerce to get into gear. The technologies were bare-bones: back then, you could learn basic HTML coding in a couple of days and easily put together a static series of web pages. The key operative words in that sentence were “static” and “basic.” The 1990s era of HTML was waiting for the language to catch up with what we wanted to do with it, and eventually the standards process got there. The real stumbling block was making a site dynamic: supporting online inventories that were accurate, building checkout pages that were secure, and working with software interfaces that were crude and simplistic. All of that required other tools outside of HTML, which is somewhat ironic. Now if you look at the code behind the average webpage, it is almost impossible to parse its logic at first glance.

Yet, here we are today with ecommerce being a very sophisticated beast. HTML is no longer as important as the accompanying and supporting constellation of web programming languages and development frameworks, which require lots of study to use competently. Connecting various databases to a web front-end is both easier and more complex: the APIs are richer, but implementing them requires a deft touch to pull off successfully. Payment processing has numerous vendors that occupy sub-markets. (Stripe, Bill.com, and Klarna are three examples of companies that are all involved in payments but have taken different pieces of the market.)

You might not have heard about Klarna: they are one of more than a dozen “buy now, pay later” services that pop up at checkout. No purchase is too small to be spread across a payment plan. Back in the pre-internet times, we had layaway plans that had one important aspect: you didn’t get the item until you completely paid for it. Now items arrive in days, but attached to a stream of loan payments stretching out several months. The downside is that there are potential late fees and 30% annualized interest charges too.
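To make the math concrete, here is a back-of-the-envelope Python sketch of what a 30% annualized rate does to a small purchase. The $100 price and six-month term are made-up numbers for illustration, not any particular BNPL vendor’s actual plan, and the formula is the standard amortized-loan payment, which real services may not use exactly:

```python
def monthly_payment(principal, annual_rate, months):
    """Standard amortized-loan payment: principal * r / (1 - (1+r)^-n),
    where r is the monthly rate. Falls back to simple division at 0%."""
    r = annual_rate / 12
    if r == 0:
        return principal / months
    return principal * r / (1 - (1 + r) ** -months)

price = 100.00                         # hypothetical purchase
pay = monthly_payment(price, 0.30, 6)  # 30% APR, six monthly payments
total = pay * 6
print(f"monthly ${pay:.2f}, total ${total:.2f}, "
      f"interest ${total - price:.2f}")
```

Even on a trivial purchase, the buyer hands over several dollars of interest, and that is before any late fees are tacked on.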

And then there are Amazon and Google. The former has made online shopping both easier and more complex. It used to be both free and easy to return merchandise purchased on Amazon. Now it is neither. If you don’t pay attention when you are purchasing something, you could end up buying from one of their contract sellers, which complicates the returns process. And the cost of Prime continues to climb.

Google’s Lens technology has also transformed online shopping. If you have a picture of what you want to buy, you can quickly view what websites are selling the product with a couple of clicks on any Android or iPhone. My interior designer wife uses this tech all the time for her clients.

Before I go, I want to mention that Cris Thomas, known by his hacker handle Space Rogue, has a new book out that chronicles his rise in the infosec world, including his time as one of the founders of the hacking collective L0pht. Its early days were wild by today’s standards: the members would often prowl the streets of Boston and dumpster dive in search of used computer parts. They would then clean them up and sell them at the monthly MIT electronics flea market. Dead hard drives were one of their specialties — “guaranteed to be dead or your money back if you could get them working.” None of their customers took them up on this offer, however. There are other chapters about the purchase of L0pht by @stake and Thomas’ eventual firing from the company, then taking eight years to get a college degree at age 40, along with the temporary rebirth of the Hacker News Network and going to work for Tenable and now at IBM. I review the book in this post, and highly recommend it if you are looking to relive those early infosec days.

Is it time to consider web v3?

I am not so sure. For those of you keeping score at home, web v1 was the early days when we had web servers delivering static pages of mostly text, starting in the early 1990s and lasting until about 2003 or 2004. The next version was the dynamic web where we created our own content, and where we freely gave away our privacy and data so that we could post cat memes and dance videos to the now-giants of Facebook/Apple/Amazon/Netflix/Google, otherwise called FAANG. (Facebook and Google have renamed themselves, but the acronym has stuck.)

But now it is time for a new iteration, and v3 attempts to create a more egalitarian internet, protected by encrypted tokens that can keep everyone’s identity and data private and secure. Say what? At least, that is the plan.

Whether or not you agree with this vision, it has largely been unrealized. Yes, there is a Web 3 Foundation, and you can see at that link a very complex tech stack that will consist of multiple protocol layers, much of it still TBD. For those of us who cut our teeth on HTML, CSS, and HTTPS, these protocols are pretty much unknown.

Scott Carey, writing in InfoWorld, sums things up this way: “To access most Web3 applications, users will need a crypto wallet, most likely a new browser, an understanding of a whole new world of terminology, and a willingness to pay the volatile gas fees required to perform actions on the Ethereum blockchain. Those are significant barriers to entry for the average internet user.” I’ll say. If you have never had a crypto wallet, never used Rust or Solidity, and don’t know what a gas fee is, you need to go to web3 study hall. You may not understand the tech behind it — I don’t fully understand all of these items — but that is the point. The decentralized web is being built on a series of protocols, and there are a lot of gaps.

But let’s put aside all the new tech and answer a few basic questions.

What is the role of clients and servers? One of the first things you need to understand is the difference between clients and servers. In the web1 and web2 worlds, there were browsers, and there were various servers (web, database, applications, payments, and so forth). It was a pretty clean separation of powers. Some of us were happy to never touch any kind of server, something that leads off Moxie Marlinspike’s “first impressions” blog post. I don’t agree with this position. I have been running my own web server for more than 25 years, and I wouldn’t have it any other way. I like being “master of my domain,” which means more than just running my own server: it includes being able to move it from one place to another across the internet, which I had to do last year when my ISP went out of business.

I think what Moxie meant to say is that most people don’t like configuring and maintaining their own servers. But that is why we have ISPs.

But look at the tech stack that we are promised with web3: that is a lot of tech to deal with. If we had resistance to configuring HTML and HTTP, imagine how much pain we will face when all this new stuff comes to fruition.

Lance Ulanoff writes that the vision for web3 is “more a combination of edgy new technology and a reaction to centralized control.” He goes on to discuss some of the early descriptions that predated the web3 term in the popular lexicon, such as the semantic web that was tossed around back in 2006. He describes web3 as the point when we can control our interactions and have a universal identity across all systems. That’s nice, but so much of the current vision for web3 doesn’t really fill in the blanks about how this control will happen or how we can create these universal identities. Moxie says that we need to use cryptography rather than infrastructure to distribute trust. I completely agree. Ignoring the trust issues is dangerous — look how long it has taken us to resolve email trust issues, and those protocols were created decades ago.

But how this infrastructure plays out brings us to my next question:

What is the role of peer-to-peer (p2p) technology? Remember Napster and peer file sharing of music and videos? Back then (roughly 2000-2005), everyone was digitizing their CDs, or stealing music from others, or both. Napster and LimeWire and the other apps created peer file servers on your hard disk, and you then shared your digitized content with the world. Sharing wasn’t caring, and lawsuits ensued. Now we just pay Netflix et al. and stream the content when we want to listen or watch something. Who needs possession of the actual bits?

But see what has happened here: we went from this idealized p2p world to today, where just a few centralized businesses (like FAANG) run the show. This could be the fate of web3, and all this talk about a decentralized, egalitarian web could fall apart. Today’s crypto/NFT world depends on just a few centralized service providers, and the distinction between client and server in a fully decentralized p2p blockchain isn’t all that clear, as Ethereum co-founder Vitalik Buterin points out. He says that there are various gaps in web3 which are bridged by API suppliers such as Infura and OpenSea. The issue that Moxie has is that many NFT and crypto advocates have just accepted the role of these API vendors without much thought about the implications. Moxie is worried that these vendors have a lot of control over things, and that there is the potential for the decentralized web3 to turn into a less efficient and less private version of today’s internet. Think of one nightmare scenario, where Facebook (or one of the other giants) has its own web3 servers, APIs, and alt-coins. The horror!

But you think crypto is cool, and there is money to be made. Now we get to the real meat of the matter. Forget about a more equal internet and singing kumbaya off into the sunset. Let’s talk about how high the various alt-coins are trading – or not, depending on when you entered the market. Remember the internet bubble of 1999-2000, when domains were being bought and sold on little more than a pitch deck? That was Gold Rush v1, and all you had to do to participate was buy a domain and flip it. (I am guilty of this, but I didn’t buy my domain to flip it. I just got lucky.) You could argue that all you need now is to hold a basket of crypto coins — as some of you have done. But look at all the knowledge you have to collect to participate in this gold rush. Nevertheless, there is some cool stuff being built, as this blogger documents. His post rebuts a few of Moxie’s complaints while making Moxie’s point that this is very early stuff.

So go cautiously into the web3 night, and good luck learning all the requisite tech that will be needed. And for those of you waiting for the decentralized and private web of the future, you might want to spend some time now on the basic blocking and tackling: eliminating duplicate passwords and implementing MFA logins, because you’ll need something like them to get on the blockchain train. Or at least to protect all those crypto funds in your wallet from being lost or stolen.

Network Solutions blog: How to Prevent a Data Leak within VPN Environments

It has been one of the first things that most remote workers learn: use a Virtual Private Network (VPN) to connect your laptop when you aren’t in the office. And given that many of us haven’t set foot in our offices for months, using a VPN is now ingrained in our daily computer usage. But as VPNs have gotten popular, they are also getting harder to keep secure. Various reports document that private data from 20M users has been leaked because of poorly implemented VPNs, including email passwords and home addresses.

In this post for Network Solutions’ blog, I discuss ways to prevent data leaks from happening and to better secure your VPNs, along with links to the most trusted reviewers of these products.

Network Solutions blog: How to defend against web skimming attacks

Your eCommerce website is vulnerable to a variety of threats known collectively as web skimming. The hackers behind these threats are getting better at penetrating your site and installing malware to steal your customers’ money and private information. And web skimming is getting more popular, with both the frequency of attacks and the size of recorded data breaches rising. In this post for Network Solutions’ blog, I describe how these attacks work, reference a few of the more newsworthy ones, and provide a bunch of tips on how to prevent your own eCommerce site from becoming compromised.

 

The dangers of DreamHost and Go Daddy hosting

If you host your website on GoDaddy, DreamHost, Bluehost, HostGator, OVH or iPage, this blog post is for you. Chances are your site could be vulnerable to a potential bug or has been purposely infected with something that you probably didn’t know about. Given that millions of websites are involved, this is a moderately big deal.

It used to be that finding a hosting provider was a matter of price and reliability. Now you have to check to see if the vendor actually knows what they are doing. In the past couple of days, I have seen stories such as this one about GoDaddy’s web hosting:

 

And then there is this post, which talks about the other hosting vendors:

Let’s take them one at a time. The GoDaddy issue has to do with their Real User Metrics module, which is used to track traffic to your site. In theory it is a good idea: who doesn’t like more metrics? However, researcher Igor Kromin, who wrote the post, found that the JavaScript module GoDaddy uses is so poorly written that it measurably slowed down his site’s performance. Before he published his findings, all GoDaddy hosting customers had these metrics enabled by default. Now GoDaddy has turned them off by default and is looking at future improvements. Score one for progress.

Why is this a big deal? Supply-chain attacks happen all the time by inserting small snippets of JavaScript code on your pages. It is hard enough to find their origins as it is, without your hosting provider adding additional burdens as part of its services. I wrote about this issue here.

If you use GoDaddy hosting, you should go to your cPanel hosting portal, click on the small three dots at the top of the page (as shown above), click “help us” and ensure you have opted out.

Okay, moving on to the second article, about other hosting providers’ scripting vulnerabilities. Paulos Yibelo looked at several providers and found multiple issues that differed among them. The issues involved cross-site scripting, cross-site request forgery, man-in-the-middle problems, potential account takeovers and bypass attack vulnerabilities. The list is depressingly long, and Yibelo’s descriptions show each provider’s problems. “All of them are easily hacked,” he wrote. But what was more instructive were the responses he got from each hosting vendor. He also mentions that Bluehost terminated his account, presumably because they saw he was up to no good. “Good job, but a little too late,” he wrote.

Most of the providers were very responsive when reporters contacted them and said these issues have now been fixed. OVH hasn’t yet responded.

So the moral of the story? Don’t assume your provider knows everything, or even anything, about hosting your site, and be on the lookout for similar research. Find a smaller provider that can give you better customer service (I have been using EMWD.com for years and can’t recommend them enough). If you don’t know what some of these scripting attacks are or how they work, go on over to OWASP.org and educate yourself about their basics.

Watch that browser add-on

This is a story about how hard it is for normal folks to keep their computers secure. It is a depressing but instructive one. Most of us take for granted that when we bring up our web browser and go to a particular site, we are safe and what we see is malware-free. That isn’t always the case, however, and it is getting harder to be sure.

Many of you make use of browser add-ons for various things: Right now I am running a bunch of them from Google, to view online documents and launch apps. One extension that I rely on is my password manager. I used to have a lot of other ones but found that after the initial excitement (or whatever you want to call it, I know I live a sheltered life) wears off, I don’t really take advantage of them.

So my story today is about an add-on called Web Security. It is oddly named, because it does anything but what it says. And this is the challenge for all of us: many add-ons and smartphone apps have misleading names, because their authors want you to think they are benign. Mozilla initially recommended this add-on earlier this month. Then they started getting complaints from users and security researchers, and it turns out they had made a big mistake. Web Security tracks what you are doing as you browse the Internet, and could compromise your computer. When Mozilla add-on analyst (that is his real job) Rob Wu looked into this further, he found some very nasty behavior that made it clear to him that the add-on was hiding malicious code. Mozilla basically turned off the extension for the hundreds of thousands of users that had installed it and would have been vulnerable. This story on Bleeping Computer provides more details.

In the process of researching this one add-on’s behavior, Wu found 22 other add-ons that did something similar, and they were also disabled and removed from the add-on store. More than half a million Firefox users had at least one of these add-ons installed.

So what can we learn from this tale of woe? One sobering thought is that even security experts have trouble identifying badly behaving programs. Granted, this one was found and fixed quickly. But it does give me (and probably you too) pause.

Here are some suggestions. First off, take a look at your extensions. Each browser does this slightly differently; Cisco has a great post here to help you track them down in Chrome and IE v11. Make sure you don’t have anything more than you really need to get your work done. Second, keep your browser version updated. Most modern browsers will warn you when it is time for an update, and don’t tarry when you see that warning. Finally, be aware of anything odd when you bring up a web page: look closely at the URL and any popups that are displayed. Granted, this can get tedious, but you are ultimately safer.
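As a rough companion to that first suggestion, a short Python script can inventory installed extensions by reading each one’s manifest.json from the browser profile. The Chrome-on-Linux path below is an assumption; the directory layout varies by browser, OS, and profile:

```python
import json
import pathlib

def list_extensions(ext_root):
    """Walk an extensions directory and collect each add-on's declared
    name and version from its manifest.json file."""
    root = pathlib.Path(ext_root)
    if not root.is_dir():
        return []
    found = []
    for manifest in root.rglob("manifest.json"):
        try:
            data = json.loads(manifest.read_text(encoding="utf-8"))
        except (OSError, json.JSONDecodeError):
            continue  # unreadable or malformed manifest; skip it
        name = data.get("name", "?")
        if not name.startswith("__MSG_"):  # skip localized placeholder names
            found.append((name, data.get("version", "?")))
    return sorted(found)

# Assumed default Chrome profile location on Linux -- adjust for your setup:
ext_dir = pathlib.Path.home() / ".config/google-chrome/Default/Extensions"
for name, version in list_extensions(ext_dir):
    print(name, version)
```

Names declared as "__MSG_..." are locale placeholders; a fuller script would resolve them from each extension’s _locales folder. The point is simply to make your installed list visible so you can prune it.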

A new way to speed up your Internet connection

How often do you comment on how slow the Internet is? Now you have a chance to do something to speed it up. Before I tell you, I have to backtrack a bit.

Most of us don’t give a second thought to the Domain Name System (DNS) or how it works to translate “google.com” into its numerical IP address. But that work behind the scenes can make the difference between having and not having access to your favorite websites. I explain how the DNS works in this article I wrote ten years ago for PC World.
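That behind-the-scenes translation is easy to watch yourself. This minimal Python sketch asks the system’s configured resolver for a hostname’s IPv4 addresses, the same lookup a browser performs before opening a connection; the google.com lookup obviously assumes a working network connection:

```python
import socket

def resolve(hostname):
    """Return the sorted IPv4 addresses the system resolver reports
    for a hostname -- the DNS step that precedes every connection."""
    infos = socket.getaddrinfo(hostname, None, family=socket.AF_INET,
                               type=socket.SOCK_STREAM)
    return sorted({sockaddr[0] for *_, sockaddr in infos})

print(resolve("localhost"))       # usually ['127.0.0.1']
try:
    print(resolve("google.com"))  # requires network access
except socket.gaierror as err:
    print("lookup failed:", err)
```

Which server actually answers that query is exactly what the rest of this piece is about.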

Back when I wrote that article, there was a growing need for providing better DNS services that were more secure and more private than the default one that comes with your broadband provider. But one of the great things about the Internet is that you usually have lots of choices for something that you are trying to do. Don’t like your hosting provider? Nowadays there are hundreds. Want to find a better server for some particular task? Now everything is in the cloud, and you have your choice of clouds. And so forth.

And now there are various ways to get DNS to your little patch of cyberspace, with the introduction of a free service from Cloudflare. If you haven’t heard of them before, Cloudflare has built an impressive collection of Internet infrastructure around the world, to deliver webpages and other content as quickly as possible, no matter where you are and where the website you are trying to reach is located. If you think about that for a moment, you will realize how difficult a job that is. Given the global reach of the Internet, and how many people are trying to block particular pieces of it (think China, Saudi Arabia, and so forth), you begin to see the scope and achievement of what they have done.

I wanted to test the new 1.1.1.1 DNS service, but I didn’t have the time to do a thorough job.  Now Nykolas has done it for me in this post on Medium. He has somewhat of a DNS testing fetish, which is good because he has collected a lot of great information that can help you make a decision to switch to another DNS provider.

There are these five “legacy” DNS providers that have been operating for years:

  • Google 8.8.8.8: Private and unfiltered. The most popular option and, until now, the easiest DNS address to remember. Their IP address was spray-painted on Turkish buildings (as shown above) during one attempt by the Turkish government to block Internet access.
  • OpenDNS 208.67.222.222: Bought by Cisco, they supposedly block malicious domains and offer the option to block adult content.
  • Norton DNS 199.85.126.20: They supposedly block malicious domains and integrate with their Antivirus.
  • Yandex DNS 77.88.8.7: A Russian service that supposedly blocks malicious domains.
  • Comodo DNS 8.26.56.26: They supposedly block malicious domains.

I have used Google, OpenDNS and Comodo over the years in various places and on various pieces of equipment. As an early tester of OpenDNS, I had some problems that I document here on my blog back in 2012.

Then there are the new kids on the block:

  • CleanBrowsing 185.228.168.168: Private and security aware. Supposedly blocks access to adult content.
  • CloudFlare 1.1.1.1: Private and unfiltered, and just recently announced.
  • Quad9 9.9.9.9: Private and security aware. Supposedly blocks access to malicious domains, based in NYC and part of the NYCSecure project.

How do they all stack up? Nykolas put together this handy feature chart, and you can read his post with the details:

As I mentioned earlier, he did a very thorough job testing the DNS providers from around the globe, using VPNs to connect to each service from 17 different locations. He found that all of the providers performed well across North America and Europe, but elsewhere in the world there were differences. Overall, though, CloudFlare was the fastest DNS for 72% of all the locations, with an amazingly low average of 5 ms across the globe. When you think about that figure, it is pretty darn fast: I have seen network latency from one end of my cable network to the other that is many times that.
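If you want a rough feel for resolver latency on your own machine, a few timed lookups will do it. Note that this sketch only measures whatever resolver your system is already configured to use (and later trials benefit from OS caching); pointing it at a specific provider such as 1.1.1.1 would require a dedicated DNS library, which I am not assuming here:

```python
import socket
import statistics
import time

def median_lookup_ms(hostname, trials=5):
    """Median wall-clock time, in milliseconds, for repeated lookups
    against the system's configured resolver."""
    samples = []
    for _ in range(trials):
        start = time.perf_counter()
        socket.getaddrinfo(hostname, None)
        samples.append((time.perf_counter() - start) * 1000)
    return statistics.median(samples)

print(f"localhost: {median_lookup_ms('localhost'):.2f} ms")
```

Run it against a few real hostnames before and after switching DNS providers and you can do a crude version of Nykolas’ comparison for your own location.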

So why in my commentary above do I say “supposedly”? Well, because they don’t really block malware. In another Medium post, he compared the various DNS providers’ security filters and found that many of the malware-infested sites he tested weren’t blocked by any of the providers. Granted, he couldn’t test every piece of malware, but he did test dozens of samples, some new and some old. He found that Google’s “safe browsing” feature did a better job of blocking malicious content at the individual browser level than any of these DNS providers did at the network level.

Given these results, I will probably use the Cloudflare 1.1.1.1 DNS going forward. After all, it is an easy IP address to remember (they worked with one of the regional Internet authorities, which has owned that address since the dawn of time), it works well, and I like the motivation behind it, as they stated on their blog: “We don’t want to know what you do on the Internet—it’s none of our business—and we’ve taken the technical steps to ensure we can’t.”

One final caveat: speeding up DNS isn’t the only thing you can do to surf the web more quickly. There are many other roadblocks or speed bumps that can delay packets getting to your computer or phone. But it is a very easy way to gain performance, particularly if you rely on a solid infrastructure such as what Cloudflare is providing.

Looking back at 25 years of the world wide web

This week the web celebrated yet another of its 25th birthdays, and boy does that make me feel old. Like many other great inventions, it has several key dates along the way in its origin story. For example, here is a copy of the original email that Tim Berners-Lee sent back in August 1991, along with an explanation of the context of that message. Steven Vaughan-Nichols has a nice walk down memory lane over at ZDNet here.

Back in 1995, I had some interesting thoughts about those early days of the web as well. This column draws on one that I wrote then, with some current day updates.

I’ve often said that web technology is a lot like radio in the 1920s: station owners are not too sure who is really listening, but they are signing up advertisers like crazy; programmers are still feeling around for the best broadcast mechanisms; and fast-changing standards make for lots of shaky technology that barely works on the best of days. Movies obviously are the metaphor for Java and audio applets and other non-textual additions.

So far, I think the best metaphor for the web is that of a book: something that you’d like to have as a reference source, entertaining when you need it, portable (well, if you tote around a laptop), and so full of information that you would rather leave it on your shelf.

Back in 1995, I was reminded of so-called “electronic books” that were a big deal. One of my favorites then was a 1993 book/disk package called The Electronic Word by Richard Lanham. It is ironically about how computers have changed the face of written communications. The book is my favorite counter-example of on-line books. Lanham is an English professor at UCLA and the book comes with a Hypercard stack that shows both the power of print and how unsatisfactory reading on the screen can be. Prof. Lanham takes you through some of the editing process in the Hypercard version, showing before and after passages that were included in the print version.

But we don’t all want to read stuff on-line, especially dry academic works that contain transcripts of speeches. That is an important design point for webmasters to consider. Many websites are full reference works, and even if we had faster connections to the Internet, we still wouldn’t want to view all that stuff on-screen. Send me a book, or some paper, instead.

Speaking of eliminating paper, in my column I took a look at what Byte magazine was trying to do with its Virtual Press Room. (The link will take you to a 1996 archive copy, where you can see the beginnings of what others would do later on. As with so many other things, Byte was far ahead of the curve.)

Byte back then had an intriguing idea: having vendors send their press releases electronically, so editors wouldn’t have to plow through the printed ones. But how about going a step further in the interest of saving trees: sending in links to the vendors’ own websites along with whatever keywords are needed? Years later, I am still asking vendors for key information from their press releases. Some things don’t change, no matter what the technology.

What separates good books from bad is good indexing and great tables of contents. We use both in books to find our way around: the latter more for reference, the former more for determining interest and where to enter its pages. So how many websites have you visited lately that have either, and have done a reasonable job on both? Not many back in 1995, and not many today, sadly. Some things don’t change.

Today we almost take it for granted that numerous enterprise software products have web front-end interfaces, not to mention all the SaaS products that speak native HTML. But back in the mid 1990s, vendors were still struggling with the web interface and trying it on. Cisco had its UniverCD (shown here), which was part CD-ROM and part website. The CD came with a copy of the Mosaic browser so you could look up the latest router firmware and download it online, and when I saw this back in the day I said it was a brilliant use of the two interfaces. Novell (ah, remember them?) had its Market Messenger CD ROM, which also combined the two. There were lots of other book/CD combo packages back then, including Frontier’s Cybersearch product. It had the entire Lycos (a precursor of Google) catalog on CD along with browser and on-ramp tools. Imagine putting the entire index of the Internet on a single CD. Of course, it would be instantly out of date but you can’t fault them for trying.

The reason why vendors combined CDs with the web was that bandwidth was precious and sending images down a dial-up line was painful. (Remember that the first web browser shown at the top of this column was text-only.) If you could offload those images onto a CD, you could have the best of both worlds. At the time, I said that if we wanted to watch movies, we would go to Blockbuster and rent one. Remember Blockbuster? Now we get annoyed if our favorite flick isn’t available to be immediately streamed online.

Yes, the web has come a long way since its invention, no matter which date you choose to celebrate. It has been an amazing time to be around and watch its progress, and I count myself lucky that I can use its technology to preserve many of the things that others and I have written about it.

Authentic8 whitepaper: Why a virtual browser is important for your enterprise

The web browser has become the de facto universal user application interface. It is the mechanism of choice for accessing modern software and services. But because of this ubiquity, browsers bear a heavier burden to handle security carefully.

Because more malware enters via the browser than any other place across the typical network, enterprises are looking for alternatives to the standard browsers. In this white paper that I wrote for Authentic8, makers of the Silo browser (their console is shown here), I talk about some of the issues involved and the benefits of using virtual browsers. These tools offer a kind of sandboxing protection to keep malware and infections from spreading across the endpoint computer. This means web content can’t easily reach the actual device being used to surf the web, so even if the browser session is infected, the damage can be more readily contained.