Many website operators have wrestled with the decision to move their entire web infrastructure to HTTPS. The upside is obvious: better protection and a more secure pathway between browser and server. However, making the switch isn’t all that easy. In this piece that I wrote for IBM’s Security Intelligence blog, I bring up the case study of The Guardian’s website and what its team did to make the transition. It took them more than a year and a lot of careful planning before they could fully support HTTPS.
It used to be so simple to understand how a web browser and a web server communicated. The server held a bunch of HTML pages and sent them to the browser when a user typed in a URL and navigated to that location. The HTML that came back to the browser was pretty much human-readable, which meant that even someone with little programming experience and a basic knowledge of the syntax could figure out what was going on in the page.
I can say this because I remember learning HTML code in those early days in a few days’ time. While I am not a programmer, I have written code in the distant past.
Those days (both of me writing any code and of easily parsing web pages) are so over now. Today’s web servers do a lot more than just transmit a bunch of HTML. They consolidate a great deal of information from a variety of sources: banners from ad networks, tracking images used in visitor analytics, tracking cookies for eCommerce sites (so they can figure out if you have been there before), content distribution network code and more.
Quite frankly, if you look at all the work that a modern web server has to do, it is a wonder that any web page ends up looking as good as it does. But this note isn’t just about carping on complexity. It is precisely this complexity that the bad guys have been exploiting for their own evil ways for many years, using what are called script injection techniques.
Basically, thanks to poorly written code on third-party websites or to clever hacking techniques, an attacker can inject malware into a web page that can do just about anything, including gathering usernames and passwords without the user’s knowledge.
One type of injection, SQL injection, is usually near the top of the list of most frequent attacks year after year. This is because it is easy to do, it is easy to find targets, and it gets big results fast. It is also easy to fix if you can convince your database and web developers to work together.
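That fix usually comes down to parameterized queries, which keep user input from ever being interpreted as SQL. Here is a minimal sketch in Python with SQLite; the table and data are invented for illustration:

```python
import sqlite3

# A throwaway in-memory database with one user record.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "' OR '1'='1"  # a classic injection payload

# Vulnerable: string concatenation lets the payload rewrite the query,
# so the WHERE clause becomes always-true and every row comes back.
vulnerable = f"SELECT * FROM users WHERE name = '{user_input}'"
print(len(conn.execute(vulnerable).fetchall()))  # 1 — the payload matched every row

# Safe: a parameterized query treats the payload as literal data.
safe = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,))
print(len(safe.fetchall()))  # 0 — no user is actually named that
```

The same parameter-binding idea applies to any database driver; the point is that the query text and the user data travel separately, which is why this attack is so fixable once the database and web developers cooperate.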
But there is another type of injection that is more insidious. Imagine what might happen if an ad network’s server were compromised so that it could target a small collection of users and insert a keylogger to capture their IDs and passwords. This could easily become a major data breach.
A variety of security tools have been invented to try to stop these injections, including secure browsers (such as Authentic8), sandboxing techniques (such as Check Point’s SandBlast), automated code reviews (such as the runtime application self-protection techniques from Vasco and Veracode), and browser extensions that can block specific page content. None of these is a complete or fully satisfactory solution.
If you are concerned about these kinds of injections, you might want to experiment with a couple of browser extensions. These are not new: many of these tools were created years ago to stop pop-up ads from appearing on your screen. They have gotten new attention recently because many ad networks want to get around the ad blockers (so they can continue to make money selling ads). But you can use these tools to augment your browser security too. If you are interested in trying one out, here is a good comparative test of ad blocker performance done several years ago. There is another comparative review by Lifehacker, also several years old, that focuses on privacy features.
I was curious, so I have been running two of these extensions lately: Privacy Badger (shown here) and Ghostery. I wanted to see what kind of information they pick up and exactly how many third parties are part of my web transactions when I do my banking, buy stuff online, and connect to the various websites I use to run my life. The number will surprise you: some sites have dozens of third-party sites contributing to their pages.
Privacy Badger is from the Electronic Frontier Foundation and is focused on consumers who are concerned about their online privacy. When you call it up onscreen, it shows you a list of the third-party sites on the current page, with a simple three-position slider next to each one: you can block the originating domain entirely, block just its cookies, or allow it access. Ghostery works a bit differently, and ironically (or unfortunately) wants you to register before it provides more detailed information about third-party sites. It provides a short description of each ad network or tracking site that it has discovered from reading the page you are currently browsing. The two tools cite different sites in their reports.
There are some small signs of hope on the horizon. An Israeli startup called Source Defense is in beta with a product that secures your website from malicious third-party script injections such as keylogger insertions; I saw a short demo and it seems promising. Browsers are getting better too, with more control over pop-ups and third-party cookies and better blocking of the more obvious malware attacks. But as browser security controls become more thorough, they also become more difficult to use. It is the nature of the Internet that security will always chase complexity.
In a recent LinkedIn post, Why ‘Content Marketing’ Needs to be Killed Dead and Buried Deep, Kyle Cassidy argues that the term should be retired. Cassidy is a former ad agency content marketer who has grown tired of it. His well-written – and somewhat tongue-in-cheek – post gives some solid reasons why the term should be put out of its misery, including over-inclusive usage that renders it meaningless, not unlike the cutesy names now applied to departments that used to be called “personnel” and “marketing.” Given that Paul and I both come from a long-standing journalism tradition in which the quality of our words was Job #1, he has some salient points to consider.
I had an opportunity to discuss this on a recent podcast that I do with Paul Gillin here. If you don’t know Paul, he is cut from the same cloth as I: a long-time technology journalist who has started numerous pubs and websites and has written several books.
Cassidy writes about the “hot mess of skills” that can be found in the typical content marketer, who as he says is “a steaming pile of possibility” that combines “a savvy copywriter, editor, and brand strategist” all rolled up into one individual. True enough: you need a lot of skills to survive these days. But one skill that he just briefly mentions is something that both Paul and I have in spades. We consider ourselves journalists first and marketing our “content” a distant second.
Cassidy has a good point: “Content Marketing is a meaningless term. PR is content. Product is content. Blogs and social are content. Emails are content. Direct mail is content.” Yes, but. Not all content is created equal. Some content is based on facts, and some isn’t. Without a solid foundation in determining facts you can’t market anything, whether it is content or the latest music tracks. You have to speak truth to power, as the old Quaker saying goes.
Of course, fact-based journalism – what we used to call just “journalism” – is under siege as well these days, given the absence and abuse of facts streaming live every day from our nation’s capital. The notion of fake news – what we used to call rumors, exaggerations, lies and misleading statistics – is also rife and widespread. Even the New York Times has had trouble finding the right person to quote recently.
Part of me wants to assign blame to content marketers for these trends. But the real reason is just laziness on the part of writers, and the lack of editors who in the olden days – say, ten years ago – used to work with them to sharpen their writing, catch these lazy slips of the keyboard, and hold their fingers to the fire to make sure they checked their quotes, found another source, deleted unsupported conclusions and the like. I still work with some very fine editors today, and it is uncanny how quickly they can zoom in on a weak spot in my prose. Even after years of writing and thousands of stories published, I still mess up. It isn’t (usually) deliberate: we all make mistakes. But few of us can find and fix our own. Part of this is the online world we now inhabit.
But if the online world has decimated journalists, it has really taken its toll on editors, who are now few and far between. Few publications want to take the time to pass a writer’s work through an editor: the rubric is post first, fix later. Be the first to get something “out there,” irregardless (sic) of its accuracy. As I said, you can’t be your own editor, no matter how much experience you have or how many words a week you publish. You need a second (and third) pair of eyes to see what you don’t.
When I first began in tech journalism in the mid-1980s, we had an entire team of copy editors working at “the desk,” as it was called. The publication I was working for at the time was called PC Week, and we put the issue to bed every Thursday night. No matter where in the world you were, on Thursday nights you had to be near a phone (and this was the era before cell phones were common). You invariably got a call from the desk about something that was awry with your story. It was part of the job.
Several years ago, I was fortunate to do freelance work for the New York Times. It was a fun gig, but it wasn’t an easy one. By the time my stories would be published in the paper, almost every word had been picked over and changed. Some of these changes were so subtle that I wouldn’t have seen them if the track changes view wasn’t turned on. A few (and very few) times, I argued with the copy desk over some finer point. I never thought that I would miss either of those times. They seem like quaint historical curiosities now.
So let’s kill off the term content marketing, but let’s also remember that if we want our content to sing, it has to be true, fact-based, and accurate. Otherwise, it is just the digital equivalent of a fish wrapper.
Going back to our podcast, Paul and I next pick up on the dust-up between CrowdStrike and NSS Labs over a test of the former’s endpoint security products. CrowdStrike claims the NSS tests didn’t show its product in the best light and weren’t ‘authorized,’ and it has even taken NSS to court. Our view: too bad. If you don’t like the results, shame on you for not working more closely with the testers. And double shame for suing them. I have been on the other end of this scenario for a number of years and offer an inspiring anecdote about how a vendor can turn a pig’s ear into a silken test. Work with the testing press, and eventually you too can turn things around to benefit both of you.
Finally, we bring up the issue of a fake tweet being used by the New York Times and Newsmax regarding the firing of National Security Adviser Michael Flynn earlier this week. The Times eventually posted a correction, but if the Grey Lady of journalism can be fooled, it raises questions about how brands should work with parody or unauthorized social media accounts. Lisa Vaas has a great post on Naked Security with some solid suggestions on how to vet accounts in the future: look for the blue verification check mark, examine when the account was created, and review the history of tweets.
You can listen to our podcast (23 min) here:
Lenny Zeltser has been teaching security classes at SANS for more than 15 years and has earned the prestigious GIAC Security Expert professional designation. He is not some empty suit but a hands-on guy who developed REMnux, a Linux toolkit used by malware analysts throughout the world. He is frequently quoted in the security trades, recently became VP of Products at Minerva Labs, and spoke to me about his approach to incident response, endpoint protection and digital forensics.
“I can’t think about malware in the abstract,” he said. “I have to understand it in terms of its physical reality, such as how it injects code into a running process and uses a command and control network. This means I have to play with it to learn about it.”
“Malware has become more elaborate over the past decade,” he said. “It takes more effort to examine it now. Which is interesting, because at its core it hasn’t changed that much. Back a decade or so, bots were using IRC as their command and control channel. Now of course there are many more HTTP/HTTPS-based connections.”
One interesting trend is that “malware is becoming more defensive, as a way to protect itself from analysis and automated tools such as sandboxes. This makes sense because malware authors want to derive as much value as they can and try to hide from discovery. If a piece of malware sees that it is running on a test machine or inside a VM, it will just shut down or go to sleep.”
Why has he made the recent move to working for a security vendor? “One reason is because I want to use the current characteristics of malware to make better protective products,” he said. Minerva is working on products that try to trick malware into thinking it is running in a sandbox when it is actually sitting on a user’s PC, as a way to shut down the infection. Clever. “Adversaries are so creative these days. So two can play that game!”
Another current trend for malware is what is called “fileless,” or the ability to store as little as possible in the endpoint’s file system. While the name is somewhat misleading – you still need something stored on the target, whether it be a shortcut or a registry key – the idea is to have minimal and less obvious markers that your PC has been infected. “Something ultimately has to touch the file system and has to survive a reboot. That is what we look for.”
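To illustrate the kind of marker he is describing, here is a toy sketch — not Minerva’s actual logic, and the autorun entries are invented — that flags autorun commands launching from user-writable temp paths, one common sign of a persistence foothold:

```python
import re

# Hypothetical autorun entries in the shape of Windows Run-key values
# (name -> command line). A real tool would read these from the registry.
autoruns = {
    "OneDrive": r"C:\Program Files\Microsoft OneDrive\OneDrive.exe /background",
    "Updater":  r"C:\Users\bob\AppData\Local\Temp\svch0st.exe -silent",
}

# Heuristic: executables launched from temp or download directories are a
# common marker that something is trying to survive a reboot quietly.
SUSPICIOUS_PATHS = re.compile(r"\\(Temp|Downloads)\\", re.IGNORECASE)

def flag_suspicious(entries):
    """Return the names of entries whose commands launch from suspicious paths."""
    return [name for name, cmd in entries.items() if SUSPICIOUS_PATHS.search(cmd)]

print(flag_suspicious(autoruns))  # ['Updater']
```

A single heuristic like this produces false positives, of course; the point is only that even “fileless” malware leaves some marker in an autorun location that a defender can enumerate.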
Still, no matter how sophisticated a piece of malware is, there is always user error that you can’t completely eliminate. “I still see insiders who inadvertently let malware loose – maybe they click on an email attachment or they let macros run from a Word document. Ultimately, someone is going to try to run malicious code someplace; they will get it to where they want to.”
“People realize that threats are getting more sophisticated, but enterprises need more expertise too, and so we need to train people in these new skills,” he said. One challenge is being able to map out a plan post-infection. “What tasks do you perform first? Do you need to re-image an infected system? You need to see what the malware is doing, and where it has been across your network, before you can mitigate it and respond effectively,” he said. “It is more than just simple notification that you have been hit.”
I asked him to share one of his early missteps with me, and he mentioned when he worked for a startup tech company that was building web-based software. The firm wanted to make sure their systems were secure, and paid a third-party security vendor to build a very elegant and complex series of protective measures. “It was really beautiful, with all sorts of built-in redundancies. The only trouble was we designed it too well, and it ended up costing us an arm and a leg. We ended up overspending to the point where our company ran out of money. So it is great to have all these layers of protection, but you have to consider what you can afford and the business impact and your ultimate budget.”
Finally, we spoke about the progression of technology and how IT and security professionals are often unsure when confronted with the shock of the new. “First there were VLANs,” he said. “Initially, they were designed to optimize network performance and reduce broadcast domains. They were resisted by security professionals at first, but over time they were accepted and used for security purposes. The same thing happened with VMs and cloud technologies. And we are starting to see containers become more accepted as security professionals get used to them. The trick is to stay current and make sure the tools are advancing with the technology.”
Paul Gillin and I discuss a variety of topics this week. First, the notion of automated phone attendants providing outbound sales support took on new meaning when Paul got a call from Brian the fundraiser. Turns out Brian wasn’t a real person, but he initially fooled Paul!
Next, perhaps it’s time to sharpen our use of language. We talk about lazy usage of meaningless words, such as flexible robust high-performance. Say what?
I note that the latest crop of domain name extensions is completely out of control, not to mention pricey, and is making life harder for brands too. You can listen to our 20-minute podcast here:
The latest domain-based scam depends on you not noticing the difference between ɢoogle.com and Google.com. Look closely and note that the first “g” looks a bit off between the two samples. That is because the scam domain uses a look-alike Unicode character, a Latin small capital G (as illustrated in the Wikipedia entry shown above). Thanks to Unicode support in domain names (which makes Chinese, Hebrew and other non-Roman-lettered domains possible), scammers are registering these near-typo-squatted domains to fool users into clicking on them. This also makes it harder for IT security folks to keep malware hosted on these domains from infecting their networks. This particular domain was registered to an alleged Russian criminal named Vitaly Popov. He also owns the domain lifehacĸer.com. (Note the odd “k” there.)
Needless to say, the legit owners of these domains have filed legal disputes, claiming that users would be confused and at peril.
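One cheap programmatic defense is to flag any domain containing non-ASCII characters and look at the Punycode (“xn--”) form a browser actually resolves. A minimal Python sketch, with helper names of my own invention:

```python
def looks_homographic(domain: str) -> bool:
    """Flag domains that contain any non-ASCII (potential look-alike) characters."""
    return not domain.isascii()

def punycode(domain: str) -> str:
    """Return the IDNA/ASCII form that a browser would actually resolve."""
    return domain.encode("idna").decode("ascii")

print(looks_homographic("google.com"))   # False
print(looks_homographic("ɢoogle.com"))   # True
print(punycode("ɢoogle.com"))            # an "xn--" encoded look-alike
```

The ASCII check is crude (it would also flag legitimate internationalized domains), but seeing an unexpected “xn--” prefix on what should be a familiar brand name is a strong hint that something is off.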
This isn’t the only challenge for users of the domain name system. I recently explored registering a new domain name. Given that the old standbys such as .com and .net are usually taken for the most common names, the Internet authorities have now opened up dozens of new extensions to choose from, such as .camera and .kitchen (see the screenshot here). In fact, there are far too many choices. I guess this was inevitable.
But my surprise wasn’t just at the sheer number of them but at their excessive cost: some of these extensions will set you back hundreds of dollars a year. And that is just for registering the name, let alone putting up a website for the domain. While many domains now get sold through brokers for higher fees, this is just the initial retail cost from a registrar. It makes it a lot harder for brands to know what to purchase, and it ups the ante for startups, which may have to purchase multiple names to protect their brand.
Remember those halcyon days of Pets.com and its spokes-puppet? Seems like a long time ago.
You’ve probably seen your fill of Mirai-inspired headlines, but keep reading my article on HPE’s website. You’ll learn something essential to maintaining your overall IT security posture. I provide an overall timeline of events since last fall, show how Mirai was first detected, and suggest what you should do to protect your enterprise infrastructure.
Both Paul and I have known Sam Whitmore since all three of us were at PC Week (now eWeek) back in the go-go 1980s. Since 1998, Sam has been running his own consultancy for PR firms, called MediaSurvey. We spent some time talking to him about a fascinating series of posts on his site that began with an open letter from a fictional agency to its fictional B2B client. The letter explains, from the agency’s point of view, why the relationship isn’t more productive. It inspired several comments, as well as our own curiosity about Sam’s motivations.
The letter makes three points, with the basic thesis being that “We need max access and a budget bump,” meaning that PR budgets have to reflect a more strategic approach to what agencies do. The fictional PR firm asks to be given better access to customer feedback, to become a more strategic partner in the client’s marketing efforts, and to have better relationships with content gateways that will outlast a point product release. The tone of the letter is snarky but also to the point, with good suggestions about the brave new world of what Sam calls “content platforms” such as ITCentralStation, ProductHunt, and SoftwareAdvice. Whitmore calls these the “IT version of Yelp,” and notes that they are increasingly powerful in shaping buying decisions. Do you know about them? I actually contribute product reviews to the first site and have seen impressive results, but Paul had barely heard of them.
You can listen to our 27-minute podcast here:
Today, the issue of digital equity is receiving more attention than ever, and for good reason: Internet access is no longer a luxury but a daily necessity in how we live, work and play. Still, we are far from the most connected nation on earth (as shown above, from TransferWise), and a quarter of our homes still aren’t on broadband networks.
One issue is that the digital divide isn’t a simple binary split between “haves” and “have nots.” There are many shades of grey in between. Not everyone uses the Internet and connected technologies the same way, with the same skill set, or even with the same context. Before we can solve this divide, we have to understand these subtleties.
I met Michael Liimatta at an event last week, and he got me thinking about this in more detail. He is the co-founder of Connecting for Good, a Kansas City nonprofit focusing on digital inclusion. I have taken his remarks from this blog post and added my own thoughts as well.
In our efforts to level the digital playing field for low income families, we must avoid the assumption that all of them relate to technology, computers and the Internet in the same way. To be effective in digital inclusion efforts, we must recognize that there are at least four different subsets within this population, each with its own and different needs.
- The early adopters: Several national studies indicate that low income families with school children have a higher rate of broadband adoption; approximately half of them can access the Internet at home. The cities where we find the highest adoption rates are those where discounted Internet plans have been offered for a number of years and where there is extensive outreach in the public schools. However, these plans are not available everywhere. There are also cost issues: some families have to purchase expensive smartphone data plans to connect their computers, and many families have outdated PCs or don’t have the necessary tech support or lack sufficient bandwidth. These early adopter families also have another issue: understanding the dangers of the Internet in terms of accessing inappropriate content and meeting child predators.
- The uninformed: We do not want to forget that there are still low income families that know they need to be online and can afford a discounted Internet plan but simply don’t know what plans are available. ISPs like Comcast, Cox and Google Fiber have staff members dedicated to this type of outreach in cities where they offer discounted Internet services. But they will need more local help to increase awareness.
- The financially challenged: The truth is, there are families that recognize the need to be connected but truly cannot afford to do so. With the FCC’s modernization of the Lifeline program, a $9.25 per month subsidy for broadband services should be available to eligible low income families, if only more ISPs adopted it. There are other programs from local housing authorities and private philanthropy that can also help to defray these costs.
- The unconvinced and intimidated: Lastly, there are low income families that are able to afford a discounted Internet connection but are simply not convinced that they need one or are too intimidated by technology. Ultimately, convincing the adult heads of household is the trick. They must value access enough to dedicate seriously limited financial resources toward paying for an Internet subscription. When it comes to broadband adoption efforts, this can be the most challenging group of all, representing a significant portion of households living on the wrong side of the digital divide. This group also includes people who don’t know the difference between accessing the Web via a phone or the larger screens of tablets and PCs.
Digital inclusion efforts need both dedicated leadership and “boots on the ground” to be executed successfully. Too many efforts focus on providing computers and connectivity but fail to factor in the social dynamics of broadband adoption. Crossing this divide will take hours and hours of training and technical support if we are to bring the Internet to the rest of America’s poorest families.
Here is one small step forward: Next week, the National Digital Inclusion Alliance will hold a webinar to brief digital inclusion practitioners and advocates on the state of digital inclusion at the local community level. You might want to tune in.