Web Informant

David Strom's musings on technology

Security Intelligence blog: Making the Move to an All-HTTPS Network

Many website operators have wrestled with the decision to move all their web infrastructure to support HTTPS protocols. The upside is obvious: better protection and a more secure pathway between browser and server. However, it isn’t all that easy to make the switch. In this piece that I wrote for IBM’s Security Intelligence blog, I bring up the case study of The Guardian’s website and what they did to make the transition. It took them more than a year and a lot of careful planning before they could fully support HTTPS.

Block that script!

It used to be so simple to understand how a web browser and a web server communicated. The server held a bunch of pages of HTML and sent them to the browser when a user typed in a URL and navigated to that location. The HTML that came back to the browser was pretty much human-readable, which meant anyone with a little programming knowledge and a basic grasp of command syntax could figure out what was going on in the page.

I can say this because I remember learning HTML code in those early days in a few days’ time. While I am not a programmer, I have written code in the distant past.

Those days (both me writing any code and parsing web pages) are so over now. Today’s web servers do a lot more than just transmit a bunch of HTML. They consolidate a great deal of information from a variety of sources: banners from ad networks, tiny images (tracking pixels) used in visitor analytics, tracking cookies for eCommerce sites (so they can figure out if you have been there before), content distribution network code and more.

Quite frankly, if you look at all the work that a modern web server has to do, it is a wonder that any web page ends up looking as good as it does. But this note isn’t just about carping about that complexity. The point is that the bad guys have been exploiting this complexity for their own evil ends for many years, using what are called script injection techniques.

Basically, through poorly written code on third-party websites or clever hacking techniques, an attacker can inject malicious script into a web page that can do just about anything, including gathering usernames and passwords without the user’s knowledge.

One type of injection, SQL injection, is usually near the top of the list of most frequent attacks year after year. This is because it is easy to do, it is easy to find targets, and it gets big results fast. It is also easy to fix if you can convince your database and web developers to work together.
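If you want to see why the fix is so simple, here is a minimal Python sketch using the standard sqlite3 module (the table, columns and data are hypothetical) that contrasts a string-built query, which is injectable, with a parameterized one, which is not:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # VULNERABLE: the username is pasted into the SQL text, so input like
    # ' OR '1'='1 changes the meaning of the query itself.
    query = "SELECT id, email FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # SAFE: the ? placeholder passes the username as data, never as SQL.
    query = "SELECT id, email FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice', 'alice@example.com')")
    payload = "' OR '1'='1"                  # the classic injection string
    print(find_user_unsafe(conn, payload))   # leaks every row
    print(find_user_safe(conn, payload))     # correctly returns []
```

The same placeholder mechanism exists in every mainstream database driver, which is why this fix is mostly a matter of getting the developers to agree to use it.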

But there is another type of injection that is more insidious. Imagine what might happen if an ad network server would be compromised so that it could target a small collection of users and insert a keylogger to capture their IDs and passwords. This could easily become a major data breach.

A variety of security tools have been invented to try to stop these injections from happening, including secure browsers (such as Authentic8.com), using various sandboxing techniques (such as Checkpoint’s Sandblast), running automated code reviews (such as with runtime application self-protection techniques from Vasco and Veracode), or by installing a browser extension that can block specific page content. None of these is really satisfactory or a complete solution.
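One more layer worth mentioning, though it isn’t on the list above, is the Content-Security-Policy header, which lets a site tell the browser which script sources it will accept. Here is a rough sketch of the idea using Python and Flask; the CDN domain is a placeholder, and a real policy has to be tuned to the third parties your pages actually rely on:

```python
from flask import Flask

app = Flask(__name__)

@app.after_request
def add_csp_header(response):
    # Tell the browser to run scripts only from this site and one trusted CDN;
    # a script injected from any other origin is refused before it executes.
    response.headers["Content-Security-Policy"] = (
        "default-src 'self'; "
        "script-src 'self' https://cdn.example.com; "   # placeholder CDN
        "object-src 'none'"
    )
    return response

@app.route("/")
def index():
    return "<html><body><h1>Hello</h1></body></html>"

if __name__ == "__main__":
    app.run(port=8080)
```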

If you are concerned about these kinds of injections, you might want to experiment with a couple of browser extensions. These are not new: many of them were created years ago to stop pop-up ads from appearing on your screen. They have gotten new attention recently because many ad networks want to get around ad blockers (so they can continue to make money selling ads). But you can use these tools to augment your browser security too. If you are interested in trying one out, here is a good performance test of a variety of ad blockers done several years ago, and there is another comparative review from Lifehacker, also several years old, that focuses on privacy features.

I was interested so I have been running two of these extensions lately: Privacy Badger (shown here) and Ghostery. I wanted to see what kind of information they pick up and exactly how many third-parties are part of my web transactions when I do my banking, buy stuff online, and connect to the various websites that I use to run my life. The number will surprise you. Some sites have dozens of third-party sites contributing to their pages.

Privacy Badger is from the Electronic Frontier Foundation, and is focused on the consumer who is concerned about his or her online privacy. When you call it up onscreen, it will show you a list of the third-party sites and has a simple three-position slider bar next to each one: you can block the originating domain entirely, just block its cookies, or allow it access. Ghostery works a bit differently, and ironically (or unfortunately) wants you to register before it provides more detailed information about third party sites. It provides a short description of the ad network or tracking site that it has discovered from reading the page you are currently browsing. The two tools cite different sites in their reports.

There are some small signs of hope on the horizon. An Israeli startup called Source Defense is in beta; its product secures your website against malicious third-party script injections such as keylogger insertions. I saw a short demo of it and it seems promising. Browsers are getting better, too, with more control over pop-ups and third-party cookies and better blocking of the more obvious malware attacks. However, as browser security controls become more thorough, they also become more difficult to use. It is the nature of the Internet that security will always chase complexity.

FIR B2B podcast #67: Is it Time to Kill the Term ‘Content Marketing?’

In a recent LinkedIn post, Kyle Cassidy proposed Why ‘Content Marketing’ Needs to be Killed Dead and Buried Deep. Cassidy is a former ad agency content marketer who has grown tired of the term and wants to see it retired. His well-written – and somewhat tongue-in-cheek – post gives some solid reasons why the term should be put out of its misery, including over-inclusive usage that renders it meaningless, not unlike the cutesy names that are now applied to departments that used to be called “personnel” and “marketing.” Given that Paul and I both come from a long-standing journalism tradition in which the quality of our words was Job #1, he does have some salient points to consider.

I had an opportunity to discuss this on a recent podcast that I do with Paul Gillin here. If you don’t know Paul, he is cut from the same cloth as I: a long-time technology journalist who has started numerous pubs and websites and has written several books.

Cassidy writes about the “hot mess of skills” that can be found in the typical content marketer, who as he says is “a steaming pile of possibility” that combines “a savvy copywriter, editor, and brand strategist” all rolled up into one individual. True enough: you need a lot of skills to survive these days. But one skill that he just briefly mentions is something that both Paul and I have in spades.  We consider ourselves journalists first and marketing our “content” a distant second.

Cassidy has a good point: “Content Marketing is a meaningless term. PR is content. Product is content. Blogs and social are content. Emails are content. Direct mail is content.” Yes, but. Not all content is created equal. Some content is based on facts, and some isn’t. Without a solid foundation in determining facts you can’t market anything, whether it is content or the latest music tracks. You have to speak truth to power, as the old Quaker saying goes.

Of course, fact-based journalism – what we used to call just “journalism” – is under siege as well these days, given the absence and abuse of facts that is streaming live every day from our nation’s capital. The notion of fake news – what we used to call rumors, exaggerations, lies and misleading statistics – is also rife. And even the New York Times has had trouble finding the right person to quote recently.

Part of me wants to assign blame to content marketers for these trends. But the real reason is just laziness on the part of writers, and the lack of any editors who in the olden days – say ten years ago – used to work with them to sharpen their writing, find these lazy slips of the keyboard, and hold their fingers to the fire to make sure they checked their quotes, found another source, deleted unsupported conclusions and the like. I still work with some very fine editors today, and it is uncanny how quickly they can zoom in on a particular weak spot in my prose. Even after years of writing and thousands of stories published, I still mess up. It isn’t (usually) deliberate: we all make mistakes. But few of us can find and fix them. Part of this is the online world we now inhabit.

But if the online world has decimated journalists, it really has taken its toll on editors who are few and infrequently seen. Few publications want to take the time to pass a writer’s work through an editor: the rubric is post first, fix later. Be the first to get something “out there,” irregardless (sic.) of its accuracy. As I said, you can’t be your own editor, no matter how much experience you might have and how many words a week you publish. You need a second (and third) pair of eyes to see what you don’t.

When I first began in tech journalism in the mid-1980s, we had an entire team of copy editors working at “the desk,” as it was called. The publication I was working for at the time was called PC Week, and we put the issue to bed every Thursday night. No matter where in the world you were, on Thursday nights you had to be near a phone (and this was the era before cell phones were common).  You invariably got a call from the desk about something that was awry with your story. It was part of the job.

Several years ago, I was fortunate to do freelance work for the New York Times. It was a fun gig, but it wasn’t an easy one. By the time my stories would be published in the paper, almost every word had been picked over and changed.  Some of these changes were so subtle that I wouldn’t have seen them if the track changes view wasn’t turned on. A few (and very few) times, I argued with the copy desk over some finer point. I never thought that I would miss either of those times. They seem like quaint historical curiosities now.

So let’s kill off the term content marketing, but let’s also remember that if we want our content to sing, it has to be true, fact-based, and accurate. Otherwise, it is just the digital equivalent of a fish wrapper.

Going back to our podcast, Paul and I next pick up on the dust-up between CrowdStrike and NSS Labs over a test of the former’s endpoint security products. CrowdStrike claims the NSS tests didn’t show its product in the best light and that NSS wasn’t ‘authorized’ to review it. It has even taken NSS to court. Our view: too bad. If you don’t like the results, shame on you for not working more closely with the testers. And double shame for suing them. I have been on the other end of this scenario for a number of years and offer an inspiring anecdote about how a vendor can turn a pig’s ear into a silken test. Work with the testing press, and eventually, you too can turn things around to benefit both of you.

Finally, we bring up the issue of a fake tweet being used by the New York Times and Newsmax regarding the firing of National Security Adviser Michael Flynn earlier this week. The Times eventually posted a correction, but if the Grey Lady of journalism can be fooled, it brings up questions of how brands should work with parody or unauthorized social media accounts. Lisa Vaas has a great post on Naked Security that provides some solid suggestions on how to vet accounts in the future: look for the blue verification check mark, examine when the account was created and review the history of tweets.

You can listen to our podcast (23 min) here:

Lenny Zeltser is teaching us how malware operates

Lenny Zeltser has been teaching security classes at SANS for more than 15 years now and has earned the prestigious GIAC Security Expert professional designation. He is not some empty suit but a hands-on guy who developed the Linux toolkit REMnux that is used by malware analysts throughout the world. He is frequently quoted in the security trades and recently became VP of Products at Minerva Labs. He spoke to me about his approach to understanding incident response, endpoint protection and digital forensics.

“I can’t think about malware in the abstract,” he said. “I have to understand it in terms of its physical reality, such as how it injects code into a running process and uses a command and control network. This means I have to play with it to learn about it.”

“Malware has become more elaborate over the past decade,” he said. “It takes more effort to examine it now. Which is interesting, because at its core it hasn’t changed that much. Back a decade or so, bots were using IRC as their command and control channel. Now of course there are many more HTTP/HTTPS-based connections.”

One interesting trend is that “malware is becoming more defensive, as a way to protect itself from analysis and automated tools such as sandboxes. This makes sense because malware authors want to derive as much value as they can and try to hide from discovery. If a piece of malware sees that it is running on a test machine or inside a VM, it will just shut down or go to sleep.”

Why did he make the recent move to working for a security vendor? “One reason is that I want to use the current characteristics of malware to make better protective products,” he said. Minerva is working on products that try to trick malware into thinking it is running in a sandbox when it is actually sitting on a user’s PC, as a way to shut down the infection. Clever. “Adversaries are so creative these days. So two can play that game!”
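To illustrate the general idea (this is only a sketch of the concept, not Minerva’s product and not any particular malware family’s checks), evasive malware often probes for tell-tale analysis artifacts before it runs, so planting decoy artifacts on an ordinary PC can convince it to stand down. The file paths below are commonly cited VM markers plus one made-up example:

```python
import pathlib

# Artifacts often probed for by evasive malware before it detonates:
# guest drivers for analysis VMs, plus one hypothetical sandbox-agent log.
DECOY_FILES = [
    r"C:\Windows\System32\drivers\vmmouse.sys",    # VMware guest driver name
    r"C:\Windows\System32\drivers\VBoxGuest.sys",  # VirtualBox guest driver name
    r"C:\analysis\sandbox_agent.log",              # hypothetical sandbox marker
]

def plant_decoys(paths=DECOY_FILES):
    """Create empty decoy files so the endpoint 'looks like' an analysis box."""
    for raw in paths:
        path = pathlib.Path(raw)
        try:
            path.parent.mkdir(parents=True, exist_ok=True)
            path.touch(exist_ok=True)
            print("planted decoy:", path)
        except OSError as err:
            # Writing under system directories needs admin rights; a real
            # product intercepts the malware's probes instead of creating files.
            print("skipped:", path, "-", err)

if __name__ == "__main__":
    plant_decoys()
```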

Another current trend for malware is what is called “fileless,” or the ability to store as little as possible in the endpoint’s file system. While the name is somewhat misleading – you still need something stored on the target, whether it be a shortcut or a registry key – the idea is to have minimal and less obvious markers that your PC has been infected. “Something ultimately has to touch the file system and has to survive a reboot. That is what we look for.”
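You can look for that reboot-surviving footprint yourself. Here is a minimal, Windows-only sketch using Python’s standard winreg module that simply lists the classic autorun registry entries for manual review; deciding what looks suspicious is left to you, since every machine’s baseline differs:

```python
import winreg  # Windows-only standard library module

# The classic per-user and per-machine autorun locations that minimal,
# "fileless"-style persistence frequently touches.
RUN_KEYS = [
    (winreg.HKEY_CURRENT_USER,  r"Software\Microsoft\Windows\CurrentVersion\Run"),
    (winreg.HKEY_LOCAL_MACHINE, r"Software\Microsoft\Windows\CurrentVersion\Run"),
]

def list_autoruns():
    """Print every value under the standard Run keys for manual review."""
    for hive, subkey in RUN_KEYS:
        try:
            key = winreg.OpenKey(hive, subkey)
        except OSError:
            continue  # the key may not exist in this hive
        with key:
            index = 0
            while True:
                try:
                    name, value, _type = winreg.EnumValue(key, index)
                except OSError:
                    break  # no more values to enumerate
                print(subkey + "\\" + name, "->", value)
                index += 1

if __name__ == "__main__":
    list_autoruns()
```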

Still, no matter how sophisticated a piece of malware is, there is always user error that you can’t completely eliminate. “I still see insiders who inadvertently let malware loose – maybe they click on an email attachment or they let macros run from a Word document. Ultimately, someone is going to try to run malicious code someplace; they will get it to where they want to.”

“People realize that threats are getting more sophisticated, but enterprises need more expertise too, and so we need to train people in these new skills,” he said. One challenge is being able to map out a plan post-infection. “What tasks do you perform first? Do you need to re-image an infected system? You need to see what the malware is doing, and where it has been across your network, before you can mitigate it and respond effectively,” he said. “It is more than just simple notification that you have been hit.”

I asked him to share one of his early missteps with me, and he mentioned when he worked for a startup tech company that was building web-based software. The firm wanted to make sure their systems were secure, and paid a third-party security vendor to build a very elegant and complex series of protective measures. “It was really beautiful, with all sorts of built-in redundancies. The only trouble was we designed it too well, and it ended up costing us an arm and a leg. We ended up overspending to the point where our company ran out of money. So it is great to have all these layers of protection, but you have to consider what you can afford and the business impact and your ultimate budget.”

Finally, we spoke about the progression of technology and how IT and security professionals are often unsure when it comes to the shock of the new. “First there were VLANs,” he said. “They were initially designed to optimize network performance and reduce broadcast domains. They were resisted at first by security professionals, but over time they were accepted and used for security purposes. The same thing happened with VMs and cloud technologies. And we are starting to see containers become more accepted as security professionals get used to them. The trick is to stay current and make sure the tools are advancing with the technology.”

FIR B2B podcast #66: The Robot Who Fooled Me, Block That Buzzword and Domain Name Insanity

Paul Gillin and I discuss a variety of topics this week. First, the notion of automated phone attendants providing outbound sales support took on new meaning when Paul got a call from Brian the fundraiser. Turns out Brian wasn’t a real person, but it initially fooled Paul!

Next, perhaps it’s time to sharpen our use of language. We talk about the lazy use of meaningless words such as “flexible,” “robust” and “high-performance.” Say what?

I note that the latest crop of domain name extensions is completely out of control, not to mention pricey, which makes things harder for brands too. You can listen to our 20-minute podcast here:

A new kind of domain name exploit: Latin letters

The latest domain-based scam depends on you not noticing the difference between ɢoogle.com and Google.com. Look closely, and note that the first “g” looks a bit off between the two samples. This is because the fake domain uses lookalike characters from the extended Latin Unicode ranges (as shown from the Wikipedia entry above with all those K’s). Thanks to Unicode support in domain names (which makes Chinese, Hebrew and other non-Roman-lettered domains possible), scammers are registering these near-typo-squatted domains to fool users into clicking on them. This also makes it harder for IT security folks to keep malware hosted on these domains from infecting their networks. This particular domain was registered to an alleged Russian scammer named Vitaly Popov. He also owns the domain lifehacĸer.com. (Note the odd “k” there.)
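One quick way to spot these lookalikes is to check whether a host name contains anything outside plain ASCII and to look at its punycode (“xn--”) form, which is how such names actually travel through DNS. Here is a small Python sketch of that check; treat it as a heuristic, since plenty of legitimate internationalized domains are non-ASCII too:

```python
def inspect_domain(domain):
    """Flag host names containing non-ASCII (potential lookalike) characters."""
    suspicious = [ch for ch in domain if ord(ch) > 127]
    if not suspicious:
        print(domain, "-> plain ASCII, nothing unusual")
        return
    details = ", ".join(f"{ch!r} (U+{ord(ch):04X})" for ch in suspicious)
    try:
        punycode = domain.encode("idna").decode("ascii")  # the on-the-wire form
    except UnicodeError:
        punycode = "<could not encode>"
    print(domain, "-> contains", details, "; DNS form:", punycode)

if __name__ == "__main__":
    inspect_domain("google.com")
    inspect_domain("\u0262oogle.com")  # the small-capital-G lookalike
```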

Needless to say, the legit owners of these domains have filed legal disputes, claiming that users would be confused and at peril. 

This isn’t the only challenge for users of the domain name system. I recently explored registering a new domain name. Given that the old standbys such as .com and .net are usually taken for the most common names, the Internet authorities have now opened up dozens of new extensions to choose from, such as .camera and .kitchen (see the screenshot here). In fact, there are far too many choices. I guess this was inevitable.

But my surprise wasn’t just at the sheer number of them, but at their excessive cost: some of these extensions will set you back hundreds of dollars a year. And that is just for registering the name, let alone putting up a website for that domain. While many domains now get resold through brokers for higher fees, this is just the initial retail cost from a registrar. This makes it a lot harder for brands to know what to purchase, and it ups the ante for startups, which may have to purchase multiple names to protect their brand.

Remember those halcyon days of Pets.com and its spokes-puppet? Seems like a long time ago.

 

HPE Insights: 8 lessons about IoT security learned from the Mirai botnet

In the fall of 2016, a series of attacks was launched using a clever exploit that assembled an automated criminal collection of Internet-connected webcams and digital video recorders. Subsequently labeled “Mirai,” this botnet has been the source of a series of distributed denial of service (DDoS) attacks on numerous notable Internet destinations, including security journalist Brian Krebs’ site, a German ISP, and the Dyn.com domain name service that is used by many large-scale online companies.

Until Mirai came along, the vast majority of DDoS attacks were done using malware-infected Windows PCs, commandeered by criminals who could harness their collective computing power and control them remotely. But Mirai has changed all of that: the sheer number of devices involved and the magnitude of the damage inflicted on its targets have made Mirai a potent criminal force.

There are many things to learn from the construction of its malware and from how it leverages various embedded IoT devices. Let’s walk through the timeline of the destruction it has already accomplished, how Mirai was initially detected, and what IT managers need to know about defending their networks against some of the methods used in its attacks.

Timeline: What actually happened?

Mirai has been in the news for a number of events since last fall. What is clear as you examine this timeline is how it became increasingly potent and dangerous as it was used against various online businesses.

  1. Sept 20: Brian Krebs

On September 20, Brian Krebs’ web servers became the target of one of the largest DDoS attacks ever recorded: between 600 and 700 gigabits per second. To give you an idea of the magnitude, this level of traffic is almost half a percent of the Internet’s entire capacity. What makes this even more impressive is that these data rates were sustained for hours at a time against Krebs’ websites.

DDoS attacks are brute force: a collection of computers sends streams of automated TCP/IP traffic directed at a specific web destination. When the traffic reaches a certain volume, it can overwhelm and shut down this targeted server. An enterprise has to filter out the malicious traffic or otherwise divert it away from its network to bring its servers back online.

This wasn’t Krebs’ first DDoS attack: indeed, over the past several years, he has experienced hundreds of them. But it certainly was the biggest. According to Akamai, the Krebs attacks were launched by 24,000 systems infected with Mirai. During September, five attacks hit Krebs, ranging from 123 to 623 Gbps.

To better defend himself, Krebs had been using the content delivery network Akamai to filter out the attacks, and for the most part it was able to repel these earlier DDoS efforts. But the September 20 attacks contained so much traffic that after several days Akamai had to throw in the virtual towel and admit defeat. This meant that Krebs’ websites were offline for a few days, until he was able to move his protection to Google’s Project Shield, a free invitation-only program designed to help independent news sites stay online. So far, Google’s efforts seem to be working and his website has stayed up and running.

  2. Oct 1: Source code for Mirai released on GitHub

 

The attack on Krebs was a great proof of concept, but the folks behind Mirai took things a step further. A few weeks later, a person going by “Anna_Senpai” posted the code for Mirai online, where it has since been downloaded thousands of times from various sources, including GitHub. The handle refers to a Japanese anime character who is a law enforcer of sorts; the word Mirai is Japanese for future. The release has further spread the botnet infection, as more criminals use the tool to assemble their own botnet armies.

 

  3. Oct 21: Dyn attack

 

Then in late October another huge attack was launched on Dyn, which provides domain name service (DNS) for a variety of large-scale customers such as GitHub, Twitter, Netflix, Airbnb and hundreds of others. DNS is akin to an Internet phone book: when you request a particular website, such as Google.com, it translates the name into the TCP/IP address of Google’s web servers so your request can be routed there. Without these naming services, your request goes nowhere. The Mirai attack used 100,000 unique IP addresses, a big step up from the earlier one on Krebs. Dyn has multiple data centers around the world, and there were three attempted attacks over the course of the day. The first two brought part of its operations down, meaning that Internet users couldn’t access the websites of certain Dyn customers. The third attack was thwarted by Dyn’s IT staff.

More information from Flashpoint here:

https://www.flashpoint-intel.com/mirai-botnet-linked-dyn-dns-ddos-attacks/

 

  4. Nov 1: Liberia’s Internet connection is taken offline

 

The Mirai botnet also brought down the entire Internet connection for Liberia in late October/early November. The attack was targeted at the two fiber companies that own the country’s Internet connections. These companies manage the link to a massive undersea cable that runs around the African continent, connecting other countries together. One possible reason Liberia was targeted is that it depends on this single fiber cable connection, which the Mirai botnet can overwhelm with a 500 Gbps traffic flood.

 

  5. Nov 30: Deutsche Telekom

 

Then, in late November more than 900,000 customers of German ISP Deutsche Telekom (DT) were knocked offline after their Internet routers got infected by a new variant of the Mirai malware. The Mirai code seen in this attack has been modified with two important features: First, it has expanded its scope to exploit a security flaw in specific routers made by Zyxel and Speedport to allow remote code execution. These routers have been sold to numerous German customers, which is why DT was affected so severely.  Second, this new strain of Mirai now scans the entire Internet looking for all potential devices that could be compromised.

 

  6. Mirai attacks are continuing

 

These are just the most noteworthy attacks to date. Given its size and effectiveness, Mirai continues to be deployed against a variety of targets. The security researchers at MalwareTech.com have set up a Twitter account to keep track of these attacks in near real time, where you can see several attacks occur daily:

https://twitter.com/MiraiAttacks

 

 

How was Mirai first detected?

 

September 2016 was the month when a series of IoT-based botnets was detected by a variety of security researchers, most notably Sucuri and Flashpoint. Sucuri published several blog posts describing its investigation of several botnets that added up to a collection of more than 45,000 individual IP addresses. (Note that this is about twice the number of sources behind the first attacks on Krebs.) The botnets were able to pull off an attack on one of Sucuri’s customers that reached 120,000 requests per second. The customer was concerned because the attack was so large that it couldn’t fight it off, even by using the Amazon and Google clouds to spin up larger virtual machines in its own defense. This was similar to what happened when Krebs tried to use Akamai’s defenses.

 

The Sucuri assessment found three different types of endpoints that made up the attack on its customer: webcams, home routers, and compromised enterprise web servers. It found eight major home router brands that were part of the botnet, with the majority of the total IP addresses coming from Huawei devices. Many of these routers were located in Spanish-speaking countries, but there were plenty of compromised routers all around the world. This geographic diversity is one of the reasons why Mirai was both so powerful and so hard to defend against.

[Image: Sucuri’s map of the home router botnet: https://blog.sucuri.net/wp-content/uploads/2016/08/chart_home-router-botnet-map.png]

 

Flashpoint found additional compromised devices by scanning Internet traffic on TCP port 7547, according to its researchers. They say there are several million other vulnerable devices in other countries, including Brazil and the UK. The latest Mirai variant is likely an attempt by one of the existing Mirai botmasters to expand the number of infected devices under their control. According to BadCyber.com, part of the problem is that DT, which was targeted in November, does not appear to have followed the best practice of blocking the rest of the world from remotely managing these devices.

 

Lessons for IT managers

 

The Mirai botnet has quickly developed into a major threat, and defending against traffic volumes that can overwhelm even the most capable web servers will require a combination of methods. Here are several suggestions for IT and security managers.

 

First, have a DDoS strategy ahead of time.  If you thought your company wasn’t that important, you need to forget that security-by-obscurity plan and come up with something more definitive.  Anyone can become a target, and now is the time to plan appropriate measures. Flashpoint has some suggestions here that are worth reading.

 

Now is the time to examine how you obtain your DNS services. One of the problems for Dyn customers was that many didn’t make use of a secondary DNS provider, or didn’t configure their DNS servers to use more than one of Dyn’s data centers. Reconfiguring those servers took time and made the Mirai attack last longer. Some large online companies now use both Dyn and other DNS providers (such as OpenDNS or easyDNS) for redundancy. This is a good strategy in case of future DNS-based attacks.
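One way to audit your own zones is to see whether all of your published NS records point back at a single provider. Here is a rough sketch using the third-party dnspython package (the grouping by parent domain is a crude heuristic, and example.com is a placeholder for one of your own domains):

```python
import dns.resolver  # third-party: pip install dnspython (2.x; older versions use .query)

def nameserver_providers(domain):
    """Return the set of parent domains behind a zone's NS records."""
    answers = dns.resolver.resolve(domain, "NS")
    providers = set()
    for record in answers:
        host = record.target.to_text().rstrip(".")     # e.g. ns1.p10.dynect.net
        providers.add(".".join(host.split(".")[-2:]))  # crude: keep the last two labels
    return providers

if __name__ == "__main__":
    domain = "example.com"  # placeholder: substitute one of your own zones
    providers = nameserver_providers(domain)
    print(domain, "is served by:", ", ".join(sorted(providers)))
    if len(providers) < 2:
        print("WARNING: all name servers appear to come from a single provider")
```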

 

Flashpoint suggests you employ Anycast DNS as your provider. This has two benefits: first, it can spread the attacking botnet requests across a distributed network, lessening the burden on each individual machine. Second, it can also speed up DNS responses, making your Internet visitors happier when pages load more quickly.

 

Another strategy is to regularly check your routers for unauthorized DNS changes, what is called DNS hijacking. F-Secure has a simple and free tool that can determine whether a router’s DNS settings have been tampered with, and it only takes a few seconds per router. While checking every device could be tedious, home routers at least should be checked with this tool.
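The F-Secure tool inspects the router itself; as a rough complement, you can also check which DNS servers your own machine has been handed (usually by that router via DHCP) against the resolvers you expect to be using. A sketch with dnspython again, where the allowlist is a placeholder you would replace with your ISP’s or your own resolver addresses:

```python
import dns.resolver  # third-party: pip install dnspython

# Placeholder allowlist: replace with the resolvers you expect (router, ISP, public).
EXPECTED_RESOLVERS = {"192.168.1.1", "8.8.8.8", "8.8.4.4"}

def check_local_resolvers():
    configured = set(dns.resolver.Resolver().nameservers)  # read from the OS / DHCP
    print("configured resolvers:", ", ".join(sorted(configured)))
    unexpected = configured - EXPECTED_RESOLVERS
    if unexpected:
        print("WARNING: unexpected DNS servers (possible hijack):",
              ", ".join(sorted(unexpected)))
    else:
        print("all resolvers are on the expected list")

if __name__ == "__main__":
    check_local_resolvers()
```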

 

One early strategy was to simply reboot your routers, since Mirai is memory-resident and rebooting removes the infection. While that initially will work, it isn’t a good longer-term solution, since the criminals have perfected scanning techniques to re-infect your router if it is still using the default passwords in their hit list. So of course, the next step is to change these defaults, then reboot again.

 

Find any unchanged factory default passwords on any network equipment and change them immediately. These passwords were the reason why Mirai was able to collect so many endpoint IoT webcams and routers to begin with. The F-Secure tool can help with home routers, but a more complete program should be put in place to ensure that all critical network infrastructure has appropriately complex and unique passwords going forward.

 

Make sure your network forensics are in order. You should be able to capture the attack traffic so you can analyze what happened and who is targeting you. Mirai made use of an exploit on TCP port 7547 to connect to those home routers, so add a detection rule to monitor that port in particular. Also, make sure you can separate legitimate traffic from attack traffic in your logs, which means understanding your normal traffic baselines.
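As a starting point, you can at least verify whether any of your own devices answer on the ports Mirai abuses, including TCP 7547. This standard-library sketch checks a short, placeholder list of addresses; only scan equipment and networks you actually operate:

```python
import socket

DEVICES = ["192.168.1.1", "192.168.1.254"]  # placeholders: your own routers/gateways
MIRAI_PORTS = [23, 2323, 7547]              # telnet, alternate telnet, TR-069 management

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for host in DEVICES:
        for port in MIRAI_PORTS:
            state = "OPEN" if port_open(host, port) else "closed"
            print(host + ":" + str(port), state)
```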

 

Finally, it may be time to consider a content delivery network provider to handle your peak traffic loads. As you investigate your historical traffic patterns, you can see whether your web servers are stretched too thin and whether you need to purchase additional load balancing or content delivery services to improve performance.

HPE Insights: 9 ways to make IoT devices more secure

Devices must be more secure if IoT is to reach its full potential. The good news is that security policies and procedures can protect enterprise infrastructure, harden IoT configurations, and make the network smarter and more defensible. Here is where to start, in an article that I recently wrote for a new HPE IT site, where I explain the bottom-line impact for enterprise IT folks and digest information from various sources, including the latest reports from the Broadband Internet Technical Advisory Group (BITAG) and the Cloud Security Alliance.

FIR B2B podcast #65 with Sam Whitmore: Why Customer Review Platforms are PR’s Great Missed Opportunity

Both Paul and I have known Sam Whitmore since all three of us were at PC Week (now eWeek) back in the go-go 1980s. Since 1998, Sam has been running his own consultancy for PR firms, called MediaSurvey. We spent some time talking to him about a fascinating series of posts on his site that began with an open letter purporting to be from a fictional agency to its fictional B2B client. The letter explains, from the agency’s point of view, why the relationship isn’t more productive. It inspired several comments, as well as our own curiosity about Sam’s motivations.

The letter makes three points, with the basic thesis being that “We need max access and a budget bump,” meaning that PR budgets have to reflect a more strategic approach to what agencies do. The fictional PR firm asks to be given better access to customer feedback, to become a more strategic partner in the client’s marketing efforts, and to have better relationships with content gateways that will outlast a point product release. The tone of the letter is snarky but also to the point, with good suggestions about the brave new world of what Sam calls “content platforms,” such as ITCentralStation, ProductHunt, and SoftwareAdvice. Whitmore calls these the “IT version of Yelp,” and notes that they’re increasingly powerful in shaping buying decisions. Do you know about them? I actually contribute product reviews to the first site and have seen impressive results, but Paul had barely heard of them.

You can listen to our 27-minute podcast here:

Bridging the digital divide: not everyone has the same needs

Today, the issue of digital equity is receiving more attention than ever, and for good reason: Internet access is no longer a luxury; it is a daily necessity in how we live, work and play. Still, we are far from the most connected nation on earth (as shown above, from TransferWise), and a quarter of our homes aren’t yet on broadband networks.

One issue is that the digital divide isn’t a simple binary split between “haves” and “have nots.” There are many shades of grey in between. Not everyone uses the Internet and connected technologies the same way, with the same skill set, or even with the same context. Before we can solve this divide, we have to understand these subtleties.

I met Michael Liimatta at an event last week, and he got me thinking about this in more detail. He is the co-founder of Connecting for Good, a Kansas City nonprofit focused on digital inclusion. I have taken his remarks from this blog post and added my own thoughts as well.

In our efforts to level the digital playing field for low income families, we must avoid the assumption that all of them relate to technology, computers and the Internet in the same way. To be effective in digital inclusion efforts, we must recognize that there are at least four different subsets within this population, each with its own and different needs.

  1. The early adopters: Several national studies indicate that low income families with school children have a higher rate of broadband adoption; approximately half of them can access the Internet at home. The cities where we find the highest adoption rates are those where discounted Internet plans have been offered for a number of years and where there is extensive outreach in the public schools. However, these plans are not available everywhere. There are also cost issues: some families have to purchase expensive smartphone data plans to connect their computers, and many families have outdated PCs or don’t have the necessary tech support or lack sufficient bandwidth. These early adopter families also have another issue: understanding the dangers of the Internet in terms of accessing inappropriate content and meeting child predators.  
  2. The uninformed: We do not want to forget that there are still low income families that know they need to be online and can afford a discounted Internet plan but simply don’t know what plans are available. ISPs like Comcast, Cox and Google Fiber have staff members dedicated to this type of outreach in cities where they offer discounted Internet services. But they will need more local help to increase awareness.
  3. The financially challenged: The truth is, there are families that recognize the need to be connected but truly cannot afford to do so. With the FCC’s modernization of the Lifeline program, a $9.25 per month subsidy for broadband services should be available to eligible low income families, if only more ISPs adopted it. There are other programs from local housing authorities and private philanthropy that can also help to defray these costs.
  4. The unconvinced and intimidated: Lastly, there are low income families that are able to afford a discounted Internet connection but are simply not convinced that they need one or are too intimidated by technology. Ultimately, convincing the adult heads of household is the trick. They must value access enough to dedicate seriously limited financial resources toward paying for an Internet subscription. When it comes to broadband adoption efforts, this can be the most challenging group of all, representing a significant portion of households living on the wrong side of the digital divide. This group also includes people who don’t know the difference between accessing the Web via a phone or the larger screens of tablets and PCs.

Digital inclusion efforts need both dedicated leadership and “boots on the ground” to be executed successfully. Too many efforts focus on providing computers and connectivity but fail to factor in the social dynamics of broadband adoption. Crossing this divide will take hours and hours of training and technical support if we are to bring the Internet to the rest of America’s poorest families.

Here is one small step forward: next week, the National Digital Inclusion Alliance will hold a webinar to brief digital inclusion practitioners and advocates on the state of digital inclusion at the local community level. You might want to tune in.