FIR B2B podcast episode #124: How to supercharge your website content

In today’s episode, we examine different ways you can supercharge your website content, using time-tested strategies that we may intuitively know but don’t always talk about.

The first reference is from an article in Entrepreneur Magazine about three big mistakes one consultant made when building a new site. The mistakes all revolve around not understanding a basic tenet: B2B requires quality, not quantity. He chose AdWords keywords that were too general and ended up spending money on clicks that didn’t generate any real leads. He didn’t understand that buyers need prompting to go deeper into his content, and that potential customers need ways to talk or chat in real time with someone who can get them more engaged and move them further along the marketing funnel. We suggest all sorts of improvements, including adding an FAQ and using different content types, to increase engagement.

The second piece is from Michael Brenner, CEO of Marketing Insider Group, who was our guest way back on episode 12. He talks about the importance of using serialized content to capture more attention. We need to understand that generating demand is all about cultivating and nurturing your potential customers. Start with a content audit to see what material you have that can be collected and serialized. Also examine some of the leading sites that Brenner talks about in this post. Paul has plenty of other great suggestions that he mentions in this episode, and you might also want to buy his book for further details.

You can listen to our 14-minute podcast here.

CSOonline: Best tools for single sign-on

I have been reviewing single sign-on (SSO) tools for nearly seven years, and in my latest review for CSOonline, I identify some key trends and take a look at the progress of products from Cisco/Duo, Idaptive, ManageEngine, Micro Focus/NetIQ, Okta, OneLogin, PerfectCloud, Ping Identity and RSA. You can see the product summary chart here.

If you have yet to implement any SSO or identity management tool, or are looking to upgrade, this roundup of SSO tools will serve as a primer on where you want to take things. Given today’s threat landscape, you need to up your password game by trying to rid your users of the nasty habit of reusing their old standby passwords.

I also look at five different IT strategies to improve your password and login security, the role of smartphone authentication apps, and what is happening with FIDO.

Do you really know where your XP lurks?

I was visiting an industrial firm this week and had a chance to walk around their shop floor to see their equipment. It was a mix of high and low tech: machines that cost many thousands of dollars sitting alongside some very primitive pieces of hardware. Unfortunately, those primitive things were PCs running Windows XP.

Now, I have a fond spot in my heart for XP. Just playing that startup sound sends chills up my spine (well, almost). I spent a lot of time running it for various tests that I got paid to do back in the day when IT pubs paid for that sort of thing. I had a stack of VMs running various configurations, along with a couple of real PCs with different versions of XP that I maintained for years. It was only with some reluctance that I eventually gave them up. Since then I have rarely run XP on anything, because it has been superseded by several newer (and supported) versions of Windows. It appears I am not alone: according to this report, XP can still be found on 3% of consumer desktop PCs, and I am sure that number doesn’t include those in industrial and embedded environments such as I witnessed this week. BTW, Microsoft ended support for XP five years ago, although earlier this year it did create a patch to fix the BlueKeep flaw for XP.

The XP PCs that I saw were used by the firm to control some of their pricey industrial machines. I have no idea what the network infrastructure at this shop looks like, nor how much protection was put in place so they could continue to use XP in their environment. But it almost doesn’t matter: if you have XP, you are basically hanging a sign outside your virtual door that says, “come on in and hack me.” It is just a matter of time before some bad actor finds and exploits these PCs. It is like leaving a jar of honey out. This post, written to help consumers use XP more safely, recommends that you “stop using IE or go offline.” That is harder to do than you might think.

Most likely, replacing this equipment with a more modern version of Windows isn’t all that simple. The machinery has to be tested, and it probably has code that needs to be rewritten to work on newer Windows. You will say that is the entire point, and you would be right. But the firm isn’t going to stop using XP, because then they would be out of business. So they are between a rock and a hard place, to be sure.

So here is a simple security test that you can try out in your business. How many endpoints do you have that are still running XP? Just take a census, using whatever automated tool you might have. Now walk around and see if you can find a few others that are hidden inside industrial equipment, a print server, or some other likely location. Do you have the right network isolation and protections in place? Can you do without an internet connection to these PCs? Why did your automated scanners fail to identify these devices? Can you get rid of them completely, or is the vendor still insisting on using XP for their equipment? I think you will be surprised, and not in a good way, by what the answers are.
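
If your shop runs Active Directory, one quick way to start that census is to ask the directory which machines have registered an XP operating system. Below is a minimal sketch using the Python ldap3 library; the server, account and search base are hypothetical placeholders, and note that this only covers domain-joined machines, so the boxes buried inside shop-floor equipment still require the walk-around.

```python
# Minimal sketch: ask Active Directory for computer accounts that report
# a Windows XP operating system. Assumes the third-party ldap3 library
# (pip install ldap3); the server, account and base DN are hypothetical
# placeholders for your own environment.
from ldap3 import ALL, NTLM, Connection, Server

server = Server("ldap://dc.example.com", get_info=ALL)
conn = Connection(
    server,
    user="EXAMPLE\\auditor",  # hypothetical read-only audit account
    password="change-me",
    authentication=NTLM,
    auto_bind=True,
)

# Match any computer object whose registered OS string mentions XP.
conn.search(
    search_base="DC=example,DC=com",
    search_filter="(&(objectClass=computer)(operatingSystem=*XP*))",
    attributes=["name", "operatingSystem", "operatingSystemServicePack"],
)

for entry in conn.entries:
    print(entry.name, entry.operatingSystem, entry.operatingSystemServicePack)
```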

And for those of you who are running XP at home, do yourself a favor: take a trip this weekend to MicroCenter (or whatever your local computer store is), buy yourself a new computer, and dispose of your old one (after first removing your hard drive). And if needed, conduct an appropriate memorial service to bid this OS a fond farewell.

RSA blog: Taking hybrid cloud security to the next level

RSA recently published this eBook on three tips to secure your cloud. I like the direction the authors took but want to take things a few steps further.  Before you can protect anything, you first need to know what infrastructure you actually have running in the cloud. This means doing a cloud census. Yes, you probably know about most of your AWS and Azure instances, but probably not all of them. There are various ways to do this – for example, Google has its Cloud Deployment Manager and Azure has an instance metadata service to track your running virtual machines. Or you can employ a third-party orchestration service to manage instances across different cloud platforms.
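
As one illustration of what a cloud census can look like, here is a minimal sketch that walks every enabled AWS region and lists the EC2 instances it finds, using the boto3 SDK. It assumes your AWS credentials are already configured; the same idea applies to the inventory APIs of your other cloud providers.

```python
# Minimal cloud-census sketch using the AWS boto3 SDK (pip install boto3).
# Assumes credentials are already configured (environment variables,
# ~/.aws/credentials or an instance role).
import boto3

# Ask one region for the list of all enabled regions.
ec2 = boto3.client("ec2", region_name="us-east-1")
regions = [r["RegionName"] for r in ec2.describe_regions()["Regions"]]

for region in regions:
    client = boto3.client("ec2", region_name=region)
    paginator = client.get_paginator("describe_instances")
    for page in paginator.paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                # Print enough to spot instances nobody remembers launching.
                print(
                    region,
                    instance["InstanceId"],
                    instance["InstanceType"],
                    instance["State"]["Name"],
                )
```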

Here are my suggestions for improving your cloud security posture.

CSOonline: Evaluating DNS providers: 4 key considerations

The Domain Name System (DNS) is showing signs of strain. Attacks leveraging DNS protocols used to be fairly predictable and limited to the occasional DDoS flood. Now attackers use more than a dozen different ways to leverage DNS, including cache poisoning, tunneling and domain hijacking. DNS pioneer Paul Vixie has bemoaned the state of DNS and says that these attacks are just the tip of the iceberg. This is why you need to get more serious about protecting your DNS infrastructure, and various vendors have products and services to help. You have four key options; here’s how to sort them out in a piece that I wrote for CSOonline.

Dark Reading: Understanding & Defending Against Polymorphic Attacks

I first wrote about polymorphic malware four years ago. I recall having a hard time getting an editor to approve publication of my piece because he claimed none of his readers would be interested in the concept. Yet in the time since then, polymorphism has gone from virtually unknown to standard practice among malware writers. Indeed, it has become so common that most descriptions of attacks don’t even call it out specifically. Webroot, in its annual threat assessment from earlier this year, reported that almost all the malware it has seen demonstrated polymorphic properties. You can think of it as the chameleon of malware.
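
To make the concept concrete, here is a small illustration of why polymorphism defeats signature-style defenses: two variants that differ only by a few appended junk bytes produce completely unrelated cryptographic hashes, while a fuzzy hash (via the third-party python-ssdeep bindings, if installed) still scores them as near-identical. The payload bytes here are a harmless stand-in, not real malware.

```python
import hashlib

# Harmless stand-in for a payload; a real polymorphic engine mutates the
# actual code on every infection (re-encryption, junk insertion, etc.).
core = b"pretend-this-is-the-malicious-core-logic " * 200
variant_a = core
variant_b = core + b"\x90\x90\x90\x90"  # same logic, junk bytes appended

# Signature-style detection: the exact hashes share nothing, so a
# blocklist keyed on variant_a's hash will never catch variant_b.
print("a:", hashlib.sha256(variant_a).hexdigest())
print("b:", hashlib.sha256(variant_b).hexdigest())

# Fuzzy hashing scores similarity instead of demanding an exact match,
# which is one way defenders keep up with polymorphic variants.
try:
    import ssdeep  # third-party bindings: pip install ssdeep
    score = ssdeep.compare(ssdeep.hash(variant_a), ssdeep.hash(variant_b))
    print("ssdeep similarity:", score, "/ 100")
except ImportError:
    print("ssdeep not installed; skipping fuzzy-hash comparison")
```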

In this post for Dark Reading, I describe how polymorphism has become popular with attackers and defenders alike, the different approaches that vendors have taken, and some suggestions for keeping it out of your infrastructure.

What becomes an online museum most?

Those of you of a certain age might remember a print ad campaign for the Blackglama fur company that ran for many years, beginning in the 1960s with this image of Lauren Bacall wearing one of their mink coats.

I am riffing on this theme after visiting the National Cryptologic Museum outside the NSA offices in suburban Maryland this week. I remembered those ads because of my overall experience with the museum, and the relationship between its physical plant and the online resources and other publications that the historical arm of the NSA has produced.

As long-time readers may recall, last summer I visited Bletchley Park in the UK. It was a great day spent at the complex and I learned a lot. Sadly, the NSA’s museum was a disappointment. And it made me realize that what makes a museum great when you first walk into the actual building is also part of what makes for a great online museum experience. Unfortunately, the NSA museum has neither.

Many of the world’s greatest museums have played catch-up when it comes to their websites. This is more than getting their catalogs digitized, then redone with higher resolution or newer imaging technologies. It is more than organizing their collections for the visitors, academics and other specialists who want to search them for their own research or just personal interest. It is also more than having something visually attractive that leverages the latest curatorial trends.

These great museums have also had to embrace technology in their actual buildings, something that I first wrote about for the NY Times when I visited the Abe Lincoln museum in Springfield, Ill. back in 2008. On that visit, I got to see first-hand a variety of techniques that are normally used in theatrical productions or rock concerts, such as spotlights, one-way mirrors and sophisticated sound systems, deployed to tell the story of Lincoln’s life and times. Building on that piece, I wrote for HPE’s blog about how the best code developers are learning to hone their craft and improve their user experience by studying these innovative museum designers: for example, augmenting the visuals with other sensory experiences, understanding the consequences of context switching when it comes to telling your story, and so forth.

That is why the NSA museum stands out, but not in a good way. Its subject is near and dear to my heart: cryptology and its origins and use in the modern era. Check. It is located near the NSA, an interesting place in its own right. Check. It has plenty of classic stories about key developments, going back to the Revolutionary War and how codes and encryption played a role in the birth of our country. Check. It has several Enigma units on display, showing the evolution of the machine, that you can actually touch. Check. It is dull as dishwater and has exhibits that look like they were created back when the Apollo program was in its heyday. Big fail.

The best part of the museum wasn’t any of the exhibits but a tour that I happened upon, led by a docent. It turns out he was a former Russian linguist who worked for the NSA for many years. His stories were great, and he answered all my questions with interesting personal insights (and correctly, I might add). I only wish he had a better physical plant to show his visitors.

For example, one exhibit is about how the Soviets bugged our buildings in Moscow. It begins with an object on display in the museum: what looks like a nicely crafted wooden replica of the US government seal. It was given to then U.S. Ambassador Averell Harriman back in 1945 as a gift from Soviet children. It hung in the ambassador’s residence for seven years, until a bug was found inside the carving. While what is shown is a replica, you can open a special hinge installed by the museum to see where the bug was located.

This story was a nice precursor to a major operation that took place in the 1970s called the Gunman Project. At that time, we found out the Soviets had stepped up their bugging program and put technology into 16 different IBM typewriters in our Moscow embassy offices to record the documents that were being prepared. After seeing the Great Seal replica at the museum (and engaging with opening and closing it), I took home a pamphlet about Gunman that I read avidly on the flight home. When I tried to find an online copy of this document, I did find the text here. The document was nicely produced and I learned a lot from it. Now contrast that information with this link to another Gunman story, this one produced by two private Dutch crypto enthusiasts. It is actually a much better explanation; even with the pictures included in the original NSA pamphlet, this latter piece is 1000% better and more engaging.

So if you are interested in the history of crypto, my suggestion is to forgo the actual visit. The NSA is working on building a new museum, but that could take years. In the meantime, read some of the supporting materials on their website or, better yet, check out other entries at the online Crypto Museum. And if you are going to design a new museum, think about how the online and physical presences have to work together to build the best visitor experience.

HP Enterprise.nxt: Ways to expose your business to ransomware

No computing professional wants to encounter a ransomware attack. But these six poor IT decisions can make that scenario more likely to occur. Ransomware attacks are not the result of an isolated security incident but the consequence of a series of IT missteps. Moreover, they often expose poor decision-making that indicates deeper management issues that must be fixed. In this article for HPE’s Enterprise.nxt website, I discuss how these missteps can be corrected before you are the subject of the next attack.

FIR B2B podcast episode #123: The differences between B2B and B2C marketing

This recent article in Forbes caught our attention because it neatly sums up some of the biggest differences between B2B and B2C marketing. Unlike many B2C decisions – which are based on emotion, preference or impulse – B2B decisions are practical, thoughtful and undergirded by data, or at least they should be. Among the implications of that:

  • Know the who, the why and the multiple decision makers in the chain;
  • Tell how you will make the business better;
  • Sell solutions, not features; and
  • Use personas and create a path to the purchase.

Paul co-wrote a book a while back called Social Marketing to the Business Customer that touched on some of these points, and you might want to pick up a copy, as its lessons are still relevant.

One suggestion is to build an emotional attachment to the product, which isn’t always easy to do in B2B scenarios. However, buyers have a lot on the line, and that can give you an emotional connection.

ChiefMarketer.com tells how Caterpillar did that. Just because you sell big tractors doesn’t mean you can’t create a story that resonates with the community. People who drive tractors care about their work, so Caterpillar focused on the pride they take in what they do. Decisions aren’t just about features.

This story reminded us of this brilliant video Volvo produced several years ago to promote its tractor trailers. The appearance of Van Damme is unexpected, powerful and memorable, as evidenced by its 93 million views and the fact that we both recalled it eight years later.

Finally, one item that has nothing to do with trucks is the fallout a year after the implementation of the EU’s General Data Protection Regulation. A piece in eConsultancy talks about how the rules have benefited B2B marketers by helping them weed out bad practices, improve lead quality and better focus their companies’ marketing efforts.

Listen to our podcast here.

Picking the right tech isn’t always about the specs

I have been working in tech for close to 40 years, yet it took me until this week to realize an important truth: we have too many choices and too much tech in our lives, both personal and professional. So much of the challenge with tech is picking the right product, and then realizing afterwards the key limitations of our choice and its consequences. I guess I shouldn’t complain; after all, I have made a great career out of figuring this stuff out.

But it really is a duh! moment. I don’t know why it has taken me so long to come to this brilliant deduction. I am not complaining; it is nice to help others figure out how to make these choices. Almost every day I am either writing, researching or discussing tech choices for others. But like the shoemaker’s barefoot children, my own tech choices are often fraught with plenty of indecision or, worse yet, no decision. It is almost laughable.

I was on a phone call yesterday with a friend of mine who is as technical as they come: he helped create some of the Net’s early protocols. We were both commiserating about how quirky Webex is when trying to support a several-hundred-person conference call. Yes, Webex is fine for doing the actual video conference itself. The video and audio quality are both generally solid. But it is all the “soft” support that rests on the foibles of how we humans apply the tech: doing the run-up practice session for the conference, notifying everyone about the call, distributing the slide deck under discussion and so forth. These things require real work: explaining to the call’s organizers what to do and creating standards to make the call go smoothly. It isn’t the tech per se – it is how we apply it.

Let me draw a line from that discussion to an earlier moment, when I worked in the bowels of the end-user IT support department of the Gigantic Insurance company in the early 1980s. We were buying PCs by the truckload, quite literally, to place on the desks of the several thousand IT staffers who until then had a telephone and, if they were lucky, a mainframe terminal. Of course, we were buying IBM PCs – there was no actual discussion, because back then that was the only choice for corporate America. Then Compaq came along and built something that IBM didn’t yet have: a “portable” PC. The reason for the quotes is that this thing was enormous. It weighed about 30 pounds and was an inch too big to fit in the overhead bins of most planes.

As soon as Compaq announced this unit (which sold for more than $5000 back then), our executives were conflicted. Our IBM sales reps, who had invested many man-years in golf games with them, were trying to convince them to wait a year for IBM’s own portable PC to come to market. But once we got our hands on an IBM prototype, we could see that the Compaq was the superior machine. First, it was already available. It was also lighter and smaller, ran the same apps, and had a compatible version of DOS. We gave Compaq our recommendation and started buying them in droves. That was the beginning of what came to be called the clone wars, unleashing a new era of technology choices on the corporate world. By the time IBM finally came out with its portable, Compaq had already put hard drives in its models, so it stayed ahead of IBM on features.

My point in recounting this quaint history lesson is that something hasn’t changed in nearly 40 years: tech reviews tend to focus on the wrong things, which is why we get frustrated when we finally decide on a piece of tech and then have to live with the consequences.

Some of our choices seem easy: who wants to pay a thousand bucks for a stand to sit your monitor on? Of course, some things haven’t changed: the new Macs also sell for more than $5000. That is progress, I guess.

My moral for today: look beyond the specs and understand how you are eventually going to use the intended tech. You may choose differently.