FIR B2B podcast episode #123: The differences between B2B and B2C marketing

This recent article in Forbes caught our attention because it neatly sums up some of the biggest differences between B2B and B2C marketing. Unlike many B2C decisions – which are based on emotion, preference or impulse – B2B decisions are practical, thoughtful and undergirded by data, or at least they should be. Among the implications of that:

  • Know the who, the why and the multiple decision makers in the chain;
  • Tell how you will make the business better;
  • Sell solutions, not features; and
  • Use personas and create a path to the purchase

Paul co-wrote a book a while back called Social Marketing to the Business Customer that touched on some of these points, and you might want to pick up a copy as they are still relevant.

One suggestion is to build an emotional attachment to the product, which isn’t always easy to do in B2B scenarios. However, buyers have a lot on the line, and that can give you an emotional connection.

ChiefMarketer.com tells how Caterpillar did that. Just because you sell big tractors doesn’t mean you can’t create a story that resonates with the community. People who drive tractors care about their work, so Caterpillar focused on the pride they take in what they do. Decisions aren’t just about features.

This story reminded us of this brilliant video Volvo produced several years ago to promote its tractor trailers. The appearance of Van Damme is unexpected, powerful and memorable, as evidenced by its 93 million views and the fact that we both recalled it eight years later.

Finally, one item that has nothing to do with trucks is the spillback a year after the implementation of the EU’s General Data Protection Regulation. A piece in eConsultancy talks about how the rules have benefited B2B marketers by helping them weed out bad practices, improve lead quality and better focus their companies’ marketing efforts.

Listen to our podcast here:

Picking the right tech isn’t always about the specs

I have been working in tech for close to 40 years, yet it took me until this week to realize an important truth: we have too many choices and too much tech in our lives, both personal and professional. So much of the challenge of tech is picking the right product, then realizing afterwards its key limitations and their consequences. I guess I shouldn’t complain; after all, I have made a great career out of figuring this stuff out.

But it really is a duh! moment. I don’t know why it has taken me so long to come to this brilliant deduction. I am not complaining; it is nice to help others figure out how to make these choices. Almost every day I am writing, researching or discussing tech choices for others. But like the shoemaker’s barefoot children, my own tech choices are often fraught with indecision or, worse yet, no decision at all. It is almost laughable.

I was involved in a phone call yesterday with a friend of mine who is as technical as they come: he helped create some of the Net’s early protocols. We were both commiserating about how quirky Webex is when trying to support a conference call with several hundred participants. Yes, Webex is fine for doing the actual video conference itself. The video and audio quality are both generally solid. But it is all the “soft” support that rests on the foibles of how we humans apply the tech: doing the run-up practice session for the conference, notifying everyone about the call, distributing the slide deck under discussion and so forth. These things require real work: explaining to the call’s organizers what to do and creating standards to make the call go smoothly. It isn’t the tech per se – it is how we apply it.

Let me draw a line from that discussion to an early moment when I worked in the bowels of the end-user IT support department of the Gigantic Insurance company in the early 1980s. We were buying PCs by the truckload, quite literally, to place on the desks of the several thousand IT staffers who until then had a telephone and, if they were lucky, a mainframe terminal. Of course, we were buying IBM PCs – there was no actual discussion, because back then that was the only choice for corporate America. Then Compaq came along and built something that IBM didn’t yet have: a “portable” PC. The quotes are there because this thing was enormous: it weighed about 30 pounds and was an inch too big to fit in the overhead bins of most planes.

As soon as Compaq announced this unit (which sold for more than $5000 back then), our executives were conflicted. Our IBM sales reps, who had invested many man-years in golf games with them, were trying to convince them to wait a year until IBM’s own portable PC could come to market. But once we got our hands on an IBM prototype, we could see that the Compaq was the superior machine: first, it was already available. It was also lighter and smaller, ran the same apps and came with a compatible version of DOS. We gave Compaq our recommendation and started buying them in droves. That was the beginning of what came to be called the clone wars, unleashing a new era of technology choices on the corporate world. By the time IBM finally came out with its portable, Compaq had already put hard drives in its model, so it stayed ahead of IBM on features.

My point in recounting this quaint history lesson is that something hasn’t changed in nearly 40 years: tech reviews tend to focus on the wrong things, which is why we get frustrated when we finally decide on a piece of tech and then have to live with the consequences.

Some of our choices seem easy: who wants to pay a thousand bucks for a stand to sit your monitor on? Of course, some things haven’t changed: the new Macs also sell for more than $5000. That is progress, I guess.

My moral for today: look beyond the specs and understand how you are eventually going to use the intended tech. You may choose differently.

If we do our job, nothing happens

There is a line in a recent keynote speech by Mikko Hypponen, the CRO of F-Secure, that goes something like this: “If we do our job in cyber security, then nothing happens.” It is so true, and it made me think of the times when corporate executives challenge their investments in cyber security, wanting to see something tangible. Mikko makes his point by asking them to look around the conference room where these conversations are taking place, and to consider whether the room has been cleaned to their satisfaction. If so, perhaps they should fire their cleaning staff, because they are no longer needed.

Now the difference between your security engineering staff and your janitors is obvious. You can’t see all the virtual dirt that is building up across your network, the cruft of old software that needs updating and polishing, and the garbage that your users download onto their PCs that will leave them susceptible to attack. And that is part of the problem with cyber security: most things are invisible to mere mortals, and even some specialists can’t always agree on the best cyber hygiene techniques. Most of us have an innate sense that mopping the floor before dusting the shelves above is the wrong way to go about cleaning the room. That is because we all understand (at least on a basic level) how gravity operates. But when it comes to cyber, should we be changing our passwords regularly (some say yes, some say nay)? Or using complex strings of a certain length (some say 10 characters is fine, others say longer ones are needed)?
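To see why the length debate matters, here is a back-of-the-envelope sketch. This is my own illustration, not from Mikko’s talk, and it assumes passwords are chosen at random, which the usual entropy arithmetic requires:

```python
import math

def entropy_bits(length: int, charset_size: int) -> float:
    """Approximate entropy of a randomly chosen password:
    length * log2(size of the character set it draws from)."""
    return length * math.log2(charset_size)

# A 10-character password drawn from the full printable set
# (letters, digits and symbols: roughly 94 characters).
complex_10 = entropy_bits(10, 94)

# A 16-character password drawn from lowercase letters only (26).
simple_16 = entropy_bits(16, 26)

print(round(complex_10, 1))  # about 65.5 bits
print(round(simple_16, 1))   # about 75.2 bits
```

By this naive measure, the longer all-lowercase password beats the shorter “complex” one, which is one reason many specialists now argue for length over forced special characters.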

Mikko ends his talk by saying that we must assume that we are all targets of someone, whether a hacker who is still in high school or a government spy eager to get inside our company’s network. He says, “The times of building walls are over, because eventually someone will get in our enterprise. Breach detection is key, and we all have to get better at it.”

I agree with him completely. We must get better at seeing the virtual dirt on our networks. Building a better or bigger wall won’t stop everyone and will just foster a false sense of cyber security. And just because nothing happens, this doesn’t mean that cyber security folks aren’t hard at work. They are the cleaners that we don’t ever see, unless one day they leave someone’s mess behind.


FIR B2B Podcast Episode #122: Why techies make for great speakers

For technology companies, the conventional wisdom is wrong when it comes to pitching a conference or webinar session. Instead of having your CMO or other C-suite executive tell your story, trust the technical people in your shop. Your audiences will thank you for it.

Here are some of the reasons:

  • Audiences want black-and-white issues. CMOs usually see the world in nuance and infinite shades of gray. Techies value certainty. Think Sheldon Cooper’s character.
  • Facts are an endangered species these days. So who better to deliver facts than a techie?
  • Audiences want to hear stories. First-person experience from people on the front lines can deliver authenticity and credibility that the audience relates to.
  • Techies steer clear of self-promotion, which is the fastest turnoff for an audience.
  • Techies can be more effective at reaching potential customers precisely because they don’t try to promote or sell.
  • Techies can be trained to be good and sometimes great speakers. We have some tips for how to do it.

I wrote more about this for Sam Whitmore’s Media Survey. It is normally gated, but today you can read the post here.

CSOonline: What is Magecart?

Magecart is a consortium of malicious hacker groups who target online shopping cart systems, usually the Magento system, to steal customer payment card information. This is known as a supply chain attack. The idea behind these attacks is to compromise a third-party piece of software from a VAR or systems integrator or infect an industrial process unbeknownst to IT. I explain what this malware does, link to some of the more notable hacks of recent history, and also provide a few suggestions on how you can better protect your networks against it.
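One concrete defense worth mentioning here (my example, not necessarily one of the suggestions in the article) is Subresource Integrity: pinning the exact contents of a third-party script so the browser refuses any copy that has been tampered with, which is precisely what a Magecart skimmer does. A minimal sketch of generating the integrity value:

```python
import base64
import hashlib

def sri_digest(script_bytes: bytes) -> str:
    """Compute a Subresource Integrity (SRI) value for a script.

    Serving a third-party script with this integrity attribute makes
    the browser reject any version whose bytes differ, blocking the
    kind of silent modification a card skimmer relies on.
    """
    digest = hashlib.sha384(script_bytes).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")

# Hypothetical checkout script; in practice you would read the real file.
script = b"console.log('checkout');"
print(f'<script src="checkout.js" integrity="{sri_digest(script)}" '
      f'crossorigin="anonymous"></script>')
```

This only helps for scripts you can pin, of course; a compromise of the first-party code itself needs other controls, such as file integrity monitoring.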

You can read my post for CSOonline here.

RSA blog: Risk analysis vs. assessment: understanding the key to digital transformation

When it comes to looking at risks, many corporate IT departments tend to get their language confused. This is especially true in understanding the difference between risk analysis, the raw data you collect about your risks, and risk assessment, the conclusions you draw and the resources you allocate to do something about those risks. Let’s talk about why this confusion exists and how we can avoid it as we move along our various paths of digital transformation. Part of the confusion has more to do with the words we choose than with any actual activity. When an IT person says some practice is risky, oftentimes what our users hear is, “No, you can’t do that.” That gets to the heart of the historical IT/user conflict.

In my latest blog post for RSA, I discuss how this is about more than choosing the proper words; it goes toward a deeper understanding of how we evaluate digital risks.

Taking control over your own health care: the rise of the Loopers

I have been involved in tech for most of my professional career, but only recently did I realize its role in literally saving my life. Maybe that is too dramatic. Let’s just say that nothing dire has happened to me, I am healthy and doing fine. This realization has come from taking a longer view of my recent past and the role that tech has played in keeping me healthy.

Let me tell you how this has come about. Not too long ago, I read this article in The Atlantic about people with type 1 diabetes who have taken to hacking the firmware and software running their glucose pumps, such as the one pictured here. For those of you who don’t know the background, T1D folks are typically dealing with their illness from an early age, hence they are usually called “juvenile diabetics.” The condition arises when the pancreas fails to produce the insulin necessary to metabolize food.

T1Ds typically take insulin in one of two broad ways: either by injection or by using a pump that they wear 24/7. Monitoring their glucose levels is done either with manual chemical tests or by the pump doing the tests periodically.

Every T1D relies on a great deal of technology to manage their disease and their treatment. They have to be extremely careful with what they eat, when they eat, and how they exercise. A cup of coffee can ruin their day, and something else can literally put them in mortal danger.

That is what got me thinking of my own situation. As I said, my case is far less dire, but I never really looked at my overall health care. To take three instances: I take daily blood pressure meds, use a sleep apnea machine every night, and wear a hearing aid. All of these things are to manage various issues with my health. All of them are tech-related, and I am thankful that modern medicine has figured them out to mitigate my problems. I would not be as healthy as I am today without all of them. Sometimes I get sad about the various regimens, particularly as I have to lug the apnea machine aboard yet another international flight or remember to reorder my meds. Yet I know that, compared to T1D folks, my reliance on tech is far less than theirs.

I know a fair bit about T1D through an interesting story. It is actually how I met my wife Shirley many years ago: we were both volunteers at a JDRF bike fundraising event in Death Valley, even though neither of us has a direct family connection to the disease. I was supposed to ride the event and had raised a bunch of money (thanks to many of your kind donations, BTW) but broke my shoulder during a training ride. Fortunately, the JDRF folks running the event insisted that I should still come, and the rest, as they say, is history.

One of the T1D folks that I know is a former student of mine, who is part of the community of “loopers” that are hacking their insulin pumps. Over the past several months he has collected the necessary gear to get this to work. Let’s call him Adam (not his real name).

Why is looping better than just using the normal pump controls? Mainly because you have better feedback and more precise control over insulin doses. As Adam explains: “If you literally sat and watched your blood sugar 24/7 and were constantly making adjustments, sure you could get great control over your insulin levels. But it’s far easier to let the software do it for you, because it checks your levels every five minutes. In reality, I’m feeding my pump’s computer small pieces of data that is very commonly used in the T1D community for diabetes management. So it is no big deal.”

Adam also told me it took him about four days to get used to the setup and understand what the computer’s algorithms were doing for his insulin management. A great deal of information is available online, in various forums and in the documentation of open source projects such as Xdrip, Spike, OpenAPS, Nightscout, Loop, Tidepool and Diasend. It is pretty remarkable what these folks are doing. As Adam says, “You need to be involved in your own care — but some of the stress in decision making is gone. Having a future prediction of your glucose level makes it easier to plan for the longer term and feel more confident.”

But looping has another big benefit: it monitors you even when you are asleep. It also gives you a new perspective on your care, because you have to understand what the computer algorithms are doing in dispensing insulin. “The most powerful way to use an algorithm is when you combine the human and computer together — the algorithm is not learning. It’s just reusing well-established rules,” says Adam. “It’s pretty dumb without me and I’m way better off with it when we work together. That’s why I say that my setup is a thousand times better than what I had before. I have an astonishingly better tool in this fight.”
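To make the idea of a rule-based feedback loop concrete, here is a deliberately toy sketch. The formula, the numbers and the function name are all hypothetical illustrations of mine; this is not the actual OpenAPS or Loop algorithm, and certainly not medical guidance:

```python
def basal_adjustment(glucose_mgdl: float, target_mgdl: float = 110.0,
                     sensitivity: float = 50.0) -> float:
    """Toy proportional rule: how much to nudge the pump's basal rate
    (in units/hour) based on how far glucose is from target.

    Real looping systems use many more inputs: insulin on board,
    carbohydrate intake, and predicted glucose trends.
    """
    return (glucose_mgdl - target_mgdl) / sensitivity

# The loop wakes every five minutes, reads the continuous glucose
# monitor, and adjusts: below target it backs insulin off, above
# target it adds some.
for reading in (90.0, 110.0, 160.0):
    print(reading, round(basal_adjustment(reading), 2))
```

The point Adam makes survives even in this toy: the rules themselves are simple, and the value comes from applying them relentlessly every five minutes, day and night.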

There are a few downsides: you need to learn how to become your own systems integrator, because there are different pieces you have to knit together. Newer pump firmware can also disable looping; this was done for patients’ protection after it was found that some pumps were hackable (at close range, but still). So if you upgrade your pump, your looping setup could stop working.

You also need to have a paid Apple Developer account to put everything together, because the iPhone app used to connect to the pump requires developer-level access. “It is more than worth the $100 a year,” Adam told me. There are also Android solutions, but he has been an iPhone user for so long it didn’t make sense for him to switch.

Finally, looping is not officially sanctioned: it has not yet been approved by the FDA, although many other countries have recognized this pattern of treatment and the FDA is considering approval.

This is the way of the modern tech era, and how savvy patients have begun to take back control over their care. It is great that we can point to this example as a way that tech can literally save lives, and that patients today have such powerful tools at their disposal too. And the looping story hopefully should inspire you to take control over your own medical care.

FIR B2B podcast episode #121: Standouts from The Conference Board’s 2019 Excellence in Marketing & Communications Awards

Paul and I first met Jen McClure (left) more than a decade ago, shortly after she founded the Society for New Communications Research (SNCR) in 2005. Jen, who has been a frequent guest on Shel Holtz’s FIR main podcast, was one of the first people to see the potential of social media in business communications. SNCR merged with The Conference Board three years ago, and fortunately the awards program Jen created at SNCR has continued as The Conference Board’s Excellence in Marketing & Communications Awards. This week we discuss three outstanding winners of the awards, which will be presented in New York City on June 26th, in conjunction with The Conference Board’s 24th Annual Corporate Communications Conference. Two of our picks are B2B.

  • SAP used Dynamic Signal to track and encourage employee engagement and brand advocacy. Staffers now share more than 15,000 social posts monthly and drive an impressive amount of traffic to the main SAP website.
  • Southern California Edison adopted Sprout Social’s chatbot technology to supplement the two staffers who respond to customer inquiries. The utility has not only increased the volume of questions it can field without adding staff but has maintained high satisfaction ratings and is now able to respond more quickly to major power outages. The project succeeded on all metrics.
  • Siemens used augmented and virtual reality technology to greatly expand the variety of equipment it can show at trade shows. These large machines are costly to ship and take up a lot of floor space. With AR/VR, Siemens was able to deliver a virtual experience that was in many ways better than a live demo.

Congratulations to these and all the other finalists and winners in this year’s awards program. The quality of entries keeps getting better every year. You can listen to our 16 min. podcast here:

HPE Enterprise.nxt: Six security megatrends from the Verizon DBIR

Verizon’s 2019 Data Breach Investigations Report (DBIR) is probably this year’s second-most anticipated report after the one from Robert Mueller. Now in its 12th edition, it contains details on more than 2,000 confirmed data breaches in 2018, taken from more than 70 different reporting sources and analyzing more than 40,000 separate security incidents.

What sets the DBIR apart is that it combines breach data from multiple sources using the common industry collection called VERIS – a third-party repository where threat data is uploaded and anonymized. This gives it a solid, authoritative voice, which is one reason it’s so frequently quoted.
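The pooling idea behind VERIS can be pictured as mapping every reporter’s ad-hoc incident records onto a handful of shared fields. The sketch below is mine, with simplified stand-in field names rather than the real schema:

```python
def normalize_incident(raw: dict, source: str) -> dict:
    """Map one reporter's ad-hoc incident record onto shared,
    VERIS-style fields so records from many sources can be pooled.
    Field names here are simplified stand-ins for the real schema."""
    return {
        "source": source,
        "actor": raw.get("who", "unknown"),    # e.g. external, internal, partner
        "action": raw.get("how", "unknown"),   # e.g. hacking, malware, error
        "asset": raw.get("what", "unknown"),   # e.g. server, user device
        "confirmed_breach": bool(raw.get("data_disclosed", False)),
    }

records = [
    normalize_incident({"who": "external", "how": "phishing",
                        "what": "mail server", "data_disclosed": True},
                       "CERT-A"),
    normalize_incident({"how": "misconfiguration", "what": "cloud storage"},
                       "vendor-B"),
]
# Once everything shares one vocabulary, questions like "how many of
# these incidents were confirmed breaches?" become a simple filter.
breaches = [r for r in records if r["confirmed_breach"]]
print(len(records), len(breaches))
```

That shared vocabulary, more than any single data source, is what lets the DBIR make defensible year-over-year comparisons.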

I describe six megatrends from the report, including:

  1. The C-suite has become the weakest link in enterprise security.
  2. The rise of nation-state actors.
  3. Careless cloud users continue to thwart even the best laid security plans.
  4. Whether insider or outsider threats are more important.
  5. The rate of ransomware attacks isn’t clear. 
  6. Hackers are still living inside our networks for a lot longer than we’d like.

I’ve broken these trends into two distinct groups: the first three are where there is general agreement between the DBIR and other sources, and the last three are where this agreement isn’t as apparent. Read the report to determine what applies to your specific situation. In the meantime, here is my analysis for HPE’s Enterprise.nxt blog.

RSA blog: Managing the security transition to the truly distributed enterprise

As your workforce spreads across the planet, you must now support a completely new collection of networks, apps and endpoints. We all know this increased attack surface is more difficult to manage. Part of the challenge is having to create new standards and policies to protect your enterprise and reduce risk as you make the transformation to become a more distributed company. In this blog post for RSA, I examine some of the things to look out for. My thesis is that you’ll want to match the risks with the approaches, so that you focus on the optimal security improvements to make the transition to a distributed staffing model.