Researching the Twitter data feed

A new book by UCLA professor Zachary Steinert-Threlkeld called Twitter as Data is available online free for a limited time, and I recommend you download a copy now. While written mainly for academic social scientists and other researchers, it has great utility in other situations.

Zachary has been analyzing Twitter data streams for several years, and basically taught himself enough Python and R programming to be dangerous. The book assumes you are a novice programmer, and provides the code samples you need to get started with your own analysis.

Why Twitter? Mainly because it is so transparent. Anyone can figure out who follows whom, and easily drill down to see who these followers are and how often they actually use Twitter themselves. Most Twitter users by default have open accounts, and want people to engage them in public. Contrast that with Facebook, where the situation is the exact opposite and the data is thus much harder to access.

To make matters easier, Twitter data comes packaged in three different APIs: streaming, search, and REST. The streaming API provides data in near-real-time and is the best way to get data on what is currently trending in different parts of the world. The downside is that you could be picking a particularly dull moment in time when nothing much is happening. The streaming API is limited to just one percent of all tweets: you can filter and focus on a particular collection, such as all tweets from one country, but you still only get one percent. That works out to about five million tweets daily.
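To make that concrete, here is a minimal sketch of a streaming collector in Python, using the third-party tweepy library (my own illustration with placeholder credentials, not code taken from the book):

    # A minimal sketch of a streaming collector, assuming tweepy 3.x
    # and your own Twitter API credentials (placeholders below).
    import tweepy

    CONSUMER_KEY = "your-consumer-key"
    CONSUMER_SECRET = "your-consumer-secret"
    ACCESS_TOKEN = "your-access-token"
    ACCESS_SECRET = "your-access-secret"

    class CollectListener(tweepy.StreamListener):
        """Handle each tweet as it arrives from the one-percent feed."""
        def on_status(self, status):
            print(status.user.screen_name, status.text)

        def on_error(self, status_code):
            # Returning False disconnects, e.g. when rate-limited (HTTP 420).
            return status_code != 420

    auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
    auth.set_access_token(ACCESS_TOKEN, ACCESS_SECRET)
    stream = tweepy.Stream(auth=auth, listener=CollectListener())

    # Focus the feed on a region: a rough bounding box (SW lon/lat, then
    # NE lon/lat) covering the continental United States.
    stream.filter(locations=[-125.0, 24.0, -66.0, 50.0])

Even with a geographic filter like this, you are still capped at the one-percent sample described above.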

Many researchers run multiple queries so they can collect more data, and several have published interesting data sets that are available to the public. And there is this map that shows patterns of communication across the globe over an entire day.

The REST API has limits on how often you can collect and how far back in time you can go, but isn’t limited to the real-time feed.
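As a contrast to the streaming sketch above, here is a similarly hedged sketch of historical collection through the search interface, again using tweepy as my own illustration: asking the library to wait on rate limits handles the collection-frequency caps, while the standard search index itself only reaches back about a week.

    # A minimal sketch of historical collection via the search API,
    # assuming tweepy 3.x and the same credentials as the streaming example.
    import tweepy

    auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
    auth.set_access_token(ACCESS_TOKEN, ACCESS_SECRET)

    # wait_on_rate_limit makes tweepy sleep instead of erroring out
    # whenever the REST rate limits are hit.
    api = tweepy.API(auth, wait_on_rate_limit=True)

    # Page through up to 1,000 matching tweets.
    for status in tweepy.Cursor(api.search, q="#basketball", lang="en").items(1000):
        print(status.created_at, status.user.screen_name, status.text)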

Interesting things happen when you go deep into the data. When Zachary first started his Twitter analysis, he found, for example, a large body of basketball-related tweets from Cameroon, and upon further analysis linked them to Joel Embiid, a popular basketball player who came from that country and has a lot of hometown fans across the ocean. He also found that lots of tweets from the Philippines written in Tagalog were being miscataloged as an unknown language. When countries censor Twitter, that shows up in the real-time feed too. Now that he is an experienced Twitter researcher, he focuses his study on the smaller Twitterati: studying celebrities or those with massive Twitter audiences isn't really very useful. The smaller collections are more focused, and it is easier to spot trends in them.

So take a look at Zachary's book and see what insights you can gain into your particular markets and customers. It won't cost you much money and could pay off in terms of valuable information.



The best way to get more social media influence is to grow your own

The NY Times published the results of a fascinating research project a few days ago. Entitled The Follower Factory, it describes a firm that gets paid to add followers to your Twitter, Facebook and other social media accounts. Shocking, right? What is interesting and new about this report is how far a scam artist will go to replicate real users' profiles, such as their faces, background images, user names (with homographic substitutions to make them harder to distinguish from the original account owner), and biographic data to make the purchased followers seem more legit. Many celebrities – or would-be ones anyway – have bought massive follower lists in an attempt to boost their own brands. It doesn't work, and most of these efforts ultimately fail.

The NYT piece goes into detail, showing how different automated bots can be used to create seemingly human Twitter accounts. While most of them aren’t worth the electrons that are consumed, there are some useful Twitter bots such as those that can detect emergency situations or track other newsworthy events. The piece also describes a dissatisfied employee from the original scammers who left to start his own venture, copying his former employer’s tactics. On the Internet, no one knows how low you can go.

Real social media influencers get that way through an organic growth in their popularity, because they have something to say and people respond to that over time. There is no quick fix for providing value. If you buy a bunch of followers, the “real” followers will go elsewhere. If you try to game the system, ultimately the folks who are just creating solid content will show these con artists up.

Sadly, this is nothing new. My podcasting partner Paul Gillin wrote more than six years ago about the flaws in Klout scores, which were a darling of the industry back then. The link between people who spent a lot of time massaging their Klout data and higher scores troubled him then, and it still does. There are more recent metrics that try to measure social media influence, but they are just as flawed. Let's forget the idea that we can distill influence into a single metric, and instead look at what the best influencers are trying to do. Interestingly, Gillin wrote a book on the topic more than ten years ago.

Marshall Kirkpatrick also long ago wrote a blog post about ways to add value in online communications. They are still relevant today:

  1. Be first. If you can be the first place someone sees some valuable information, people will notice.
  2. Say it best. If you communicate more clearly, effectively, or insightfully about a topic of general interest, that’s a big value add.
  3. Bring multiple perspectives together.
  4. Have a unique perspective.
  5. Be funny.

Notice what is different about this list? None of these things costs money, but they do take time, and you need talented people who aren't just cutting-and-pasting from across the Interwebs. Too bad that message isn't clear, years after the web and social media first became popular.

FIR B2B Podcast #88: The Decline of Trust and New Twists on End-of-Year Research

This week, Paul Gillin and I examine the results of the 2018 Edelman Trust Barometer, which shows a remarkable drop in overall public trust. Some alarming results from the annual survey:

  • Sixty-three percent of respondents say they cannot tell good journalism from rumor or falsehood, or whether a piece of news was produced by a respected media organization.
  • Chinese citizens trust their government more than U.S. citizens trust theirs. 
  • Technology remains the most trusted industry sector of them all, with a trust rating of 75% (whew).
  • CEOs are becoming more trusted sources and are increasingly being asked to address public policy issues.
  • One-quarter of respondents said they read no media at all because it is too upsetting. 

In the second part of our discussion, we look at some examples of annual trend reports in the security field that I have been studying for this post. For example, Kaspersky's “story of the year” was about the rise of ransomware, and this set of predictions from ServiceNow is short and sweet, which is a nice break from the norm. WatchGuard has been posting a series of predictions to its blog using short videos. All are noteworthy. We suggest B2B marketers review these tactics and see if they can apply to their own media relations efforts.

You can listen to our 17 min. podcast here:

The role of the WWII code girls

I am reading the book Code Girls, the true story of the thousands of women who decoded WWII message traffic for the Army and Navy. It is a fascinating look at how they shaped the crypto and spying industries, and a largely unknown and untold story. I highly recommend it for your own reading.

One of the women featured in this book is Elizebeth Friedman. She was one-half of a power couple that worked on code breaking, and her story is documented in another book, The Woman Who Smashed Codes, which came out last year. Her role is mentioned in Code Girls, but the focus there is more on others who are even less famous. The couple met at the offices of an eccentric philanthropist named George Fabyan, who thought that Bacon wrote Shakespeare's works and wanted some crackerjack researchers to prove it. The couple ended up falling in love with each other and disproving the Bacon theories once and for all.

There has been a lot written about the activities of the British coding group at Bletchley Park (and you can read some links to them here), but not as much about the parallel American efforts to decode the German Enigma and Japanese Purple codes that were used during the war. What is interesting about this book is how it talks about the lives of ordinary women who were plucked from being school teachers, clerks, and recent college graduates into this top-secret life in the nation’s capital and elsewhere to help the war effort.

Why were women chosen for this task? Several reasons. First, most of the men were off fighting the war, so the potential employment pool was diminished. Second, the military found that women made better code breakers: they had better concentration and more of an eye for detail. Many of them were math and science majors and liked the kind of work that was involved – this was an era before we started telling girls that they weren't good at math! Finally, the country needed thousands of them for this job. In some cases, entire graduating classes were hired on the spot. None of the women had any idea what they were signing up for, and they often left their lives behind with nothing more than a few dollars in their pockets and a one-way train ticket to DC.

The Army and Navy had different recruiting strategies and set up competing organizations, based in different parts of DC. Early on, one group worked on messages that were received on odd-numbered days and one on even days. That wasn’t very productive, and eventually the two sorted out different theaters of war to focus on.

Two myths are busted in this book. The first is that people who were good at solving crossword puzzles made good code breakers. That isn't necessarily accurate, because crosswords are built with escalating clue difficulty, since most people start at the upper left and work their way down the puzzle. Code breaking is very tedious, and you have to deal with far more frustration as you run into big roadblocks in figuring out patterns, since the codes frequently change.

Second is that decoding intercepts could have helped prevent Pearl Harbor. That might have been the case had the US tuned up its efforts, but that wasn't possible during peacetime, given the political climate before we entered the war. Decoding intercepts was one of the reasons we were able to dominate the Pacific theater and sink so many Japanese ships. Often, our military was reading Japanese messages concurrently with their intended recipients, and had to stage a fake aircraft fly-over to hide the real source of its intelligence on the Japanese Navy's movements.

An interesting side note: this past week my colleague Elonka Dunin (who has spent time with the Kryptos sculpture at the CIA headquarters building) published a paper about the Friedman tombstone and how it contains a hidden cipher. Can't see it? Look closer. That is why most of us would be terrible code breakers.


The intersection of art and technology with Thomas Struth

As I grow older, I tend to forget about events in my youth that shaped the person that I am now. I was reminded of this last week after seeing a Thomas Struth photography exhibit at the St. Louis Art Museum. Struth’s pictures are enlarged to mural size and depict the complex industrial environments of the modern age: repairing the Space Shuttle, a blowout preventer at an oil rig (shown here), the insides of physics and chemistry labs, the Disney Soarin’ hang glider simulation ride, and chip fabrication plants. Many of these are places that I have had the opportunity to visit over the years as a technology reporter.

The pictures reminded me of a part-time job that I had as an undergraduate student. My college had obtained a set of geometrical string models that were first constructed back in the 1830s and demonstrated conic sections, such as the intersection of a plane and a cone. Back then, we didn’t have Mathematica or color textbooks to show engineering students how to draw these things. These models were constructed out of strings threaded through moveable brass pieces that were attached to wooden bases, using lead weights to keep the strings taut.

The models were first built by the French mathematician Theodore Olivier, and were used in undergraduate descriptive geometry courses up until 1900. I was one of the students who helped restore them. While the models look very nice now, back when I was a student they were in pretty bad shape: the wooden bases were cracked, the brass pieces were tarnished, and the strings were either tangled or missing. It took some effort to figure out what shapes they were trying to display and how to string them properly. Sometimes parts were missing, and I had the help of the college machine shop and local auto body shops to figure out what to do. The best part of this job was that it came with its own private office, which was a nice perk when I needed to escape dorm life for a few quiet hours. After I graduated, the college put the finished models on display for everyone to see.

The intersection of art and technology has always been a part of me, and it was fun seeing Struth's work. It was great to see the details captured and the points of view expressed in these images, lit and composed to show their colors and construction. The photos also reminded me of the beauty of these advanced machines that we have built.

FIR B2B podcast #87: A LinkedIn Exec’s 2018 Sales and Marketing Predictions

We spoke to Justin Shriber, Vice President of Marketing for LinkedIn Sales and Marketing Solutions, to start off the new year. He put together a series of predictions for the year ahead, and in this discussion he explains them and the role that LinkedIn will play in advancing B2B sales and marketing in 2018.

  • Smart, quantitatively driven marketers will still be in high demand, but the pendulum will start to swing back toward marketers who have a qualitative eye for good stories.
  • Brands will re-evaluate the platforms on which they post their content, favoring those that have gone on record saying that user trust is a priority for them. Brands want platforms that give them control over affiliations and customer IDs, where they show up, and the audiences to which they're exposed.
  • Sales will be the new awareness marketing channel. Sales used to be the direct connection to prospects, but we will see sellers start to build awareness through direct advocacy programs.
  • Marketing will gather more intelligence about the reach employees have on the sales side. There will be formal processes that make it possible for employees to easily decide what shareable content speaks to them, so they can maintain their own individuality while also benefiting the company.
  • LinkedIn Sales Navigator will play an increasing role in this unification of marketing and sales efforts.
  • Sales and marketing alignment will continue to improve. Organizations need to engage both sales and marketing in a concerted way from awareness through conversion, rather than having marketing take the front end and sales the back end.

Listen to our 21 minute podcast here.

HPE Enterprise.nxt: How to protect hidden Windows services from attacks

Hijacking legitimate but obscure Windows services is a tough exploit to detect. Here are two lesser-known Windows services that could be vulnerable to malware attacks. You might think you can tell the difference between benign and malicious Windows services, but some of these services are pretty obscure. Do you know what ASLR and BITS are? Exactly.
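To give a flavor of the triage involved, here is a minimal sketch (my own, not code from the article) that inventories installed Windows services so that obscure or look-alike entries stand out for review; it assumes the third-party psutil library running on Windows:

    # A minimal sketch: inventory Windows services so unfamiliar or
    # look-alike entries stand out for review.
    # Assumes the third-party psutil library, running on Windows.
    import psutil

    for svc in psutil.win_service_iter():
        info = svc.as_dict()
        # The binary path is often the tell: legitimate services rarely
        # run out of user-writable directories.
        print("{name:<40} {status:<12} {binpath}".format(**info))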

You can read my latest article for HPE here.

Behind the scenes at a Red Cross shelter

A friend of mine, Dave Crocker, has been volunteering for Red Cross activities around the California fires and Houston floods over the past several months, and has been working as a volunteer for the organization for more than nine years. I thought it would be a good time to chat with him about his experiences and consider why the media is so often critical of the Red Cross.

Crocker was in Houston for two weeks, starting two weeks after the hurricane hit. He has been a shelter supervisor at both small and large operations, a dispatcher for daily local disasters, and a helper in other situations, both in the field and in Red Cross offices. Given his tenure as a volunteer, he has taken numerous Red Cross training classes, including learning to drive a forklift (although not that well, he ruefully notes).

The work is challenging on several levels. First are the 12-hour shifts, usually 7 to 7. Except they often don't end exactly at 7:00, so your shift lasts 13 or 14 hours or more. If you are on a night shift, that can be even tougher. You get one day off per week, if you are lucky. You sleep wherever you can find a bunk; sometimes that means you don't exactly have five-star accommodations, or even one-star. “I've slept on a shelter's army cots, but in Ventura I paid for my own accommodations and got a hotel room. I don't sleep well on cots. Some of my fellow volunteers have slept in their cars or on the ground.”

He is very proud of his volunteer efforts, although he doesn’t carry any personal hubris in what he does. “First and foremost, it’s about helping our clients,” he told me in a recent phone call and over a series of emails and Facebook posts. “Self-praise almost never shows up in anyone’s behavior. The focus is the work.”

One of the things he learned from the recent series of disasters was to expand his definition of a “client”. Originally, he thought just the people displaced by the floods or fires were his clients, but other volunteers pointed out that the Red Cross ecosystem is much greater, including someone who donates items or funds to a relief effort. “The rest of the community is also our client, because they are also affected by the disaster and are compelled to be connected to it, by coming to the shelter to donate or by asking how can they help.”

One of the challenges is that these spontaneous donations can become overwhelming. In the Ventura County fires, Crocker experienced this first-hand. “We saw an enormous amount of donations of water, snacks, face masks, diapers, clothing, toys, and more. That was all brought to our shelters, and our warehouses quickly got filled. Processing all that requires a lot of staff. Historically, these donations have been turned away by the Red Cross, with a request to just send money. This has regularly produced word-of-mouth criticism of the Red Cross. This year, Red Cross policy changed and the rule is to say yes and then figure out how to make it work.” Crocker said that many tens of thousands of bottles of water were donated, along with goods that had been ordered online, with enough showing up to fill a shipping container.

Running a large disaster response is sometimes compared to the logistics of running a military deployment. “Even the smallest shelter has an enormous amount of detail to it,” Crocker told me. “There is the whole setting-up of beds and linens, and then taking it all down, the ongoing cleaning of various items as clients leave and new ones register; then there is feeding three meals a day plus snacks. It is a massive logistics game and the situation is highly dynamic. Communication is challenging because you have to deal with a lot of noisy information. And equipment and geography can be difficult.”

Fires are unpredictable, especially when the wind changes, and that throws a wrench into your plans for who is affected and where to locate the shelters. The Ventura Fairgrounds shelter he worked at had roughly 250 clients, with a peak of about 500 before he arrived. The quality of the available facilities is also highly variable. At Ventura, the shelter was in a building that is typically used for livestock shows. “We were in better shape in the wine country fires because we had use of a church that had excellent kitchen and shower facilities and had been explicitly designed to be used as a shelter.” That church-based facility has hosted a disaster shelter 11 times in the last few years. In Houston, there were roughly 4,000 volunteers in the relief effort, divided among 25 different shelters.

The timing of the Ventura fires produced an unusual benefit for the shelter’s clients. Because the fires were around the holidays, a lot of corporate parties were canceled and as a result restaurants had surplus food that they repurposed as donations to feed the volunteers working the shelters.

One of the frustrations Crocker cites for himself and his colleagues is the negative press surrounding the response of the Red Cross volunteers to these disasters. “Sometimes the reporting focuses only on the negative, citing only one or another disgruntled person.” While certainly there are issues, for the most part he sees the relief efforts as run as well as they can be, given the complex and dynamic circumstances that any large effort like this will have. “Certainly, there are people who try to scam the system, something that I’ve seen in my limited volunteer efforts. But Red Cross policy is to err in the direction of helping rather than rejecting people who ask for assistance.”

“The work itself, and the privilege to do it, is what I enjoy, and being around people with a similar attitude, and getting the work done.” Crocker mentioned in one Facebook post that “everyone has had a collaborative tone” including Red Cross volunteers, employees and even clients, which could be because many clients have been displaced by multiple fires in past years. Note that more than 90% of Red Cross staffing is done by volunteers.

I highly recommend taking a moment and getting involved in your local Red Cross chapter. Give blood, give money, give your time. You will be working with a great group of people and for something very worthwhile.

My love affair with the phone central office

I have a thing about the telephone central office (CO). I love spotting them in the wild: they give me some sense of the vast connectedness they represent, the legal wrangling that took place over their real estate, and their place in the history of our telecommunications. That is a lot to pack into a series of structures, which is why I am attracted to them.

Many of you may not be familiar with the lowly CO, so I should probably back up and set the scene. Back in the day when all of us had landline phones, indeed, before we even called them such things, the phone company had to wire these phone lines to one central place in each community. A pair of low-voltage wires ran from your home to this central office, and were connected to a vast wire switching center called the main distribution frame. The central office actually supplied the dial tone that you heard when you picked up your phone, which assured you that your phone line was operating properly.

Interestingly, one of the inventors of the central office was a Hungarian named Tivadar Puskas, and I actually got to visit the fruits of his labors when I was in Budapest several years ago. The building that housed his switchboard now holds offices occupied by the firm Prezi. Notice it is still quite a beautiful structure, with gothic touches and high ceilings.

I was reminded of visiting the Prezi offices when I was on a trip last week to Columbus, Indiana. I have been wanting to go there for several years, since first learning that the small city contains some of the best examples of modern architecture in one place. What does this have to do with phone COs? Well, they have a beautiful CO building that was designed in 1978, and it once looked like this.

Columbus' CO was actually an expansion of an earlier and more modest building from the 1930s. The phone company needed more room as the town grew and services were added, and so we have this space-frame, glass curtain-wall structure that was built around the original building. I would urge those of you interested in seeing other great modern structures to schedule your own trip to the town, because there is a lot to see.

But let's get back to the phone CO. These buildings were at the center of activity back in the 1990s, when DSL technology (the broadband technology that the phone companies still use to deliver Internet services) was first coming into popularity. At the time there were a number of independent startups, such as Covad and Rhythms, that wanted to provide DSL to private customers. The phone companies tried to block them, claiming that there wasn't enough room in the COs to add their equipment. Back in the day, there weren't many higher-speed data service offerings, so DSL was a very big improvement. This battle eventually was won, and while these startups have come and gone, we have U-verse service offered by AT&T as a result. (I wrote about these DSL issues in an interesting historical document from that era.)

In 2001, I was teaching TCP/IP networking to a class of high school boys and wanted to take them on a series of field trips to noteworthy places and businesses near the school. Now, field trips are normal for younger kids, but in high school that means taking your kids out of other regularly scheduled classes. I thought this would be interesting to my students, and so one of the first trips we took was a short one, down the street to the local CO. This was a very nondescript structure that many of us passed by frequently without giving it any thought (see the photo here).

To give you additional context, this was about two months after the 9/11 attacks. Somehow I was able to convince the Verizon folks (as the local phone company was called at the time) to allow me to bring a bunch of high school kids into their CO. I recall that old Bell veterans gave us a tour, and when the time came to show the kids where their home phone lines were located, I volunteered to have them find my own wire pair. I remember the employee pulled up the location of my phone wires and told the class that I had something very unusual: back then I had an office phone that was also wired to my home, and he could show the students how this worked so that the phones rang in both locations.

I realize that nowadays the landline is a historical communications curiosity more than anything, and you have to look long and hard to even find someone who still uses one. So it is great that there are a few phone COs that are grandly designed and stand out as great examples of architecture and design, such as the ones I have seen in Budapest and Columbus. Do send me your own favorites too, and best wishes for a great 2018.

Gregory FCA newsletter: How to get your annual year-end security reports noticed and read

It’s about as regular as hearing Auld Lang Syne on New Year’s Eve: The annual year-end security report issued by companies big and small looking to create awareness and build relationships. Our inboxes were flooded with dozens of them. In this newsletter that I co-authored with Greg Matusky and Mike Lizun, we look at some of the best and worst features of these annual reports and give our opinions. Hopefully you can use our findings to improve your own reports this time next year, and learn from the best and avoid the biggest mistakes.

The Scintillating Standouts of 2017!

The more unusual reports are the ones that really caught our eyes.

Kaspersky’s “story of the year” takes the typical annual year-end report and transforms it into a cyber-security news story similar to People’s Person of the Year. Written in layman’s terms with an accompanying infographic, Kaspersky’s Story of the Year reworks the tired ransomware story into a can’t-not-read compendium on all things ransomware. And it’s understandable! The first line reads like the opening of a movie rather than a technical rehash. Consider, “In 2017, the ransomware threat suddenly and spectacularly evolved. Three unprecedented outbreaks transformed the landscape for ransomware, probably forever.”

Kaspersky then takes it one step further by producing “The Number of the Year,” based on the number of malicious files it has seen transit its sensors. Our co-author David Strom calls it gimmicky, and maybe it is from his journalistic perch. But from a strictly PR perspective, the ability to distill a finding down to a single number (and one drawn from data at their ready disposal) is a brilliant PR move, and they are to be congratulated.

What about your organization? Do you have available internal data that could add PR gravitas to your next report? Might be something to consider.

Another take comes from ServiceNow. They opted to deliver their security predictions in a short-and-sweet format–one that takes less than three minutes to read. Their conclusions are compelling without overselling. For instance, they suggest that 2018 will see the emergence of security haves and have-nots: those who have automated detection and response and those who don't. Guess who sells such a solution? Still, they keep the sell to a minimum.

WatchGuard uses its blog to make a series of predictions in a very attractive and still informative way. There are predictions about IoT botnets, a doubling of Linux-based attacks, what will happen to multi-factor authentication, and the state of election and voter hacking. Each prediction takes the form of a short video with high production values.

With all the news about Uber’s mistakes over the past year, here is a cogent analysis by Dark Reading of what Uber did wrong with its breach response: delayed notification, failure to implement stronger access controls, unclear approval workflows, storing access credentials in GitHub, and failing to compartmentalize data access. This analysis was a neat package that we wish others would emulate.

This report, which appeared in IBM's Security Intelligence blog, is another rarity. It does what few of these year-end surveys actually do: it looks back a year and scores its own predictions. The author looked at the threats posed by IoT, the rise of cybercrime-as-a-service, and the threats against brand reputations, and concludes he was a bit ahead of the curve on some trends. We wish we would see more of these “truth telling” evaluation-type pieces.

Those were our top picks. But there are plenty of other year-end reports, most choosing one of three paths: presenting the results of a survey, focusing on a particular vertical market, or summarizing what telemetry they have collected from sensors located at major internet peering points or at their customers.

All in the Numbers: The Best of the Survey-Based Reports

Let’s look at the two best survey posts.

“The State of Open Source Security” touches on both telemetry and survey methods. It presents the results of a survey of 500 open-source users, combined with internal data from Snyk and scans of various GitHub repositories. Sadly, almost half of the code maintainers never audit their code, and fewer than 17 percent feel they have high security knowledge. Code vulnerabilities are on the rise for open-source projects but not for Red Hat Linux, which is an interesting factoid that isn't often mentioned.

BeyondTrust's report has a series of 18 predictions, most of which are obvious (bigger targets will fall, mobile spam is on the rise, games can double as malware). A few are interesting, and what sets this report apart is a look ahead to five years from now, when GDPR becomes untenable, online elections become secure, and the end of cash arrives.

Customer Telemetry-Based Reports Work Well Also

McAfee’s annual threat predictions have some interesting insights and cover some non-obvious subjects, including describing the machine learning arms race, the opportunities for serverless attackers, and the ways that home automation vendors will misuse your personal data.

Fortinet is another one of those companies that runs a massive protection network and can cull trends from its customers. Its quarterly threat report identified 185 zero-day vulnerabilities, with each customer experiencing an average of more than 150 attacks over the quarter and unknowingly running an average of two botnets inside their networks. Like other security researchers, they talk about the delay in patching known exploits and how lousy most of their customers are at getting at the root causes of infections.

Then there are Bitdefender's insights into the past year's threats, based on its own global sensor network and its customers. Ransomware is still king, with one in every six spam emails including some kind of ransomware attack vector. Also on the rise this past year are cryptocurrency-mining malware, polymorphic attacks, and Android-based Trojans.

Dashlane's report on the worst passwords of the year is entertaining, if a bit predictable. While it breaks all the rules about these year-in-review articles, it works. Yes, it is subjective, it is somewhat self-serving (Dashlane sells a password manager), and it covers familiar ground. But it is very amusing, and that is why sometimes you can deliver old chestnuts in interesting ways.

Slicing and Dicing Vertical Markets in Reports 

Some vendors have taken a different tack and written year-end reports that examine specific verticals. This is what eSentire has done with the healthcare industry. Rather than just positing the “chicken little” scenario, it provides specific case studies of security weaknesses in various enterprises that, of course, were eSentire customers where it discovered malware on their networks. It concludes by saying that well-known exploits have been out for years and yet still aren't patched. Yes, it is self-serving, but it is also instructive.

Another way to slice things is to focus just on bitcoin exploits, which have been increasing as the currency's value rises. Incapsula looked at exploits across its own network and found that three out of four bitcoin sites were attacked and that a third of the network attacks were persistent. Hong Kong was the most targeted location for bitcoin-based network layer assaults in Q3 2017, largely because of a persistent attack on a local hosting service that was hit hundreds of times throughout the quarter.

Another example is this report looking at mobile threats by RiskIQ, which used telemetry from its network of more than 120 different app stores and billions of endpoints. This is a rich source of exploits and a growing threat. The report highlights the unsurprising trend toward using phony rave reviews to prop up a malicious app. It also reviews the collaboration behind the takedown of the WireX botnet earlier this fall.

What to Avoid in Your Annual Report 

Finally, no compendium would be complete without mentioning some examples of what to avoid. As we mentioned in an earlier newsletter, having small survey sample sizes is never a good idea, and this report that Holger Schulze produced for AlienVault, based on interviews with 500 people, is to be avoided. While it has numerous graphics that can be used in blog posts, it contains mostly subjective content.

Also to be avoided: reports that don’t say anything new, such as this report from Wandera on WiFi risks, or this report on security trends from Cipher. A corollary to this is to avoid predictions that are more self-serving or self-promotional, such as these from Axiomatics.

Another issue: checking your facts. In November, an organization called the Information Technology and Innovation Foundation posted a supposedly detailed review of the security compliance of hundreds of the more popular U.S. government websites. Sadly, the facts weren’t correct, and webmasters responded with complaints and corrections.

Don’t do what NordVPN and eSentire did. Both of their PR firms sent out predictions for 2018 in email messages, and neither of them posted any of this content online. That isn’t helpful, especially in a world where you want to cite a URL for any predictions-related materials.

Then there is this encyclopedic listing from our colleagues at MSSP Alert of dozens of predictions, culled from various security management vendors. We dare you to read through the entire list, which spans multiple pages. Sometimes less is more!

Finally, here is a somewhat different twist on the predictions route. Varonis put together a post that contained quotes from a series of podcasts. It was a good try, and a terrific example of repurposing content. But it held little value for discerning audiences that would want more context in their analysis.