Telegram designs the ideal hate platform

Last week the Parler social network went back online, after several weeks of being offline. Its return got me thinking more about what makes the ideal hate platform. I think there are two essential elements: the ability to recruit new followers to hate groups, and the ability to amplify their message. The two are related: you ideally need both. Parler, for all the talk about its hate-mongering, really isn’t the right technical solution, and I will explain why Telegram, by contrast, has succeeded.

This blog post comes out of email discussions that I have had with Megan Squire, who studies these groups for a living as a security researcher and CS professor. She gave me the idea when we were discussing this report from the Southern Poverty Law Center on how Telegram has changed the nature of hate speech. It is a chilling document that tracks the rise of these groups over the past year. But the SPLC isn’t the only one paying attention: numerous other computer science researchers have tracked the explosive growth of these pro-hate groups since the January Capitol riot and other seminal events in the hate landscape.

Telegram’s rise in numbers doesn’t tell the complete story. Telegram has crafted a more complete social platform for distributing hate speech and recruiting new followers. Certainly, Facebook still has the largest user base, but their tech hate stack (if you want to give it a name) is nowhere near as well developed as Telegram’s, and Parler’s is a distant third. Compare the three networks below in terms of both amplification and recruitment elements:

| Criteria | Parler | Facebook | Telegram |
| --- | --- | --- | --- |
| Type of service | Microblog | Social network | Messaging+ |
| Coherent and transparent reporting process for hate speech | No | Mostly, and improving | No |
| Support email inbox | No | Yes | No |
| Content moderation team | It depends | Yes | It depends (see below) |
| Appeals process | Yes | Yes | No |
| Encrypted messaging | No | Separate app | Built-in |
| Corporate HQ location | USA (for now) | USA | Dubai |
| Growth in English-speaking hate group followers | Unknown | Unknown | Huge growth (SPLC report) |
| Group cloud-based file storage | No | No | < 2 GB |
| Group-based sticker sets | No | No | Yes |
| Bot infrastructure and in-group payment processing | No | No | Yes |

“Telegram is absolutely the platform of choice right now for the harder-edged groups. This is for technical reasons as well as access/moderation reasons,” says Squire. You can see the dichotomy in the table above: most of the moderation features that are (finally) part of Facebook are nowhere to be found or are implemented poorly on Telegram, and Parler is pretty much a no-show. Telegram’s file-sharing feature, for example, “allows hate groups to store and quickly disseminate e-books, podcasts, instruction manuals, and videos in easy-to-use propaganda libraries.” I have put links in the chart above to descriptions on why the bot infrastructure and sticker creation features are so useful to these hate groups.

What about moderating content? Here we have conflicting information. I labeled the boxes for Parler and Telegram as “it depends.” Telegram has said that their users do content moderation. In their FAQ they claim to have a team of moderators. For Parler, their community guidelines document says in one place that they don’t moderate or remove content, and in another that they do. My guess is that they both do very little moderation.

The picture for Parler is pretty bleak. If they do succeed in keeping their site up and running (which isn’t a foregone conclusion), they have almost none of the elements that I call out for Facebook and Telegram. Using the Twitter micro-blogging model doesn’t make them very effective at amplification of their messages (at least, not until some of their personalities can bring over huge crowds of followers) or in recruitment, especially now that their mobile apps have been neutered.

Two technical items work in Telegram’s favor: its encrypted messaging feature and the difference between its mobile app and web interfaces. Much has been written comparing the messaging features of the different social networks (including my own blog post for Avast here). But Telegram both does a better job of protecting its users’ privacy than Facebook Messenger and has much better integration between messaging and its main social network code.

The second item is how content can be viewed by Telegram users. To get approval for its app on Apple’s App Store and Google Play, Telegram has put in place self-censorship “flags” so that mobile users can’t view the most heinous posts. But all of this content is easily viewed in a web browser. Parler could choose to go this route, if they can get their site consistently running.

As you can see, defining the tech hate stack isn’t a simple process, and it keeps evolving as hate groups figure out how to attract viewership.

N.B.: If you want to read more blogs about the intersection of tech and hate, there is this post where I examine the evolution of Holocaust deniers and this post on fighting online disinformation and hate speech.

Network Solutions blog: How to Recognize and Prevent Homograph Attacks

I have written a few times about ways to prevent brandjacking. In this blog post for Network Solutions, I discuss the use of homoglyph or homograph attacks by cybercriminals. These attacks exploit internationalized domain names, and the idea is simple to explain once you know a bit of Internet history.

When the Internet was first created, domain names were based on Roman alphabet characters. This is the character set used by many of the world’s languages, but not all of them. As the Internet expanded across the globe, it connected countries where other writing systems were in use, such as Arabic script or Chinese characters.

Several years ago, researchers discovered the homograph ploy, and since then all modern browsers have been updated to defend against it by displaying the raw punycode form “xn--80ak6aa92e.com” rather than a deceptive lookalike of “apple.com.” I go into the details in my blog post, and you can see an example of how a browser responds above.
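You can see how the trick works with a few lines of Python, using the standard library’s punycode codec. The domain label below is the one from the example; everything else here is just illustration, not anything from my Network Solutions post.

```python
import unicodedata

# The label from "xn--80ak6aa92e.com", with the "xn--" IDN prefix removed.
label = "80ak6aa92e"
decoded = label.encode("ascii").decode("punycode")

# The decoded string renders almost identically to "apple" -- but it isn't.
assert decoded != "apple"

# Every character is actually Cyrillic, not Latin: the heart of the ploy.
scripts = {unicodedata.name(ch).split()[0] for ch in decoded}
print(scripts)  # {'CYRILLIC'}
```

A defensive script could apply the same idea in reverse: flag any domain whose decoded form mixes scripts, or whose characters all come from a script that is unexpected for your organization.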

There is an important lesson here for IT professionals: watch out for injection-style attacks across your web infrastructure. Every element of your web pages can be compromised, even rarely-used tiny icon files. By paying attention to all possible threats today, you’ll save yourself and your organization a lot of trouble tomorrow.

Internet Protocol Journal: Selling my IPv4 block

If your company owns a block of IPv4 addresses and is interested in selling it, or if your company wants to purchase additional addresses, now may be the best time to do so. For sellers, a good reason to sell address blocks is to make money and get some use out of an old corporate asset. If your company has acquired other businesses, particularly ones that have assets from the early Internet pioneers, chances are you might already have at least one range that is gathering dust, or is underused.

So began an interesting journey for me and my range of class C addresses. It took months to figure out the right broker to list my block with, and to work through the many issues of proving that I actually was assigned it back in the mid-1990s, since the registration records were out of date. Correcting this information and getting the block ready for resale involved spending a lot of time studying the various transfer webpages at ARIN, calling their transfer hotline several times for clarifications on their process, and paying a $300 transfer fee to start things off. ARIN staff promises a 48-hour turnaround to answer e-mails, and that can stretch out the time to prepare your block if you have a lot of back-and-forth interactions, as I did.

You can read my report in the current issue of the Internet Protocol Journal here. I review some of the historical context about IPv4 address depletion, the evolution of the used address marketplace, the role of the block brokers and the steps that I took to transfer and sell my block.

I remember c|net: a look back on computing in the mid-1990s

The news this week is that c|net (and ZDnet) have been sold to a private equity firm. I remember when c|net was just starting out, because I wrote one of the first hands-on reviews of web server software back in 1996. To test these products, I worked with a new company at the time called Keylabs. They were the team that built one of the first huge, 1000-PC testing labs at Novell and were spun out as a separate company, eventually spinning off their own endpoint automation software company called Altiris that was acquired by Symantec and now is part of Broadcom. They were eager to show their bona fides and worked with me to run multiple PC tests involving hundreds of computers trying to bang away on each web server product. “1996 was an exciting time for computing,” said Jan Newman who is now a partner at the VC firm SageCreek and was one of the Keylabs founders. “The internet was gathering steam and focus was changing from file and print servers to the web. I believe this project with David was the very first of its kind in the industry. It was exciting to watch new platforms rise to prominence.” Now we have virtual machines and other ways to stress test products. The review shows how the web was won back in the early days.

Here are some observations from re-reading that piece.

  1. The demise of NetWare as a server platform. Back in the mid 1990s, NetWare — and its associated IPX protocol — was everywhere, until Microsoft and the Internet happened. Now it is about as hard to find as a COBOL developer. One advantage that NetWare had was it was efficient: you could run a web server on a 486 box at about the same performance as any of the Windows servers running on a faster Pentium CPU.
  2. Remember Windows NT? That was the main Microsoft server platform at the time. It came in four different versions, running on Intel, DEC Alpha, MIPS and PowerPC processors. Those latter two were RISC processors that have mostly disappeared, although Apple Macs and Microsoft’s Xbox ran on PowerPC for years.
  3. Remember Netscape? In addition to their web browser that made Marc Andreessen rich, they also had their own web server, called FastTrack, which was in beta at the time of my review. Despite being a solid performer, it never caught on. It did support both Java and JavaScript, something that the NT-only web servers didn’t initially offer.
  4. The web wasn’t the only data server game. Back in the mid-1990s, we had FTP and Gopher as popular precursors. While you can still find FTP (I mainly use it to transfer files to my web server and to get content to cloud images), Gopher (which got its name from the University of Minnesota team mascot) has gone into a deep, dark hole.
  5. Microsoft’s web server, IIS, was underwhelming when it was first released. It didn’t support Java, didn’t do server-side includes (an early way to use dynamic content), didn’t have a web-based management tool, didn’t support hosting multiple domains unless you used separate network adapters, didn’t have any proxy server support, and made use of unsecured user accounts. Of course, now it is one of the top web server platforms, alongside Apache.
  6. You think your computer is slow? How about a 200 MHz Pentium? That was about as fast as you could expect back then. And installing 16 MB of RAM and using 10/100 Ethernet networks were the norm.

Getting my kicks on the old Route 66

Like many of you this past Labor Day weekend, my wife and I took a drive to get out of our pandemic bubble. And as the NY Times ran this piece, we also got our kicks on Route 66. Their photographer went to the portion through Arizona and New Mexico; we stayed a lot closer to home, about an hour’s drive from St. Louis. This wasn’t our first time visiting this area, but we wanted to see a few sights from a safe distance, and also for my wife to visit an ancestral home not far off the Mother Road, as it is called.

St. Louis has a complicated relationship with Route 66: there are many different paths that the road took through the city to cross the Mississippi River at various bridges over the years the road was active. And for those of you that want to discover the road in other parts of the country, you will quickly find that patience is perhaps the biggest skill you’ll need. Different parts were decommissioned or rerouted after the freeways were constructed that brought about its demise. In our part of the country, that is I-44, which goes between St. Louis and Oklahoma City, where it connects up with I-40.

My favorite Route 66 memory spot within city limits has to be the old Chain of Rocks Bridge, which was opened in the 1930s and was featured in that now classic film “Escape From New York.” The bridge is now a bike/pedestrian path and it is one of the few bridges that is deliberately bent in the middle. It lies on the riverfront bike trail that I have been on often.

Once you leave the city and head west, you need to be a determined tracker. Many parts of it are on the map as the I-44 service road, but that doesn’t tell the entire story, because in many cases the actual original roadbed no longer exists. Speaking of which, one of the places that you might have heard of is Times Beach. The beach refers to the Meramec River, and the town is remembered because it became contaminated with dioxin. Now the streets remain but not much else, and the state has turned it into a state park. The visitor center is a former roadhouse that was built in 1936. Speaking of other bygone inns, in a few miles you’ll pass the Gardenway Motel near Gray’s Summit. The motel had 40 rooms, was built in 1945, and eventually closed in 2014. It was owned by the same family during its entire run. A separate advertising sign still stands down the road.

There are a lot of other classic signs nearby too, but like I said you have to spend some time exploring to find them. If you are looking to stay in one of the period motels that is still operating, you might try the Wagon Wheel in Cuba, a few miles further west.

Another example of the bygone era that Route 66 spanned was captured by this National Park Service webpage on the Green Book. This was a guide for Black motorists who couldn’t stay at the then-segregated lodgings mentioned above. Mrs. Hilliard’s Motel in St. Clair, which is in the area, operated briefly in the 1960s. The guide (which was published annually from 1936 to 1964 by Victor Green) had other recommended and safe places for Black travelers such as dining and gas stations. Our history museum has an excellent explanation of its use and some sample pages here, which you can contrast with what was portrayed in the 2018 film.

One of the things that I learned when traveling in Poland is that history is often what you don’t see, sometimes painfully removed, other times left to rot and decay. That will require some investigation. Route 66 is a real time capsule to be sure.

The rise of the online ticketing bots

A new report describes the depth of criminality across online ticketing websites. I guess I was somewhat naive before I read the report, “How Bots affect ticketing,” from Distil Networks. (Registration is required.) The vendor sells anti-bot security tools, so some of what they describe is self-serving to promote their own solutions. But the picture they present is chilling and somewhat depressing.

The ticketing sites are being hit from all sides: from dishonest ticket brokers and hospitality agents who scrape details and scalp or spin the tickets, to criminals who focus on fan account takeovers to conduct credit card fraud with their ticket purchases. These scams are happening 24/7, because the bots never sleep. And there are multiple sources of ready-made bad bots that can be set loose on any ticketing platform.

You probably know what scalping is, but spinning was new to me. Basically, it involves a mechanism that appears to be an indecisive human selecting tickets but holding them in their cart and not paying for them. This puts the tickets in limbo, taking them off the active marketplace just long enough that the criminals can manipulate the supply and prevent real fans from buying them. That is what lies at the heart of the criminal ticketing bot problem: real customers are denied their purchases, and sometimes all seats are snapped up within a few milliseconds of going on sale. In many cases, fans quickly abandon the legit ticketing site and find a secondary market for their seats, which may be exactly where the criminals want them to go, because the seat prices there are marked up, with more profit going to the criminals. It also messes with the ticketing site’s pricing algorithms, because they don’t have an accurate picture of ticket supply.
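To make the spinning mechanism concrete, here is a minimal Python sketch of a ticket pool where a bot holds a seat in a cart without ever paying. The class name, seat labels, and hold timeout are all hypothetical, just to show how held seats vanish from the visible inventory.

```python
import time

# Hypothetical sketch of "spinning": a bot holds seats in a cart without
# paying, so real buyers never see them in the available inventory.
class TicketPool:
    def __init__(self, seats):
        self.available = set(seats)
        self.held = {}  # seat -> hold expiry timestamp

    def hold(self, seat, hold_seconds=300):
        """Move a seat from the open inventory into a cart hold."""
        if seat in self.available:
            self.available.remove(seat)
            self.held[seat] = time.time() + hold_seconds

    def release_expired(self, now=None):
        """Return seats to inventory once their cart hold lapses."""
        now = time.time() if now is None else now
        for seat, expiry in list(self.held.items()):
            if expiry <= now:
                del self.held[seat]
                self.available.add(seat)

pool = TicketPool(["A1", "A2", "A3"])
pool.hold("A1")  # the bot "spins" seat A1 without ever paying
print(sorted(pool.available))  # ['A2', 'A3'] -- real fans can't buy A1
```

A bot fleet simply repeats the hold just before each expiry, keeping the seats in limbo indefinitely while the secondary market does its work.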

This is a new report from Distil, focusing just on the ticketing vendors. In the past year, they have seen a rise in the sophistication of the bot owners’ methods. That is because, like much of cybercrime, there is an arms race between defenders and criminals, with each upping their game to get around the other. The report studied 180 different ticketing sites for a period of 105 days last fall, analyzing more than 26 billion requests.

Distil found that the average traffic across all 180 sites was close to 40% consumed by bad bots. That’s the average: many sites had far higher percentages of bad bot traffic. (See the graphic above for more details.)

Botnets aren’t only a problem with ticketing websites, of course. In an article that I wrote recently for CSOonline, I discuss how criminals have manipulated online surveys and polls. (Registration also required.) Botnets are just one of many methods to fudge the results, infect survey participants with malware, and manipulate public opinion.

So what can a ticketing site operator do to fight back? The report has several suggestions, including blocking outdated browser versions, using better Captchas, blocking known hosting providers popular with criminals, and looking carefully at traffic sources for high bounce rates, a series of failed logins, and lower conversion rates, three tells that indicate botnets.
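Those three tells can be combined into a simple scoring heuristic. The Python sketch below is purely illustrative; the field names and thresholds are my own assumptions, not anything from the Distil report.

```python
# Score a traffic source by the three botnet "tells": high bounce rate,
# repeated failed logins, and a low conversion rate.
# All thresholds here are illustrative guesses, not vendor guidance.
def bot_score(source: dict) -> int:
    score = 0
    if source["bounce_rate"] > 0.80:       # bots rarely browse past one page
        score += 1
    if source["failed_logins"] > 50:       # account-takeover attempts
        score += 1
    if source["conversion_rate"] < 0.01:   # heavy traffic, no real purchases
        score += 1
    return score

suspect = {"bounce_rate": 0.95, "failed_logins": 300, "conversion_rate": 0.0}
normal  = {"bounce_rate": 0.40, "failed_logins": 2, "conversion_rate": 0.05}
print(bot_score(suspect), bot_score(normal))  # 3 0
```

In practice you would feed per-source stats from your web analytics into something like this and route high-scoring sources to stricter Captchas or outright blocks.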

The dangers of DreamHost and Go Daddy hosting

If you host your website on GoDaddy, DreamHost, Bluehost, HostGator, OVH or iPage, this blog post is for you. Chances are your site could be vulnerable to a potential bug or has been purposely infected with something that you probably didn’t know about. Given that millions of websites are involved, this is a moderately big deal.

It used to be that finding a hosting provider was a matter of price and reliability. Now you have to check to see if the vendor actually knows what they are doing. In the past couple of days, I have seen stories such as this one about GoDaddy’s web hosting:

And then there is this post, which talks about the other hosting vendors:

Let’s take them one at a time. The GoDaddy issue has to do with their Real User Metrics module. This is used to track traffic to your site. In theory it is a good idea: who doesn’t like more metrics? However, the researcher Igor Kromin, who wrote the post, found the JavaScript module that is used by GoDaddy is so poorly written that it slowed down his site’s performance measurably. Before he published his findings, all GoDaddy hosting customers had these metrics enabled by default. Now they have turned it off by default and are looking at future improvements. Score one for progress.

Why is this a big deal? Supply-chain attacks happen all the time by inserting small snippets of JavaScript code in your pages. It is hard enough to find their origins as it is, without your hosting provider adding additional burdens as part of its services. I wrote about this issue here.
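One basic defense is auditing your own pages for script tags that load from hosts you don’t recognize. Here is a small sketch using only the Python standard library; the allowlisted host and page snippet are made-up placeholders.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

# Hosts you have deliberately chosen to load scripts from (hypothetical).
ALLOWED_HOSTS = {"cdn.example.com"}

class ScriptAudit(HTMLParser):
    """Collect external <script src=...> URLs not on the allowlist."""
    def __init__(self):
        super().__init__()
        self.suspect = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            src = dict(attrs).get("src")
            if src:
                host = urlparse(src).netloc
                if host and host not in ALLOWED_HOSTS:
                    self.suspect.append(src)

page = ('<script src="https://rum.example.net/t.js"></script>'
        '<script src="https://cdn.example.com/app.js"></script>')
audit = ScriptAudit()
audit.feed(page)
print(audit.suspect)  # ['https://rum.example.net/t.js']
```

Running something like this against your rendered pages periodically will surface scripts that a hosting provider, plugin, or attacker has injected without your knowledge.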

If you use GoDaddy hosting, you should go to your cPanel hosting portal, click on the small three dots at the top of the page (as shown above), click “help us” and ensure you have opted out.

Okay, moving on to the second article, about other hosting provider scripting vulnerabilities. Paulos Yibelo looked at several providers and found multiple issues that differed among them. The issues involved cross-site scripting, cross-site request forgery, man-in-the-middle problems, potential account takeovers and bypass attack vulnerabilities. The list is depressingly long, and Yibelo’s descriptions show each provider’s problems. “All of them are easily hacked,” he wrote. But what was more instructive was the responses he got from each hosting vendor. He also mentions that Bluehost terminated his account, presumably because they saw he was up to no good. “Good job, but a little too late,” he wrote.

Most of the providers were very responsive when reporters contacted them and said these issues have now been fixed. OVH hasn’t yet responded.

So the moral of the story? Don’t assume your provider knows everything, or even anything, about hosting your site, and be on the lookout for similar research. Find a smaller provider that can give you better customer service (I have been using EMWD.com for years and can’t recommend them enough). If you don’t know what some of these scripting attacks are or how they work, go on over to OWASP.org and educate yourself about their basics.

How great collaborations occur

What do the Beatles, Monty Python, the teams behind building the Ford Mustang and the British Colossus computer, and the Unabomber manhunt have in common? All are examples of impressive and successful collaborative teams. I seem to return to the topic of collaboration often in my writing, and wrote this post several years ago about my own personal history of collaboration. For those of you that have short memories, I will refresh them with some other links to those thoughts. But first, let’s look at what these groups all have in common:

Driven and imaginative leadership. The Netflix series on the Unabomber creates a somewhat fictional/composite character but nevertheless shows how the FBI developed the linguistic analysis needed to catch this criminal, and how a team of agents and a massive investigation found him. Some of those linguistic techniques were used to figure out the pipe bombing suspect from last week, by the way.  

A combination of complementary skills. The Beatles are a good example here, and we all have imprinted in our early memories the lyrics and music by John and Paul. On the British code-breaking effort Colossus, the team worked together without actually knowing what each member did, as I mentioned in my blog post. Another great example is the team that originally created the Ford Mustang, as I wrote about a few years ago. 

Superior writing and ideation. An interview that Eric Idle recently gave on the Maron WTF podcast is instructive. Idle spoke about how the entire Python team wrote their skits before they cast them, so that no one would be personally invested in a particular idea before the entire group could improve and fine-tune it. Many collaborative efforts depend on solid writing backed by even more solid idea-creation. There are a number of real-time online writing and editing tools (including Google Docs) that are used nowadays to facilitate these efforts. 

Active learning and group training. A new effort by the Army is noteworthy here, and what prompted my post today. They recognize that soldiers have to find innovative ways to protect their digital networks and repel cyber invasions. They announced the creation of a new cyber workspace at the Fort Gordon base (near Augusta, Ga.) called Tatooine, which refers to the Star Wars planet where Luke spent some time in the early movies. The initial missions of this effort will focus on three areas:

  • drone detection,
  • active hunting of cyber threats on DoD networks, and
  • designing better training systems for cyber soldiers.

Great communicators.  Many of these teams worked together using primitive communication tools, before the digital age. Now we are blessed with email, CRMs, real-time messaging apps, video chats, etc. But these blessings are also a curse, particularly if these tools are abused. In this post for the Quickbase blog, I talk about signs that you aren’t using these tools to their best advantage, particularly for handling meeting schedules and agendas. In this post from September, I also provide some other tips on how to collaborate better. 

Unique partnerships. All of my examples show how bringing together the right kinds of talent can result in the sum being bigger than the individuals involved. At the Army base, both military and civilian resources will be working together, and draw on the successful Hack the Army bug bounty program. On Colossus, they recruited people who were good at solving crossword puzzles, among other things. The Python group included Terry Gilliam, who was a gifted animator and brought the necessary visual organization to their early BBC TV shows. 

Certainly, the history of collaboration has been one of fits and starts. As a former publication editor, I can recall the teams that I put together had some great collaborative efforts to write, edit, illustrate and publish the stories in our magazines. And while we continue making some of the same mistakes over again and not really considering the historical context, there are a few signs of hope too as the more modern tools help folks over some of these hurdles. That brought me a solid appreciation for how these best kinds of collaborations happen. Feel free to share your own examples if you’d like. 

iBoss blog: What is HTTP Strict Transport Security

Earlier this summer, I wrote about how the world of SSL certificates is changing as they become easier to obtain and more frequently used. They are back in the news more recently with Google’s decision to add 45 top-level domains to a special online document called the HTTP Strict Transport Security (HSTS) preload list. The action by Google adds all of its top-level domains, including .google and .eat, so that all hosts using those domain suffixes will be secure by default. Google has led by example in this arena, and today Facebook, Twitter, PayPal and many other web properties support the HSTS effort.

The HSTS preload list consists of hosts that automatically enforce secure HTTP connections by every visiting browser. If a user types in a URL with just HTTP, this is first changed to HTTPS before the request is sent. The idea is to prevent man-in-the-middle, cookie hijacking and scripting attacks that will intercept web content, as well as prevent malformed certificates from gaining access to the web traffic.

The preload list mitigates a very narrowly defined attack that could happen if someone were to intercept your traffic at the very first connection to your website, before your browser has ever seen the HSTS header. It isn’t a likely scenario, but that is why the list exists. “Not having HSTS is like putting a nice big padlock on the front door of your website, but accidentally leaving a window unlocked. There’s still a way to get in, you just have to be a little more sophisticated to find it,” says Patrick Nohe of the SSL Store in a recent blog post.

This means if you thought you were good with setting a permanent 301 redirect from HTTP to HTTPS, you aren’t completely protected.
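Getting on the preload list requires serving the HSTS header with specific directives. The checker below is a sketch of the requirements as I understand them (a max-age of at least one year, includeSubDomains, and the preload token); consult hstspreload.org for the authoritative rules.

```python
def hsts_preload_ready(header: str) -> bool:
    """Rough check of a Strict-Transport-Security header value against
    the preload requirements: one-year max-age, includeSubDomains, preload."""
    directives = [d.strip().lower() for d in header.split(";") if d.strip()]
    max_age = 0
    for d in directives:
        if d.startswith("max-age="):
            try:
                max_age = int(d.split("=", 1)[1])
            except ValueError:
                return False
    return (max_age >= 31536000            # one year, in seconds
            and "includesubdomains" in directives
            and "preload" in directives)

print(hsts_preload_ready("max-age=31536000; includeSubDomains; preload"))  # True
print(hsts_preload_ready("max-age=300"))  # False: too short, missing flags
```

You can run a check like this against the header your own server actually emits before submitting the domain for preloading.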

The preload site maintains a chart showing you which browser versions support HSTS, as shown above. As you might imagine, some of the older browsers, such as Safari 5.1 and earlier IE versions, don’t support it at all.

So, what should you do to protect your own websites? First, if you understand SSL certificates, all you might need is a quick lesson in how HSTS is implemented, and OWASP has this nice short “cheat sheet” here. If you haven’t gotten started with any SSL certs, now is the time to dive into that process, and obtain a valid EV SSL cert. If you haven’t catalogued all your subdomains, this is also a good time to go off and do that.

Next, start the configuration process on your webservers: locate the specific files (like the .htaccess file for Apache’s web servers) that you will need to update with the HSTS information. If you need more complete instructions, GlobalSign has a nice blog entry with a detailed checklist of items, and specific instructions for popular web servers.

After you have reviewed these documents, add your sites to the preload site. Finally, if you need a more in-depth discussion, Troy Hunt has this post that goes into plenty of specifics. He also warns you about when to implement the preload feature: when you are absolutely, positively sure that you have rooted out all of the plain HTTP requests across your website and never plan to go back to those innocent days.

Software shouldn’t waste my time

One of my favorite tech execs here in St. Louis is Bryan Doerr, who runs a company called Observable Networks that was recently acquired by Cisco. (Here is his presentation on how the company got started.) One of his frequent sayings is that if a piece of software asks for your attention to understand a security alert, it shouldn’t waste your time. (He phrases it a bit differently.) I think that is a fine maxim to remember, both for user interface designers and for most of us who use computers in our daily lives.

As a product reviewer, I often find time-wasting moments. Certainly with security products, they seem to be designed this way on purpose: the more alerts the better! That way a vendor can justify its higher price tag. But that approach is doomed.

Instead, only put something on the screen that you really need to know. At that moment in time. For your particular role. For the particular device. Let’s break this apart.

The precise moment of time is critical. If I am bringing up your software in the morning, there are things that I have to know at the start of my day. For example, when I bring up my calendar, am I about to miss an important meeting? Or even an unimportant meeting? Get that info to me first and fast. Is there something that happened during the night that I should jump on? Very few pieces of software care about this sort of timing of their own usage, which is too bad.

Part of this timing element is also how you deal with bugs and what happens when they occur. Yes, all software has bugs. But do you tell your user what a particular bug means? Sometimes you do, sometimes you put up some random error message that just annoys your users.

Roles are also critical. A database administrator has a much different focus than a “normal” user. Screens should be designed differently for these different roles. And the level of granularity is also important: if you have just two or three roles, that is usually not enough. If you have 17, that is probably too many. Access roles are usually the last thing to be baked into software, and it shows: by then the engineers are already tired of their code and don’t want to mess around with it. Like anything else in software engineering, do this from the first line of code if you want success.

Finally, there is understanding the type of device that is looking at your data. As more of us use mobile devices, we want less info on the screen so we can read it without squinting at tiny type. In the past, this was usually called responsive design, meaning that a web interface designer would build an app to respond to the size of the screen and automatically rearrange things so that they would make sense, whether viewed on a big desktop monitor or a tiny phone. If your website or app isn’t responsive, you need to fix this post-haste. It is 2017, people.