Network Solutions blog: How to Recognize and Prevent Homograph Attacks

I have written a few times about ways to prevent brandjacking. In this blog post for Network Solutions, I discuss the use of homoglyph or homograph attacks by cybercriminals. These attacks exploit internationalized domain names, and the idea is simple to explain once you know a bit of Internet history.

When the Internet was first created, domain names were limited to Roman alphabet characters. This is the character set used by many of the world’s languages, but not all of them. As the Internet expanded across the globe, it connected countries where other writing systems were in use, such as Arabic or Chinese. 

Several years ago, researchers discovered the homograph ploy, and since then all modern browsers have been updated to recognize homograph tricks such as using “xn--80ak6aa92e.com,” which renders as a lookalike of “apple.com.” I go into the details in my blog post, and you can see an example of how a browser responds above.
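
If you want to see the trick for yourself, the punycode form of a domain can be decoded with nothing more than the Python standard library. This is a minimal sketch (my own illustration, not from the Network Solutions post) that also flags hostnames whose decoded labels contain non-ASCII characters:

```python
# Minimal sketch: decode an "xn--" punycode label to its Unicode form, then
# flag hostnames whose decoded labels mix in non-ASCII characters.
# The domain below is the Cyrillic "apple" lookalike mentioned in the post.

def decode_punycode_label(label: str) -> str:
    """Decode an IDNA 'xn--' label to its Unicode form."""
    if label.startswith("xn--"):
        return label[len("xn--"):].encode("ascii").decode("punycode")
    return label

def looks_suspicious(hostname: str) -> bool:
    """Return True if any decoded label contains non-ASCII characters."""
    for label in hostname.split("."):
        decoded = decode_punycode_label(label)
        if any(ord(ch) > 127 for ch in decoded):
            return True
    return False

if __name__ == "__main__":
    host = "xn--80ak6aa92e.com"
    print(decode_punycode_label(host.split(".")[0]))  # renders like "apple" in Cyrillic
    print(looks_suspicious(host))                      # True
```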

There is an important lesson here for IT professionals: watch out for injection-style attacks across your web infrastructure. Every element of your web pages can be compromised, even rarely-used tiny icon files. By paying attention to all possible threats today, you’ll save yourself and your organization a lot of trouble tomorrow.

Internet Protocol Journal: Selling my IPv4 block

If your company owns a block of IPv4 addresses and is interested in selling it, or if your company wants to purchase additional addresses, now may be the best time to do so. For sellers, a good reason to sell address blocks is to make money and get some use out of an old corporate asset. If your company has acquired other businesses, particularly ones that have assets from the early Internet pioneers, chances are you might already have at least one range that is gathering dust, or is underused.

So began an interesting journey for me and my range of Class C addresses. It took months to find the right broker to list my block and to work through the many issues involved in proving that it actually was assigned to me back in the mid-1990s, and then to correct the registration records and get the block ready for resale. The process involved spending a lot of time studying the various transfer webpages at ARIN, calling their transfer hotline several times for clarifications on their process, and paying a $300 transfer fee to start things off. ARIN promises a 48-hour turnaround to answer e-mails, and that can stretch out the time to prepare your block if you have a lot of back-and-forth interactions, as I did.

You can read my report in the current issue of the Internet Protocol Journal here. I review some of the historical context about IPv4 address depletion, the evolution of the used address marketplace, the role of the block brokers and the steps that I took to transfer and sell my block.

I remember c|net: a look back on computing in the mid-1990s

The news this week is that c|net (and ZDNet) have been sold to a private equity firm. I remember when c|net was just starting out, because I wrote one of the first hands-on reviews of web server software back in 1996. To test these products, I worked with a then-new company called Keylabs. They were the team that built one of the first huge, 1000-PC testing labs at Novell and were spun out as a separate company, eventually spinning off their own endpoint automation software company called Altiris, which was acquired by Symantec and is now part of Broadcom. They were eager to show their bona fides and worked with me to run multiple PC tests involving hundreds of computers banging away on each web server product. “1996 was an exciting time for computing,” said Jan Newman, who is now a partner at the VC firm SageCreek and was one of the Keylabs founders. “The internet was gathering steam and focus was changing from file and print servers to the web. I believe this project with David was the very first of its kind in the industry. It was exciting to watch new platforms rise to prominence.” Now we have virtual machines and other ways to stress-test products. The review shows how the web was won back in the early days.

Here are some observations from re-reading that piece.

  1. The demise of NetWare as a server platform. Back in the mid-1990s, NetWare — and its associated IPX protocol — was everywhere, until Microsoft and the Internet happened. Now it is about as hard to find as a COBOL developer. One advantage NetWare had was efficiency: you could run a web server on a 486 box with about the same performance as any of the Windows servers running on a faster Pentium CPU.
  2. Remember Windows NT? That was the main Microsoft server platform at the time. It came in four different versions, running on Intel, DEC Alpha, MIPS and PowerPC processors. Those latter two were RISC processors that have mostly disappeared, although Apple Macs and Microsoft Xboxes ran on PowerPCs for years.
  3. Remember Netscape? In addition to the web browser that made Marc Andreessen rich, they also had their own web server, called FastTrack, which was in beta at the time of my review. Despite being a solid performer, it never caught on. It did support both Java and JavaScript, something that the NT-only web servers didn’t initially offer.
  4. The web wasn’t the only data server game. Back in the mid-1990s, we had FTP and Gopher as popular precursors. While you can still find FTP (I mainly use it to transfer files to my web server and to get content to cloud images), Gopher (which got its name from the University of Minnesota team mascot) has disappeared down a deep, dark hole.
  5. Microsoft’s web server, IIS, was underwhelming when it was first released. It didn’t support Java, didn’t do server-side includes (an early way to use dynamic content), didn’t have a web-based management tool, didn’t support hosting multiple domains unless you used separate network adapters, didn’t have any proxy server support, and made use of unsecured user accounts. Of course, it is now one of the top web server platforms, along with Apache.
  6. You think your computer is slow? How about a 200 MHz Pentium? That was about as fast as you could expect back then. And 16 MB of RAM and 10/100 Ethernet networks were the norm.

Getting my kicks on the old Route 66

Like many of you this past Labor Day weekend, my wife and I took a drive to get out of our pandemic bubble. And as the NY Times ran this piece, we also got our kicks on Route 66. Their photographer went to the portion through Arizona and New Mexico; we stayed a lot closer to home, about an hour’s drive from St. Louis. This wasn’t our first time visiting this area, but we wanted to see a few sights from a safe distance, and also for my wife to visit an ancestral home not far off the Mother Road, as it is called.

St. Louis has a complicated relationship with Route 66: there are many different paths the road took through the city to cross the Mississippi River at various bridges over the years the road was active. And for those of you who want to discover the road in other parts of the country, you will quickly find that patience is perhaps the biggest skill you’ll need. Different parts were decommissioned or rerouted after the construction of the freeways that brought about its demise. In our part of the country, that is I-44, which runs between St. Louis and Oklahoma City, where it connects with I-40.

My favorite Route 66 memory spot within city limits has to be the old Chain of Rocks Bridge, which opened in the 1930s and was featured in the now-classic film “Escape From New York.” The bridge is now a bike/pedestrian path, and it is one of the few bridges that is deliberately bent in the middle. It lies on the riverfront bike trail that I have been on often.

Once you leave the city and head west, you need to be a determined tracker. Many parts of the route appear on the map as the I-44 service road, but that doesn’t tell the entire story, because in many cases the actual original roadbed no longer exists. Speaking of which, one of the places you might have heard of is Times Beach. The “beach” refers to the Meramec River, and the town is remembered because it became contaminated with dioxin. Now the streets remain but not much else, and the state has turned it into a state park. The visitor center is a former roadhouse that was built in 1936. Speaking of other bygone inns, a few miles further on you’ll pass the Gardenway Motel near Gray’s Summit. The 40-room motel was built in 1945 and eventually closed in 2014, owned by the same family during its entire run. A separate advertising sign still stands down the road.

There are a lot of other classic signs nearby too, but like I said you have to spend some time exploring to find them. If you are looking to stay in one of the period motels that is still operating, you might try the Wagon Wheel in Cuba, a few miles further west.

Another example of the bygone era that Route 66 spanned was captured by this National Park Service webpage on the Green Book. This was a guide for Black motorists who couldn’t stay at the then-segregated lodgings mentioned above. Mrs. Hilliard’s Motel in St. Clair, which is in the area, operated briefly in the 1960s. The guide (which was published annually from 1936 to 1964 by Victor Green) listed other recommended and safe places for Black travelers, such as restaurants and gas stations. Our history museum has an excellent explanation of its use and some sample pages here, which you can contrast with what was portrayed in the 2018 film.

One of the things that I learned when traveling in Poland is that history is often what you don’t see, sometimes painfully removed, other times left to rot and decay. That will require some investigation. Route 66 is a real time capsule to be sure.

The rise of the online ticketing bots

A new report describes the depth of criminality across online ticketing websites. I guess I was somewhat naive before I read the report, “How Bots affect ticketing,” from Distil Networks. (Registration is required.) The vendor sells anti-bot security tools, so some of what they describe is self-serving to promote their own solutions. But the picture they present is chilling and somewhat depressing.

The ticketing sites are being hit from all sides: from dishonest ticket brokers and hospitality agents who scrape details and scalp or spin the tickets, to criminals who focus on fan account takeovers to conduct credit card fraud with their ticket purchases. These scams are happening 24/7, because the bots never sleep. And there are multiple sources of ready-made bad bots that can be set loose on any ticketing platform.

You probably know what scalping is, but spinning was new to me. Basically, it involves a mechanism that appears to be an indecisive human who is selecting tickets but holding them in their cart and not paying for them. This puts the tickets in limbo, and takes them off the active marketplace just long enough that the criminals can manipulate their supply and prevent the actual people from buying them. That is what lies at the heart of the criminal ticketing bot problem: the real folks are denied their purchases, and sometimes all seats are snapped up within a few milliseconds of when they are put on sale. In many cases, fans quickly abandon the legit ticketing site and find a secondary market for their seats, which may be where the criminals want them to go. This is because the seat prices are marked up, with more profit going to the criminals. It also messes with the ticketing site’s pricing algorithms, because they don’t have an accurate picture of ticket supply.

This is a new report from Distil, focusing just on the ticketing vendors. In the past year, they have seen a rise in the sophistication of the bot owners’ methods. That is because, as with much of cybercrime, there is an arms race between defenders and criminals, with each upping their game to get around the other. The report studied 180 different ticketing sites for a period of 105 days last fall, analyzing more than 26 billion requests.

Distil found that the average traffic across all 180 sites was close to 40% consumed by bad bots. That’s the average: many sites had far higher percentages of bad bot traffic. (See the graphic above for more details.)

Botnets aren’t only a problem with ticketing websites, of course. In an article that I wrote recently for CSOonline, I discuss how criminals have manipulated online surveys and polls. (Registration also required.) Botnets are just one of many methods to fudge the results, infect survey participants with malware, and manipulate public opinion.

So what can a ticketing site operator do to fight back? The report has several suggestions, including blocking outdated browser versions, using better Captchas, blocking known hosting providers popular with criminals, and watching traffic sources for high bounce rates, strings of failed logins and lower conversion rates, three tells that indicate bots.
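
If you wanted to turn those three signals into something actionable, here is a rough sketch (purely hypothetical, not from the Distil report; the field names and thresholds are my own assumptions) of how an operator might score traffic sources:

```python
# Hypothetical scoring sketch for flagging bot-like traffic sources, based on
# the three tells mentioned above: high bounce rate, repeated failed logins,
# and low conversion rate. Thresholds are illustrative, not from the report.
from dataclasses import dataclass

@dataclass
class TrafficSource:
    name: str
    bounce_rate: float      # fraction of sessions with a single page view
    failed_logins: int      # failed login attempts in the sample window
    conversion_rate: float  # fraction of sessions that end in a purchase

def bot_score(src: TrafficSource) -> int:
    """Count how many bot 'tells' a traffic source exhibits (0-3)."""
    score = 0
    if src.bounce_rate > 0.80:
        score += 1
    if src.failed_logins > 100:
        score += 1
    if src.conversion_rate < 0.01:
        score += 1
    return score

if __name__ == "__main__":
    sources = [
        TrafficSource("residential-isp", 0.45, 3, 0.05),
        TrafficSource("bulk-hosting-provider", 0.92, 450, 0.001),
    ]
    for s in sources:
        flag = "likely bot" if bot_score(s) >= 2 else "looks human"
        print(f"{s.name}: {flag}")
```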

The dangers of DreamHost and Go Daddy hosting

If you host your website on GoDaddy, DreamHost, Bluehost, HostGator, OVH or iPage, this blog post is for you. Chances are your site could be vulnerable to a potential bug, or it may have been purposely infected with something that you probably didn’t know about. Given that millions of websites are involved, this is a moderately big deal.

It used to be that finding a hosting provider was a matter of price and reliability. Now you have to check to see if the vendor actually knows what they are doing. In the past couple of days, I have seen stories such as this one about GoDaddy’s web hosting:


And then there is this post, which talks about the other hosting vendors:

Let’s take them one at a time. The GoDaddy issue has to do with their Real User Metrics module, which is used to track traffic to your site. In theory it is a good idea: who doesn’t like more metrics? However, the researcher Igor Kromin, who wrote the post, found that the JavaScript module GoDaddy uses is so poorly written that it slowed down his site’s performance measurably. Before he published his findings, all GoDaddy hosting customers had these metrics enabled by default. Now GoDaddy has turned the module off by default and is looking at future improvements. Score one for progress.

Why is this a big deal? Supply-chain attacks happen all the time by inserting small snippets of JavaScript code into your pages. It is hard enough to find their origins as it is, without having your hosting provider add additional burdens as part of its services. I wrote about this issue here.
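
One common defense against tampered third-party scripts is Subresource Integrity, where you pin a hash of the script in your page so the browser refuses to run a modified copy. Here is a minimal sketch of generating that hash (my own illustration; the script URL is a hypothetical placeholder, not a real GoDaddy module):

```python
# Minimal sketch: generate a Subresource Integrity (SRI) hash for a
# third-party script, so the browser will refuse to run it if the file is
# ever changed. The URL below is a placeholder.
import base64
import hashlib
import urllib.request

def sri_hash(url: str) -> str:
    """Fetch a script and return its sha384 integrity value."""
    body = urllib.request.urlopen(url).read()
    digest = hashlib.sha384(body).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")

if __name__ == "__main__":
    url = "https://example.com/vendor/metrics.js"  # hypothetical third-party script
    tag = f'<script src="{url}" integrity="{sri_hash(url)}" crossorigin="anonymous"></script>'
    print(tag)
```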

If you use GoDaddy hosting, you should go to your cPanel hosting portal, click on the small three dots at the top of the page (as shown above), click “help us” and ensure you have opted out.

Okay, moving on to the second article, about other hosting provider scripting vulnerabilities. Paulos Yibelo looked at several providers and found multiple issues that differed among them. The issues involved cross-site scripting, cross-site request forgery, man-in-the-middle problems, potential account takeovers and bypass attack vulnerabilities. The list is depressingly long, and Yibelo’s descriptions show each provider’s problems. “All of them are easily hacked,” he wrote. But what was more instructive was the responses he got from each hosting vendor. He also mentions that Bluehost terminated his account, presumably because they saw he was up to no good. “Good job, but a little too late,” he wrote.

Most of the providers were very responsive when reporters contacted them and said these issues have now been fixed. OVH hasn’t yet responded.

So the moral of the story? Don’t assume your provider knows everything, or even anything, about hosting your site, and be on the lookout for similar research. Find a smaller provider that can give you better customer service (I have been using EMWD.com for years and can’t recommend them enough). If you don’t know what some of these scripting attacks are or how they work, go on over to OWASP.org and educate yourself about their basics.

How great collaborations occur

What do the Beatles, Monty Python, the teams behind building the Ford Mustang and the British Colossus computer, and the Unabomber manhunt have in common? All are examples of impressive and successful collaborative teams. I seem to return to the topic of collaboration often in my writing, and wrote this post several years ago about my own personal history of collaboration. For those of you that have short memories, I will refresh them with some other links to those thoughts. But first, let’s look at what these groups all have in common:

Driven and imaginative leadership. The Netflix series on the Unabomber creates a somewhat fictional/composite character but nevertheless shows how the FBI developed the linguistic analysis needed to catch this criminal, and how a team of agents and a massive investigation found him. Some of those linguistic techniques were used to figure out the pipe bombing suspect from last week, by the way.  

A combination of complementary skills. The Beatles are a good example here, and we all have imprinted in our early memories the lyrics and music by John and Paul. On the British code-breaking effort Colossus, the team worked together without members actually knowing what each other did, as I mentioned in my blog post. Another great example is the team that originally created the Ford Mustang, as I wrote about a few years ago. 

Superior writing and ideation. An interview that Eric Idle recently gave on Marc Maron’s WTF podcast is instructive. Idle spoke about how the entire Python team wrote their skits before they cast them, so that no one would be personally invested in a particular idea before the entire group could improve and fine-tune it. Many collaborative efforts depend on solid writing backed by even more solid idea-creation. There are a number of real-time online writing and editing tools (including Google Docs) that are used nowadays to facilitate these efforts. 

Active learning and group training. A new effort by the Army is noteworthy here, and is what prompted my post today. The Army recognizes that soldiers have to find innovative ways to protect their digital networks and repel cyber invasions. It announced the creation of a new cyber workspace at the Fort Gordon base (near Augusta, Ga.) called Tatooine, a reference to the Star Wars planet where Luke spent some time in the early movies. The initial missions of this effort will focus on three areas:

  • drone detection,
  • active hunting of cyber threats on DoD networks, and
  • designing better training systems for cyber soldiers.

Great communicators.  Many of these teams worked together using primitive communication tools, before the digital age. Now we are blessed with email, CRMs, real-time messaging apps, video chats, etc. But these blessings are also a curse, particularly if these tools are abused. In this post for the Quickbase blog, I talk about signs that you aren’t using these tools to their best advantage, particularly for handling meeting schedules and agendas. In this post from September, I also provide some other tips on how to collaborate better. 

Unique partnerships. All of my examples show how bringing together the right kinds of talent can result in the sum being bigger than the individuals involved. At the Army base, both military and civilian resources will be working together, and draw on the successful Hack the Army bug bounty program. On Colossus, they recruited people who were good at solving crossword puzzles, among other things. The Python group included Terry Gilliam, who was a gifted animator and brought the necessary visual organization to their early BBC TV shows. 

Certainly, the history of collaboration has been one of fits and starts. As a former publication editor, I can recall that the teams I put together had some great collaborative efforts to write, edit, illustrate and publish the stories in our magazines. And while we continue making some of the same mistakes over and over and not really considering the historical context, there are a few signs of hope too, as the more modern tools help folks over some of these hurdles. That experience gave me a solid appreciation for how the best kinds of collaborations happen. Feel free to share your own examples if you’d like. 

iBoss blog: What is HTTP Strict Transport Security

Earlier this summer, I wrote about how the world of SSL certificates is changing as they become easier to obtain and more frequently used. They are back in the news more recently with Google’s decision to add 45 top-level domains to a special online document called the HTTP Strict Transport Security (HSTS) preload list. The action by Google adds all of its top-level domains, including .google and .eat, so that all hosts under those suffixes will be secure by default. Google has led by example in this arena, and today Facebook, Twitter, PayPal and many other web properties support the HSTS effort as well.

The HSTS preload list consists of hosts for which every visiting browser automatically enforces secure HTTPS connections. If a user types in a URL with just HTTP, it is first changed to HTTPS before the request is sent. The idea is to prevent man-in-the-middle, cookie hijacking and scripting attacks that would intercept web content, as well as to keep fraudulent certificates from being used to gain access to the web traffic.
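
To make that mechanic concrete, here is a conceptual sketch (my own illustration, not any browser’s actual code) of the upgrade step; the tiny preload set below is a stand-in for the real list:

```python
# Conceptual sketch of the HSTS upgrade step: if a host is on the preload
# list, or previously sent the HSTS header, the browser rewrites http:// to
# https:// before any request leaves the machine.
from urllib.parse import urlsplit, urlunsplit

PRELOADED = {"google.com", "paypal.com"}   # tiny stand-in for the real preload list
SEEN_HSTS_HOSTS = set()                    # hosts that sent the header on an earlier visit

def upgrade_url(url: str) -> str:
    parts = urlsplit(url)
    host = parts.hostname or ""
    if parts.scheme == "http" and (host in PRELOADED or host in SEEN_HSTS_HOSTS):
        parts = parts._replace(scheme="https")
    return urlunsplit(parts)

if __name__ == "__main__":
    print(upgrade_url("http://google.com/search?q=hsts"))  # upgraded to https
    print(upgrade_url("http://example.org/"))               # left unchanged
```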

The preload list mitigates against a very narrowly defined attack that could happen if someone were to intercept your traffic at the very first connection to your website and decode your HTTP header metadata. It isn’t a likely scenario, but that is why there is this list.  “Not having HSTS is like putting a nice big padlock on the front door of your website, but accidentally leaving a window unlocked. There’s still a way to get in, you just have to be a little more sophisticated to find it,” says Patrick Nohe of the SSL Store in a recent blog post.

This means if you thought you were good with setting a permanent 301 redirect from HTTP to HTTPS, you aren’t completely protected.

The preload site maintains a chart showing you which browser versions support HSTS, as shown above. As you might imagine, some of the older browsers, such as Safari 5.1 and earlier IE versions, don’t support it at all.

So, what should you do to protect your own websites? First, if you understand SSL certificates, all you might need is a quick lesson in how HSTS is implemented, and OWASP has this nice short “cheat sheet” here. If you haven’t gotten started with any SSL certs, now is the time to dive into that process, and obtain a valid EV SSL cert. If you haven’t catalogued all your subdomains, this is also a good time to go off and do that.

Next, start the configuration process on your webservers: locate the specific files (like the .htaccess file for Apache’s web servers) that you will need to update with the HSTS information. If you need more complete instructions, GlobalSign has a nice blog entry with a detailed checklist of items, and specific instructions for popular web servers.
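
Once the header is configured, it is worth verifying that your server is actually sending it. Here is a minimal sketch using only the Python standard library; the hostname is a placeholder you would swap for your own site:

```python
# Minimal sketch: check whether a site sends the Strict-Transport-Security
# header and which directives it includes. The hostname is a placeholder.
import urllib.request

def check_hsts(host: str) -> None:
    resp = urllib.request.urlopen(f"https://{host}/")
    header = resp.headers.get("Strict-Transport-Security")
    if header is None:
        print(f"{host}: no HSTS header sent")
        return
    directives = [d.strip().lower() for d in header.split(";")]
    print(f"{host}: {header}")
    print("  includeSubDomains:", "includesubdomains" in directives)
    print("  preload:", "preload" in directives)

if __name__ == "__main__":
    check_hsts("www.example.com")  # replace with your own site
```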

After you have reviewed these documents, add your sites to the preload site. Finally, if you need more in-depth discussion, Troy Hunt has this post that goes into plenty of specifics. He also warns you when to implement the preload feature: when you are absolutely, positively sure that you have rooted out all of your plain HTTP requests across your website and never plan to go back to those innocent days.

Software shouldn’t waste my time

One of my favorite tech execs here in St. Louis is Bryan Doerr, who runs a company called Observable Networks that was recently acquired by Cisco. (Here is his presentation of how the company got started.) One of the things he frequently says is that if a piece of software asks for your attention to understand a security alert, it shouldn’t waste your time. (He phrases it a bit differently.) I think that is a fine maxim to remember, both for user interface designers and for most of us who use computers in our daily lives.

As a product reviewer, I often find time-wasting moments. Certainly with security products, they seem to be designed this way on purpose: the more alerts the better! That way a vendor can justify its higher price tag. That approach is doomed.

Instead, only put something on the screen that you really need to know. At that moment in time. For your particular role. For the particular device. Let’s break this apart.

The precise moment of time is critical. If I am bringing up your software in the morning, there are things that I have to know at the start of my day. For example, when I bring up my calendar, am I about to miss an important meeting? Or even an unimportant meeting? Get that info to me first and fast. Is there something that happened during the night that I should jump on? Very few pieces of software care about the timing of their own usage, which is too bad.

Part of this timing element is also how you deal with bugs and what happens when they occur. Yes, all software has bugs. But do you tell your user what a particular bug means? Sometimes you do, sometimes you put up some random error message that just annoys your users.

Roles are also critical. A database administrator has a very different focus from a “normal” user. Screens should be designed differently for these different roles. And the level of granularity is also important: if you have just two or three roles, that is usually not enough. If you have 17, that is probably too many. Access roles are usually the last thing to be baked into software, and it shows: by then the engineers are already tired of their code and don’t want to mess around with it. Like anything else in software engineering, plan for this from the first line of code if you want success.

Finally, there is understanding the type of device that is looking at your data. As more of us use mobile devices, we want less info on the screen so we can read it without squinting at tiny type. In the past, this was usually called responsive design, meaning that a web interface designer would build an app to respond to the size of the screen and automatically rearrange things so they made sense, whether viewed on a big desktop monitor or a tiny phone. If your website or app isn’t responsive, you need to fix this post-haste. It is 2017, people.

Joey Skaggs and the art of the media hoax

I have had the pleasure of knowing Joey Skaggs for several decades, and observing his media hoaxing antics first-hand during the development and deployment of his many pranks. Skaggs is a professional hoaxer, meaning that he deliberately crafts elaborate stunts to fool reporters, get himself covered on TV and in newspapers, only to reveal afterwards that the reporters have been had. He sometimes spends years constructing these set pieces, fine-tuning them and involving a cast of supporting characters to bring his hoax to life.

His latest stunt is a documentary movie about filming another documentary movie that is being shown at various film festivals around the world. I caught up with him this past weekend here in St. Louis, when our local film festival screened the movie called The Art of the Prank. Ostensibly, this is a movie about Skaggs and one of his pranks. More about the movie in a moment.

I have covered Skaggs’ exploits a few times. In 1994, he created a story about a fake bust of a sex-based virtual reality venture called Sexonix. I wrote a piece for Wired (scroll to nearly the bottom of the page) where he was able to whip up passions. In the winter of 1998, I wrote about one of his hoaxes, which was about some issues with a rogue project from an environmental organization based in Queensland, Australia. The project created and spread a genetically altered virus. When humans come into contact with the virus, they begin to crave junk food. To add credibility to the story, the virus was found to have infected Hong Kong chickens, among other animals. Skaggs created a phony website here, which contains documentation and copies of emails and photos. Now remember, this was 1998: back then newspapers were still thriving, and the Web was just getting going as a source for journalists.

As part of this hoax, Skaggs also staged a fake demonstration outside the United Nations headquarters campus in New York City. The AP and the NY Post, along with European and Australian newspapers, duly covered the protest, and thus laid the groundwork for the hoax.

Since then he has done dozens of other hoaxes. He set up a computerized jurisprudence system called the Solomon Project that found OJ guilty, a bordello for dogs, a portable confessional booth that was attached to a bicycle that he rode around one of the Democratic conventions, a miracle drug made from roaches, a company buying unwanted dogs to use them as food, and more. Every one of his setups is seemingly genuine, which is how the media falls for them and reports them as real. Only after his clips come in does he reveal that he is the wizard behind the curtain and comes clean that it all was phony.

Skaggs is a genius at mixing just the right amount of believable and yet unverifiable information with specific details and actual events, such as the UN demonstration, to get reporters to drop their guard and run the story. Once one reporter falls for his hoax, Skaggs can build on that and get others to follow along. Skaggs’ hoaxes illustrate how little reporters actually investigate and in most cases ignore the clues that he liberally sprinkles around. This is why they work, and why even the same media outlets (he has been on CNN a number of times) fall for them.

In the movie, you see Skaggs preparing one of his hoaxes. I won’t give you more details in the hopes that you will eventually get to see the film and don’t want to spoil it for you. He carefully gathers his actors to play specific roles, appoints his scientific “expert” and gets the media – and his documentary filmmaker – to follow him along. It is one of his more brilliant set pieces.

Skaggs shows us that it pays to be skeptical, and to spend some time proving authenticity. Given today’s online climate and how hard the public has to work to verify basic facts, this has gotten a lot more difficult, ironically. Most of us take things we read on faith, especially if we have seen them somewhere online such as Wikipedia or when we Google something. As I wrote about the “peeps” hoax in 1998, “a website can change from moment to moment, and pinning down the truth may be a very difficult proposition. An unauthorized employee could post a page by mistake. One man’s truth is another’s falsehood, depending on your point of view. Also, how can you be sure that someone’s website is truly authentic? Maybe during the night a group of imposters has diverted all traffic from the real site to their own, or put up their own pages on the authentic site, unbeknown to the site’s webmaster?”

Today we have Snopes.com and fact-checking efforts by the major news organizations, but they still aren’t enough. All it takes is one gullible person with a huge Twitter following (I am sure you can think of a few examples), and a hoax is born.

In the movie, trusted information is scarce and hard to find, and you see how Skaggs builds his house of cards. It is well worth watching this master of media manipulation at work, and a lesson for us all to be more careful, especially when we see something online. Or read about it in the newspapers or see something on TV.