How the Okada Manila Luxury Resort Built its Greenfield IT Infrastructure

When you hear about an IT staff that has to build their infrastructure from scratch to support a new business, you think, “That couldn’t be that hard – they had no legacy infrastructure to support. What a dream job.” Well, it wasn’t a piece of cake for the crew at the Okada Manila resort hotel, and in an interview with Dries Scott, the SVP of IT for Okada, I got to see why.

Okada was built on a huge site and is similar to the resort-style properties found in Las Vegas and Macau. When fully built, it will house 2,300 guest rooms and employ 10,000 people. Scott’s IT department accounts for at least 100 of them full-time, plus contractors, supporting 2,000 endpoints and numerous physical and virtual servers spread across two separate datacenters on the property.

Scott worked for a few of the Macau resort hotels before coming to Manila, and he wanted to create the ideal IT environment for a five-star luxury hotel. “The biggest decision we had to make was to try to steer clear of having actual desktop PCs as our workstations,” he told me when he sat down for an interview yesterday. “When you are starting from a clean sheet of paper, you want something that could last 10 to 20 years and want products that could evolve over this time period.” He chose VDI for his endpoints. “I wanted to move away from the usual desktop PC environment, although we ended up having a few of them for our staff. PCs are a pain to manage: hard drives crash, getting updates and patches distributed isn’t easy, and there are other issues besides.” To support the VDI deployment, the resort purchased a variety of products, including Citrix XenDesktop, XenApp and NetScaler, HP thin clients and Dell servers.

One of the key enabling technologies is the FSLogix Office 365 Container. “This lets Outlook running on XenApp and XenDesktop mount users’ profiles as if they were on a local C: drive, so Windows acts normally and Outlook works like it is running on a regular PC desktop,” he said. This means you get the performance of the virtual workspace along with the ease of management.

Having a VDI solution meant some initial support hurdles. “We had to have a lot of patience with our users, some of whom were using VDI workstations for the first time,” he told me. “I could have taken the easy way out and just bought desktops for everyone, but I knew VDI would eventually pay off and benefit us in the long run.”

One concern Scott had was keeping corporate data secure. Given the clientele of his resort, he wanted to ensure that customers’ information stayed on the corporate systems. “It is one of our most critical assets,” he said. “Users don’t have the ability to remove any corporate data from the company.” His thin clients lock out USB access, for example, and he has set up appropriate data leak prevention policies as well. Through ShareFile, he has other policies for how files can be shared across his staff, and he blocks access to public SaaS repositories, such as consumer file-sharing services, whenever possible. Finally, he figured out ways to keep the data generated by his construction contractors on his own servers. “I didn’t want them to pack up their PCs and leave with my data on them,” he said.


Building a new resort’s IT infrastructure wasn’t as easy as I was assuming, mainly because some IT elements needed to be put in place during the construction phase to support those workers on the job site. This meant erecting temporary buildings and networks and then migrating these resources to the production environment once the hotel was built. “That migration wasn’t easy, but we are just about through that process,” he said. “We have certainly been through a bit of a bumpy road.” One of his recommendations was to use Citrix consulting services in setting up his environment and helping define the appropriate computing architecture. “They can help make everything stable from the beginning and figure out your app and server configurations.”

What helped him pull off this project? Executive buy-in. “Our chairman is an engineer and very much into technology. It was a massive help that he supported our decisions from day one. All he wanted was to see my vision implemented, and he gave me the ability to implement it.”

Network World review: Smart home hubs from Google and Amazon

The first decision you need to make in your smart home journey is selecting the right ecosystem. By ecosystem, I mean the voice-activated smart hub that is used to deliver audio content from the Internet (such as news, weather, and answers to other queries) as well as the main interface with a variety of other smart home devices, such as lighting, thermostats and TVs. In this review I look at two of the three main hubs, one from Google and one from Amazon, and how they stack up.

You can read my review in Network World here.

This is the second in a series of articles on how to successfully and securely deploy smart home technology. The first one can be found here.

White paper: Invisible mobile banking security

As more banking customers make use of mobile devices and apps, the opportunities for fraud increase. Mobile apps are also harder to secure than desktop apps because they are often written without any built-in security measures. Plus, most users are used to just downloading an app from the major app stores without checking whether they are downloading legitimate versions.

Besides security, mobile apps have a second challenge: to be as usable as possible. Part of the issue is that the usability bar is continuously being raised, as consumers expect more from their banking apps.

In this white paper for VASCO, I show a different path. Mobile banking apps can be successful at satisfying the twin goals of usability and security. Usability doesn’t have to come at the expense of a more secure app, and security doesn’t have to come at the cost of making an app more complex to use. Criminals and other attackers can be neutralized with the right choices that are both usable and secure.

StateTech magazine: 4 Steps to Prepare for an IoT Deployment

As the Internet of Things (IoT) becomes more popular, state and local government IT agencies need to play more of a leadership role in understanding the transformation of their departments and their networks. Embarking on any IoT-based journey requires governments and agencies to go through four key phases, which should be developed in the context of creating strategic partnerships between business lines and IT organizations. Here is more detail on these steps, published in StateTech Magazine this month.

Network World review: securing the smart home

Today I begin a series of reviews in Network World of smart home products. Last year we saw smart devices weaponized, as the Mirai botnet compromised webcams and other Internet-connected things. Then earlier this year Vizio admitted to monitoring its connected TVs; more recently there was this remote TV exploit, and even dishwashers aren’t safe from hackers.

Suddenly, the smart home isn’t smart enough, or maybe it is too smart for its own good. We need to take better care of securing our homes from digital intruders. The folks at Network World asked me to spend some time trying out various products, using a typical IT manager’s eye to make sure they are set up securely.

Those of you who have read my work know that I am very interested in home networking: I wrote a book on the topic back in 2001 called The Home Networking Survival Guide and have tried out numerous home networking products over the years. My brief for the publication is broadly defined, and I will look at all sorts of technologies that the modern home would benefit from, including security cameras, remote-controlled sensors, lighting and thermostats, and more.

Smart home technology has certainly evolved since I wrote my book. Back then, wireless was just getting started and most homeowners ran Ethernet through their walls. We didn’t have Arduino and Pi computers, and many whole house audio systems cost tens of thousands of dollars. TVs weren’t smart, and many people were still using dial-up and AOL to access the Internet.

Back in the early 2000s, I visited John Patrick’s home in Connecticut. A former IBMer, he designed his house like an IBM mainframe, with centralized control and distributed systems for water, entertainment, propane gas, Internet and other service delivery. He was definitely ahead of his time in many areas.

When I wrote about the Patrick house, I said that for many people, defining the requirements for a smart home isn’t always easy, because people don’t really know what they want. “You get better at defining your needs when you see what the high-tech toys really do. But some of it is because the high-tech doesn’t really work out of the box.” That is still true today.

My goal with writing these reviews is to make sure that your TV or thermostat doesn’t end up being compromised and being part of some Russian botnet down the road. Each article will examine one aspect of the secure connected home so you can build out your network with some confidence, or at least know what the issues are and what choices you will need to make in supporting your family’s IT portfolio of smart Things.

Since I live in a small apartment, I asked some friends who live in the suburbs if they would be interested in being the site of my test house. They have an 1,800 sq. ft., three-bedroom house on one level with a finished basement, and are already on their second smart TV. One of them is an avid gamer and has numerous gaming consoles. Over the past several months (and continuing throughout the remainder of this year), we have tried out several products. In my first article, posted today, we cover some of the basic issues involved and set the scene.

Lessons learned from building software at scale

So you have read The Lean Startup. Suffered through following several agile blogs (such as this one). You think you are ready to join the cool kids and have product scrums and stand-up meetings and all that other stuff. Now you need an implementation plan.

Maybe it is time to read this post by Paul Adams on the Intercom blog. He describes some of the lessons he and his development team have learned from building software and scaling it up as the company grows. I asked a few of my contacts at startup software firms what they thought of the post, and there was general agreement with his methodology.

Here are some of Adams’ main points to ponder.

Everyone has a different process, and the process itself changes as the company matures and grows. But his description is for their current team size of four product managers, four software designers, and 25 engineers. Like he says: “it’s not how we worked when we had a much smaller team, and it may not work when we have doubled our team.”

Create a culture where you can make small and incremental steps with lots of checkpoints, goals, and evaluations. “We always optimize for shipping the fastest, smallest, simplest thing that will get us closer to our objective and help us learn what works.” They have a weekly Friday afternoon beer-fueled demo to show how far they have gotten in their development for the week. Anyone can attend and provide comments.

Face time is important. While a lot of folks can work remotely, Adams’ team finds that productivity and collaboration increase when everyone is in the same room in a “pod.” Having run many remote teams myself, I agree that local pods can be better, but with the right managers you can pull off remote teams too. It appears IBM has been moving in this “local is better” direction lately.

Have small teams and make them strictly accountable. Adams has a series of accountability rules for when something goes wrong. Create these rules and teams and stick by them. “We never take a step without knowing the success measurement,” said one friend of mine, who agrees with much of what Adams says in his post. My friend also mentions that when using small teams, “not all resources have a one-to-one relationship in terms of productivity; we find that the resources we use for prototyping new features can generally float between teams.”

Have a roadmap but keep things flexible and keep it transparent. “Everything in our roadmap is broken down by team objective, which is broken down into multiple projects, which in turn are broken down into individual releases,” said Adams. They use the Trello collaboration tool for this purpose, something that can either be a terrific asset or a major liability, depending on the buy-in from the rest of the team and how faithful they are to keeping it updated.

However, caution is advised: “The comprehensive approach that Adams describes would be entirely too much overhead for most startups,” says my friend. This might mean that you evaluate what it will take to produce the kind of detail that you really need. And this brings up one final point:

Don’t have too many tools, though. “Using software to build software is often slower than using whiteboards and Post-it notes. We use the minimum number of software tools to get the job done. When managing a product includes all of Google Docs, Trello, Github, Basecamp, Asana, Slack, Dropbox, and Confluence, then something is very wrong.”

Email encryption has become almost frictionless

As you loyal readers know (I guess that should just be “readers,” since that implies some of you are disloyal), I have been using and writing about email encryption for two decades. It hasn’t been a bowl of cherries, to be sure. Back in 1998, when Marshall Rose and I wrote our landmark book “Internet Messaging,” we said that the state of secure Internet email standards and products was “best described as a sucking chest wound.” Lately I have seen some glimmers of hope in this much-maligned product category.

Last week Network World posted my review of five products. Two of them I reviewed in 2015: HPE/Voltage Secure Email and Virtru Pro. The other three are Inky (an end-to-end product), Zix Gateway, and Symantec Email Security.cloud. Zix was the overall winner. We’ll get to the results of these tests in a moment.

In the past, encryption was frankly a pain in the neck. Users hated it, either because they had to manage their own encryption key stores or had to go through additional steps to encrypt and decrypt their message traffic. As a consequence, few people used it in their email traffic, and most did so under protest. One of the more notable “conscientious objectors” was none other than the inventor of PGP himself, Phil Zimmermann. In this infamous Motherboard story, the reporter tried to get him to exchange encrypted messages. Zimmermann sheepishly revealed that he was no longer using his own protocols, due to difficulties in getting a Mac client operational.

To make matters worse, if a recipient wasn’t using the same encryption provider as you were, sending a message was a very painful process. If you had to use more than one system, it was even more trouble. I think I can safely say that those days are coming to an end, and that encryption is becoming almost completely frictionless.

By that I mean that there are situations where you don’t have to do anything, other than click on your “send” button in your emailer and off the message goes. The encryption happens under the covers. This means that encryption can be used more often, and that means that companies can be more secure in their message traffic.

This comes just in time, as the number of hacks involving email is increasing. And it is happening not only with email traffic, but with texting/instant messaging chats as well. Last week Check Point announced a way to intercept supposedly encrypted traffic from WhatsApp, and another popular chat service, Confide, was also shown to be vulnerable to impersonation attacks.

So will that be enough to convince users to start using encryption for normal everyday emailing? I hope so. As the number of attacks and malware infections increase, enterprises need all the protection that they can muster and encrypting emails is a great place to start.

What I liked about Zix and some of the other products that I tested this time around was that they took steps to hide the key management from the users. Zimmermann would find this acceptable, to be sure. Some other products have come close to doing this by using identity-based encryption, which makes it easier to onboard a new user into the system with a few simple mouse clicks.

Also intriguing is how Zix and others have incorporated data loss prevention (DLP) and detection into their encryption products. What this means is that these systems detect when sensitive information is about to be transmitted via email and take steps to encrypt or otherwise protect the message, both in transit and in how it will ultimately be consumed on the receiving end.
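
The vendors don’t publish their detection engines, but the underlying idea is easy to sketch: inspect an outgoing message for patterns that look like sensitive data and, if anything matches, route it through the encryption path. Here is a deliberately simplified Python illustration of my own; real products use dictionaries, document fingerprinting and far smarter matching than a couple of regular expressions.

```python
# Toy illustration of the DLP idea: scan an outgoing message for patterns that
# look like sensitive data and flag it for encryption before it leaves.
# Real products use much richer policies; this is only a sketch.
import re

POLICIES = {
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "US Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def policies_tripped(message_body):
    """Return the names of any policies the outgoing message matches."""
    return [name for name, pattern in POLICIES.items() if pattern.search(message_body)]

draft = "Hi Bob, my card number is 4111 1111 1111 1111, expiring 09/27."
hits = policies_tripped(draft)
if hits:
    print("Message flagged for encryption; matched policies:", hits)
else:
    print("No sensitive patterns found; send normally.")
```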

DLP has gone from a “nice to have” to an essential part of business compliance, and the rise of data leak hacks has only increased its importance. Having this integration can be a big selling point in making the move to an encrypted email vendor, and we are glad to see this feature getting easier to use and manage in these products.

Finally, the products have gotten better at what I call multi-modal email contexts. Users today frequently switch from their Outlook desktop client to their smartphone email app to a webmailer to keep track of their email stream. Having a product that can handle these different modalities is critical if it is going to claim to be frictionless.

So why did Zix win? It was easy to install and manage, well documented, and had plenty of solid encryption features (see the screenshot here). Its only downside was the lack of a mobile client for composing encrypted messages, but it got partial credit for a responsively designed webmailer that worked well on a phone’s small screen. Zix also includes its DLP features as part of its basic pricing structure, another plus.

We have come a long way on the encrypted email road. It is nice to finally have something nice to say about these products after all these years.

The rise of blockchain-as-a-service

With the announcement last week of the Enterprise Ethereum Alliance, it is timely to look at what is going on with blockchain technologies. The Alliance was formed to encourage a hybrid kind of blockchain with both public and private aspects. Its members include cutting-edge startups along with established computer vendors such as Microsoft and major banks such as ING and Credit Suisse. As mentioned in this post by Tom Ding, a developer at String Labs, the Alliance could bring these disparate organizations together and find best-of-breed blockchain solutions that could benefit a variety of corporate development efforts.

When Bitcoin was invented, it was based on a very public blockchain database, one in which every transaction is open to anyone’s inspection. A public chain also allows anyone to create a new block, as long as they follow the protocol specs. But as blockchains have matured, enterprises want something a bit more private, with better control over the transactions for their own purposes and over who is trusted to make new blocks.
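
To make that concrete, here is a toy Python sketch of the core idea, and nothing like the real Bitcoin or Ethereum protocols: each block carries the hash of the block before it, so anyone holding a copy of the chain can verify that nothing has been quietly altered.

```python
# Toy sketch of the core idea behind a public blockchain: every block carries the
# hash of the previous block, so anyone with a copy of the chain can verify it.
# This is an illustration only, not the Bitcoin or Ethereum protocol.
import hashlib
import json
import time

def block_hash(block):
    payload = json.dumps(
        {k: block[k] for k in ("timestamp", "transactions", "prev_hash")},
        sort_keys=True,
    ).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(transactions, prev_hash):
    block = {"timestamp": time.time(), "transactions": transactions, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    return block

def chain_is_valid(chain):
    """Every block must hash correctly and point at the hash of the block before it."""
    return all(
        curr["prev_hash"] == prev["hash"] and curr["hash"] == block_hash(curr)
        for prev, curr in zip(chain, chain[1:])
    )

chain = [make_block(["genesis"], prev_hash="0" * 64)]
chain.append(make_block(["alice pays bob 5"], prev_hash=chain[-1]["hash"]))
chain.append(make_block(["bob pays carol 2"], prev_hash=chain[-1]["hash"]))

print(chain_is_valid(chain))                        # True
chain[1]["transactions"] = ["alice pays bob 500"]   # tampering breaks the chain
print(chain_is_valid(chain))                        # False
```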

This isn’t a mutually exclusive decision, and what is happening now is that many blockchain solutions use aspects from both public and private perspectives, as you can see from this infographic from Let’s Talk Payments.

You want the benefits of having multiple programmers hammering against an open source code base, with incentives for the blockchain community to improve the code and the overall network effects as more people enter this ecosystem. You also gain efficiencies as the number of developers scales up, and perhaps future benefits where there is interoperability among the various blockchain implementations. At least, that is the theory espoused in a recent post on Medium here, where R Tyler Smith writes: “One thing that blockchains do extremely well is allow entities who do not trust one another to collaborate in a meaningful way.”

The Enterprise Ethereum Alliance is just the latest milepost showing that blockchains are becoming potentially more useful for enterprise developers. Over the past year, several blockchain-as-a-service (BaaS) offerings have been introduced that make it easy to create your own blockchain with just a few clicks. Back in November 2015, Microsoft and ConsenSys built the first BaaS on top of Azure, and Microsoft now has several blockchain services available there. IBM followed in February 2016 with its own BaaS offering on BlueMix. IBM has a free starter plan that you can experiment with before you start spending serious money on its cloud implementations. Microsoft’s implementation is available through its Azure Marketplace; there is no additional charge for the blockchain services beyond the cloud-based compute, network and storage resources used.

IBM’s BlueMix isn’t the only place the vendor has been active in this area: the company has been instrumental in supporting open source blockchain code, with large commitments to the Hyperledger project. Not to be left out, the Amazon Web Services marketplace offers two blockchain-related services. Finally, Deloitte has its own BaaS offering as part of its Toronto-based blockchain consulting practice.

If you want to get started with BaaS, here is just one of the numerous training videos available on the Microsoft Virtual Academy that cover the basics. There is also this informative white paper that goes into more detail about how to deploy the Microsoft version of BaaS. IBM also has an informative video on some of the security issues you should consider here. (reg. req.)

Security Intelligence blog: Making the Move to an All-HTTPS Network

Many website operators have wrestled with the decision to move all their web infrastructure to support HTTPS protocols. The upside is obvious: better protection and a more secure pathway between browser and server. However, it isn’t all that easy to make the switch. In this piece that I wrote for IBM’s Security Intelligence blog, I bring up the case study of The Guardian’s website and what they did to make the transition. It took them more than a year and a lot of careful planning before they could fully support HTTPS.
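
If you are contemplating a similar move, one useful habit is to verify the basics from the outside: does plain HTTP get redirected to HTTPS, and is an HSTS header being sent? Here is a rough Python sketch of that check; the hostname below is just a placeholder for your own site.

```python
# Rough external check of an HTTPS migration: does plain HTTP redirect to HTTPS,
# and does the HTTPS site send a Strict-Transport-Security (HSTS) header?
# "www.example.com" is a placeholder; point it at your own hostname.
import http.client

def https_posture(host):
    plain = http.client.HTTPConnection(host, 80, timeout=10)
    plain.request("GET", "/")
    resp = plain.getresponse()
    redirected = resp.status in (301, 302, 307, 308) and resp.getheader(
        "Location", ""
    ).startswith("https://")

    secure = http.client.HTTPSConnection(host, 443, timeout=10)
    secure.request("GET", "/")
    hsts = secure.getresponse().getheader("Strict-Transport-Security")
    return redirected, hsts

redirected, hsts = https_posture("www.example.com")
print("HTTP redirects to HTTPS:", redirected)
print("HSTS header:", hsts or "missing")
```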

HPE Insights: 8 lessons about IoT security learned from the Mirai botnet

In the fall of 2016, a set of attacks was launched using a clever exploit that built an automated criminal collection of Internet-connected webcams and digital video recorders. Subsequently labeled “Mirai,” this botnet has been the source of a series of distributed denial-of-service (DDoS) attacks on numerous notable Internet destinations, such as security journalist Brian Krebs’ site, a German ISP, and the Dyn.com domain name service that is used by many large-scale online companies.

Until Mirai came along, the vast majority of DDoS attacks were carried out using malware-infected Windows PCs, commandeered by criminals who could harness this collective computing power and control the machines remotely. But Mirai has changed all of that: the sheer numbers involved and the magnitude of the damage inflicted on its targets have made Mirai a potent criminal force.

There are many things to learn from the construction of its malware and its leverage of embedded IoT devices. Let’s talk about the timeline of the destruction it has already accomplished, how Mirai was initially detected, and what IT managers need to know about defending their networks against some of the methods used in its attacks.

Timeline: What actually happened?

Mirai has been in the news for a number of events since last fall. What is clear as you examine this timeline is how it became increasingly potent and dangerous as it was used against various online businesses.

  1. Sept 20: Brian Krebs

On September 20th Brian Krebs’ web servers became the target of one of the largest DDoS attacks ever recorded—between 600 billion and 700 billion bits per second. To give you an idea of the magnitude here, this level of traffic is almost half a percent of the Internet’s entire capacity. What makes this even more impressive is that these data rates were sustained for hours at a time against Krebs’ websites.

DDoS attacks are brute force: a collection of computers sends streams of automated TCP/IP traffic directed at a specific web destination. When the traffic reaches a certain volume, it can overwhelm and shut down this targeted server. An enterprise has to filter out the malicious traffic or otherwise divert it away from its network to bring its servers back online.

This wasn’t Krebs’ first DDoS attack: indeed, over the past several years, he has experienced hundreds of them. But it certainly was the biggest. According to Akamai, the attacks on Krebs were launched by 24,000 systems infected with Mirai. During September, five attacks hit Krebs, ranging from 123 to 623 Gbps.

To better defend himself, he had been using the content delivery network Akamai to filter out the attacks, and for the most part it was able to repel these earlier DDoS efforts. But the 9/20 attacks contained so much traffic that after several days Akamai had to throw in the virtual towel and admit defeat. This meant that Krebs’ websites were offline for a few days, until he was able to move his protection to Google’s Project Shield, a free invitation-only program designed to help independent news sites stay up and running. So far Google’s efforts seem to be working.

  2. Oct 1: source code for Mirai released on GitHub

The attack on Krebs was a great proof of concept, but the folks behind Mirai took things a step further. A few weeks later, a person going by “Anna_Senpai” posted the code for Mirai online, where it has since been downloaded thousands of times from various sources, including GitHub. The name refers to a Japanese anime character who is a law enforcer of sorts; the word Mirai is Japanese for future. Releasing the code further spread the botnet infection, as more criminals began using the tool to assemble their own botnet armies.

  3. Oct 21: Dyn attack

Then in late October another huge attack was launched on Dyn, which provides domain name services (DNS) for a variety of large-scale customers such as GitHub, Twitter, Netflix, Airbnb and hundreds of others. These services are akin to an Internet phone book: when you request a particular website, such as Google.com, DNS translates the name into the TCP/IP address of the web servers that can respond. Without these naming services, your request goes nowhere; a short code example of such a lookup appears below. The Mirai attack used 100,000 unique IP addresses, a big step up from the earlier one on Krebs. Dyn has multiple data centers around the world, and there were three attempted attacks over the course of the day. The first two brought part of its operations down, meaning that Internet users couldn’t access the websites of certain Dyn customers. The third attack was thwarted by Dyn’s IT staff.

More information from Flashpoint here:

https://www.flashpoint-intel.com/mirai-botnet-linked-dyn-dns-ddos-attacks/
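
To make the phone-book analogy concrete, here is what that lookup looks like in a few lines of Python; when the resolver can’t answer, as happened to Dyn’s customers during the attack, the site is unreachable even though its web servers are fine. The hostname is a placeholder.

```python
# The DNS "phone book" in one call: translate a hostname into the addresses a
# browser would actually connect to. If the lookup fails, the site is unreachable
# even though its web servers may be perfectly healthy.
import socket

def lookup(hostname):
    try:
        answers = socket.getaddrinfo(hostname, 443, proto=socket.IPPROTO_TCP)
        return sorted({answer[4][0] for answer in answers})
    except socket.gaierror as err:
        return f"lookup failed: {err}"

print(lookup("www.example.com"))
```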


  4. Nov 1: Liberia’s Internet connection is taken offline

The Mirai botnet also brought down the entire Internet connection for Liberia in late October and early November. The attack targeted the two fiber companies that own the country’s Internet connections. These companies manage the link to a massive undersea cable that runs around the African continent, connecting other countries together. One possible reason Liberia was targeted is its reliance on a single fiber cable connection, which the Mirai botnet can overwhelm with a 500 Gbps traffic flood.

  5. Nov 30: Deutsche Telekom

Then, in late November, more than 900,000 customers of German ISP Deutsche Telekom (DT) were knocked offline after their Internet routers got infected by a new variant of the Mirai malware. The Mirai code seen in this attack was modified with two important features: first, it expanded its scope to exploit a security flaw in specific routers made by Zyxel and Speedport, allowing remote code execution. These routers have been sold to numerous German customers, which is why DT was affected so severely. Second, this new strain of Mirai scans the entire Internet looking for all potential devices that could be compromised.

  6. Mirai is still continuing

These are just the most noteworthy attacks to date. Given their size and effect, Mirai continues to be deployed against a variety of targets. The security researchers at MalwareTech.com have set up a Twitter account to keep track of these attacks in near real time, where you can see several attacks occur daily:

https://twitter.com/MiraiAttacks

How was Mirai first detected?

September 2016 was the month when a series of IoT-based botnets were detected by a variety of security researchers, most notably Sucuri and Flashpoint. Sucuri published several blog posts describing its investigations of several botnets that added up to a collection of more than 45,000 individual IP addresses. (Note that this is about twice the number of sources seen in the first attacks on Krebs.) The botnets were able to pull off an attack on one of Sucuri’s customers that reached 120,000 requests per second. The customer was concerned because the attack was so large that they couldn’t fight it off, even by using the Amazon and Google clouds to spin up larger virtual machines to defend themselves. This was similar to what happened when Krebs tried to rely on Akamai’s defenses.

The Sucuri assessment found three different types of endpoints that made up the attack on their customer: webcams, home routers, and compromised enterprise web servers. They found eight major home router brands that were part of the botnet, with the majority of the total IP addresses coming from Huawei devices. Many of these routers were located in Spanish-speaking countries, but there were plenty of compromised routers located all around the world. This geographic diversity is one of the reasons why Mirai was both so powerful and so hard to defend against.

(Chart of the home router botnet map, from Sucuri: https://blog.sucuri.net/wp-content/uploads/2016/08/chart_home-router-botnet-map.png)

Flashpoint found more compromised devices by scanning Internet traffic on TCP port 7547, according to its researchers. They say there are several million other vulnerable devices in other countries, including Brazil and the UK. The latest Mirai variant is likely an attempt by one of the existing Mirai botmasters to expand the number of infected devices under their control. According to BadCyber.com, part of the problem is that DT, which was targeted in November, does not appear to have followed the best practice of blocking the rest of the world from remotely managing these devices.

Lessons for IT managers

The Mirai botnet has quickly developed into a major threat, one that will require a combination of methods to defend against massive traffic volumes that can overwhelm even the most capable web servers. Here are several suggestions for IT and security managers.

First, have a DDoS strategy ahead of time. If you thought your company wasn’t important enough to be a target, forget that security-by-obscurity plan and come up with something more definitive. Anyone can become a target, and now is the time to plan appropriate measures. Flashpoint has some suggestions here that are worth reading.
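
Real mitigation happens upstream, in scrubbing centers and CDNs, but the core filtering idea is simple enough to sketch: count requests per source over a short window and cut off the sources that flood. Here is a toy Python illustration of that principle, not a substitute for a DDoS mitigation service.

```python
# Toy illustration of the filtering principle behind DDoS mitigation: track how
# many requests each source address has made in the last second and stop serving
# sources that flood. Real mitigation happens upstream and at far larger scale.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 1.0
MAX_REQUESTS_PER_WINDOW = 50

recent_requests = defaultdict(deque)  # source IP -> timestamps of recent requests

def allow(source_ip):
    now = time.monotonic()
    window = recent_requests[source_ip]
    window.append(now)
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()          # forget requests older than the window
    return len(window) <= MAX_REQUESTS_PER_WINDOW

print(allow("203.0.113.7"))                              # a quiet client: True
print(all(allow("198.51.100.9") for _ in range(200)))    # a flooding source: False
```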


Now is also the time to examine how you obtain your DNS services. One of the problems for Dyn customers was that they didn’t make use of a secondary DNS provider, or didn’t configure their DNS servers to use more than one of Dyn’s data centers. Reconfiguring those servers took time and made the Mirai attack last longer. Some large online companies now use both Dyn and other DNS providers (OpenDNS or easyDNS, for example) for redundant operations. This is a good strategy in case of future DNS-based attacks.
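
One quick way to audit this is to list the name servers your domain delegates to and see whether they all belong to a single provider. The Python sketch below assumes the third-party dnspython package and uses example.com as a placeholder domain.

```python
# List the name servers a domain delegates to and flag single-provider setups.
# Assumes the third-party dnspython package (pip install dnspython);
# "example.com" is a placeholder for your own domain.
import dns.resolver

def name_servers(domain):
    answers = dns.resolver.resolve(domain, "NS")
    return sorted(str(record.target).rstrip(".") for record in answers)

servers = name_servers("example.com")
providers = {ns.split(".", 1)[1] for ns in servers}   # crude grouping by parent domain
print(servers)
if len(providers) < 2:
    print("All name servers appear to come from one provider; consider a secondary.")
```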


Flashpoint suggests you employ an Anycast DNS provider. This has two benefits: first, it spreads the attacking botnet’s requests across a distributed network, lessening the burden on each individual machine. Second, it can speed up DNS responses, making your Internet visitors happier when pages load more quickly.

Another strategy is to regularly check your routers for unauthorized DNS changes, known as DNS hijacking. F-Secure has a simple, free tool that can determine whether a router’s DNS settings have been tampered with, and it takes only a few seconds per router. While checking every router this way could be tedious, home routers at least should be tested with this tool.

One early strategy was simply to reboot your routers, since Mirai is memory-resident and rebooting removes the infection. While that will work at first, it isn’t a good longer-term solution, since the criminals have perfected scanning techniques to re-infect any router that is still using one of the default passwords on their hit list. So of course the next step is to change those defaults, then reboot again.

Find any unchanged factory default passwords on any network equipment and change them immediately. These defaults were the reason Mirai was able to collect so many IoT webcams and routers in the first place. The F-Secure tool can help with home routers, but a more complete program should be put in place to ensure that all critical network infrastructure has appropriately complex and unique passwords going forward.

Make sure your network forensics are in order. You should be able to capture the attack traffic so you can analyze what happened and who is targeting you. Mirai made use of an exploit on TCP port 7547 to connect to those home routers, so add a detection rule to monitor that port especially. Also, make sure legitimate traffic isn’t mistaken for attack traffic in your logs; part of this is understanding the metrics of your normal traffic baselines.
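
As a small starting point, you can also audit your own network for devices that answer on that port. The Python sketch below walks a placeholder /24 subnet and flags anything listening on TCP 7547; only point it at networks you are responsible for.

```python
# Audit sketch: find devices on your own small subnet that answer on TCP 7547,
# the TR-069/TR-064 management port probed by later Mirai variants.
# "192.168.1.0/24" is a placeholder; only scan networks you are responsible for.
import ipaddress
import socket

SUBNET = "192.168.1.0/24"
PORT = 7547

def port_open(address, port, timeout=0.3):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        return sock.connect_ex((str(address), port)) == 0

exposed = [str(ip) for ip in ipaddress.ip_network(SUBNET).hosts() if port_open(ip, PORT)]
print("Devices answering on port 7547:", exposed or "none found")
```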


Finally, it may be time to consider a content delivery network provider to handle your peak traffic loads. As you investigate your historic traffic patterns, you can see whether your web servers are stretched too thin or whether you need to purchase additional load balancing or content delivery capacity to improve performance.
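
A rough way to start that investigation is to bucket your web server’s access log by minute and look at the busiest periods. The Python sketch below assumes a standard combined-format log called access.log; adjust the pattern and path for your own setup.

```python
# Bucket a web server access log by minute and report the busiest periods, as a
# first look at whether peak load justifies a CDN or more load balancing.
# Assumes combined-log-format timestamps such as [10/Oct/2016:13:55:36 +0000];
# "access.log" is a placeholder path.
import re
from collections import Counter

TIMESTAMP = re.compile(r"\[(\d{2}/\w{3}/\d{4}:\d{2}:\d{2})")  # date plus hour:minute

def busiest_minutes(log_path, top=5):
    per_minute = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            match = TIMESTAMP.search(line)
            if match:
                per_minute[match.group(1)] += 1
    return per_minute.most_common(top)

for minute, hits in busiest_minutes("access.log"):
    print(f"{minute}  {hits} requests")
```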