Last month, researchers discovered that someone at Microsoft had misconfigured one of the company’s Azure Blob Storage containers. The container was publicly accessible, which could have resulted in a data breach: it held sensitive data relating to 65,000 companies in 111 countries, along with private data of 548,000 users. After the researchers notified Microsoft, the container was made private within several hours. “Our investigation found no indication customer accounts or systems were compromised. We have directly notified the affected customers,” Microsoft posted on its blog.
Another security researcher suggested that the data was a SQL Server backup that was mistakenly placed in the open storage container.
The leak was dubbed BlueBleed, and the original researchers published a search tool that anyone can use to check whether information from a given domain is part of the leak. The key word in that sentence is “anyone”: if you read the Microsoft blog, you can see the company isn’t happy about the way the tool is set up, precisely because anyone can search across any domain to find out whether any unprotected assets were part of this breach.
Certainly, having private data in public containers — those with no password protection, let alone multi-factor authentication — continues to be a big problem. Chris Vickery has made his career discovering many of them, and this post from several years ago cited the more infamous (at least at that moment in time) of Amazon S3’s “leaky buckets.” All of the cloud storage vendors make it relatively easy to create a new storage container that anyone can access. But don’t blame them — it is just basic human nature to forget to lock the door properly.
How can you prevent this from happening?
First, ensure that your sensitive data is well protected, with strong multi-factor authentication (MFA). Microsoft publishes various recommendations for securing Azure Blob Storage and for using its cloud and endpoint security tools.
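A periodic audit of your storage inventory is one concrete way to catch the misconfiguration described above. Here is a minimal sketch in Python: the container-record format is hypothetical (invented for illustration), though with the real Azure SDK you would read the equivalent access-level fields from the container properties.

```python
# Hedged sketch: scan an inventory of storage-container records and flag any
# that allow anonymous public access. The record format here is illustrative,
# not a real Azure API response.

def find_public_containers(containers):
    """Return the names of containers whose access level is not private."""
    return [c["name"] for c in containers
            if c.get("public_access", "none") != "none"]

inventory = [
    {"name": "billing-backups", "public_access": "blob"},       # anonymous blob reads
    {"name": "internal-logs",   "public_access": "none"},       # private
    {"name": "marketing-site",  "public_access": "container"},  # fully listable
]

print(find_public_containers(inventory))  # → ['billing-backups', 'marketing-site']
```

On the command line, the real Azure CLI can close the door account-wide with `az storage account update --allow-blob-public-access false`, which disables anonymous access to every container in the account.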
Avoid promiscuous provisioning. A case in point is Twitter: according to Mudge’s testimony, thousands of its employees — roughly half its workforce, and all of its engineers — work directly on Twitter’s live product and have full access rights to actual user data. Okta found a similar situation in its breach analysis earlier this year, and has since moved to limit access by its tech support engineers. What is needed is to reduce these over-privileged accounts and to limit who has access to your data. If a developer is testing code outside of a production system, ensure that the data is protected. Audit your accounts to find out who has what access and to spot configuration errors. One research report found that in 2020, two-thirds of the threats cited by respondents were caused by cloud platform configuration errors.
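The audit suggested above can be as simple as diffing the permissions each account actually holds against what its role requires. A minimal sketch, assuming a made-up role table and permission names (nothing here reflects a real IAM policy):

```python
# Hedged sketch: report over-privileged accounts by comparing granted
# permissions against the baseline each role should need. Roles and
# permission strings are hypothetical.

REQUIRED = {
    "support":  {"read:tickets"},
    "engineer": {"read:staging", "write:staging"},
    "sre":      {"read:prod", "write:prod"},
}

def over_privileged(accounts):
    """Yield (user, excess permissions) for accounts exceeding their role."""
    for user, (role, granted) in accounts.items():
        excess = set(granted) - REQUIRED.get(role, set())
        if excess:
            yield user, excess

accounts = {
    "alice": ("engineer", {"read:staging", "write:staging", "write:prod"}),
    "bob":   ("support",  {"read:tickets"}),
}

print(dict(over_privileged(accounts)))  # → {'alice': {'write:prod'}}
```

Run regularly, a report like this surfaces exactly the kind of creeping production access that the Twitter and Okta examples illustrate.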
Ensure that your key IT suppliers have up-to-date contact information for you. Microsoft relied on an “if you haven’t heard from us, assume you aren’t part of the breach” approach — that is not as good as telling everyone what happened. Messages can also get lost or sent to dead mailboxes.
Offboard employees properly and thoroughly. When someone leaves your company, ensure that all of their accounts have been revoked. Many IT managers readily admit that their Active Directories are outdated (that link points to Microsoft’s statistic that 10% of accounts in these directories are inactive) and that they don’t have sufficient resources to maintain them — even for the simple question of who is presently employed by the company, let alone who has the correct access rights.
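Finding those inactive accounts is a mechanical job once you can export last-sign-in timestamps from your directory. A minimal sketch, assuming a 90-day idle threshold and invented directory records (a real Active Directory export would supply the equivalent fields):

```python
# Hedged sketch: flag directory accounts with no sign-in within the last
# 90 days so they can be reviewed and disabled. The records are made up;
# in practice the timestamps would come from a directory export.

from datetime import date, timedelta

def stale_accounts(records, today, max_idle_days=90):
    """Return users whose last login falls before the idle cutoff."""
    cutoff = today - timedelta(days=max_idle_days)
    return [r["user"] for r in records if r["last_login"] < cutoff]

directory = [
    {"user": "carol", "last_login": date(2022, 10, 1)},   # recently active
    {"user": "dan",   "last_login": date(2022, 2, 14)},   # long gone
]

print(stale_accounts(directory, today=date(2022, 10, 20)))  # → ['dan']
```

Feeding the output into your offboarding checklist turns “we should clean up the directory someday” into a recurring, auditable task.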