Veracode blog: What kind of tools do you need to secure mobile apps?

The days when everyone was chained to a fixed desktop computer are long over. But it isn’t just about being more mobile, or using more mobile devices, or letting your users bring their own devices and use them at work. It isn’t just that the workday is no longer 9-to-5 and that users expect to get their jobs done whenever and wherever they might be in the world. No, it is about moving to a completely new way of delivering IT services, which presents both challenges and opportunities for IT.

The challenge is being able to secure all these different devices and still allow users to get their work done. The opportunity is that IT can become a more agile operation while finally implementing a consistent, universal application security policy across the enterprise. This means we need to secure the app, not the endpoint device itself, whatever it may be. We need to build apps from the start with the requisite security, rather than rely on some dubious perimeter protection.

It makes sense for IT to allow users to bring their own devices. “We started out trying to manage our mobiles and standardize on particular corporate-owned devices, but within two or three months, we found that we had a completely new set of devices to choose from. Phones were being introduced faster than we could vet them for our approved list,” said American Red Cross CIO John Crary. Now the Red Cross can focus on more important matters.

There are several ways to do this. One is to deploy a mobile device management (MDM) tool. These products set up security policies for a device, such as what happens when a device is lost or stolen, or how personal data is stored on the phone or tablet. A recent survey by independent analyst Jack Gold shows that there is wide variation in which MDM components are actually deployed by large enterprises.

Another is to purchase a single sign-on (SSO) product that offers some MDM features. I agree with Gartner and other analysts who see a bright future when these two types of products are better integrated. Single sign-on products could be a good choice if you want to protect your mobile endpoints with more than just their login passwords but don’t want to purchase a separate MDM solution.

While MDM and SSO tools are useful, an even better method is to use an application construction kit such as the one Veracode offers and build security controls into each mobile app. If you go this route, you’ll want to think about the following five issues:

  1. To stop SQL injection, developers must prevent untrusted input from being interpreted as part of a SQL command. What is untrusted input? Basically anything and everything coming from the Internet. The best way to stop SQL injection is with the programming technique known as query parameterization (a minimal sketch follows after this list). While you are browsing this link, check out some of the other common vulnerabilities that the Open Web Application Security Project (OWASP) has compiled. Better yet, make sure your developers have the OWASP Top 10 covered and know where to find these flaws and fix them with the appropriate security controls.
  2. Treat your passwords with care. Never store them in plain text; instead, protect them with current techniques such as a random salt for each password and a one-way hashing algorithm (also shown in the sketch below).
  3. Do you need geofencing? If your staff travels outside of the United States, the ability to lock or permit access from particular geographies could be important for keeping the data on your mobile devices secure. There are numerous tools that can help build this into your app.
  4. Properly authenticate your user logins. An application should require users to re-authenticate before completing a sensitive transaction to prevent in-session hijacks. Apps should also make use of multi-factor authentication, either built into the app itself or deployed through an SSO or MDM tool.
  5. Know the vulnerabilities in your chosen programming language and make sure your code avoids them. There are various tools (such as from Veracode) that can help spot these.
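To make the first two items more concrete, here is a minimal sketch in Python using only the standard library. The table, column and function names are hypothetical examples rather than anything from Veracode’s toolkit; the same ideas apply to whatever database driver and framework your app’s back end actually uses.

```python
import hashlib
import hmac
import os
import sqlite3

# Item 1: query parameterization. The user-supplied value is passed as a bound
# parameter (the ? placeholder), never concatenated into the SQL string, so the
# database cannot interpret it as part of the SQL command itself.
def find_user(conn: sqlite3.Connection, username: str):
    cur = conn.execute("SELECT id, username FROM users WHERE username = ?", (username,))
    return cur.fetchone()

# Item 2: salted, one-way password hashing. Store only the salt and the derived
# hash, never the plain-text password.
def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # a fresh random salt for every password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison
```

Most database libraries expose an equivalent of the ? placeholder; the moment you find yourself assembling a SQL string out of user input, stop and reach for it instead.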

Quickbase blog: How citizen developers can do better with managing their own apps

We have written extensively about the rise of the citizen developer movement, whereby everyone can become a developer because of the widespread availability of rapid app dev tools.

IT professionals have been trained in how to manage their app portfolios, and there is no secret to doing this. Here are some ways to start thinking about how you can manage and build more capable apps.

First off, do you need to build a separate app, or can you deliver the same functionality with something running entirely on a Web server? While building apps can be cool, it might be easier to set up a URL that your users can access to run the app from their web browsers. Web-based apps can require more skill to get the user experience and the appropriate user controls right, however, and they demand a different skill set than building native apps.

Do you have a reliable pool of beta testers? You want to start recruiting people who will give you the best feedback and can identify bugs and other issues before the app gets deployed and used widely. Ideally, they will be users who are forthcoming with their experiences but don’t bury you with numerous and irrelevant email comments. That is often a fine line to walk, however.

How often do you need to update your app? The hard part about app construction isn’t the initial build, but what happens when your ultimate users get ahold of it and realize there are 17 new features they now want as a result of seeing your app. So plan up front for how these updates will happen, how often you intend to do them, who will take responsibility for them, and how the updates will ultimately end up in your users’ hands. Also think about what happens when you have to expire an app or remove it entirely from your portfolio.

Can you build the app with its own security built in? The answer should be yes: your users should be able to authenticate themselves and obtain the appropriate access rights without the app having to rely on external Active Directory permissions or other security apparatus. You should also build in a mechanism to protect the data generated or used by the app; a minimal sketch of the idea follows below.
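Here is a minimal sketch of what that app-level security might look like, independent of any particular rapid app dev tool. The role and permission names are hypothetical; the point is simply that authentication and access rights live inside the app rather than in an external directory.

```python
from dataclasses import dataclass

# Hypothetical roles and the permissions they grant inside the app.
ROLE_PERMISSIONS = {
    "viewer": {"read"},
    "editor": {"read", "write"},
    "admin": {"read", "write", "delete"},
}

@dataclass
class User:
    name: str
    role: str

def can(user: User, action: str) -> bool:
    """Return True if the user's role grants the requested action."""
    return action in ROLE_PERMISSIONS.get(user.role, set())

# An editor may update records but not delete them.
alice = User("alice", "editor")
assert can(alice, "write")
assert not can(alice, "delete")
```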

Do you need separate web, desktop, tablet and mobile versions of your app? If so, choose a tool that can create apps for those operating systems and form factors, or focus on one or two versions and stick with them. Learn about responsive Web design to make your webpages more effective and useful for users on smaller tablet and phone screens. Also, be sure to test the app with the collection of OS versions your users are running to make sure it operates as expected. And when Google and Apple come out with new OS versions, stay on top of those updates and verify that your app still works properly.

As you can see, building an app is just the beginning. But the more you cover these issues up front, the better your app experience will be for everyone.

The implications of automated license plate readers

One of the more interesting aspects of our surveillance society is the use of automated license plate reader (ALPR) technology by law enforcement to track the movements of vehicles. Our legal system is far behind in addressing this technology, and activists are just beginning to challenge our government over proper use, managing citizen expectations, and shedding daylight on the technology and the resulting data collections.

I got interested in this technology after several trips to Israel: the main north-south freeway uses both an ALPR system and an RFID system to bill tolls to drivers who use the road. If you have a transponder, you are recognized by the RFID system. If you don’t, the ALPR system will identify your plate and your rental company will send you a bill for the tolls a few weeks after your rental. These sorts of systems are also used in several cities around the world for congestion pricing (London has been doing this for years) when a driver enters the central business district.

Then I read this article in Motherboard about how the Philadelphia police department was using ALPR equipment mounted on a vehicle with Google markings. While police departments often use decoy vehicles with fake business logos to hide in plain sight, I think this is the first time anyone has attempted to pass off a vehicle as part of Google’s Street View fleet in this fashion. Naturally, Google was not amused, nor apparently was it consulted. And while the Philly police are allowed to collect license plate data, they can’t just appropriate a legitimate business’s identity.

These automated readers are pretty powerful tools: some can collect thousands of plates an hour and can even recognize mud-splattered plates through infrared imaging. When I did my ride-along with a St. Louis city cop, he had me running plates the old-fashioned way: by entering them one at a time into a car-mounted computer terminal to query a central database. It wasn’t a cumbersome process, but it did take a moment, provided I typed in the plate correctly. (There is a great plot point in the Amazon series “Bosch” that hinges on a mistyped plate number, for fans of that fictional detective.)

But you might start asking questions, as I did, about what happens to this data once it is collected. Obviously, the chances of abuse are huge. Several years ago, the ACLU issued a report that looked into the potential for abuse and warned that ALPR technology can be “deployed with too few rules and could become a tool for mass routine location tracking and surveillance.”

On top of this there is an open data movement that allows anyone with a webcam to upload the plates they have collected to a central website. While I am normally a fan of open data, this has great potential for abuse as well. What if one of my neighbors starts tracking my movements, even inadvertently?

There are no federal statutes limiting this collection of data, and each state has its own regulations about how long the data can be retained and who has access to it. Minnesota, for example, limits the data collected to active investigations and requires the rest to be destroyed. Some states have no requirements about destroying the data, and others allow agencies to keep it for years. And there are two private companies that collect plate data and share the information with insurers and car repo vendors, among others.

The Electronic Frontier Foundation has already brought one lawsuit against Los Angeles law enforcement. According to the EFF, the LAPD and the LA county sheriff’s department have collected millions of plates each week. The EFF has a nice page summarizing where rulings across the country stand at the moment.

So what can you do to defend yourself, especially if you aren’t a suspect? The answer is not much. You might be able to obscure your plate on your car if it is parked in your driveway on your own property, but once you drive it on a public street you have to keep it visible. Also, you should become familiar with the EFF talking points for activists, which are illuminating. In the meantime, keep an eye out for strange vans parked on your street.

iBoss blog: Turning the tide on polymorphic malware

Security startups are using the techniques of polymorphic malware to better protect enterprises, turning a tool from the hacker’s world toward good instead of evil. Let’s look at why this is important and why you should care.

Polymorphic malware is nasty stuff. It adapts to a variety of conditions, operating systems and circumstances, and evades whatever security scans and protection products stand in the way of infecting your endpoints. It is called polymorphic because it shifts its signatures, attack methods and targets so that you can’t easily identify and catch it.

But turnabout is fair play, especially when it comes to infosec. Now some very clever companies are taking the notion of polymorphism and using it as a defensive countermeasure. Vendors such as JumpSoft, Morphisec, Shape Security and CyActive (now part of PayPal) can make a target Web server or other piece of network infrastructure appear to change frequently so it can’t be easily identified or infected.

This can thwart attackers who are trying to identify your servers, domain accounts or unpatched endpoints and use targeted exploits to worm their way into your network. As Dudu Mimran, the CTO of Morphisec, says on his blog, “An attack is composed of software components and to build one, the attacker needs to understand their target systems. Since IT has undergone standardization, learning which system the target enterprise uses and finding its vulnerabilities is quite easy.”

Actually, polymorphism isn’t exactly new. Academics have been writing research papers about it for years under the rubric of “moving target defenses.” There have been two Association for Computing Machinery (ACM) conferences on the subject: one in November 2014 in Arizona and a second last November in Denver. Both covered many ways of implementing such a defense, such as with game theory and other advanced algorithms.

In an article for Network World, a Morphisec executive wrote about three categories of polymorphic defenses: network actions (such as changing a server’s apparent IP address), host actions (such as changing host names and other characteristics), and application actions (such as changing the memory layout a process uses to load and execute the app).

The products are still mostly at the startup stage, but they are quickly evolving and gaining customers. For example, Shape sells an appliance that sits behind an enterprise load balancer and, with a few configuration commands, can protect your network from DDoS, man-in-the-browser and account takeover attacks. It dynamically changes the code behind each page displayed by your web server every time it is loaded, which defeats many of the automated scripts used in these kinds of exploits. A toy sketch of the general idea follows.
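As a rough illustration of the technique (and not a description of Shape’s actual product), here is a Python sketch that renames the fields of a login form to one-time random aliases on every page load, so a script that hard-codes field names like username and password breaks:

```python
import secrets

def randomize_fields(fields: list[str]) -> dict[str, str]:
    """Map each real field name to a one-time random alias for this page load."""
    return {name: secrets.token_hex(8) for name in fields}

def render_login_form(aliases: dict[str, str]) -> str:
    """Emit a form whose field names change on every request."""
    return (
        '<form method="post">'
        f'<input name="{aliases["username"]}">'
        f'<input name="{aliases["password"]}" type="password">'
        "</form>"
    )

def decode_submission(aliases: dict[str, str], posted: dict[str, str]) -> dict[str, str]:
    """Translate the aliased field names in a submission back to the real ones."""
    reverse = {alias: real for real, alias in aliases.items()}
    return {reverse[k]: v for k, v in posted.items() if k in reverse}

# Each page load gets its own mapping; the server keeps it (for example, in the
# user's session) so the next POST can be decoded.
aliases = randomize_fields(["username", "password"])
html = render_login_form(aliases)
```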

Today’s polymorphic defenses generally perform a series of actions. First, some kind of trusted source controls the dynamic, real-time changes to a host server, such as a web or database server. Then they create something that isn’t easily recognized by typical attack patterns. These changes are implemented so that external users can’t predict what will happen, and thus can’t easily respond or use existing attack methods. Finally, the vendors make sure their code is hardened in such a way that it can’t be easily reverse engineered.

Whether these polymorphic defenses will prove vulnerable to even more sophisticated exploits isn’t yet clear. Nor is it a sure bet that they will remain workable given all the security features they have to manage under the covers. But at least the bad guys are finally getting a taste of their own evil-tasting medicine, and these defenses could prove to be a valuable tool in your security arsenal.

Quickbase blog: How to Make Scheduling Meetings Easier and More Productive

One thing that hasn’t changed about today’s office environment is that meetings are still very much in force. Certainly there are ways to make their end product (such as the linked spreadsheets poked fun at by this xkcd comic) more productive. But there are also productivity gains to be had from meeting scheduling, tracking and online calendar technologies. Before you dive into any of these, realize that you will probably need more than one tool, depending on your needs. In my post today for the Quickbase blog I talk about various tools that you can use.

Using citizen science to hunt for new planets

When I was growing up, one of my childhood heroes was Clyde Tombaugh, the astronomer who discovered Pluto. We have since demoted Pluto from its planetary status, but it still was a pretty cool thing to be the person who discovered a planet-like object. Today, you have the opportunity to find a new planet, and you don’t even need a telescope or lonely cold nights at some mountaintop observatory. It is all thanks to an aging NASA spacecraft and the way the Internet has transformed the role of public and private science research.

Let’s start at the beginning, seven years ago, when the Kepler spacecraft was launched. It was designed to take pictures of a very small patch of space that had the most likely conditions for finding planets orbiting far-away stars. By closely scrutinizing this star field, the project managers hoped to find variations in the light emitted by stars that had planets passing in front of them; a toy illustration of the idea follows below. It is a time-tested method that Galileo used to discover Jupiter’s moons back in 1610. When you think about the great distances involved, it is pretty amazing that we have the technology to do this.
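For a rough sense of what the volunteers and algorithms are looking for, here is a toy Python sketch that flags dips in a made-up light curve. Real Kepler data is far noisier, and the numbers and threshold here are invented purely for illustration.

```python
from statistics import median

def find_dips(flux: list[float], depth: float = 0.01) -> list[int]:
    """Return indices where brightness drops more than `depth` below the median."""
    baseline = median(flux)
    return [i for i, f in enumerate(flux) if (baseline - f) / baseline > depth]

# A fabricated light curve: brightness dips at two points, hinting at transits.
light_curve = [1.00, 1.00, 0.99, 1.00, 0.97, 0.97, 1.00, 1.00, 0.97, 0.97, 1.00]
print(find_dips(light_curve))  # indices of candidate transit points
```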

Since its launch, key parts of the spacecraft have failed, but researchers have figured out how to keep it running by using the pressure of sunlight to keep the cameras properly aligned. As a result, Kepler has been collecting massive amounts of data and faithfully downloading the images over the years, and more than 1,000 Earth-class (or M class, from Star Trek) planets have already been identified. There are probably billions more out there.

NASA has extended Kepler’s mission as long as it can, and part of that extension was to establish an archive of the Kepler data that anyone can examine. This effort, called Planethunters.org, is where the search for planets gets interesting. NASA and various other researchers, notably from Chicago’s Adler Planetarium and Yale University, have enlisted hundreds of thousands of volunteers from around the world to look for more planets. You don’t need a physics degree, a sophisticated computer or any Big Data algorithms. All you need is a keen mind and keen eyesight to pore over the data, plus the motivation to spot a sequence that would indicate a potential planetary object.

What is fascinating to me is how this crowd-based effort has been complementary to what has already happened with the Kepler database. NASA admits that it needs help from humans. As they state online, “We think there will be planets which can only be found via the innate human ability for pattern recognition. At Planet Hunters we are enlisting the public’s help to inspect the Kepler [data] and find these planets missed by automated detection algorithms.”   

Think about that for a moment. We can harness the seemingly infinite computing power available in the cloud, but it isn’t enough. We still need carbon-based eyeballs to figure this stuff out.

Planet Hunters is just one of several projects hosted on Zooniverse.org, a site devoted to dozens of crowdsourced “citizen science” efforts that span the gamut of research. Think of what Amazon’s Mechanical Turk does by parceling out pieces of data for humans to classify and interpret, except that instead of helping some corporation, you are working together on a research project. And it isn’t just science research: there is a project to help transcribe notes from Shakespeare’s contemporaries, another to explore WWI diaries from soldiers, and one to identify animals captured by webcams in Gorongosa National Park in Mozambique. Many of the most interesting discoveries from these projects have come from discussions between volunteers and researchers. That is another notable aspect: in the past, you needed at least a PhD or some kind of academic street cred to get involved with this level of research. Now anyone with a web browser can join in. Thousands have signed up.

Finally, the Zooniverse efforts are paying another unexpected benefit: participants are doing more than looking for the proverbial needle in the haystack. They are learning about science by doing actual science research. It takes something dry and academic and makes it live and exciting. And the appeal isn’t just to adults, but to kids too: one blog post on the site showed how nine-year-old Czech kids got involved in one project. That to me is probably the best reason to praise the Zooniverse efforts.

So far, the Planet Hunters are actually finding planets: more than a dozen scientific papers have already been published, thanks to these volunteers around the world on the lookout. I wish I could have had this kind of access back when I was a kid, but I also have no doubt that Tombaugh would be among these searchers, had he lived to see this all happening.

Quickbase blog: How Much Code Do You Need to Collaborate These Days?

Today coding seems ubiquitous: rapid application development tools can be found everywhere, and they have infected every corporate department. But what is lost in this rush is that you really don’t need to be a programmer anymore. Not because everyone seems to want to become one, but because the best kinds of collaboration happen when you don’t have to write any code whatsoever.

You can read my post about this topic on Quickbase’s The Fast Track blog here.

Why the original Soviet Internet failed

I am reminded of the Cold War today, with next week marking the 30th anniversary of the Chernobyl disaster. Leading up to that unfortunate event was a series of activities during the 1950s and 1960s, when we were in a race with the Soviets to produce nuclear weapons, launch manned space vehicles, and create other new technologies. We were also in competition to develop the beginnings of the underlying technology for what would become today’s Internet.

One effort succeeded thanks to well-managed state subsidies and collaborative research that worked closely with a central planning authority. The other failed largely because of unregulated competition that was stymied by a variety of self-interests. Ironically, we acted like the socialists and the Soviets acted like the capitalists.

While the origins of the American Internet are well documented, until now there has been little published research into those early Soviet efforts. A new book from Benjamin Peters, a professor at the University of Tulsa, called How Not to Network a Nation seeks to rectify this. While a fairly dry read, it nevertheless is fascinating to see how the historical context unfolded and how the Soviets missed out on being at the forefront of Internet developments, despite early leads in rocketry and computer science.

It wasn’t for lack of effort. From the 1950s onward, a small group of Soviet scientists tried to develop a national computer network. They came close on three separate occasions, but failed each time. Meanwhile, the progenitor of the Internet, ARPANET, was established in the US in late 1969 and became the basis for the technology we use every day.

Ultimately the Soviet-style “command economy” proved inflexible and eventually imploded. Instead of being a utopian vision of the common man, it gave us quirky cars like the Lada and a space station Mir that looked like something built out of spare parts.

The Soviets had trouble mainly because of a disconnect between their civilian and military economies. The military didn’t understand how to marshal and manage resources for civilian projects. And when it came time to deal with superstar scientists from its own army, the leadership faltered in deciding on proposed civilian projects.

Interestingly, those Soviet efforts at constructing the Internet could have become groundbreaking, had they moved forward. One was the precursor to cloud computing, another was an early digital computer. Both of these efforts were ultimately squashed by their bureaucracies, and you know how the story goes from there. What is more remarkable is that this early computer was Europe’s first, built in an old monastery that didn’t even have indoor plumbing.

Almost a year after ARPANET was created, the Soviets held a meeting to approve their own national computing network. Certainly, having the US ahead of them increased their interest. But they tabled the idea from Viktor Glushkov and it died in committee. It was bad timing: two of the leaders of the Politburo were absent the day the proposal was considered.

Another leading light was Anatoly Kitov, who proposed in 1959 that civilian economists use computers to solve economic problems. For his efforts, he was dismissed from the army and put through a show trial. Yet during the 1950s the Soviet military had long-distance computer networks, and in the 1960s it had local area networks. What it didn’t have were standard protocols and interoperable computers. It wasn’t the technology, but the people, that stopped these projects.

What Peters shows is that the lessons of the failed Soviet Internet (he affectionately calls it the ‘InterNyet’) have more to do with the underlying actors and the intended social consequences than with any lack of combined technical skill. Every step along the route he charts in his book shows some failure of one or more organizations holding the Soviet Internet back from flourishing the way it did here in the States. Memos got lost in the mail, decisions were deferred, committees fought over jurisdiction, and so forth. These mundane reasons prevented the Soviet Internet from going anywhere.

You can pre-order the book from Amazon here.

iBoss blog: How Stronger Authentication Methods Can Better Secure Cloud Access

There are many myths about cloud computing. One common one is that servers in the cloud are less secure than when they are located on-premises. Like so many other myths, this has some basis in fact, but only under a very limited set of circumstances. In my latest post for iBoss’ blog, I talk about ways to better secure your cloud-based servers using multifactor authentication (MFA) and single sign-on (SSO) methods to better protect these assets.
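To illustrate the MFA side of this, here is a minimal sketch of TOTP (RFC 6238), the time-based one-time password algorithm behind many authenticator apps, using only the Python standard library. The shared secret below is a placeholder; in production you would enroll users through a vetted MFA or SSO service rather than hand-rolling this.

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute the current TOTP code for a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval          # 30-second time step
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# Placeholder secret: the kind of value a user would scan from a QR code at enrollment.
print(totp("JBSWY3DPEHPK3PXP"))
```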

Dice (Strom archive)

I began writing articles for Dice’s business intelligence site SlashBI in June 2012. (Dice once owned Slashdot.)

Here are links to my articles that can be found on Dice Insights.

Dice Security Talent Community

From 2012-2014, I was the editor of this sub-site for Dice.com, the job listings site. (It has since been taken down.) Dice enhanced its content with a series of topic-oriented discussions and resources that would benefit its job-seekers. I curated the security content, included recommended resources, added my voice to the discussion forums, and wrote and posted short news articles.