One thing that hasn’t changed about today’s office environment is that meetings are still very much in force. Certainly there are ways to make their end products, such as the linked spreadsheets poked fun at in this xkcd comic, more productive. But there are also productivity gains to be had from meeting scheduling, meeting tracking, and online calendar technologies. Before you dive into any of these, realize that you will probably need more than one tool, depending on your needs. In my post today for the Quickbase blog I talk about the various tools you can use.
When I was growing up, one of my childhood heroes was Clyde Tombaugh, the astronomer who discovered Pluto. Since then, Pluto has been demoted from its planetary status. But it was still a pretty cool thing to be someone who discovered a planet-like object. Today, you have the opportunity to find a new planet, and you don’t even need a telescope or have to spend lonely, cold nights at some mountaintop observatory. It is all thanks to an aging NASA spacecraft and the way the Internet has transformed the role of public and private science research.
Let’s start at the beginning, seven years ago, when the Kepler spacecraft was launched. It was designed to take pictures of a very small patch of space that had the most likely conditions for finding planets orbiting far-away stars. (See above.) By closely scrutinizing this star field, the project managers hoped to find dips in the light emitted by stars as planets passed in front of them. Careful, repeated observation of tiny changes in the sky is a time-tested approach: it is how Galileo discovered Jupiter’s moons back in 1610. When you think about the great distances involved, it is pretty amazing that we have the technology to do this.
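To give a flavor of what the transit method is looking for, here is a toy sketch in Python. It is not Kepler’s actual pipeline, just an illustration under simplified assumptions: a star’s brightness is roughly constant, and a passing planet blocks a small, repeating fraction of its light.

```python
import numpy as np

def find_transit_dips(flux, threshold=0.995):
    """Return indices where normalized brightness drops below the threshold,
    i.e. candidate moments when a planet crosses in front of the star."""
    flux = np.asarray(flux, dtype=float)
    normalized = flux / np.median(flux)  # normalize so the baseline sits near 1.0
    return np.where(normalized < threshold)[0]

# Simulate a star's light curve: flat at 1.0 with small measurement noise,
# plus a ~1% dip every 50 samples from a toy transiting planet.
rng = np.random.default_rng(0)
flux = 1.0 + rng.normal(0, 0.001, 300)
for start in (50, 150, 250):
    flux[start:start + 5] -= 0.01  # each transit blocks about 1% of the light

dips = find_transit_dips(flux)
print(sorted(set(int(i) // 50 for i in dips)))  # which 50-sample windows contain dips
```

Real Kepler light curves are far noisier and messier than this, which is exactly why automated detection misses candidates and human pattern recognition still matters.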
Since its launch, key parts of the spacecraft have failed, but researchers have figured out how to keep it running, using the pressure of sunlight on the spacecraft to keep its cameras properly aligned. As a result, Kepler has been collecting massive amounts of data and faithfully downloading the images over the years, and more than 1,000 Earth-class (or M class, from Star Trek) planets have already been identified. There are probably billions more out there.
NASA has extended Kepler’s mission as long as it can, and part of that extension was to establish an archive of the Kepler data that anyone can examine. This effort, called Planethunters.org, is where the search for planets gets interesting. NASA and various other researchers, notably from Chicago’s Adler Planetarium and Yale University, have enlisted hundreds of thousands of volunteers from around the world to look for more planets. You don’t need a physics degree, a sophisticated computer, or any Big Data algorithms. Instead, all you need is a keen mind and the eyesight to pore over the data, plus the motivation to try to spot a sequence that would indicate a potential planetary object.
What is fascinating to me is how this crowd-based effort has been complementary to what has already happened with the Kepler database. NASA admits that it needs help from humans. As they state online, “We think there will be planets which can only be found via the innate human ability for pattern recognition. At Planet Hunters we are enlisting the public’s help to inspect the Kepler [data] and find these planets missed by automated detection algorithms.”
Think about that for a moment. We can harness the seemingly infinite computing power available in the cloud, but it isn’t enough. We still need carbon-based eyeballs to figure this stuff out.
Planet Hunters is just one of several projects hosted on Zooniverse.org, a site devoted to dozens of crowdsourced “citizen science” efforts that run the gamut of research. Think of what Amazon’s Mechanical Turk does by parceling out pieces of data that humans classify and interpret. But instead of helping some corporation, you are working together on a research project. And it isn’t just science research: there is a project to help transcribe notes from Shakespeare’s contemporaries, another to explore WWI diaries from soldiers, and one to identify animals captured by webcams in Gorongosa National Park in Mozambique. Many of the most interesting discoveries from these projects have come from discussions between volunteers and researchers. That is another notable aspect: in the past, you needed at least a PhD or some kind of academic street cred to get involved with this level of research. Now anyone with a web browser can join in. Thousands have signed up.
Finally, the Zooniverse efforts are delivering another unexpected benefit: participants are actually doing more than looking for the proverbial needle in the haystack. They are learning about science by doing actual science research. It takes something dry and academic and makes it live and exciting. And the appeal isn’t just to adults, but to kids too: one blog post on the site showed how nine-year-old Czech kids got involved in one of the projects. That, to me, is probably the best reason to praise the Zooniverse efforts.
So far, the Planet Hunters are actually finding planets: more than a dozen scientific papers have already been published, thanks to these volunteers around the world who are on the lookout. I wish I could have had this kind of access back when I was a kid, but I also have no doubt that Tombaugh would be among these searchers, had he lived to see this all happening.
Today coding seems ubiquitous: rapid application development can be found everywhere, and it has infected every corporate department. But what is lost in this rush to code everything is that you really don’t need to be a programmer anymore. Not because everyone seems to want to become one, but because the best kinds of collaboration happen when you don’t have to write any code whatsoever.
I am reminded today of the Cold War, with next week marking the 30th anniversary of the Chernobyl disaster. Leading up to that unfortunate event was a series of contests during the 1950s and 1960s in which we raced the Soviets to produce nuclear weapons, launch manned space vehicles, and create other new technologies. We were also in competition to develop the beginnings of the underlying technology for what would become today’s Internet.
One effort succeeded thanks to well-managed state subsidies and collaborative research that worked closely with a central planning authority. The other failed largely because of unregulated competition that was stymied by a variety of self-interests. Ironically, we acted like the socialists and the Soviets acted like the capitalists.
While the origins of the American Internet are well documented, until now there has been little published research into those early Soviet efforts. A new book from Benjamin Peters, a professor at the University of Tulsa, called How Not to Network a Nation seeks to rectify this. While a fairly dry read, it nevertheless is fascinating to see how the historical context unfolded and how the Soviets missed out on being at the forefront of Internet developments, despite early leads in rocketry and computer science.
It wasn’t for lack of effort. From the 1950s onward, a small group of Soviet scientists tried to develop a national computer network. They came close on three separate occasions, but failed each time. Meanwhile, the progenitor of the Internet, ARPANET, was established in the US in late 1969, and it became the basis for the technology we use every day.
Ultimately the Soviet-style “command economy” proved inflexible and eventually imploded. Instead of being a utopian vision of the common man, it gave us quirky cars like the Lada and a space station Mir that looked like something built out of spare parts.
The Soviets had trouble mainly because of a disconnect between their civilian and military economies. The military didn’t understand how to marshal and manage resources for civilian projects. And when superstar scientists from its own army proposed civilian projects, the leadership faltered in deciding on them.
Interestingly, those Soviet efforts at constructing the Internet could have become groundbreaking, had they moved forward. One was the precursor to cloud computing, another was an early digital computer. Both of these efforts were ultimately squashed by their bureaucracies, and you know how the story goes from there. What is more remarkable is that this early computer was Europe’s first, built in an old monastery that didn’t even have indoor plumbing.
Almost a year after ARPANET was created, the Soviets held a meeting to approve their own national computing network. Certainly, having the US ahead of them increased their interest. But they tabled the proposal from Viktor Glushkov, and it died in committee. It was bad timing: two of the leaders of the Politburo were absent the day the proposal was considered.
Another leading light was Anatoly Kitov, who proposed in 1959 that civilian economists use computers to solve economic problems. For his efforts, he was dismissed from the army and put through a show trial. Yet during the 1950s the Soviet military had long-distance computer networks and in the 1960s they had local area networks. What they didn’t have were standard protocols and interoperable computers. It wasn’t the technology, but the people, that stopped their development of these projects.
What Peters shows is that the lessons from the failed Soviet Internet (he wryly calls it the “InterNyet”) have more to do with the underlying actors and the intended social consequences than with any lack of combined technical skill. Every step along the route he charts in his book shows some failure of one or more organizations that held the Soviet Internet back from flourishing the way it did here in the States. Memos got lost in the mail, decisions were deferred, committees fought over jurisdiction, and so forth. These mundane reasons prevented the Soviet Internet from going anywhere.
You can pre-order the book from Amazon here.
There are many myths about cloud computing. One common one is that servers in the cloud are less secure than when they are located on-premises. Like so many other myths, this has some basis in fact, but only under a very limited set of circumstances. In my latest post for iBoss’ blog, I talk about ways to better secure your cloud-based servers using multifactor authentication (MFA) and single sign-on (SSO) methods to better protect these assets.
An interesting analysis from Digital Shadows recently described the hiring shortage that has befallen the black-hat hacker community. While most enterprise IT managers are frustrated trying to find skilled cybersecurity personnel for their own teams, there are some unexpected benefits, too.
I spoke to Ron Gula, the CEO of Tenable Security, who has witnessed this situation first-hand. Even though security budgets are increasing, “money can’t make smart people appear out of nowhere,” he told me. Finding new black-hat talent can be just as frustrating as making your next legit IT hire.
Most of us know by now that if you spot a random USB thumb drive sitting on the ground, you should ignore it, or better yet, put it in the nearest trashcan. This action was an early plot point in the TV series Mr. Robot. I even saw a poster at Check Point’s Tel Aviv headquarters when I visited there in January, warning employees to dispose of such drives when found on their campus.
But still, human nature gets the better of us sometimes. A recent academic paper by researchers at the Universities of Michigan and Illinois shows just how tempting that drive can be for college students. The study found that of 300 drives sprinkled around campus, at least half were retrieved and inserted into computers. In some cases, the drives were inserted within a few minutes of being left.
These drives contained special code that would “phone home” and alert the researchers that they were found, but they could have contained more dangerous malware. Which is the point of this depressing exercise.
What is interesting about the paper is the lengths the researchers went to in order to understand their targets’ motivations and rationales for picking up the drives in the first place. The finders were asked to complete a survey (with a $10 payment for completing it; after all, these are college students). Two thirds of them said they took no precautions before connecting the drives to their computers.
They also tested whether the time of day, the location, and the branding of the drive itself made it more or less likely to be retrieved. For branding, the researchers attached a “confidential” sticker, a return address label, or a set of keys to see if that made a difference. Interestingly, the return address label actually reduced insertion rates. The researchers also monitored Facebook and Reddit to see if any students posted warnings about the proliferation of drives around campus. Despite several postings, and the fact that word spread quickly on these networks during the experiment, the drives were still retrieved.
This isn’t the first, and certainly won’t be the last such study. Several years ago, the Department of Homeland Security found that 60% of folks who found drives planted outside government buildings tried them out, and this percentage increased to 90% when the drives had a logo on them indicating some sort of official use. And last fall, a study commissioned by the trade group CompTIA found that 20% of 200 drives that were sprinkled across five cities were retrieved.
Certainly, there are some drives that are truly evil, such as this drive reported by Gizmodo that will literally cook your motherboard. Or the infamous Rubber Ducky drive used by penetration testers.
Bruce Schneier complained about this meme years ago, and wrote in a blog post:
“The problem isn’t that people are idiots, that they should know that a USB stick found on the street is automatically bad and a USB stick given away at a trade show is automatically good. The problem is that it isn’t safe to plug a USB stick into a computer. Quit blaming the victim. They’re just trying to get by.”
Certainly, better and more frequent security education would be a good idea. The college survey found that students perceived the files on the flash drives as safer because they used .html extensions. Uh, not quite. But there is some hope: a few students were suspicious and opened the files in a text editor, or connected the drives only to offline computers.
You can read my post here about the threat of Internet-connected printers.
We have reached a point where computers are needed to make our medical providers act more human. The idea is to use virtual reality techniques and programs to help train doctors to deal with health emergencies and other clinical care situations. The MPathic-VR system covers a wide range of situations and real-world behaviors that are typical in a clinical situation. You can read my story on Medical Cyberworlds (who developed the system) on Dice today here.
Last week I took my first couple of Uber rides when I was in Los Angeles. I had resisted the temptation for some time, for several reasons. First, I wasn’t happy with their corporate culture and saw my one-man boycott as something personally meaningful, if a bit useless. Second, ride hailing is illegal here in St. Louis, where we have a Neanderthal taxi commission that has laid a nice featherbed for its own drivers. Finally, I don’t take all that many taxis anyway, other than to and from the airport, and again, see point #2.
The Uber trips in LA were very enlightening. Both drivers appeared within minutes of my clicking the request in the Uber mobile app. In one case, I was at LAX airport and got to see how efficient the Uber pickups were: in the short time that I was waiting for my driver, about a dozen millennials had met their drivers and zoomed off. Even before they got into their cars, I could tell they were Uber customers: they were staring at their screens, watching their cars approach the airport. LAX, unlike St. Louis’ Lambert airport, allows Uber to pick up passengers in certain spots between the terminals. There is no need to queue up as at a “normal” cabstand, because you have already been assigned a driver.
This watching your car approach – or indeed, any nearby Uber car available at that moment – is the real genius idea behind the service. Often I have waited for a taxi pickup, not knowing where the cab is. With Uber, this uncertainty is removed. You have a countdown clock that tells you, quite accurately, when your car will arrive. You see the name of the driver, the license plate, and the make and model of the car, and you can directly contact the driver to confirm exactly where you will be. On one ride, for some reason the app displayed a nonsense address for my location, but the driver called me and we clarified where I was actually standing.
Most of the cars that morning at LAX were Priuses, and both my rides were Priuses, too. (Cnet has a funny story about how people just assume that all Priuses are Ubers here.) One driver explained the economics of operating a fuel-efficient car like the Prius, showing me how much more profitable the hybrid can be. The cars were clean, relatively new models. One had a charging cable for my phone, a nice touch. The rides also cost about 20% less than a typical cab fare. On my return to the airport, the Uber app told me that because of congestion at that moment I would have to pay 30% more for a ride, or I could wait a few minutes for the price to drop. I waited, and the app notified me when the price dropped so I could book my ride. That is another nice touch.
A final benefit is that when you get to your destination, you just get out of the car. There is no need to go through the payment process: that is handled automatically by the app. The driver doesn’t carry any cash: your fare is deducted from your credit card and the driver’s fee is added to his or her bank account. You then get an email receipt within seconds.
Both of my drivers shared that they were making decent livings with Uber, more than $50,000 a year and about $30 an hour. That is more than they would make driving a regular yellow cab in LA. One of my drivers was a former cabbie and told me that he never made as much as he does now with Uber. Both drivers also mentioned that they can drive when they want to: one gets up early and covers the morning rush, then takes a few hours off and returns for the afternoon and evening. Many cabbies don’t have that flexibility because they aren’t working for themselves; they have to make the most of their employer’s cabs.
Granted, my data is just anecdotal. What about overall trends? Fortunately, the New York City taxi commission data is available for anyone to download, and Todd Schneider has done just that. His latest post shows that there are more Uber cars in the city, and, not surprisingly, that yellow cabs are losing market share in terms of the number of daily riders, even though they take more fares per cab.
Schneider also shows that the market for Uber is becoming more competitive, as the number of cars on the road has rapidly increased. (Lyft, Uber’s main competitor, has a smaller market share.) This could be one reason why Uber is dropping its prices in NYC. Schneider estimates that Uber made about $220 million during all of 2015 in NYC. Given their commission rate, that means they added about a billion dollars to the city’s economy last year.
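The arithmetic behind that billion-dollar figure is easy to sanity-check. Here is a quick sketch, assuming Uber’s commission was somewhere around 20 to 25 percent of each fare at the time (that rate is my assumption for illustration, not a figure from Schneider’s post):

```python
# Back-of-the-envelope check of the "billion dollars" estimate.
uber_revenue = 220e6  # Schneider's estimate of Uber's 2015 NYC take, in dollars

# Assumed commission rates (roughly 20-25%; the exact rate varied over time).
for commission in (0.20, 0.25):
    gross_fares = uber_revenue / commission  # total fares paid by riders
    print(f"At a {commission:.0%} commission, gross fares are about "
          f"${gross_fares / 1e9:.2f} billion")
```

Either way you slice it, $220 million in commission implies roughly $0.9 to $1.1 billion in total fares flowing through the city’s economy.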
I know I am late to the ride-hailing party, but these services are certainly changing the economics and the process of taking taxis. I think they have a lot of benefits, and I will certainly use them more frequently in the future. I hope they can win their legal battles here in St. Louis and elsewhere around the world.