Lessons learned from enterprise wikis

In preparation for a speech that I am giving next month at the New Communications Forum conference in Las Vegas about how Web 2.0 works for public relations, I came across what Razorfish is doing. The company, which now reaches across 16 offices and more than 1,000 staffers, is an interactive advertising and digital marketing agency that has grown through acquisitions since the early days, before we even knew enough to call it Web 1.0.

Razorfish has developed its own enterprise wiki that combines elements of a corporate blog, mailing lists, and other collaborative technologies. I had a chat with Ray Velez, who works in the Razorfish New York office. Ray gave me a brief tour and spoke about some of their lessons learned. The wiki is useful for spreading knowledge across their wide network and keeping people up to date on best practices, and even on previous work they have done for their clients. For example, Ray mentioned a weather feed that one client wanted to incorporate into their Web site. Using the wiki, a staffer was able to track down three previous feed projects and present the client with this work in a matter of minutes. Without the wiki, it would have taken numerous phone calls and emails to track this information down within the corporation, or the staffer might have had to build it from scratch without the benefit of this knowledge.

You can get a taste for what they have developed by reading their case study, published last year by Andrew McAfee, an Associate Professor at Harvard Business School.

Based on my conversation with Ray, here are some of the lessons they have learned from living with this project.

Lesson #1: Open source is great, but you can’t just slap the code in your organization without some modifications. Razorfish has one full-time intern and two part-time developers who maintain their code. They make use of MediaWiki (the open source code that powers Wikipedia) as well as WordPress’s blog software, among other things they have developed themselves. Ray told me that the integration process is “constant” and has taken them over 18 months to get to where they are today. While open source gave them a leg up on development, there is always more to be done.

We have grown up understanding that because the Web makes it really easy to fix little things, you end up making lots of small, incremental changes to your site. With open source projects, you want to do the same thing and make small tweaks, so they become living entities.

Lesson #2: You can only be so open; authentication matters. Razorfish put in place code that pulls information from their Active Directory servers. This enables single sign-on and also makes sure that users can be identified and held accountable. Obviously, this is part of their corporate practices, but it wasn’t easy. The MediaWiki code didn’t have any simple mechanisms for AD integration, and even had a commented-out, non-working section of code. Ray plans on giving back the fruits of their efforts at some point in the near future, in the grand spirit of the open source tradition.
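
I didn’t see Razorfish’s actual integration code, but the general technique is straightforward: have the wiki validate a login by attempting an LDAP bind against a domain controller. Here is a minimal sketch in Python using the ldap3 package; the host name, domain, and credentials are made-up placeholders, not anything from their system.

```python
# A minimal sketch (not Razorfish's actual code) of validating a wiki login
# against Active Directory over LDAP, using the ldap3 package.
# The host and domain below are made-up placeholders.
from ldap3 import Server, Connection, ALL, NTLM

def ad_authenticate(username: str, password: str) -> bool:
    """Return True if the domain controller accepts these credentials."""
    server = Server("dc.example.corp", get_info=ALL)
    try:
        # An NTLM bind in DOMAIN\user form; if the bind succeeds, the
        # password is valid and the user is identifiable by name.
        conn = Connection(server,
                          user="EXAMPLE\\" + username,
                          password=password,
                          authentication=NTLM)
        ok = conn.bind()
        conn.unbind()
        return ok
    except Exception:
        return False

if __name__ == "__main__":
    print(ad_authenticate("jdoe", "not-a-real-password"))
```

The nice side effect of binding with the user’s own credentials is that the wiki never has to store passwords itself; the directory remains the single source of truth.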

Lesson #3: Security matters a lot. Razorfish’s wiki is behind its corporate firewall and is just for employee-to-employee communications. In the olden days of the Web, say 1997, we would have called this an intranet. This might seem at cross purposes with the idea of open collaboration, and there are suitable warnings placed around the site reminding users not to post highly confidential information that shouldn’t be shared across the corporation. Still, being open source doesn’t mean being lax about security.

Lesson #4: Search matters, too. Part of the custom code they wrote enables search across all wiki and blog content, to make it easier for employees to find something of interest. As with any Web site, you can never spend too much time investing in a better search routine.
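
Their unified search is custom code I didn’t see, but the underlying idea is easy to illustrate: pool documents from both the wiki and the blog into a single index so one query covers everything. Here is a toy sketch in Python with made-up document names:

```python
# A toy illustration (not Razorfish's code) of pooling wiki pages and blog
# posts into one searchable index so a single query covers both sources.
from collections import defaultdict

documents = {
    "wiki:weather-feed":   "How we built the XML weather feed for a retail client",
    "wiki:ad-integration": "Notes on Active Directory single sign-on for the wiki",
    "blog:best-practices": "Best practices for sharing client work across offices",
}

# Build an inverted index: each lowercase word maps to the documents containing it.
index = defaultdict(set)
for doc_id, text in documents.items():
    for word in text.lower().split():
        index[word].add(doc_id)

def search(query: str) -> set:
    """Return the documents containing every word in the query."""
    words = query.lower().split()
    results = [index.get(w, set()) for w in words]
    return set.intersection(*results) if results else set()

print(search("weather feed"))   # {'wiki:weather-feed'}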

Razorfish has developed a very interesting application, to be sure. If you have a similar project you would like to tell me about, drop me a line.

Getting the most out of RSS

Really Simple Syndication is everywhere. RSS has been incorporated into online news portals and the latest version of Internet Explorer. It’s how the blogosphere has grown so quickly, and it lies at the core of popular new media sites like MySpace and YouTube. Yet this technology still seems foreign to many media professionals. I will be doing a session at the New Communications Forum on March 8th in Vegas. The session is geared towards journalists and PR professionals, and I will cover how to:

  • Understand the media’s use of RSS
  • Use RSS feeds to provide real value to you and your audience
  • Set up an RSS newsreader and customize feeds to track news on your industry, clients and competitors (see the sketch after this list)
  • Use RSS to become a better PR resource to the media
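
To make the newsreader item concrete, here is a rough sketch of the kind of script I have in mind, using the Python feedparser package; the feed URLs and keywords are placeholders you would replace with your own industry, client and competitor sources.

```python
# A minimal sketch of the feed-tracking idea: poll a few feeds and flag
# entries that mention a client or competitor. The feed URLs and keywords
# are placeholders. Requires the feedparser package.
import feedparser

feeds = [
    "https://example.com/industry-news.rss",
    "https://example.com/competitor-blog.rss",
]
keywords = ["acme corp", "widget recall"]

for url in feeds:
    parsed = feedparser.parse(url)
    for entry in parsed.entries:
        text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
        if any(kw in text for kw in keywords):
            print(entry.get("title"), "->", entry.get("link"))
```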

What the Linux merger really means for Oregon job growth

[This article was contributed by Brian Walsh, who runs a Portland-based service firm. His practice concentrates on enterprise software, providing counsel, architecture and development. He enjoys a hands-on approach and divides his time between policy issues and technical challenges. Brian has worked as an IT manager and as an editor at Network Computing and theServerSide.com, and he can be reached at www.bwalsh.com. This article was written for the Software Association of Oregon.]

There is no arguing with success. Linux and open source in general have made significant advances in market penetration over the past five years or so. On the other hand, as I look around my local coffee shop all I see are Windows boxes and the occasional Portland-bike-messenger way-cool Mac.

It seems there are a few points of view one can take regarding the announced merger of OSDL and FSG:

  • That’s it. Linux has reached a leveling-off point. The enterprise has been breached and sponsoring companies are consolidating their investment in trade groups.
  • It’s a launching point. Linux vendors are ready to collect their respective energies and attack the desktop, for real this time.
  • Where are all the jobs?

Regarding Linux on the desktop, maybe this time it will be different. This has been a recurring topic among the punditry for at least fifteen years. I remember back in the late ‘80s there were rumors running around about Sun and Apple combining forces. Forget about what the press says. Judge the likelihood or progress toward this goal by asking yourself a few questions:

  • How did it go the last time you tried to install Linux on your laptop?
  • How did it go the last time you tried to give up MSOffice and use Open Office?
  • What answer did you get from your favorite software vendor when you asked about a Linux port? John Dvorak has focused on this point.
  • Ask your friendly IT person to pronounce “Ubuntu”.

Licensing issues still dog open source progress

As for enterprise adoption of Linux, one of the goals of OSDL has been to champion the legal issues surrounding licensing. Work still needs to be done here. Licensing issues and threats of suits still take up way too much ink. Not surprisingly, efforts at rapprochement have not borne fruit. There is little case for suing end users and small vendors; the downside loss of goodwill is simply too costly. At the same time, the threat is too good a tool to abandon. As an honest broker between Linux vendors, OSDL can only hope that competitive pressures will encourage Linux vendors to package indemnification. Further growth of Linux as a server platform will depend more on the applications and tools further up the stack than on the OS itself. The new Linux Foundation should concentrate on keeping licensing issues out of the press.

Where are all the jobs?

The first thing that caught my eye when I heard about the consolidation of OSDL and FSG was the layoff of a third of the staff. The Portland area has been on the losing end of too many consolidations. Then I realized that a third of the staff meant nine jobs. If we believe OSS monetization is based on a service model requiring lots of warm bodies (a tougher business model than selling licenses), one has to wonder: Where are all the jobs?

Oregon wants to be known as a center of OSS. Up until now, that has meant being known as the home of Linus, OSDL and O’Reilly’s Open Source Conference. It is encouraging that, as a result of the merger, the Linux Foundation will remain here in Portland and the new COO will move to the area. After the short-term consolidation, the staff will doubtless continue to grow.

However, in order to develop the jobs that come with it, we need to realize that open source encompasses a lot more than Linux. Communities clustered around languages such as Java, Ruby and PHP, and application domains like bioinformatics, media players and mobile devices, have significant open source efforts. Community source (open source aligned around a vertical industry segment) continues to emerge. It is not all about the OS. In my experience, most open source developers use and deploy to Windows as much as Linux.

Building Oregon’s open source traction

The barriers to entry in the software industry are low and our slim competitive advantages are not to be squandered. Quite apart from the Linux Foundation’s activity, we should ensure that whatever buzz is associated with OSS and Oregon continues to grow. It is a mistake to cast this in terms of MS vs. Linux. OSS is a rising tide that will lift all boats.

If Oregon wants to enhance the reputation it has developed for open source and develop more jobs, it needs more examples of success. Oregon needs some traction as a center for open source efforts other than Linux itself. Job growth will be found by identifying opportunities in the problems and challenges of markets larger than our own, while continuing to attract and hold talent.

Google hacks

Allow me to show you how to hack into your own Web site. You don’t need any specialized tools, and you don’t need any specialized skills either. All you need is a Web browser and the ability to enter the appropriate search syntax to Google your own site, or anybody else’s for that matter. It doesn’t take much time, and the payoffs could be huge: an intruder could easily obtain a copy of your most sensitive data in about the time it takes to read through this essay.

The trick is using Google’s search engine to look for specific terms, such as passwords, salary details, and customer details. The opportunities are enormous. Many Web sites contain inherent design flaws that leave them ripe for exploitation. These flaws are not immediately obvious and the fixes are not simple.
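
To make that concrete, here are a few of the kinds of queries I mean, expressed as a small Python script that simply prints them so you can paste them into Google against your own domain. The domain is a placeholder; the operators (site:, filetype:, intitle:, inurl:) are standard Google search syntax.

```python
# Illustrative only: build a few advanced-operator queries scoped to your
# own domain and print them for manual checking in Google's search box.
# The domain is a placeholder; the operators are standard Google syntax.
domain = "example.com"

audit_queries = [
    f'site:{domain} filetype:xls "salary"',         # stray spreadsheets
    f'site:{domain} filetype:sql "insert into"',    # database dumps
    f'site:{domain} intitle:"index of" "backup"',   # open directory listings
    f'site:{domain} inurl:admin',                   # exposed admin pages
]

for q in audit_queries:
    print(q)
```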

I wrote about this exploit, called Google Hacking, in an article for today’s New York Times Circuits section.

It was a fun story to report, and I thought I would take a moment to tell you about things that didn’t make it in there.

First and foremost, there is an updated version of a great O’Reilly book of the same name.

The term really refers to a lot of different things. In my NYT article, I talk about the dark side, about ways that bad guys can uncover sensitive information, or pages that you might not realize are available to the general public. But there are a lot of neat things that you can do with Google that are much more benign and fun, and can really stretch your ability to look for particular information. Here is one that you probably didn’t know about: you can type in “13 miles in kilometers” in Google’s search box and it will do the conversion for you.

Back to the dark side, though. I spoke to a lot of different people in law enforcement, and one of the things that struck me during these interviews is how hard it is to prosecute someone who has been using Google to obtain information illegally. You need some tangible, physical evidence, and the very nature of a Google hack is that the intruder never leaves any footprints on the target site. Still, I was impressed with how technically savvy the police are, at least the ones I spoke to, who understand these issues and aren’t taking these exploits lightly.

While these exploits have been known for many years in the IT community, they aren’t well known to the general business and consumer audience, which is why I wanted to write about them. Some people may ask: why give people the information to cause trouble? In my article, I actually show a sample piece of search syntax that can bring up vulnerable sites, which probably is a first for the Times.

I look at it differently: the bad guys already know about these exploits, and the challenge is to educate the general population, especially the smaller businesses that don’t always protect themselves. This isn’t just leaving your back door open; it is putting a 40-foot neon sign out front with a big arrow pointing out that millions in valuables can be found in your top dresser drawer. And the problem intensifies if someone can take over your site and use it to launch their own mischief or, worse, illegal activities.

The article mentions two Web sites that are great resources for more technical folks. One is Johnny Long’s site.

Long compiles hundreds of vulnerabilities that have already been indexed by Google, and the site is full of great examples of search terms that you can plug in to find passwords and default configuration pages that will take you to some interesting places.

The other site is OWASP.org. The chair of this industry organization is Jeff Williams. He told me “most Web applications respond to attacks quite happily, without detecting them and without taking any defensive actions. Network security mechanisms like firewalls, intrusion detection, and hardened operating systems can’t detect or prevent these attacks because they don’t know anything about a company’s custom application code and how it works. And, unfortunately, the innocent code doesn’t defend itself.”

Speaking of defending yourself, what can you do? First, make sure you are secure. Williams says, “companies that don’t know whether their applications are secure or not should start by verifying a few of them to find out.” And if you have information that you don’t want Google to index, remove it.
Here is some information that Google publishes to show site operators how they can remove their content from the search index.
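
Keep in mind that a robots.txt rule only tells crawlers to stay away going forward; pages already in the index need the removal process Google describes at that link. As a rough sketch, once you have added a Disallow rule for your sensitive directories, you can verify it with Python’s standard robots.txt parser; the URLs below are placeholders.

```python
# A small sketch: after adding a Disallow rule for sensitive paths to your
# robots.txt (e.g. "User-agent: *" followed by "Disallow: /private/"),
# verify that crawlers are actually excluded. The URLs are placeholders.
import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()

# Googlebot should be refused access to the directory you want kept out of
# the index; your public pages should still be crawlable.
print(rp.can_fetch("Googlebot", "https://example.com/private/salaries.xls"))
print(rp.can_fetch("Googlebot", "https://example.com/index.html"))
```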

Second, take security audits seriously, and do them often. Howard Schmidt, the former federal cyber security chief, talks about how you have to do security scans continuously. You can’t just rely on an annual audit, or even a quarterly audit, because sites are organically changing and new exploits are being uncovered every day.

Third, train your developers to be aware of these and other common exploits, and reserve some funding for security assessments as part of all contracting projects you do in the future. Use the sample legal contract language from OWASP.org when you have to hire out for help, and also take a look at their tutorials to harden your site.

Fourth, don’t think that Google hacks are the only story. There are plenty of other ways to get information from Web sites. Read my white paper for Breach Security about SQL injection, if you haven’t already, to see how easy this exploit is.
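
For the flavor of it, here is a generic illustration (not taken from the white paper) of a vulnerable, string-concatenated query next to a parameterized one, using Python’s built-in sqlite3 module; the table and data are made up.

```python
# A generic illustration of why SQL injection is so easy: the first query
# splices user input straight into the SQL, the second binds it as a
# parameter. The table, column and data are made up for this example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "' OR '1'='1"   # a classic injection string

# Vulnerable: the quote in user_input rewrites the WHERE clause,
# so this returns every row even though no password matched.
bad = conn.execute(
    "SELECT * FROM users WHERE password = '" + user_input + "'").fetchall()

# Safer: the driver treats the input as data, not SQL, so nothing matches.
good = conn.execute(
    "SELECT * FROM users WHERE password = ?", (user_input,)).fetchall()

print(bad)    # [('alice', 's3cret')]
print(good)   # []
```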

Finally, keep what Long told me in mind: “Google hacking, cross-site scripting and SQL injection vulnerabilities have been present in every Web site and application I have audited. Every single one. Bear in mind that some Google-hacking style vulnerabilities are more revealing than others, but it is a pervasive threat.”

A look back to 1995

Web Informant turns 11 this month. Hard to believe that I have been writing these things, more or less continuously, for so long. Harder still to believe that many of you have been reading them (and commenting on them too) for so long. So first off, a boatload of thanks. It has been a lot of fun to write these things, and I hope I can keep them coming for another 11 years.

I got into a reflective mood this morning, after taking a trip down memory lane by reading Techweb’s excellent historical view of the Web.

They claim that the Web was invented in the summer of 1991, although I have seen references to earlier dates. Most of us didn’t really start using it until the first Windows and Mac browsers came out a few years later.

So let’s go into the Wayback machine and see what was happening 11 years ago:

We had browsers that were just beginning to display tables and images in-line, and Netscape was still the dominant force in browsing technology. They began developing their own browser extensions then, which was the beginning of their demise, helped by Microsoft, too many programmers, and AOL along the way. Now Microsoft is losing browser mind share to Firefox. Funny how the pendulum swings back and forth.

If you got a copy of these early browsers, they fit on a single floppy disk, and we still had PCs that came with floppy drives too. For those of you too young to remember these, the ones we used in 1995 could hold a megabyte of data and were small enough to carry in your shirt pocket, back when shirts had pockets. Now we can buy USB key drives that hold 1,000 times as much for about $50. I guess it is time to throw out my collection of floppies.

Around 1994 the Web started to take off, with some reports putting the growth from the low thousands of sites to more than 25,000 by the end of the year. Back then, email and FTP traffic were the dominant information flows, and let’s not forget about Gopher, the menu-driven precursor to the Web, too.

Before 1994, the computers that had TCP/IP protocols built in were called Unix computers. The rest of us had to deal with installing a separate piece of software that handled communications. Remember NetManage? Microsoft Windows for Workgroups and Mac OS 7.5 both added support for TCP/IP that year.

Back in 1995, OS/2 was still a viable operating system and IBM had high hopes that it would still become popular, even going so far as to take its Warp codename and use it on the product. And OS/2 had built-in TCP/IP protocols, if memory serves correctly, long before Windows did. I remember that IBM had Kate Mulgrew appear at the launch event; she played Captain Janeway on Star Trek: Voyager. Linus Torvalds was still in graduate school in Helsinki, and Linux was just beginning to reinvigorate the open source world and make Unix safe for the rest of us.

Back in 1995, I first started writing about how the browser was turning into its own operating system and computing environment. Now we have plug-ins galore for all of the major browser versions, and many commercial software products have some kind of browser interface too.

Back in 1995, there weren’t too many affordable choices for broadband access; indeed, I don’t think the term was in much use then. I think I was still using an ISDN line, and happy to get all of the 112 Kbps of connectivity that I got. Cable modems and DSL lines would happen later. Back then, we had lots more phone companies too, before they all started combining with each other. And AT&T was still selling just long distance and was not yet the local provider for the middle of the country. MCI was still doing business, and UUnet was one of the stronger ISPs around. Neither had gotten involved with Bernie Ebbers’ thievery yet.

Back in 1995, I had already had my own domain name, strom.com, for several years, which seemed like a novelty at the time. It was easy to obtain a domain name, and they didn’t cost anything either. Cybersquatting, phishing, ad banner tracking, and cookie stuffing were all still relatively unknown. Blogs hadn’t been invented yet, nor had podcasts, wikis, or mashups. We were still using Yahoo to search the Internet. It was a time of relative innocence. No one used VPNs. Routers still cost thousands of dollars. Ethernet was locked in a battle with Token Ring, and wireless networks were expensive and not found anywhere near places selling coffee.

If you want to go into the Wayback machine back 20 years, take a look at something that I wrote here.

AOL offers free online storage

AOL, through its Xdrive subsidiary, will offer 5 GB of free online storage next month. The best coverage of this is from Glenn Fleishman over at TidBITS, who also vents a bit (justifiably so) about other AOL activities of the past few days. My favorite graf:

What all this means for AOL is hard to say. AOL’s software still stinks. AOL’s email filtering is highly erratic. Any of us who run mailing lists are familiar with suddenly having all of our double opt-in, fully approved AOL users bounce our email for some obscure reason that’s impossible to address directly with AOL.

As if we all don’t have enough to do

Yahoo and Tim O’Reilly have come up with a fantasy stock market called the Buzz Game. You buy and sell shares of technologies based on whether you think they are hot or not, using Monopoly-style play money.

They give their reasons:

  • To see if search buzz (including spikes and trends) can indeed be predicted by the collective wisdom of crowds in a market
  • To provide an index of “what’s next” for technology enthusiasts
  • To separate the wheat from the chaff among the various technologies that O’Reilly is constantly watching and tracking; to measure which forces in the technology industry are truly disruptive and which are mere flashes in the pan
  • To discover how Yahoo! Research’s dynamic pari-mutuel market mechanism behaves in the “wild”
  • To investigate opportunities around predicting trends in search engine behavior, and how they relate to events in the real world
  • Last but not least, to entertain and engage participants in the game

I guess after fantasy fly fishing, this is the next big thing.