Remember when Push was on the cover of Wired magazine, and companies were coming and going faster than you can say bubble?
Let's hop into our Web time machine, pop out around July 2000, and see who has survived the push madness.
Dave Piscitello and Lisa Phifer run a boutique consulting firm called Core Competence. Back in the day, we took a very thorough look at the various kinds of all-in-one access/firewall/server appliances that could be used to run a small business in a box.
With all due respect to Tim Bray’s recent analysis of IE 5’s use of XML, I think he is missing the point. Microsoft’s entry into the XML universe will do a lot more harm than good for the XML standards effort initially, and has the ultimate intention of replacing the way most of us create and exchange documents.
First off, Tim looked at whether Microsoft’s version of XML (for brevity, let’s call it MS-XML) implements the standards appropriately and within certain parameters. That is fine, but it is hardly newsworthy to hear that MS-XML has bugs and does some things differently than the current standards.
The real news is how MS-XML is designed from the beginning to be the common file interchange format for all Microsoft Office 2000 applications. In doing this, Microsoft has taken its time-honored practice of embracing and extending an ongoing standards effort to an extreme. This time, Microsoft has something other than XML in mind: it is trying to move people away from ordinary HTML v3 documents and make Office 2000 the standard tool for web authoring. And while earlier efforts, FrontPage being the most memorable, haven’t really caught on, I think this time Office 2000 has a solid chance.
Let me explain. Up until a few years ago, I received non-Microsoft Office documents in the mail from my correspondents. Now it is rare that I get that errant WordPerfect or Lotus 1-2-3 file: indeed, when I do, I often castigate my correspondents and tell them to send me the Microsoft equivalents. This isn’t because I love Microsoft products: it is because that is what the world uses. Remember revisable-form text? Gone. Remember non-PowerPoint presentations? All but extinct. Microsoft Office is the default document interchange standard today.
But to make interchange workable, we still have one remaining issue and that is version control. When Office 97 came out, many people were still running Office 95 or earlier versions and couldn’t read the newer document formats. Of course, this encourages people to upgrade when just a few start sending out the newer formatted documents, but for corporations that want to exchange information easily, it is a painful upgrade. Far better to use a standards-based format, and here MS-XML is perfect for this purpose.
While it is wonderful that Microsoft has decided to support XML in its browser, the bigger news is what it has done with XML in the rest of Office 2000 components, including Word, PowerPoint and Excel.
A disclaimer: I am by no means adept at XML. I can write very rudimentary HTML code for maintaining my own web site, and my programming days are long since over. But perhaps this is why I am so sensitive when it comes time to evaluate MS-XML. My bottom line: I can’t read MS-XML pages and am too old to start learning how.
So let’s examine the code produced by Word 2000 for a couple of simple examples. For these tests, I am running Beta 9.0.2216 on Windows 98. I wrote a one-page document with the single line “Hello World” and saved it as an HTML-formatted file. When I view the source, I see a rather lengthy page of text. The header of the page includes all sorts of font metric definitions, meta tags and file information. The first few lines look like this:
<html xmlns:v="urn:schemas-microsoft-com:vml"
xmlns:o="urn:schemas-microsoft-com:office:office"
xmlns:w="urn:schemas-microsoft-com:office:word" xmlns="-//W3C//DTD HTML 4.0//EN">
But the really interesting part is the body copy, which looks like this:
<body lang=EN-US style='tab-interval:.5in'>
<div class=Section1>
<p class=MsoNormal align=center style='text-align:center'><b style='mso-bidi-font-weight:
normal'><span style='font-size:20.0pt;mso-bidi-font-size:12.0pt;color:#3366FF'>Hello
World!<o:p/></span></b></p>
You’ll notice that our font size (20 point), font color, justification (centered) and bold text are all preserved in this code fragment. The reference to the class “MsoNormal” is defined in the style section earlier as Times New Roman, which is what I used in my Word document.
All of this makes it easier to exchange my Word 2000 document with someone else. They can see the style and layout of my text, something that HTML hasn’t been very good at doing since day one.
While this isn’t a review of the product, let me touch on one other feature in Word 2000 that makes it easier for web authors. When you go to save your document, you can save it directly to your web site via FTP. Once you enter the URL, username and password, the FTP site appears on your local directory tree as just another location. That is very nice.
But the side effect is that I have to make a pact with the devil. Once I go down the route of saving my pages as MS-XML, the naked code may become unreadable to me. The pages also take up more room and thus will take a bit longer to download and view. As I said, I am not an XML programmer, or even any kind of programmer. I have purposely kept my web pages sparse and relatively devoid of “advanced” features, in the name of being browser-agnostic and universally viewable. I fear that the more people use Word 2000, the more MS-XML will replace ordinary HTML code on the web.
What about PowerPoint 2000? Earlier versions of PowerPoint had the ability to publish to the web. While simple to use, this produced rather clunky code and a long series of files. The new and XML-ized version produces a single “pointer file” which contains this code enumerating the other files in a separate directory that comprise your PowerPoint slide show. Here are the contents for filelist.xml for our single slide presentation:
<xml xmlns:o="urn:schemas-microsoft-com:office:office">
<o:File HRef="master03.htm"/>
<o:File HRef="master03.xml"/>
<o:File HRef="preview.wmf"/>
<o:File HRef="pres.xml"/>
<o:File HRef="slide0001.htm"/>
<o:File HRef="master03_stylesheet.css"/>
<o:MainFile HRef="../Hello Worldppt.htm"/>
<o:File HRef="error.htm"/>
<o:File HRef="script.js"/>
<o:File HRef="filelist.xml"/>
</xml>
Why so many files? Each is essentially a style sheet for different purposes: one for all the XML-capable browsers (guess who?), one for those that aren’t, one that uses CSS, one that uses Javascript. Slide0001.htm is where you’ll find the actual content for our presentation. And “Hello Worldppt.htm” is the control code for the whole show: in it you’ll see a small JavaScript routine that determines which browser you are running and what you get to see.
if ( msie >= 0 )
    ver = parseFloat( appVer.substring( msie+5, appVer.indexOf( ";", msie ) ) );
else
    ver = parseInt( appVer );
path = "./Hello%20Worldppt_files/error.htm";
if( (ver < 4) || (msie <= 0) )
{
    if ( !msieWin31 && ( ( msie >= 0 && ver >= 3.02 ) || ( msie < 0 && ver >= 3 ) ) )
        window.location.replace( path );
    else
        window.location.href = path;
}
else
    window.location.replace( "./Hello%20Worldppt_files/slide0001.htm" + document.location.hash );
Again, this has the effect of making it easy to publish your work to the web and exchange it with others.
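For the curious, a pointer file like filelist.xml is easy to walk programmatically. Here is a minimal sketch in Python using only the standard library; the trimmed-down filelist below is excerpted from the listing above, and the function name is my own:

```python
import xml.etree.ElementTree as ET

# A trimmed-down copy of the filelist.xml shown above.
FILELIST = """<xml xmlns:o="urn:schemas-microsoft-com:office:office">
 <o:File HRef="master03.htm"/>
 <o:File HRef="slide0001.htm"/>
 <o:MainFile HRef="../Hello Worldppt.htm"/>
</xml>"""

# The Office namespace that prefixes each File element.
OFFICE_NS = {"o": "urn:schemas-microsoft-com:office:office"}

def list_files(doc):
    """Return the main file and the supporting files named in a filelist."""
    root = ET.fromstring(doc)
    main = [el.get("HRef") for el in root.findall("o:MainFile", OFFICE_NS)]
    others = [el.get("HRef") for el in root.findall("o:File", OFFICE_NS)]
    return main, others

main, others = list_files(FILELIST)
```

Nothing here is Office-specific beyond the namespace string: it is ordinary namespaced XML, which is rather the point of Microsoft's choice of format.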
If you buy into my explanation, the whole idea of suing Microsoft for putting IE into the operating system becomes a minor sideshow. With Office 2000, something bigger is at stake: capturing all the current non-Microsoft Office users, those few hardy holdouts who use Lotus and Corel tools to create their documents, spreadsheets and presentations. And while they are at it, Microsoft also wants to capture those who use non-Microsoft tools for writing web pages. The underlying effort is to be the single document interchange vendor for everyone, even those folks who don’t run some form of Windows on their desktop. And MS-XML will be the Trojan Horse to pull this off. Taken in this context, whether Microsoft supports or doesn’t support the overall XML standards effort isn’t that important anymore. Because soon more people will be writing MS-XML documents than anything else, and then MS-XML will become THE standard.
Well, certainly this week our friends at Microsoft (and Netscape) have been in the news, as the antitrust trial gets underway. Let’s say you, as an IS decision maker concerned about your own job safety, allow (or mandate) your systems people to construct your internal web-based applications using everything Microsoft. Your web pages are made on NT running IIS with ASP, and use Visual Basic scripts on both browser and server. You write all of this using Visual InterDev as the toolkit, and of course you stick with the Microsoft-flavor dynamic HTML and maybe XML with the Microsoft XML extensions. Your systems people want to use all the cool ActiveX widgets and some data-mapped form fields at the browser end. So you decide to use only Internet Explorer inside the company as the sole supported browser to view all these pages.
You think: why worry about any standards? It’s an INTRANET! And you want your systems people (and you) to deliver the richest, coolest stuff in the shortest time. Of course! Oh, and maybe you dictated that the Microsoft stuff is the “company standard” in order to reduce acquisition and support costs.
Now think back a moment to where you were in 1982. You were probably running lots of IBM stuff.
One day, your boss (or even the CEO) says: “Connect us with our customers! I want our customers to have access to their order status, account information, and our own contact people. This must go way beyond an online store. We’re going to do one-to-one marketing here, and since we’re already web based, this should be EASY, right?” The CEO might issue similar orders relating to vendors.
Oh no! Your customers have various browsers out there and some of them won’t display the pages your people developed. You issue orders to develop “browser agnostic” web pages. And now those pesky IETF and W3C standards get in the way! Your systems people start gasping for air because it means the loss of some coolness, and worse, some drudge-type work. Or maybe you decide to develop a whole parallel set of pages for outsiders and maintain both. And who’s gonna pay for this?
So you go back to the CEO and ask for more money. He blows his stack and asks why the hell we can’t use what we have! It’s the WEB, for heaven’s sake. “Um… well, it really isn’t the web, boss, it’s the Microsoft Web® and it’s … better!” So the CEO relents and makes a mental note of this screwup….
But wait, there’s more! Your systems people have been using InterDev and/or FrontPage and don’t know much about HTML, cascading style sheets, form formatting, table layout etc. They have been isolated from the “ugly, low level” standard languages and technologies and have been using the Microsoft web development tools. Those tools cost a lot of money, but they saved even more in labor, eh? Not any more. Well, for some more money you can use more Microsoft technology to develop browser-agnostic pages. But what does that mean? And who’s going to fix a problem with Opera or Netscape? Someone has to know about those pesky standards and be familiar enough to deal with them. More time, more money. Are you going to go back to the CEO again? Was your “the safe thing is to go with Microsoft” decision really safe?
I haven’t even MENTIONED the issue of portability at the server end. You are of course locked into Microsoft technology in your shop. Yesterday’s “IBM shop” is today’s “Microsoft shop”. Remember how hard we tried to keep Compaq from becoming the “approved standard” for PCs in the mid-1980s?
By now you should have a fairly tight feeling in the pit of your stomach. So here it is:
(This story ran in O’Reilly’s Web Review in 1998. Links probably don’t work.)
Electronic commerce (or “eCommerce” as it’s affectionately known) offers the small businessman the ability to compete on a global scale. There are some basic requirements you’ll need first, such as the hardware and a database with information about all of your products. Once you have the bare necessities, your next step is to build your Web store, and what better way than with “stores in a box.” These software suites have almost everything you need to open your virtual door for business. This week, David Strom takes a look at six of the more common suites to see which comes closest to actually letting you open up shop on the Net.
So, you want to set up your own Web storefront? Be prepared to do a lot of research and spend some time testing products. While several vendors offer a “suite” of software designed to build and operate your Web store, the suites lack integration and are far from complete solutions. These packages are also very choosy when it comes to supporting particular databases and Web servers. Given the $3,500 to $10,000 price tags, you might be better off assembling your own series of products to do the job.
I have tried six of these suites over the past year, including Microsoft’s Commerce Server (which used to be called Merchant), IBM’s net.Commerce, O’Reilly’s WebSite Professional, iCat’s Electronic Commerce Suite, Intershop’s Online, Pacific Coast Software’s WebCatalog and WebMerchant. All of these run on Windows NT — Pacific Coast’s software also runs on the Macintosh and is resold by Starnine. IBM’s product also runs on AIX and Solaris. See the table for more information and prices.
If you are thinking that setting up one of these suites is like setting up Microsoft Office, you’re in for an unpleasant surprise. Each is far from being turnkey: WebSite (from the same company that publishes Web Review, O’Reilly) and WebMerchant were the easiest to get going, but they still needed some tweaking. The others took anywhere from several hours to several days, including phone calls to technical support personnel.
None of these products, with the exception of WebSite, comes cheap. Expect to pay around $5,000 for the software, and more if you want to run multiple Web servers or multiple database servers. That doesn’t include the price of your hardware, and in some cases the price of a database server as well. You’ll also have fees on top of this for yearly support contracts (something I’d recommend in this case, given the complexity of each suite) of around 15% of the software purchase price, plus fees to process each payment through your credit card merchant (around 2% of each transaction). That’s a lot of dough, given how much effort you’ll need to get these storefronts going.
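To make that dough concrete, here is a quick back-of-the-envelope helper. The $10,000-a-month sales figure is a hypothetical store of my own invention; the 15% and 2% rates are the rough figures quoted above:

```python
def first_year_cost(software_price, support_rate=0.15,
                    card_fee_rate=0.02, monthly_card_sales=10_000.0):
    """Rough first-year outlay for a storefront suite: the software
    itself, a yearly support contract, and credit card processing fees.
    Hardware and database server costs are deliberately left out."""
    support_contract = software_price * support_rate
    card_fees = monthly_card_sales * 12 * card_fee_rate
    return software_price + support_contract + card_fees

# A $5,000 package for a store doing $10,000 a month in card sales:
# $5,000 + $750 support + $2,400 in card fees.
cost = first_year_cost(5_000.0)
```

Even before hardware, the recurring fees roughly match half the sticker price in year one, which is worth knowing before you commit.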
Before you get started with these products, you should ask yourself the following questions:
1. What is my database expertise?
Behind every great Web storefront is a solid database, and that means you’ll probably need a good database administrator on staff to make these products work well. Don’t have one? Then stick to WebCatalog and WebMerchant, which use flat files to keep track of orders and products for sale.
Why is a database so important? You’ll want to integrate the Web store into your existing operations, and that means tying together your existing databases with the ones that the suite sets up for your storefront. If you don’t do this, you’ll have to maintain two separate systems: One for the physical store, and one for the Web store. That gets old real fast.
Start your search for the right product by first seeing whether it will support any of your existing accounting or inventory databases. Most offer very limited support. For example, Microsoft’s Commerce server really needs its own SQL Server to operate its storefront, while O’Reilly’s WebSite works off a series of Access databases. Intershop comes with a copy of Sybase SQL 11, iCat comes with a bundled copy of Sybase SQL Anywhere (the single-user version) while IBM’s net.Commerce comes with a copy of DB2.
Some vendors claim they offer support for a wide range of database servers via ODBC: Frankly, I don’t believe this is possible, given what I’ve seen so far with getting ODBC drivers to work. This is just another headache you don’t need.
You can work around the database issue by first setting up a sample storefront and then examining the database structure of tables and fields that is created. If you know enough, you can then convert your existing inventory and accounting systems into the format required by the product. This could be a great deal of work, however.
2. How much HTML do you know?
While the suites come with a variety of wizards and automated setup routines to format the pages that will become your storefront, you’ll want to go in and make changes to these pages eventually. That means knowing not only HTML but the various proprietary extensions to HTML that each suite uses. Some of these extensions are well documented (such as the WebSite ones) and others are fairly obscure (such as the WebCatalog documentation). If you are new to HTML, then this isn’t the place to learn.
3. What payment scheme are you going to use?
Chances are you want to have a form on your Web store that will enable people to type in their credit card number as your primary payment option. You will probably be limited to accepting purchases in U.S. funds, and depositing these funds to a U.S. merchant banking account — if you want more flexibility than this, you’re out of luck for the time being.
All of the suites offer credit card processing, but do it in different ways. Intershop has the widest range of payment options, meaning that you can set up your store to accept more than one payment method, offering your customers lots of choices. The others are more limiting, and in some cases only support a single payment method. This in my mind is the single biggest problem for Web storefronts — if physical stores operated in this fashion, they would be out of business quickly.
In the meantime, check the fine print before you get too far down the road with any particular suite. For example, WebSite has two different payment methods but only one can be used for any particular storefront. It comes with the Internet Secure payment software, which works with a single credit card processor that takes both Canadian and U.S. funds. You could also set up WebSite to support CyberCash payments, but then you couldn’t accept credit cards. And if you don’t want to use the credit card processor that works with Internet Secure (either because they are fairly costly or because you already have your own processor), then you won’t want to run WebSite.
iCat doesn’t come with any payment software, but many third parties can provide it, so the issue is mostly one of finding the right one and paying extra for the software. Microsoft’s and IBM’s suites support Verifone’s vPOS software, among other systems. And WebMerchant supports a variety of payment schemes.
When you see what is involved in implementing the payment process, you might change your mind about using any of these suites. For example, if you use the First Virtual payment option with WebMerchant, you have to manually move an order from the pending folder to the completed folder when you receive the payment authorization. That quickly gets tedious for even a small number of purchases.
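That pending-to-completed shuffle is exactly the sort of chore you would want to script. A hypothetical sketch follows; the folder layout and file naming are my own invention, not WebMerchant's:

```python
from pathlib import Path

def complete_order(order_id, store_root):
    """Move an authorized order file from pending/ to completed/.

    store_root is the top of the (hypothetical) store directory tree;
    each order lives in a plain text file named after its order ID.
    """
    src = Path(store_root, "pending", f"{order_id}.txt")
    dst = Path(store_root, "completed", f"{order_id}.txt")
    dst.parent.mkdir(parents=True, exist_ok=True)
    src.rename(dst)
    return dst
```

Hook something like this to your payment authorization notices and the manual shuffle disappears.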
So what are your alternatives? My recommendation is to use software from ICVerify. They support a wide variety of secure Web servers on NT and Unix, and a wide range of payment processors as well. You can download a Windows demo version of their software that does everything except actually move the money.
4. How easy is it to maintain your store?
This is yet another weakness with these products. If it takes you hours to add a new line of goods to your store, you probably aren’t going to want to update your pages very often.
In the ideal world, the storefront should reflect your own physical inventory and retailing options. It should show upsells (buy the CD, get a CD case for an extra dollar!) and specials, closeouts and merchandise that is out of stock. But none of these suites is really useful for doing this, although some try harder than others.
With most of the suites, you maintain your site via a series of HTML forms that are password-protected for administrators. But making changes this way quickly gets tedious: you have to manually edit each Web page and type in the updates.
So who wins?
So overall, where does that leave things with each of the suites? I’d rate the IBM net.Commerce as having the best trade-offs between ease of use and options, although using DB2 isn’t for everyone. WebSite and the Pacific Coast software are the easiest to use. iCat has the widest third-party support, but buying these options can quickly get expensive. If you believe the world will follow Microsoft, then take a closer look at MS Commerce. And Intershop and Web Merchant are the ways to go if you want to have the widest choice in payment options.
(This essay was posted on my Web site in September 1996 and reprinted here for reference.)
This issue marks the first anniversary of Web Informant. I thought that it would be a good time to take a look back even further in the past, to the fall of 1986. Back then, I began my writing career working for a special supplement to PC Week called Connectivity. At the time, I had left Transamerica Occidental Life Insurance Company in downtown Los Angeles, where I worked in a 30-person information center. This group was part of their internal IS department, supporting end-user computing, and I was getting more interested in local area networks, having installed the first one at Transamerica that summer of 1986. I remember working in an Information Center (as we liked to capitalize it). It was a proud profession: there were trade mags, shows and even IBM product lines geared towards us.
First, let’s talk about the Internet. Back then, there were still a small number of different networks that had grown up from various US government-funded projects. A prophetic paper by Dennis Jennings, Larry Landweber and David Farber for Science magazine mentioned how “NSFnet will probably have the most impact on science of all networking activities in the US at this time (February 1986).” NSFnet went on to evolve into the Internet backbone that we have today.
For those of you new to the Internet, as recently as five years ago a private corporation would have had lots of difficulty finding an Internet Service Provider, let alone getting their own .com domain name established. When I began Network Computing magazine in the summer of 1990, we had to piggyback on a university’s email system to get Internet access for our editors! Even as recently as three years ago, there was a single ISP with numbers in my local area code that I could call: now there are over a dozen that I can choose from for Internet access.
Ten years ago, most of the computers on the “Internet” were running proprietary operating systems or Unix. The idea of having a desktop PC running IP was ludicrous. Now it is taken for granted, and comes included with all desktop operating systems.
Mobile PC products certainly have gone through quite an evolution: ten years ago, my first portable PC was the Radio Shack Model 100, a unit I am proud to say I still have somewhere in my office. It had a terrific keyboard, a 1200 bps (internal) modem, and an eight-line by 40-character screen that I used to file my stories for PC Week. Steve Roberts sent me a photo showing him typing on one next to the massive bicycle-cum-office that he rode around America back in 1984. Since then, I think I have used about 20 different laptops to write my articles. My favorite was the NEC Ultralight notebook that I used in 1989 (one photo I have is of me typing away at the hospital shortly after my daughter was born). Unlike my present portable, it was small, light (less than 4 lbs.) and had great battery life.
Roberts’ article mentions the “Plus [automated teller] System promises to make all this [mobility] easy someday, but at the moment [May 1984], its nodes are far more sparsely scattered than are those of CompuServe.” Interesting how both networks have grown over the years, but I think the number of ATMs far exceeds CompuServe nodes at this point. Roberts is now on to outfitting a sailing ship with various computers, by the way.
What computers were we using ten years ago? Well, at Transamerica the most popular device (in terms of numbers, not necessarily in terms of emotions) was still the 3270 terminal — by the time I left in 1986 we had almost 2,000 each of PCs and mainframe terminals. The best PC at the time was the 386 with 640 kilobytes of RAM, introduced by Compaq that fall. However, most of the machines we had at Transamerica were 8088s, with some ATs.
What was the predominant software back then? Why, Lotus 1-2-3 of course. I still have my copy of version 1A, and not too long ago I installed it on a machine and was gratified to see its familiar grid pattern. And also I was gratified to see that I could still remember how to use it.
Back then, a good portion of my end-user support effort was getting the right video drivers (remember Hercules graphics on IBM monochrome monitors?) to work properly. MicroChannel and EISA bus machines had not yet been invented, and Apple had begun selling Macintoshes a few years earlier. Most monitors were woefully small by today’s standards.
Networking was a very different picture back ten years ago. Of course, the biggest networks were still those connecting terminals and PCs with 3270 cards to IBM mainframes. In the fall of 1986 the Manufacturing Automation Protocol was picking up interest (and actually on version 2.1!). A company called Industrial Networking Inc. was formed to sell products from Ungermann-Bass and General Electric, only to fold a year later. UB, by the way, was the first company besides IBM to sell token ring gear that summer. Now the automotive companies (the core group of MAP’s original sponsors) are fully behind IP and the Internet.
Novell and IBM were the predominant LAN software vendors back then — NetWare was one of the first products to take advantage of the protected mode of the 286 processor, something that IBM finally delivered on with OS/2 several years later. Token rings were just 4 megabits, and used passive “MAUs” for hubs that still are around today (mainly because they are one of the few hubs that don’t require any power to operate). 3Com was selling Ethernet cards by the truckload but having a hard time with the original “network computer,” a diskless workstation called the 3Station. Some things never change.
Looking back at PC Week Connectivity (yes, I still have the back issues), I am amused how many of the stories written then still cover many of the same themes we have today. One of my first reviews published for PC Week ran in Jan 1987 about Attachmate’s 3270 emulation products. At the time Attachmate was a brand-new company, and I stated “No matter how good the Attachmate product is, it will be tough to gain market share over DCA and IBM.” Since then the company bought DCA and IBM has played a lesser role with 3270 products. Oh well, can’t always call ’em.
The December 1986 PC Week had some interesting prices: 2400 bps internal modems from Hayes were selling at close to $800, 18-MHz 80386 clones with 512 kbytes of RAM were going for $4,500, Microsoft was selling version 1.03 of Windows and version 3.1 of Word, and 3Com’s servers cost $6,000 and had whopping 70-megabyte disks!
What was the computer trade publication landscape back then? Well, PC Week, Infoworld and Computerworld were the predominant news weeklies. I re-read the parody issue called ConfuserWorld, which was printed in 1983 and contained headlines such as “IBM calls it quits”. The best part of the parody publication, though, was its ads: one for a new computer-related TV show called Happy Daze where “the Fonz fixes an HP-4000 by kicking it in the drive unit.” Another, for Kodex maxi-modems, promised that you could “communicate with confidence” on those “special times of the month when communications is at its peak.”
There weren’t any Internet-related publications, and just LAN Times and LAN magazine were devoted to networking topics. Networld and Interop were separate trade shows, and Comdex was still too crazy even then.
We have come a long way in a decade. Thanks to all of you who sent me documents and anecdotes from that era — I appreciated reading all the responses. It has made for an interesting trip down memory lane.
[NOTE: This review ran in c|net in 1996.]
In just a little over 12 months, Web server software has gone from curiosity to commodity: once the province of Unix gurus, it is now geared toward inexperienced Webmasters and available on every operating system imaginable, including Macintosh, Windows and NetWare. What a difference a year makes: last year there were few products in this “category” — now there are several dozen, with more being introduced each day. There is now a confusing array of features, counterclaims, and markets, ranging from the “personal” Web server to ones supporting full-fledged sites that conduct Internet-based commerce.
Think of Web server software as a network operating system just for your Web pages: the software sends your pages to Web browsers on request and keeps track of who sees which files. Your server can also run various dynamic routines, ranging from simple Server-Side Includes (SSI), which can update a static page with dynamic information (such as the number of visitors), to more complex Java-based, video-like animations.
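To make the SSI idea concrete, here is a miniature include processor in Python. It is an illustration only, not any particular server's implementation; the directive follows the common NCSA-style `<!--#echo -->` syntax, and the HIT_COUNT variable is hypothetical:

```python
import re

def expand_ssi(page, variables):
    """Replace NCSA-style <!--#echo var="NAME" --> directives with values.

    A real server does this substitution just before sending the page,
    which is how a "static" page can show a live visitor count.
    Unknown variables expand to an empty string.
    """
    def substitute(match):
        return str(variables.get(match.group(1), ""))
    return re.sub(r'<!--#echo var="(\w+)" -->', substitute, page)

page = '<p>You are visitor <!--#echo var="HIT_COUNT" -->.</p>'
expanded = expand_ssi(page, {"HIT_COUNT": 1024})
```

The whole trick is textual substitution on the way out the door, which is why SSI support varies so much from server to server.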
To find out which product is easiest to use and fastest at serving up your files, we looked at four of the more popular Windows NT servers, along with one NetWare version: Netscape’s beta version of FastTrack server, announced last month and available this month; Microsoft’s Internet Information Server (IIS); Process Software’s Purveyor Web Server; O’Reilly’s WebSite; and our lone NetWare server called SiteBuilder from American Internet Corp.
The products, which carry price tags from free (for Microsoft’s offering) to $1,500 (for SiteBuilder), all were fairly easy to set up and get going, but overall we liked the mix of features found in WebSite the best. While not the fastest or the slowest, WebSite had the best documentation of its features and the most flexibility in terms of setup, along with lots of tutorials covering topics such as how to set up indexing and track visitors. WebSite also comes with the best set of tools to keep track of your links and to allow visitors to search the content of your site.
Microsoft’s IIS and Netscape’s FastTrack were the fastest at serving Web pages and could handle the greatest number of client connections on an Ethernet network, while SiteBuilder had the worst performance and could handle the fewest clients.
All of the products had their good points: we liked the simplicity, sparseness, and speed of IIS, and given that it is free, we think it will become “My First Web Server” for many people. However, its overall feature set leaves something to be desired: it has limited support for Server-Side Includes and doesn’t come with any bundled search tools. Purveyor’s appeal was mostly in its flexible administration features, and SiteBuilder was notable for its support of remote CGI scripts and a series of well-documented Server-Side Includes. Finally, while FastTrack is still in beta, we think it has a great deal of potential to become the fastest server on NT: it delivered solid performance across the board with very little degradation from 10 to 130 concurrent clients.
———————-
We tested these servers at KeyLabs, Inc., a new testing facility in Provo, Utah. Up to 130 individual Windows 95 machines were assembled, ranging from 486s to Pentium 130s. Each machine had at least 16 megabytes of RAM and an NE2000 10 megabit Ethernet card, and ran the Microsoft TCP/IP stack and the Netscape Navigator Gold v2.0 browser. Machines used included brands from HP, DEC, Compaq, and ALR.
Individual segments of ten machines apiece were connected to their own switched 10/100 megabit 3Com hub, with each hub connecting back to a central Synoptics hub using 100 megabit Fast Ethernet. Each Web server was connected to this hub via an Intel 100 Pro adapter. The servers (running either NetWare 4.1 or NT Server 3.51) were Pentium Pro 200 machines with 32 megabytes of RAM and a 1 gigabyte SCSI disk.
Our test suite consisted of loading a series of Web pages from the server that were about 10 k bytes in size and were similar in content and complexity to c|net’s home page. The page included about 300 k bytes of graphics files, ranging in size from a few bytes to several hundred. Each set of Windows 95 workstations would boot up, load Navigator, and then proceed to load a series of ten similar pages from the server. We timed how long it took the browser to complete loading each page and then averaged the results over the total number of workstations. Our test bed included a script to run groups of 10, 40, 70, 100 and 130 workstations concurrently against the same Web server, thereby simulating the worst-case scenario on a busy network where many users wanted to obtain the same data at the same time. After each machine was finished browsing these pages, it would write the time it took to a log file and then disconnect from the network.
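As a sketch of how those results were reduced: each workstation logged its own page-load time, and we averaged across all machines in a run. The timing figures below are hypothetical, purely to illustrate the calculation:

```python
# Each workstation in a test run writes its page-load time to a log;
# the reported result is the average across all machines in that run.
def average_load_time(times_per_workstation):
    """times_per_workstation: per-machine load times in seconds."""
    return sum(times_per_workstation) / len(times_per_workstation)

# Hypothetical per-machine timings from one run (not real test data).
run_timings = [2.1, 3.4, 2.8, 3.0]
print("average: %.2f seconds" % average_load_time(run_timings))
```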
We used various monitoring tools to ensure that no network bottlenecks were observed and that the servers were running according to their standard defaults. Logging was enabled in each server, and other than that no other special adjustments were made in any operating system or Web server parameters. No special tuning or other attempt was made to improve performance of each product.
We measured four parameters during our tests. First, we looked at the peak server processor utilization rate observed during each test run. This measurement is reported by both NT’s Performance Monitor and NetWare’s console monitor (although the two are not comparable across operating systems) and indicates how busy the server is in filling requests from client browsers. Second, we examined the peak network wire utilization rate from the server to the Synoptics hub, using Novell’s ManageWise to monitor the 100 megabit server connection. This tells us whether the network or the server was the limiting factor in satisfying browser requests. (We observed the latter case in all of our tests.) Third, we examined disk access to ensure that the client machines were getting their pages from the server’s disk cache rather than waiting for the drive, which was true in all situations. Finally, we calculated the time to complete browsing each of our sample Web pages, as mentioned above.
———————
Internet Information Server v 1.0
IIS covers just the basics but does them well. It doesn’t have the most features or come with the most flexible remote administration tools, but it is the only NT product that covers all four NT CPU types and comes with gopher, ftp and Web servers combined. It came in a close second in our performance tests, which is surprising given Microsoft’s tendency to deliver below-par 1.0 releases. And given its low cost, this could become everyone’s first Web server.
Basics.
Setting up IIS is a dream: you answer a few questions and it copies files. There are three tough spots. First off, you need NT Server (other products can run on either the Server or Workstation version). Second, you must install at least Service Pack 3: the program tells you if you need it, and it is included on the CD if you buy the product instead of downloading it for free. This alone may be worth the $99 price, since the service pack takes up 8 megabytes of software and downloading it could take some time. Finally, you must set up your server security correctly. IIS automatically creates an “InternetGuest” account with no password which is used by every browser to connect to your server: you may or may not want to use this account, depending on what files you intend to serve on your site.
IIS runs on all four NT processor platforms: Alpha, Intel, MIPS, and PowerPC. That is unique among NT servers, most of which run only on Intel and, at best, Alpha machines.
IIS is also unique in providing out of the box a combination gopher/ftp/Web server: you can install just whichever pieces you’d like. (While NT comes with its own built-in ftp and gopher servers, you’ll probably not want to use either — they are fairly limited in terms of supporting more than a few connections and are cumbersome to administer. The installation routine of IIS will remove these previous versions.)
For a basic Web server, you can do some fairly sophisticated tasks: for example, you can allow or deny browser connections from particular IP addresses, and limit the overall network usage to match the available bandwidth that you have for your Internet connection.
IIS’ documentation isn’t plentiful but does cover the basics. There are a number of resources on Microsoft’s Web site that are helpful to getting you started, along with sample applications and scripts that are installed with the product.
Nevertheless, if you are looking for a Web server that will support proxy servers, or one that comes with its own search tools, IIS is not for you: try either WebSite or Purveyor.
Notable Features.
Microsoft, along with Process Software, has developed its own programming interface called the Internet Server API (ISAPI). All Web servers support CGI programming, but with ISAPI Microsoft and Process have tried to go a step further and produce a set of programming interfaces that are highly tuned to perform well and make use of Windows dynamic-link libraries. Few products support this API yet: only IIS and Process’ own Purveyor NT server make use of it. However, given the interest in the development community, it is only a matter of time before we see products that take advantage of these interfaces.
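For contrast, the CGI baseline that every server supports works like this: the server launches a separate program for each request, passing request details in environment variables, and the program writes an HTTP header and body to standard output. A minimal sketch (in Python here, though CGI programs can be written in any language; this is purely illustrative and not tied to any one server):

```python
#!/usr/bin/env python
# Minimal CGI program. The server spawns this as a new process for
# every request: the per-request startup cost that ISAPI avoids by
# loading its handlers as DLLs inside the server process itself.
import os

def respond():
    # CGI delivers request details through environment variables.
    method = os.environ.get("REQUEST_METHOD", "GET")
    # A CGI program emits an HTTP header, a blank line, then the body.
    print("Content-Type: text/html")
    print()
    print("<html><body><h1>Hello from CGI</h1>")
    print("<p>Request method: %s</p></body></html>" % method)

if __name__ == "__main__":
    respond()
```

The per-process overhead of this model is exactly what ISAPI's in-server DLLs are designed to eliminate.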
Also notable is support via a database connector for all ODBC servers, along with helpful hints on how to implement connections to SQL Server.
IIS extends the information that is reported by NT’s Performance Monitor tool, including such things as Web browser connections and bytes/sec delivered by the server. You add and configure these various metrics using the standard dialogs in this tool, and you can monitor a remote machine across the network (provided you have an account to do so). You would expect Microsoft to make use of its own NT-based tools in this fashion, although Purveyor has about the same number of variables you can report in Performance Monitor (WebSite has just a few and FastTrack has none). We liked this ability to keep track of what our Web server was doing with such a graphical tool, but admittedly we are somewhat strange. For the vast majority, this probably is more of a curiosity, although it can be helpful when it comes time to determine where your server bottlenecks are and whether you need to add more memory or processing power to handle increased loads.
IIS does not offer any support for Java and has limited support for Server-Side Includes in this release.
Administration, security and logging.
You administer IIS via its own tool, which can be done on the same server or run on another NT Server (with at least Service Pack 3 setup) connected across a network or the Internet. To perform the remote administration, you basically install the software with just the administrative component of the corresponding services (Web, gopher, ftp) you desire. It is all graphical, with easily understood dialogs and buttons, including a VCR-like start/stop/pause series of icons to control your various services.
You can do everything remotely with the administration tool that you can do locally at the server’s console, although you’ll need an account on your NT Server with the right access rights, along with familiarity with both User Manager for Domains and Server Manager in addition to the IIS admin tool. This differs in approach from FastTrack and Purveyor, both of which use Web forms run from inside any browser to administer their servers. If you are familiar with NT users and domains, this will be a piece of cake. If you are new to NT, you might want to use another server or learn your way around NT networking first.
Microsoft has said that the next version of IIS will offer a Web browser-based administration tool similar to how FastTrack works: that is a positive development in our opinion, giving administrators tremendous flexibility.
Once you start looking at your logs you’ll see some of IIS’ limitations. You don’t have many choices when it comes to manipulating your access logs: if you want more, you can switch your logging over to a format that can be read by SQL Server. Otherwise, you can set up a log file on a daily, weekly, or monthly basis, or create a new log when the file reaches a certain size. That’s about it: other Web servers such as WebSite offer much more control over how they produce logs. Another downside is that IIS produces logs in its own format: if you want to make use of one of the variety of log analyzer tools, you’ll have to convert them into the more usual Common Log Format by running a utility called CONVLOG.EXE.
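For reference, the Common Log Format that CONVLOG.EXE converts to puts one request per line: remote host, identity, authenticated user, timestamp, the request itself, status code, and bytes sent. A made-up entry looks like this:

```
192.0.2.15 - - [15/May/1996:10:30:41 -0500] "GET /index.html HTTP/1.0" 200 10240
```

Because nearly every log analyzer understands this layout, converting IIS logs is the price of admission for those tools.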
IIS offers two different choices for how browsers can securely access its files. The first is what Microsoft calls “basic/clear text”: user names and passwords are sent over the network without any encryption. Although it lacks security, this method is supported by all other Web servers and browsers. The alternative, called “Windows NT Challenge/Response,” is more secure in that it sends encrypted passwords over the wire, but it only works with users running Microsoft’s Internet Explorer browsers.
There are some other limitations with IIS, especially when it comes to hosting multiple domains on a single NT Web server. You’ll have to either install a separate network card for each IP address, or be familiar with how to edit NT’s registry to get around this restriction. The latter method isn’t documented or supported by Microsoft.
All your servers must use the same default document name (such as “index.html”), and all CGI scripts must be placed in the same directory, since there is only a single place to enter this information for the entire server. Other servers, such as WebSite and FastTrack, offer more choices here.
Performance.
IIS delivered either the best or the second best performance on all three metrics we used in our tests: its pages completed faster than any other server’s with the exception of FastTrack, and it could handle the greatest number of connected clients before hitting any bottlenecks in processor or network utilization. Because the server was so speedy, IIS also showed the highest network utilization of any server: with 130 clients running, the 100 megabit wire between the server and the network reached 54% of its capacity.
IIS bested FastTrack on tests of fewer than 130 client connections, but only by about a one percent margin: too close to really measure. On the 130-client test, FastTrack was faster than IIS by four percent, again a very small margin. Unlike all of our other NT-based servers, IIS never hit 100% processor utilization during the tests, indicating that it was the most efficient in making use of NT operating system resources. It delivered our test pages in about two to three seconds on average across the entire range of client connections (from 10 to 130 concurrent machines).
Our tests (see How We Tested) were designed for worst-case situations to expose product weaknesses. In actual use on real networks, IIS will probably have plenty of performance to spare and you’ll see limitations and bottlenecks in the network long before you’ll see IIS running out of gas.
————————————————
O’Reilly and Associates
WebSite has a lot going for it: of all the NT servers it has the best approach in terms of tools included, feature set, and ease of setup and administration. It runs on both NT and Windows 95, handy for testing and production users alike. It has the best set of features for supporting multiple domains on a single server, and it has the best documentation by far of any Web server we’ve seen; the book alone is worth the price of the software. It offers an acceptable middle ground in performance too: it is neither the fastest nor the slowest at delivering pages.
Basics
O’Reilly has tried to assemble the best set of tools together in its latest WebSite package and has succeeded for the most part in producing a flexible and powerful Web server that is easy to setup and maintain. There is more meat on the bones than with Microsoft’s IIS, and it makes for a satisfying meal.
Setup is fairly straightforward: you answer a few questions and it proceeds to copy its files. There is much more control over features than with IIS, but O’Reilly has made navigating the various dialogs as obvious as they can be. Some screens are prefaced by dialog boxes saying “if you don’t know what you are doing here, leave the parameters alone” — a nice friendly warning to take a look at the manual for more help.
And the manual: well, let’s just say that this is the best documentation of any Web server we’ve seen by a long shot. Unlike any other Web server, WebSite has a professionally written book that serves as part documentation, part Web server and HTML tutorial, and part reference guide for setting up anyone else’s Web server software. You would expect such quality from O’Reilly, who have been publishing Unix and other technical trade books for years. The book alone is almost worth the entire price of the software, and covers basic HTML, how to manage your Web, create Common Gateway Interface programs and scripts, and other information that is rarely touched upon by competing products. For example, one chapter provides copious examples on how to write Visual Basic scripts that can interact with your Web pages.
Earlier versions of WebSite had poor support for image maps: the current version has improved on this and supports both NCSA and CERN-style syntax for these image maps.
One downside is that WebSite only runs on Intel machines at present — but it does run on either NT Server or Workstation versions, as well as Windows 95.
Notable Features
WebSite has the flexibility to run as both an NT service and as an application icon on the desktop: this is handy for doing testing, when you’d like to start and stop the server at will. (Both Purveyor and IIS also have the ability to be controlled from the desktop as well as from NT’s Control Panel/Services.) Once you switch over to production use, you can have it run as a service automatically. It can also run as an application under Windows 95, although for production uses we’d recommend staying away from this operating system.
The package contains some nice extras, including Map This!, a very useful image map editor; the Hot Dog HTML editor (which isn’t very useful, but then we don’t particularly like any HTML editors); and two other utilities: WebFind, a nifty and easy-to-use search utility, and WebView, a graphical viewer that shows documents, links, and paths around your own Web. While you can find better search engines, WebFind does the job with a minimum of fuss and bother and is relatively fast: you can index portions of your site and make them available to your users relatively quickly. It is a single executable program that you can place in your own pages; other search engines require complex installation of a series of files and even more complicated maintenance of their indices.
WebView is not always the easiest navigation tool to use, but for beginners it is a helpful way to see how your own Web is put together and whether the links you’ve created are still working.
Perhaps the best example of WebSite’s simplicity yet power is with its support for multiple Web domains: this means that a Web hosting provider could easily use a single copy of WebSite to set up several people’s Web sites, keeping the pages, scripts and graphics of each site separate from the others. While several of the other servers offered the ability to do some of this, only FastTrack came close to the flexibility in multiple Web hosting offered by WebSite.
WebSite includes support for several of its own programming interfaces, including the ability to write DOS batch files and Visual Basic programs as scripts along with the standard perl scripts.
WebSite’s manual also has a small chapter on its Server-Side Include support, along with a few examples showing how to produce a page counter application. Any file can contain SSIs, unlike with Purveyor, which requires a special .HTP suffix for the server to recognize them. WebSite’s SSI documentation isn’t as thorough as SiteBuilder’s, however: a small flaw in an otherwise excellent book.
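For readers who haven’t seen them, Server-Side Includes are specially formatted HTML comments that the server expands before delivering a page; the exact directive set varies from server to server. A typical page using an include, a variable echo, and a CGI-based counter might look something like this (the file and script names are hypothetical):

```html
<html>
<body>
<!--#include file="header.html" -->
<p>Last modified: <!--#echo var="LAST_MODIFIED" --></p>
<!--#exec cgi="/cgi-bin/counter.exe" -->
</body>
</html>
```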
Administration, security, logging.
About the only fault we could find with WebSite is its administrative utility, which can be run on either NT or Windows 95 just like the rest of the software. The problem is that it is neither browser-based, like FastTrack’s, nor built on NT-native tools, like IIS’: it is somewhere in between.
Getting the administrative tool to run on a remote machine is not simple and will require a multiple-step process: fortunately, this process is well documented in the stellar manual.
One issue: when you make changes to the registry or to user access, you’ll need to restart the server. If you are connecting to your Web server remotely, this is done via Windows NT’s Server Manager.
You have lots of choices when it comes to setting up various access controls: you can allow individual users, groups, and IP addresses to various parts of your Web quite easily with either the WebSite Admin tool or a separate tool called WSAUTH which runs on the command line. (Getting this to work across a network will require writing CGI scripts or else running an additional piece of software from a third-party that allows incoming telnet connections: either of which will take some doing to get working properly, however.)
If you want a server that supports SSL or SHTTP protocols, you’ll have to use WebSite Professional, which is a separate product that was in beta during our tests but should be available next month with support for both protocols.
There is a great deal of control over the logs that WebSite produces, and this is one of the product’s strengths. Like other Web servers, it produces an access log that shows browser connections in a standard format that can be read by various log analyzer products. But it also creates two other logs, a server log and an error log. The server log can track up to ten different parameters, including the version and vendor of a particular browser, the location clicked on any image maps, the response of CGI script processes, and all user authentication attempts. Only FastTrack comes close to WebSite’s level of detail and flexibility in its logs, and this is very useful for debugging server routines and for tracking not just who visits but what they actually do once they connect to your site.
Performance
WebSite wasn’t the fastest server, but neither was it the slowest: it offered a nice middle ground, delivering pages in anywhere from two to six seconds on average across the entire range of tests from 10 to 130 concurrent machines. However, it produced one of the widest ranges of results for individual page requests: we saw machines that took over 10 seconds to get their page delivered, while others took less than a second. This could be because the server reached 100% CPU utilization several times during the 70-, 100- and 130-machine tests, so page requests were waiting for the CPU to finish working on other requests. Keep in mind that our tests were designed to show the worst-case scenario of having many machines hit the server at the same time: your own conditions will probably not be as demanding.
If you are planning on serving more than just a small workgroup, you’ll probably want to run the software on its own NT machine. Again, this is because at 70 concurrent browsers we could consume 100 % of the processor cycles on a 200 MHz Pentium Pro.
WebSite just chugs along, even when it has to deal with serving over a hundred concurrent requests: our tests showed an average of about 21 requests per second with 130 users, compared to 13 with 40 users. This indicates that while the server takes its time to deliver pages, it scales well under heavier loads. We saw it deliver around 1700 kbytes per second even at the heaviest of loads: far more than a typical Internet connection, which may be as slow as a 9600 bps modem line, can carry. In other words, on a real Internet link the connection, not WebSite, will be the bottleneck.
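Some quick arithmetic, ours rather than the vendor’s, puts that 1700 kbytes per second in perspective against common Internet link speeds:

```python
# Compare WebSite's observed throughput (~1700 kbytes/sec in our
# 130-client test) against typical Internet link capacities. The
# link speeds below are standard figures, not part of our tests.
SERVER_KBYTES_PER_SEC = 1700
server_bps = SERVER_KBYTES_PER_SEC * 1024 * 8   # bits per second

links_bps = {
    "9600 bps modem":   9600,
    "ISDN (128 kbps)":  128 * 1000,
    "T-1 (1.544 Mbps)": 1544 * 1000,
}

for name, bps in links_bps.items():
    # How many such links the server's output could saturate at once.
    print("%s: could fill %.0f of these links" % (name, server_bps / bps))
```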
————————————————
Process Software Corp.
Process has had a product on NT longer than anyone, and Purveyor has established a reasonable middle ground: a good selection of features, but somewhat longer to set up and use than the other servers. It is not as easy to use as WebSite, and somewhat slower in our tests; this performance gap widens as more machines connect to the same server.
Purveyor has the best remote administration features of any Web server, however, allowing both simple Windows-based console operations along with browser-based operations on the remote machines. And Process, like Microsoft, has spent some time with integrating various Web server functions into NT’s operating system and commands.
Basics
Setup of Purveyor is a bit cumbersome, mainly because the software comes on 10 diskettes and takes some time to load. But otherwise getting it going was fairly simple.
Documentation of the product is sub-par and is its weakest area. To Process’ credit, however, there is a separate manual entitled “Guide to Server Security” which covers how Purveyor handles access to its files and should be mandatory reading. Purveyor’s controls and dialog boxes for this important function are a bit cluttered and not as easy to understand as those of either IIS or WebSite.
Notable Features
Purveyor’s server supports the ability to restrict users of Web, ftp and gopher proxy servers — this means that Purveyor can cache particular documents that are available outside the corporate firewall for these services and work with the security established on these protocols. None of the other products offer this feature.
Included with the Purveyor software is a “Cool Tools” CD that includes such goodies as a perl interpreter for NT, image map editors, various HTML editing tools along with fully-operational public-domain gopher and mail servers and other useful things for Windows 3.1, Windows 95, and NT (both Alpha and Intel). There isn’t any documentation other than what you can browse on-line, so setting some of these products up may be fairly challenging. And in some cases, you’d be better off downloading more recent versions of these products from the Internet, since many of these programs are freely available at many sites.
Among the tools included in the package is a copy of Verity’s search engine. There is scarcely any printed documentation on how to make use of this powerful tool, although there is some information on how to prepare searches with some HTML files. The version included with Purveyor has a command-line-only interface, unlike the more elegant, graphical WebFind utility included with WebSite.
Purveyor adds its own menu to Windows’ File Manager to handle how access controls are managed. When you point to a series of files and choose this menu option, you are brought directly into NT’s User Manager to make the necessary adjustments; there is no need to create your own lists outside of the NT system software itself. This is a nice way to make the best use of native NT routines so that webmasters can give different user names or IP addresses different views of the web, and it is too bad that Microsoft didn’t make the whole process easier to begin with in its own operating system software. While it is convenient to use this at the server’s console, it can be a challenge to run this utility on a remote machine.
Like IIS, Purveyor supports ISAPI, the programming interface that Process developed jointly with Microsoft (described in the IIS review above): a set of interfaces highly tuned to perform well and make use of Windows dynamic-link libraries. Given the level of interest by developers, we expect to see several more products make use of these interfaces soon.
Like IIS, Purveyor also includes a link to ODBC databases — including Oracle, SQL Server and several other Microsoft products. This is accomplished via a Data Wizard that sets up the query forms for you, after installing the ODBC drivers. The printed documentation is very sparse on this aspect, although there are some on-line help screens that can walk you through the process.
And also like IIS, Purveyor has put together its monitoring by augmenting NT’s Performance Monitor tool. There are over a dozen different parameters you can view, including connections per second, total bytes received, files sent, and CGI requests. Purveyor has different parameters than IIS, although in about the same number. WebSite has far fewer and FastTrack has none, by the way.
Included with Purveyor is a link browser which can detect broken links in your documents. We found it fairly easy to use, although it occasionally reported broken links that were just fine.
Purveyor’s support for Server-Side Includes is poor compared to either WebSite or SiteBuilder, and if you plan on using them you need to rename your files with .HTP extensions. If you plan on doing a great deal of SSI programming, you might want to look at SiteBuilder instead: it has far better documentation and support of this feature.
Administration, security, logging.
Purveyor’s logging features are very advanced, although not enabled by default. (We changed this so that we could compare performance with our other servers, since logging transactions can directly affect server performance.) You can create new log files on anything from an hourly to a yearly basis, and include various levels of detail on your server’s transactions, including the actual files requested and the user names that access the server. We think this is on par with the best of the servers, and helpful when it comes time to keep track of what your visitors are actually doing once they connect to your site.
Purveyor provides the most fully featured remote administration tools: you can enable or disable remote management from the server’s console, giving another level of security to your server. When you run the remote tools, all you need is your Web browser: the forms and dialog boxes look similar to what you would see on the server’s console. We think this is the best remote administration setup of any Web server: while you are at the console, you have the nice Windows graphical interface, and while you are remote you aren’t restricted to a particular NT machine to do your administration.
However, getting the remote administration setup was somewhat of a chore, and required spending some time working with File Manager to create access control lists for particular users: Process’ documentation is very vague on how this is accomplished, and the manual has a few errors as well.
If you are looking for a server to support Secure Sockets, you’ll have to purchase Process’ Encrypt WebServer version of Purveyor, a separate product.
Performance
Purveyor was somewhat lagging in performance: it was second to last (SiteBuilder was last) at delivering pages. It reached 100% processor utilization more quickly than IIS and WebSite, meaning that if you are planning on using this product to support more than a small workgroup, you would be wise to run Purveyor on its own NT machine separate from any other users or applications.
Purveyor delivered pages in anywhere from two to more than seven seconds on average across the entire range of tests from 10 to 130 concurrent machines. However, it produced one of the widest ranges of results for individual page requests: we saw machines that took over 10 seconds to get their page delivered, while others took about three seconds. This could be because the server reached 100% CPU utilization several times during the 70-, 100- and 130-machine tests; even at 10 machines it reached 80% utilization. This means that page requests were waiting for the CPU to finish working on other requests. Keep in mind that our tests were designed to show the worst-case scenario of having many machines hit the server at the same time: your own conditions will probably not be as demanding.
————————————————
American Internet Corp.
SiteBuilder is one of a small but growing number of NetWare-based Web servers, but it pales in comparison with the NT servers reviewed here. While promoted as a lean and mean Web machine, it doesn’t really deliver on either features or performance. Of the products tested, it took the longest to deliver its pages over concurrent connections, and while extremely easy to set up, it had trouble working in our test environment. Using NetWare as a Web platform has its advantages: you can set up a basic NetWare server on a 486 with less memory than an equivalent NT server requires. That hardware savings could offset the product’s price tag, although SiteBuilder was by far the most expensive product we tested. And given that Novell is selling its own Web Server with a bundled run-time copy of NetWare 4.1 for about half the price of SiteBuilder alone, we don’t think you get great value with the product.
However, there are a few bright spots: SiteBuilder has the best support and documentation for Server-Side Includes of any of the products tested, which is helpful for creating dynamic pages. It also has a unique offering for supporting remote CGI programs, meaning that you can run your CGI scripts on a separate NetWare server from the one running your Web server. Given the inferior performance that we observed, this is probably a saving grace for SiteBuilder.
Basics
SiteBuilder was one of the easiest products to set up and administer, provided you already have a NetWare file server and some rudimentary knowledge of TCP/IP networking. You can get going fairly quickly, thanks to a relatively quick setup routine: you merely pop the diskettes in your NetWare server and run the installation software. It took us about 10 minutes to install both the software and the Windows-based administration tools.
If you have never installed TCP/IP on NetWare, you might want to first make sure that you’ve got the latest software: it is available from Novell at http://netwire.novell.com/servsup/binhtml/ (Look for 1008093.htm for tcp41b.exe for NetWare 4.1 servers or 1203098.htm for tcp31a.exe for 3.1 servers.) You might also have to upgrade various other supporting NLMs, such as CLIB and MATHLIB: SiteBuilder mentions this but it is always a good idea to keep up to date with operating system patches, especially with NetWare.
SiteBuilder comes with two manuals, a Quick Setup and a more detailed User Guide. The User Guide is not as complete as O’Reilly’s WebSite manual but offers a fair explanation of Web server concepts.
Notable Features
SiteBuilder runs as a series of NetWare Loadable Modules (NLMs) on a standard NetWare file server of any vintage from 3.11 up to today’s 4.1. (This differs from Novell’s own Web Server, which runs only on NetWare 4.1.) One of its more unusual features is the ability to run CGI scripts on another NetWare server, thereby balancing the load across your network. (You can also run CGI scripts on the same machine as your Web server if you wish: you merely load the appropriate NLMs and map the drive where the scripts are located.)
Given the miserable performance we observed with the product, remote CGI just might be this product’s saving feature. But more on performance later.
One nice feature is SiteBuilder’s ability to create detailed directories of your files. Usually, when a Web server receives a request for a directory rather than a specific file, it tries to return a particular default file to the browser (typically index.html). If this file doesn’t exist, it displays a directory of files. While all Web servers do this (provided the ability to display directories is turned on), SiteBuilder can display more than just the file name: its directories include file size and type descriptions as well as small icons that are the actual links to the files themselves. It is a small thing, but an indication that there is more than meets the eye with this product.
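The default-document fallback just described can be sketched in a few lines of Python. This is only an illustration of the general behavior, not SiteBuilder’s actual code; the HTML layout and the inclusion of file sizes are our own assumptions:

```python
import os
from html import escape

def serve_directory(path, default="index.html"):
    """Return the default document if one exists in the directory;
    otherwise build a simple HTML listing with names and sizes,
    roughly as a fancy-indexing Web server would."""
    default_file = os.path.join(path, default)
    if os.path.isfile(default_file):
        with open(default_file, encoding="utf-8") as f:
            return f.read()
    rows = []
    for name in sorted(os.listdir(path)):
        size = os.path.getsize(os.path.join(path, name))
        # Each entry links to the file itself, as SiteBuilder's icons do.
        rows.append(f'<li><a href="{escape(name)}">{escape(name)}</a> ({size} bytes)</li>')
    return "<ul>\n" + "\n".join(rows) + "\n</ul>"
```

A server adding type descriptions would simply look up each file’s extension in a MIME table before emitting the row.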
Given NetWare’s lack of IP heritage, American Internet includes IPAccess software with SiteBuilder: an IPX-to-IP gateway that runs on the NetWare server and makes it possible to run the Web server without IP stacks on each local workstation. You install a special WINSOCK.DLL module, included with the product, on each client machine. (For our comparison we opted to use the standard TCP/IP stack with Windows 95.)
Another nice feature is support for server-side include scripts. You can turn this feature off for the entire server or for particular directories for increased security. SiteBuilder has excellent documentation on SSI syntax and comes with several sample SSI scripts, including page counters, math calculations, registration forms, and conditional HTML statements — all of which are great tutorials in how to write your own. In addition to SSI, SiteBuilder includes support for WebBasic scripts, which means you can write Basic programs to handle your forms, much the way other servers use Perl scripts.
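Server-side includes work by embedding directives in HTML comments, which the server expands before sending the page. A minimal Python sketch of the include-expansion step, using the common NCSA-style syntax (SiteBuilder’s exact dialect may differ):

```python
import re
from pathlib import Path

# NCSA-style SSI include directive: <!--#include file="..." -->
SSI_INCLUDE = re.compile(r'<!--#include\s+file="([^"]+)"\s*-->')

def expand_includes(html, docroot):
    """Replace each include directive with the named file's contents,
    as an SSI-capable server does before the page goes to the browser."""
    def replace(match):
        return (Path(docroot) / match.group(1)).read_text()
    return SSI_INCLUDE.sub(replace, html)
```

Counters and conditional HTML work the same way: the server intercepts other `<!--#...-->` directives and substitutes computed text in their place.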
SiteBuilder also comes with NLM versions of Perl and Basic script processors, and these will run standard Unix Perl scripts with some modification.
SiteBuilder does not support running multiple independent webs in a single server and does not offer any support for proxy servers or secure sockets encryption.
Administration, security, logging.
Some information is displayed on the NetWare server console itself, indicating bytes transmitted, errors logged, and peak number of requests. But you’ll need a Windows machine to do any administration of SiteBuilder — anything from Windows 3.1 up to Windows 95 will do. This means that all administration of the server is essentially remote, but it is fairly straightforward.
One key parameter cannot be adjusted via the Windows interface: the maximum number of threads available to serve concurrent clients. By default this is set at 16, which is a very small number. You can change it yourself by adding a line to the HTTPD.CFG configuration file, but this seems very crude. There is nothing in the documentation about this parameter: only by calling American Internet did we learn of its importance.
User access control is accomplished with NetWare’s own user facilities, either through the bindery or via NetWare Directory Services. This means you can restrict or allow particular user names or IP addresses access to the entire Web server, and also to particular files or directories.
SiteBuilder’s log options are not the most capable we’ve seen, but they go well beyond the basic features found in IIS. You can specify the maximum size in kilobytes that a log can occupy and the maximum number of old logs that can be retained. When the server reaches these maximums, it rolls the logs over: deleting the oldest log and opening a new one. You can also quickly view a snapshot of the last 64 KB of log entries from the control panel, which is handy when you are debugging or want to track current requests. The access logs that SiteBuilder produces are in the common log format and can be analyzed by any number of programs.
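The common log format mentioned above is a fixed, space-delimited layout, which is why so many analysis tools can read it. A minimal Python parser for one line of it (field names follow the NCSA convention):

```python
import re

# NCSA common log format: host ident authuser [date] "request" status bytes
CLF = re.compile(
    r'(?P<host>\S+) (?P<ident>\S+) (?P<authuser>\S+) '
    r'\[(?P<date>[^\]]+)\] "(?P<request>[^"]*)" '
    r'(?P<status>\d{3}) (?P<bytes>\d+|-)'
)

def parse_clf(line):
    """Parse one common-log-format line into a dict of named fields,
    or return None if the line is malformed."""
    m = CLF.match(line)
    return m.groupdict() if m else None
```

A traffic-report tool is then little more than this parser plus some counting by host, page, or status code.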
Performance
SiteBuilder was dead last in terms of delivering pages — it was far slower to complete our test scripts than any of the other products, and slower by a wider margin as we increased the number of concurrent clients. On average, it took anywhere from two to over eight seconds to deliver a test page, and in some cases clients took over 20 seconds to complete our test script.
Part of SiteBuilder’s problem is NetWare’s own drivers and IP-related support software, but part is also inefficient use of NetWare resources. The server hit 100% CPU utilization (as displayed on the NetWare console monitor) with just 40 concurrent connections. We also had all sorts of trouble getting the server to complete our test scripts with more than 10 clients when the server used an Adaptec PCI SCSI adapter: when we switched the NetWare server over to an IDE disk drive, we were able to finish our tests. This seems to be a problem with how NetWare handles the PCI SCSI drivers, but we can’t say for certain.
We ran our tests using the default setting for the maximum number of concurrent threads (16), which may account for the extreme slowness observed. When we tried increasing this to 32, however, we kept crashing our NetWare server midway through the tests. We think American Internet needs to test the product more carefully under stressful loads.
SiteBuilder comes with an IPX-to-IP gateway, so client machines can still run IPX and connect to the Web server. However, for comparison with the other servers, we ran our Windows 95 workstations using only the standard Microsoft TCP/IP stack and did not use this gateway.
———————————————–
Netscape Corp.
Netscape’s latest NT server, FastTrack, is still in beta, but it shows lots of promise: it is the fastest performer we’ve tested and chock-full of features. It comes with support for Secure Sockets and Java — something that is either extra or unavailable on other servers. Being a beta, there isn’t much in the way of documentation: most of what you’ll need you’ll find by browsing the various administration screens once the product is installed. Getting FastTrack set up is fairly easy: once you download the 16-megabyte self-extracting file, it creates a series of icons on your Windows desktop. The beta version includes a copy of Netscape Navigator Gold v2, which does triple duty as a browser, an administration tool, and an HTML authoring tool.
If you want to connect to various ODBC databases, make use of various document creation templates, check your site’s links or make use of Netscape’s tools for converting documents and images from non-Web formats, you’ll need to purchase the $695 LiveWire Pro package, which includes these tools, along with an Informix server for running your database applications. Existing Netscape Communications Server customers can upgrade for $175.
Basics
Netscape has recently made some major changes in its server strategy: until Microsoft came on the scene with IIS, its NT servers were the high-priced spread and often lagged behind its Unix offerings in terms of stability and features. That is all history, and with FastTrack Netscape has produced a solid contender for the NT arena that is also quite inexpensive.
Like Purveyor, FastTrack runs on either workstation or server versions of NT and on either Intel or Alpha machines (in addition to various Unix boxes). While Netscape claims a server with 16 megabytes of RAM is sufficient, we think 64 megabytes is closer to what you’ll end up needing.
Being a beta, we are reluctant to comment on things we know will change when the product reaches a finished state: for example, no printed documentation is included with the downloaded software, and you’ll have to use a browser to find descriptions of most of the product’s features. However, given the quality of Netscape’s earlier server documentation, we expect FastTrack’s will be at least on par with Microsoft’s and SiteBuilder’s in level of detail.
Getting the product running will require installing three different pieces of software: the installation routine, a copy of Navigator Gold which is used for administering the server, and the server software itself. All of this took very little time to accomplish, although downloading 16 megabytes of software might be the biggest challenge.
Notable Features
FastTrack is one of the more feature-laden servers we’ve seen and comes with many items that either cost extra or are unavailable for other servers. Like IIS and Purveyor, it comes with support for various ODBC database drivers, although unlike either of these products, it also includes a copy of Informix Online database server.
FastTrack comes with support for Java and JavaScript, allowing developers to write these applications to run on either the server or client side. This is unique among NT servers — all other Java-capable servers are currently available only for Unix-based machines.
All Web servers support CGI programming, but Netscape has written its own interface, called the Netscape Server API. Like Microsoft’s ISAPI, these interfaces are highly tuned to perform better than ordinary CGI programming. Few products support this API yet, although there is interest in the development community given the role Netscape plays in all things Web. Like WebSite, FastTrack also supports the WinCGI interface, so programmers can write scripts in Basic just as they would write them in Perl for other Web servers.
Netscape is making a big deal of the fact that FastTrack includes its own log analysis tools, so webmasters can easily view total hits and traffic reports. However, we would rather use one of the many specialized tools for this purpose, so we don’t think much of this feature.
Netscape offers solid support for hosting multiple web domains on a single server: you can set up individual document and CGI directories that are completely independent of each other, as well as have different default documents for each web. In contrast, IIS offers much more limited support for this situation.
Administration, security, logging.
Netscape servers are administered with a browser, whether you are sitting at the server’s console or working across the Internet. While we applaud this flexibility, we had a great deal of trouble getting the administrative interface to run properly on our test networks. We ended up uninstalling and re-installing the product several times, and at one point it stopped working completely. (Luckily, our Web server continued to operate — we just couldn’t administer it.) We hope this will be fixed when the released product is available. Each administration page was also very slow to load, even when we ran the browser on the server’s own console — again, we hope this improves with the finished product.
The product comes with a solid set of security features, including support for Secure Sockets Layer v3.0, which encrypts communications between client and server (otherwise, this information is transmitted in clear text) — although studies have shown that the level of encryption used is fairly simple to break. With other Web servers such as WebSite or Purveyor, you’ll need to purchase a higher-priced encrypted version to get support for this protocol.
One weakness of FastTrack is its lack of support for rotating logs, whereby once the log reaches a certain size, the server archives the file and begins a new log. Otherwise, there are many log options: we think FastTrack and WebSite are both tops when it comes to keeping track of your visitors.
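The rotation scheme just described — the one FastTrack lacks — is easy to sketch in Python. The size threshold and the numbered-file naming are our own assumptions for illustration, not any server’s actual convention:

```python
import os

def rotate_if_needed(log_path, max_bytes=65536, keep=5):
    """If the log exceeds max_bytes, shift old generations up one slot
    (log.1 -> log.2, and so on), archive the current log as log.1,
    and keep at most `keep` old generations. Returns True if rotated."""
    if not os.path.exists(log_path) or os.path.getsize(log_path) < max_bytes:
        return False
    # Drop the oldest generation to make room.
    oldest = f"{log_path}.{keep}"
    if os.path.exists(oldest):
        os.remove(oldest)
    for i in range(keep - 1, 0, -1):
        src = f"{log_path}.{i}"
        if os.path.exists(src):
            os.rename(src, f"{log_path}.{i + 1}")
    os.rename(log_path, f"{log_path}.1")
    return True
```

The server would call this check before appending each batch of entries, then open a fresh log file whenever it returns True.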
Other things you can control from the administrative screens include adding users, restricting various parts of your server to different groups, and running pre-set reports against the logs. There are about 40 different screens available, about on par with what WebSite and Purveyor offer.
Performance
FastTrack lived up to its name and delivered the best performance of any of the servers tested, matching or beating Microsoft’s IIS. Pages took less than three seconds to download, whether 10 or 130 concurrent clients were running. FastTrack was also very efficient with CPU resources, reaching 100% of the processor only during our 130-client test. Only Microsoft’s IIS was more sparing of the CPU; the remaining NT servers reached this peak at much lighter client loads. On average, FastTrack’s performance degraded less than three percent between the 10-client and 130-client tests: very impressive indeed, especially for a beta version.
Our tests were designed for worst-case situations to expose product weaknesses. In actual use on real networks, FastTrack will probably have plenty of performance to spare and you’ll see limitations and bottlenecks in the network long before you’ll see this server running out of gas.