Dave's Blog

Saturday, March 30, 2002 12:09:07 PM In his book, Introduction to 'C' Programming on UNIX, William Holliker gives students an interesting assignment on p. 7-29. The problem statement is:

The system provides a function named logname() which gets a user's
login name. Read the man page on logname(S) and then write and
compile a C program which will print the message:

Hi, train01!
where train01 is the login name of the user executing the program.
Bill's solution, apparently written for SCO Unix in 1999, is this:  
	/* exer7_1.c - Answer to Unit 7 Exercise 1
	   compile with: cc -o lname lname.c -lPW */
	#include <stdio.h>

	void main(void)
	{
	    char *lname;          /* Logname character pointer */
	    char *logname();      /* Declare the function */

	    lname = logname();
	    printf("Hi, %s\n", lname);
	    exit(0);
	}
This program may be fine and dandy if you're running SCO Unix, but it 
won't compile on Solaris 8 or Red Hat Linux 7.  The logname() 
function doesn't exist!  Thus the portability problem rears its ugly 
head, and people like Rick Carey, an information technology manager 
at Merrill Lynch, pretend to rend their clothes in anguish over lost 
productivity.  Hmm ... could there be a more portable way of writing 
this code?  

Yes, there is.  IEEE POSIX 1003.1 describes a function, supported by 
all POSIX-compliant operating systems (that includes Microsoft, you 
know), called getlogin().  The getlogin() function will collect our 
required data quite nicely:  
	/* File p7-29Ex1a.c
	   David R. Dull
	   San Francisco State University, College of Extended Learning
	   Sun Jan 21 10:22:29 PST 2001
	   Saturday, March 30, 2002 11:57:09 AM */
	#define _XOPEN_SOURCE
	#include <stdio.h>
	#include <stdlib.h>
	#include <unistd.h>

	int main(void)
	{
	    char *userptr;

	    userptr = getlogin();
	    if (userptr == NULL) {   /* getlogin() can fail, e.g. with no login name set */
	        fprintf(stderr, "getlogin() failed\n");
	        exit(1);
	    }
	    printf("Hi, %s!\n", userptr);
	    exit(0);
	}
But wait!  POSIX 1003.1 provides not one, but several functions 
which can do the job!  The second one is cuserid():  
	/* File p7-29Ex1b.c
	   David R. Dull
	   San Francisco State University, College of Extended Learning
	   Sun Jan 21 10:36:48 PST 2001
	   Saturday, March 30, 2002 11:57:33 AM */
	#define _XOPEN_SOURCE
	#include <stdio.h>
	#include <stdlib.h>

	int main(void)
	{
	    char username[L_cuserid];
	    char *userptr;

	    userptr = cuserid(username);
	    printf("Hi, %s!\n", userptr);
	    exit(0);
	}
Both of the above programs compile and run on both Solaris 8 and 
Red Hat 7, with no -- zero, nada, nil, nothing, nought -- portability 
programming effort.  They just compile and run.  And they will 
compile and run on every POSIX-compliant operating system, 
which includes almost every Unix on the market today.

The writer of the Linux man page for getlogin() and cuserid() noted a 
third POSIX-compliant method:  
Nobody knows precisely what cuserid() does - avoid it in portable programs - avoid it altogether - use getpwuid(geteuid()) instead, if that is what you meant. DO NOT USE cuserid().
The man page is dated 3 September, 1995, but is unsigned. Hey, Mr. Bigshot Security Programmer, man pages have an AUTHOR section. Use it. IEEE POSIX 1003.1-2001 in all its glory, written in collaboration with ANSI and The Open Group, which together are called "The Austin Group," can be found at The Open Group home page.


Friday, March 29, 2002 11:27:20 PM Wednesday Merrill Lynch made a splash by announcing their plan for an enterprise-wide move to Linux. This gives Linux, Red Hat specifically, the same air of legitimacy that was given to Sun Microsystems in the 1990s. Of course, the use of Solaris and other Unix systems did not replace Microsoft operating systems, and it's obvious that the "enterprise-wide" character of the announcement is intended to signal the Redmond giant that brokerages are not too happy with its price structure. In the 1990s the brokerages used Solaris, AIX, and HP-UX to speed up and multiplex information moving onto the traders' desks, to connect to back-office software, and to run the servers. The investment ran into millions, if not billions, of dollars, over the alleged 15-year development cycle. Merrill Lynch is proposing a project of equal or larger scale, perhaps surreptitiously motivated by the blue-suited sales trios from IBM. I think it's interesting that the press, that is, Forbes magazine, is buying the line that "Merrill can write an application once and then deploy it with minimal work on mainframes, desktops, laptops, and handhelds -- whether it be on Intel hardware or something else." Obviously someone, probably Red Hat, is feeding the press, and Merrill Lynch, a big line. Anyone who has ported software from one architecture to another recognizes this is patently false. Even the simplest port to handheld computers running Linux is being ballyhooed as a major accomplishment, and readers would do well to keep up with the technical press. The article hits closer to reality when the reporter and the Merrill manager note that with Unix, "developers write software for every version of Unix, including for tools and patches. This approach ... is time-consuming and expensive." No news there, except when a financial services company mistakenly claims that it is a software publisher. 
Software written for portability, that is written in ANSI C and in accordance with the POSIX standard, is as portable as software written for Linux. In point of fact, Linux is primarily an imitation of Unix, conformant to the POSIX standard, whose main promise is the freedom of diversity in future development directions. As of today, it is a poor imitation of the commercial Unixes, but the article points out that 5 years from now that could be a different story. Linux has come an amazingly long way in the last 5 years. If you've worked on the software side of a financial services company, you know that everything it writes is tailored to a specific target machine. There is no application that runs "on mainframes, desktops, laptops, and handhelds" regardless of the operating system, simply because the financial services companies could not tolerate the inefficiency. However, one thing they have been very good at lately is developing applications that run across a suite of these architectures, so when you trace them end-to-end they have traversed all of them. A prime example would be a stock trade (surprise, surprise!). The trader places a request on a browser running on his handheld computer, which triggers server-side software running on the HTTP server, which interfaces with middleware (from IBM, naturally) that converts the request to a database transaction on a mainframe, that turns into an Electronic Data Interchange-style transaction across multiple mainframes in the financial services networks. The primary development cost here is not porting from Solaris to AIX or vice-versa, but weaving together bits of applications that must run on multiple, incompatible operating systems such as Windows CE, Solaris, and OS/390. The holy grail of getting one development team to write in one language for an entire application end-to-end promises more than the unlikely contingency of moving a piece from Solaris to HP-UX. 
Ultimately, 15 years down the road, the financial services companies will be disillusioned with Linux, as it did not deliver all that the IBM and Red Hat sales teams promised in 2002, but they will be working with new languages and operating systems anyway. In the meanwhile, developers who learn to work with Linux, as well as any other Unixes, and with languages such as C, C++, Java and the like, will enjoy a rich bonanza as the financial services companies throw tons of money at them to save overall operating costs and to increase peak capacity.


Sunday, March 24, 2002 10:07:01 PM One of the first things a student of The C Programming Language, by Kernighan and Ritchie, learns is that there are technically at least two C languages. The authors explain on page 2 that the first, "K&R" or "classic" C, introduced to the public in 1978, was not identical to the "ANSI C" that was approved by the American National Standards Institute in 1988. On pages 2 and 3 they explain the differences. ANSI has approved standard C languages since 1988; they are distinguished by the year they were approved. The classic C language was a simple yet elegant implementation of a number of programming ideas that had proven to be useful to the development team at Bell Labs. The ANSI standard was the result of a ten-year discussion among some of the leading critics and users of the classic C language, whose experience justified some enhancements. Some of the enhancements had been in use by compiler writers who had thought of good extensions and had included them already. One enhancement that was very popular (indeed, it is popular with all programming languages) was a set of library functions that could be taken for granted and used without being rewritten from scratch. This was actually indispensable with the C language, because it did not include any built-in keywords to read from or write to an input/output device, and it did not include any built-in keywords to perform any string processing. The keywords were pretty much limited to simple numeric data types, one character data type, and control constructs like "if," "while," "for," and the infamous "goto." When programmers first saw C they got the impression that they were actually looking at a compact notation for FORTRAN, with some assembler language thrown in. The "gotcha" was that the assembler appeared to be for the DEC (may it rest in peace) line of 32-bit minicomputers. 
The missing parts of the language were, in the opinion of the team at Bell Labs, properly in the domain of the operating system (they were pushing UNIX). The ANSI committee, whose primary purpose was to bring order to the enhancements, approved a standard library for C. The primary and most obvious change to the C language that the ANSI committee inflicted, however, was a modification of the function call interface. The purpose of the change was to curb the proliferation of the most grave error that haunted the industry, "type mismatch." In classic C the programmers could call a function with any number of arguments, and then declare their type independently in the function. The compiler had no way of verifying that the type of the data in the calling function and the type of the data in the function called were the same before the two functions were linked. That is, there was no source-level type checking. The committee decided that this could be remedied by specifying a more intelligent (or less intelligent, depending on your point of view) compiler, one that required the programmer to declare up front what data types he was going to use. This was called a function prototype, and all the compiler needed to check both the calling function and the function called was to compare them with the prototype. To make the point more clear, let's look at an example. In classic C, the function "isItAlpha()" could be used and defined on the fly as follows:

	main()
	{
	    int a;

	    printf("Please enter a character and the enter key: ");
	    a = getchar();
	    if ( isItAlpha(a) ) {
	        printf("That character is alphabetic.\n");
	    } else {
	        printf("That character is not alphabetic.\n");
	    }
	}

	isItAlpha(b)
	int b;
	{
	    if ( (b >= 'a' && b <= 'z') || (b >= 'A' && b <= 'Z') ) {
	        return 1;
	    }
	    return 0;
	}
A classic C compiler would compile and run this program, even if the 
main() and isItAlpha() functions were in two different files.  There 
would be no type checking.  That is, main() could call isItAlpha() 
with an integer argument and isItAlpha() could assume it was being 
called with a character argument, and the compiler would merrily plug 
the two together without noticing the mismatch.  In a small example 
where the two functions are seen together that doesn't appear 
likely, but in commercial projects where functions were being 
coordinated across dozens or hundreds of files this was a nightmare.  
It was a fertile breeding ground for one of the most common bugs. 

The ANSI standard imposed a function prototype, which had to appear 
in each file that called the function before the function call.  
It was, in essence, a promise that the function, whenever it would be 
defined, would have a specific return type and specific argument 
types.  With this information the compiler could catch the bugs 
before they hatched.  The ANSI C source code would look like this:  
	#include <stdio.h>
	#include <stdlib.h>

	int isItAlpha(int);

	int main(void)
	{
	    int a;

	    printf("Please enter a character and the enter key: ");
	    a = getchar();
	    if ( isItAlpha(a) ) {
	        printf("That character is alphabetic.\n");
	    } else {
	        printf("That character is not alphabetic.\n");
	    }
	    exit(0);
	}

	int isItAlpha(int b)
	{
	    if ( (b >= 'a' && b <= 'z') || (b >= 'A' && b <= 'Z') ) {
	        return 1;
	    }
	    return 0;
	}
The line "int isItAlpha(int);" before the main() function is the 
function prototype that tells the compiler how the main() function 
is supposed to interface with the isItAlpha() function.  

An observant student may notice the "#include" lines.  Note that 
these are not comments.  The printf(), getchar(), and exit() 
functions have been defined for the programmer in the C standard 
library, and the header files stdio.h and stdlib.h contain their 
prototypes.  The man pages for the functions tell the programmer 
which header files contain their prototypes.  So the "#include" lines 
actually include more function prototypes!  

Another observation a quick student may make, since he or she will 
have cut and pasted the examples into files and compiled them, is 
that most compilers, like gcc, will compile both files without any 
complaints.  Yes, most compilers are smarter than the ANSI standard 
requires, recognizing and compiling both the classic C source and the 
ANSI C source.  A programmer has to ask the compiler for help in 
detecting the pernicious "type mismatch" bug.  With a command like 

	gcc myexample-standard.c 

the bug may well be born undetected, while with a command like 

	gcc -ansi -pedantic -Wall myexample-ansi.c 

the compiler is requested to use its best effort to detect and warn 
about the possibility of the bug's existence.  


Sunday, March 24, 2002 12:24:30 AM Linux is on a roll. While I spent the good part of March 8th bemoaning the current state of Linux, it appears to be healthier than I anticipated. Yesterday I heard reports from ten students on various topics, including the standard assignment of comparing the most popular Unix and Linux distributions. Students had the option of reporting on a special topic instead, and the topics they chose were almost all about Linux or open-source applications. One student reported on ssh, which is rapidly becoming the replacement for telnet. Another reported on firewalling with ipchains, an almost exclusively Linux feature. A third reported on using Linux servers in the corporate MS Windows environment. The part that impressed me the most is that most of my students are not academics -- they're experienced computer professionals. It appears that Linux has arrived, and its strong point is in future development directions. Currently Linux is gaining market share due in part to the licensing costs of Microsoft products. There are allegations that Microsoft no longer sells its operating systems, per se, and that its heavy promotion of XP is to obsolete previous sales and move the business model into a license model. Perhaps Microsoft is actually moving away from selling the operating system, and just charging a fee for the copying and distribution of the binary version, to satisfy sections 1 and 3b of the GNU General Public License. Perhaps Microsoft will wait and see how IBM fares in the same arena before it enters. Microsoft has some history with Unix, and I had suspected that its heavy investment in Corel might forebode a Linux version of a Microsoft operating system around 2004. What I didn't notice was that Corel had spun off its Linux distribution, so I am not sure that Microsoft is going to enter the fray. Considering its power as an R&D company, I have a hard time believing that MS needs the tiny Corel to port its C# language to Linux. 
It could be that I just saw the dying flames of a skunkworks project that crashed and burned. Indeed, there are clues that Microsoft cashed in its chips and moved on. Considering the hundreds of workers Corel has laid off in the past few years, Microsoft could have more easily extended them direct job offers, and maintained a stealthier development strategy. Perhaps that is what actually happened. Only time will tell.


Tuesday, March 12, 2002 6:55:56 AM Why do programmers need to know system administration? After all, it's a full-time job. The two specialties are complementary, and cross-over between them is not all that common. Sometimes a programmer will need to perform some system maintenance, for example at times when the company for whatever reason cannot find or cannot afford a system administrator. But there are also good programming reasons to get to know your platform. I was working with a hardware design group at a major computer manufacturer, when I was called by a user to troubleshoot a printing problem. It turned out he was a new temporary CAD designer, and his system had been set up by fellow engineers because they didn't want to wait for the IT department service cycle. Apparently whoever had set up the designer's account had decided to have him log in as root to work around the printing problem. I asked the designer if he was aware of the risk he was taking by using the root login. His reply was "I am a chip circuit designer. I design chips. You are the system administrator. I know AutoCAD, and I don't want to know anything about the computer. Your job is to make it work." In other words, if he were allowed to destroy his computer, he wasn't the least bit interested in making sure that he wouldn't. Suppose he had deleted his applications and some files necessary to properly boot the machine. Guess who he would have blamed. If you never lift a screwdriver and "crack the case" of your workstation, it helps to know how to handle it safely. Waiting for days for some overworked system administrator to show up in your cubicle is a lot less fun than getting on with your project. When I was working at a major financial services company I discovered that a good part of their budget went to writing in-house applications that its customers would never see. They were designed, from the start, to make maximum use of specific machines. 
From the point-of-sale PCs to the clearinghouse mainframes, every disk and cpu cycle was accounted for, every possible resource was tweaked to the maximum. The programmers who wrote the applications had to know the platforms intimately. Even though languages like Java and Perl are touted as write-once, run-anywhere software, the platform that executes the program will have an impact on its performance. Whether you're writing software for the shrink-wrap market, or writing for a specific target machine, your code will improve if you understand what is going to run it.


Monday, March 11, 2002 12:18:27 AM Tonight I got around to reading Homesteading the Noosphere, by Eric Raymond. It's an eloquent, thoughtful description of the way the open source community operates in terms of motivating programmers to produce high-quality products. Products like gcc, Perl, Linux, and Apache. Products that programmers use every day, both in the open source community and in the proprietary source community. In his analysis, Raymond asserts that the prime cause of quality and of successful project management is that of crediting the contributors to a satisfactory degree. Perhaps this is the essential difference between the quality of software for Linux that is produced by the open-source community itself, and of software for Linux that is produced by programmers in the proprietary source community. Although the latter have their salaries and their tee shirts, most of the acclaim that they can enjoy is of a limited scope. Their managers, their managers' managers, and their close work groups may be, if they are lucky, made aware of their contribution. They hang a plaque on their cube walls, to remind them that they have been recognized. In the open source community, a proper credit stands forever. A contributor can return to the evolving product, as released to the Universe, and find his credit. He can point to this credit when he feels the need to assert his expertise to strangers. Somehow a certificate on a cube wall does not have the same public power. Perhaps these differences will dissolve over time, as the companies that contribute to the open source movement learn from their experience. American companies operating in Japan had to overcome their prejudices and to learn the host culture, in a process that took decades for some. Companies like IBM, Sun, and Lucent Technologies may learn the open source host culture a little more quickly.


Friday, March 08, 2002 11:07:04 PM My, isn't it fun being on the leading edge! Well, yes, it isn't. Let's see, it was in October last year that I read The Cathedral and the Bazaar, by Eric Raymond. I was preparing to teach an introduction to Linux, and I wanted to be up on the issues. I got all inspired about Linux being the operating system of the future. I went out and bought a copy of Red Hat Linux 7.1, and installed it on my IBM Thinkpad. The installation went smooth as silk, and I began to tell everyone that Red Hat was "ready for prime time." A little research indicated that I could even get a Thinkpad with Red Hat Linux pre-installed and tested! Further research indicated that this configuration, IBM stock hardware with a free operating system, was priced at about $3500, or over twice as much as the same hardware running that proprietary, "monopolistic" operating system, Windows 98. Well, you can imagine the enthusiasm with which I met that news! Thank you, IBM, for your support of Linux! So Windows 98 remains my primary operating system. I also looked into the possibility of getting a Winmodem driver for Linux, so I wouldn't have to switch back to Windows 98 to surf the web and to download software. In October, Milton Yee, a student in one of my Unix/C/C++ classes, had sent me some pointers to a group of people who were working on Winmodem drivers for Linux, so it was worth a shot. After I downloaded the drivers from Lucent Technologies, a few attempts at installing them convinced me I needed a more recent version of the Linux kernel than was bundled with Red Hat 7.1, just about the latest version available. OH, THANK YOU, Lucent Technologies, for your support of Linux! Fast forward to January, when I was between contracts and had some time on my hands. I read Sams Teach Yourself Java 2 in 24 Hours, by Rogers Cadenhead, and I decided to download the Java 2 SDK and do the practice exercises. 
Now, Microsoft Internet Explorer and Netscape Navigator came with Java 1.1 built-in. But Java 2 had to be downloaded as a plug-in, because both companies had decided that writing an advanced compliant Java Virtual Machine was too loaded with licensing issues. Of course, the book had no CD because of the same issues, and the J2SDK had to be downloaded directly from Sun Microsystems. Still faithful to Linux as the operating system of the future, I decided to download the Microsoft run-time environment and the Linux SDK. OK, here we go: Installation on Microsoft Windows 98, from its own self-extracting file incorporating an "Install Wizard," took the JRE about three minutes. Once installed, both Internet Explorer and Netscape Navigator had no problem using it. Installation of the JDK on Red Hat Linux with RPM took about 5 seconds. Once installed, um, hmm, uh, ... well, no amount of tinkering seemed to get Netscape Navigator to recognize a "javax/swing/JApplet." Netscape dutifully relinquished control to the Sun JRE plugin, so the problem had to be in the plugin itself. The SDK Appletviewer worked perfectly fine, but the plugin that came in the same kit did not. Thank you, Sun, for your support of Linux! Last night I read In the Beginning ... Was the Command Line, by Neal Stephenson. In a brief volume that was a history of computer interfaces, a discourse on the technological nature of today's society, and a few war stories, Neal made the point that proprietary operating systems all had fundamental flaws, and that Linux was the operating system of the future. After my own forays into the use and maintenance of Linux, here's my take on the subject: All the computer manufacturers and software publishers have announced their honest and earnest support of Linux, the operating system of the future. All of them continue to publish and maintain their own proprietary software. 
All of them have provided software for Linux, and every single one of their Linux software products has proven to be inferior to the proprietary stuff that they continue to produce. Not only do I think the priests in the Cathedral are giving faint praise to Linux, but I think that by their weak support they are continuing to maintain the proprietary sanctum sanctorum that has worked so well for them in the past. Open source programmers who have been making free imitations of the commercial software will always be playing catch-up, and the business people who need business software will continue to spend real money for real products. My operating systems of the future are Windows XP and Solaris 9. Why? Because the paid-for commercial software is generally superior to the free stuff. Really! I have seen it for myself.


Sunday, March 03, 2002 5:35:57 PM In the future, when people talk of the "Recession of 2001," I will be quick to point out there was none. That's right. No recession. It may be politically incorrect to say so, but the facts are in. A recession is defined as a period in which the Gross Domestic Product (GDP) declines at least two quarters in a row. The Merriam-Webster Collegiate Dictionary defines it as "a period of reduced economic activity." The third quarter of 2001 was a period of reduced economic activity, but the fourth quarter was not. Close, but no cigar. Now being called "a recessionette" and "the mildest recession in U.S. history," this was not a recession. Business Week editors began to get uneasy talking about "the recession" when the preliminary data for the fourth quarter of 2001 came in. Preliminary estimates showed that there had been a slight growth in the fourth quarter, 0.2%. Now it's official: GDP grew in the fourth quarter, by 1.4%, and some readers are taking economists to task for spreading doom-and-gloom messages with political content. My alumni can testify that I was upset at the time by the newspapers that chose to sensationalize for profit rather than to clarify the truth. Other indicators of the downturn have begun to turn around. Most of the stock slides and layoffs were related to technology. Average stock prices for 11 out of 13 industries rose last year. Almost all stocks rose a bit in the fourth quarter, and it looks like they're still rising. Ironically, it seems that the downturn was too short to call a recession precisely because of the accomplishments of the people who suffered the layoffs. Inventory control systems were more efficient. Forecasting tools were more powerful. Electronic data interchange software allowed orders to be delayed to satisfy just-in-time requirements that couldn't be managed except in theory a decade ago. 
So when orders fell off, the programmers who had designed the systems found they could take all day to enjoy a cup of coffee, instead of the usual ten minutes. (OK, I'll admit it looked like they usually took more like 30 or 40 minutes. Maybe that was a factor.) When I tell people there is no recession they tend to object. We all know people who are out of jobs. Unemployment in the San Francisco Bay Area is higher (supposedly over 7%), than nationwide (5.6%). But this is basically a one-industry downturn. That's not a recession. When Appalachia went into a depression, the nation didn't declare a crisis. When the Apollo Project completed and aerospace engineers turned to driving taxis to pay the rent, the nation didn't declare a crisis. Today's crisis is a local crisis, and it's a crisis in the past. There are high-tech jobs in New York state. There are high-tech jobs in Utah. Techies who were drawn to California during the frenzy of 2000-2001 have gone back to Kansas, to work in high-tech jobs in places like Overland Park. High tech is hiring in Silicon Valley and San Francisco again, albeit not at the pace we saw before. Is this such a bad thing? I reserve judgement on that. I expected the economy to falter in 2000, given the intense capital spending we pushed in 1999 under the guise of Y2K prevention. The fact that businesses just kept on spending was a complete surprise to me. I remember articles in 2001 that speculated the stock market had to fall some time, and other articles that supposed there were actually two economies, the "old economy" and the "new economy." It turned out the "new economy" was an industry bubble, and the bursting of that bubble was a normalization. If you take the historical stock curves for the industries in question and draw a line projecting from their historical growth rates, you can see the bubble and the normalization. Wages and employment followed that bubble. Many techies dropped out of school to get a piece of the action. 
Many of them discovered they were in over their heads, and coped on a day-to-day basis with the puzzles they were presented by their fast-paced business and technical environments. Many put off studying for certification or advancement because they were working 60-hour weeks trying to maintain the 24x7 myth. So now we find that there are government programs coming on line that will help them return to school. They can pick up the theory to give a solid foundation to their experience. They can take the time to study for the certificates and the certification. And, as business picks up again, at its normal pace, they can be the experts of the information age, the economy that was never new, that was just caught up in an explosive growth phase. Of course, there's the possibility that there will be a recession in 2002. Many who were unemployed still are. But many are turning to other careers as well as education. There was a joke last year that "The restaurants are happy with the burst of the dot-com bubble. Their best waiters are coming back." That doesn't sound very much like a joke any more. What lesson can we take with us to hedge against a future recession, or a future industry downturn? Diversify. Diversify your 401(k)s and IRAs, diversify your skill set, diversify your life. Waiting on tables was not a bad job before, and it's not a bad job now. Only this time you can repair the internet workstations that line the windows, and add in a web server and a back-end database engine. Instead of hearing "would you like some fries with that?" in the future we may hear "would you like a business account with that?" And we are more likely to say yes. Or, "I'll have a hamburger, a diet Coke, no fries, and 67 shares of Safeway stock."


Previous Logs
2002: January February
2001: May June July August September October November December

 Return to David's Home Page
Your comments are welcomed by David R. Dull, ddull@netcom.com.
(C) Copyright 2002, by David R. Dull. All Rights Reserved.



