Last updated 2/26/10
This page lists
all errors that I know of in the third edition, first printing
of the book. If you see an error that
is not on this page, I would appreciate knowing about it --
send a message to Gary.Nutt@colorado.edu.
You can find more supplementary materials — lecture notes, examples,
short explanations, exercises, and lab exercises —
here.
On page 63 the function parseCommand erroneously allocates memory. While that doesn't prevent it from executing correctly, it creates conceptual confusion for students, especially as they try to free the allocated memory before their next call to that function. Here's the problem: you allocate memory with malloc in several places but then immediately overwrite the pointer returned by malloc with the return result of strsep. If you correct this problem, though, your use of the sizeof operator near the end of the function seems to fail (I'm not sure why sizeof(some char *) would not always produce 4, but I'm no expert on sizeof). Here's my corrected version, which simply fills the argv array with pointers into the cLine string as returned by strsep. The only allocated memory is in cmd->name, and it should be free()-ed as soon as it is no longer needed (for the Simple Shell lab that is usually right before you call parseCommand again). Typed from memory, so pardon any newly introduced errors ;-)

    int parseCommand(char *cLine, struct command_t *cmd) {
        int argc;
        char *clPtr;    /* my preference for how to treat the strsep arg */

        clPtr = cLine;
        argc = 0;
        while ((cmd->argv[argc++] = strsep(&clPtr, WHITESPACE)) != NULL)
            ;
        cmd->argc = argc - 1;
        cmd->name = (char *)malloc(strlen(cmd->argv[0]) + 1);
        strcpy(cmd->name, cmd->argv[0]);
        return 1;
    }

Now, personally I think your goal might have been to copy the substrings returned by strsep into malloc-ed memory. I obviously didn't try to reproduce your intent, just cleared the memory leak. I would encourage students, in the documentation of parseCommand, to eventually call free(cmd->name).
Finally, your definition of WHITESPACE to include the period character is troublesome since you give several examples with period in your commands: "a.out foo 100" or "gcc main.c" both fail with your parser. I don't see the usefulness of comma or period in WHITESPACE. Am I missing something?
    // Wait for device to become idle
    while ((busy == 1) || (done == 1))
        wait();
    // busy == 0 and done == 0
    // Start the device (will set busy to 1)
    . . .
    while (busy == 1)
        wait();    // Wait for op to finish
    // busy is 0 and done is 1
    // Now clean up after I/O operation
    . . .
    done = 0;    // Device I/O complete
    // Request for I/O op
    while ((busy == 0) && (done == 1))
        wait();
    // Do the I/O operation
    busy = 1;
    . . .
    busy = 0;
    done = 1;

You can see an image of the graphic figure here.
... you say that no manufacturer has been able to build bigger than 20-way SMPs. It wouldn't surprise me if your own home directory at CU was on a 64 or 128-way SPARC. The old 1985 common bus Symmetric Multi-Processor architecture indeed reached a point where memory and other shared resource contention eliminated the marginal benefits for large numbers of processors ... but that is old news. By the mid 90s it was recognized that Non-Uniform Memory Architectures (where all processors can reference all memory, but some memory is faster for some processors) could dramatically reduce the cost and increase scalability of SMP systems. Many large servers are now "domained" in ways that make it extremely difficult to decide how many systems you have. The OS work to support these architectures is horrendous, but those may be the paths du jour to huge servers. ...
... Indeed many memory interconnects work by exchanging messages over a very fast network (much more scalable than a big memory bus), but the systems I have worked with make it look like mirrored memory (though perhaps without coherent cache) by the time software sees it. ...
... Specifically the code

    P_sim(PID callingThread, semaphore R, semaphore S) {
        ...
        if (R.val == 0) {
            R_num++;
            enqueue(callingThread, R_wait);
            V(mutex);
            goto L1;
        } else {
        ...

To reach this point I must be unable to acquire the R semaphore, so I am enqueued awaiting its availability. If enqueue() is a blocking call, I never release mutex, so another thread that could release me will block at the beginning of V_sim() and the system grinds to a halt. On the other hand, if enqueue() is not blocking, I could potentially place myself on the R_wait queue multiple times unnecessarily. On the other (third?) hand, if I alter the code so that V(mutex) precedes the blocking enqueue() call, I no longer have exclusive access to the R_wait queue and have a potential race condition.
Prof. O'Neill is correct. The enqueue() needs to atomically
release the mutex when it blocks, then resume at L1. This means that
the V(mutex) should be removed since it is absorbed into
the enqueue() function.
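One way to get the atomic release-and-block that the correction calls for is a POSIX condition variable: pthread_cond_wait() releases the mutex and blocks in a single step, then re-acquires the mutex on wakeup, which also supplies the "resume at L1" retry loop. A sketch under that assumption -- the names P_sim, V_sim, R_val, and R_wait here are stand-ins for the book's, not its actual code:

```c
#include <pthread.h>

static pthread_mutex_t mutex  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  R_wait = PTHREAD_COND_INITIALIZER;
static int R_val = 0;          /* simulated semaphore value */

void P_sim(void) {
    pthread_mutex_lock(&mutex);
    while (R_val == 0)                        /* plays the role of "goto L1" */
        pthread_cond_wait(&R_wait, &mutex);   /* atomically releases mutex,
                                                 blocks, re-locks on wakeup */
    R_val--;
    pthread_mutex_unlock(&mutex);
}

void V_sim(void) {
    pthread_mutex_lock(&mutex);
    R_val++;
    pthread_cond_signal(&R_wait);             /* wake one waiter, if any */
    pthread_mutex_unlock(&mutex);
}
```

Because the wait and the mutex release are one atomic operation, the V_sim() caller can never slip in between them, which eliminates both the deadlock and the race that Prof. O'Neill describes.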
Health monitoring identifies the components that are not currently meeting their requirements. Failure-mode analysis networks consider the union of all reports and suggest a course of action. Configured restart groups determine which processes should be killed and restarted, how, and in what order. If recovery does not happen within a prescribed time period (or number of attempts), higher-level dependency networks describe a sequence of graduated escalations (warm restarts, cold restarts, restarting 1st- and 2nd-order clients, rebooting individual systems and clusters of systems, ...).
Class A:

     0 1      7 8                                      31
    +-+------+-------------------------------------------+
    |0| net  | host                                      |
    +-+------+-------------------------------------------+

Class B:

     0  2                  15 16                       31
    +--+----------------------+--------------------------+
    |10| net                  | host                     |
    +--+----------------------+--------------------------+

Class C:

     0   3                                23 24        31
    +---+-------------------------------------+----------+
    |110| net                                 | host     |
    +---+-------------------------------------+----------+

Class D:

     0    4                                            31
    +----+-----------------------------------------------+
    |1110| Multicast group ID                            |
    +----+-----------------------------------------------+

Class E:

     0     5                                           31
    +-----+----------------------------------------------+
    |11110| (Reserved for future use)                    |
    +-----+----------------------------------------------+
The type of an IP address is determined by the setting of the most
significant bits in the address: Class A addresses have the most significant
bit set to 0; the two most significant bits are 10 in Class B addresses; Class C addresses use 110;
and Class D addresses have the tag field set to
1110 [Stevens, 1994]. For example, suppose we had a 32-bit IP address
0x807BEA0C. When we are thinking about IP addresses, we usually write the
32-bit address using the dotted decimal notation: We convert the hexadecimal
representation to dotted decimal notation by first separating the 32 bits into
4 bytes: 0x 80 7B EA 0C. Next, we convert each of the bytes into a
decimal number, so 80(base 16) = 128(base 10), 7B(base 16) = 123(base 10),
EA(base 16) = 234(base 10), and 0C(base 16) = 12(base 10). Finally, we write
these four decimal numbers to represent the 32-bit number as 128.123.234.12.
When we see a dotted decimal IP address where the first number is in the range
128-191, we have a Class B address: The 2 most significant bits are the tag,
the next 14 most significant bits are the net number and the 16 least
significant bits are the host number. So, after stripping out the tag field
from the two most significant bits, we see that the net number for this IP
address is 0.123 in dotted decimal notation (0x007B in hexadecimal notation),
or 123(base 10). The host number is the two least significant bytes (0xEA0C in
hexadecimal notation), or 59916(base 10).
(Thanks to Yuzo Yamamoto, Dalarna University, Sweden)