Path: utzoo!mnetor!tmsoft!torsqnt!lethe!yunexus!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!cs.utexas.edu!sdd.hp.com!caen!uflorida!gatech!bloom-beacon!mit-eddie!uw-beaver!ubc-cs!sritchie
From: sritchie@cs.ubc.ca (Stuart Ritchie)
Newsgroups: comp.os.mach
Subject: Re: Threads, Definition of
Message-ID: <1991Feb20.011728.15702@cs.ubc.ca>
Date: 20 Feb 91 01:17:28 GMT
References: <1476@pdxgate.UUCP> <21892@oolong.la.locus.com>
Sender: news@cs.ubc.ca (Usenet News)
Organization: University of British Columbia, Vancouver, B.C., Canada
Lines: 40

In article jgmorris@CS.CMU.EDU (Greg Morrisett) writes:
>There is an additional advantage to letting your OS know about your
>threads: If one thread blocks for I/O, your whole task doesn't have
>to be preempted.  Some threads packages that are entirely in the
>runtime get around explicit I/O preemption by doing a non-blocking call
>(e.g. select) before doing the blocking call.  But this sort of trick
>isn't possible on a page fault.  Note that this advantage applies
>to uni-processors as well as MPs.
>
>-Greg Morrisett
> jgmorris@cs.cmu.edu

This advantage is real; however, most of the I/O calls I want to make
from within a thread will block anyway.  File system calls are an
obvious example.  Under NeXT Mach, I can't do a read() without the
whole task blocking.  Since Mach currently relies on so much Unix code
for I/O, the file system for example, this blocking problem probably
won't go away until the file system code is redesigned specifically
for Mach and threads.  I'm guessing at this point...

I saw a comment regarding the efficiency of threads implemented in
user space compared to threads supported by the kernel.  The poster
claimed a 10-1000 times greater cost to schedule kernel-supported
threads.  I would believe 10 times greater cost, but 1000?  Could
someone explain why the overhead is so great?  System call overhead is
one thing, but I don't see how that can account for everything.  Isn't
the microkernel approach supposed to reduce system call overhead?  I
would appreciate any comments on the performance and other issues of
kernel- vs. user-space-supported threads.

Specifically, I plan to implement something on the NeXT that makes
heavy use of LWPs, namely protocols (X.25, TCP/IP, and the OSI stack),
so context switches between LWPs should be as efficient as possible.

....Stuart
sritchie@cs.ubc.ca
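P.S. For concreteness, the select()-before-the-blocking-call trick Greg
describes looks roughly like the sketch below, as a user-level threads
runtime might wrap read().  select() and read() are the real Unix calls;
thread_yield() stands in for a hypothetical entry point into the
user-level scheduler and is not taken from any actual package.

    #include <sys/types.h>
    #include <sys/select.h>
    #include <unistd.h>

    extern void thread_yield(void); /* hypothetical: run another user thread */

    ssize_t threaded_read(int fd, void *buf, size_t n)
    {
        for (;;) {
            fd_set rfds;
            struct timeval zero = { 0, 0 }; /* zero timeout: poll, don't block */

            FD_ZERO(&rfds);
            FD_SET(fd, &rfds);

            /* Ask the kernel whether fd is readable right now. */
            if (select(fd + 1, &rfds, NULL, NULL, &zero) > 0)
                return read(fd, buf, n); /* data is waiting: read() won't block */

            thread_yield(); /* would block: let another user thread run */
        }
    }

Note that select() always reports an ordinary file as readable, so a
read() that has to go to disk still blocks the whole task -- which is
exactly the file system problem above, and why the trick only helps for
things like sockets and pipes.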
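P.P.S. On the scheduling cost: all a user-level context switch has to do
is save and restore registers and swap stacks, with no trap into the
kernel.  The sketch below shows the mechanics using the SVR4/POSIX
ucontext calls, which are real but heavier than what a threads package
would hand-code (portable swapcontext() also saves the signal mask, for
instance); it is purely for illustration, not what any 1991 package
actually does.  A kernel-supported switch adds at least a system call
plus the kernel's own locking and scheduling on top of this, which is
presumably where the factor of 10 comes from; I still don't see where a
factor of 1000 would.

    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, thr_ctx;
    static char thr_stack[16 * 1024]; /* stack for the user-level "thread" */

    static void thread_body(void)
    {
        printf("in user-level thread\n");
        /* Save our registers into thr_ctx and resume main_ctx.  A
           hand-rolled threads package does essentially this in a few
           dozen instructions of assembly. */
        swapcontext(&thr_ctx, &main_ctx);
    }

    int main(void)
    {
        /* Build a context that runs thread_body() on its own stack. */
        getcontext(&thr_ctx);
        thr_ctx.uc_stack.ss_sp = thr_stack;
        thr_ctx.uc_stack.ss_size = sizeof thr_stack;
        thr_ctx.uc_link = &main_ctx;  /* where to go if thread_body returns */
        makecontext(&thr_ctx, thread_body, 0);

        swapcontext(&main_ctx, &thr_ctx); /* the "context switch" itself */
        printf("back in main\n");
        return 0;
    }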