Path: utzoo!mnetor!tmsoft!torsqnt!news-server.csri.toronto.edu!clyde.concordia.ca!thunder.mcrcim.mcgill.edu!snorkelwacker.mit.edu!bu.edu!att!tut.cis.ohio-state.edu!purdue!haven!mimsy!chris
From: chris@mimsy.umd.edu (Chris Torek)
Newsgroups: comp.arch
Subject: Re: Let's pretend
Keywords: Intel, 586, windows
Message-ID: <28774@mimsy.umd.edu>
Date: 24 Dec 90 14:49:24 GMT
References: <3042@crdos1.crd.ge.COM> <450@lysator.liu.se> <1990Dec23.093537.18481@ncsuvx.ncsu.edu>
Organization: U of Maryland, Dept. of Computer Science, Coll. Pk., MD 20742
Lines: 86

>uad1077@dircon.uucp (Ian Kemmish) writes:
>>Hmmm, I've yet to see a windows-in-hardware chip that handles the input
>>semantics of windows or canvasses - you'd still need to handle the canvas
>>hierarchy in software, so having it in hardware as well just doubles
>>the amount of book-keeping you do.

In article <1990Dec23.093537.18481@ncsuvx.ncsu.edu>
kdarling@hobbes.ncsu.edu (Kevin Darling) writes:
>Apologies... I'm not sure what you meant here.

Essentially, you must retain the clipping boundaries for all windows in
software so that you can tell where the input focus is (for `input is at
cursor hot spot' interfaces, anyway; `click-to-type' interfaces could, in
theory, ask your hardware-chip `which window number is spot (x,y)', and
this can be computed during a display scan: 1/70th of a second for focus
to take effect is not too bad).  However, typically the answer to `where
is the input' is best computed by a different method than `where are the
windows', so this doubling is not quite accurate.

>>Additionally, there is the problem of what you do when you map the
>>n+1'th window....

> Yes, that's always a bother.  But we're talking about possible
>future hardware, not just today's (quick way out of corner ;-).

Depending on how you define a `window', future hardware might have to
handle numbers on the order of 10,000 windows.
(X11 was originally designed to make each individual window cheap, unlike
SunView; as time passed the windows got `fatter', and now, in addition to
`widgets', each of which is a window, there are toolkits with `gadgets',
which are not.  This is one of the reasons X11 is wrong.  ---Not to
belittle X11: it is a massive effort and there is a lot to be learned
from it.  Still, it has grown WAY too complicated.  More in a moment.)

>... any overlapping windows must be handled without asking apps to do
>redraws,

I agree with this.  The window system (as a whole, however it is built)
must provide each `window user' (application or whatever) the illusion
that it has an arbitrarily large and arbitrarily perfect screen all to
itself.  There must be a way to find out what flaws exist (e.g., mapped
or monochrome instead of true color, 1536x1152 pixels rather than
infinite, etc.) for special-purpose applications, but the default should
be a perfect virtual display.  (This is another reason X11 is wrong.)

When you draw in an overlapped window, the draw should take place in the
window.  If the covered region is exposed, the window system must put up
the result of the draw.  If that means it must draw in off-screen
memory, then it must draw in off-screen memory.

(Some will make the following objection: `My high-end display has
1536x1152 pixels, each with 24 bits of true color.  That is 5 megabytes
per display.  You want a window system to allow 100 overlapping
full-sized windows, and you want it to retain all 500 megabytes?!?'
The answer to this is `yes': `How much did you pay for your high-end
display?  And you mean to tell me that after that, you cannot afford
another $1500 for a 600 MB disk for virtual memory?'  The usual comeback
is `but the application can recompute the display using less memory'.
Yes, but so what?  That requires more code in every application.
Pretty soon you have to buy a few $2500 1.2 GB disks to hold the
applications, not to mention all that money spent on developer effort to
write the extra redisplay code, not to mention the low bandwidth between
the CPU and the display compared to on-display, ....  The extra data
space in each application is not free, either.)

>so clipping is out of the question.

Not at all---*within* the window system.

Anyway, to move back towards architecture, there is one key point when
it comes to doing windows in hardware: working smart will always outdo
working hard, but working hard can sometimes (often?) be cheaper.  Right
now, however, I think the tradeoff remains on the side of `working
smart', i.e., doing the windows in software.  It is moving towards
`working hard', but has not got there yet.  Give it a few more years....
--
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 7163)
Domain: chris@cs.umd.edu	Path: uunet!mimsy!chris