Path: utzoo!censor!geac!torsqnt!news-server.csri.toronto.edu!rutgers!gatech!mcnc!duke!khera
From: khera@thneed.cs.duke.edu (Vick Khera)
Newsgroups: comp.lang.c++
Subject: virtual memory (was Re: NIH class libraries for Turbo C++)
Message-ID:
Date: 29 Nov 90 14:00:13 GMT
References: <14440@accuvax.nwu.edu> <15150004@hpdmd48.boi.hp.com> <59300@microsoft.UUCP>
Sender: news@duke.cs.duke.edu
Followup-To: comp.os.misc
Organization: Duke University CS Dept., Durham, NC
Lines: 47
Nntp-Posting-Host: thneed.cs.duke.edu
In-reply-to: jimad@microsoft.UUCP's message of 26 Nov 90 18:31:38 GMT

In article <59300@microsoft.UUCP> jimad@microsoft.UUCP (Jim ADCOCK) writes:

> Having used both Unix-style huge linear addresses, and Intel 80x86
> segments, I believe neither has much in common with OOP. In either
> one has to copy objects around, or do manual clustering of objects
> via heuristics, etc. Maybe one needs hardware based on an obid+offset,
> with automatic support of clustering?
>
> The "huge linear address" of Unix-style machines is a farce in the
> first place, given that that "huge linear address" is built of 4K
> typical pages, which are mapped in unique ways to disks, and
> programmers have to reverse engineer all these aspects to get good
> performance in serious applications.

You seem to be implying that a segmented architecture is better than a
system that implements virtual memory in a linear address space. There
is no need to "reverse engineer" those aspects to get good performance
out of a Unix OS. The OS is free to figure out which parts of the
application are needed and keep those in memory, with the rest loaded
in on demand. This also reduces initial startup time, since the whole
program doesn't need to be loaded into memory before it starts to
execute. If the program is sufficiently small and the computer's memory
sufficiently large, the whole program can sit in memory with none of
the problems or hassles of segments. (A small demonstration of demand
paging appears after my signature.)

I agree that it is beneficial to cluster objects and code that
reference each other into the same pages, to reduce page faulting on a
virtual memory machine. It is not that crucial, however, since we don't
have to worry about crossing segment boundaries, and the OS will figure
out which pages the program needs -- its working set. (A sketch of
per-class clustering also follows the signature.)

Segmented architectures are a thing of the past. When DEC set out to
build the successor to the PDP line, they started with a segmented
architecture and quickly decided it was a botch; they then came up with
the Virtual Address Extension, known today as the VAX. The only reason
Intel sticks with the idea is the amount of investment they have in it,
and all those little pee-cee's out there that need backward software
compatibility. Though that is an admirable goal, there comes a point
when you just have to cut your losses and go with the better idea.

v.
--
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Vick Khera, Gradual Student/Systems Guy    Department of Computer Science
ARPA: khera@cs.duke.edu                    Duke University
UUCP: ...!mcnc!duke!khera                  Durham, NC 27706  (919) 660-6528
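
P.S. To put a number on the demand-paging claim: the sketch below
reserves a big region of address space, touches only a few of its
pages, and counts minor page faults before and after. This is only a
sketch; the getrusage() fault counting is BSD-flavored, and the 4K
page size is an assumption, so adjust both for your system.

// a minimal sketch of demand paging at work: reserve a large region
// of address space, touch only a few pages, and watch the minor
// page-fault count.  getrusage() is BSD-derived; the 4K page size
// is an assumption.
#include <iostream>
#include <cstddef>
#include <sys/resource.h>

static long minor_faults()
{
    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);
    return ru.ru_minflt;            // faults serviced without disk I/O
}

int main()
{
    const std::size_t page   = 4096;      // assumed page size
    const std::size_t npages = 4096;      // 16MB of address space
    char* big = new char[npages * page];  // nothing is resident yet

    long before = minor_faults();
    for (std::size_t i = 0; i < npages; i += 256)
        big[i * page] = 1;                // touch 16 of the 4096 pages
    long after = minor_faults();

    std::cout << "minor faults while touching 16 pages: "
              << (after - before) << "\n";
    delete[] big;
    return 0;
}

On a paging Unix the fault delta comes out near 16, even though 16MB of
address space was allocated: the untouched pages are never brought in,
and nothing had to be reverse engineered to get that behavior.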
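
P.P.S. Since this is comp.lang.c++, here is roughly what object
clustering looks like in the language itself: give a class its own
operator new that carves instances out of page-sized chunks, so objects
allocated together land on the same pages. Node, Pool, and the 4K chunk
size are all illustrative assumptions, not anybody's library.

// a sketch of per-class clustering: Node gets its own operator new,
// which hands out instances from page-sized chunks, so nodes that
// are allocated together sit on the same pages.  Node, Pool, and
// the 4K chunk size are illustrative, not any particular library.
#include <cstddef>
#include <new>

class Pool {
    char*       chunk_;                      // current page-sized chunk
    std::size_t used_;                       // bytes handed out from it
    static const std::size_t kChunk = 4096;  // assumed page size
public:
    Pool() : chunk_(0), used_(kChunk) {}
    void* alloc(std::size_t n) {
        if (used_ + n > kChunk) {            // chunk full: grab a fresh page
            chunk_ = static_cast<char*>(::operator new(kChunk));
            used_ = 0;
        }
        void* p = chunk_ + used_;
        used_ += n;
        return p;
    }
};

struct Node {
    Node* next;
    int   key;
    static Pool pool;                        // one shared pool per class
    void* operator new(std::size_t n) { return pool.alloc(n); }
    void  operator delete(void*, std::size_t) {}  // chunks live on; a real
                                                  // pool frees them en masse
};

Pool Node::pool;

int main()
{
    Node* head = 0;
    for (int i = 0; i < 1000; ++i) {     // ~256 16-byte nodes per page
        Node* n = new Node;              // allocated from the pool
        n->next = head;
        n->key  = i;
        head = n;
    }
    // walking 'head' now touches about 4 pages rather than up to 1000
    return 0;
}

Traversing the 1000-node list now touches about 4 pages; with the
global operator new the nodes could be scattered across many more.
Either way the OS's working set mechanism, not the programmer, decides
what stays resident -- which is the whole point.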