Path: utzoo!utgpu!utstat!jarvis.csri.toronto.edu!mailrus!cs.utexas.edu!tut.cis.ohio-state.edu!pt.cs.cmu.edu!MATHOM.GANDALF.CS.CMU.EDU!lindsay
From: lindsay@MATHOM.GANDALF.CS.CMU.EDU (Donald Lindsay)
Newsgroups: comp.arch
Subject: Re: RISC multiprocessors
Message-ID: <6979@pt.cs.cmu.edu>
Date: 15 Nov 89 20:31:36 GMT
References: <13319@pur-ee.UUCP> <280004@hpdml93.HP.COM> <23963@cup.portal.com> <1989Nov15.040039.28570@Solbourne.COM>
Organization: Carnegie-Mellon University, CS/RI
Lines: 36

In article <1989Nov15.040039.28570@Solbourne.COM> stevec@solbourne.com (Steve Cox) writes:
>second-level write-back caches?  so (correct me if i am wrong), there
>is a first level cache that is not connected to the shared memory bus.
>how do these systems support cache coherency for data that is
>cached in the first level cache?  sounds pretty hairy to me.
>or am i missing something?

Yes, it's reasonably hairy.  A good introduction would be the paper by
Baer's group, which appeared in this year's Computer Architecture
Conference proceedings (i.e. the June 1989 SigARCH).  His scheme is not
the only possible one, but the other schemes have roughly similar
complexity.

As for the data in the first level cache: there are two answers.

One, make the first level use writethrough, so that the second level
always gets a copy.  This gives the "inclusion" property, whereby the
second level always contains a strict superset of the first level.  The
second level occasionally has to invalidate data which is in both
levels, and this means that it has to be able to reach in and nuke
something that is in the first level.

Two, make the first level use writeback, but inform the second level of
each write.  The second level creates a hole (if necessary), which the
first level can later write the data to.  This allows the second level
to do all the snoopy/coherence things, as before.

Another fun issue is the question of synonyms.  Some operating systems
(such as Mach) want nonunique inverse mappings: that is, one physical
page present in N virtual spaces, N > 1.  If the cache(s) use physical
addresses, no problem.  If the cache(s) are flushed on context switch,
no problem.  Otherwise, there is a nasty problem: the same data could
be in two places in the same cache!
-- 
Don		D.C.Lindsay 	Carnegie Mellon Computer Science
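
To make scheme one concrete, here is a toy C sketch: a write-through
first level behind an inclusive second level.  It is not any real
design (direct-mapped, one word per line, no bus or memory model, and
every name in it is made up).  The point is snoop_invalidate(): when
the second level drops a line because some other processor wrote it,
it has to reach into the first level and nuke the copy there too, or
the CPU keeps reading stale data.

    /* Toy two-level cache: write-through L1, inclusive L2.  Direct-mapped,
     * one word per line, no bus/memory/timing model.  Illustrative only. */
    #include <stdio.h>
    #include <stdint.h>
    #include <stdbool.h>

    #define L1_SETS  64
    #define L2_SETS 256

    typedef struct { bool valid; uint32_t tag; uint32_t data; } line_t;

    static line_t l1[L1_SETS], l2[L2_SETS];

    /* CPU store: write-through, so the second level always gets a copy
     * and therefore always holds a superset of the first (inclusion). */
    static void cpu_write(uint32_t addr, uint32_t data)
    {
        line_t *a = &l1[addr % L1_SETS], *b = &l2[addr % L2_SETS];
        a->valid = b->valid = true;
        a->tag   = b->tag   = addr;
        a->data  = b->data  = data;
        /* a real design would also send the write out on the bus */
    }

    /* Snoop: another processor wrote addr.  The second level invalidates
     * its copy and reaches into the first level to nuke the copy there. */
    static void snoop_invalidate(uint32_t addr)
    {
        line_t *b = &l2[addr % L2_SETS];
        if (b->valid && b->tag == addr) {
            line_t *a = &l1[addr % L1_SETS];
            b->valid = false;
            if (a->valid && a->tag == addr)
                a->valid = false;           /* back-invalidation */
        }
    }

    /* CPU load: returns true on a first-level hit. */
    static bool cpu_read(uint32_t addr, uint32_t *out)
    {
        line_t *a = &l1[addr % L1_SETS];
        if (a->valid && a->tag == addr) { *out = a->data; return true; }
        return false;                       /* would go to L2, then the bus */
    }

    int main(void)
    {
        uint32_t v;
        cpu_write(0x1000, 42);
        snoop_invalidate(0x1000);           /* remote write observed */
        printf("L1 hit after snoop? %s\n",
               cpu_read(0x1000, &v) ? "yes (stale!)" : "no");
        return 0;
    }

Scheme two (writeback first level that informs the second level of
each write) changes what cpu_write() has to do, but the snooping side
keeps the same shape.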
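
And a similarly toy sketch of the synonym problem, assuming a
virtually-indexed, virtually-tagged cache whose index bits reach above
the page offset (again, every name and number below is made up, and
the page table is hardwired).  Two virtual pages alias one physical
page, so the same word lands in two different cache sets, and a read
through the first alias keeps returning a stale copy: the same data in
two places in the same cache.  Physical addressing, or a flush on
context switch, makes the problem disappear.

    /* Toy virtually-indexed, virtually-tagged cache showing the synonym
     * problem.  8K one-word lines, 4K pages, so the index uses virtual
     * bits above the page offset.  Illustrative only. */
    #include <stdio.h>
    #include <stdint.h>
    #include <stdbool.h>

    #define SETS      8192u
    #define PAGE_SIZE 4096u

    typedef struct { bool valid; uint32_t vtag; uint32_t data; } line_t;

    static line_t   cache[SETS];
    static uint32_t memory[1u << 20];       /* word-indexed "physical" memory */

    /* Made-up page table: virtual pages 0x10000 and 0x13000 are synonyms
     * for physical page 0x5000; everything else maps one-to-one. */
    static uint32_t translate(uint32_t vaddr)
    {
        uint32_t vpage  = vaddr & ~(PAGE_SIZE - 1);
        uint32_t offset = vaddr &  (PAGE_SIZE - 1);
        uint32_t ppage  = (vpage == 0x10000 || vpage == 0x13000) ? 0x5000 : vpage;
        return ppage | offset;
    }

    static uint32_t set_of(uint32_t vaddr) { return (vaddr / 4) % SETS; }

    static void write_word(uint32_t vaddr, uint32_t data)
    {
        line_t *l = &cache[set_of(vaddr)];  /* indexed by VIRTUAL address */
        l->valid = true;  l->vtag = vaddr;  l->data = data;
        memory[translate(vaddr) / 4] = data;   /* write-through to memory */
    }

    static uint32_t read_word(uint32_t vaddr)
    {
        line_t *l = &cache[set_of(vaddr)];
        if (l->valid && l->vtag == vaddr)
            return l->data;                 /* hit, possibly a stale synonym */
        return memory[translate(vaddr) / 4];
    }

    int main(void)
    {
        write_word(0x10000, 1);             /* cached in set 0x000           */
        write_word(0x13000, 2);             /* same physical word, set 0xC00 */
        printf("via alias 1: %u\n", (unsigned) read_word(0x10000)); /* 1: stale */
        printf("via alias 2: %u\n", (unsigned) read_word(0x13000)); /* 2        */
        return 0;
    }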