Path: utzoo!utgpu!news-server.csri.toronto.edu!cs.utexas.edu!usc!zaphod.mps.ohio-state.edu!caen!uflorida!mlb.semi.harris.com!trantor.harris-atd.com!x102c!wdavis
From: wdavis@x102c.harris-atd.com (davis william 26373)
Newsgroups: comp.ai
Subject: Re: Verification of KB Systems
Message-ID: <5195@trantor.harris-atd.com>
Date: 7 Jan 91 14:21:21 GMT
References: <1991Jan4.180056.20917@evax.arl.utexas.edu>
Sender: news@trantor.harris-atd.com
Reply-To: wdavis@x102c.ess.harris.com (davis william 26373)
Organization: Harris Corporation GSS, Melbourne, Florida
Lines: 37

In article <1991Jan4.180056.20917@evax.arl.utexas.edu> mullen@evax.arl.utexas.edu (Dan Mullen) writes:
>In the proceedings from WESTEX-87 Green and Keyer say that nobody does
>V&V of expert systems because nobody knows how to do it. And nobody knows
>how to do it because nobody ever asks for it to be done.
 ... (some lines deleted)
>... Or would anyone like to offer a proof that verification
>of KB systems is impossible?

This is not exactly a proof, but think about what you are asking for and
it seems impossible.

For a non-expert system, you can specify requirements rigorously. They may
not be the correct requirements, but they can be clear. A system claiming
to implement those requirements can be validated, to some extent, against
those requirements. Maybe something useful is produced in the process,
maybe not. Usually new requirements are discovered during or after
implementation (I have not seen a project yet that did not involve some
derived requirements during the project or a market-driven change).

How would we specify the precise requirements for an expert system? Start
with the fuzzy "It should be an expert" or "It should be as expert as
possible within the budget and time constraints" or even "It should know
what these experts know". Now take two experts and put them in a room and
see if they agree on everything in their area of expertise. If not, then
which knowledge is correct for the expert system?

It may be possible to check all the rules against experts to see if they
agree, and it may be possible to check the inference engine with
validation techniques, but is this sufficient to validate an expert
system? What about knowledge that is learned as the system evolves?

I don't know how to completely validate a person's knowledge of a subject,
so I don't feel confident that a computer's knowledge of a subject can be
any better validated. Even more problematic is that the computer may have
missing knowledge - how is that to be validated (that none is missing)?
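
To make the rule-checking point concrete, here is a minimal sketch (in
Python, purely illustrative; the rule representation and the sample
"expert" data are my own assumptions, not taken from any real system) of
comparing rule sets elicited from two experts. It can flag where the
experts disagree and where one covers a case the other does not, but note
what it cannot do: decide which expert is right, or detect knowledge that
neither expert supplied.

    # Illustrative sketch only: a toy comparison of rule sets elicited
    # from two experts.  The representation (condition tuple -> conclusion)
    # is an assumption made for this example.

    def find_disagreements(rules_a, rules_b):
        """Conditions on which the two experts reach different conclusions."""
        return [
            (cond, rules_a[cond], rules_b[cond])
            for cond in rules_a
            if cond in rules_b and rules_a[cond] != rules_b[cond]
        ]

    def find_coverage_gaps(rules_a, rules_b):
        """Conditions one expert covers that the other does not."""
        return set(rules_a) - set(rules_b), set(rules_b) - set(rules_a)

    if __name__ == "__main__":
        # Toy knowledge bases: condition -> recommended conclusion.
        expert_1 = {
            ("fever", "rash"): "suspect_measles",
            ("fever", "stiff_neck"): "suspect_meningitis",
        }
        expert_2 = {
            ("fever", "rash"): "suspect_scarlet_fever",  # conflicts with expert 1
            ("cough", "wheeze"): "suspect_asthma",       # expert 1 has no rule here
        }

        print("Conflicts:", find_disagreements(expert_1, expert_2))
        print("Coverage gaps:", find_coverage_gaps(expert_1, expert_2))

Even if every such check passes, it only shows the rule base matches what
the experts said; it says nothing about whether the experts themselves are
right or complete, which is the real problem above.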