Path: utzoo!utgpu!jarvis.csri.toronto.edu!rutgers!apple!gem.mps.ohio-state.edu!ginosko!uunet!mcsun!ukc!strath-cs!cs.glasgow.ac.uk!gilbert
From: gilbert@cs.glasgow.ac.uk (Gilbert Cockton)
Newsgroups: comp.cog-eng
Subject: Re: Visual Languages
Message-ID: <3404@oba.cs.glasgow.ac.uk>
Date: 14 Sep 89 09:31:06 GMT
References:
Reply-To: gilbert@cs.glasgow.ac.uk (Gilbert Cockton)
Distribution: comp
Organization: Comp Sci, Glasgow Univ, Scotland
Lines: 68

In article creubank@crls.sony.co.jp (Curtis Eubanks) writes:
>Is an expression of the form
>
>	if then else
>
>easy enough for a non-programmer to understand and use? Probably. Is
>it intuitive?

See the work from Sheffield (Green, Arblaster, Sime) on why IF, IF NOT,
IF NOT ... appears to be better (I'll sketch the contrast at the end of
this post). There is a book in the pipeline from (mostly European)
Cognitive Ergonomists on programming - keep an eye on Cambridge
University Press's list. In the meantime, chase up work by Thomas Green
(Cambridge MRC APU, ex-Sheffield), approach the Yale stuff with care
(Soloway etc., too keen on modelling programmers with naive AI
techniques), and look in recent HCI and EACE (European Association for
Cognitive Ergonomics) conferences.

>Ideally, a user who
>knows nothing but some very basic interaction skill (moving the mouse,
>clicking, and perhaps typing) can walk up to a visual programming
>system and start programming immediately. Realistically, this might
>be impossible.

A safe assumption, unless you can build a CAI module on programming to
graduate level into your visual language :-)

>My intuition is that any useful visual language must be restricted to
>a specific domain to be able to be used by a complete novice.

Complete novice "programmer", I take it. I agree (we are working on
such a domain at the moment, telephony actually, an example used by
another respondent).

>	[1] What does the user know?

"Ask" your users. This is some cocktail of knowledge elicitation and
task analysis.

>	[2] What kind of operations does he wish to perform and on
>	    what data?

That will come partly out of the task analysis and knowledge
elicitation, but one of our jobs as computer systems designers is to
invent new tasks. Users will usually not "want" these until they are
aware of them or, more realistically, when they can try them out on
some realistic prototype. A priori design seems impossible here, but
that does not stop it being heavily influenced by some theories, even
if it is not driven by them.

>	[3] How would he naturally represent these operations/data
>	    visually?

Forget nature, this is society. Nature stops at material boundaries.
Semiotics are cultural phenomena, not material phenomena (all
materialists and other reductionists, follow-ups to /dev/null please,
this is a nice human group :-)).

Common/well-known/acceptable representations will again emerge in
interactions with possible users, largely through task and domain
analysis. Telephony, for example, is crammed with existing graphical
representations which can be carried straight over to the VPL.
However, there are also many abstract constructs with names which are
very poor metaphors, and thus suggest the wrong visual analogies. Here
you have to work out from the structure of a domain entity to its
possible representations, drawing on as much rich contextual
serendipity as possible. You can borrow visual cues from other parts
of the environment. Chernoff faces have already been mentioned. Bodily
postures and gesture representations may be other good representations
for abstract operations.
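As promised, a rough sketch of the IF versus IF NOT contrast as I
understand it. This is purely illustrative - the Sheffield studies did
not use this notation, and the part-classifying domain below is
invented; treat it as my paraphrase, not their materials:

# Illustration only: a guess at the contrast the Sheffield work
# examined, not code from those studies.

def classify_nested(is_metal, is_heavy):
    # Conventional nested if-then-else: to know which branch an action
    # belongs to, the reader must track position inside the nest.
    if is_metal:
        if is_heavy:
            return "forge"
        else:
            return "stamp"
    else:
        return "mould"

def classify_flat(is_metal, is_heavy):
    # "IF, IF NOT, IF NOT" style: every action carries its own fully
    # spelled-out guard, so nothing is inferred from nesting.
    if is_metal and is_heavy:
        return "forge"
    if is_metal and not is_heavy:
        return "stamp"
    if not is_metal:
        return "mould"

The claim, roughly, is that the flat, fully-guarded form is easier for
non-programmers to read and modify, because no action's meaning depends
on where it sits inside a nest of ELSEs.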
-- Gilbert Cockton, Department of Computing Science, The University, Glasgow gilbert@uk.ac.glasgow.cs !ukc!glasgow!gilbert