Xref: utzoo news.groups:12384 news.misc:3623
Path: utzoo!attcan!utgpu!jarvis.csri.toronto.edu!mailrus!ncar!woods
From: woods@ncar.ucar.edu (Greg Woods)
Newsgroups: news.groups,news.misc
Subject: Re: Report Card on the success of the group creation guidelines
Message-ID: <4402@ncar.ucar.edu>
Date: 20 Sep 89 16:44:29 GMT
References: <17735@looking.on.ca> <1989Sep20.060201.4473@rpi.edu> <45814@bbn.COM>
Reply-To: woods@handies.UCAR.EDU (Greg Woods)
Organization: Scientific Computing Division/NCAR, Boulder CO
Lines: 42

In article <45814@bbn.COM> cosell@BBN.COM (Bernie Cosell) writes:
>The hypothesis under consideration is whether the
>guidelines we follow in fact have any merit: if they act as a filter to
>improve the quality, popularity, etc, of the groups that succeed in
>running their gauntlet.

   Just for the record, the purpose of the guidelines is not to act as a
filter for newsgroup quality. It is to reduce flame wars over group
creations. Brad's statistics, which judge the guidelines by the readership
of created groups, are therefore a red herring. Brad also conveniently
neglects to mention that several of the "successful" groups he cites
(sci.physics.fusion, comp.sys.next) had their creations accompanied by
massive flame wars. Anyone who wants to argue that having the guidelines
is a bad idea must also accept the flame wars (and possibly rmgroup wars)
that would result without them.
   Secondly, I do not trust the readers-per-machine statistics that Brad
is using. The reason is NNTP, with which arbitron does not work well. Our
site is a good example. We probably have about 100 readers of news here,
but if I run arbitron on the "ncar" machine, where all articles are posted
from, it will show about 3. That is because the vast majority of our
newsreaders read news remotely via rrn and NNTP.
Aside from the fact that the lack of history and/or active files on those
remote machines makes running arbitron on them impossible, even if the
script were modified to fetch the active file remotely, getting statistics
for our site would mean running arbitron on over 20 machines here. Most of
those machines are not under my control, so even if I *wanted* to take the
time to install and run arbitron on 20+ machines, I lack the necessary
access to do so. Yes, I could (maybe) cajole the admins of those machines
into doing it, but now you're talking a LOT of work! It's simply too much
of a hassle.
   Why is all of this important? Because I suspect that most medium to
large sites now distribute news to their users this way, and surely I am
not the only one who finds the prospect of installing and running arbitron
on this many machines, just to get accurate statistics, too time-consuming.
This means that the number of readers per machine/site (which is the
statistic Brad is using) may well be underestimated. Either that, or these
sites are not being represented adequately. I also think that the number
of machines on the whole net, and therefore the total number of readers,
is OVERestimated, but I don't have any hard evidence for that.

--Greg