Hard to find (i.e., justly obscure) papers of mine.

These papers fall into two groups. The first group consists of papers in the "Volumes in honor of" literature that the Bayesian cabal put out regularly back when all the statisticians who cared about Bayes could fit into a school bus. For the volume in honor of George Barnard, which I co-edited, Seymour Geisser solicited a paper from George Box, and Box turned us down on the astute grounds that (I'm paraphrasing) nobody read those volumes. The second group consists of propaganda against simulation models that I wrote or co-wrote when I was at RAND.

Papers from "Volumes in honor of"

Each of these .pdfs contains the information you need for a citation.
  • "Can/May Bayesians do pure tests of significance", in honor of George Barnard, 1990.
  • "Who knows what alternative lurks in the hearts of significance tests", Valencia 4, 1992.
  • "The effect of partial-ordering utilities on Bayesian design of sequential experiments", in honor of Arnold Zellner, 1996.
  • "Statistical practice as argumentation: A sketch of a theory of applied statistics", in honor of Seymour Geisser, 1996.

Propaganda about simulation models

I wrote these papers after seeing how combat simulation models were used. To put it bluntly, the use of combat simulation models is as intellectually bankrupt -- not to say corrupt -- as any area I've ever seen. My first reaction to this cesspool was "You can't do that! Your entire argument is 'the model says X' when you know perfectly well this model has no empirical basis at all, it's just some stuff a programmer wrote down". Those of my RAND colleagues who were invested in this activity -- who were nowhere near stupid, whatever else you might think of them -- had a variety of replies, for example (I'm paraphrasing) "We're not claiming this represents reality, this is just a book-keeping exercise" or "We're just using this model to generate insights".

This all took place while the Department of Defense was trying to impose some form of quality control over its simulation modeling efforts under the banner of "Verification, Validation, and Accreditation", so the first two papers here were hung on the "Validation" hook. In the first paper below (which appeared in Operations Research thanks to Hugh Miser), I packaged up all the supposed uses of bad models (i.e., excuses) that I'd heard and argued that for each such use, the proper way to validate the model for that use did not involve comparing the model to reality. (At the time, "comparing the model to reality" was the utterly useless bumper-sticker definition of model validation, which nobody could ever get past.) People in the field easily detected the ferocious condemnation behind this rhetorical ploy, but people outside the field missed it completely: When I sent this first paper to David Freedman, his reaction was "Next you're going to tell me that professional wrestling is for real."

For the second paper I had adult supervision in the person of Jim Dewar, and the argument became more explicit. We argued that some models could be validated in a scientific sense and others (e.g., all combat models) could not. We gave criteria to distinguish the two cases, then gave an updated catalog of uses for unvalidatable (previously "bad") models and the proper way to validate each one, none of which involved comparing the model to reality. This was published as a RAND report. That got me involved with a group loosely led by Steve Bankes, which was putting a more constructive spin on a similar critique under the name "Exploratory Modeling"; the third paper below was my only contribution to this effort (it came out after I'd left RAND), and it actually did get some funding from some RAND sponsors.
Each of these .pdfs contains the information you need for a citation.
  • "Six (or so) things you can do with a bad model", originally published in Operations Research in 1991 and reprinted here as a RAND Note. RAND employed people to help us write; I worked with one English PhD who was ordinarily very helpful but his suggested revision of this title was "Ways of Dealing with Bad Models". (It's not an accident that my former employer is sometimes called "the BLAND Corp.") This paper contains the best joke I've ever gotten into print, the Nephew Mike device (starting at the bottom of page 9's right column). At one point, all I had to do was say "Nephew Mike" at a seminar and a certain former Deputy Assistant Secretary of Defense would blow a gasket. Jim Dewar and I recycled this joke in the next paper (p. 27).
  • "Is it You or Your Model Talking? A Framework for Model Validation", by JS Hodges and JA Dewar, RAND R-4114-AF/A/OSD, 1992. Credit where credit's due: RAND's management ponied up discretionary money to have this paper internally peer-reviewed, edited, and published in its premier series even though it stomps on the toes of some people in RAND's sponsoring agencies. In the early 1990s at least (I left RAND in 1993), RAND's management still took intellectual concerns seriously enough to be hospitable to malcontents like me.
  • "Credible Uses of the Distributed Interactive Simulation (DIS) System, by JA Dewar, SC Bankes, JS Hodges, T Lucas, DK Saunders-Newton, and P Vye, RAND MR-607-A, 1996. The DIS was a gargantuan combat simulation, distributed all over the US, involving uniformed people at all ranks from enlisted men sitting in tank simulators to generals commanding simulated operations. This paper was part of an effort to figure out how to get something intellectually defensible out of all that.