Quantifying evidence in forensic authorship analysis

Tim Grant*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


The judicial interest in 'scientific' evidence has driven recent work to quantify results for forensic linguistic authorship analysis. Through a methodological discussion and a worked example, this paper examines the issues which complicate attempts to quantify results in such work. The solution suggested for some of these difficulties is a sampling and testing strategy which helps to identify potentially useful, valid and reliable markers of authorship. An important feature of the sampling strategy is that markers identified as generally valid and reliable are retested for use in specific authorship analysis cases. The suggested approach for drawing quantified conclusions combines discriminant function analysis with Bayesian likelihood measures. The worked example starts with twenty comparison texts for each of three potential authors and then uses a progressively smaller comparison corpus, reducing to fifteen, ten, five and finally three texts per author. This demonstrates how reducing the amount of data affects the way conclusions can be drawn. With larger numbers of reference texts, quantified and safe attributions are shown to be possible; as the number of reference texts falls, the analysis shows that the conclusion which should be reached is that no attribution can be made. At no point does the testing process result in a misattribution.
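The decision logic the abstract describes — attribute only when the evidence clearly favours one candidate author, otherwise make no attribution — can be sketched in miniature. The snippet below is an illustrative simplification, not Grant's actual method: it stands in for discriminant function analysis with a single Gaussian likelihood per author over one hypothetical marker score, and all scores, author labels and the likelihood-ratio threshold of 10 are invented for illustration.

```python
import math
from statistics import mean, stdev

def gaussian_pdf(x, mu, sigma):
    """Likelihood of observing score x under a Gaussian author model."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Hypothetical per-text scores on one authorship marker (e.g. the rate of
# some lexical feature) for three candidate authors' reference texts.
# All values are invented for illustration.
reference = {
    "A": [0.12, 0.14, 0.11, 0.13, 0.15],
    "B": [0.25, 0.22, 0.27, 0.24, 0.26],
    "C": [0.18, 0.19, 0.17, 0.20, 0.18],
}

def attribute(questioned_score, reference, lr_threshold=10.0):
    """Return (author, likelihood_ratio) only if the Bayesian likelihood
    ratio of the best candidate over the next-best clears the threshold;
    otherwise return (None, likelihood_ratio) — i.e. no attribution."""
    likelihoods = {
        author: gaussian_pdf(questioned_score, mean(scores), stdev(scores))
        for author, scores in reference.items()
    }
    ranked = sorted(likelihoods.items(), key=lambda kv: kv[1], reverse=True)
    (best, l1), (_, l2) = ranked[0], ranked[1]
    lr = l1 / l2
    return (best, lr) if lr >= lr_threshold else (None, lr)
```

A questioned text scoring 0.13 falls squarely in author A's range and is attributed; a score of 0.21, which sits between authors B and C, yields a likelihood ratio near 1 and therefore no attribution — mirroring the paper's point that weak evidence should produce a non-attribution rather than risk a misattribution.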

Original language: English
Pages (from-to): 1-25
Number of pages: 25
Journal: International Journal of Speech, Language and the Law
Issue number: 1
Publication status: Published - 15 Oct 2007


Keywords:
  • Authorship analysis
  • Bayes theorem
  • Discriminant analysis
  • Error
  • Forensic linguistics
  • Sampling

