Abstract
The judicial interest in 'scientific' evidence has driven recent work to quantify results in forensic linguistic authorship analysis. Through a methodological discussion and a worked example, this paper examines the issues that complicate attempts to quantify results in such work. As a partial solution to these difficulties, a sampling and testing strategy is proposed which helps to identify potentially useful, valid and reliable markers of authorship. An important feature of the sampling strategy is that markers identified as generally valid and reliable are retested for use in specific authorship analysis cases. The suggested approach to drawing quantified conclusions combines discriminant function analysis with Bayesian likelihood measures. The worked example starts with twenty comparison texts for each of three potential authors and then uses a progressively smaller comparison corpus, reducing to fifteen, ten, five and finally three texts per author. This demonstrates how reducing the amount of data affects the conclusions that can be drawn. With larger numbers of reference texts, quantified and safe attributions are shown to be possible; as the number of reference texts falls, the analysis shows that the correct conclusion is that no attribution can be made. At no point does the testing process result in a misattribution.
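The combination of author comparison statistics and Bayesian likelihood measures described in the abstract can be illustrated with a minimal sketch. This is not the paper's actual method: the feature (mean word length per text), the feature values, the Gaussian modelling assumption and the questioned value are all hypothetical, and only two candidate authors with one feature are compared.

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Density of a normal distribution at x."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def fit(samples):
    """Sample mean and unbiased standard deviation of a feature across comparison texts."""
    n = len(samples)
    mu = sum(samples) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in samples) / (n - 1))
    return mu, sigma

# Hypothetical feature values (mean word length) for comparison texts by two candidate authors.
author_a = [4.2, 4.5, 4.3, 4.4, 4.6]
author_b = [5.1, 5.3, 5.0, 5.2, 5.4]

def likelihood_ratio(x, params_a, params_b):
    """Likelihood ratio: support for author A over author B given the observed feature value x."""
    return gaussian_pdf(x, *params_a) / gaussian_pdf(x, *params_b)

questioned = 4.35  # feature value measured on the questioned document (hypothetical)
lr = likelihood_ratio(questioned, fit(author_a), fit(author_b))
print(f"LR = {lr:.3g}")  # LR >> 1 supports author A; LR << 1 supports author B
```

With fewer comparison texts per author, the estimated means and standard deviations become unstable and the resulting ratios unreliable, which is one intuition behind the abstract's finding that small reference corpora should lead to no attribution rather than a weak one.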
Original language | English |
---|---|
Pages (from-to) | 1-25 |
Number of pages | 25 |
Journal | International Journal of Speech, Language and the Law |
Volume | 14 |
Issue number | 1 |
DOIs | |
Publication status | Published - 15 Oct 2007 |
Keywords
- Authorship analysis
- Bayes theorem
- Discriminant analysis
- Error
- Forensic linguistics
- Sampling