Authorship Attribution Using a Neural Network Language Model

February 17, 2016 · Entered Twilight · 🏛 AAAI Conference on Artificial Intelligence

🌅 TWILIGHT: Old Age
Predates the code-sharing era – a pioneer of its time

"Last commit was 10.0 years ago (≥5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: README.md, aggregate.m, bprop.m, cell2csv.m, confusion_array.m, dataprep.m, display_nearest_words.m, eval_data.m, extract_sentences.m, fprop.m, gen_data.m, getfile.m, idx2word.m, load_data.m, main_classify.m, main_comp_ppl.m, main_example.m, main_gen_data.m, main_gen_lm.m, main_lm_opt.m, main_porterStemmer.m, main_profile.m, main_test.m, nbest_accuracy.m, porterStemmer.m, predict_target_word.m, prep_ngram.m, process_options.m, raw, read_confusion.m, read_nbest.m, sent2idx.m, seq_ppl.m, seq_probability.m, stem, test_accuracy.m, train.m, vocab_indexing.m, word2idx.m, word_distance.m, write_data.m

Authors: Zhenhao Ge, Yufang Sun, Mark J. T. Smith
arXiv ID: 1602.05292
Category: cs.CL: Computation & Language
Cross-listed: cs.AI
Citations: 39
Venue: AAAI Conference on Artificial Intelligence
Repository: https://github.com/zge/authorship-attribution ⭐ 17
Last Checked: 2 months ago
Abstract
In practice, training language models for individual authors is often expensive because of limited data resources. In such cases, Neural Network Language Models (NNLMs) generally outperform the traditional non-parametric N-gram models. Here we investigate the performance of a feed-forward NNLM on an authorship attribution problem, with moderate author set size and relatively limited data. We also consider how the text topics impact performance. Compared with a well-constructed N-gram baseline method with Kneser-Ney smoothing, the proposed method achieves nearly 2.5% reduction in perplexity and increases author classification accuracy by 3.43% on average, given as few as 5 test sentences. The performance is very competitive with the state of the art in terms of accuracy and demand on test data. The source code, preprocessed datasets, a detailed description of the methodology and results are available at https://github.com/zge/authorship-attribution.
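The attribution scheme the abstract describes (score the test sentences under each author's language model and pick the author whose model yields the lowest perplexity) can be sketched as follows. This is a hypothetical mini-example using Laplace-smoothed unigram models as a stand-in for the paper's feed-forward NNLM; the author names and toy corpora are invented for illustration.

```python
import math
from collections import Counter

def train_unigram(corpus, alpha=1.0):
    """Laplace-smoothed unigram model: a toy stand-in for a per-author LM."""
    counts = Counter(w for sent in corpus for w in sent.split())
    total = sum(counts.values())
    vocab = len(counts) + 1  # +1 slot for unseen words
    return lambda w: (counts[w] + alpha) / (total + alpha * vocab)

def perplexity(model, sentences):
    """Per-word perplexity of the sentences under the model."""
    words = [w for s in sentences for w in s.split()]
    log_prob = sum(math.log2(model(w)) for w in words)
    return 2 ** (-log_prob / len(words))

def attribute(models, test_sentences):
    """Return the author whose model gives the lowest perplexity."""
    return min(models, key=lambda a: perplexity(models[a], test_sentences))

# Invented toy training corpora, one per candidate author
corpora = {
    "austen": ["she walked to the garden", "the garden was quiet"],
    "doyle":  ["the detective examined the clue", "a clue lay on the floor"],
}
models = {author: train_unigram(corpus) for author, corpus in corpora.items()}

print(attribute(models, ["the detective found a clue"]))  # prints "doyle"
```

The paper replaces the unigram scorer with a feed-forward NNLM (and compares against a Kneser-Ney-smoothed N-gram baseline), but the decision rule is the same: lowest perplexity wins, which is why accuracy can be reported as a function of the number of test sentences.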

📜 Similar Papers

In the same crypt – Computation & Language

🌅 🌅 Old Age

Attention Is All You Need

Ashish Vaswani, Noam Shazeer, ... (+6 more)

cs.CL ๐Ÿ› NeurIPS ๐Ÿ“š 166.0K cites 8 years ago