Abstract
In natural language processing, sensitive information detection refers to the task of identifying sensitive words in given documents. Most existing detection methods are based on a sensitive-word tree, which is typically constructed from the common prefixes of the sensitive words in a given corpus. However, these traditional methods suffer from drawbacks such as poor generalization and low efficiency. To address these issues, this paper proposes a novel self-attention-based detection algorithm built on a graph convolutional network (GCN). The main contribution is twofold. First, we employ a weighted GCN to better encode word pairs from the given documents and corpus. Second, a simple yet effective attention mechanism is introduced to further integrate the interactions among candidate words and the corpus. Experimental results on the THUCNews benchmark dataset demonstrate promising detection performance compared with existing work.
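To make the two components concrete, the sketch below shows one possible reading of the abstract: a single weighted GCN layer with symmetric normalization over a word-pair adjacency matrix, followed by a simple attention scorer that lets candidate words attend over corpus representations. This is an illustrative assumption, not the authors' released implementation; all names (`WeightedGCNLayer`, `AttentionScorer`) and design details are placeholders.

```python
# Minimal sketch, assuming a weighted word-pair adjacency and dense node features.
import torch
import torch.nn as nn
import torch.nn.functional as F


class WeightedGCNLayer(nn.Module):
    """One GCN layer over a weighted word-pair adjacency matrix."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: (N, in_dim) word features; adj: (N, N) weighted adjacency
        adj = adj + torch.eye(adj.size(0), device=adj.device)      # add self-loops
        deg = adj.sum(dim=1).clamp(min=1e-12)
        d_inv_sqrt = deg.pow(-0.5)
        norm_adj = d_inv_sqrt.unsqueeze(1) * adj * d_inv_sqrt.unsqueeze(0)
        return F.relu(norm_adj @ self.linear(x))                    # D^-1/2 A D^-1/2 X W


class AttentionScorer(nn.Module):
    """Scores candidate words by attending over GCN-encoded corpus words."""

    def __init__(self, dim):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)

    def forward(self, cand, context):
        # cand: (C, d) candidate-word vectors; context: (N, d) corpus vectors
        scores = self.query(cand) @ self.key(context).t() / context.size(-1) ** 0.5
        attn = scores.softmax(dim=-1)                               # (C, N) attention weights
        fused = attn @ context                                      # context-aware candidates
        return torch.sigmoid((cand * fused).sum(-1))                # sensitivity score per word
```

Under this reading, the weighted adjacency could, for example, carry word co-occurrence statistics, and the final sigmoid score would be thresholded to flag sensitive words; the actual weighting scheme and classifier used in the paper may differ.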