Identifying User Profile by Incorporating Self-Attention Mechanism based on CSDN Data Set

With the popularity of social media, there has been increasing interest in user profiling and its applications. This paper presents our system, named UIR-SIST, for the User Profiling Technology Evaluation Campaign in SMP CUP 2017. UIR-SIST aims to complete three tasks: keywords extraction from blogs, user interests labeling, and user growth value prediction. To this end, we first extract keywords from a user's blog, including the blog itself, blogs on the same topic and other blogs published by the same user. Then a unified neural network model is constructed based on a convolutional neural network (CNN) for user interests tagging. Finally, we adopt a stacking model for predicting user growth value. We eventually achieved sixth place, with evaluation scores of 0.563, 0.378 and 0.751 on the three tasks, respectively.


INTRODUCTION
Social media have recently become an important platform that enables users to communicate and spread information. User-generated content (UGC) has been used for a wide range of applications, including user profiling. The Chinese Software Developer Network (CSDN) is one of the biggest platforms for software developers in China to share technical information and engineering experience. Analyzing UGC on CSDN can uncover users' interests in the software development process, such as their past interests and current focus, even if their user profiles are incomplete or missing. Apart from UGC, user behavior data also contain useful information for user profiling, such as "following," "replying," and "sending private messages," from which a friendship network can be constructed to indicate user gender [1,2,3], age [4], political polarity [5,6] or profession [7].
In SMP CUP 2017 [8], the competition is structured around three tasks based on CSDN blogs: (1) keywords extraction from blogs, (2) user interests labeling and (3) user growth value prediction. Our team from the School of Information Science and Technology, University of International Relations participated in all three tasks of the User Profiling Technology Evaluation Campaign. This paper describes the framework of our system, UIR-SIST, for the competition. We first extract keywords from a user's blog, including the blog itself, blogs on the same topic, and other blogs published by the same user. Then a unified neural network model with a self-attention mechanism is constructed for Task 2. The model is based on multi-scale convolutional neural networks, with the aim of capturing both local and global information for user profiling. Finally, we adopt a stacking model for predicting user growth value. According to SMP CUP 2017's metrics, our model achieved scores of 0.563, 0.378 and 0.751 on the three tasks, respectively. This paper is organized as follows. Section 2 introduces the User Profiling Technology Evaluation Campaign in detail. Section 3 describes the framework of our system. We present the evaluation results in Section 4. Finally, Section 5 concludes the paper.

Data Set
The data set used in SMP CUP 2017 is provided by CSDN, one of the largest information technology communities in China. The CSDN data set consists of all user-generated content and behavior data from 157,427 users during 2015, which can be further divided into three parts: 1) 1,000,000 user blogs, with blog ID, blog title and the corresponding content; 2) six types of user behavior data, including posting, browsing, commenting, voting up, voting down and adding favorites, with the corresponding date and time information; 3) relationships between users, i.e., records of following and sending private messages.
More details about the size and type of the CSDN data set are shown in Table 1, and Table 2 illustrates an example from the given data set.

Task 3: To predict each user's growth value for the next six months according to his/her behavior over the past year, including texts, relationships and interactions with other users. The growth value is scaled into [0, 1], where 0 represents user drop-out.

Metrics
To assess system effectiveness on the above-mentioned tasks, the following evaluation metrics are designed for each individual task. Score 1 is defined as the overlapping ratio between the extracted keywords and the standard answers, which can be computed by Equation (1), where N is the size of the validation set or the test set, K i is the set of keywords extracted from document i, and K i * is the set of standard keywords of document i. Note that |K i | = 3 and |K i * | = 5 by definition.
Score 2 denotes the overlapping ratio between the model's tags and the answers, which can be expressed by Equation (2), where T i is the automatically generated tag set of user i, and T i * is the set of standard tags of user i. It is also defined that |T i | = 3 and |T i * | = 3. Score 3 is calculated from the relative error between the predicted growth value and the real growth value of users, which can be expressed by Equation (3), where v i is the predicted growth value of user i, and v i * is the real growth value of user i.
The overall score can be computed by Equation (4):
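The equation images did not survive into this text. A plausible reconstruction from the surrounding definitions is given below; the exact normalization in Score 3 and the equal weighting of the overall score are assumptions, not statements from the original:

```latex
\mathrm{Score}_1 = \frac{1}{N}\sum_{i=1}^{N}\frac{|K_i \cap K_i^{*}|}{|K_i^{*}|},
\qquad
\mathrm{Score}_2 = \frac{1}{N}\sum_{i=1}^{N}\frac{|T_i \cap T_i^{*}|}{|T_i^{*}|},
```
```latex
\mathrm{Score}_3 = \frac{1}{N}\sum_{i=1}^{N}\left(1 - \frac{|v_i - v_i^{*}|}{\max(v_i,\, v_i^{*})}\right),
\qquad
\mathrm{Score} = \frac{\mathrm{Score}_1 + \mathrm{Score}_2 + \mathrm{Score}_3}{3}.
```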

SYSTEM OVERVIEW
The overall architecture of UIR-SIST is described in Figure 1. The UIR-SIST system comprises four modules: 1) Preprocessing module: reads all blogs in the training set and test set, and performs word segmentation, part-of-speech (POS) tagging, named entity recognition and semantic role labeling; 2) Keyword extraction module: extracts three keywords to represent the main idea of a blog, generating the candidate keywords set from three aspects, namely the blog content, other blogs published by the same user, and blogs on the same topic, as shown in the green part; 3) User interests tagging module: constructs a neural network combining user content embedding with keyword and user tag embedding for user interests tagging, as shown in the red part; 4) User growth value prediction module: incorporates users' interaction information and behavior features into a supervised learning model for growth value prediction, as shown in the blue part.

Keywords Extraction
The objective of Task 1 is to extract three keywords from each blog that represent the main idea of the blog. In our opinion, the main idea can be extracted from three aspects: the blog itself, other blogs published by the same user, and blogs on the same topic. Based on this assumption, we adopt three different models to capture each aspect and generate a candidate keywords set, namely tf-idf, TextRank and LDA, which have proved very effective in related tasks. Then three keywords are extracted from the candidate set using different rules.
We first adopt the classic tf-idf term weighting scheme to reflect the content of the blog itself. Then we rank the keywords based on the tf-idf score, and select the top 100 keywords to form the candidate keyword set.
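As an illustration, the tf-idf ranking step can be sketched as follows. The function name and the token-list input format are our own; the real system presumably operates on segmented Chinese tokens:

```python
import math
from collections import Counter

def tfidf_top_keywords(doc_tokens, corpus_tokens, top_n=100):
    """Rank the words of one blog by tf-idf against the whole corpus
    and return the top_n candidates."""
    # Document frequency: in how many blogs does each word appear?
    df = Counter()
    for tokens in corpus_tokens:
        df.update(set(tokens))
    n_docs = len(corpus_tokens)
    # Term frequency within the target blog.
    tf = Counter(doc_tokens)
    scores = {
        w: (tf[w] / len(doc_tokens)) * math.log(n_docs / (1 + df[w]))
        for w in tf
    }
    return [w for w, _ in sorted(scores.items(), key=lambda x: -x[1])[:top_n]]
```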

Regarding the blogs on the same topic, we adopt the TextRank approach [9] to cluster these blogs together. Meanwhile, all keywords are weighted during this process. We finally select the top 300 keywords.
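TextRank scores words by running PageRank over a word co-occurrence graph. A compact sketch is shown below; the window size, damping factor and iteration count are conventional defaults, not values stated in the paper:

```python
from collections import defaultdict

def textrank_keywords(sentences, top_n=300, window=2, d=0.85, iters=30):
    """Score words by PageRank over an undirected co-occurrence graph
    (TextRank) and return the top_n words."""
    # Build edges between words that co-occur within the window.
    graph = defaultdict(set)
    for tokens in sentences:
        for i, w in enumerate(tokens):
            for v in tokens[max(0, i - window):i]:
                if v != w:
                    graph[w].add(v)
                    graph[v].add(w)
    # Iterative PageRank with damping factor d.
    score = {w: 1.0 for w in graph}
    for _ in range(iters):
        score = {
            w: (1 - d) + d * sum(score[v] / len(graph[v]) for v in graph[w])
            for w in graph
        }
    return [w for w, _ in sorted(score.items(), key=lambda x: -x[1])[:top_n]]
```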
Moreover, we utilize topic information to extract keywords. Since 42 categories of tags are given in Task 2, we assume that these 42 topics can be extracted from all the blogs. Therefore, we use the Latent Dirichlet Allocation (LDA) model [10] to extract the top 100 keywords for each category from the 1,000,000 blogs, and thus obtain the cross-category distribution information of these 4,200 subject keywords.
In summary, we consider three aspects to reflect the blog content and obtain three independent candidate keywords sets, extracted by the tf-idf model, the TextRank model and the LDA model. After that, we keep only their intersection. In our training set for Task 1, about 5,000 keywords are provided, collected after extraction and deduplication.
A drawback of the classic tf-idf model is that it simply presupposes that the rarer a word is in the corpus, the more important it is and the greater its contribution to the main idea of the text. However, for a group of articles that mainly use the same keywords and describe similar concepts, the resulting scores contain many errors. This is also why we use tf-idf on a single short blog, while we use the TextRank model on the long collection of blogs published by the same user.
In addition, in order to enhance its cross-topic analysis ability, we borrow an idea from the 2016 Big Data & Computing Intelligence Contest sponsored by the China Computer Federation (CCF), improve on the traditional tf-idf calculation, and obtain the result S-TFIDF(w) by Equation (5), where C w is the frequency of word w appearing across the 42 categories.
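Since Equation (5) itself is not recoverable from the text, the sketch below only illustrates the general idea of penalizing words that occur in many of the 42 categories; the IDF-style form is an assumption:

```python
import math

NUM_CATEGORIES = 42  # number of interest tags in Task 2

def s_tfidf(tfidf_score, category_freq):
    """Down-weight a word's tf-idf score when it appears in many of the
    42 categories. The exact form of Equation (5) is not recoverable
    from the paper; this IDF-over-categories penalty is an assumption."""
    return tfidf_score * math.log(NUM_CATEGORIES / (1 + category_freq))
```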

User Interests Tagging
The objective of this task is to tag a user's interests with three labels out of the 42 given ones. We model this task with neural networks; the model structure is shown in Figure 2. Each blog is represented by a blog embedding [11] through convolution and max-pooling layers. We then obtain a user's content embedding as a weighted sum of all of his or her blog embeddings, where the weight of each blog embedding is computed by a self-attention mechanism. The content embedding and the keyword embedding are concatenated as the user embedding, which is finally fed to the output layer. In our system, a convolutional neural network (CNN) model is constructed for blog representation instead of a recurrent neural network (RNN), since it captures more global information for indicating user interests and is also more time-efficient. Multi-scale convolutional neural networks [12] have achieved outstanding results in computer vision [13], and TextCNNs, which arrange word embeddings vertically, have also shown high effectiveness on natural language processing (NLP) tasks [14]. In our CNN model, we treat a blog as a sequence of words x = [x 1 , x 2 , … , x n ], where each word is represented by its embedding vector, and produce a feature matrix S of the blog. The narrow convolution layer attached after the matrix is based on a kernel W ∈ R kd of width k, a nonlinear function f and a bias variable b, as described by Equation (6), where x i:j refers to the concatenation of the word vectors from position i to position j. In this task, we use several kernel sizes to obtain multiple local contextual feature maps in the convolution layer, and then apply max-over-time pooling [15] to extract the most important features.
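The narrow convolution of Equation (6) followed by max-over-time pooling can be sketched in NumPy as follows; the tanh nonlinearity and the single-kernel setting are illustrative assumptions:

```python
import numpy as np

def conv_max_pool(X, W, b, f=np.tanh):
    """Narrow convolution over a blog followed by max-over-time pooling.

    X: (n, d) word-embedding matrix of one blog
    W: (k, d) kernel of width k; b: scalar bias
    Returns a single pooled feature for this kernel."""
    n, _ = X.shape
    k = W.shape[0]
    # s_i = f(W . x_{i:i+k-1} + b) for each valid window position i.
    feats = [f(np.sum(W * X[i:i + k]) + b) for i in range(n - k + 1)]
    # Max-over-time pooling keeps the strongest activation.
    return max(feats)
```

In the full model, several kernel widths are applied and their pooled features concatenated into the blog embedding.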

The output is a low-dimensional, dense representation of each single blog. After that, each user's relevant blogs are computable. We simply average the blog vectors to obtain the content embedding c(u) for an individual user, where T is the total number of the user's related blogs.
However, different sources of blogs imply different extents of a user's interest in different topics. For example, a blog posted by a user may be an article written by himself, reposted from another user, or shared from another platform. Naturally, we pay attention to these blogs in varying degrees when inferring the user's interests. Thus, a self-attention mechanism is introduced, which automatically assigns a different weight to each of the user's blogs after training. The user context representation is given by a weighted sum of all blog vectors, where a i is the weight of the i-th blog, s i is the one-hot source representation vector of the blog, v ∈ R n' , W ∈ R n' × m , U ∈ R n' × n , s i ∈ R m , h i ∈ R n , and m is the number of source platforms.
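A minimal NumPy sketch of this source-aware attention is given below, assuming a standard additive scoring function e_i = v · tanh(W s_i + U h_i), which is consistent with the stated dimensions but not spelled out in the paper:

```python
import numpy as np

def attentive_content_embedding(H, S, W, U, v):
    """Weighted sum of blog vectors with source-aware self-attention.

    H: (T, n) blog embeddings h_i; S: (T, m) one-hot source vectors s_i.
    Scores e_i = v . tanh(W s_i + U h_i) are softmax-normalized into
    weights a_i (this additive scoring form is an assumption)."""
    e = np.array([v @ np.tanh(W @ s + U @ h) for s, h in zip(S, H)])
    a = np.exp(e - e.max())          # numerically stable softmax
    a /= a.sum()
    return a @ H                     # c(u) = sum_i a_i h_i
```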
Once a user's context representation is obtained, the keyword matrix of all blog keywords extracted by our model in Task 1 is concatenated with it. The final features are the output of this whole feature-engineering process. Afterwards, a fully connected layer is trained on the user embeddings of the training set and predicts the probability distribution of users' interests over the 42 tags for the validation and test sets.

User Growth Value Prediction
According to the description of Task 3, the growth value can be estimated as the degree of activeness. Therefore, our basic idea is to incorporate a user's interaction information and his or her behavioral statistical features into a supervised learning model. The procedure of Task 3 is demonstrated in Figure 3. On the whole, we use a stacking framework [16] to enhance the accuracy of the final prediction. After basic behavior statistics analysis, the original features are selected as inputs to the stacking model. The stacking model is divided into two layers, the base layer and the stacking layer. In the base layer, we choose the Passive Aggressive Regressor [17] and the Gradient Boosting Regressor [18,19] as the group of base regressors due to their excellent performance. In the stacking layer, we use a support vector machine (SVM) model, specifically the NuSVR model, which can control its error rate. Finally, we obtain the final user growth values. Figure 4 illustrates an example of the daily statistics of user behaviors, including posting, browsing, commenting, voting up, voting down, adding favorites, following, and sending private messages. To predict user growth value, the dynamic changes of behaviors along the timeline are more useful. To avoid the sparse data problem, we adopt monthly rather than daily statistics of user behaviors. Figure 4. Example of daily statistics of user behaviors. Note: "Add" refers to "add favorites", and "send" refers to "send private messages".
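Switching from daily to monthly statistics can be sketched as a simple grouping step; the record format below is our own illustration:

```python
from collections import defaultdict

def monthly_counts(events):
    """Aggregate (user_id, 'YYYY-MM-DD', behavior) records into
    per-user, per-month behavior counts to avoid daily sparsity."""
    counts = defaultdict(int)
    for user, date, behavior in events:
        month = date[:7]             # keep only 'YYYY-MM'
        counts[(user, month, behavior)] += 1
    return dict(counts)
```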

Then we use correlation analysis to exclude the "vote down" behavior because of its negative contribution to model prediction. After that, through feature selection, we use the average, the log transform and the growth rate of the original data to obtain features for the stacking model.
where LOG(d) represents the adjusted log transform of data d, and GR(d t ) represents the growth rate from data d t in month t to data d t+1 in month t+1.
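A minimal sketch of these two derived features is shown below. Both the log1p adjustment and the +1 smoothing in the denominator are assumptions, since the exact equations are not recoverable from the text:

```python
import math

def log_feature(d):
    """LOG(d): log-scaled behavior count; log1p is assumed as the
    'adjustment' that keeps zero counts finite."""
    return math.log1p(d)

def growth_rate(d_t, d_t1):
    """GR(d_t): relative change from month t to month t+1; the +1
    smoothing term is assumed to avoid division by zero."""
    return (d_t1 - d_t) / (d_t + 1)
```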

PAR/GDR-NuSVR-Stacking Model (PGNS)
Once we have obtained the monthly statistics and derivative features described above, their combination is sent as input to the Passive Aggressive Regressor and the Gradient Boosting Regressor independently. By averaging the predictions of these two base models, a new feature is created and input into the stacking model, NuSVR. Because of the inherent randomness of the base models, we adopt a self-check mechanism based on 10-fold cross-validation. If the trained model obtains a score higher than a threshold S* under the given scoring rules, we feed the corresponding features of the validation set or test set into the model for a prediction, which is saved into a candidate set. Otherwise, if the trained model's 10-fold cross-validation score is lower than S*, the model is discarded and the program returns to the training session shown in the dotted box for a new round of training.
In order to reduce the errors of a single round of training, we run at least R* rounds of training and add all predictions that score higher than S* to the candidate set. In our experience, the ratio of the size of the candidate set to R* is about 0.45. When all rounds of training are completed, the predictions in the candidate set are averaged to produce the final results.
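A single PGNS round can be sketched with scikit-learn as follows; the hyper-parameters, and feeding the averaged base prediction to NuSVR as a single stacked feature, are assumptions rather than settings stated in the paper:

```python
import numpy as np
from sklearn.linear_model import PassiveAggressiveRegressor
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.svm import NuSVR

def pgns_predict(X_train, y_train, X_test, seed=0):
    """One round of the PGNS pipeline: fit the two base regressors,
    average their predictions into one stacked feature, then fit
    NuSVR on that feature and predict the test set."""
    par = PassiveAggressiveRegressor(max_iter=1000, random_state=seed)
    gbr = GradientBoostingRegressor(random_state=seed)
    par.fit(X_train, y_train)
    gbr.fit(X_train, y_train)
    # Average the two base predictions into a single stacked feature.
    stack_train = ((par.predict(X_train) + gbr.predict(X_train)) / 2).reshape(-1, 1)
    stack_test = ((par.predict(X_test) + gbr.predict(X_test)) / 2).reshape(-1, 1)
    svr = NuSVR()
    svr.fit(stack_train, y_train)
    return svr.predict(stack_test)
```

In the full system, each round's cross-validation score is checked against the threshold S* before its predictions enter the candidate set.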

EVALUATION
In our model, we first adopt the Jieba toolkit for Chinese word segmentation, and then train word embeddings with dimension 300 [11]. Table 3 shows the comparison results of our proposed approach for Task 1. It is observed that the best results are achieved when data from all three aspects are used for capturing the main ideas of blogs. We also test the performance of our combined neural network with different embedding inputs. Note that to obtain the results for an individual embedding, we train a new CNN model for blog embedding and compute the similarity between blog content and keywords in the embedding representation. The experimental results are summarized in Table 4. It is observed that the embedding of blog content proves more effective than that of keywords, while combining them achieves the best run. Table 5 displays the overall performance of our system's best run on each individual task, which achieved sixth place in the competition.

CONCLUSIONS AND FUTURE WORK
In this paper, we presented our system built for the User Profiling Technology Evaluation Campaign of SMP CUP 2017. For Task 1, we proposed extracting keywords from a user's blogs from three aspects: the blog itself, blogs on the same topic, and other blogs published by the same user. Then a unified neural network model with a self-attention mechanism was constructed for Task 2. The model is based on multi-scale convolutional neural networks, with the aim of capturing both local and global information for user profiling. Finally, we adopted a stacking model for predicting user growth value. According to SMP CUP 2017's metrics, our model runs achieved final scores of 0.563, 0.378 and 0.751 on the three tasks, respectively.

Future work includes analysis of the relationships between users and blogs. The current system only uses users' behavior in Task 2 and ignores the time when blogs are published. We plan to include network embedding in our model. Moreover, we will collect more blogs with real-time information and attempt to incorporate time information into our weighting schema for these tasks.

AUTHOR CONTRIBUTIONS
B. Li (byli@uir.edu.cn) is the leader of the UIR-SIST system, who drew the whole framework of the system. J. Lu (lj1230@nyu.edu) was responsible for building the model for keyword extraction, while L. Chen (lec@boyabigdata.cn) and K. Meng (kmmeng@uir.edu.cn) were responsible for the model construction of user interests tagging. F. Wang (wangfengyi18@mails.ucas.ac.cn) summarized the user growth value prediction, while J. Xiang (xiang.j@husky.neu.edu) and N. Chen (nchen@uir.edu.cn) summarized the evaluation and made error analysis. X. Han (hanxu@cnu.edu.cn) drafted the whole paper. All authors revised and proofread the paper.
Xu Han is an Assistant Professor at Capital Normal University. She received her PhD degree in 2011. Her research interests are artificial intelligence and mobile cloud computing. She has published over 30 research papers in major international journals and conferences.

Binyang Li is an Associate Professor at the School of Information Science and Technology, University of International Relations. He received his PhD degree from the Chinese University of Hong Kong in 2012. His research interests include natural language processing, sentiment analysis and social computing. He has published over 50 research papers in major international journals and conferences.