This toolkit provides several useful Python scripts for processing the Stack Exchange data dump.
- Python 2.7
- pandas
- xml.etree.ElementTree
- cPickle or pickle
 
- Download the data of the Stack Exchange site you are interested in (`*.stackexchange.com.7z`) from the Stack Exchange data dump, e.g. `ai.stackexchange.com.7z`
- Unzip `ai.stackexchange.com.7z` to the directory `dataset/ai`
- `cd pre_precessing`
- Execute:
python pystack.py --input ../dataset/ai/ --task all
- input: the directory that contains Posts.xml, PostLinks.xml, Votes.xml, Badges.xml, and Comments.xml. In the example above, the input is `dataset/ai`.
- task: one of [Posts, PostLinks, Votes, Badges, Comments, All]; each task corresponds to a Python file. By default, task is set to `all`.
- Outputs are saved in corresponding `.csv` and `.pkl` files.
- Analysis/statistics of the Stack Exchange site are saved in the file `pystack_analysis.log`.
- The details of each task are described individually below.
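All of the dump files these tasks consume are flat XML, one `<row …/>` element per record. As a minimal sketch (the sample rows and attribute names below follow the standard data dump schema, but this is not the toolkit's own code), each file can be streamed with `xml.etree.ElementTree.iterparse` instead of loading it whole:

```python
import xml.etree.ElementTree as ET
from io import StringIO

# Tiny fabricated sample mimicking the Posts.xml layout of the data dump.
sample = """<?xml version="1.0"?>
<posts>
  <row Id="1" PostTypeId="1" OwnerUserId="7" Title="What is AI?" Tags="&lt;ai&gt;"/>
  <row Id="2" PostTypeId="2" ParentId="1" OwnerUserId="9"/>
</posts>"""

def iter_rows(fileobj):
    """Stream <row> elements without keeping the whole tree in memory."""
    for _, elem in ET.iterparse(fileobj, events=("end",)):
        if elem.tag == "row":
            yield dict(elem.attrib)
            elem.clear()  # free memory, important for multi-GB dumps

rows = list(iter_rows(StringIO(sample)))
```

In a real run you would pass an open file (e.g. `open("../dataset/ai/Posts.xml")`) instead of the `StringIO` sample.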
 
python process_posts.py --input ../dataset/ai/Posts.xml
OR
python pystack.py --input ../dataset/ai/ --task Posts
- QuestionId_AskerId.csv
 - QuestionId_AnswererId.csv
 - QuestionId_AcceptedAnswererId.csv
 - AnswerId_QuestionId.csv
 - AnswerId_AnswererId.csv
 - AskerId_AnswererId.csv
 - question_tags.pkl: a dict pickle file whose keys are question ids and whose values are lists of tags
 - Questions.pkl: a dict pickle file whose keys are question ids and whose values are lists of [question title, question body]
 - Answers.pkl: a dict pickle file whose keys are answer ids and whose values are the corresponding answer bodies
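Since the `.pkl` outputs are plain Python dict pickles, reading them back is straightforward. A sketch with fabricated contents (the file names come from the list above; the ids, titles, and tags are made up):

```python
import os
import pickle
import tempfile

# Fabricated data shaped like the described outputs.
questions = {101: ["How do CNNs work?", "<p>question body</p>"]}  # Questions.pkl
tags = {101: ["neural-networks", "cnn"]}                          # question_tags.pkl

tmpdir = tempfile.mkdtemp()
for name, obj in [("Questions.pkl", questions), ("question_tags.pkl", tags)]:
    with open(os.path.join(tmpdir, name), "wb") as f:
        pickle.dump(obj, f)

# Load a pickle back and unpack one question's [title, body] pair.
with open(os.path.join(tmpdir, "Questions.pkl"), "rb") as f:
    loaded = pickle.load(f)
title, body = loaded[101]
```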
 
python process_postlinks.py --input ../dataset/ai/PostLinks.xml
OR
python pystack.py --input ../dataset/ai/ --task PostLinks
- PostId_RelatedPostId.csv: PostId -> RelatedPostId if LinkTypeId = 1; PostId is a duplicate of RelatedPostId if LinkTypeId = 3
 - Duplicate_Questions.csv: duplicate question pairs
 - Related_Questions_Source2Target.csv: there is a link from a source question to a target question
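The split by `LinkTypeId` described above amounts to partitioning the PostLinks rows into "related" and "duplicate" pairs. A minimal sketch with fabricated link tuples (not the toolkit's actual code):

```python
# Fabricated PostLinks rows as (PostId, RelatedPostId, LinkTypeId) tuples.
links = [(10, 20, 1), (30, 40, 3), (50, 60, 1)]

# LinkTypeId 1: source question links to a related target question.
# LinkTypeId 3: source question is a duplicate of the target question.
related = [(p, r) for p, r, t in links if t == 1]
duplicates = [(p, r) for p, r, t in links if t == 3]
```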
 
python process_votes.py --input ../dataset/ai/Votes.xml
OR
python pystack.py --input ../dataset/ai/ --task Votes
- QuestionId_Bounty.csv: columns = ["QuestionId","Bounty"], index = False
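Extracting bounties from Votes.xml can be sketched as a filter over vote rows. This assumes the standard dump schema, where `VoteTypeId` 8 marks a bounty start and carries a `BountyAmount` attribute; the rows below are fabricated:

```python
# Fabricated Votes.xml rows as attribute dicts.
votes = [
    {"PostId": "101", "VoteTypeId": "8", "BountyAmount": "50"},
    {"PostId": "102", "VoteTypeId": "2"},  # an up-vote, no bounty
]

# Keep only bounty-start votes and map question id -> bounty amount.
bounties = {int(v["PostId"]): int(v["BountyAmount"])
            for v in votes if v["VoteTypeId"] == "8"}
```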
 
python process_badges.py --input ../dataset/ai/Badges.xml
OR
python pystack.py --input ../dataset/ai/ --task Badges
- Badges.csv, columns = ["UserId","BadgeName","BadgeDate"], index = False
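The `columns = [...]`, `index = False` notation used for these CSV outputs refers to how the files are written with pandas. A sketch with fabricated badge rows showing the effect of `index=False` (no extra row-index column in the CSV):

```python
from io import StringIO

import pandas as pd

# Fabricated badge records matching the documented column layout.
df = pd.DataFrame(
    [("7", "Student", "2019-01-01T00:00:00"),
     ("9", "Teacher", "2019-02-01T00:00:00")],
    columns=["UserId", "BadgeName", "BadgeDate"],
)

buf = StringIO()
df.to_csv(buf, index=False)  # header row matches the columns above
lines = buf.getvalue().splitlines()
```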
 
python process_comments.py --input ../dataset/ai/Comments.xml
OR
python pystack.py --input ../dataset/ai/ --task Comments
- PostId_CommenterId.csv: index = False, columns = ["PostId","UserId","Score"]; UserId left a comment on PostId (a question or an answer), and Score is the number of up-votes the comment received
 - PostId_CommenterId_Text.pkl: d = {"PostId":[],"UserId":[],"Score":[],"Text":[],"CreationDate":[]}
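The `d = {...}` layout above is a dict of parallel lists: each key maps to one column, and the i-th entries across all lists form one comment record. A sketch with fabricated comments showing how rows can be reassembled by index:

```python
# Fabricated data shaped like PostId_CommenterId_Text.pkl.
d = {
    "PostId": [101, 101],
    "UserId": [7, 9],
    "Score": [2, 0],
    "Text": ["nice answer", "why?"],
    "CreationDate": ["2019-01-01", "2019-01-02"],
}

# Zip the parallel lists back into per-comment tuples.
records = list(zip(d["PostId"], d["UserId"], d["Score"], d["Text"]))
```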
 
- Install `p7zip` if not already installed:
sudo apt-get install p7zip
- To install the command line utility:
sudo apt-get install p7zip-full
- Or install p7zip on Mac OS X
- Execute the following command to extract a *.7z file:
7za x *.7z
This code is written for research. It aims to help you start your analysis of Stack Exchange sites without the dirty preprocessing work.
Feel free to post any questions or comments.
If you use this code, please consider citing QDEE: Question Difficulty and Expertise Estimation in Community Question Answering Sites and ColdRoute: effective routing of cold questions in stack exchange sites.