Commit 19e8d73

docs

1 parent 5f8cf1a commit 19e8d73

13 files changed

Lines changed: 100 additions & 3 deletions


datasets/README.txt

Lines changed: 2 additions & 0 deletions
@@ -0,0 +1,2 @@
+These are various sample datasets for tensorlog - check the individual
+directories for more information. Some of these are used for testing.

datasets/amie-qa/README.txt

Lines changed: 10 additions & 0 deletions
@@ -0,0 +1,10 @@
+AMIE is a system that mines rules from KBs. Part of what was
+distributed with this system was a small sample KB.
+/afs/cs.cmu.edu/user/wcohen/shared-home/data/dialog-toy
+contains code to generate questions (using templates) from
+this KB.
+
+Luis Galárraga, Christina Teflioudi, Katja Hose, and Fabian
+M. Suchanek. 2015. Fast rule mining in ontological knowledge bases
+with AMIE+. The VLDB Journal 24, 6 (December 2015),
+707-730. DOI=http://dx.doi.org/10.1007/s00778-015-0394-1
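The template-based question generation described above lives at the AFS path and is not part of this commit, but the idea can be sketched roughly as follows. The template strings and the (subject, relation, object) triple format here are illustrative assumptions, not the actual dialog-toy code:

```python
# Hypothetical sketch of template-based QA generation from a KB.
# Templates and the triple format are assumptions for illustration.

TEMPLATES = ["who is the {rel} of {subj}?",
             "what is {subj}'s {rel}?"]

def generate_questions(kb_triples, templates=TEMPLATES):
    """Yield (question, answer) pairs by filling templates with KB facts."""
    for subj, rel, obj in kb_triples:
        for t in templates:
            yield t.format(rel=rel, subj=subj), obj
```

Each KB fact yields one question per template, with the object entity as the gold answer.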

datasets/amie/README.txt

Lines changed: 10 additions & 0 deletions
@@ -0,0 +1,10 @@
+TODO (Katie): docs
+
+AMIE is a system that mines rules from KBs. Part of what was
+distributed with this system was a largish set of rules and some small
+KBs, used here to ....
+
+Luis Galárraga, Christina Teflioudi, Katja Hose, and Fabian
+M. Suchanek. 2015. Fast rule mining in ontological knowledge bases
+with AMIE+. The VLDB Journal 24, 6 (December 2015),
+707-730. DOI=http://dx.doi.org/10.1007/s00778-015-0394-1

datasets/cora/README.txt

Lines changed: 10 additions & 0 deletions
@@ -0,0 +1,10 @@
+This is the CORA bibliography-matching problem described originally in
+
+Poon, H., & Domingos, P. (2007, July). Joint inference in information
+extraction. In AAAI (Vol. 7, pp. 913-918).
+
+and later adapted for ProPPR in
+
+William Yang Wang, Kathryn Mazaitis, William W. Cohen (2013):
+Programming with Personalized PageRank: A Locally Groundable
+First-Order Probabilistic Logic in CIKM-2013

datasets/fb15k-237/README.txt

Lines changed: 8 additions & 0 deletions
@@ -0,0 +1,8 @@
+FB15K-237 Knowledge Base Completion Dataset from
+https://www.microsoft.com/en-us/download/details.aspx?id=52312
+
+Uses a set of rules learned by ISG with ProPPR.
+
+The experiment does rule weight learning.
+
+
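Rule weight learning, very roughly, fits one weight per rule so that facts derivable by high-weight rules score high. A minimal sketch, assuming a logistic score over summed weights of firing rules and plain gradient descent (the rule/KB representation and loss are illustrative, not TensorLog's actual implementation):

```python
import math

def score(fact, rules, weights, kb):
    """Hypothetical: a fact's score is a logistic over the summed
    weights of the rules that derive it (derivation check is a callable)."""
    s = sum(w for r, w in zip(rules, weights) if r(fact, kb))
    return 1.0 / (1.0 + math.exp(-s))

def sgd_step(facts, labels, rules, weights, kb, lr=0.1):
    """One gradient-descent step on logistic loss w.r.t. the rule weights."""
    grads = [0.0] * len(weights)
    for fact, y in zip(facts, labels):
        p = score(fact, rules, weights, kb)
        for i, r in enumerate(rules):
            if r(fact, kb):
                grads[i] += (p - y)  # d(logloss)/dw_i for a firing rule
    return [w - lr * g for w, g in zip(weights, grads)]
```

A positive example derived by a rule pushes that rule's weight up; a negative example pushes it down.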

datasets/fb15k-speed/README.txt

Lines changed: 10 additions & 0 deletions
@@ -0,0 +1,10 @@
+FB15K-237 Knowledge Base Completion Dataset from
+https://www.microsoft.com/en-us/download/details.aspx?id=52312
+
+Uses a set of rules learned by ISG with ProPPR.
+
+The experiment runs some performance tests for inference and related
+operations. The speed numbers don't seem very stable - they vary a
+lot, maybe due to system load - so the tests are kind of unreliable.
+
+TODO: decide whether the system got slower sometime around v1.3.6.
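Given the noted run-to-run variance, timing repeated runs and reporting a spread gives a better picture than a single measurement. A generic harness sketch (the names here are illustrative, not part of this test suite):

```python
import statistics
import time

def time_runs(fn, n=5):
    """Run fn n times; return (mean, stdev) of wall-clock seconds.
    A nontrivial stdev relative to the mean flags an unstable benchmark."""
    ts = []
    for _ in range(n):
        t0 = time.perf_counter()
        fn()
        ts.append(time.perf_counter() - t0)
    return statistics.mean(ts), statistics.stdev(ts)
```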

datasets/grid/README.txt

Lines changed: 30 additions & 0 deletions
@@ -0,0 +1,30 @@
+Various artificial learning tasks inspired by experiments in:
+
+Dries, Anton, et al. "ProbLog2: Probabilistic logic programming."
+Joint European Conference on Machine Learning and Knowledge Discovery
+in Databases. Springer, Cham, 2015.
+
+code for running scalability experiments in JAIR submission
+- bigexpt.py
+- bigtfexpt.py
+well-documented code to demo building a TF model
+- demo.py
+
+learning an approximation of the ProbLog2 semantics, by learning
+probabilities defined by a biased logistic on top of the proof-count
+function, in JAIR submission
+- distlearning.py
+
+learning to process queries with >1 target output, by learning
+probabilities defined by a biased logistic on top of the proof-count
+function, in JAIR submission
+- multiclass.py
+
+demo of integration with TF - embedding learning is in JAIR submission
+- tfintegration.py
+
+automated tests
+- expt.py
+- testexpt.py
+- tfexpt.py
+

datasets/grid/bigexpt.py

Lines changed: 2 additions & 0 deletions
@@ -1,3 +1,5 @@
+# code for running scalability experiments in JAIR submission
+
 import sys
 import numpy as NP
 import random

datasets/grid/distlearning.py

Lines changed: 0 additions & 1 deletion
@@ -16,7 +16,6 @@
 # at a different scale. euclidean distance? jenson-shannon?
 # TODO: output test-set and full-data y's for visualization
 # TODO: why do the actual y's include non-zeros for the filler entities?
-# TODO: should I normalize this? why?
 #
 #
 # --gendata M - draw M sample interpretations (ie grids, where edges

datasets/grid/tfintegration.py

Lines changed: 0 additions & 2 deletions
@@ -5,8 +5,6 @@
 import random
 import getopt
 
-# todo: for visualization, sort everything by cell name
-#
 # 1) demonstrates adding logical inference as an input to a tensorflow
 # function, and training thru the function to modify fact confidences
 # (with the --corner soft option)
