JavaScript Model Tracer
This page details the ways in which rule authors can organize production rule models for the JavaScript model tracer, as well as options for configuring the rule engine and interacting with it at run-time. For information on writing rules, see the nools documentation (note that only the nools DSL syntax is used for CTAT model tracer models).
- Overview
- Required Types
- Model Structure
- Initializing Working Memory
- Backtracking
- Evaluating Student Input
- Configuring Model Behavior
- Setting Tracing Flags
- Browser Console Interactivity
- Logging Custom Fields
- Defining Skills
- Hints and Techniques
Overview

The JavaScript Model Tracer uses the nools forward-chaining production rule engine to run tutor models. There are, however, differences between rule files written for use directly with nools and tutor models written for use with the model tracer. One important difference is that the model tracer uses its own version of the modify function to alter facts in working memory. To modify facts in tutor models, use the following function signature: modify(<fact>, <property>, <value>) rather than the signature described in the nools documentation.
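For example, inside a rule's then block a model might update a fact like this (the Addend fact and its used property are illustrative, not part of the tracer API):

```
then {
    // CTAT model tracer form: modify(<fact>, <property>, <value>)
    modify(a1, "used", true);

    // Plain nools form -- do NOT use this in tracer models:
    // a1.used = true; modify(a1);
}
```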
Required Types

The following fact types must be defined by a given model in order for certain tracer functionality to work.
- StudentValues: Required for all models. This is the type of fact that represents student input in the tracer's working memory.
```
define StudentValues {
    selection: null,
    action: null,
    input: null,
    constructor: function(s, a, i) {
        this.selection = s;
        this.action = a;
        this.input = i;
    }
}
```
- TPA: Required for any model which generates TPAs. TPAs are sent by asserting TPA facts from inside rules' then blocks.
```
define TPA {
    selection: null,
    action: null,
    input: null,
    constructor: function(s, a, i) {
        this.selection = s;
        this.action = a;
        this.input = i;
    }
}
```
- Hint: Required for any model which generates hints. Hints are sent by asserting Hint facts from inside rules' then blocks. Hints will appear in order of precedence, from high to low. Hints with equal precedence will appear in the order they were asserted.
```
define Hint {
    precedence: 0,
    msg: "",
    constructor: function(m, optPrecedence) {
        this.msg = m;
        this.precedence = optPrecedence || 0;
    }
}
```
- IsHintMatch: Required for any model which sets the "use_hint_fact" configuration parameter to true. See Configuring Model Behavior for more information.
```
define IsHintMatch {
    constructor: function() {
    }
}
```
Model Structure

For any non-trivial model, it's a good idea to break the model definition up into multiple files. This makes models easier to understand and maintain, and allows authors to re-use the same sets of rules and types with different initial problem states. A given model can be broken up into any number of files, but there are four general kinds of file to consider when writing a model:
- Rule File: all of the rule and function definitions for the model
- Types File: type definitions for the model
- Problem File: any necessary problem-specific information
- Skills File: skill definitions and rule-to-skill mappings for the model
A typical problem might consist of these four files organized like so: problem_file (imports skills_file and rule_file (imports types_file))
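As a sketch, a problem file for an addition model might pull the other files together using nools import statements. The file names and global variables here are illustrative; the exact layout will vary by project:

```
// problem_file.nools
import("skills_file.nools");
import("rule_file.nools"); // rule_file.nools itself begins with import("types_file.nools")

global addend1 = 2; // problem-specific given values
global addend2 = 3;
```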
Initializing Working Memory

Authors will usually want the working memory of the rule engine to be in a particular state when a problem begins. This can be accomplished with a "bootstrap" rule: a rule which fires immediately when the model is loaded and initializes the problem state in its then block. For example, the bootstrap rule of an addition problem might look like:
```
rule bootstrap {
    when {
        // assume addend1 and addend2 are global variables defined in a
        // problem-specific file which imports this model
        a1: Number from addend1;
        a2: Number from addend2;
    }
    then {
        assert(new Addend(a1));
        assert(new Addend(a2));
        // and so on...
        halt();
    }
}
```
For a model with no given values, the following "when" block can be used instead:
```
when {
    b: Boolean b === true from true;
}
```
If these when blocks look strange, see the nools documentation for an explanation of the from keyword.

Important: be sure to call halt() at the end of the then block in your bootstrap rule to prevent the rule engine from starting to execute immediately on load.
Fact references to other facts: If you want properties in your facts to refer to other facts (to create trees or other data structures, for example), it is usually best to use names rather than direct object references, and then include patterns in your rules to find the facts themselves. Direct object references to facts in working memory don't always work. To use names instead, you will need a string-valued "name" property in each of your fact types, and you will typically want to give each fact a unique name. To see what this means, assume you have the following two types:
```
define InterfaceElement { // represents a user interface component in working memory
    name: null,  // component's name; the selection in a selection-action-input tuple
    value: null, // component's current value; the input in selection-action-input
    constructor: function(n) {
        this.name = n;
    }
}

define Problem {
    interfaceElement1: null, // name of the fact representing the component used on the 1st step
    constructor: function(ie1Name) {
        this.interfaceElement1 = ie1Name;
    }
}
```
Then we recommend you initialize working memory as follows:
```
rule bootstrap {
    when {
        s: Boolean s === false from false;
    }
    then {
        let ie1 = assert(new InterfaceElement("step1TextInput"));
        assert(new Problem(ie1.name)); // store the unique name, not the reference ie1
        halt();
    }
}

rule Step1 {
    when {
        prob: Problem {interfaceElement1: sel};
        ie: InterfaceElement ie.name == sel && ie.value == null; // match on the name
    }
    then {
        assert(new Hint("Type 33 in text input " + sel + "."));
        if (checkSAI({selection: sel, action: "UpdateTextField", input: 33})) {
            modify(ie, "value", 33);
            halt();
        }
    }
}
```
Backtracking

Backtracking allows the model tracer to explore all possible solution paths independently of one another. It accomplishes this by saving the state of the model any time it finds more than one new rule activation on the agenda. If at a later point there are no more activations on the agenda and halt() has not been called, the tracer "backtracks" by restoring the model to the state that was most recently saved, then fires the activation subsequent to the one fired last time the model was at that point. This continues until the search is ended by a call to halt() or all possible activation chains have been fired.
Authors can force the tracer to backtrack by calling backtrack() from within a rule's then block. The tracer will backtrack after executing the rest of the then block, even if there are still activations on the agenda.
Backtracking is disabled by default in the model tracer. To enable it, add the following call to the then block of your model's bootstrap rule: setProblemAttribute("use_backtracking", true).
Evaluating Student Input

Any time a model predicts a potential student step, its prediction can be checked against the most recent student input by calling the function checkSAI(predictedSAI, optionalComparator, isBuggyStep), where:
- predictedSAI is the step predicted by the model, in the form of an object with the properties "selection", "action", and "input".
- optionalComparator is an optional comparison function for the input properties only of the model and student SAIs. It should take two arguments and return true for a match or false for no match. If this argument is supplied, the comparison is based on equality of the two SAIs' selection and action properties and the result of this function for the input properties. If this argument is omitted, the comparison is based on equality of the two SAIs' selection, action, and input properties.
- isBuggyStep is a boolean indicating whether the predicted step represents a known incorrect ("buggy") action. If this parameter is false or omitted, and none of the fields of predictedSAI have the value "not_specified", the step is considered correct.
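As an illustration, here is a sketch of a custom input comparator that parses both inputs as numbers and accepts a match within a small tolerance. The function name is an assumption; any two-argument, boolean-returning function with this shape should work:

```javascript
// Hypothetical comparator for checkSAI: compares the two SAIs' input
// fields numerically, so that e.g. "2.0" matches 2.
function numericInputsMatch(predictedInput, studentInput) {
    var a = parseFloat(predictedInput);
    var b = parseFloat(studentInput);
    if (isNaN(a) || isNaN(b)) return false; // non-numeric inputs never match
    return Math.abs(a - b) < 1e-9;
}

// Inside a rule's then block one might then write:
// checkSAI(predictedSAI, numericInputsMatch);
```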
checkSAI returns the result of the comparison as a boolean value: true if the student input matches the tutor prediction, false if not.
Authors can have the checkSAI function ignore any of the selection, action, and input properties by setting those they want ignored to the string: "not_specified". This is useful for pruning the search space the tutor explores while deferring actual evaluation until further down the chain. A predicted SAI with any field set to "not_specified" will not be considered a correct step. To match any value for a given field and have that prediction be considered correct, i.e. wildcard matching, set that field to "don't_care" instead.
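As a sketch (the component names and the touched property here are made up), a rule's then block might use the two special values like this:

```
then {
    // Prune only: match any input on this component, without judging correctness yet
    if (checkSAI({selection: "sumField", action: "UpdateTextField", input: "not_specified"})) {
        modify(sum, "touched", true); // defer real evaluation to a later rule
    }

    // Wildcard: accept any input on this component as a correct step
    if (checkSAI({selection: "scratchField", action: "UpdateTextField", input: "don't_care"})) {
        halt();
    }
}
```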
Here's an example of how this function might be used in an addition model:
```
rule DetermineSum {
    when {
        a1: Addend a1.value !== null;
        a2: Addend a2.value !== null && a2 !== a1;
        sum: Sum sum.value === null;
    }
    then {
        var ans = a1.value + a2.value;
        var predictedSAI = {selection: sum.inputComponentName, action: "UpdateTextField", input: ans};
        if (checkSAI(predictedSAI)) {
            modify(sum, "value", ans);
            halt();
        }
    }
}
```
Configuring Model Behavior

Authors can use the setProblemAttribute(<attribute>, <value>) function from inside a bootstrap rule to control how the rule engine behaves. The valid attributes are listed in the table below:
| Name | Values | Description | Default |
|------|--------|-------------|---------|
| "use_backtracking" | true, false | Whether the model should use backtracking when searching for a match | false |
| "prune_old_activations" | true, false | Whether "old" activations should be allowed to fire. An activation is "old" if it was not generated by the last match cycle | false |
| "use_hint_fact" | true, false | If true, a fact of type "IsHintMatch" will be asserted in working memory at the start of every hint match cycle, and retracted at the end of the cycle. The "IsHintMatch" type must be defined in any model that sets this parameter to true. | false |
| "hint_out_of_order" | true, false | If true, the model tracer will provide feedback to the tutor interface when a step is taken out of order (meaning there were no steps predicted by the model during the last match cycle which shared the "selection" property of the input). | false |
| "search_all_permutations" | true, false | If true, the model tracer will explore all permutations of a given set of activations on the agenda at a given point. If false, the tracer will only create branch points (backtracking checkpoints) at states where there is at least one new activation, and at least two total activations, on the agenda. | true |
So, using the bootstrap rule from the previous section as an example, we could put the engine in backtracking mode by changing the then block like so:
```
then {
    assert(new Addend(a1));
    assert(new Addend(a2));
    setProblemAttribute("use_backtracking", true);
    halt();
}
```
Setting Tracing Flags

As the rule engine executes student input against a model, it produces tracing information that can be useful for debugging the model or simply following along with changes to the model's state. Distinct types of tracing information are each associated with their own flag, and the flags that are set determine what information is made visible at run-time. Users can set flags with setTracerLogFlags([flag1], [flag2], ... [flagN]) and unset them with unsetTracerLogFlags([flag1], [flag2], ... [flagN]). Information is printed to the browser console (press F12 to open it in Firefox and Chrome).
Valid flags are listed in the table below:
| Flag | Prints | When |
|------|--------|------|
| "state_save" | IDs of all activations on the agenda | A branch point is reached (more than one new activation on the agenda) |
| "state_restore" | IDs of all activations on the agenda | A branch point is returned to as a result of backtracking |
| "agenda_insert" | The ID of the activation added to the agenda, whether it was new, and whether it was skipped | An activation is added to the agenda |
| "agenda_retract" | The ID of the activation removed from the agenda | An activation is removed from the agenda |
| "fire" | The ID of the activation about to fire | An activation is about to fire |
| "assert"/"modify"/"retract" | The type of fact, its ID number, and its values in JSON format | A fact has been asserted, modified, or retracted |
| "backtrack" | "backtracking," and whether it was triggered by the model (called from within a rule) or the engine (no more valid activations on the agenda) | The model backtracks |
| "error" | An error message | An error has occurred |
| "debug" | Various debugging messages, mostly to do with internal workings of the model tracer | N/A |
| "sai_check" | Both SAIs, whether or not they were found to match, and whether the model's prediction was a buggy step | Student input is compared to steps predicted by the model |
| "agenda_pre" | The IDs of all activations on the agenda | A match cycle is about to start |
| "agenda_post" | The IDs of all activations on the agenda | A match cycle has ended |
| "tpa" | The TPA fact asserted | A TPA fact has been asserted |
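For example, to follow rule firings and SAI comparisons from the browser console (the particular flags chosen here are just one useful combination):

```
setTracerLogFlags("fire", "sai_check"); // log rule firings and SAI checks
// ... interact with the tutor ...
unsetTracerLogFlags("fire");            // keep only the SAI-check output
```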
Browser Console Interactivity

The following table lists a set of functions callable at run-time and their uses. These functions are globally defined, so they can be called from the browser console or by custom scripts.
| Function Call | Description |
|---------------|-------------|
| printAgenda() | Print the IDs of all activations on the agenda at the time the function is called |
| printFact([factID]) | Print the type and property values (in JSON format) of the fact in working memory with ID [factID], or "No Such Fact" if no fact with that ID exists. |
| printFacts([factType]) | Print all facts of type [factType], or, if no type is provided, print all facts in working memory |
| printMatch([CTNodeID]) | If an SAI match-check (function `checkSAI()`) was made as a result of firing the activation associated with [CTNodeID], prints the student/tutor SAIs and the result of the check; the argument should be just the integer inside the brackets beside the node in the `printConflictTree()` output |
| whyNot([ruleName]) | For each constraint of the given rule, print whether it is currently matched by facts in working memory. If it is matched, also print all possible fact bindings for that constraint's alias. |
| setStepperMode([true|false]) | Enable or disable stepper mode, which allows you to fire an arbitrary number of activations at a time, rather than run a complete match cycle for every input. Disabling stepper mode will cause normal execution to resume immediately |
| takeSteps([numSteps]) | If in stepper mode, causes [numSteps] rule activations to fire. [numSteps] defaults to 1. Has no effect if stepper mode is not enabled. |
| setBreakpoint([ruleName],["first"|"every"|"none"]) | Set or clear a break-point for a given rule. Rule execution will halt immediately before a rule with a break-point set fires. Passing "first" as the second argument sets a break-point only on the next activation for that rule; passing "every" will set a break-point for every activation of that rule. "none" clears any existing break-point on that rule. |
| resume() | Resume normal execution after a break-point causes execution to halt |
| printConflictTree([firedOnly]) | Print a formatted list of rule names that have appeared on the agenda during the last or current match cycle. Each node's children are those activations that were on the agenda at the time that that node was fired. If [firedOnly] is equal to true, only activations that were fired will be displayed (default value is false). Activations which called checkSAI are preceded by a three-character string representing which fields of the student SAI matched the SAI predicted by the tutor. Each character represents one field of the SAI; a letter in that position signifies that that field was matched, and a '-' signifies that it was not. For example, the string 'SA-' would mean that the Selection and Action fields of the two SAIs matched, and the Input field did not. This string is followed by a letter signifying the ultimate result of the match. |
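A debugging session using these functions might look like the following sketch (the fact type "Addend" and the rule name "DetermineSum" are examples from this page's addition model, not part of the tracer API):

```
setStepperMode(true);      // pause the engine between activations
takeSteps(1);              // fire exactly one activation
printAgenda();             // see what is now eligible to fire
printFacts("Addend");      // dump all Addend facts
whyNot("DetermineSum");    // check why a rule is (or is not) matching
setStepperMode(false);     // resume normal execution
```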
Logging Custom Fields

Custom fields are a way for models to generate arbitrary name-value pairs and return them to the tracer environment for logging. Authors can make use of custom fields by declaring a global variable called "custom_fields" in their model and asserting it into working memory in their bootstrap rule. Fields can be set by calling modify(custom_fields, [name], [value]). At the end of every match cycle, any custom fields that were set during that match will be propagated out to the tracer. Here's an example:
```
global custom_fields = {
    field1: null,
    field2: null
};

rule bootstrap {
    when {
        bool: Boolean bool === true from true;
    }
    then {
        /*...*/
        assert(custom_fields);
        /*...*/
    }
}

rule setFields {
    when {
        /* (some constraint here) */
    }
    then {
        /*...*/
        modify(custom_fields, "field1", "hello");
        modify(custom_fields, "field2", "world!");
        /*...*/
    }
}
```
Important: the name of the variable used to store custom fields must be exactly "custom_fields", or the tracer environment won't be able to find it.
Defining Skills

Authors can associate rules with skills by declaring a global variable called "skill_definitions" and initializing it to an array of objects, each of which represents a skill. These objects must have the following properties:
- ruleName: the name of the rule associated with this skill
- category: the category the skill belongs to
- opportunities: the number of opportunities to exercise this skill that exist in this problem
The following properties are optional:
- skillName: the name of the skill (defaults to ruleName)
- label: the display name for the skill (defaults to skillName)
An example skill declaration might look like this:
```
global skill_definitions = [
    {
        ruleName: "rule1",
        category: "skills",
        opportunities: 4,
        skillName: "skill-1"
    },
    {
        ruleName: "rule2",
        category: "skills",
        opportunities: 2,
        skillName: "skill-2",
        label: "skill Too"
    }
];
```
Hints and Techniques

Here are some tips that might help.
We recommend the Apache HTTP server, available free of charge at http://httpd.apache.org/. It runs on Windows and Linux and comes pre-installed on recent releases of macOS. If you set Apache's DocumentRoot directive to a parent of the CTAT directory available to the HTML Editor, then you can test your student interfaces in your browser with URLs like this:
http://localhost:80/CTAT/FractionAddition/HTML/fractionAddition.html?question_file=../CognitiveModel/1416.nools
where:
- 80 is the port number in your Apache server's Listen directive (the default is 80);
- CTAT is the file system path (it may descend through several folders) below Apache's DocumentRoot to your packages;
- FractionAddition/HTML/fractionAddition.html is the package path to your student interface file;
- ../CognitiveModel/1416.nools is the path to the top-level, problem-specific .nools file, relative to the student interface.
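The corresponding httpd.conf directives might look like the following sketch (the paths are examples; adjust them to wherever your packages actually live):

```
Listen 80
DocumentRoot "/var/www/html"
# With the CTAT packages unpacked under /var/www/html/CTAT,
# the URL above will resolve to your student interface.
```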
On the Chrome and Firefox browsers, the Developer Tools invoked from the hamburger menus at the upper right corner provide interactive debugging aids and access to the printAgenda() and other functions described above. The most useful tabs on the tools panel are these:
- Network shows the browser's requests to the network, including downloads of all the files retrieved by your HTML page, both those referred to by markup tags (e.g. <script>) and those loaded by JavaScript;
- Console provides interactive access to the printAgenda() and other functions described above.
- If the user interface fails to load and you see your rules printed to the console, scroll up to look for a syntax error description. The beginning of the error diagnostic shows where the rule parser failed, but the actual error might be earlier in your text. We recommend text editors that help you match parentheses, including Sublime (https://www.sublimetext.com/) and many others.
- Pressing the up-arrow repeatedly at the console prompt retrieves prior entries. You may want to develop a macro, e.g., printFacts(); printConflictTree(); printAgenda() to re-enter by this means after every step.