Once you have a rule set containing the rules for some process, for instance the rules for mortgage approval or the rules for pricing insurance, you can put a set of data items through the rule set and get a response. This is the process of inference. Note that ours is a fuzzy inference engine, so you get not only results but also the degree of certainty associated with them.
If your rule set was created by machine learning, or built using heuristics, you may be inferring from incomplete data.
Darl handles uncertain data or an uncertain model according to the laws of possibility theory and calculates certainty and tolerance values for all results.
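To make the idea of results carrying certainty values concrete, here is a minimal sketch of a single fuzzy rule in Python. This is an illustration of the general technique, not DARL's actual implementation: the rule, the membership functions, and the way certainty is propagated (min of the input certainties) are all assumptions made for the example.

```python
def tri(x, a, b, c):
    """Triangular membership function: rises from a to b, falls from b to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def approve_mortgage(income, debt_ratio, income_cert=1.0, debt_cert=1.0):
    """Toy fuzzy rule: IF income IS high AND debt IS low THEN approve.

    Firing strength uses min, a common fuzzy AND. The certainty of the
    result is capped by the certainty of the inputs, so uncertain data
    yields an uncertain answer (hypothetical propagation scheme).
    """
    high_income = tri(income, 40_000, 100_000, 200_000)
    low_debt = tri(debt_ratio, -0.1, 0.0, 0.5)
    strength = min(high_income, low_debt)
    certainty = min(income_cert, debt_cert)
    return strength, certainty
```

With fully certain inputs the rule fires cleanly; feed it an income value you are only 80% sure of, and the result's certainty drops to 0.8 even if the rule fires strongly.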
To use a rule set with a bot you need to ask for data items one at a time, but a rule set may have hundreds of inputs. How do you decide which one to ask for first?
The DARL inference engine automatically analyzes the rule set for what we call salience.
It runs the rule set backwards, looking for the most pivotal inputs, and assigns a salience value to each one.
We sort these and ask the most salient input value first.
Each time the user answers, we re-run the salience calculations. This eliminates the inputs whose values we no longer need, because previous answers have made them redundant.
The inference engine asks the fewest possible questions to resolve all of the outputs in the rule set.
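The loop described above, score the unknown inputs, ask the best one, re-score, can be sketched as follows. The scoring here (count how many still-undecided rules an input appears in) is a deliberately simplified stand-in for DARL's salience analysis, and the rule representation is hypothetical.

```python
def pick_question(rules, known):
    """Return the unknown input that appears in the most unresolved rules.

    rules: list of (set_of_input_names, output_name) pairs.
    known: dict of inputs already answered. A rule whose inputs are all
    known is treated as resolved and no longer contributes to salience.
    """
    counts = {}
    for inputs, _output in rules:
        if inputs <= known.keys():          # rule fully resolved: skip it
            continue
        for name in inputs - known.keys():  # only still-unknown inputs count
            counts[name] = counts.get(name, 0) + 1
    return max(counts, key=counts.get) if counts else None

# Toy rule set: three rules over four inputs (illustrative names).
rules = [
    ({"income", "debt"}, "approve"),
    ({"income", "age"}, "rate"),
    ({"region"}, "tax"),
]

known = {}
order = []
while (q := pick_question(rules, known)) is not None:
    order.append(q)
    known[q] = "answered"   # in a real bot, this is where you ask the user
```

Because "income" appears in two rules it is asked first; after each answer the scores are recomputed, which is what lets a real engine drop questions made redundant by earlier answers.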
Each input in a ruleset is a data item we may need to elicit from the user, and each output may need to be reported back to the user.
We need to associate question text with each input, and some kind of reporting with each output. There may also be outputs, sub-calculations, or sensitive values that we do not want to send back. Our user interface allows you to assign question text to each input, to specify the text associated with each category if the input or output is categorical, and to set ranges on numeric inputs.
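One way to picture this mapping is as per-item metadata attached to the rule set's inputs and outputs. The field names and structure below are illustrative assumptions, not DARL's actual schema: a `question` string and `range` for numeric inputs, per-category text for categorical ones, and a `report` flag to withhold sensitive values.

```python
# Hypothetical metadata attached to rule-set inputs and outputs.
io_metadata = {
    "income": {
        "kind": "numeric",
        "question": "What is your gross annual income?",
        "range": (0, 1_000_000),
    },
    "employment": {
        "kind": "categorical",
        "question": "What is your employment status?",
        "categories": {
            "employed": "I work for an employer",
            "self_employed": "I run my own business",
        },
    },
    "internal_risk_score": {
        "kind": "numeric",
        "report": False,   # sub-calculation: never sent back to the user
    },
}

def reportable(meta):
    """Names of items that may be reported back; report=False withholds one."""
    return {name for name, m in meta.items() if m.get("report", True)}
```

The bot layer would read `question` when eliciting an input, validate numeric answers against `range`, and filter its final report through `reportable` so sub-calculations and sensitive values stay server-side.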