The Inference Agent is a server in the DAS architecture (like the Link Creation Agent and the Query Agent). They are all supposed to be in the same DASNode network.
The Inference Agent processes inference requests. There will be several different types of inference requests, but for now we'll focus only on three of them:
PROOF_OF_IMPLICATION_OR_EQUIVALENCE
PROOF_OF_IMPLICATION
PROOF_OF_EQUIVALENCE
Each request type should be implemented as a command which expects potentially different parameters, although, for the three commands we're going to implement, the list of parameters is the same.
Inference requests are processed mostly in the same way:
1. Send a request to the Link Creation Agent to create reasoning links, passing a list of [QUERY_LINK_CREATION, PROCESSOR/CREATION_TEMPLATE] pairs
2. Start the Distributed Inference Control algorithm, passing QUERY_INFERENCE and FITNESS_FUNCTION
3. Wait for the inference to finish
4. Send the results back to the caller
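The steps above can be sketched in C++ as follows. All type and method names here are hypothetical placeholders, not the actual DAS API:

```cpp
#include <string>
#include <utility>
#include <vector>

// Hypothetical request descriptor; these names are illustrative only.
struct InferenceRequest {
    // [QUERY_LINK_CREATION, PROCESSOR/CREATION_TEMPLATE] pairs for step 1
    std::vector<std::pair<std::string, std::string>> link_creation_requests;
    std::string query_inference;   // QUERY_INFERENCE for step 2
    std::string fitness_function;  // FITNESS_FUNCTION tag for step 2
};

// Sketch of the four processing steps shared by all request types.
std::string process(const InferenceRequest& request) {
    for (const auto& [query, processor] : request.link_creation_requests) {
        // 1. Send each pair to the Link Creation Agent (hypothetical call):
        // link_creation_agent.create(query, processor);
        (void)query;
        (void)processor;
    }
    // 2. Start the Distributed Inference Control algorithm (hypothetical):
    // auto job = inference_control.start(request.query_inference,
    //                                    request.fitness_function);
    // 3. Wait for the inference to finish:  job.wait();
    // 4. Send the results back to the caller:
    return "results";  // placeholder for job.results()
}
```

The request types differ only in the data carried by `InferenceRequest`, not in this control flow.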
Different types of inference, however, will use different queries, processors, and fitness functions in steps 1 and 2.
Both the Link Creation Agent and the Inference Control algorithm will be part of the DAS architecture, so we can just use shared identifiers to indicate which link processor or which fitness function is supposed to be used. I suggest using string enumerations shared in src/cpp/commons.
Processors will be implemented in the Link Creation Agent, while fitness functions will be implemented in the Distributed Inference Control algorithm.
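A minimal sketch of such shared string enumerations, assuming the tags named later in this card (the namespace layout and constant names are suggestions, not existing code in src/cpp/commons):

```cpp
#include <string>

// Suggested shared identifiers for processors and fitness functions.
// Names are illustrative; only the tag values come from this card.
namespace commons {

// Link processors implemented in the Link Creation Agent.
namespace processors {
inline const std::string IMPLICATION_DEDUCTION = "IMPLICATION_DEDUCTION";
inline const std::string EQUIVALENCE_DEDUCTION = "EQUIVALENCE_DEDUCTION";
}  // namespace processors

// Fitness functions implemented in the Distributed Inference Control algorithm.
namespace fitness_functions {
inline const std::string IMPLICATION_OR_EQUIVALENCE_DEDUCTION =
    "IMPLICATION_OR_EQUIVALENCE_DEDUCTION";
}  // namespace fitness_functions

}  // namespace commons
```

Since both agents link against the same header, a mismatch between sender and receiver tags becomes impossible by construction.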
The following sequence diagram depicts how inference requests are supposed to unfold.
PROOF_OF_IMPLICATION_OR_EQUIVALENCE
Link Creation request
In fact, the Inference Agent issues multiple requests to the Link Creation Agent.
(1) Requests to create SATISFYING_SET and PATTERNS
Query:
LINK_TEMPLATE Expression 3
    NODE Symbol EVALUATION
    LINK_TEMPLATE Expression 2
        NODE Symbol PREDICATE
        VARIABLE P
    LINK_TEMPLATE Expression 2
        NODE Symbol CONCEPT
        VARIABLE C
Creation template:
LIST 2
    LINK_CREATE Expression 2 1
        NODE Symbol SATISFYING_SET
        P
        CUSTOM_FIELD truth_value 2
            strength 1.0
            confidence 1.0
    LINK_CREATE Expression 2 1
        NODE Symbol PATTERNS
        C
        CUSTOM_FIELD truth_value 2
            strength 1.0
            confidence 1.0
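The query and creation template above are flat token sequences, so one simple way for the Inference Agent to carry them is as vectors of tokens. A hedged sketch under that assumption (the `split()` helper is for illustration only and is not DAS code):

```cpp
#include <sstream>
#include <string>
#include <vector>

// Split a whitespace-separated token stream into a vector of tokens.
// Illustrative helper; not part of the actual DAS codebase.
std::vector<std::string> split(const std::string& text) {
    std::istringstream stream(text);
    std::vector<std::string> tokens;
    std::string token;
    while (stream >> token) tokens.push_back(token);
    return tokens;
}

// Tokenized form of the EVALUATION query shown above.
const std::vector<std::string> evaluation_query = split(
    "LINK_TEMPLATE Expression 3 "
    "NODE Symbol EVALUATION "
    "LINK_TEMPLATE Expression 2 NODE Symbol PREDICATE VARIABLE P "
    "LINK_TEMPLATE Expression 2 NODE Symbol CONCEPT VARIABLE C");
```

The arity numbers after each `Expression` keyword are what let the receiving agent rebuild the nesting from the flat stream.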
(2) Requests to create IMPLICATION
Query:
AND 3
    LINK_TEMPLATE Expression 2
        NODE Symbol SATISFYING_SET
        VARIABLE P1
    LINK_TEMPLATE Expression 2
        NODE Symbol SATISFYING_SET
        VARIABLE P2
    NOT
        OR
            LINK_TEMPLATE Expression 3
                NODE Symbol IMPLICATION
                LINK_TEMPLATE Expression 3
                    NODE Symbol EVALUATION
                    VARIABLE P1
                    VARIABLE C
                LINK_TEMPLATE Expression 3
                    NODE Symbol EVALUATION
                    VARIABLE P2
                    VARIABLE C
            LINK_TEMPLATE Expression 3
                NODE Symbol IMPLICATION
                LINK_TEMPLATE Expression 3
                    NODE Symbol EVALUATION
                    VARIABLE P2
                    VARIABLE C
                LINK_TEMPLATE Expression 3
                    NODE Symbol EVALUATION
                    VARIABLE P1
                    VARIABLE C
Processor: IMPLICATION_DEDUCTION
(3) Requests to create EQUIVALENCE
Query:
AND 3
    LINK_TEMPLATE Expression 2
        NODE Symbol PATTERNS
        VARIABLE C1
    LINK_TEMPLATE Expression 2
        NODE Symbol PATTERNS
        VARIABLE C2
    NOT
        OR
            LINK_TEMPLATE Expression 3
                NODE Symbol EQUIVALENCE
                LINK_TEMPLATE Expression 3
                    NODE Symbol EVALUATION
                    VARIABLE P
                    VARIABLE C1
                LINK_TEMPLATE Expression 3
                    NODE Symbol EVALUATION
                    VARIABLE P
                    VARIABLE C2
            LINK_TEMPLATE Expression 3
                NODE Symbol EQUIVALENCE
                LINK_TEMPLATE Expression 3
                    NODE Symbol EVALUATION
                    VARIABLE P
                    VARIABLE C2
                LINK_TEMPLATE Expression 3
                    NODE Symbol EVALUATION
                    VARIABLE P
                    VARIABLE C1
Processor: EQUIVALENCE_DEDUCTION
Distributed Inference Algorithm parameters
This is the query passed to the distributed inference control algorithm:
CHAIN
    <handle 1>
    <handle 2>
    <max proof length>
    OR
        LINK_TEMPLATE Expression 3
            NODE Symbol IMPLICATION
            _FIRST_
            _SECOND_
        LINK_TEMPLATE Expression 3
            NODE Symbol EQUIVALENCE
            _FIRST_
            _SECOND_
The tag for the fitness function is "IMPLICATION_OR_EQUIVALENCE_DEDUCTION".
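Building this CHAIN query amounts to filling in the two handles and the maximum proof length in front of the fixed OR clause. A hypothetical builder, sketched under that assumption (the function name and token-vector representation are illustrative, not the actual DAS API):

```cpp
#include <string>
#include <vector>

// Hypothetical builder for the CHAIN query shown above. The two handles and
// the maximum proof length vary per request; the OR clause with the
// IMPLICATION/EQUIVALENCE templates is fixed.
std::vector<std::string> build_chain_query(const std::string& handle1,
                                           const std::string& handle2,
                                           int max_proof_length) {
    return {
        "CHAIN", handle1, handle2, std::to_string(max_proof_length),
        "OR",
        "LINK_TEMPLATE", "Expression", "3",
        "NODE", "Symbol", "IMPLICATION", "_FIRST_", "_SECOND_",
        "LINK_TEMPLATE", "Expression", "3",
        "NODE", "Symbol", "EQUIVALENCE", "_FIRST_", "_SECOND_",
    };
}

// Shared tag naming the fitness function for this request type.
inline const std::string FITNESS_TAG = "IMPLICATION_OR_EQUIVALENCE_DEDUCTION";
```

The query and the fitness tag would then be handed together to the Distributed Inference Control algorithm in step 2.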
PROOF_OF_IMPLICATION and PROOF_OF_EQUIVALENCE
These types of inference requests will be detailed in another card.