Relational dependency network

Relational dependency networks (RDNs) are graphical models which extend dependency networks to account for relational data. Relational data is data organized into one or more tables, which are cross-related through common fields. A relational database is the canonical example of a system that serves to maintain relational data. A relational dependency network can be used to characterize the knowledge contained in a database.


Introduction

Relational dependency networks (RDNs) aim to represent the joint probability distribution over the variables of a dataset in the relational domain. They are based on dependency networks (DNs) and extend them to the relational setting. RDNs have an efficient learning method: the parameters can be learned independently, that is, the conditional probability distributions can be estimated separately for each variable. Since this independent learning can introduce inconsistencies, RDNs, like DNs, use Gibbs sampling to recover the joint distribution.

Unlike dependency networks, RDNs require three graphs for their full representation.

  • Data graph: a graph whose nodes represent objects from the dataset and whose edges represent dependencies between objects. Each object and each edge is assigned a type, and each object has a set of attributes.
  • Model graph: a higher-level graph defined at the level of types. Its nodes represent attributes of a given type, and its edges represent dependencies between attributes of the same type or of different types. Each node is associated with a probability distribution conditioned on its parent nodes. The model graph makes no assumptions about any particular dataset, which makes it general enough to support different data represented by the data graph. It is therefore possible to learn the structure and conditional probability distributions of the model graph from one dataset, and then generate the inference graph by applying the model graph to a data graph representing another dataset.
  • Inference graph: the graph generated from the data graph and the model graph together in a process called roll out. The inference graph is typically larger than the data graph and model graph, because each attribute of each object becomes a node in the inference graph, carrying the characteristics of the corresponding attribute in the model graph.

In summary, the data graph guides how the model graph will be rolled out to generate the inference graph.
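The roll-out process described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration: the types, attributes, and relation names (`Person`, `smokes`, `friend_of`) are invented for the example and do not come from any specific RDN implementation.

```python
# Hypothetical sketch of roll out: the type-level model graph is
# instantiated over a concrete data graph to build the inference graph.

# Data graph: typed objects with attributes, plus typed edges.
data_graph = {
    "objects": {
        "alice": {"type": "Person", "attrs": {"smokes": True}},
        "bob":   {"type": "Person", "attrs": {"smokes": None}},
    },
    "edges": [("alice", "bob", "friend_of")],
}

# Model graph: type-level dependencies, e.g. a Person's 'smokes'
# attribute depends on the 'smokes' attribute of their friends.
model_graph = {
    ("Person", "smokes"): [("friend_of", "Person", "smokes")],
}

def roll_out(data_graph, model_graph):
    """Create one inference-graph node per (object, attribute) pair,
    wiring parents according to the type-level model graph."""
    inference_graph = {}
    for obj, info in data_graph["objects"].items():
        for attr in info["attrs"]:
            parents = []
            for rel, _, parent_attr in model_graph.get((info["type"], attr), []):
                for src, dst, edge_type in data_graph["edges"]:
                    if edge_type == rel and dst == obj:
                        parents.append((src, parent_attr))
                    if edge_type == rel and src == obj:
                        parents.append((dst, parent_attr))
            inference_graph[(obj, attr)] = parents
    return inference_graph

print(roll_out(data_graph, model_graph))
# e.g. the node ('bob', 'smokes') gets the parent ('alice', 'smokes')
```

Note how the inference graph grows with the data: every object contributes one node per attribute, which is why it is usually larger than either of the other two graphs.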

RDN Learning

The learning method for an RDN is similar to that of DNs: the conditional probability distribution for each variable can be learned independently. However, only conditional relational learners can be used for parameter estimation in RDNs; the learners used by DNs, such as decision trees or logistic regression, do not work for RDNs. Neville, J., & Jensen, D. (2007) [1] present experimental results comparing RDNs learned with relational Bayesian classifiers against RDNs learned with relational probability trees. Natarajan et al. (2012) [2] use a series of regression models to represent the conditional distributions.

This learning method makes the RDN a model with an efficient learning time. However, it also makes RDNs susceptible to structural or numerical inconsistencies. If the method for estimating conditional probability distributions uses feature selection, one variable may find a dependency on a second variable while the second does not find the reverse dependency; in this case, the RDN is structurally inconsistent. If the joint distribution does not sum to one, due to the approximations introduced by independent learning, there is a numerical inconsistency. Such inconsistencies can, however, be bypassed during the inference step, as described in the RDN inference section.
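The independent learning step can be illustrated with a small sketch. This is not the API of any RDN system; the variable names and toy data are assumptions, and a simple count-based estimator stands in for the conditional relational learners the text describes.

```python
# Minimal sketch of independent CPD learning: each variable's conditional
# distribution is estimated separately, ignoring the other estimates.
from collections import Counter, defaultdict

# Toy relational training data: each row pairs a person's 'smokes' value
# with an aggregated neighbour feature (here: whether any friend smokes).
rows = [
    {"smokes": True,  "friend_smokes": True},
    {"smokes": True,  "friend_smokes": True},
    {"smokes": False, "friend_smokes": False},
    {"smokes": False, "friend_smokes": True},
]

def estimate_cpd(rows, target, parent):
    """Estimate P(target | parent) independently of any other CPD."""
    counts = defaultdict(Counter)
    for row in rows:
        counts[row[parent]][row[target]] += 1
    return {
        pv: {tv: n / sum(c.values()) for tv, n in c.items()}
        for pv, c in counts.items()
    }

# Each CPD is learned on its own; nothing forces the resulting set of
# CPDs to be consistent with a single joint distribution, which is the
# source of the inconsistencies discussed above.
cpd_smokes = estimate_cpd(rows, "smokes", "friend_smokes")
print(cpd_smokes[True])  # ≈ {True: 0.67, False: 0.33}
```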

RDN Inference

RDN inference begins with the construction of the inference graph through a process called roll out, in which the model graph is rolled out over the data graph. Gibbs sampling can then be used to recover the joint probability distribution.
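A degenerate single-variable case of the Gibbs sampling step can be sketched as follows. The conditional probability values and the two-person graph are illustrative assumptions, not learned from data; in a real RDN every unobserved node of the inference graph would be resampled in turn from its local conditional distribution.

```python
# Hedged sketch of Gibbs sampling over a rolled-out inference graph:
# each unobserved node is repeatedly resampled from its conditional
# distribution given the current values of its neighbours.
import random

random.seed(0)

# P(smokes | any friend smokes) -- a hypothetical learned CPD.
def p_smokes_given(friend_smokes):
    return 0.7 if friend_smokes else 0.2

def gibbs_sample(n_iters=10_000, burn_in=1_000):
    """Estimate P(bob smokes) given that alice smokes."""
    alice = True           # observed evidence, held fixed
    bob = False            # arbitrary initial state
    samples = []
    for it in range(n_iters):
        # Resample each unobserved variable from its local conditional.
        bob = random.random() < p_smokes_given(alice)
        if it >= burn_in:   # discard burn-in samples
            samples.append(bob)
    return sum(samples) / len(samples)

print(gibbs_sample())  # ≈ 0.7
```

Because each resampling step only consults local conditionals, the procedure sidesteps the structural and numerical inconsistencies of the independently learned CPDs.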

Applications

RDNs have been applied in many real-world domains. Their main advantage is the ability to use relational information to improve the model's performance. Diagnosis, forecasting, automated vision, sensor fusion, and manufacturing control are some examples of problems to which RDNs have been applied.

Implementations

Some available RDN implementations:

  • BoostSRL:[3] A system specialized in gradient-based boosting for learning different types of statistical relational learning models, including relational dependency networks. For more details and notation, see Natarajan et al. (2011).[2]

References

  1. Neville, Jennifer; Jensen, David (2007). "Relational Dependency Networks" (PDF). Journal of Machine Learning Research. 8: 653–692. Retrieved 9 February 2020.
  2. Natarajan, Sriraam; Khot, Tushar; Kersting, Kristian; Gutmann, Bernd; Shavlik, Jude (10 May 2011). "Gradient-based boosting for statistical relational learning: The relational dependency network case" (PDF). Machine Learning. 86 (1): 25–56. doi:10.1007/s10994-011-5244-9. Retrieved 9 February 2020.
  3. Lab, StARLinG. "BoostSRL Wiki". StARLinG. Retrieved 9 February 2020.
This article is issued from Wikipedia. The text is licensed under Creative Commons - Attribution - Sharealike. Additional terms may apply for the media files.