This Is Auburn | Electronic Theses and Dissertations


All You Need is Tensor Decomposition: A Better Understanding of Sparse Tensors


Metadata Field | Value | Language
dc.contributor.advisor | Ku, Wei-Shinn | -
dc.contributor.author | Hui, Bo | -
dc.date.accessioned | 2023-04-26T13:51:08Z | -
dc.date.available | 2023-04-26T13:51:08Z | -
dc.date.issued | 2023-04-26 | -
dc.identifier.uri | https://etd.auburn.edu//handle/10415/8636 | -
dc.description.abstract | Tensors are widely employed in data science to represent multi-dimensional information. Because many real-world tensors are low-rank, predicting the unobserved entries of a partially observed tensor has attracted much interest in applications such as knowledge base completion and recommendation. Tensor decomposition is a widely used method for these problems. However, existing works decompose the tensor into Euclidean vectors and assume a multilinear relationship between tensor entries, whereas real-world input tensors tend to exhibit complex factor interactions. We propose to model these complex interactions for a better understanding of sparse tensors. We first study whether side information can improve the performance of tensor decomposition. Specifically, we design a neural tensor model to estimate the values in a user-item-time tensor, with a regularization loss head based on a novel social Hausdorff distance function to optimize the reconstructed tensor. Despite the success of neural tensor models defined in Euclidean space, recent work shows that hyperbolic space is roomier than Euclidean space. To leverage the power of hyperbolic vectors, we propose to decompose tensors in hyperbolic space instead of Euclidean space. Because the most popular optimization tools, such as SGD (Stochastic Gradient Descent), have not been generalized to hyperbolic space, we design an adaptive optimization algorithm tailored to the distinctive properties of the hyperbolic manifold. In addition, we raise a new question: can we model the interaction between latent factors with neural ODEs (Ordinary Differential Equations)? This problem is particularly interesting because we can parameterize the derivative of the latent factors with a neural network. We design a neural ODE tensor model to optimize the decomposed factors. Concretely, we aggregate the decomposed factors and feed them into a neural ODE model to reconstruct the input tensor, and an ODE solver is introduced to minimize the difference between the original and reconstructed tensors. We experiment with multiple real-world datasets spanning diverse domains to demonstrate the effectiveness of the proposed methods. | en_US
dc.rights | EMBARGO_NOT_AUBURN | en_US
dc.subject | Computer Science and Software Engineering | en_US
dc.title | All You Need is Tensor Decomposition: A Better Understanding of Sparse Tensors | en_US
dc.type | PhD Dissertation | en_US
dc.embargo.length | MONTHS_WITHHELD:36 | en_US
dc.embargo.status | EMBARGOED | en_US
dc.embargo.enddate | 2026-04-26 | en_US
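
The multilinear decomposition the abstract takes as its baseline can be made concrete with a short sketch. The following is a minimal CP-style (CANDECOMP/PARAFAC) factorization fit by SGD on the observed entries of a sparse three-way tensor; it illustrates the general technique, not the dissertation's code, and the dimensions, rank, synthetic observations, and variable names are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
I, J, K, R = 30, 25, 20, 5   # assumed dims (e.g. user x item x time) and rank

# Hypothetical sparse observations: (i, j, k, value) tuples.
obs = [(rng.integers(I), rng.integers(J), rng.integers(K), rng.normal())
       for _ in range(2000)]

# One factor matrix per mode; entry (i, j, k) is modeled as the multilinear
# product sum_r U[i, r] * V[j, r] * W[k, r].
U = rng.normal(scale=0.1, size=(I, R))
V = rng.normal(scale=0.1, size=(J, R))
W = rng.normal(scale=0.1, size=(K, R))

lr = 0.05
for epoch in range(20):
    for i, j, k, x in obs:
        err = np.sum(U[i] * V[j] * W[k]) - x   # prediction minus observation
        # Gradients of 0.5 * err^2 with respect to each factor row.
        gU, gV, gW = err * V[j] * W[k], err * U[i] * W[k], err * U[i] * V[j]
        U[i] -= lr * gU
        V[j] -= lr * gV
        W[k] -= lr * gW
```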
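The "social Hausdorff distance" regularizer is novel to the dissertation and its definition is not given in the abstract. The sketch below shows only the standard symmetric Hausdorff distance between two point sets, which such a regularizer would presumably build on; the variant described in the dissertation is not reconstructed here.

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between point sets A (n x d) and B (m x d):
    d_H(A, B) = max( max_a min_b ||a - b||, max_b min_a ||a - b|| ).
    """
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # n x m pairwise distances
    return max(D.min(axis=1).max(), D.min(axis=0).max())

# Toy usage: distance between two small 2-D point sets.
A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
print(hausdorff(A, B))  # 1.4142...: the point (2, 1) is sqrt(2) from its nearest neighbor in A
```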
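For the hyperbolic-space decomposition, the abstract notes that SGD has not been generalized to hyperbolic space. A common approach, shown here only as an assumed stand-in for the dissertation's adaptive algorithm, is Riemannian SGD on the Poincaré ball: the Euclidean gradient is rescaled by the inverse of the ball's metric and the iterate is retracted back inside the ball after each step.

```python
import numpy as np

def rsgd_step(x, euclid_grad, lr=0.01, eps=1e-5):
    """One Riemannian SGD step on the Poincare ball.

    The ball's metric rescales the Euclidean gradient by (1 - ||x||^2)^2 / 4
    (the inverse of the squared conformal factor); after the update the point
    is retracted back inside the open unit ball so it stays on the manifold.
    """
    scale = (1.0 - np.dot(x, x)) ** 2 / 4.0
    x_new = x - lr * scale * euclid_grad
    norm = np.linalg.norm(x_new)
    if norm >= 1.0:                      # retraction: pull back just inside the ball
        x_new = x_new / norm * (1.0 - eps)
    return x_new

# Toy usage: drive a point toward a fixed target inside the ball by feeding in
# the Euclidean gradient of ||x - target||^2.
x, target = np.array([0.1, 0.2]), np.array([0.6, -0.3])
for _ in range(500):
    x = rsgd_step(x, 2.0 * (x - target), lr=0.5)
print(x)  # approaches [0.6, -0.3]
```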
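Finally, the neural ODE tensor model described in the abstract can be sketched as: aggregate the factor rows for an observed index into an initial state, integrate a learned derivative with an ODE solver, and read out the reconstructed entry. The sketch below is a minimal PyTorch illustration under assumed choices: concatenation as the aggregation, a small MLP as the derivative network, and a fixed-step Euler solver standing in for whatever solver the dissertation uses.

```python
import torch
import torch.nn as nn

R = 5  # assumed rank of each factor matrix

class ODEFunc(nn.Module):
    """Parameterizes the derivative of the latent state: dh/dt = f(h; theta)."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 32), nn.Tanh(), nn.Linear(32, dim))
    def forward(self, t, h):
        return self.net(h)

def euler_odeint(func, h0, t0=0.0, t1=1.0, steps=10):
    """Fixed-step Euler solver; every step is differentiable, so gradients
    flow back to the ODE parameters through the integration."""
    h, dt = h0, (t1 - t0) / steps
    for s in range(steps):
        h = h + dt * func(t0 + s * dt, h)
    return h

func, readout = ODEFunc(3 * R), nn.Linear(3 * R, 1)
opt = torch.optim.Adam(list(func.parameters()) + list(readout.parameters()), lr=1e-2)

# Hypothetical minibatch: factor rows u_i, v_j, w_k for 64 observed indices and
# their observed values x. In the full model the factor matrices themselves
# would be learned jointly; they are fixed random here to keep the sketch short.
u, v, w, x = torch.randn(64, R), torch.randn(64, R), torch.randn(64, R), torch.randn(64)

for step in range(100):
    h1 = euler_odeint(func, torch.cat([u, v, w], dim=1))   # aggregate, then integrate
    loss = ((readout(h1).squeeze(-1) - x) ** 2).mean()     # reconstruction error
    opt.zero_grad()
    loss.backward()
    opt.step()
```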
