An innovative linear unsupervised space adjustment by keeping low-level spatial data structure


Abstract

We introduce a novel objective function for the space adjustment problem when no supervision is available. The objective function minimizes the difference between the distributions of the transformed original and test-data spaces while preserving the local structural information present in the original space. We propose two techniques to preserve this structural information: (a) identifying pairs of examples that are as close as possible in the original space and minimizing the distance between these pairs after the transformation, and (b) preserving, during the transformation, the naturally occurring clusters present in the original space. The cost function together with its constraints yields a nonlinear objective function, which is used to estimate the weight matrix. An iterative framework is employed to optimize this objective function, providing a suboptimal solution. Using the orthogonality constraint, the optimization task is then reformulated on the Stiefel manifold. Empirical evaluation on real-world datasets shows that the proposed method outperforms recently published state-of-the-art methods.
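To make the overall idea concrete, the sketch below is a minimal, hypothetical illustration (not the authors' objective or implementation): it pairs a simple moment-matching distribution term with a pair-preservation term, and shows one common way to enforce an orthogonality (Stiefel) constraint. The function names, the moment-matching loss, and the QR retraction are all illustrative assumptions.

```python
import numpy as np

def adjustment_cost(W_x, W_y, X, Y, close_pairs, lam=1.0):
    """Toy illustration (not the paper's exact cost): match the first two
    moments of the transformed spaces and keep pre-identified close pairs
    of original-space examples close after transformation."""
    Xt, Yt = X @ W_x, Y @ W_y                      # transformed original / test-data samples
    # distribution-matching term: difference of means and covariances
    dist = np.linalg.norm(Xt.mean(0) - Yt.mean(0)) ** 2 \
         + np.linalg.norm(np.cov(Xt.T) - np.cov(Yt.T)) ** 2
    # local-structure term: pairs that were close in the original space
    # should stay close after the transformation
    local = sum(np.linalg.norm(Xt[i] - Xt[j]) ** 2 for i, j in close_pairs)
    return dist + lam * local

def project_to_stiefel(W):
    """Orthogonality constraint (Stiefel manifold): one standard way to pull a
    gradient-updated W back onto the manifold is a QR-based retraction."""
    Q, R = np.linalg.qr(W)
    return Q * np.sign(np.diag(R))                 # fix the sign ambiguity of QR
```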


Author information

Corresponding author

Correspondence to Vahideh Rezaie.

Appendices

Appendix A: Nonparametric density-based clustering (NPDBC)

A.1 NPDBC

The clustering technique is based on computing the density distribution of the dataset in the original space. It is a nonparametric density-based clustering (NPDBC) method: it partitions a given dataset by estimating the data density with Parzen window estimation, where the window size is set to \( N_X^{-0.33} \) for each attribute and \( N_X \) denotes the number of samples in the original space. The data density distribution is first projected onto the direction of greatest spread. The peaks of the resulting probability density distribution (PDD) are then detected, and all samples located around a peak are assigned to the same cluster, so the number of clusters equals the number of peaks in the PDD. The main reason for employing the NPDBC technique is to avoid having to specify the number of clusters in the dataset in advance.
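As an illustration only, the following sketch implements clustering of this flavor under simplifying assumptions (a Gaussian Parzen kernel, peak detection on a fixed grid, and a single bandwidth applied to the 1-D projection rather than per attribute); it is not the authors' exact NPDBC procedure.

```python
import numpy as np

def npdbc(X, n_grid=200):
    """Illustrative nonparametric density-based clustering: project onto the
    direction of greatest spread, estimate the density with a Parzen window of
    size N**(-0.33), and assign each sample to its nearest density peak."""
    n = X.shape[0]
    h = n ** (-0.33)                                   # window size from the appendix
    # direction of greatest spread = leading eigenvector of the covariance
    _, vecs = np.linalg.eigh(np.cov(X.T))
    proj = X @ vecs[:, -1]
    # Parzen (Gaussian-kernel) density estimate on a 1-D grid
    grid = np.linspace(proj.min(), proj.max(), n_grid)
    dens = np.exp(-0.5 * ((grid[:, None] - proj[None, :]) / h) ** 2).sum(1)
    # peaks = local maxima of the estimated density (one cluster per peak)
    peaks = grid[1:-1][(dens[1:-1] > dens[:-2]) & (dens[1:-1] > dens[2:])]
    # each sample joins the cluster of its nearest peak
    return np.argmin(np.abs(proj[:, None] - peaks[None, :]), axis=1)
```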

Appendix B: List of symbols used in the manuscript

See Table 6.

Table 6 List of symbols used in the manuscript

Appendix C: Analytical solutions for the iterative method in Algorithm 1

Substituting \( W_{X} = U_{X}\Psi _{X} + U_{Y}\Omega _{X} \) in the constraint \( W_{X}^{\text{T}} C_{X} W_{X} = I \) given in Eqs. (6) and (10) gives:

$$ \begin{aligned} W_{X}^{\text{T}} C_{X} W_{X} = I & \Rightarrow \left( U_{X}\Psi_{X} + U_{Y}\Omega_{X} \right)^{\text{T}} U_{X}\Lambda_{X}^{2} U_{X}^{\text{T}} \left( U_{X}\Psi_{X} + U_{Y}\Omega_{X} \right) = I \\ & \Rightarrow \left( \Omega_{X}^{\text{T}} U_{Y}^{\text{T}} U_{X} + \Psi_{X}^{\text{T}} \right) \Lambda_{X}\Lambda_{X}^{\text{T}} \left( \Psi_{X} + U_{X}^{\text{T}} U_{Y}\Omega_{X} \right) = I \\ & \Rightarrow \left( \Omega_{X}^{\text{T}} U_{Y}^{\text{T}} U_{X}\Lambda_{X} + \Psi_{X}^{\text{T}}\Lambda_{X} \right) \left( \Lambda_{X}^{\text{T}}\Psi_{X} + \Lambda_{X}^{\text{T}} U_{X}^{\text{T}} U_{Y}\Omega_{X} \right) = I \\ & \Rightarrow \left( \Lambda_{X}\Psi_{X} + \Lambda_{X} U_{X}^{\text{T}} U_{Y}\Omega_{X} \right)^{\text{T}} \left( \Lambda_{X}\Psi_{X} + \Lambda_{X} U_{X}^{\text{T}} U_{Y}\Omega_{X} \right) = I \\ \end{aligned} $$

Then, from above,

$$ \begin{aligned} & \left( {\Lambda _{X}\Psi _{X} +\Lambda _{X} U_{X}^{\text{T}} U_{Y}\Omega _{X} } \right) =\Upsilon _{X} \\ & \quad \Rightarrow\Psi _{X} =\Lambda _{X}^{ - 1}\Upsilon _{X} - U_{X}^{\text{T}} U_{Y}\Omega _{X} \\ \end{aligned} $$
(C1)

where \( \Upsilon_{X} \) is an orthogonal matrix. Next, we substitute \( W_X \) into the term \( W_{X}^{\text{T}} \tilde{P}_{X} W_{X} \) of \( \text{Cost}(W_X, W_Y) \) in Eq. (10) (or Eq. 6) to obtain:

$$ \begin{aligned} W_{X}^{\text{T}} \tilde{P}_{X} W_{X} & = \left( U_{X}\Psi_{X} + U_{Y}\Omega_{X} \right)^{\text{T}} \tilde{P}_{X}^{1/2} \tilde{P}_{X}^{{\text{T}}/2} \left( U_{X}\Psi_{X} + U_{Y}\Omega_{X} \right) \\ & = \left( \tilde{P}_{X}^{{\text{T}}/2} U_{X}\Psi_{X} + \tilde{P}_{X}^{{\text{T}}/2} U_{Y}\Omega_{X} \right)^{\text{T}} \left( \tilde{P}_{X}^{{\text{T}}/2} U_{X}\Psi_{X} + \tilde{P}_{X}^{{\text{T}}/2} U_{Y}\Omega_{X} \right) \\ \therefore \frac{\partial }{\partial\Omega_{X} }\,{\text{tr}}\left( \tfrac{1}{2} W_{X}^{\text{T}} \tilde{P}_{X} W_{X} \right) & = \left( \tilde{P}_{X}^{{\text{T}}/2} U_{Y} \right)^{\text{T}} \left( \tilde{P}_{X}^{{\text{T}}/2} U_{X}\Psi_{X} + \tilde{P}_{X}^{{\text{T}}/2} U_{Y}\Omega_{X} \right) \\ \end{aligned} $$
(C2)

where \( \tilde{P}_{X} = \left( I + \mu_{X}^{\text{T}} \mu_{X} + X^{\text{T}} C^{P^{X}} X - S^{X} \right) \) in Eqs. (6) and (10). The cost function used in Eq. (10) (similar to that in Eq. 6) is:

$$ {\text{Cost}}\left( {W_{X} ,W_{Y} } \right) = \frac{1}{2}{\text{trace}}\left( {W_{X}^{\text{T}} \tilde{P}_{X} W_{X} } \right) + \frac{1}{2}{\text{trace}}\left( {W_{Y}^{\text{T}} \tilde{P}_{Y} W_{Y} } \right) $$

Taking the derivative of the above with respect to \( \Omega_X \) and substituting from Eq. (C2), we get:

$$ \frac{\partial }{\partial\Omega_{X} }\,{\text{Cost}}\left( W_{X}, W_{Y} \right) = \left( \tilde{P}_{X}^{{\text{T}}/2} U_{Y} \right)^{\text{T}} \left( \tilde{P}_{X}^{{\text{T}}/2} U_{X}\Psi_{X} + \tilde{P}_{X}^{{\text{T}}/2} U_{Y}\Omega_{X} \right) $$
(C3)

To get the optimal value of \( \Omega _{\text{X}} \), we use \( \frac{\partial }{{\partial\Omega _{X} }}{\text{Cost}}\left( {W_{X} ,W_{Y} } \right) = 0 \). This gives

$$ \begin{aligned} & \left( \tilde{P}_{X}^{{\text{T}}/2} U_{Y} \right)^{\text{T}} \left( \tilde{P}_{X}^{{\text{T}}/2} U_{X}\Psi_{X} + \tilde{P}_{X}^{{\text{T}}/2} U_{Y}\Omega_{X} \right) = 0 \\ & \quad \Rightarrow \Omega_{X} = - \left( \tilde{P}_{X}^{{\text{T}}/2} U_{Y} \right)^{\dag } \left( \tilde{P}_{X}^{{\text{T}}/2} U_{X} \right)\Psi_{X} \\ \end{aligned} $$
(C4)
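Numerically, the update in Eq. (C4) is a plain pseudoinverse computation. The snippet below, with placeholder matrices standing in for \( \tilde{P}_X^{\text{T}/2} \), \( U_X \), \( U_Y \) and \( \Psi_X \), illustrates it and checks that the derivative in Eq. (C3) vanishes at the resulting \( \Omega_X \).

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
# placeholder quantities standing in for P~_X^{T/2}, U_X, U_Y and Psi_X
P_half_T = rng.standard_normal((d, d))                 # plays the role of P~_X^{T/2}
U_X, _ = np.linalg.qr(rng.standard_normal((d, d)))
U_Y, _ = np.linalg.qr(rng.standard_normal((d, d)))
Psi_X = rng.standard_normal((d, d))

# Eq. (C4): Omega_X = -(P~_X^{T/2} U_Y)^+ (P~_X^{T/2} U_X) Psi_X
Omega_X = -np.linalg.pinv(P_half_T @ U_Y) @ (P_half_T @ U_X) @ Psi_X

# with this Omega_X the derivative in Eq. (C3) vanishes (up to round-off)
grad = (P_half_T @ U_Y).T @ (P_half_T @ U_X @ Psi_X + P_half_T @ U_Y @ Omega_X)
print(np.allclose(grad, 0, atol=1e-6))
```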

Substituting \( W_{Y} = U_{Y}\Psi_{Y} + U_{X}\Omega_{Y} \) in the constraint \( W_{Y}^{\text{T}} C_{Y} W_{Y} = I \) given in Eqs. (6) and (10) gives:

$$ \begin{aligned} W_{Y}^{\text{T}} C_{Y} W_{Y} = I & \Rightarrow \left( U_{Y}\Psi_{Y} + U_{X}\Omega_{Y} \right)^{\text{T}} U_{Y}\Lambda_{Y}^{2} U_{Y}^{\text{T}} \left( U_{Y}\Psi_{Y} + U_{X}\Omega_{Y} \right) = I \\ & \Rightarrow \left( \Omega_{Y}^{\text{T}} U_{X}^{\text{T}} U_{Y} + \Psi_{Y}^{\text{T}} \right)\Lambda_{Y}\Lambda_{Y}^{\text{T}} \left( \Psi_{Y} + U_{Y}^{\text{T}} U_{X}\Omega_{Y} \right) = I \\ & \Rightarrow \left( \Omega_{Y}^{\text{T}} U_{X}^{\text{T}} U_{Y}\Lambda_{Y} + \Psi_{Y}^{\text{T}}\Lambda_{Y} \right)\left( \Lambda_{Y}^{\text{T}}\Psi_{Y} + \Lambda_{Y}^{\text{T}} U_{Y}^{\text{T}} U_{X}\Omega_{Y} \right) = I \\ & \Rightarrow \left( \Lambda_{Y}\Psi_{Y} + \Lambda_{Y} U_{Y}^{\text{T}} U_{X}\Omega_{Y} \right)^{\text{T}} \left( \Lambda_{Y}\Psi_{Y} + \Lambda_{Y} U_{Y}^{\text{T}} U_{X}\Omega_{Y} \right) = I \\ \end{aligned} $$

Then, from above,

$$ \left( {\Lambda _{Y}\Psi _{Y} +\Lambda _{Y} U_{Y}^{\text{T}} U_{X}\Omega _{Y} } \right) =\Upsilon _{Y} \Rightarrow\Psi _{Y} =\Lambda _{Y}^{ - 1}\Upsilon _{Y} - U_{Y}^{T} U_{X}\Omega _{Y} $$
(C5)

where \( \Upsilon_{Y} \) is an orthogonal matrix. Next, we substitute \( W_Y \) into the term \( W_{Y}^{\text{T}} \tilde{P}_{Y} W_{Y} \) of \( \text{Cost}(W_X, W_Y) \) in Eq. (10) (or Eq. 6) to obtain:

$$ \begin{aligned} W_{Y}^{\text{T}} \tilde{P}_{Y} W_{Y} & = \left( U_{Y}\Psi_{Y} + U_{X}\Omega_{Y} \right)^{\text{T}} \tilde{P}_{Y}^{1/2} \tilde{P}_{Y}^{{\text{T}}/2} \left( U_{Y}\Psi_{Y} + U_{X}\Omega_{Y} \right) \\ & = \left( \tilde{P}_{Y}^{{\text{T}}/2} U_{Y}\Psi_{Y} + \tilde{P}_{Y}^{{\text{T}}/2} U_{X}\Omega_{Y} \right)^{\text{T}} \left( \tilde{P}_{Y}^{{\text{T}}/2} U_{Y}\Psi_{Y} + \tilde{P}_{Y}^{{\text{T}}/2} U_{X}\Omega_{Y} \right) \\ \therefore \frac{\partial }{\partial\Omega_{Y} }\,{\text{tr}}\left( \tfrac{1}{2} W_{Y}^{\text{T}} \tilde{P}_{Y} W_{Y} \right) & = \left( \tilde{P}_{Y}^{{\text{T}}/2} U_{X} \right)^{\text{T}} \left( \tilde{P}_{Y}^{{\text{T}}/2} U_{Y}\Psi_{Y} + \tilde{P}_{Y}^{{\text{T}}/2} U_{X}\Omega_{Y} \right) \\ \end{aligned} $$
(C6)

where \( \tilde{P}_{Y} = \left( I + \mu_{Y}^{\text{T}} \mu_{Y} + Y^{\text{T}} C^{P^{Y}} Y - S^{Y} \right) \) in Eqs. (6) and (10). With the same analysis as for Eq. (C4), we have:

$$ \Rightarrow\Omega _{Y} = - \left( {\tilde{P}_{Y}^{{{\text{T}}/2}} U_{X} } \right)^{\dag } \left( {\tilde{P}_{Y}^{{{\text{T}}/2}} U_{Y} } \right)\Psi _{Y} $$
(C7)

\( \Psi_X \) (or \( \Psi_Y \)) has been expressed in terms of the matrix variable \( \Upsilon_X \) (or \( \Upsilon_Y \)) in Eq. (C1) (or Eq. C5). Next, we obtain the \( \Psi_X \) (or \( \Psi_Y \)) of least norm while treating \( \Omega_X \) (or \( \Omega_Y \)) as a constant. For this purpose, we find the matrix \( \Upsilon_X \) (or \( \Upsilon_Y \)) that minimizes \( {\text{tr}}\left( \Psi_X^{\text{T}}\Psi_X \right) \) (or \( {\text{tr}}\left( \Psi_Y^{\text{T}}\Psi_Y \right) \)). This is shown below:

$$ \begin{aligned} {\text{tr}}\left( \Psi_{X}^{\text{T}}\Psi_{X} \right) & = {\text{tr}}\left( \left( \Lambda_{X}^{-1}\Upsilon_{X} - U_{X}^{\text{T}} U_{Y}\Omega_{X} \right)^{\text{T}} \left( \Lambda_{X}^{-1}\Upsilon_{X} - U_{X}^{\text{T}} U_{Y}\Omega_{X} \right) \right) \\ & = {\text{tr}}\left( \left( \Upsilon_{X}^{\text{T}}\Lambda_{X}^{-1} - \Omega_{X}^{\text{T}} U_{Y}^{\text{T}} U_{X} \right)\left( \Lambda_{X}^{-1}\Upsilon_{X} - U_{X}^{\text{T}} U_{Y}\Omega_{X} \right) \right) \\ & = - 2\,{\text{tr}}\left( \Omega_{X}^{\text{T}} U_{Y}^{\text{T}} U_{X}\Lambda_{X}^{-1}\Upsilon_{X} \right) + {\text{tr}}\left( \Upsilon_{X}^{\text{T}}\Lambda_{X}^{-2}\Upsilon_{X} \right) + {\text{tr}}\left( \Omega_{X}^{\text{T}}\Omega_{X} \right) \\ \end{aligned} $$
$$ \therefore \frac{\partial }{{\partial\Omega _{X} }}{\text{trace}}\left( {\Psi _{X}^{\text{T}}\Psi _{X} } \right) = - 2\Upsilon _{X}\Lambda _{X}^{ - 1} U_{X}^{\text{T}} U_{Y} + 2\Omega _{X} = 0 $$
$$ \Rightarrow\Upsilon _{X} =\Lambda _{X} U_{X}^{\text{T}} U_{Y}\Omega _{X}^{\text{T}} C_{Y}^{{ - {\text{T}}/2}} $$
(C8)
$$ \Rightarrow\Upsilon _{Y} =\Lambda _{Y} U_{Y}^{\text{T}} U_{X}\Omega _{Y}^{\text{T}} C_{X}^{{ - {\text{T}}/2}} $$
(C9)

Equations (C1), (C4), (C5), (C7), (C8) and (C9) are used as Eqs. (11)–(13) to iteratively obtain the suboptimal value of W in Algorithm 1.
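Purely as a schematic illustration (the initialization and exact update order belong to Algorithm 1 in the main text and are assumptions here), one X-side pass over Eqs. (C8), (C1) and (C4) could be transcribed as follows; the Y-side pass over Eqs. (C9), (C5) and (C7) is analogous.

```python
import numpy as np

def x_side_round(Omega_X, U_X, U_Y, Lam_X, P_half_T, C_Y_inv_half_T):
    """One illustrative X-side pass over Eqs. (C8), (C1) and (C4).
    All arguments are square d-by-d arrays; Lam_X is the diagonal eigenvalue
    matrix of C_X, P_half_T stands for P~_X^{T/2}, and C_Y_inv_half_T stands
    for C_Y^{-T/2}."""
    # Eq. (C8): Upsilon_X from the current Omega_X
    Ups_X = Lam_X @ U_X.T @ U_Y @ Omega_X.T @ C_Y_inv_half_T
    # Eq. (C1): Psi_X from Upsilon_X and Omega_X
    Psi_X = np.linalg.inv(Lam_X) @ Ups_X - U_X.T @ U_Y @ Omega_X
    # Eq. (C4): Omega_X from the new Psi_X
    Omega_X = -np.linalg.pinv(P_half_T @ U_Y) @ (P_half_T @ U_X) @ Psi_X
    return Psi_X, Omega_X
```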

Appendix D: Solution to nonlinear constraint for optimization

Here, we simplify the quadratic constraint in Eqs. (6) and (10) to a linear form. As the distributions of \( \tilde{X} \) and \( \tilde{Y} \) should be the same, we can write:

$$ C_{{\tilde{X}}} = C_{{\tilde{Y}}} = I \Leftrightarrow W_{X}^{\text{T}} C_{X} W_{X} = W_{Y}^{\text{T}} C_{Y} W_{Y} = I $$
(D1)

This equation suggests that matrix \( W_X \) (or matrix \( W_Y \)) is the geometric mean of \( C_X^{-1} \) (or \( C_Y^{-1} \)) and I. If we use the relaxation that matrix \( W_X \) is symmetric, then we get

$$ W_{X} C_{X} W_{X} = I $$
(D2)

Let \( U_{pX} \) be the upper triangular matrix obtained by the Cholesky decomposition of \( C_X \), i.e., \( C_{X} = U_{pX}^{\text{T}} U_{pX} \), and let \( V_{X} = U_{pX}^{-1} \). If \( R_{X}^{2} = U_{pX} I U_{pX}^{\text{T}} = U_{pX} U_{pX}^{\text{T}} \), then \( I = V_{X} R_{X}^{2} V_{X}^{\text{T}} \). Then,

$$ \begin{aligned} W_{X}^{\text{T}} C_{X} W_{X} & = V_{X} R_{X}^{2} V_{X}^{\text{T}} \\ \Rightarrow W_{X}^{\text{T}} U_{pX}^{\text{T}} U_{pX} W_{X} & = V_{X} R_{X} R_{X}^{\text{T}} V_{X}^{\text{T}} \\ \Rightarrow U_{pX} W_{X} & =\Upsilon _{X} R_{X}^{\text{T}} V_{X}^{\text{T}} \\ \Rightarrow W_{X} & = V_{X}\Upsilon _{X} R_{X}^{\text{T}} V_{X}^{\text{T}} \\ \end{aligned} $$
(D3)

where \( \Upsilon_{X} \) is an orthogonal matrix. By the same analysis, if matrix \( W_Y \) is symmetric, \( U_{pY} \) is the upper triangular matrix obtained by the Cholesky decomposition of \( C_Y \), \( V_{Y} = U_{pY}^{-1} \), and \( R_{Y}^{2} = U_{pY} U_{pY}^{\text{T}} \), we have

$$ \Rightarrow W_{Y} = V_{Y} \Upsilon_{Y} R_{Y}^{\text{T}} V_{Y}^{\text{T}} $$
(D4)

where \( \Upsilon _{Y} \) is an orthogonal matrix.
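A quick numerical sanity check of this parametrization (illustrative only, with a randomly generated positive-definite \( C_X \)): for any orthogonal \( \Upsilon_X \), the matrix \( W_X = V_X \Upsilon_X R_X^{\text{T}} V_X^{\text{T}} \) from Eq. (D3) satisfies \( W_X^{\text{T}} C_X W_X = I \).

```python
import numpy as np

rng = np.random.default_rng(1)
d = 6
A = rng.standard_normal((d, d))
C_X = A @ A.T + d * np.eye(d)                     # a positive-definite covariance

U_pX = np.linalg.cholesky(C_X).T                  # upper-triangular factor: C_X = U_pX^T U_pX
V_X = np.linalg.inv(U_pX)
R_X = np.linalg.cholesky(U_pX @ U_pX.T)           # a factor with R_X R_X^T = R_X^2
Ups_X, _ = np.linalg.qr(rng.standard_normal((d, d)))   # an arbitrary orthogonal Upsilon_X

W_X = V_X @ Ups_X @ R_X.T @ V_X.T                 # Eq. (D3)
print(np.allclose(W_X.T @ C_X @ W_X, np.eye(d)))  # constraint W_X^T C_X W_X = I holds
```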

About this article

Cite this article

Nejatian, S., Rezaie, V., Parvin, H. et al. An innovative linear unsupervised space adjustment by keeping low-level spatial data structure. Knowl Inf Syst 59, 437–464 (2019). https://doi.org/10.1007/s10115-018-1216-8

