Abstract
In this chapter, we present a simple classification scheme that uses only 1-bit measurements of the training and testing data. The method is designed to be efficient in both computation and storage while remaining amenable to rigorous mathematical analysis. After providing some motivation, we present the method and analyze its performance under a simple data model. We also discuss extensions of the method to the hierarchical data setting and include further implementation considerations. Experimental evidence demonstrates that our methods yield accurate classification on a variety of synthetic and real data.
The views expressed in this article are those of the authors and do not reflect the official policy or position of the United States Air Force, Department of Defense, or the U.S. Government.
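To give a concrete feel for classification from 1-bit measurements, the sketch below binarizes data by taking signs of random Gaussian projections and assigns test points to the class whose majority-vote binary centroid is closest in Hamming distance. This is a minimal illustration under assumed Gaussian toy data, not the chapter's actual algorithm; all function names (`binarize`, `fit_centroids`, `predict`) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def binarize(X, A):
    """Map each row x of X to the 1-bit measurement sign(Ax), stored in {0,1}^m."""
    return (X @ A.T > 0).astype(np.uint8)

def fit_centroids(B, y):
    """Per-class majority vote over the training codes."""
    classes = np.unique(y)
    cents = np.stack([(B[y == c].mean(axis=0) > 0.5).astype(np.uint8)
                      for c in classes])
    return classes, cents

def predict(B_test, classes, centroids):
    """Assign each test code to the class with Hamming-closest centroid."""
    dists = (B_test[:, None, :] != centroids[None, :, :]).sum(axis=2)
    return classes[dists.argmin(axis=1)]

# Toy data: two well-separated Gaussian classes in R^20, m = 256 one-bit
# measurements taken with a random Gaussian matrix A.
n, d, m = 200, 20, 256
A = rng.standard_normal((m, d))
X = np.vstack([rng.standard_normal((n, d)) + 2.0,
               rng.standard_normal((n, d)) - 2.0])
y = np.array([0] * n + [1] * n)

classes, centroids = fit_centroids(binarize(X, A), y)

X_test = np.vstack([rng.standard_normal((50, d)) + 2.0,
                    rng.standard_normal((50, d)) - 2.0])
y_test = np.array([0] * 50 + [1] * 50)
acc = (predict(binarize(X_test, A), classes, centroids) == y_test).mean()
print(f"accuracy: {acc:.2f}")
```

Note that only the binary codes (m bits per point) are stored after binarization, which is the source of the storage savings the abstract alludes to.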
Acknowledgements
Molitor and Needell were partially supported by NSF CAREER grant #1348721 and NSF BIGDATA grant #1740325. Saab was partially supported by NSF grant DMS-1517204.
Copyright information
© 2019 Springer Nature Switzerland AG
Cite this chapter
Molitor, D., Needell, D., Nelson, A., Saab, R., Salanevich, P. (2019). Classification Scheme for Binary Data with Extensions. In: Boche, H., Caire, G., Calderbank, R., Kutyniok, G., Mathar, R., Petersen, P. (eds) Compressed Sensing and Its Applications. Applied and Numerical Harmonic Analysis. Birkhäuser, Cham. https://doi.org/10.1007/978-3-319-73074-5_4
Print ISBN: 978-3-319-73073-8
Online ISBN: 978-3-319-73074-5
eBook Packages: Mathematics and Statistics (R0)