Abstract
This paper introduces a novel data-driven approach, based on subjective constraints and feature learning, for training perceptual models of preference. Fuzzy evaluation is applied to describe subjective opinions in a large set of data collected from a user study. Combining the objective attributes of the training models with the subjective preferences, an optimization method is developed for training and learning perceptual models. Two applications are presented in detail, the selection of the “best” viewpoint of 3D objects and the optimization of 3D printing direction, which verify the effectiveness of our approach. This work also demonstrates a good human-computer interaction practice that draws supporting knowledge from both the machine side and the human side.
The work described in the paper was jointly sponsored by Natural Science Foundation of Shanghai (18ZR1420100) and National Natural Science Foundation of China (61703274). This work was partially supported by NSFC 61628211.
1 Introduction
Learning algorithms have recently become prevalent in the study of classification and identification of patterns, signals, and other objective information [17, 24]. However, preferences, opinions, and views from the human side are also essential in decision making and reasoning [1]. Questions such as what makes a good view, which of two options looks better, and why one is more popular arise in wide-ranging areas such as art, media, literary appreciation, and architectural design.
A highly reliable and effective performance evaluation rule is essential for handling subjectivity and imprecise information [4, 9]. Fuzzy set theory, first introduced by Zadeh [30], is a suitable tool for evaluating preference from the human side. It has been applied in evaluation systems for many everyday applications [3], such as pattern classification [11], shipping performance evaluation [5], feature extraction [25], and personnel selection [8]. Whereas classical logic only permits conclusions that are either true or false, fuzzy degrees between black and white were proposed to express partial truth in notions such as “tall” and “small”. However, because of the imprecision, vagueness, and subjectivity present in fuzzy set theory, it is usually necessary to further train and fine-tune a fuzzy model to improve its accuracy.
Past decades have witnessed the development of numerous learning methods, such as support vector machines [7], echo state networks [12, 32], convolutional deep neural networks [15], and deep Boltzmann machines [19]. These techniques mainly depend on huge amounts of data to learn features and have shown their effectiveness in classification tasks. For preference-related problems, however, it is often difficult to collect a large training set. In addition, because of the bias and imprecision of personal preference, generalization ability is of great importance for this kind of problem. The extreme learning machine (ELM) proposed in [10] has shown very good generalization ability and robustness. Recently, as feature selection has drawn increasing attention [28], ELM approaches with auto-encoders for multi-layer perceptrons have been developed [22]. The architecture consists of self-taught feature extraction and supervised feature classification, bridged by random hidden weights. Multi-layer ELM achieves more compact and meaningful feature representations than shallow ELM. ELM has now been applied in a wide range of areas, such as vigilance estimation [21] and time sequence classification [16]. In particular, several ELM approaches have been successfully employed in the graphics area for classification and optimization [27, 29, 31]. An ELM classifier was trained in [29] for 3D shape segmentation. Unlike most works focusing on learning features, a shallow ELM for optimization was applied in [31] to find the “best” 3D printing direction.
In this paper, we propose a fuzzy ELM approach for perceptual models of preference. The proposed approach solves the evaluation problem by combining subjective knowledge with objective evaluation. Fuzzy set theory is applied to handle subjective preference information, and a multi-layer ELM is used to extract good features without supervised labels. The approach has been tested in practical applications to substantiate its effectiveness. Two applications of perception models in the computer graphics area are introduced in this paper. Zhang et al. [31] optimized the printing direction based on four metrics; however, the trained model may be less effective when applied to 3D models of different size and style. Similarly, Secord et al. [20] proposed a data-driven approach for viewpoint preference, but their model exhibits multicollinearity among attributes, which leads to instability caused by coupled parameters. To overcome the shortcomings of these existing methods, we propose a novel fuzzy extreme learning approach that improves the results of both applications. Our method combines the advantages of fuzzy theory, feature extraction, and ELM, and can be easily modified to optimize its performance in different applications. Experimental results show that the perceptual models learned by our fuzzy ELM are effective and easy to implement.
The rest of this paper is organized as follows. Section 2 introduces the preliminaries, including the fundamental concepts and theories of ELM and the fuzzy model. Section 3 describes the proposed framework for preference learning problems. Section 4 optimizes the performance of 3D printed models through orientation selection. Section 5 presents optimization results for perceptual models of viewpoint preference for 3D models. Simulations and experiments have been conducted to verify the effectiveness of our method. Finally, the paper concludes in Sect. 6.
2 Preliminaries
2.1 Extreme Learning Machine
Unlike traditional function approximation theories, which require adjusting the input weights and hidden layer biases, the input weights and hidden layer biases of an extreme learning machine (ELM) can be randomly assigned, provided the activation function is infinitely differentiable. A basic ELM neural network is composed of one input layer, one hidden layer, and one output layer. The input nodes depend on the input data. In the hidden layer, all nodes are randomly generated and independent of the training data. In the output layer, the weights \(\beta _i\) are problem-based and can be adjusted to solve problems such as feature learning, clustering, regression, and classification.
Consider N input-output samples \((x_i,t_i)\in \mathfrak {R}^n \times \mathfrak {R}^m\) for training, where \(x_i\) is an input vector of dimension n and \(t_i\) is a target vector of dimension m. The output of a single-layer feedforward neural network with L hidden nodes can be represented by
where \(a_i\) and \(b_i\) are learning parameters of hidden nodes and \(\beta _i\) is the weight connecting the ith hidden node to the output node. \(h_i(a_i,b_i,x)\) is the output of the ith hidden node with respect to the input x. \(\beta =[\beta _1,...,\beta _L]^T\) is the vector of the output weights between the hidden layer of L nodes and the output node, and \(h(x)=[h_1(x),...,h_L(x)]\) is the output (row) vector of the hidden layer with respect to the input x. \(h_i(a_i,b_i,x)\) could be composed of additive hidden nodes with different activation functions, such as sigmoid functions, hardlimit functions, gaussian functions, wavelet, hyperbolic functions, etc.
For a multiclass classifier or a regression problem, we assume there are m output nodes. If the original class label is p, the expected output vector of the m output nodes is \(t_i =[0,..., 0,1^p, 0,..., 0]^T\); that is, only the pth element of \(t_i = [t_{i,1},...,t_{i,m}]\) is one, while the remaining elements are zero. The optimization problem for an ELM with multiple output nodes can be formulated as
where \(\xi _i= [\xi _{i,1},...,\xi _{i,m}]^T\) is the training error vector of the m output nodes with respect to the training sample \(x_i\).
After the hidden nodes’ parameters are chosen randomly, the ELM network can be considered a linear system, and the output weights can be optimized according to the optimization procedure (2). The universal approximation capability was analyzed in [10]: an ELM network with randomly generated hidden nodes and a wide range of activation functions can approximate any continuous target function on any compact subset of Euclidean space. There are two phases in the training process of ELM: (1) feature mapping and (2) output weight solving. Since solving for the output weights is posed as an optimization problem, different feature mapping approaches can be designed to improve the ELM algorithm.
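The two phases above can be sketched in a few lines, a minimal ELM for regression in which the hidden layer is entirely random and only the output weights are solved in closed form. The hidden-node count L, the regularization constant C, and the sigmoid activation below are illustrative choices, not values prescribed by the paper.

```python
import numpy as np

def elm_train(X, T, L=50, C=1.0, rng=None):
    """Train a single-hidden-layer ELM: random hidden parameters,
    closed-form regularized least squares for the output weights."""
    rng = np.random.default_rng(rng)
    n = X.shape[1]
    W = rng.standard_normal((n, L))          # random input weights (never trained)
    b = rng.standard_normal(L)               # random hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # sigmoid hidden-layer output
    # beta = (H^T H + I/C)^{-1} H^T T  (ridge solution of min ||H beta - T||^2)
    beta = np.linalg.solve(H.T @ H + np.eye(L) / C, H.T @ T)
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```

Because the hidden layer is fixed, training reduces to a single linear solve, which is the source of ELM’s speed.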
2.2 Fuzzy Model for Pairwise Comparison
For some preference and decision-making problems, it is very difficult to collect answers to questions such as “Which one is the best?” or “What is the optimal solution?”. However, it is possible to collect pairwise comparison information, such as “Which one is better, A or B?”. People are more willing to answer these 2-Alternative Forced Choice (2AFC) questions. In this paper, pairwise comparison information is assumed to be collected randomly, consistently, and with a roughly uniform distribution.
Following the study of pairwise comparisons in [18], a value \(s_{ij}\) is assigned to the comparison pair (i, j), representing the relative preference of i over j; if element i is preferred to j, then \(s_{ij} > 1\). Consider a prioritisation problem with N unknown priorities; the reciprocal property \(s_{ji}=1/s_{ij}\) for \(i,j=1, 2, ..., N\) always holds. A positive reciprocal matrix of pairwise comparisons \(S=\{s_{ij}\}\in \mathfrak {R}^{N \times N}\) is constructed from \(N(N-1)/2\) judgements. A priority vector \(v=(v_1,v_2,...,v_N)^T\) may then be derived from the matrix. Moreover, the priorities can be made to satisfy the partition of unity as
When all elements \(s_{ij}\) have perfect values, then
However, the evaluations \(\{s_{ij}\}\) are usually not perfect, i.e., they only approximately estimate the exact ratios \(v_i/v_j\). In addition, when N is very large, the comparisons do not cover all the judgements.
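For intuition, when the comparison matrix is perfect (\(s_{ij}=v_i/v_j\) exactly), the priority vector can be recovered directly. The row geometric mean below is a standard heuristic for this consistent case; it is not the paper’s fuzzy formulation, which is designed for the imperfect, incomplete case that follows.

```python
import numpy as np

def priorities_from_comparisons(S):
    """Derive a normalized priority vector from a positive reciprocal
    comparison matrix S via the row geometric mean.  Recovers v exactly
    when S is perfectly consistent (s_ij = v_i / v_j)."""
    g = np.prod(S, axis=1) ** (1.0 / S.shape[0])
    return g / g.sum()   # partition of unity: priorities sum to 1
```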
A fuzzy approach to priority derivation is therefore used to deal with imperfect pairwise comparisons based on inexact and incomplete judgements. In fuzzy set theory, the membership function of a fuzzy set (ranging over the interval (0, 1)) represents the degree of truth as an extension of valuation; values between 0 and 1 characterize fuzzy members, which belong to the fuzzy set only partially [13]. A fuzzy membership function is thus a suitable tool for evaluating perceptual models of preference. A normal fuzzy set \(\tilde{s}\) is a triangular fuzzy number, defined by three real numbers \(l\le m\le u\), with a piecewise linear continuous membership function \(\mu _{\tilde{s}}(\cdot )\) having the following characteristics [14]:
-
\(\mu _{\tilde{s}}(\cdot )\) is a continuous mapping from \(\mathfrak {R}\) to the closed interval [0, 1]
-
For all \(x\in [-\infty , l]\) and \(x\in [u, \infty ]\), \(\mu _{\tilde{s}}(x)=0.\)
-
\(\mu _{\tilde{s}}(x)\) is strictly linearly increasing on [l, m] and strictly linearly decreasing on [m, u].
-
For \(x=m\), \(\mu _{\tilde{s}}(x)=1.\)
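The four properties above define the familiar triangular shape. A direct transcription, assuming the strict case \(l<m<u\) so that neither linear segment degenerates:

```python
def tri_membership(x, l, m, u):
    """Triangular membership mu_s(x) for a fuzzy number (l, m, u),
    assuming l < m < u: zero outside [l, u], rising linearly on [l, m]
    to 1 at the mode m, falling linearly on [m, u]."""
    if x <= l or x >= u:
        return 0.0
    if x <= m:
        return (x - l) / (m - l)
    return (u - x) / (u - m)
```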
Let the pairwise comparison judgements \(\{s_{ij}\}\) be represented by fuzzy numbers \(s_{ij}=(l_{ij},m_{ij},u_{ij})\), and consider a set of m \((m<N(N-1)/2)\) incomplete pairwise comparisons. If the judgements are inconsistent, no priority vector satisfies all interval judgements simultaneously. However, it is reasonable to try to find a vector that satisfies all judgements “as well as possible”: a solution vector has to satisfy all interval judgements approximately in order to be “good enough”.
3 Model Description
The proposed fuzzy extreme learning machine model combines a convolutional ELM with fuzzy pairwise judgements used as optimization constraints, as shown in Fig. 1. The training procedure consists of two phases: feature mapping and optimization.
3.1 ELM Feature Mapping
Convolution is the process of multiplying each element of an image by its local neighbors, weighted by a kernel. Different kernels produce a wide range of effects, such as blurring, sharpening, embossing, and edge detection. Convolution has been widely used to extract features, especially from images. Inspired by these prior works, the procedure of ELM feature mapping can be described as follows:
-
First, choose kernels randomly to obtain convolutional feature maps with input data and kernels.
-
Second, a pooling operation is performed to provide local invariance and reduce the data size.
-
Then, generate random sparse weights from the convolved and pooled data.
-
Finally, the feature maps are ready for extraction and optimization in the next phase.
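The first two steps above can be sketched as random-kernel convolution followed by non-overlapping max pooling. The kernel count, kernel size, and pooling width below are illustrative choices, not values specified in the paper.

```python
import numpy as np

def elm_feature_map(img, n_kernels=4, ksize=3, pool=2, rng=None):
    """Random-kernel convolution + max pooling, a sketch of the ELM
    feature-mapping phase for a 2D single-channel input."""
    rng = np.random.default_rng(rng)
    h, w = img.shape
    feats = []
    for _ in range(n_kernels):
        k = rng.standard_normal((ksize, ksize))   # kernel chosen randomly
        # valid-mode convolution (cross-correlation; the sign of a random
        # kernel is irrelevant here)
        out = np.empty((h - ksize + 1, w - ksize + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(img[i:i+ksize, j:j+ksize] * k)
        # non-overlapping max pooling to shrink the feature map
        ph, pw = out.shape[0] // pool, out.shape[1] // pool
        pooled = out[:ph*pool, :pw*pool].reshape(ph, pool, pw, pool).max(axis=(1, 3))
        feats.append(pooled.ravel())
    return np.concatenate(feats)
```

The concatenated pooled maps play the role of the hidden-layer output \(z_i\) fed into the optimization phase.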
3.2 ELM Optimization
Based on the preliminaries about the extreme learning machine and fuzzy constraints, the optimization procedure of the fuzzy extreme learning machine is formulated in this subsection. Denote by H the feature mapping operator from the input data to the hidden-layer output, \(H:\{x_i\}\mapsto \{z_i\}\), so that \(z_i\) is the feature mapping of input \(x_i\).
According to (2), the optimization problem for fuzzy ELM can be formulated as follows:
where \(d= [d_1,d_2,...,d_m]^T\) is a tolerance parameter vector, and \(\lambda \) and \(\rho \) are multipliers satisfying \(\lambda >0\) and \(\rho >0\). Since the optimization problem (6) is a convex quadratic problem with linear equality and inequality constraints, a standard convex optimization toolbox can solve it in a very short time [2].
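To illustrate how such a problem is handed to a general-purpose solver, the sketch below solves a stand-in problem of the same shape, a regularized least-squares objective under a linear inequality constraint. The matrices H, t, A, d and the weight lam are toy data for demonstration, not quantities from problem (6).

```python
import numpy as np
from scipy.optimize import minimize

# Minimize ||H b - t||^2 + lam ||b||^2  subject to  A b <= d
# (the inequality stands in for the fuzzy pairwise constraints).
rng = np.random.default_rng(0)
H = rng.standard_normal((20, 3))
t = rng.standard_normal(20)
A = np.array([[1.0, 1.0, 1.0]])
d = np.array([0.5])
lam = 0.1

obj = lambda b: np.sum((H @ b - t) ** 2) + lam * np.sum(b ** 2)
cons = {"type": "ineq", "fun": lambda b: d - A @ b}   # feasible when d - A b >= 0
res = minimize(obj, np.zeros(3), constraints=[cons], method="SLSQP")
beta = res.x
```

For a convex quadratic program like this, SLSQP converges to the global optimum; a dedicated QP solver would serve equally well.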
To conclude, the fuzzy extreme learning machine procedure for preference optimization can be summarized as follows:
-
Given a training set \(\{x_i\}\), and m pairwise comparisons \(\{s_{ij}\}\) within the training set \(\{x_i\}\), \(i,j=1,2,...,N\).
-
Define triangular membership functions \((l_{ij},m_{ij},u_{ij})\) for all the pairwise comparisons.
-
Denote the hidden node number as L and randomly generate kernels for convolution.
-
Use convolution and pooling for feature mapping, and denote the mapping as \(H: \{x_i\}\mapsto \{z_i\}\).
-
Add the fuzzy constraints when minimizing the mismatch, and solve the convex optimization problem (6).
-
Calculate the output weight \(\beta \).
-
Find the ranking priority result as \(\{v_i\}\).
-
Obtain the best choice in the training set, satisfying \(v_{i^*}= \max (v_i)\).
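The last two steps of the procedure amount to scoring each candidate’s feature vector with the learned output weights and taking the arg max; a minimal sketch:

```python
import numpy as np

def best_choice(Z, beta):
    """Final ranking step: score each hidden-layer feature vector z_i
    (rows of Z) with the learned output weights beta, returning the index
    i* of the highest priority together with the full priority vector."""
    v = Z @ beta
    return int(np.argmax(v)), v
```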
4 Perceptual Model for 3D Printing Orientations
Additive manufacturing methods often require robust branching support structures to prevent material collapse at overhangs, resulting in unsightly surface artifacts after the supports have been removed. Figure 2 shows 3D printed models containing artifacts from the support structures obtained when printing along different directions. Improper support can damage the small features of a model and cause visual artifacts. This section improves the perceptual model for determining 3D printing orientations discussed in [31], where four metrics were considered: contact area, visual saliency, viewpoint preference, and smoothness entropy. Despite the effectiveness of the method in [31], the algorithm still has some drawbacks. First, the metrics have different dimensions, but the evaluation function is not dimension-invariant (i.e., \(F(d)\ne F(10d)\)), so the model should be normalized before training. Second, the metrics in [31] cannot be proved exhaustive; other relevant metrics may exist. Third, the graphics computation is very time-consuming: for complex models, calculating the metrics may take several hours. To overcome these shortcomings, we propose a fuzzy ELM approach in this section to improve the algorithm.
4.1 Collecting Human Preferences
The Amazon Mechanical Turk (AMT) service is used to collect human preferences. Pairwise comparisons between two printing directions are conducted through AMT surveys. Several models are employed to generate tasks as follows. We uniformly sample the Gauss sphere to obtain 1448 possible printing directions. For each model, 500 pairs of printing directions are randomly picked, and each pair is shown to 8 people. Among them, 16 random pairs per model occur more than once to test reliability. For each picked pair \(s_{ij}\), assume that direction i was picked \(p_i\) times and direction j was picked \(p_j\) times over all comparisons between i and j, for \(i,j=1,2,3...,N\). If \(p_i=p_j\), the preference between i and j is highly contradictory, and the uncertainty of \(s_{ij}\) is comparatively large. Conversely, when \(p_j=0\), direction i is definitely preferred, and vice versa for \(p_i=0\). Moreover, as \(p_i\) and \(p_j\) increase, the vagueness of the preference decreases. Based on these observations, the pairwise comparison judgements \(\{s_{ij}\}\) can be represented by fuzzy numbers \(s_{ij}=(l_{ij},m_{ij},u_{ij})\). We can then evaluate human preferences and use this information to train the perceptual model.
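One plausible mapping from vote counts to triangular fuzzy judgements, consistent with the observations above but not the paper’s exact formula, places the mode at the observed preference ratio and shrinks the spread as the total vote count grows:

```python
def votes_to_fuzzy(p_i, p_j):
    """Map vote counts (p_i, p_j) for a pair (i, j) to a triangular fuzzy
    judgement s_ij = (l, m, u).  An illustrative assumption: the mode m is
    a smoothed preference ratio of i over j, and the spread narrows as the
    number of votes increases (less vagueness with more evidence)."""
    n = p_i + p_j
    m = (p_i + 1) / (p_j + 1)     # add-one smoothing avoids division by zero
    spread = m / (1 + n)          # more votes -> tighter interval
    return (max(m - spread, 0.0), m, m + spread)
```

Under this scheme a tied vote (\(p_i=p_j\)) yields a mode of 1 with a wide interval, while a unanimous vote yields a large mode with a comparatively narrow interval.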
4.2 Input Representation
The input is a set of 3D shapes in OBJ or STL (STereoLithography) format; both formats are widely used for rapid prototyping, 3D printing, and computer-aided manufacturing [6]. STL files describe only the surface geometry of a three-dimensional object, without any representation of color, texture, or other common CAD model attributes. OBJ is likewise a universally accepted geometry definition file format that represents 3D geometry alone, namely the position of each vertex and the faces that make up each polygon, each defined as a list of vertices.
In order to find the “best” 3D printing result for a model, some information should be attached to the 3D model itself. The distribution of contact area (\(f_1\)) is considered first. For each printing orientation, the surface area of the regions connecting to supporting structures is determined by considering overhangs as well as potential connections at the base of supports. According to the graphics study in [23], a surface region needs support only if the angle between its tangent plane and the printing direction is larger than the critical angle \(\alpha \); faces with a smaller angle are self-supported. In this section we set \(\alpha =25^{\circ }\), following [26]. A distribution of contact area \(f_1\) can then be computed and visualized as one of the important features.
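The overhang test behind \(f_1\) can be sketched per face as follows. This is a sketch of the critical-angle criterion only; the contact area contributed by support-structure tips and the base plate, which the full metric also considers, is omitted.

```python
import numpy as np

def support_contact_area(normals, areas, d, alpha_deg=25.0):
    """Total area of faces needing support when printing along direction d.
    A downward-facing face needs support when the angle between its tangent
    plane and the print direction exceeds the critical angle alpha, i.e.
    when its unit normal points more than alpha below the horizontal."""
    d = d / np.linalg.norm(d)
    # sine of the angle by which each (unit) face normal dips below the
    # plane perpendicular to d; positive only for downward-facing faces
    sin_down = -(normals @ d)
    needs = sin_down > np.sin(np.radians(alpha_deg))
    return float(np.sum(areas[needs]))
```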
In addition, subjective preference is also very important. For example, we prefer that there be no supports on eyes and faces, and for some man-made models we prefer no defects on the working plane. An interactive tool is developed to incorporate such subjective preferences into a model. Denote the distribution of the subjective importance feature as \(f_2\). Selecting one important direction \(d_{a1}\), a harmonic field \(H_1(d)\) is computed as
Similarly, selecting one least important direction (e.g., the base) as \(d_{a2}\), a harmonic field \(H_2(d)\) can be calculated. The distribution of the subjective importance feature can then be merged as
Moreover, geometric features, such as the coordinates of and relationships between vertices, should also be considered; denote the geometry feature as \(f_3\). As a result, the input data for the perceptual 3D printing model is a combination of the contact area distribution feature (\(f_1\)), the subjective importance feature (\(f_2\)), and the structural geometry feature (\(f_3\)).
5 Perceptual Models of Viewpoint Preference
In this section, we demonstrate the improved performance of three perceptual models for 3D printing orientation, which place fewer support structures at visually important regions. Our results are also compared with those of Autodesk MeshMixer, which considers only a single factor.
Using the proposed approach, Fig. 3 shows the “optimal”, “default”, and “MeshMixer” printing directions for “Kitten”. Figures 4 and 5 show the “optimal” and “default” printing directions for “Bunny” and “Armadillo”, respectively, further substantiating the effectiveness of the proposed method. The additive support is limited, and no support is added to regions that are important for viewpoint preference; the negative influence of the supporting structures is therefore reduced as much as possible.
6 Conclusion
This paper introduced a fuzzy extreme learning machine approach for training perceptual models of preference. The approach obtains preference priorities from a set of inconsistent pairwise comparisons, combining the advantages of feature mapping, ELM optimization, and fuzzy membership formulation. Two applications were presented in detail: the selection of the “best” viewpoint of 3D objects and the optimization of 3D printing direction. Compared with previous results, our approach has better robustness and generalization capability when the input comparisons are inaccurate and inconsistent. Moreover, our model can be improved as more models are trained. Beyond the graphics area, the approach can be applied in a wide range of fields, such as parametric design for clothing patterns and mechanical structure optimization, and demonstrates a good human-computer interaction practice. Future work will focus on improving the algorithm for better performance and extending the approach to more applications.
References
Acampora, G., Cadenas, J.M., Loia, V., Ballester, E.M.: A multi-agent memetic system for human-based knowledge selection. IEEE Trans. Syst. Man Cybern. Part A: Syst. Hum. 41(5), 946–960 (2011)
Boyd, S., Vandenberghe, L.: Convex Optimization. Cambridge University Press, Cambridge (2004)
Chakraborty, S., Konar, A., Jain, L.C.: An efficient algorithm to computing max–min inverse fuzzy relation for abductive reasoning. IEEE Trans. Syst. Man Cybern. Part A: Syst. Hum. 40(1), 158–169 (2010)
Chen, S.M., Tsai, B.H.: Autocratic decision making using group recommendations based on intervals of linguistic terms and likelihood-based comparison relations. IEEE Trans. Syst. Man Cybern. Syst. 45(2), 250–259 (2015)
Chou, T.Y., Liang, G.S.: Application of a fuzzy multi-criteria decision-making model for shipping company performance evaluation. Marit. Policy Manag. 28(4), 375–392 (2001)
Chua, C.K., Leong, K.F.: Rapid Prototyping: Principles and Applications, vol. 1. World Scientific, Singapore (2003)
Cortes, C., Vapnik, V.: Support-vector networks. Mach. Learn. 20(3), 273–297 (1995)
Güngör, Z., Serhadlıoğlu, G., Kesen, S.E.: A fuzzy AHP approach to personnel selection problem. Appl. Soft Comput. 9(2), 641–646 (2009)
Gören, S., Baccouche, A., Pierreval, H.: A framework to incorporate decision-maker preferences into simulation optimization to support collaborative design. IEEE Trans. Syst. Man Cybern. Syst. 47(2), 229–237 (2016)
Huang, G.B., Zhu, Q.Y., Siew, C.K.: Extreme learning machine: theory and applications. Neurocomputing 70(1), 489–501 (2006)
Ishibuchi, H., Nakashima, T., Murata, T.: Performance evaluation of fuzzy classifier systems for multidimensional pattern classification problems. IEEE Trans. Syst. Man Cybern. Part B Cybern. 29(5), 601–618 (1999)
Jaeger, H.: Echo state network. Scholarpedia 2(9), 2330 (2007)
Klir, G.J., Yuan, B.: Fuzzy Sets and Fuzzy Logic: Theory and Applications (2008)
Klir, G.J., Yuan, B.: Fuzzy Sets, Fuzzy Logic, and Fuzzy Systems. World Scientific, Singapore (1996)
Krizhevsky, A., Sutskever, I., Hinton, G.E.: Imagenet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems, pp. 1097–1105 (2012)
Liu, H., Yu, L., Wang, W., Sun, F.: Extreme learning machine for time sequence classification. Neurocomputing 174, 322–330 (2016)
Michalski, R.S., Carbonell, J.G., Mitchell, T.M.: Machine Learning: An Artificial Intelligence Approach. Springer, Heidelberg (2013)
Mikhailov, L.: Deriving priorities from fuzzy pairwise comparison judgements. Fuzzy Sets Syst. 134(3), 365–385 (2003)
Salakhutdinov, R., Hinton, G.E.: Deep Boltzmann machines. In: AISTATS, vol. 1, p. 3 (2009)
Secord, A., Lu, J., Finkelstein, A., Singh, M., Nealen, A.: Perceptual models of viewpoint preference. ACM Trans. Graph. (TOG) 30(5), 109 (2011)
Shi, L.C., Lu, B.L.: EEG-based vigilance estimation using extreme learning machines. Neurocomputing 102, 135–143 (2013)
Tang, J., Deng, C., Huang, G.B.: Extreme learning machine for multilayer perceptron. IEEE Trans. Neural Netw. Learn. Syst. 27(4), 809–821 (2016)
Vanek, J., Galicia, J.A.G., Benes, B.: Clever support: efficient support structure generation for digital fabrication. Comput. Graph. Forum 33(5), 117–125 (2014)
Vapnik, V.: The Nature of Statistical Learning Theory. Springer, Heidelberg (2013)
Wang, C.C., Chang, T.K., Yuen, M.M.: From laser-scanned data to feature human model: a system based on fuzzy logic concept. Comput.-Aided Des. 35(3), 241–253 (2003)
Wang, C.C., Leung, Y.S., Chen, Y.: Solid modeling of polyhedral objects by layered depth-normal images on the GPU. Comput.-Aided Des. 42(6), 535–544 (2010)
Wang, Y., Xie, Z., Xu, K., Dou, Y., Lei, Y.: An efficient and effective convolutional auto-encoder extreme learning machine network for 3D feature learning. Neurocomputing 174, 988–998 (2016)
Wu, Q., Wang, Z., Deng, F., Chi, Z., Feng, D.D.: Realistic human action recognition with multimodal feature selection and fusion. IEEE Trans. Syst. Man Cybern. Syst. 43(4), 875–885 (2013)
Xie, Z., Xu, K., Liu, L., Xiong, Y.: 3D shape segmentation and labeling via extreme learning machine. In: Computer Graphics Forum, vol. 33, pp. 85–95. Wiley (2014)
Zadeh, L.A.: Fuzzy sets. Inf. Control 8(3), 338–353 (1965)
Zhang, X., Le, X., Panotopoulou, A., Whiting, E., Wang, C.C.: Perceptual models of preference in 3D printing direction. ACM Trans. Graph. (TOG) 34(6), 215 (2015)
Zhang, X., Le, X., Wu, Z., Whiting, E., Wang, C.C.: Data-driven bending elasticity design by shell thickness. In: Computer Graphics Forum, vol. 35, pp. 157–166 (2016)
Mei, J., Le, X., Zhang, X., Wang, C.C.L. (2019). A Learning-Based Approach for Perceptual Models of Preference. In: Lu, H., Tang, H., Wang, Z. (eds) Advances in Neural Networks – ISNN 2019. ISNN 2019. Lecture Notes in Computer Science(), vol 11554. Springer, Cham. https://doi.org/10.1007/978-3-030-22796-8_35
Print ISBN: 978-3-030-22795-1
Online ISBN: 978-3-030-22796-8