1 Introduction

Learning algorithms have recently become prevalent in the study of classification and identification of patterns, signals, and other objective information [17, 24]. However, preferences, opinions and views from the human side are also essential in decision making and reasoning [1]. Questions such as what makes a good view, which design looks better, and why one option is more popular have been addressed in a wide range of areas, such as art, media, literature appreciation and architecture design.

A highly reliable and effective performance evaluation rule is essential for handling subjectivity and imprecise information [4, 9]. Fuzzy set theory, first introduced by Zadeh [30], is a suitable tool for evaluating preference from the human side. It has been applied in evaluation systems for many everyday applications [3], such as pattern classification [11], shipping performance evaluation [5], feature extraction [25] and personnel selection [8]. Whereas classical logic only permits conclusions that are either true or false, fuzzy degrees between black and white were proposed to express partial truths such as "tall" or "small". However, because of the imprecision, vagueness and subjectivity inherent in fuzzy set theory, it is usually necessary to further train and fine-tune a fuzzy model to improve its accuracy.

Past decades have witnessed the development of numerous learning methods, such as support vector machines [7], echo state networks [12, 32], convolutional deep neural networks [15], and deep Boltzmann machines [19]. These techniques mainly depend on large amounts of data to learn features and have shown their effectiveness in classification tasks. For preference-related problems, however, it is usually difficult to collect a large training set. In addition, because of the bias and imprecision of personal preference, generalization ability is of great importance for this kind of problem. The extreme learning machine (ELM) proposed in [10] has shown very good generalization ability and robustness. Recently, as feature selection has drawn increasing attention [28], ELM approaches with auto-encoders for multi-layer perceptrons have been developed [22]. The architecture consists of self-taught feature extraction and supervised feature classification, which are bridged by random hidden weights. Multi-layer ELM is able to achieve more compact and meaningful feature representations than shallow ELM. ELM has now been applied in a wide range of areas, such as vigilance estimation [21] and time-sequence classification [16]. In particular, some ELM approaches have been successfully employed in graphics for classification and optimization [27, 29, 31]. An ELM classifier was trained in [29] for 3D shape segmentation. Unlike most works focusing on learning features, a shallow ELM for optimization has been applied to find the "best" 3D printing direction [31].

In this paper, we propose a fuzzy ELM approach for perceptual models of preference. The proposed approach solves the evaluation problem by combining subjective knowledge with objective evaluation. Fuzzy set theory is applied to handle subjective preference information, and multi-layer ELM is used to extract good features without supervised labels. The approach has been tested in practical applications to substantiate its effectiveness. Two applications of perceptual models in computer graphics are introduced in this paper. Zhang et al. [31] optimized the printing direction based on four metrics; however, the trained model may be less effective when applied to 3D models of different size and style. Similarly, Secord et al. [20] proposed a data-driven approach for viewpoint preference, but their model suffers from multicollinearity among attributes, which leads to instability caused by coupled parameters. To overcome these shortcomings, we propose a novel fuzzy extreme learning approach that improves the results of both applications. Our method combines the advantages of fuzzy theory, feature extraction and ELM, and it can easily be modified to optimize its performance in different applications. Experimental results show that the perceptual models learned by our fuzzy ELM method are effective and easy to implement.

The rest of this paper is organized as follows. Section 2 introduces the preliminary work, including the fundamental concepts and the theories of ELM and fuzzy model. Section 3 describes the proposed framework for preference learning problems. Section 4 optimizes the performance of 3D printed models through the orientation selection. Section 5 presents the optimization results of perceptual models of viewpoint preference of 3D models. Simulations and experiments have been conducted to verify the effectiveness of our method. Finally, our paper ends with the conclusion in Sect. 6.

2 Preliminaries

2.1 Extreme Learning Machine

Unlike traditional function approximation theories, which require adjusting the input weights and hidden-layer biases, the input weights and hidden-layer biases of an extreme learning machine (ELM) can be randomly assigned, provided the activation function is infinitely differentiable. A basic ELM neural network is composed of one input layer, one hidden layer and one output layer. The input nodes depend on the input data. In the hidden layer, all nodes are randomly generated and independent of the training data. In the output layer, the weights \(\beta _i\) are problem-dependent and can be adjusted to solve problems such as feature learning, clustering, regression and classification.

Consider N input-output training samples \((x_i,t_i)\in \mathfrak {R}^n \times \mathfrak {R}^m\), where \(x_i\) is an input vector of dimension n and \(t_i\) is a target vector of dimension m. The output of a single-layer feedforward neural network with L hidden nodes can be represented by

$$\begin{aligned} f_{L}(x)= \sum ^{L}_{i=1} \beta _i h_i(a_i,b_i,x)=h(x)\beta , \end{aligned}$$
(1)

where \(a_i\) and \(b_i\) are the learning parameters of the hidden nodes and \(\beta _i\) is the weight connecting the ith hidden node to the output node. \(h_i(a_i,b_i,x)\) is the output of the ith hidden node with respect to the input x. \(\beta =[\beta _1,...,\beta _L]^T\) is the vector of output weights between the hidden layer of L nodes and the output node, and \(h(x)=[h_1(x),...,h_L(x)]\) is the output (row) vector of the hidden layer with respect to the input x. \(h_i(a_i,b_i,x)\) can be composed of additive hidden nodes with different activation functions, such as sigmoid functions, hard-limit functions, Gaussian functions, wavelets, hyperbolic functions, etc.
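As a concrete illustration of Eq. (1), the following minimal sketch computes the hidden-layer output \(h(x)\) and the network output \(f_L(x)=h(x)\beta \) for additive sigmoid nodes (the node counts and the sigmoid choice are ours, not prescribed by the paper):

```python
import numpy as np

def elm_hidden_output(X, a, b):
    """Hidden-layer output h(x) for additive sigmoid nodes.

    X : (N, n) input samples
    a : (L, n) randomly assigned input weights a_i
    b : (L,)   randomly assigned biases b_i
    Returns H with H[k, i] = h_i(a_i, b_i, x_k).
    """
    return 1.0 / (1.0 + np.exp(-(X @ a.T + b)))  # sigmoid activation

def elm_output(X, a, b, beta):
    """Network output f_L(x) = h(x) beta, per Eq. (1)."""
    return elm_hidden_output(X, a, b) @ beta
```

Because a and b are drawn randomly and never trained, only the linear map \(\beta \) remains to be solved for.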

For a multiclass classifier or a regression problem, we assume there are m output nodes. If the original class label is p, the expected output vector of the m output nodes is \(t_i =[0,..., 0,1^p, 0,..., 0]^T\); that is, only the pth element of \(t_i = [t_{i,1},...,t_{i,m}]\) is one, while the remaining elements are zero. The optimization problem for ELM with multiple output nodes can be formulated as

$$\begin{aligned}&\min ~\frac{1}{2}\Vert \beta \Vert ^2+\frac{1}{2}\lambda \sum _{i=1}^N\Vert \xi _i\Vert ^2 \nonumber \\&\mathrm{s.t.}~h(x_i)\beta =t_i^T-\xi _i^T, ~i=1,...,N \end{aligned}$$
(2)

where \(\xi _i= [\xi _{i,1},...,\xi _{i,m}]^T\) is the training error vector of the m output nodes with respect to the training sample \(x_i\).

After the hidden nodes’ parameters are chosen randomly, the ELM neural network can be considered a linear system, and the output weights can be optimized according to the optimization procedure (2). The universal approximation capability was analyzed in [10]: an ELM neural network with randomly generated additive nodes and a wide range of activation functions can approximate any continuous target function on any compact subset of Euclidean space. The training process of ELM has two phases: (1) feature mapping and (2) solving for the output weights. Since solving for the output weights is formulated as an optimization problem, we can design different feature mapping approaches to improve the ELM algorithm.
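The second phase admits a closed-form solution: with H the stacked hidden-layer outputs and T the stacked targets, the minimizer of (2) is \(\beta = (H^TH + I/\lambda )^{-1}H^TT\). A minimal sketch (the regularization value is illustrative):

```python
import numpy as np

def solve_output_weights(H, T, lam=1e3):
    """Closed-form solution of problem (2):
    beta = (H^T H + I/lam)^{-1} H^T T."""
    L = H.shape[1]
    return np.linalg.solve(H.T @ H + np.eye(L) / lam, H.T @ T)
```

For large \(\lambda \) this approaches the ordinary least-squares fit; smaller \(\lambda \) shrinks \(\Vert \beta \Vert \) for better generalization.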

2.2 Fuzzy Model for Pairwise Comparison

For some preference and decision-making problems, it is very difficult to collect opinions such as “which one is the best?” or “what is the optimal solution?”. However, it is possible to collect pairwise comparison information, such as “which one is better, A or B?”. People are more willing to answer these 2-alternative forced choice (2AFC) questions. In this paper, pairwise comparison information is assumed to be collected randomly, consistently and with a roughly uniform distribution.

Following the study of pairwise comparisons in [18], a value \(s_{ij}\) is assigned to the comparison pair (i, j), representing the relative preference of i over j. If element i is preferred to j, then \(s_{ij} > 1\). Consider a prioritisation problem with N unknown priorities; the reciprocal property \(s_{ji}=1/s_{ij}\) for \(i,j=1, 2, ..., N\) always holds. A positive reciprocal matrix of pairwise comparisons \(S=\{s_{ij}\}\in \mathfrak {R}^{N \times N}\) is constructed through \(N(N-1)/2\) judgements. A priority vector \(v=(v_1,v_2,...,v_N)^T\) may then be derived from the matrix. Moreover, the priorities can be made to satisfy the partition of unity

$$\begin{aligned} v_1+v_2+\cdots +v_N=1,~ v_i\ge 0, i=1,2,...,N \end{aligned}$$
(3)

When all elements \(s_{ij}\) have perfect values, then

$$\begin{aligned} s_{ij}=v_i/v_j,~s_{ij}=s_{ik}s_{kj},~ i,j,k=1,2,..,N \end{aligned}$$
(4)

However, the evaluations \(\{s_{ij}\}\) are usually not perfect; they only approximately estimate the exact ratios \(v_i/v_j\). In addition, the set of comparisons is incomplete when N is very large.
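When the matrix is complete, a common way to derive a priority vector from noisy judgements is the row geometric mean, normalised per Eq. (3). This is a standard heuristic from the pairwise-comparison literature, not the paper's fuzzy method, but it illustrates how v relates to S:

```python
import numpy as np

def priority_vector(S):
    """Derive priorities v from a positive reciprocal matrix S
    via the row geometric mean, normalised so sum(v) = 1 (Eq. 3)."""
    g = np.prod(S, axis=1) ** (1.0 / S.shape[0])
    return g / g.sum()
```

For a perfectly consistent matrix with \(s_{ij}=v_i/v_j\) (Eq. 4), this recovers v exactly.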

A fuzzy approach to priority derivation is therefore adopted to deal with imperfect pairwise comparisons based on inexact and incomplete judgements. In fuzzy set theory, the membership function of a fuzzy set (with range covering the interval (0, 1)) represents a degree of truth as an extension of valuation. Values between 0 and 1 characterize fuzzy members, which belong to the fuzzy set only partially [13]. A fuzzy membership function is thus a suitable tool for evaluating perceptual models of preference. A normal fuzzy set \(\tilde{s}\) is a triangular fuzzy number, defined by three real numbers \(l\le m\le u\), with a piecewise-linear continuous membership function \(\mu _{\tilde{s}}(\cdot )\) having the following characteristics [14]:

  • \(\mu _{\tilde{s}}(\cdot )\) is a continuous mapping from \(\mathfrak {R}\) to the closed interval [0, 1]

  • For all \(x\in [-\infty , l]\) and \(x\in [u, \infty ]\), \(\mu _{\tilde{s}}(x)=0.\)

  • \(\mu _{\tilde{s}}(x)\) is strictly linearly increasing on [lm] and strictly linearly decreasing on [mu].

  • For \(x=m\), \(\mu _{\tilde{s}}(x)=1.\)

Let the pairwise comparison judgements \(\{s_{ij}\}\) be represented by fuzzy numbers \(s_{ij}=(l_{ij},m_{ij},u_{ij})\), and consider a set of m \((m<N(N-1)/2)\) incomplete pairwise comparisons. If the judgements are inconsistent, no priority vector satisfies all interval judgements simultaneously. However, it is reasonable to look for a vector that satisfies all judgements “as well as possible”; that is, a solution vector is “good enough” if it satisfies all interval judgements approximately.
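A triangular membership function with the four properties above can be written directly (a minimal sketch assuming \(l<m<u\)):

```python
def triangular_membership(x, l, m, u):
    """Membership degree mu(x) of the triangular fuzzy number (l, m, u),
    assuming l < m < u: zero outside [l, u], one at m, linear between."""
    if x <= l or x >= u:
        return 0.0
    if x <= m:
        return (x - l) / (m - l)   # strictly increasing on [l, m]
    return (u - x) / (u - m)       # strictly decreasing on [m, u]
```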

3 Model Description

The proposed fuzzy extreme learning machine model combines convolutional ELM and fuzzy pairwise learning as optimization constraints, as shown in Fig. 1. The training procedure consists of two phases: feature mapping and optimization.

Fig. 1.
figure 1

Block diagram of fuzzy extreme learning machine

3.1 ELM Feature Mapping

Convolution is the process of multiplying each element of an image by its local neighbors, weighted by a kernel. Different kernels produce a wide range of effects, such as blurring, sharpening, embossing and edge detection. Convolution has been widely used to extract features, especially from images. Inspired by these prior works, the ELM feature mapping procedure is as follows:

  • First, choose kernels randomly and convolve them with the input data to obtain convolutional feature maps.

  • Second, perform a pooling operation to maintain rotation invariance and reduce the data size.

  • Then, generate random, sparse weights from the convolved and pooled data.

  • Finally, the feature map is ready for extraction and optimization in the next phase.
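The four steps can be sketched as follows; the kernel count, pooling size, hidden width and sparsity level are illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_feature_map(img, n_kernels=4, ksize=3, pool=2):
    """Sketch of the feature mapping: random kernels -> convolution
    -> max pooling -> sparse random projection. `img` is a 2-D array."""
    h, w = img.shape
    kernels = rng.standard_normal((n_kernels, ksize, ksize))
    maps = []
    for k in kernels:
        # 'valid' 2-D convolution via explicit loops (no SciPy dependency)
        out = np.zeros((h - ksize + 1, w - ksize + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(img[i:i+ksize, j:j+ksize] * k)
        # pool x pool max pooling to shrink the map
        ph, pw = out.shape[0] // pool, out.shape[1] // pool
        pooled = out[:ph*pool, :pw*pool].reshape(ph, pool, pw, pool).max(axis=(1, 3))
        maps.append(pooled.ravel())
    feat = np.concatenate(maps)
    # sparse random hidden weights bridge the pooled features to the hidden layer
    W = rng.standard_normal((feat.size, 32)) * (rng.random((feat.size, 32)) < 0.1)
    return feat @ W
```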

3.2 ELM Optimization

Based on the preliminaries on the extreme learning machine and fuzzy constraints, the optimization procedure of the fuzzy extreme learning machine is formulated in this subsection. Let v denote the priority vector to be learned from the input data, and let H denote the feature mapping operator from the input data to the hidden-layer output, \(H:\{x_i\}\mapsto \{z_i\}\):

$$\begin{aligned} z_i=H(x_i), i=1,2,3,...,N. \end{aligned}$$
(5)

According to (2), the optimization problem for fuzzy ELM can be formulated as follows:

$$\begin{aligned}&\min ~\frac{1}{2}\Vert \beta \Vert ^2+\frac{1}{2}\lambda \Vert H(x)\beta -v\Vert ^2+\frac{1}{2}\rho \Vert \mu \Vert ^2 \nonumber \\&\mathrm{s.t.} Rv-d\mu \le 0, 0\le \mu \le 1,\nonumber \\&~~~~~~ \sum _{i=1}^N v_i=1, v_i>0, i=1,2,...,N. \end{aligned}$$
(6)

where \(d= [d_1,d_2,...,d_m]^T\) is a vector of tolerance parameters, and \(\lambda \) and \(\rho \) are multipliers satisfying \(\lambda >0\) and \(\rho >0\). Since the optimization problem (6) is a convex quadratic program with linear equality and inequality constraints, standard solvers can handle it in a very short time [2].
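As a sketch of how (6) can be handed to a general-purpose solver, the following toy implementation stacks \(\beta \), v and \(\mu \) into one decision vector and uses SciPy's SLSQP. The problem sizes, solver choice and parameter values are ours; the strict inequality \(v_i>0\) is relaxed to a small lower bound:

```python
import numpy as np
from scipy.optimize import minimize

def solve_fuzzy_elm(H, R, d, lam=10.0, rho=1.0):
    """Toy solve of problem (6): decision variables are
    beta (L), priorities v (N) and fuzzy slacks mu (m)."""
    N, L = H.shape
    m = R.shape[0]

    def split(z):
        return z[:L], z[L:L+N], z[L+N:]

    def obj(z):
        b, v, mu = split(z)
        return 0.5*b@b + 0.5*lam*np.sum((H@b - v)**2) + 0.5*rho*mu@mu

    cons = [
        # sum(v) = 1 (partition of unity)
        {'type': 'eq',   'fun': lambda z: split(z)[1].sum() - 1.0},
        # R v - d mu <= 0, written as d mu - R v >= 0
        {'type': 'ineq', 'fun': lambda z: d*split(z)[2] - R@split(z)[1]},
    ]
    bounds = [(None, None)]*L + [(1e-6, None)]*N + [(0.0, 1.0)]*m
    z0 = np.concatenate([np.zeros(L), np.full(N, 1.0/N), np.zeros(m)])
    res = minimize(obj, z0, constraints=cons, bounds=bounds, method='SLSQP')
    return split(res.x)
```

Dedicated QP solvers would be faster for large instances, but the structure of (6) is unchanged.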

To conclude, the fuzzy extreme learning machine procedure for preference optimization can be summarized as follows:

  • Given a training set \(\{x_i\}\), and m pairwise comparisons \(\{s_{ij}\}\) within the training set \(\{x_i\}\), \(i,j=1,2,...,N\).

  • Define triangular membership functions \((l_{ij},m_{ij},u_{ij})\) for all the pairwise comparisons.

  • Denote hidden node number as L and randomly generate kernels for convolution.

  • Use convolution and pooling for feature mapping, and denote the feature mapping relationship as \(H: \{x_i\}\mapsto \{z_i\}\).

  • Add the fuzzy constraints when minimizing the mismatch, and solve the convex optimization problem (6).

  • Calculate the output weight \(\beta \).

  • Find the ranking priority result as \(\{v_i\}\).

  • Obtain the best element \(i^*\) of the training set, satisfying \(v_{i^*}= \max _i (v_i)\).

4 Perceptual Model for 3D Printing Orientations

Additive manufacturing methods often require robust branching support structures to prevent material collapse at overhangs, which leave unsightly surface artifacts after the supports are removed. Figure 2 shows 3D printed models containing artifacts from the different support structures produced when printing along different directions. Improper support can damage the small features of a model and cause visual artifacts. This section improves the perceptual model for determining 3D printing orientations discussed in [31], where four metrics were considered: contact area, visual saliency, viewpoint preference, and smoothness entropy. Despite the effectiveness of the method in [31], the algorithm has some drawbacks. First, the metrics have different dimensions, and the evaluation function is not dimension-invariant (i.e., \(F(d)\ne F(10d)\)), so the model should be normalized before training. Second, the metrics in [31] cannot be proved to be exhaustive; other, undetected metrics may exist. Third, the graphics computation is time-consuming: for complex models, calculating the metrics may take several hours. To overcome these shortcomings, we propose a fuzzy ELM approach in this section to improve the algorithm.

Fig. 2.
figure 2

3D printed model from different support structure placement.

4.1 Collecting Human Preferences

The Amazon Mechanical Turk (AMT) service is used to collect human preferences. Pairwise comparisons between two printing directions are conducted through surveys on AMT. Several models are employed to generate tasks as follows. We uniformly sample the Gauss sphere to obtain 1448 possible printing directions. For each model, 500 pairs of printing directions are randomly picked, and each pair is shown to 8 people. Among them, 16 random pairs per model occur more than once to test reliability. For each picked pair \(s_{ij}\), assume that direction i was picked \(p_i\) times and direction j was chosen \(p_j\) times over all comparisons between directions i and j, for \(i,j=1,2,3,...,N\). If \(p_i=p_j\), the preference between i and j is highly contradictory, so the uncertainty of \(s_{ij}\) is comparatively large. Conversely, when \(p_j=0\) direction i is definitely preferred, and vice versa for \(p_i=0\). Moreover, as \(p_i\) and \(p_j\) increase, the vagueness of the preference decreases. Based on these observations, the pairwise comparison judgements \(\{s_{ij}\}\) can be represented by fuzzy numbers \(s_{ij}=(l_{ij},m_{ij},u_{ij})\). We can then evaluate human preferences and apply this information to train the perceptual model.
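One plausible mapping from vote counts to triangular fuzzy numbers with these qualitative properties is sketched below. The paper does not give its exact formula, so the smoothing constant and decay rule here are our assumptions:

```python
def fuzzy_judgement(p_i, p_j, spread=1.0):
    """Map vote counts (p_i, p_j) to a triangular fuzzy number (l, m, u).

    Illustrative only: the modal value m grows with the ratio p_i/p_j
    (smoothed to avoid division by zero), and the spread shrinks as the
    total number of votes p_i + p_j increases, matching the qualitative
    behaviour described in the text.
    """
    m = (p_i + 0.5) / (p_j + 0.5)      # smoothed preference ratio
    w = spread / (1.0 + p_i + p_j)     # vagueness decays with more votes
    return m * (1.0 - w), m, m * (1.0 + w)
```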

4.2 Input Representation

The input is a set of 3D shapes in OBJ or STL (STereoLithography) format. Both formats are widely used for rapid prototyping, 3D printing and computer-aided manufacturing [6]. STL files describe only the surface geometry of a three-dimensional object, without any representation of color, texture or other common CAD model attributes. OBJ is likewise a universally accepted geometry definition file format representing 3D geometry alone, namely the position of each vertex and the faces that make up each polygon, defined as lists of vertices.

To find the “best” 3D printing results, additional information should be attached to the 3D model itself. The distribution of contact area (\(f_1\)) is considered first. For a given printing orientation, the surface area of the regions connecting to supporting structures is determined by considering overhangs as well as potential connections at the bases of supports. According to the graphics study in [23], a surface region needs support only if the angle between its tangent plane and the printing direction is larger than a critical angle \(\alpha \); faces with smaller angles are self-supported. In this section we set \(\alpha =25^{\circ }\) following [26]. The distribution of contact area \(f_1\) can then be computed and visualized as one of the important features.
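For unit face normals n and a unit print direction d, the tangent-plane condition reduces to testing downward-facing overhangs via \(n\cdot d < -\sin \alpha \). The following sketch totals the supported area under our simplifying assumption that the mesh is given as arrays of face normals and areas:

```python
import numpy as np

def contact_area(normals, areas, d, alpha_deg=25.0):
    """Total area of faces needing support along print direction d.

    A downward-facing face needs support when the angle between its
    tangent plane and d exceeds the critical angle alpha, which for
    unit normals n is the test n . d < -sin(alpha).
    """
    d = d / np.linalg.norm(d)
    needs_support = normals @ d < -np.sin(np.radians(alpha_deg))
    return float(areas[needs_support].sum())
```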

In addition, subjective preference is very important. For example, we prefer no supports on eyes and faces, and for some man-made models we prefer no defects on the working plane. An interactive tool is developed to incorporate subjective preference into a model. Denote the distribution of the subjective importance feature by \(f_2\). Selecting an important direction \(d_{a1}\), a harmonic field \(H_1(d)\) is computed as

$$\begin{aligned}&\nabla ^2 H_1(d)=0 \nonumber \\&\mathrm{s.t.~} H_1(d_{a1})=1, H_1(-d_{a1})=0, \end{aligned}$$
(7)

Similarly, selecting a least important direction (e.g., the base) as \(d_{a2}\), a harmonic field \(H_2(d)\) can be computed in the same way. The distribution of the subjective importance feature is then merged as

$$\begin{aligned} f_2=\min (H_1,H_2,...) \end{aligned}$$
(8)

Moreover, geometric features, such as vertex coordinates and the relationships between vertices, should also be considered; denote the geometry feature by \(f_3\). As a result, the input data for the perceptual 3D printing model is a combination of the contact area distribution feature (\(f_1\)), the subjective importance feature (\(f_2\)) and the structural geometry feature (\(f_3\)).

5 Perceptual Models of Viewpoint Preference

In this section, we demonstrate the improved performance of three perceptual models for 3D printing orientations, which place fewer support structures at visually important regions. Our results are also compared with those of Autodesk Meshmixer, which considers only a single factor.

Using the proposed approach, Fig. 3 shows the “optimal”, “default” and “Meshmixer” printing directions for “Kitten”. Figures 4 and 5 show the “optimal” and “default” printing directions for “Bunny” and “Armadillo”, respectively, which further substantiate the effectiveness of the proposed method. The additive support is limited, and no support is added on regions that are important for viewpoint preference. Therefore, the negative influence of the supporting structures is reduced as much as possible.

Fig. 3.
figure 3

“Optimal”, “default” and “Meshmixer” printing directions for “Kitten”.

Fig. 4.
figure 4

“Optimal”and “default” printing directions for “Bunny”.

Fig. 5.
figure 5

“Optimal”and “default” printing directions for “Armadillo”.

6 Conclusion

This paper introduced a fuzzy extreme learning machine approach for training perceptual models of preference. The novel approach obtains preference priorities from a set of inconsistent pairwise comparisons, combining the advantages of feature mapping, ELM optimization and fuzzy membership formulation. Two applications are given in detail: selecting the “best” viewpoint of 3D objects and optimizing the 3D printing direction. Compared with previous results, our approach has better robustness and generalization capability when the input comparisons are inaccurate and inconsistent. Moreover, our model can be improved as more models are trained. Beyond graphics, the approach can be applied to a wide range of areas, such as parametric design for clothing patterns and mechanical structure optimization, and it demonstrates good human-computer interaction practice. Future work will focus on improving our algorithm for better performance and extending our approach to more applications.