Abstract
Complexity is an inherent property of the systems being designed. The need to manage complexity is constantly growing, since systems per se are becoming more and more complex, mainly due to technological advances, increasing user requirements and market pressure. Complexity management can help to increase the quality and understandability of developed products, decrease the number of design errors [TZ81] and shorten development time. Managing complexity means, first of all, knowing how to measure it. Complexity measures allow reasoning about system structure, understanding system behaviour, comparing and evaluating systems and foreseeing their evolution.
1 What Is Complexity?
Complexity is an inherent property of the systems being designed. The need to manage complexity is constantly growing, since systems per se are becoming more and more complex, mainly due to technological advances, increasing user requirements and market pressure. Complexity management can help to increase the quality and understandability of developed products, decrease the number of design errors [TZ81] and shorten development time. Managing complexity means, first of all, knowing how to measure it. Complexity measures allow reasoning about system structure, understanding system behaviour, comparing and evaluating systems and foreseeing their evolution.
Researchers and practitioners have struggled with the complexity problem for more than three decades. Software engineers have long seen complexity as a major factor affecting design quality and productivity. The efforts to manage complexity have resulted in the introduction and study of such general principles as separation of concerns, information hiding, system decomposition and raising the abstraction level of a system design [CHW98]. On the other hand, new design methodologies, which implement those principles combined with various design techniques (e.g. object-oriented design, generative programming [CW07], etc.), have emerged and continue to evolve. An evident example is product line engineering (PLE) [KLD02], which shifts from the design of a single system to the design of a family of related systems. The methodology widely exploits the model-driven approach, where high-level domain models are the focus.
Complexity is a difficult concept to define. Though the term ‘complexity’ is used in many of the over 25 roadmaps for software [BR00] and can, for example, be found in relation to software development, software metrics, software engineering for safety, reverse engineering, configuration management and empirical studies of software engineering [Vis05], so far there is no exact understanding of what is meant by complexity, and various definitions are still being proposed. High complexity of a system usually means that we cannot represent it in a short and comprehensive description. Briand et al. [BMB96] state that complexity (of a modular software system) is a system property that depends on the relationships among elements and is not a property of any isolated element. IEEE Std. 610.12:1990 [IEEE90] defines software complexity as ‘the degree to which a system or component has a design or implementation that is difficult to understand and verify’. Therefore, complexity relates both to comprehension complexity and to representation complexity.
Another definition deals with psychological complexity (also known as cognitive complexity) of programs, explaining that the ‘true meaning of software complexity is the difficulty to maintain, change and understand software’ [Zus91]. There are three specific types of psychological complexity that affect a programmer’s ability to comprehend software: problem complexity, system design complexity and procedural complexity [CA88]. Problem complexity is a function of the problem domain. It is assumed that complex problem spaces are more difficult to comprehend than simple problem spaces.
Knowledge-based perception of software complexity is described in [RMW+04] as a process of ‘translating’ human-perceived complexity into numbers. The process starts with an experiment that involves human beings and provides data with embedded knowledge about human perception of complexity. Data processing and analysis of the data models lead to the discovery of simple rules that represent human perception of software complexity.
From the organizational viewpoint, the complexity of a system is defined with respect to the number, dissimilitude and variety of states of system elements and the relationships between them [BAK+04]. These complexity variables enable the distinction between structural (static) and dynamic complexity. Structural complexity describes the system structure at a defined point in time, and dynamic complexity represents the change of system configuration over time.
The aim of this chapter is to contribute towards research in software complexity measurement and management by defining complexity metrics specifically for feature models and meta-programs. The research is relevant because of the importance of ensuring meta-program testability and reliability and of developing effective meta-program testing procedures. Meta-program complexity measures can contribute to these goals, much as software metrics help predict critical information about the reliability and maintainability of software systems through automatic analysis of source code.
2 Complexity Management
How can complexity be managed? Many factors influence the management of complexity. For example, from the cognitive complexity viewpoint, the major factor is understandability [MA08a]. One way to avoid exceeding cognitive constraints and creating cognitive overload is to reduce the amount of information that needs to be stored in short-term memory and to decrease the uncertainty of that information [MV95]. A common method to achieve this is to create new and useful abstractions. Since a program is more than just informative code, a programmer’s level of expertise in a given domain, that is, domain knowledge, greatly affects program understanding as well. The commonly recognized principles for managing complexity are reducing the amount of information, decomposing a system into modules, abstracting or hiding information and providing different levels of abstraction.
Software design complexity is also related to design quality. As complexity increases, design quality tends to decrease. To achieve the levels of quality needed in today’s complex software designs, quality must be designed in, not tested in. Thus the design-for-quality paradigm is becoming extremely important. In this context, Keating [Kea00] proposes a simple software design partitioning rule as a basis for a quantitative measure of complexity: the number of modules at any level of hierarchy must be 7 ± 2.
The growth of complexity forces researchers to seek adequate means for better management of complexity. A number of techniques have been identified and followed in software design practice that enforce higher program comprehensibility and reuse and ease complexity management. These include various lexical conventions, design style conventions and design process conventions. The primary tasks are understanding the complexity problem and finding relevant measures for evaluating software complexity. These issues may have a direct influence on testability, performance, efficiency and other characteristics of software systems to be designed.
Software metrics have always been strongly related to the programming paradigm used by the respective researchers. For example, McCabe’s cyclomatic complexity [Cab76, SEI06] was proposed for measuring the testing efforts of structural programs. For object-oriented programs, complexity metrics are based on special object-oriented (OO) features, such as the number of classes, depth of inheritance tree, number of subclasses, etc. [SEI06].
With the arrival of higher-level programming paradigms such as aspect-oriented programming, generic programming or meta-programming, new complexity metrics should be defined because metrics applied to programs implemented in different paradigms than the one they were developed for may give false results [SPP06].
3 Complexity Metrics
Complexity measures allow reasoning about system structure, understanding system behaviour, comparing and evaluating systems and foreseeing their evolution. System design complexity addresses the complexity associated with mapping a problem space into a given representation. An overall rating of system complexity is the sum of the individual module complexities, each combining the complexity associated with the module’s connections to other modules (structural complexity) and the amount of work the module performs (data complexity) [CG90].
Structural complexity addresses the concept of coupling, that is, the interdependence of modules of source code. It is assumed that the higher the coupling between modules, the more difficult it is for a programmer to comprehend a given module. Data complexity addresses the concept of cohesion, that is, the intradependence of modules. In this case, it is assumed that the higher the cohesiveness, the easier it is for a programmer to comprehend a given module. The structural and data complexity measures are based on the module’s fan-in, fan-out and number of input/output variables. These metrics address system complexity at the system and module levels. Procedural complexity is associated with the complexity of the logical structure of a program, assuming that the length of a program in lines of code (LOC) or the number of logical constructs such as sequences, decisions or loops determines the complexity of the program.
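The fan-in/fan-out basis of these measures can be sketched in code. The module names, call graph and I/O variable counts below are hypothetical, and the two scores are simplified stand-ins (a Henry-Kafura-style coupling score and an I/O-per-connection ratio), not the exact [CG90] formulas:

```python
# Hypothetical call graph: module -> list of modules it calls.
calls = {
    "main":   ["parse", "report"],
    "parse":  ["lex"],
    "report": ["format", "lex"],
    "lex":    [],
    "format": [],
}
io_vars = {"main": 2, "parse": 3, "report": 4, "lex": 2, "format": 1}

def fan_out(m):
    return len(calls[m])

def fan_in(m):
    return sum(m in callees for callees in calls.values())

for m in calls:
    # Simplified stand-ins: coupling grows with connections,
    # data complexity spreads I/O variables over the connections.
    structural = (fan_in(m) * fan_out(m)) ** 2   # Henry-Kafura style
    data = io_vars[m] / (fan_out(m) + 1)
    print(f"{m:8s} fan-in={fan_in(m)} fan-out={fan_out(m)} "
          f"structural={structural} data={data:.2f}")
```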
Rauterberg [Rau96] addresses a similar problem, that is, how to measure cognitive complexity in human-computer interaction. He proposes to derive cognitive complexity (CoC) from behaviour complexity (BC), system complexity (SC) and task complexity (TC) as: CoC = SC + TC - BC.
Sheetz et al. [STM91] address the complexity of the OO system at the application, object, method and variable levels, and at each level propose measures to account for the cohesion and coupling aspects of the system. Complexity is presented as a function of the measurable characteristics of the OO system, such as fan-in, fan-out, number of I/O variables, fan-up, fan-down and polymorphism.
Cyclomatic complexity is one of the most widely accepted static software metrics [SEI06]. It is intended to be independent of language and language format. Other metrics bring out other facets of complexity, including both structural and computational complexity: Halstead complexity measures [Hal77] identify algorithmic complexity, measured by counting operators and operands; Henry and Kafura metrics [HK81] indicate coupling between modules (parameters, global variables, calls); Bowles metrics [SEI06] evaluate module and system complexity and coupling via parameters and global variables; Troy and Zweben metrics [TZ81] evaluate modularity or coupling and complexity of structure (maximum depth of a structure chart). Wang’s cognitive complexity measure [Wan09] indicates the cognitive and psychological complexity of software as a human intelligence artefact. New complexity metrics have also been proposed for aspect-oriented programming (AspectJ) [PSP06] and generic programming (C++ STL) [PPP07].
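For illustration, McCabe's cyclomatic number can be computed directly from a control-flow graph as V(G) = E - N + 2P; the tiny graph below (a routine with a single if-else decision) is a made-up example:

```python
# V(G) = E - N + 2P: edges minus nodes plus twice the number of
# connected components (P = 1 for a single routine).
def cyclomatic(edges, num_nodes, num_components=1):
    return len(edges) - num_nodes + 2 * num_components

# Control-flow graph of a routine with one if-else decision.
edges = [("entry", "cond"), ("cond", "then"), ("cond", "else"),
         ("then", "exit"), ("else", "exit")]
print(cyclomatic(edges, num_nodes=5))  # 5 - 5 + 2 = 2
```

As expected, one decision yields V(G) = 2, i.e. the number of binary decisions plus one.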
There were efforts to describe formal properties of complexity metrics that could be used for evaluation and theoretical validation of complexity measures. Weyuker [Wey88] introduces a set of syntactic software complexity properties as criteria and examines the strengths and weaknesses of the known complexity measures, which include statement count, cyclomatic number, effort measure and data flow complexity. Briand et al. [BMB96] provide a theoretical framework for relating structural complexity, cognitive complexity and external quality attributes.
4 Complexity Measures of Feature Models as Meta-Programs Specifications
As system designs evolve under the pressure and demands for better quality, higher functionality and shorter time to market, the growth of complexity has a direct impact on design methods, approaches and paradigms. Complexity is an intrinsic attribute of systems and of the processes through which systems are created. One way to manage design complexity is to enhance reuse in the context of PLE, where requirements may evolve. What happens when we need to extend the scope of requirements beyond one system/component or beyond a family of related systems/components, given some prediction of their possible usage in a wider context? It is easy to predict intuitively: the models we need to deal with become more and more complex. But to what limits can we let the complexity of the models grow, in terms of requirements prediction and implementation difficulties, and how should we manage this complexity at a higher abstraction level? The task is to understand the complexity issues and to learn to measure complexity quantitatively. There are two different views on complexity [LG06]: complexity as ‘difficulty to test’ (i.e. the number of test cases needed to achieve full path coverage) and complexity as ‘difficulty to understand a model’. The latter is also known as the cognitive complexity of a model. Cardoso et al. [CMN+06] also identify different types of complexity: computational complexity, psychological (cognitive) complexity and representational complexity. Cognitive complexity focuses on the analysis of how complicated a problem is from the perspective of the person trying to solve it. Cognitive complexity is related to short-term memory limitations, which vary depending on the individual and on what kind of information is being retained [Kin98]. For software designers, the ability to cope with the complexity of a domain model is a fundamental issue, which influences the quality of the final product.
High cognitive complexity of a model leads to a higher risk of making design errors and may lead to lower than required quality of a developed product, such as decreased maintainability. We claim that the properties (such as structural complexity and size) of a feature model represented using Feature Diagrams (FDs) have an impact on its cognitive complexity.
In this context, it is useful to have a boundary for cognitive complexity. We rely on Miller’s early work [Mil56] stating that human beings can hold 7 (±2) chunks of information in their short-term memory at one time. We also use the rule of Keating, which is based on Miller’s work as applied to the design domain: ‘the number of modules at any level of hierarchy must be 7 ± 2’ [Kea00]. Our empirical rule (Rule 1) for the boundary of cognitive complexity as applied to the feature model is as follows:
Rule 1.
The number of variation points in an FD must be 7 ± 2 if a designer wants to avoid the consequences of high cognitive complexity. If the number of variation points is fewer than 5, the value of the model may be diminished due to the decreasing granularity level and too much information hiding. If the number of variation points is more than 9, the user needs to decompose the model into parts or levels so that each part remains within the limits of cognitive complexity.
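Rule 1 reduces to a trivial check; the thresholds are the ones stated above, while the function name and diagnostic strings are our own:

```python
def check_rule1(num_variation_points: int) -> str:
    """Classify a feature diagram by its number of variation points
    against the 7 +/- 2 band of Rule 1."""
    if num_variation_points < 5:
        return "too coarse: granularity decreases, too much information hiding"
    if num_variation_points > 9:
        return "too complex: decompose the model into parts or levels"
    return "within cognitive limits"

for n in (3, 7, 12):
    print(n, check_rule1(n))
```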
Rule 2.
The cognitive complexity of an FD is calculated as the maximal number of levels in a feature hierarchy or the maximal number of features in each level of a feature hierarchy.
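Under one reading of Rule 2 (taking the larger of the two quantities), the measure can be computed by a level-by-level walk of the feature hierarchy. The small tree below is a hypothetical fragment in the spirit of the reservation example used later:

```python
# Hypothetical feature hierarchy: feature -> sub-features.
tree = {
    "reservation": ["customer", "booking", "billing"],
    "booking": ["room", "car", "boat"],
    "room": ["single", "double"],
    "customer": [], "billing": [], "car": [], "boat": [],
    "single": [], "double": [],
}

def levels(root):
    """Return the features of the hierarchy grouped level by level."""
    out, frontier = [], [root]
    while frontier:
        out.append(frontier)
        frontier = [child for f in frontier for child in tree[f]]
    return out

lv = levels("reservation")
depth = len(lv)                        # number of levels: 4
max_breadth = max(len(l) for l in lv)  # widest level: 3
print(max(depth, max_breadth))         # Rule 2 value: 4
```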
Rule 3 describes the structural (representational) complexity of a feature model. Rule 3 has some correlation with the cyclomatic number, a well-known measure for evaluating the complexity of a program [SEI06]. Each path in a program graph corresponds to a sub-tree in the feature diagram, since the realization of the sub-tree can be seen as a program (path) with the syntax rules for correct implementation of a particular product instance.
Rule 3.
The structural complexity of a feature model with variation points is evaluated by the number of sub-trees where each variation point has only one selected variant. Each sub-tree is derived from the initial feature diagram as a generic model for a given domain.
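If the variation points are independent, the number of such sub-trees is simply the product of the variant counts. The variation points below are hypothetical; note that cross-tree constraints (e.g. a room type only being meaningful for room reservations) would reduce the count:

```python
from math import prod

# Hypothetical variation points mapped to their number of variants.
variation_points = {
    "reservation_type": 3,  # room | car | boat
    "room_type": 4,         # single | double | presidential | lux
    "payment": 2,           # cash | card
}

# Rule 3: one selected variant per variation point yields one sub-tree,
# so independent variation points multiply.
print(prod(variation_points.values()))  # 3 * 4 * 2 = 24
```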
Based on the empirical research and practical implementations [FMM94], the cyclomatic complexity has the following boundaries: from 1 to 10, the program is simple; from 11 to 20, it is slightly complex; from 21 to 50, it is complex; and above 50, it is over-complex (untestable).
Rule 4 states how cognitive complexity and structural complexity should be combined. It is based on the empirical Metcalfe’s law and Keating’s adaptation of the law for the complexity evaluation of a design partitioning [Kea00]. Metcalfe’s law states that the ‘power’ of a network is equal to the square of the number of nodes on it, and the ‘value’ of the network is equal to the square of the number of branches on the network. Keating’s measure is
Here, C is the design complexity, M is the number of modules in a design and I is the number of interfaces among modules.
As design complexity can be presented as a graph in which vertices represent modules and edges represent interfaces (Keating’s model), we can apply this complexity measure to feature models. What differs in our case is that in a feature diagram vertices and edges play different roles and should therefore have different cognitive weights.
We define the cognitive weight of a feature as the degree of difficulty or the relative time and effort required to comprehend it; the total cognitive weight of a feature model represented as a feature diagram is the sum of the cognitive weights of its graph elements. Following Shao and Wang [SW03], we define the weight of a sequential structure as 1, the weight of branching (if-else) as 2 and the weight of case selection as 3, and introduce cognitive weights into Eq. 12.2 (see Table 12.1).
Rule 4.
The compound complexity measure of a feature diagram (FD) is estimated by Eq. ( 12.2 ):
Here, Cm is the compound complexity measure, F is the number of features (variation points and variants), R_and is the number of mandatory relationships, R_or is the number of optional relationships, R_case is the number of alternative relationships, R_gr is the number of relationship groupings, R is the number of relationships among terminal nodes including constraints, and the division coefficient is the sum of the cognitive weights for equalizing the role of relationships.
Example.
Here, we present an example of how the complexity of feature models can be calculated. Suppose we have a feature model of a reservation system (Fig. 12.1) that can be used to book different types of accommodation or vehicles (room, car, boat). Such systems are commonly used to demonstrate and validate new software development and modelling methods. The system manages information about reservations, customers and customer billing and provides functionality for making reservations, check-in and check-out. A customer may make reservations and change or cancel them. When making a reservation, the customer provides his/her personal details and specifies the type of reservation (car, boat or room), the start date (date of arrival in the case of a room, or beginning of usage in the case of a car or boat) and the end date. In the case of a room reservation, the customer can specify the room type (single, double, presidential, lux). In the case of a car reservation, the customer can specify the car brand.
The results of complexity calculation for this feature model are presented in Fig. 12.2.
5 Evaluation of Abstraction Levels
Abstraction is a basic property for understanding reality and managing the complexity of software systems [Alb03]. The abstraction level is the level of detail of a software system (model, component, program, etc.) [Dam06]. In this sense, abstraction is a primary concept in software engineering. The simplest interpretation of abstraction is the hiding of irrelevant details, though there are many different views of what ‘irrelevant’ means [LBL08]. Abstraction is a gradual increase in the level of representation of a software system, where existing detailed information is replaced with information that emphasizes certain aspects important to the developer while other aspects are hidden. Abstraction is primarily responsible for the evolution of programming languages by stimulating the adoption of higher-level mechanisms and constructs for programming. More abstract programming language mechanisms allow complex and repetitive low-level operations to be replaced. Better abstraction allows addressing complex problems with less code and fewer programming errors.
Though different layers of abstraction represent a qualitative leap in the level of abstraction that allows achieving higher productivity and faster development times, an interesting problem is to evaluate the level of abstraction in a software system quantitatively. The problem is not trivial, because the level of abstraction is related to the concepts of software complexity [Gla02] and information content [TCS04]. Indeed, a representation of a software system at a higher layer of abstraction contains less detail and usually has fewer source code lines than a corresponding representation at a lower layer of abstraction.
The problem considered in this chapter is how to evaluate quantitatively the rise in abstraction introduced by a higher-level language. We argue that it can be evaluated relatively by comparing the information content at both layers of abstraction. Since some information is abstracted away at a higher layer of abstraction, we expect that the quantity of information directly represented at a higher level of abstraction should generally decrease, because much of it is hidden in the underlying tools (pre-processors, compilers, etc.) and software libraries used. However, the entire content of information required to solve a certain problem should remain the same, as stipulated by the law of conservation of information, which states that information in a closed system of natural causes remains constant or decreases [Dem99]. Therefore, the ratio between the content of information at a higher and at a lower level of abstraction is a metric of abstraction.
We can estimate the increase/decrease of abstraction in software by measuring the content of information at different layers of abstraction. There are several methods to evaluate information content/complexity, such as computational complexity, Shannon entropy and topological complexity [Edm99]. We use the algorithmic information content metric known as Kolmogorov complexity [LV97].
Kolmogorov complexity is a measure of the randomness of strings based on their information content. We use a Kolmogorov complexity-based metric to estimate quantitatively the increase in the level of abstraction in meta-programs. Meta-programs are generic programs (or program generators) that encapsulate families of similar software components. We evaluate the level of abstraction in meta-programs as compared to families of domain programs by estimating and comparing the information content at the meta-level and domain level of abstraction using a common compression algorithm.
The main idea of Kolmogorov complexity is to measure the ‘complexity’ of an object by the length of the smallest program that generates it. In the general case, we have a domain object x and a description system (e.g. a programming language) φ that maps from a description w (i.e. a program) to this object. The Kolmogorov complexity K(x) of an object x in the description system φ is the length of the shortest program in the description system φ capable of producing x on a universal computer such as a Turing machine: \( {K_{\varphi }}(x) = \min \left\{ {\left\| w \right\|:\varphi (w) = x} \right\} \)
Kolmogorov complexity is the ultimate lower bound among all measures of information content. Unfortunately, it cannot be computed in the general case [LV97]. Usually, universal compression algorithms are used to give an upper bound to Kolmogorov complexity. Suppose that we have a compression algorithm C_i. Then, the shortest compression of w in the description system φ gives an upper bound to the information content in x: \( K(x) \leq \left\| {{C_i}(w)} \right\| \)
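This bound is easy to demonstrate: any general-purpose compressor gives a computable upper estimate of information content. The sketch below uses Python's bz2 (a Burrows-Wheeler-based compressor) on a repetitive string and a pseudo-random one:

```python
import bz2
import os

redundant = b"AND OR XOR NAND NOR XNOR " * 200  # highly repetitive
random_like = os.urandom(5000)                  # essentially incompressible

for name, data in (("redundant", redundant), ("random", random_like)):
    upper_bound = len(bz2.compress(data))  # computable upper bound on K(x)
    print(f"{name}: {len(data)} B -> {upper_bound} B")
```

The repetitive string compresses to a tiny fraction of its size (low information content), while the random string does not compress at all, matching the intuition behind the bound.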
As abstraction hides the complexity, abstraction of an object x in a description system φ can be defined as an inverse of complexity of x estimated in terms of Kolmogorov complexity:
The increase of abstraction level between a program φw that is a representation of x in a description system φ and a program ψw that is a representation of x in a description system ψ at a higher level of abstraction can be defined as follows:
Bearing in mind Eq. 12.4 and that a meta-program MP is a concise representation of a component family λ, which is a union of all its members P_j, we estimate the increase of abstraction level A in a meta-program as compared to a domain program as follows: \( A = {{\left\| {{C_i}\left( {\bigcup\nolimits_j {{P_j}} } \right)} \right\|} / {\left\| {{C_i}\left( {MP} \right)} \right\|}} \)
Here, C i is a compression algorithm.
The content of information in component families can be estimated using the compression-based information content metric. We use the BWT (Burrows-Wheeler transform) compression algorithm, because it currently achieves the best compression results for text-based information [Man99] and thus better approximates information content. The smallest size of the compressed components puts an upper limit on the estimated information quantity in the analysed component family.
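The whole procedure can be sketched as follows; the 'component family' and 'meta-program' strings are toy stand-ins (not real VHDL or Open PROMOL), and bz2 stands in for the BWT compressor:

```python
import bz2

def info_content(text: str) -> int:
    """Upper estimate of information content: compressed size in bytes."""
    return len(bz2.compress(text.encode()))

# Toy component family: many near-identical generated components.
family = "".join(
    f"entity gate_{f} is port(a, b: in bit; y: out bit); end;\n"
    f"architecture rtl of gate_{f} is begin y <= a {f} b; end;\n" * 5
    for f in ("AND", "OR", "XOR", "NAND", "NOR"))

# Toy meta-program: one generic description of the whole family.
meta_program = (
    "@for f in {AND, OR, XOR, NAND, NOR}:\n"
    "  entity gate_@f ... y <= a @f b ...\n"
    "@end\n")

increase = info_content(family) / info_content(meta_program)
print(f"estimated abstraction increase: {increase:.2f}")
```

The ratio comes out above 1 because the compressed family still carries more information than the compressed meta-program, mirroring the VHDL gate experiment below.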
Example.
We develop a meta-program, which describes a component family at a higher level of abstraction. The identified generic parameters and their values for the gate component family are as follows:
- Gate_function = { AND, OR, XOR, NAND, NOR, XNOR }
- Gate_inputs = { integer numbers from 2 to 16 }
A meta-program (see Fig. 12.3) was developed using Open PROMOL as a meta-language. This meta-program describes a generic gate and covers a family of 90 different component instances, which can be generated from it.
Then, we evaluate the content of information at the higher (meta-)level of abstraction. We again compress the meta-program using the selected compression algorithm, which in our case is BWT. The smallest size of the compressed meta-program puts an upper limit on the estimated information content at the meta-level.
The increase of abstraction between the meta-level and the domain level is the ratio of the estimated information content at the domain level to that at the meta-level, as stipulated in Eq. 12.7. The size of the meta-program given in Fig. 12.3 is 291 B. The size of the meta-program compressed using the BWT algorithm is 245 B, which is the estimated quantity of information at the meta-level.
Next, we generate all instances of this meta-program for all possible values of the generic parameters f and num. We obtain 90 different component instances (2 of them are given in Fig. 12.4a, b). The total size of these instances is 21,426 B when uncompressed and 726 B after compression.
Next, we apply Eq. 12.8 to obtain the estimated abstraction increase for the gate component family: A = 726 B / 245 B ≈ 2.96.
Thus, we estimate that the introduction of meta-programming for describing generic gate components, using VHDL as a domain language and Open PROMOL as a meta-language, allowed abstraction to be increased by about 3 times.
We have also performed experiments with the following VHDL component families and meta-programs: gate, RSA coding processor, serial multiplier, register, shift register, multiplexer and majority function for voting in fault-tolerant systems, as well as with DSP algorithms implemented as embedded software in C: DCT, FFT, Romberg integration, Chebyshev approximation and Taylor series expansion of popular mathematical functions. The results are summarized in Table 12.2.
The statistical evaluation of the obtained results for abstraction increase (mean = 2.9; std. deviation = 0.992; std. error = 0.286) was performed using a one-sample Student’s t-test. The mean is within the 95% confidence interval.
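As a quick sanity check (not part of the original analysis), the reported standard error is consistent with the 12 component families listed above (7 VHDL plus 5 C):

```python
from math import sqrt

std_dev, std_err = 0.992, 0.286
implied_n = round((std_dev / std_err) ** 2)  # n implied by SE = s / sqrt(n)
print(implied_n)                             # 12
print(round(std_dev / sqrt(implied_n), 3))   # 0.286
```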
The quantity of information has decreased by 2.9 times on average in meta-programs as compared with domain program families. This number varies depending upon the type and size of components, the number of component instances in a component family, the number of generic parameters in a meta-program, similarity of components within a component family and syntactic characteristics of domain language and meta-language. In general, we can estimate that the level of abstraction in Open PROMOL meta-programs is about 3 times higher than the level of abstraction in domain programs.
6 Complexity of Meta-Programs and Meta-Programming Techniques
Meta-programming, as a paradigm for developing programs that create other programs, is a level of complexity above traditional programming paradigms. There are two types of meta-programming: homogeneous and heterogeneous meta-programming (see Chap. 4).
In the case of homogeneous meta-programming, we have two subsets of a domain language: one dedicated to expressing domain functionality and the other to managing variability at the meta-level (generic parameters, templates, etc.). The developer has to know only one programming language syntax, the meta-program is as readable as a domain program written in the same domain programming language, and the development flow uses the same development toolset. Therefore, the complexity of developing meta-programs using the homogeneous meta-programming technique is only slightly higher than the complexity of traditional programming.
In the case of heterogeneous meta-programming, we have two different languages: a domain language itself and a meta-language, which manipulates the source code of domain language programs. As a result, the cognitive complexity of heterogeneous meta-programs, expressed in terms of their readability and understandability, is significantly higher, because the developer must know, understand and use the syntactical constructs of two different languages in the same meta-specification. The development flow is significantly more complex: not only do two development environments have to be used, but the testing of meta-programs is also a significant and time-consuming problem. Therefore, the complexity of developing meta-programs using heterogeneous meta-programming techniques is considerably higher than the complexity of traditional programming.
Complexity measures may be helpful for reasoning about meta-program structure, understanding the relationships between different parts of meta-programs, comparing and evaluating meta-programs. Here, we distinguish between:
1. First-order properties, or characteristics, which are derived directly from the meta-program description itself using simple mathematical actions such as counting, for example, program size (count of symbols in a file)
2. Second-order properties, or metrics, which cannot be derived directly from artefacts but are calculated from first-order properties
Meta-program complexity can be evaluated at several dimensions:
1. Information: Meta-program as a message (sequence of symbols) containing information with unknown syntax and structure.
2. Meta-language: Meta-program as annotated domain knowledge. Domain knowledge is expressed using a domain language, whereas domain variability is specified using a meta-language. Such separation of the domain and meta-levels is a first step towards the creation of a meta-program.
3. Graph: Meta-program as a graph of execution paths, where the root is the meta-program, the nodes are the meta-language constructs, and the leaves are the domain program instances.
4. Algorithm: Meta-program as a high-level program specification (algorithm), which contains a collection of functional (structural) operations. An operation may have one or more operands specified as meta-program attributes (parameters).
5. Cognition: Meta-program as a number of different information units available for human cognition. A unit may represent a meta-language construct (macro, template, function, etc.), its argument or a meta-parameter.
7 Complexity Metrics of Heterogeneous Meta-Programs
We use the following metrics for evaluating complexity at different dimensions of a meta-program: relative Kolmogorov complexity (RKC), meta-language richness (MR), cyclomatic complexity (CC), normalized difficulty (ND), and cognitive difficulty (CD).
7.1 Information Dimension: Relative Kolmogorov Complexity
There are several methods to evaluate informational software complexity, such as Shannon entropy, computational complexity, network complexity and topological complexity. We use the algorithmic complexity metric, also known as Kolmogorov complexity [LV97]. Kolmogorov complexity has been used earlier (under the name of generative software complexity) to measure the effectiveness of applying program generation techniques to software [Hee03]. Program generators were defined as compressed programs, and the shortest generator is assumed to have maximal generative complexity. Here, we evaluate the complexity of a meta-program M using the relative Kolmogorov complexity (RKC) metric, which is calculated using a compression algorithm C as follows:

\( \mathrm{RKC}(M) = \frac{\left\| {C(M)} \right\|}{\left\| M \right\|} \)

Here, \( \left\| M \right\| \) is the size of a meta-program M, and \( \left\| {C(M)} \right\| \) is the size of the compressed meta-program M.
A high value of RKC means that there is a high variability of text content, that is, high complexity. A low value of RKC means high redundancy, that is, the abundance of repeating fragments in meta-program code.
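The RKC metric can be approximated with any general-purpose compressor. Below is a minimal sketch in Python, using zlib as a stand-in for the BWT-based compressor applied later in this chapter; the function name `rkc` and the sample strings are ours:

```python
import random
import zlib

def rkc(meta_program: str) -> float:
    """Relative Kolmogorov complexity: compressed size over original size."""
    data = meta_program.encode("utf-8")
    return len(zlib.compress(data, 9)) / len(data)

# A meta-program full of repeated fragments compresses well (low RKC);
# high-variability text does not (RKC close to 1).
redundant = "@sub[f] @gen[num, {,}, {@sub[f]}]\n" * 50
random.seed(0)
varied = "".join(chr(random.randint(33, 126)) for _ in range(1600))
print(rkc(redundant) < rkc(varied))  # True
```

The choice of compressor only changes the approximation quality: a stronger compressor gives a tighter upper bound on the information content.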
7.2 Meta-language Dimension: Meta-language Richness
A meta-program M can be defined as a collection of domain language statements with corresponding annotations (metadata) expressed symbolically: \( M = \left\langle {\left( {s,m} \right) \mid s,m \in {\Sigma^{*}}} \right\rangle \), where s is a domain language statement, m is the metadata of s and \( {\Sigma^{*}} \) is a string of symbols from the alphabet \( \Sigma \).
For the evaluation of meta-program complexity at the meta-language dimension, we use the meta-language richness (MR) metric:

\( \mathrm{MR}(M) = \frac{\left\| m \right\|}{\left\| M \right\|} \)

Here, \( \left\| M \right\| \) is the size (length) of a meta-program M, and \( \left\| m \right\| \) is the total size (length) of the meta-language constructs in the meta-program M.
A higher value of MR means that a meta-program contains more metadata and its description is more complex.
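As a sketch, MR can be computed by summing the bytes occupied by meta-language constructs. The line-based `is_meta` test below is a simplifying assumption of ours (in Open PROMOL, meta-language function calls begin with '@'); real detection is meta-language specific:

```python
def mr(meta_program: str) -> float:
    """Meta-language richness: size of meta-level text over total size.
    Assumes meta-language constructs occupy whole lines starting with '@'
    (a simplification; real detection depends on the meta-language)."""
    meta = sum(len(line) for line in meta_program.splitlines()
               if line.lstrip().startswith("@"))
    return meta / len(meta_program)

program = "@sub[f]\nentity gate\n"
print(mr(program))  # 7 meta-language bytes out of 20 in total = 0.35
```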
7.3 Graph Dimension: Cyclomatic Complexity
Cyclomatic complexity (CC) [Cab76] of a program directly measures the number of linearly independent paths through a program’s source code from entrance to each exit. For meta-programs, CC is equal to the number of distinct domain program instances that can be generated from a meta-program.
A meta-program M can be defined as a function \( \Phi (M):P \to I \) that maps a set of its parameters P to a set of its domain program instances I. Following this definition, the CC of a meta-program is equal to the cardinality of the set of distinct domain program instances described by the meta-program:

\( \mathrm{CC}(M) = \left| I \right| \)

Since Φ is an injective function, which associates distinct meta-program parameter values with distinct domain program instances, the cyclomatic complexity of a meta-program M can be computed using only the interface description of the meta-program. For independent parameters, the value of CC can be calculated as the product of the numbers of allowed values of each parameter of the meta-program:

\( \mathrm{CC}(M) = \prod\nolimits_{p \in P} {\left| {V(p)} \right|} \), where \( V(p) \) is the set of allowed values of parameter p.
A higher value of CC indicates higher complexity of the meta-program’s parameter set (meta-interface).
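For independent parameters the computation reduces to a product of value-set sizes. A small sketch (the gate parameter sets anticipate the worked example in Sect. 10.1):

```python
from math import prod

def cc(parameters: dict) -> int:
    """Cyclomatic complexity of a meta-program with independent
    parameters: the product of the numbers of allowed values."""
    return prod(len(values) for values in parameters.values())

gate = {"f": ["AND", "OR", "XOR", "NAND", "NOR", "XNOR"],
        "num": list(range(2, 9))}   # 2..8 inputs
print(cc(gate))  # 42 distinct gate instances
```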
7.4 Algorithmic Complexity: Normalized Difficulty
A functional program specification S is a sequence of functions \( S = \left( {f \mid f \in F} \right) \), where f is a specific function (operator) that may take a sequence of operands \( a \in A \) as its arguments, and A is a set of function operands. For meta-programs, we accept that operations are specified as meta-language functions, and operands are specified as meta-program parameters. For the evaluation of meta-program complexity at the algorithm dimension, we use the Halstead complexity metrics [Hal77]. From a meta-program, we derive the number of distinct operators \( {n_1} = \left| F \right| \), the number of distinct operands \( {n_2} = \left| A \right| \), the total number of operators \( {N_1} = \left| S \right| \) and the total number of operands \( {N_2} = \sum_{f \in S} {\left| {A_f} \right|} \), where \( {A_f} \) is the sequence of operands of f.
Halstead difficulty D indicates the cognitive difficulty of a program:

\( D = \frac{n_1}{2} \cdot \frac{N_2}{n_2} \)
Halstead volume V measures the size of a program specification:

\( V = \left( {N_1 + N_2} \right) \cdot {\log_2}\left( {n_1 + n_2} \right) \)
For evaluating meta-program complexity at the algorithm dimension, we propose the normalized difficulty (ND) metric, which is a normalized ratio of the cognitive difficulty and size metrics:

\( \mathrm{ND} = \frac{{n_1} \cdot {N_2}}{\left( {N_1 + N_2} \right) \cdot \left( {n_1 + n_2} \right)} \)
The ND metric measures the complexity of a meta-program as an algorithm. A high ND value means that meta-program is highly complex in terms of time and effort required to understand it.
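The three quantities can be computed together. In the sketch below, the ND formula is reconstructed from the worked examples later in this chapter (the typeset equation is not reproduced in this copy), so treat it as our reading of the metric:

```python
from math import log2

def halstead(n1: int, n2: int, N1: int, N2: int):
    """Halstead difficulty D, volume V and normalized difficulty ND.
    n1/n2: distinct operators/operands; N1/N2: their total counts.
    The ND formula is reconstructed from the chapter's worked examples."""
    D = (n1 / 2) * (N2 / n2)
    V = (N1 + N2) * log2(n1 + n2)
    ND = (n1 * N2) / ((N1 + N2) * (n1 + n2))
    return D, V, ND

# Gate meta-program of Sect. 10.1: 2 distinct of 3 operators,
# 3 distinct of 4 operands.
D, V, ND = halstead(n1=2, n2=3, N1=3, N2=4)
print(round(ND, 2))  # 0.23
```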
7.5 Cognitive Complexity: Cognitive Difficulty
Following Miller [Mil56], who states that humans can hold 7 (±2) chunks of information in their short-term memory at one time, and Keating [Kea00], who claims that the number of modules at any level of a software hierarchy must be 7 ± 2, we propose the cognitive difficulty (CD) metric for evaluating the complexity of meta-programs. Cognitive difficulty is calculated as the maximal number of meta-level units (meta-parameters P, meta-language constructs \( N_1 \) or their respective arguments \( N_2 \)) in a meta-program:

\( \mathrm{CD} = \max \left( {\left| P \right|,{N_1},{N_2}} \right) \)
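As a one-function sketch (our naming), using the counts from the gate meta-program of Sect. 10.1:

```python
def cd(num_params: int, num_constructs: int, num_arguments: int) -> int:
    """Cognitive difficulty: the largest number of meta-level units
    (meta-parameters, meta-language constructs or their arguments)
    a reader must keep in mind at once."""
    return max(num_params, num_constructs, num_arguments)

print(cd(2, 3, 4))  # 4, below Miller's 7 +/- 2 band
```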
The meta-program complexity metrics are summarized in Table 12.3.
8 Complexity of Homogeneous Meta-Programming
Inspired by the ‘interface size’ metric defined by [BVT03] for object-oriented programs, we define complexity metrics of homogeneous meta-programs based on the complexity of their meta-interfaces as follows. The complexity of a homogeneous meta-program is the sum of the complexities of its constituent parts, that is, meta-types and meta-functions:

\( C = \sum {{C^{\text{MT}}}} + \sum {{C^{\text{MF}}}} \)
The complexity of a meta-type \( {C^{\text{MT}}} \) is defined as the number of meta-parameters of the meta-type plus the sum of the weights of those meta-parameters:

\( {C^{\text{MT}}} = \left| {{P^{\text{MT}}}} \right| + \sum\nolimits_{{p_i} \in {P^{\text{MT}}}} {\varpi \left( {{p_i}} \right)} \)
Here, \( {P^{\text{MT}}} \) is a set of the meta-parameters of a meta-type, and \( \varpi \left( {{p_i}} \right) \) is the weight of the meta-parameter \( {p_i} \).
The complexity of a meta-function \( {C^{\text{MF}}} \) is defined as the sum of the complexities of the types of its arguments plus the complexity of the type of the return value:

\( {C^{\text{MF}}} = \sum\nolimits_{a \in {A^{\text{MF}}}} {C\left( a \right)} + C\left( {{r^{\text{MF}}}} \right) \)
Here, \( {A^{\text{MF}}} \) is a set of the arguments of a meta-function, and \( {r^{\text{MF}}} \) is a type of the return value of a meta-function.
The weights of the meta-parameters used for calculation of complexities are presented in Table 12.4.
Example of complexity calculation for a meta-function:
// Java. Determine if an object is in an array.
static <T, V extends T> boolean isIn(T x, V[] y) {
    for (int i = 0; i < y.length; i++)
        if (x.equals(y[i])) return true;
    return false;
}
Complexity = 1 + (2 + 2) = 5
Example of complexity calculation for a template class:
// C++. Generic Vector type
template<class T, int size>
class Vector {
private:
    T values[size];
};
Complexity = 2 + 1 = 3
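The two calculations can be mechanized as below. Table 12.4 is not reproduced in this copy, so the weights are assumptions inferred from the worked results: for meta-function types, generic (template-dependent) types weigh 2 and concrete types 1; for meta-type parameters, type parameters weigh 1 and non-type parameters 0:

```python
# Hypothetical weights, inferred from the two worked examples
# (not taken from Table 12.4, which is unavailable here).
TYPE_COMPLEXITY = {"concrete": 1, "generic": 2}
PARAM_WEIGHT = {"type": 1, "nontype": 0}

def c_mf(argument_kinds, return_kind):
    """Meta-function complexity: sum of argument-type complexities
    plus the return-type complexity."""
    return (sum(TYPE_COMPLEXITY[k] for k in argument_kinds)
            + TYPE_COMPLEXITY[return_kind])

def c_mt(parameter_kinds):
    """Meta-type complexity: parameter count plus the sum of weights."""
    return len(parameter_kinds) + sum(PARAM_WEIGHT[k] for k in parameter_kinds)

print(c_mf(["generic", "generic"], "concrete"))  # isIn:   1 + (2 + 2) = 5
print(c_mt(["type", "nontype"]))                 # Vector: 2 + 1 = 3
```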
9 Theoretical Validation of Complexity Metrics
Validation of software metrics is important to ensure that metrics are accepted by the scientific community and used properly. There are two metric validation methods: theoretical and empirical [Ema00].
Theoretical validation ensures that the metric is a proper numerical characterization of the software property it claims to measure. Empirical validation relates metrics to some important external attributes of software (such as the number of faults). While both types of validation are necessary, empirical validation requires much time and the contribution of many researchers, since many studies need to be performed to gather convincing evidence from real-world libraries and applications that a metric is valid. Meta-program complexity research is not yet mature; therefore, while open meta-program libraries are available for such research, sufficient data on external characteristics of such meta-programs, such as reliability or maintainability, are currently not publicly available.
Therefore, we validate the proposed meta-program complexity metrics theoretically using Weyuker’s properties [Wey88], a set of formal properties that can be used to evaluate any software metrics.
Property 1 is satisfied when we can find two meta-programs of different complexity. All complexity metrics satisfy Property 1.
Property 2 is satisfied when there are finitely many programs of complexity c, where c is a non-negative number. The property is not satisfied for complexity measures that are size independent.
Property 3 is satisfied if we can find two distinct meta-programs that have equal complexity. The property is satisfied by all proposed meta-program complexity metrics.
Property 4 is satisfied if equivalent meta-programs of different complexity can be written. The property is not satisfied by the RKC and MR metrics.
Property 5 is satisfied if, after concatenating two meta-programs, the complexity of the merged meta-program increases beyond the individual complexities of the original meta-programs. The property is satisfied by all metrics except MR (because of averaging).
Property 6 is satisfied if concatenation of two equally complex meta-programs with some other meta-program gives meta-programs of different complexity. The property is satisfied by all metrics (because meta-programs can have common meta-parameters but distinct meta-bodies).
Property 7 is satisfied if permuting the order of statements in a meta-program changes its complexity. The property is satisfied only by the RKC metric.
Property 8 is satisfied if renaming the symbols and variables of a meta-program does not change its complexity. The property is satisfied by all meta-program complexity metrics except RKC.
Properties 9a and 9b are satisfied when, upon concatenating two (or more) meta-programs, the sum of the complexities of the original meta-programs is less than the complexity of the merged meta-program. The properties are satisfied by the RKC (because concatenation provides more opportunities for compression), CC (because adding new meta-parameters leads to a geometric increase of the number of meta-program instances) and CD (because two meta-programs can have the same meta-parameters, meta-language constructs or arguments) metrics. Properties 9a and 9b are not satisfied by the MR metric (because combining two meta-programs does not increase their coupling). Only Property 9a is satisfied by the ND metric.
The results of the theoretical validation are summarized in Table 12.5. Note that Weyuker’s properties were developed for procedural languages. Hence, a proposed meta-program complexity measure may not satisfy all the properties but may still be valid for the meta-programming domain, as is the case with object-oriented metrics [MA08b].
10 Examples of Meta-Program Complexity Calculation
10.1 Complexity of Heterogeneous Meta-Programs
We demonstrate the complexity calculation of the heterogeneous meta-program developed for the hardware design domain. In that domain, a great number of similar domain entities exist. For example, the most widely used hardware library components are gates (see Fig. 12.4; in VHDL), which implement a particular logical function.
The hardware designer requires many different gate components implementing different functions and having a different number of inputs. All these components are very similar to each other both syntactically and semantically, and thus they constitute a component family.
Next, we develop a meta-program, which describes a gate component family. For example, the identified generic parameters and their values for the gate component family are as follows:
Gate_function = { AND, OR, XOR, NAND, NOR, XNOR }
Gate_inputs = { integer numbers from 2 to 8 }
A gate meta-program (see Fig. 12.5) was developed using the Open PROMOL meta-language. The meta-program has two parameters and three meta-language functions, and its size is 271 B. It differs from the one given in Fig. 12.3 by one essential detail (i.e. the number of inputs), which is important for the calculation in this context. Though the meta-body is the same in both figures, we have repeated it here for convenience of reading.
We calculate the RKC value using a BWT (Burrows-Wheeler transform) compression algorithm, because it currently achieves the best compression results for text-based information and thus better approximates the information content. The size of the gate meta-program is 271 B. The size of the compressed meta-program puts an upper limit on its information content. After compression, we obtain 245 B; therefore, the RKC value of the gate meta-program is equal to 245/271 = 0.90.
We calculate MR of the gate meta-program by calculating the size of its meta-interface and the length of its meta-language functions, which is equal to 139 B. Therefore, its MR value is equal to 139/271 = 0.51.
Cyclomatic complexity of a meta-program is a number of different program instances that can be generated from it. The metric can be calculated as the number of distinct meta-program parameter values. Parameters f and num are independent. Parameter f can have six different values, and parameter num can have seven values. The gate meta-program covers a family of \( 6 \cdot 7 = 42 \) different component instances. Therefore, its CC value is 42.
The gate meta-program has 3 meta-language functions, 2 distinct functions (@gen, @sub), 4 meta-language function arguments and 3 distinct arguments (num, {,}, {@sub[f]}). Therefore, its ND is equal to \( {{{2 \cdot 4}} \left/ {{\left( {3 + 4} \right) \cdot \left( {2 + 3} \right)}} \right.} = {{8} \left/ {{35}} \right.} = 0.23 \). From the same values, we calculate that its CD is \( \max \left( {2,3,4} \right) = 4 \).
The values of the calculated complexity metrics for the gate meta-program are summarized in Table 12.6.
Based on the meta-program complexity metric values, we can draw the following conclusions about the complexity of the gate meta-program. The RKC value is high; therefore, the meta-program has almost no repeating fragments, it is coded at a meta-level efficiently, and there is hardly any room for additional generalization without introducing new parameters or widening the scope of the meta-program. The MR value shows that meta-language constructs cover only about half of the meta-program’s size; therefore, its understandability and readability are good.
Frappier et al. [FMM94] introduce the following boundaries of CC values, based on empirical research and practical implementations of large software systems: simple (1–10), moderately complex (11–20), complex (21–50), and over-complex and untestable (>50). Following these boundaries, we conclude that, due to the large parameter space of the meta-program, exhaustive testing of its instances is complex. The CD value is below the lower threshold (<5) for short-term memorability of chunks of information as formulated by [MNA07]; therefore, the cognitive complexity of the meta-program is low.
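The Frappier bands can be applied mechanically; a small helper (the function name and band labels are our shorthand):

```python
def cc_band(cc_value: int) -> str:
    """Testability bands for cyclomatic complexity after Frappier et al."""
    if cc_value <= 10:
        return "simple"
    if cc_value <= 20:
        return "moderately complex"
    if cc_value <= 50:
        return "complex"
    return "over-complex and untestable"

print(cc_band(42))  # the gate meta-program: complex
```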
Finally, we present complexity values calculated for Open PROMOL meta-programs created from Altera’s library for OrCAD VHDL components (Table 12.7). Altera’s library is a large collection of specific components, which are supposed to cover the entire circuit design domain (it contains 282 macro-functions and 73 primitives, i.e. 355 VHDL components in all). The components were generalized using the Open PROMOL meta-language to create a generic VHDL component library [Dam01].
We evaluate the results presented in Table 12.7 as follows. The most complex meta-programs are those that describe components with the largest variability in the domain, thus requiring a larger number of parameters for selecting a specific instance and a larger number of meta-language functions to represent their variability (see the values of the CC and CD metrics). Such meta-programs are difficult to test and maintain. Their complexity can be decreased by introducing hierarchical decomposition at the meta-program level.
10.2 Complexity of Homogeneous Meta-Programs
As an example of complexity measurement of homogeneous meta-programs, we analyze the Boost C++ Libraries. Boost [AG04] is a collection of open-source libraries that extend the functionality of C++. To ensure efficiency and flexibility, Boost extensively uses C++ template meta-programming techniques. In C++, the template mechanism provides a rich facility for computation at compile time. Here, we analyze the complexity of template functions in the Boost.Math library. This library contains several contributions in the domain of mathematics, such as complex numbers and special mathematical functions. An example of such a template function (a fragment) is presented in Fig. 12.6.
Template functions in the Boost.Math library are rather simple. They mostly have CC values of 3, 16 or 19, meaning that each template function has a single template parameter, which can accept either 3 floating-point, 16 integer, or 19 floating-point and integer C++ type values. Only ‘common_factor_ct’ has a template function, ‘static_lcm’, whose template parameters are numbers of long type rather than types. All template functions also have the same ND value, because all template references are to the same template parameter class and have only one template parameter; therefore, the numbers of distinct meta-program operators and operands are equal to 1, and ND is equal to 0.25. The value of the CD metric is larger for components that have a larger number of template references. The values of the RKC and MR metrics are larger for smaller components, which have less domain language (C++ non-template) code. When evaluating the testability and maintainability of Boost.Math library components, the metric values could be interpreted using the boundaries proposed by Frappier et al. [FMM94].
11 Summary, Evaluation and Future Work
In this chapter:
1. We have analysed the information content of higher-level programs (meta-programs) and compared it with the information content of lower-level (domain) program families. We have proposed to estimate the abstraction level of a program as the inverse of its complexity as defined by the Kolmogorov complexity metric, measured using a standard text compression algorithm. Based on the performed experiments, we estimate that meta-programming decreases the information content, and thus increases the level of abstraction in the analysed domains, by approx. 3 times.
2. We have proposed three measures for evaluating the complexity of feature diagrams (FDs). The measures are based on some properties of FDs, the empirical laws of Miller and Metcalf, as well as Keating’s rules. The first measure evaluates the boundaries of cognitive complexity, which are expressed through the property of ‘magic seven’ applied to variation points in the FD. The second measure evaluates the structural complexity expressed through the quantitatively identifiable number of adequate sub-trees in the FD. This measure correlates with the cyclomatic number used to evaluate program complexity. The third measure evaluates both the cognitive and structural aspects of feature model complexity.
3. We have proposed metrics for evaluating the complexity of meta-programs at several dimensions (information, meta-language, graph, algorithm, cognition) using a variety of measures adopted from information theory and the software engineering domain. Such metrics can be used to rank meta-programs based on their complexity values and to assess the testability and maintainability of meta-programs, and they can be used by reusable software library developers for evaluating the complexity of their work artefacts. Despite the lack of larger-scale empirical validation, we expect that meta-program complexity metrics could indicate poorly written or untestable meta-programs when the metric values exceed predefined maximal or minimal boundaries.
The introduced complexity measures of feature models and meta-programs allow reasoning about the structure and behaviour of the system to be modelled at a higher abstraction level and allow comparing and evaluating system models or the complexity of their transformation into lower-level representations (e.g. into generic programs). The measures also allow reasoning about the granularity level, important reuse characteristics that are difficult to express quantitatively, and the generic programs (components) to be derived from the feature model. As complexity is an inherent system property with multiple aspects, it is difficult to devise a unified measure reflecting all aspects of the model. The proposed complexity measures reflect different views on complexity and enable evaluating the design complexity at the model level. Though the presented case study supports the theoretical assumptions, more empirical research is needed in order to better evaluate the measures and to reason about their value with a larger degree of certainty.
Quantitative evaluation of the complexity of models is a very important task for many reasons: (1) complexity in system design is continuously growing and, as a result, there is a great need to manage it; (2) designs are moving towards a higher abstraction level, and thus model-driven development is further strengthening its position; (3) complexity assessment of developed software systems in the early stages of the software lifecycle allows making cost-effective changes to the developed systems; (4) though software has many complexity measures (e.g. number of code lines, cyclomatic number, psychological complexity), the straightforward use of those measures is not always relevant at the model level; and (5) without quantitative measures, it is hard to reason about the introduction of a new abstraction level (e.g. in order to manage complexity or to avoid over-generalization in component design).
The task of dealing with model complexity is hard because of the large variety of model types. We focus on a specific type of models described by FDs, which are very useful in the context of product line approaches and the use of generative technologies for implementing those approaches. Due to the number of factors that contribute to FD complexity, we cannot identify a single metric that measures all aspects of a feature model’s complexity. This situation is well known from measurements of program source code complexity. A common solution is to use different measures within a metrics suite. Each individual measure can evaluate one aspect of the complexity, and together they can provide a more accurate estimation.
12 Exercise Questions
12.1. Clarify what the complexity of a system is in general. Review the technological aspects of complexity considered in Chap. 1.
12.2. Why is the complexity of systems such an important attribute? Enumerate software fields where complexity issues are at the focus.
12.3. Analyse different views of the complexity problem and identify types of software complexity.
12.4. What is complexity management? What are the commonly recognized principles for managing complexity?
12.5. What are complexity measures? How do they differ for each complexity type?
12.6. What is cognitive complexity, and how does it relate to the ‘magic 7’ problem? Analyse Keating’s complexity measure more thoroughly.
12.7. Analyse cyclomatic complexity and complexity measures for object-oriented programming. Identify complexity evaluation problems arising with the arrival of new programming paradigms.
12.8. Clarify the coupling and cohesion of modules within a software system and how those features relate to complexity. How do they affect reusability?
12.9. What are the complexity measures for evaluating feature models?
12.10. Select some feature diagrams from previous chapters (e.g. 9 or 10) and, using the measures of Sect. 12.4, calculate feature model complexity.
12.11. Investigate model complexity measures more thoroughly as a separate research topic: (a) for graphical models and (b) for abstract and formal models.
12.12. Compare and evaluate the measures given in Sect. 12.4 and devise new measures for evaluating the complexity of feature-based models.
12.13. What is an abstraction level in system design, and how can its complexity be measured and evaluated?
12.14. Learn and explain Kolmogorov complexity as applied to measuring the increase/decrease of abstraction level.
12.15. Learn and explain meta-program complexity issues at the following dimensions: (a) information, (b) meta-language, (c) graph, (d) algorithm and (e) cognition.
12.16. Perform experiments to evaluate complexity metrics of heterogeneous meta-programs (using the metrics given in Sect. 12.7), if the paradigm is your research topic.
References
Abrahams D, Gurtovoy A (2004) C++ template metaprogramming: concepts, tools, and techniques from boost and beyond. Addison Wesley Professional, Boston
Albin ST (2003) The art of software architecture: design methods and techniques. Wiley, Indianapolis
Blecker T, Abdelkafi N, Kaluza B, Kreutler G (2004) A framework for understanding the interdependencies between mass customization and complexity. In: Proceedings of the 2nd international conference on business economics, management and marketing, Athens, Greece, 24–27 June 2004
Briand LC, Morasca S, Basili VR (1996) Property-based software engineering measurement. IEEE Trans Softw Eng 22(1):68–86
Bennett KH, Rajlich V (2000) Software maintenance and evolution: a roadmap. In: Finkelstein AC (ed) Future of software engineering. ACM New York, NY, USA, pp 73–87
Bandi RK, Vaishnavi VK, Turk DE (2003) Predicting maintenance performance using object-oriented design complexity metrics. IEEE Trans Softw Eng 29(1):77–87
Card DN, Agresti WW (1988) Measuring software design complexity. J Syst Softw 8:185–197
Card DN, Glass RL (1990) Measuring software design quality. Prentice Hall, Englewood Cliffs
Coplien J, Hoffman D, Weiss D (1998) Commonality and variability in software engineering. IEEE Softw 15:37–45
Cardoso J, Mendling J, Neumann G, Reijers HA (2006) A discourse on complexity of process models. In: Eder J, Dustdar S (eds) Proceedings of Business Process Management BPM 2006 workshops, Vienna, Austria, 4–7 Sept 2006. LNCS, vol 4103, Springer-Verlag, Berlin, pp 117–128
Czarnecki K, Wasowski A (2007) Feature diagrams and logics: there and back again. In: 11th international software product line conference, SPLC 2007. 10–14 Sept 2007, Washington, USA, pp 23–34
Damaševičius R (2001) Scripting language Open PROMOL: extension, environment and application. MSc thesis, Kaunas University of Technology, Lithuania
Damaševičius R (2006) On the quantitative estimation of abstraction level increase in metaprograms. Comput Sci Inf Syst (ComSIS) 3(1):53–64
Dembski WA (1999) Intelligent design as a theory of information. Intervarsity Press, Downers Grove
Edmonds B (1999) Syntactic measures of complexity. Doctoral Thesis, University of Manchester, Manchester
Emam KEl (2000) A methodology for validating software product metrics. National Research Council of Canada, Ottawa, ON, Canada (NCR/ERC-1076)
Frappier M, Matwin S, Mili A (1994) Software metrics for predicting maintainability. Software metrics study: Technical Memorandum 2. Canadian Space Agency, St-Hubert, Virginia Polytechnic, 2(3):129–143
Glass RL (2002) Sorting out software complexity. Source Commun ACM Archive 45(11):19–21
Halstead MH (1977) Elements of software science. Elsevier, New York
Heering J (2003) Quantification of structural information: on a question raised by Brooks. ACM SIGSOFT Softw Eng Notes 28(3):6–6
Henry SM, Kafura DG (1981) Software structure metrics based on information flow. IEEE Trans Softw Eng 7(5):510–518
IEEE Computer Society: IEEE standard glossary of software engineering terminology, IEEE Std. 610.12 – 1990
Keating M (2000) Measuring design quality by measuring design complexity. In: Proceedings of the 1st International Symposium on Quality of Electronic Design (ISQED 2000), IEEE Computer Society Washington, DC, USA, pp 103–108
Kintsch W (1998) Comprehension: a paradigm for cognition. Cambridge University Press, New York
Kang K, Lee J, Donohoe P (2002) Feature-oriented product line engineering. IEEE Softw 19(4):58–65
Liu J, Basu S, Lutz R (2008) Generating variation-point obligations for compositional model checking of software product lines. Technical Report 08-04, Computer Science, Iowa State University
Laue R, Gruhn V (2006) Complexity metrics for business process models. In: Abramowicz W, Mayr HC (eds) Proceedings of 9th international conference on Business Information Systems (BIS 2006), LNI 85, Klagenfurt, Austria, pp 1–12
Li M, Vitanyi P (1997) An introduction to Kolmogorov complexity and its applications. Springer, New York
Misra S, Akman I (2008) A model for measuring cognitive complexity of software. In: Proceedings of 12th international conference on Knowledge-Based Intelligent Information and Engineering Systems (KES 2008), Zagreb, Croatia, 3–5 Sept 2008, Part II. LNCS, vol 5178, Springer, pp 879–886
Misra S, Akman I (2008) Applicability of Weyuker’s properties on OO metrics: some misunderstandings. Comput Sci Inf Syst (ComSIS), Springer-Verlag Berlin, Heidelberg, 5(1):17–24
Manzini G (1999) The Burrows-Wheeler transform: theory and practice, vol 1672, Lecture notes in computer science. MFCS ’99 Proceedings of the 24th International Symposium on Mathematical Foundations of Computer Science, Springer-Verlag London, pp 34–47
McCabe TJ (1976) A complexity measure. IEEE Trans Softw Eng SE-2(4):308–320
Miller G (1956) The magic number seven, plus or minus two: some limits on our capacity for processing information. Psychol Rev 63(2):81–97
Mendling J, Neumann G, van der Aalst WMP (2007) Understanding the occurrence of errors in process models based on metrics. In: Proceedings of OTM conference 2007. LNCS, vol 4803: CoopIS, DOA, ODBASE, GADA, and IS – Volume Part I, Springer-Verlag Berlin, Heidelberg, pp 113–130
von Mayrhauser A, Vans AM (1995) Program understanding: models and experiments. In: Yovits MC, Zelkowitz MV (eds) Advances in computers, vol 40. Academic, Troy, pp 1–38
Pataki N, Pocza K, Porkolab Z (2007) Towards a software metric for generic programming paradigm. In: 16th IEEE international electrotechnical and computer science conference, 24–26 Sept, Portorož, Slovenia
Pataki N, Sipos Á, Porkoláb Z (2006) Measuring the complexity of aspect-oriented programs with multiparadigm metric. In: Proceedings of 10th ECOOP workshop on Quantitative Approaches in Object-Oriented Software Engineering (QAOOSE 2006), 3 July 2006, Nantes, France, pp 1–10
Rauterberg M (1996) How to measure cognitive complexity in human-computer interaction. In: Proceedings of the 13th European meeting on cybernetics and systems research, vol 2, Vienna, Austria, 9–12 April, pp 815–820
Reformat M, Musilek P, Wu V, Pizzi NJ (2004) Human perception of software complexity: knowledge discovery from software data. In: Proceedings of 16th IEEE International Conference on Tools with Artificial Intelligence (ICTAI 2004), 15–17 Nov 2004, IEEE Computer Society Washington, DC, USA, pp 202–206
Software Engineering Institute (SEI) (2006) Cyclomatic complexity. In: Software technology roadmap, 2006. Online: http://www.sei.cmu.edu/str/descriptions/cyclomatic_body.html
Sipos A, Pataki N, Porkolab Z (2006) On multiparadigm software complexity metrics. Pure Math Appl 17(3–4):469–482
Sheetz SD, Tegarden DP, Monarchi DE (1991) Measuring object-oriented system complexity. In: Proceedings of the first workshop of information technologies and systems. Cambridge, MA, pp 285–307
Shao J, Wang Y (2003) A new measure of software complexity based on cognitive weights. Can J Elect Comput Eng 28(2):69–74
Taha W, Crosby S, Swadi K (2004) A new approach to data mining for software design. In: Proceedings of the international conference on Computer Science, Software Engineering, Information Technology, e-Business, and Applications (CSITeA’04), Cairo, Egypt
Troy DA, Zweben SH (1981) Measuring the quality of structured designs. J Syst Softw 2:113–120
Visscher B-F (2005) Exploring complexity in software systems. PhD thesis. University of Portsmouth
Wang Y (2009) On the cognitive complexity of software and its quantification and formal measurement. Int J Softw Sci Comput Intell 1(2):31–53
Weyuker EJ (1988) Evaluating software complexity measures. IEEE Trans Softw Eng 14(9):1357–1365
Zuse H (1991) Software complexity – measures and methods. DeGruyter Publications, Berlin/New York
© 2013 Springer-Verlag London
Štuikys, V., Damaševičius, R. (2013). Complexity Evaluation of Feature Models and Meta-Programs. In: Meta-Programming and Model-Driven Meta-Program Development. Advanced Information and Knowledge Processing, vol 5. Springer, London. https://doi.org/10.1007/978-1-4471-4126-6_12
Print ISBN: 978-1-4471-4125-9. Online ISBN: 978-1-4471-4126-6