1 Introduction

Many engineering challenges are addressed in the form of an optimization problem. These engineering challenges must generally satisfy multiple conflicting and heterogeneous objectives, such as finding the lowest cost, the most profit, the shortest path, the maximum reliability, the best topology, etc. For such cases, achieving the most preferred response/output requires mathematical modeling of the corresponding challenge and solving it by using an optimization algorithm. In a general sense, the optimization problem refers to the process of finding the most satisfactory response/output under the specified conditions. Technically speaking, the optimization problem can also be defined as the process of finding the minimum or maximum value of one or more objective functions, provided that the equality and inequality constraints, if any, are not violated. In a broad sense, the optimization algorithms can be broken down into two main categories, deterministic and nondeterministic/stochastic, as outlined in Sect. 1.4 of Chap. 1.

Most traditional, or conventional, optimization algorithms (e.g., the Newton-Raphson algorithm) fall into the category of deterministic optimization algorithms. Basically, deterministic optimization algorithms need the derivatives of the objective functions in order to solve the optimization problems. Each deterministic optimization algorithm is only appropriate for solving a narrow range of optimization problems. More precisely, because most real-world optimization problems involve complexities—such as mixed-integer decision-making variables, multiple conflicting and heterogeneous objective functions, and non-convex, non-smooth, and nonlinear equations—there is no unique deterministic optimization algorithm that performs desirably on real-world optimization problems with the aforementioned complexities. The nondeterministic/stochastic algorithms, by contrast, exhibit stochastic behavior and can be divided into two main categories—heuristic and meta-heuristic. One of the strengths of heuristic optimization algorithms is their uncomplicated architecture, compared to deterministic optimization algorithms. Accordingly, the implementation of heuristic optimization algorithms in different engineering optimization problems, particularly complicated large-scale optimization problems, can lead to finding relatively satisfactory solutions in a reasonable amount of time. Nevertheless, the main disadvantage of heuristic optimization algorithms is that there is no guarantee that an optimal solution, or a set of optimal solutions, will be found. Developments designed to overcome this disadvantage are referred to as meta-heuristic optimization algorithms. The meta-heuristic optimization algorithms are optimization techniques independent of the architecture of the optimization problems.
That is to say that, unlike other optimization algorithms, the meta-heuristic optimization algorithms can extensively be employed to solve a wide range of optimization problems with different structures. A well-organized classification of the meta-heuristic optimization algorithms with a focus on inspirational source was exhaustively reported in Sect. 1.5.1 of Chap. 1.

From an implementation point of view, the existing meta-heuristic optimization algorithms bring about multiple undesirable difficulties, such as premature convergence, getting stuck in a local optimum point, low convergence rate, and extremely high dependency on accurate adjustments of initial values of algorithm parameters. Technically speaking, when the existing meta-heuristic optimization algorithms fall into a local optimum point, most of these algorithms do not have the ability to exit the local optimum point and to continue the search process for reaching a global optimal point; and, thus, premature convergence occurs. In most of the existing meta-heuristic optimization algorithms, the process of generating new solutions also depends on a confined decision-making space whose dependency can affect the favorable performance of these optimization techniques. Put another way, in each new generation, a solution vector is generated with respect to a finite set of solution vectors stored in the memory of the algorithm. For example, the genetic algorithm (GA) takes into account only two parent vectors stored in memory—mating pool—to generate a new solution vector. Consequently, most of the existing meta-heuristic optimization algorithms do not have a high chance of reaching a global optimum point in solving complicated, real-world, large-scale, non-convex, non-smooth optimization problems that have a nonlinear, mixed-integer nature with big data, due to the poor performance of these optimization techniques during the search process, along with the other difficulties identified above.

In 2001, a new population-based meta-heuristic optimization algorithm, referred to as a harmony search algorithm (HSA), was developed by the inspiration of music phenomena. The original HSA had a somewhat different architecture compared to other existing meta-heuristic optimization algorithms. In the proposed architecture for this optimization algorithm, the process of generating new solutions depends on the entire space of the nonempty feasible decision-making. Put simply, in each new generation, or improvisation, the HSA generates a new solution vector after sweeping over all of the solution vectors stored in the memory of the algorithm; this characteristic can appreciably enhance the performance of the HSA in the search process. With that in mind, the favorable performance of the HSA compared to its counterparts has given rise to its widespread utilization for solving complicated, real-world, large-scale, non-convex, non-smooth optimization problems in different branches of the engineering sciences (e.g., electrical, civil, computer, mechanical, and aerospace). In addition, many enhanced versions of the HSA have been developed by specialists and researchers with the aim of improving the efficiency and efficacy of this algorithm in solving such problems. However, as the number of dimensions of complicated, real-world, large-scale, non-convex, non-smooth optimization problems with big data grows, the performance of most of the existing meta-heuristic optimization algorithms, even the HSA and its enhanced versions, is strongly affected, and they cannot maintain favorable performance in the face of such optimization problems. This is due to the tenuous and vulnerable characteristics employed in the architecture of the existing meta-heuristic optimization algorithms: having only a single-stage computational structure; using single-dimensional structures; etc.
In 2011, for the first time, a new meta-heuristic optimization algorithm, referred to as a melody search algorithm (MSA), was proposed. It had a very different architecture compared to other meta-heuristic optimization algorithms. The MSA was inspired by the phenomena and concepts of music and developed as a new version of the architecture of the HSA. It has a two-stage (or level) computational, multi-dimensional, and single-homogeneous structure. The MSA opened an innovative direction in the architecture of meta-heuristic algorithms for solving complicated, real-world, large-scale, non-convex, non-smooth optimization problems having a nonlinear, mixed-integer nature with big data. With regard to the well-designed architecture of the music-inspired optimization algorithms and their favorable performance, they may well be appropriate optimization techniques for overcoming the difficulties in solving complicated, real-world, large-scale, non-convex, non-smooth optimization problems and finding the most satisfactory response/output with higher accuracy and convergence speed compared to other existing meta-heuristic optimization algorithms.

For the reasons identified above, the authors have focused on two targets in the context of the music-inspired optimization algorithms.

  • Target 1: Providing an extensive introduction to the HSA.

  • Target 2: Presenting an extensive introduction to the MSA.

The remainder of this chapter is arranged as follows. First, the interdependencies of phenomena and concepts of music and the optimization problem are reviewed briefly in Sect. 3.2. An overview of the HSA is presented in Sect. 3.3. In Sect. 3.4, a general classification of the enhanced versions of the HSA is reported, followed by a thorough description of the improved harmony search algorithm (IHSA) in Sect. 3.5. In Sect. 3.6, an overview of the MSA is presented. Finally, the chapter ends with a brief summary and some concluding remarks in Sect. 3.7.

2 A Brief Review of Music

In this section, the authors briefly address the definition of music, its history, and the interdependencies of phenomena and concepts of music and the optimization problem.

2.1 The Definition of Music

Generally speaking, music, as a social and communicative tool, is the art of incorporating vocal or instrumental sounds (or both) in order to reach pleasant and melodious forms of hearing. The word “music” originated from the ancient Greek word “Mousiké,” which spoke to each of the skills and arts imparted by the nine Muses—daughters of Zeus and Mnemosyne, who were inspirational goddesses of science and art in Greek mythology. In ancient Iran, however, music was referred to as “Khóniya.” The word “Khóniya” comes from the words “Khóniyak” and “Hónavac,” which evolved in two parts: “Hó” meaning beauty/pleasure and “Navak” meaning tone/song. The word “Khóniya,” therefore, represents a beautiful/pleasant tone/song. From ancient times to the present, music has earned a lot of consideration in view of its desirable effect on the emotions and performances of humans. With that in mind, different interpretations and definitions have been reported for music by well-known philosophers and scientists. Some of the most significant definitions of music are as follows:

  • Greek philosopher Plato: “Music is a moral law. It gives soul to the universe, wings to the mind, flight to the imagination, and charm and gaiety to life and to everything.”

  • Greek philosopher Aristotle: “Music has the power of producing a certain effect on the moral character of the soul, and if it has the power to do this, it is clear that the young must be directed to music and must be educated in it.”

  • German philosopher Friedrich Nietzsche: “Without music, life would be an error. The German imagines even God singing songs.”

  • Persian philosopher Abu Nasr Al-Farabi: “Music is the science of identifying tones and includes two parts: theory of music and practice of music.”

  • Persian polymath Avicenna—Abu Ali Sina or Ibn Sina: “Music is a mathematical science in which the quality of the tones in terms of rhythm and harmony and how to set the time among tones are exhaustively discussed.”

Although Avicenna, as the most distinguished Persian philosopher, physician, astronomer, thinker, and writer, referred to music in the mathematical section of the Book of Healing—Al-Shifa—music can be generally considered an art. This is due to the fact that, unlike principles of mathematical science, music is adjustable and changeable with respect to the tastes, ideas, and experiences of the player/instrumentalist/musician. As a result, music has recently been represented as a combination of mathematical science and art.

2.2 A Brief Review of Music History

Music history is not rigorously known, even in light of historical studies concerned with music from its origins to the present. Archaeological evidence, however, demonstrates the effects of the phenomena associated with music in the process of human life in territories such as ancient Iran, Greece, Abyssinia, Japan, and Germany, several thousand years ago. An ancient, unique cylinder seal was discovered in the Choghamish district,Footnote 1 which dates back to 3400 BC (i.e., the fourth millennium BC), suggesting that the oldest world music orchestra was in Dezful county, Khuzestan province, Iran [1]. This cylinder seal is actually the earliest historical evidence indicating that music was artistically organized. Figure 3.1 shows a depiction of this seal, which is currently on display at the National Museum of Iran.Footnote 2 As can be seen, a scene of a music performance with a feasting man, a servant, a vocalist, and multiple players is depicted in this cylinder seal. In addition, string, wind, and percussion instruments are exhibited in one inscription for the first time, which reveals the origin of the harmonious and symphonious tones/songs. As a result, ancient Iran was one of the first civilizations in the world in which full knowledge pertaining to the fundamental concepts of music was widely provided and developed for different purposes several thousand years ago.

Fig. 3.1
figure 1

The oldest world music orchestra in 3400 BC in the Choghamish district of Dezful county, Khuzestan province, Iran [1]

2.3 The Interdependencies of Phenomena and Concepts of Music and the Optimization Problem

Since the advent of music, humans have sought to take advantage of music capabilities to overcome difficulties and obstacles in various sciences. Music therapy is one of the most popular applications of music in medical science for treating patients. Music therapy is generally a clinical use of music consisting of three major processes: (1) induction of relaxation; (2) acceleration of the process of curing diseases; and, (3) enhancement of mental performance and health. Nevertheless, music capabilities were neglected when it came to dealing with engineering challenges and alleviating their complexities. The point to be made here is that these challenges are often expressed as an optimization problem.

In 2001, for the first time, a new optimization algorithm, referred to as an HSA, was developed through inspiration of the fundamental concepts of music to solve different optimization problems and achieve the optimal solution [2]. The HSA is based on the music improvisation process in such a way that players play their musical instruments step by step in order to achieve more harmony and better sound. This process is virtually the same as the optimization process in solving engineering challenges in which the optimal solution can be explored by the evaluation of the objective function. Table 3.1 gives the interdependencies of phenomena and concepts of music and the optimization problem modeled by means of the HSA.

Table 3.1 Interdependencies of phenomena and concepts of music and the optimization problem modeled by the HSA

3 Harmony Search Algorithm

The HSA is a population-based music-inspired meta-heuristic optimization algorithm that was developed in 2001 [2]. This algorithm was inspired by the improvisation process of jazz players seeking to find the best harmony and generate the most beautiful music possible. At each music performance, these players would try to enhance the sound of their musical instruments in order to produce more mature and beautiful music. As set out in Table 3.1, the concepts of music are equivalently expressed with the concepts of an optimization problem modeled by the HSA. With that in mind, each player or musical instrument, the pitch of the musical instrument at the disposal of each player, and the pitch range of the musical instrument at the disposal of each player are virtually the same as each decision-making variable, the value of the decision-making variable corresponding to the relevant player, and the value range of the decision-making variable corresponding to the relevant player, respectively. By the same token, the musical harmony, aesthetic standard of the audience, and time/practice refer to the solution vector, objective function, and iteration, respectively. Additionally, the experience of the players, the best harmony, and the improvisation of the players are equivalent to the solution vector matrix, global optimum point, and local and global optimum searches, respectively. Just as the players enhance the musical harmony in each practice, compared to before that practice, from the viewpoint of the aesthetic standard of the audience, the solution vector of the optimization problem is improved in each iteration, compared to before that iteration, in terms of its proximity to the global optimum point. From the standpoint of algorithm architecture, the HSA has two main characteristics: a single-stage computational structure and a single-dimensional structure.
The HSA is, therefore, referred to as a single-stage computational, single-dimensional harmony search algorithm (SS-HSA).

The prerequisite for comprehending these characteristics is that you scrutinize features employed in the architecture of the MSA and symphony orchestra search algorithm (SOSA), which are thoroughly discussed in Sect. 3.6 of this chapter and Sect. 4.4 of Chap. 4, respectively. After a detailed investigation of the architecture associated with the MSA and SOSA, you will discover the reasons for the characteristics expressed for the SS-HSA.

The performance-driven architecture of the SS-HSA is generally broken down into four stages [2,3,4], as follows:

  • Stage 1—Definition stage: Definition of the optimization problem and its parameters.

  • Stage 2—Initialization stage.

    • Sub-stage 2.1: Initialization of the parameters of the SS-HSA.

    • Sub-stage 2.2: Initialization of the harmony memory (HM).

  • Stage 3—Computational stage.

    • Sub-stage 3.1: Improvisation of a new harmony vector.

    • Sub-stage 3.2: Update of the HM.

    • Sub-stage 3.3: Check of the stopping criterion of the SS-HSA.

  • Stage 4—Selection stage: Selection of the final optimal solution—the best harmony.
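The four stages above can be sketched as a compact Python skeleton for the continuous single-objective case. This is a hypothetical illustration under simplifying assumptions: the function and parameter names, the greedy replace-the-worst update in sub-stage 3.2, and the clipping of adjusted pitches are not prescribed by the text.

```python
import random

def sshsa(objective, bounds, hms=10, hmcr=0.9, par=0.3, bw=0.01, mni=1000):
    """Minimal SS-HSA sketch for continuous decision-making variables."""
    # Stage 2: initialize the harmony memory (HM) with HMS random vectors
    hm = [[lo + random.random() * (hi - lo) for lo, hi in bounds]
          for _ in range(hms)]
    hm.sort(key=objective)  # ascending by objective value
    # Stage 3: computational stage, repeated MNI times
    for _ in range(mni):
        # Sub-stage 3.1: improvise a new harmony vector
        new = []
        for v, (lo, hi) in enumerate(bounds):
            if random.random() < hmcr:                    # memory consideration
                x = hm[random.randrange(hms)][v]
                if random.random() < par:                 # pitch adjustment
                    x = min(hi, max(lo, x + random.uniform(-1, 1) * bw))
            else:                                         # random selection
                x = lo + random.random() * (hi - lo)
            new.append(x)
        # Sub-stage 3.2: replace the worst stored harmony if the new one is better
        if objective(new) < objective(hm[-1]):
            hm[-1] = new
            hm.sort(key=objective)
    # Stage 4: select the final optimal solution—the best harmony
    return hm[0]
```

For example, minimizing the two-dimensional sphere function f(x) = x₁² + x₂² over [−5, 5]² steadily drives the best stored harmony toward the origin.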

3.1 Stage 1: Definition Stage—Definition of the Optimization Problem and its Parameters

In order to solve an optimization problem using the SS-HSA, stage 1 is used to meticulously define the optimization problem and its parameters. In mathematical terms, the standard form of an optimization problem can generally be indicated based on Eqs. (1.1) and (1.2), which were given in Sect. 1.2.1 of Chap. 1. However, because the original version of the SS-HSA was developed to solve single-objective optimization problems, the standard form of an optimization problem must be rewritten according to Eqs. (3.1) and (3.2):

$$ {\displaystyle \begin{array}{l}\underset{\mathrm{x}\in \mathrm{X}}{\operatorname{Minimize}}\kern1.25em \mathrm{F}\left(\mathrm{x}\right)=\left[f\left(\mathrm{x}\right)\right]\\ {}\kern5.25em \mathrm{subject}\ \mathrm{to}:\\ {}\kern5em \mathrm{G}\left(\mathrm{x}\right)=\left[{g}_1\left(\mathrm{x}\right),\dots, {g}_b\left(\mathrm{x}\right),\dots, {g}_{\mathrm{B}}\left(\mathrm{x}\right)\right]=0;\kern1em \forall \left\{\mathrm{B}\ge 0\right\},\kern1em \forall \left\{b\in {\Psi}^{\mathrm{B}}\right\}\\ {}\kern5.25em \mathrm{H}\left(\mathrm{x}\right)=\left[{h}_1\left(\mathrm{x}\right),\dots, {h}_e\left(\mathrm{x}\right),\dots, {h}_{\mathrm{E}}\left(\mathrm{x}\right)\right]\le 0;\kern1em \forall \left\{\mathrm{E}\ge 0\right\},\kern1em \forall \left\{e\in {\Psi}^{\mathrm{E}}\right\}\end{array}} $$
(3.1)
$$ {\displaystyle \begin{array}{ll}\mathrm{x}=& \left[{x}_1,\dots, {x}_v,\dots, {x}_{\mathrm{NDV}}\right];\kern1em \forall \left\{v\in {\Psi}^{\mathrm{NDV}},{\Psi}^{\mathrm{NDV}}={\Psi}^{\mathrm{NCDV}+\mathrm{NDDV}},\mathrm{x}\in \mathrm{X}\right\},\\ {}& \forall \left\{\left.{x}_v^{\mathrm{min}}\le {x}_v\le {x}_v^{\mathrm{max}}\right|v\in {\Psi}^{\mathrm{NCDV}}\right\},\\ {}& \left\{\left.{x}_v\in \left\{{x}_v(1),\dots, {x}_v(w),\dots, {x}_v\left({W}_v\right)\right\}\right|v\in {\Psi}^{\mathrm{NDDV}}\right\}\end{array}} $$
(3.2)

The explanations related to the parameters and variables from Eqs. (3.1) and (3.2) were previously given in Sect. 1.2.1 of Chap. 1. The objective function maps the vector of decision-making variables to its objective value, as given by Eq. (3.3):

$$ \mathrm{z}=\mathrm{f}\left(\mathrm{x}\right) $$
(3.3)

It should be pointed out that the image of the nonempty feasible decision-making space is recognized as the feasible objective space in the objective space Z = f(X) and is described by the set {f(x) | x ∈ X}. A solution that does not violate any equality or inequality constraint is considered a feasible solution.

The SS-HSA explores the entire space of the nonempty feasible decision-making in order to find the vector of optimal decision-making variables, or solution vector. The optimal vector has the lowest possible value for the objective function given in Eq. (3.1). Basically, the SS-HSA merely considers the objective function given in Eq. (3.1). Nonetheless, if the solution vector obtained by the SS-HSA gives rise to any violation in equality and/or inequality constraints given in Eq. (3.1), the algorithm can employ one of the two following processes from the perspective of the decision-maker in dealing with this solution vector:

  • First process: The SS-HSA ignores the obtained solution vector.

  • Second process: The SS-HSA takes into account the obtained solution vector by applying a specified penalty coefficient to the objective function of the optimization problem.
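The second process can be sketched as a penalized objective wrapper. This is a hypothetical illustration: the static-penalty form, the name `penalized_objective`, and the coefficient `rho` are assumptions, since the text leaves the penalty scheme to the decision-maker.

```python
def penalized_objective(f, equality, inequality, rho=1e3):
    """Wrap objective f with a static penalty for constraint violations.

    equality:   list of g(x) functions that must equal 0, as in Eq. (3.1)
    inequality: list of h(x) functions that must satisfy h(x) <= 0
    rho:        penalty coefficient chosen by the decision-maker
    """
    def fp(x):
        # total violation: |g(x)| for equalities, max(0, h(x)) for inequalities
        viol = sum(abs(g(x)) for g in equality)
        viol += sum(max(0.0, h(x)) for h in inequality)
        return f(x) + rho * viol
    return fp
```

A feasible solution vector is evaluated by f alone; an infeasible one is pushed away from the optimum in proportion to how badly it violates the constraints.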

3.2 Stage 2: Initialization Stage

After completion of stage 1 and a thorough mathematical description of the optimization problem, stage 2 is employed. This stage is formed by two sub-stages: initialization of the parameters of the SS-HSA and initialization of the HM, which are discussed in detail below.

3.2.1 Sub-stage 2.1: Initialization of the Parameters of the SS-HSA

In sub-stage 2.1, the parameter adjustments of the SS-HSA should be initialized with specific values. Table 3.2 provides a detailed description of the parameter adjustments of the SS-HSA. In the SS-HSA, the HM is a place for storing the solution vectors, or harmony vectors. The HM in the SS-HSA is virtually the same as the mating pool in the GA. The harmony memory size (HMS) represents the number of solution vectors stored in the HM.

Table 3.2 Adjustment parameters of the SS-HSA

The HMS is equivalent to the population size in the GA. In the improvisation process of a new harmony vector, the harmony memory considering rate (HMCR) is employed in order to determine whether the value of a decision-making variable related to a new harmony vector is derived from the HM or from the entire space of the nonempty feasible decision-making. Put another way, the HMCR expresses the rate at which the value of a decision-making variable from a new harmony vector is randomly selected with respect to the player’s memory, or more comprehensively from the HM. In this regard, 1-HMCR indicates the rate at which the value of a decision-making variable from a new harmony vector is haphazardly chosen in terms of the entire space of the nonempty feasible decision-making. By the same token, in the improvisation process of a new harmony vector, the pitch adjusting rate (PAR) is utilized to specify whether the value of a decision-making variable selected from the HM needs an update to its neighbor value or not. More precisely, the PAR describes the rate at which the value of a decision-making variable selected with the HMCR rate from the player’s memory, or more comprehensively from the HM, is altered. With that in mind, 1-PAR clarifies the rate at which the value of a decision-making variable, chosen with the HMCR rate from the player’s memory or more comprehensively from the HM, is not changed. The bandwidth (BW)—fret width—is considered to be an arbitrary length and is exclusively defined for continuous decision-making variables. In music literature, the fret width is a significant element on the neck of a string musical instrument (e.g., a bass guitar) in such a way that the neck of a string musical instrument is broken up into fixed-length segments at intervals pertaining to the musical framework. In string musical instruments (e.g., the guitar family), each fret illustrates a semitone, and 12 semitones make up an octave in the standard Western style.
In the SS-HSA, however, the frets represent arbitrary points that divide the entire space of the nonempty feasible continuous decision-making into fixed parts. The fret width—BW—is defined as the distance between two neighbor frets. The number of decision-making variables (NDV), which is dependent on the optimization problem given in Eqs. (3.1) and (3.2), consists of the sum of the number of continuous decision-making variables (NCDV) and the number of discrete decision-making variables (NDDV). The NDV characterizes the dimensions of the harmony vector in the SS-HSA. The maximum number of improvisations/iterations (MNI) addresses the number of times that the computational stage is repeated in the SS-HSA. The point to be made here is that the SS-HSA improvises a harmony vector in each improvisation/iteration. The MNI is usually employed as a stopping criterion in the SS-HSA.
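The adjustment parameters described above can be collected in a small container, sketched below. The default values shown are illustrative choices commonly seen in the HSA literature, not values prescribed by Table 3.2; the class and field names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class SSHSAParams:
    hms: int = 10      # harmony memory size: number of vectors stored in the HM
    hmcr: float = 0.9  # harmony memory considering rate, 0 < HMCR < 1
    par: float = 0.3   # pitch adjusting rate, 0 < PAR < 1
    bw: float = 0.01   # bandwidth (fret width), continuous variables only
    ndv: int = 2       # number of decision-making variables (NCDV + NDDV)
    mni: int = 1000    # maximum number of improvisations (stopping criterion)
```

Grouping the parameters this way makes the roles of HMCR, PAR, and BW explicit before the initialization and computational stages use them.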

3.2.2 Sub-stage 2.2: Initialization of the HM

After finalization of sub-stage 2.1 and parameter adjustments of the SS-HSA, the HM must be initialized in sub-stage 2.2. In this sub-stage, the HM matrix, which has a dimension equal to HMS × (NDV + 1), is filled with HMS solution vectors generated randomly according to Eqs. (3.4) through (3.6):

$$ {\displaystyle \begin{array}{l}\mathrm{HM}=\left[\begin{array}{c}{\mathrm{x}}^1\\ {}\vdots \\ {}{\mathrm{x}}^s\\ {}\vdots \\ {}{\mathrm{x}}^{\mathrm{HMS}}\end{array}\right]=\left[\begin{array}{ccccccc}{x}_1^1& \cdots & {x}_v^1& \cdots & {x}_{\mathrm{NDV}}^1& \mid & f\left({\mathrm{x}}^1\right)\\ {}\vdots & & \vdots & & \vdots & & \vdots \\ {}{x}_1^s& \cdots & {x}_v^s& \cdots & {x}_{\mathrm{NDV}}^s& \mid & f\left({\mathrm{x}}^s\right)\\ {}\vdots & & \vdots & & \vdots & & \vdots \\ {}{x}_1^{\mathrm{HMS}}& \cdots & {x}_v^{\mathrm{HMS}}& \cdots & {x}_{\mathrm{NDV}}^{\mathrm{HMS}}& \mid & f\left({\mathrm{x}}^{\mathrm{HMS}}\right)\end{array}\right];\\ {}\\ {}\forall \left\{v\in {\Psi}^{\mathrm{NDV}},s\in {\Psi}^{\mathrm{HMS}},{\Psi}^{\mathrm{NDV}}={\Psi}^{\mathrm{NCDV}+\mathrm{NDDV}}\right\}\end{array}} $$
(3.4)
$$ {x}_v^s={x}_v^{\mathrm{min}}+\mathrm{U}\left(0,1\right)\cdot \left({x}_v^{\mathrm{max}}-{x}_v^{\mathrm{min}}\right);\kern1em \forall \left\{v\in {\Psi}^{\mathrm{NCDV}},s\in {\Psi}^{\mathrm{HMS}}\right\} $$
(3.5)
$$ {x}_v^s={x}_v(y);\kern1em \forall \left\{v\in {\Psi}^{\mathrm{NDDV}},s\in {\Psi}^{\mathrm{HMS}},y\sim \mathrm{U}\left\{{x}_v(1),\dots, {x}_v\left({w}_v\right),\dots, {x}_v\left({W}_v\right)\right\}\right\} $$
(3.6)

Equation (3.4) represents the HM. Equations (3.5) and (3.6) are also considered for continuous and discrete decision-making variables, respectively. In Eq. (3.5), U(0, 1) indicates a random number with a uniform distribution between 0 and 1. In addition, Eq. (3.5) expresses how the value of the continuous decision-making variable v from the harmony vector s stored in the HM is randomly determined using the set of candidate admissible values for this decision-making variable, which is confined by lower bound \( {x}_v^{\mathrm{min}} \) and upper bound \( {x}_v^{\mathrm{max}} \). In Eq. (3.6), the y index describes a random integer with a uniform distribution over the candidate set \( \left\{{x}_v(1),\dots, {x}_v\left({w}_v\right),\dots, {x}_v\left({W}_v\right)\right\} \)—that is, \( y\sim \mathrm{U}\left\{{x}_v(1),\dots, {x}_v\left({w}_v\right),\dots, {x}_v\left({W}_v\right)\right\} \). Equation (3.6) describes how the value of the discrete decision-making variable v from the harmony vector s stored in the HM is randomly specified using the set of candidate allowable values for this decision-making variable, which is demonstrated by the set \( \left\{{x}_v(1),\dots, {x}_v\left({w}_v\right),\dots, {x}_v\left({W}_v\right)\right\} \). Table 3.3 gives the pseudocode associated with initialization of the HM in the SS-HSA. After filling the HM with random solution vectors, the solution vectors stored in the HM must be sorted from the lowest value to the highest value—in ascending order—with regard to the value of the objective function of the optimization problem. Table 3.4 presents the pseudocode related to sorting the solution vectors stored in the HM under the SS-HSA.

Table 3.3 Pseudocode associated with initialization of the HM in the SS-HSA
Table 3.4 Pseudocode related to sorting the solution vectors stored in the HM under the SS-HSA
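The initialization and sorting steps of sub-stage 2.2 (Eqs. (3.5) and (3.6), Tables 3.3 and 3.4) might look as follows in Python. This is a sketch, not the book's pseudocode; the function and argument names are assumptions, and the objective value is kept implicit by sorting with a key function rather than storing f(x) as an extra column.

```python
import random

def init_harmony_memory(objective, cont_bounds, disc_values, hms):
    """Fill the HM with HMS random harmony vectors and sort them ascending.

    cont_bounds: [(x_min, x_max), ...] for continuous variables, Eq. (3.5)
    disc_values: [[candidate values], ...] for discrete variables, Eq. (3.6)
    """
    hm = []
    for _ in range(hms):
        # Eq. (3.5): x_v = x_min + U(0,1) * (x_max - x_min)
        x = [lo + random.random() * (hi - lo) for lo, hi in cont_bounds]
        # Eq. (3.6): pick one candidate value uniformly at random
        x += [random.choice(vals) for vals in disc_values]
        hm.append(x)
    # Table 3.4: sort stored vectors ascending by objective value
    hm.sort(key=objective)
    return hm
```

With one continuous and one discrete variable, each stored vector has NDV = 2 entries, and the list order after sorting plays the role of the HM's ranking by objective value.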

3.3 Stage 3: Computational Stage

After completion of stage 2 and initialization of the parameters of the SS-HSA and the HM, this computational stage must be performed. This stage consists of three sub-stages: (1) improvisation of a new harmony vector; (2) update of the HM; and (3) check of the stopping criterion of the SS-HSA. The mathematical equations expressed at this stage must depend on the improvisation/iteration index—index m—because of the repeatability of the computational stage in the SS-HSA.

3.3.1 Sub-stage 3.1: Improvisation of a New Harmony Vector

In the jazz improvisation process, a musical note can generally be played by a player based on one of three different styles: (1) selection of a musical note from the corresponding player’s memory; (2) creation of a slight alteration in the selected musical note from the corresponding player’s memory; and, (3) random selection of a musical note from the entire playable range. In the SS-HSA, however, the improvisation process refers to the process of producing a harmony vector. Similarly, in the SS-HSA, selection of the value of a decision-making variable corresponding to a player can be accomplished according to one of three different methods: (1) selection of the value of a decision-making variable from the \( \mathrm{HM}_m \); (2) creation of a slight alteration in the value of the selected decision-making variable from the \( \mathrm{HM}_m \); and, (3) selection of the value of a decision-making variable from the entire space of the nonempty feasible decision-making. In an exhaustive definition, the improvisation process of a new harmony vector \( {\mathrm{x}}_m^{\mathrm{new}}=\left({x}_{m,1}^{\mathrm{new}},\dots, {x}_{m,v}^{\mathrm{new}},\dots, {x}_{m,\mathrm{NDV}}^{\mathrm{new}}\right) \) in the SS-HSA can be expressed by three rules: (1) harmony memory consideration; (2) pitch adjustment; and, (3) random selection.
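The three methods combine into a single probabilistic decision per decision-making variable, sketched below for the continuous case. This is a simplified illustration; the function name and the clipping-free treatment of the adjusted pitch are assumptions.

```python
import random

def improvise_variable(hm_column, lo, hi, hmcr, par, bw):
    """Choose one variable's value for a new harmony vector (continuous case).

    hm_column: values of this variable across all harmonies stored in the HM
    """
    if random.random() < hmcr:
        # Rule 1: harmony memory consideration
        x = random.choice(hm_column)
        if random.random() < par:
            # Rule 2: pitch adjustment to a neighboring value
            x += random.choice((-1, 1)) * random.random() * bw
    else:
        # Rule 3: random selection from the entire feasible range
        x = lo + random.random() * (hi - lo)
    return x
```

Note the implied probabilities: a value is taken unchanged from memory with probability HMCR·(1 − PAR), pitch-adjusted with probability HMCR·PAR, and drawn from the whole feasible range with probability 1 − HMCR.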

Rule 1: In the harmony memory consideration rule, the values of a new harmony vector are randomly selected from the available harmony vectors in the \( \mathrm{HM}_m \) with the probability of the HMCR. More precisely, the value of the first decision-making variable from a new harmony vector, \( {x}_{m,1}^{\mathrm{new}} \), is randomly chosen from the available corresponding decision-making variable in the harmony vectors stored in the \( \mathrm{HM}_m \), \( \left({x}_{m,1}^1,\dots, {x}_{m,1}^s,\dots, {x}_{m,1}^{\mathrm{HMS}}\right) \), with the probability of the HMCR. The values for other decision-making variables are also selected in the same way. Applying the harmony memory consideration rule to determine the value of the decision-making variable v from a new harmony vector, \( {x}_{m,v}^{\mathrm{new}} \), is performed using Eq. (3.7):

$$ {x}_{m,v}^{\mathrm{new}}={x}_{m,v}^r;\kern1em \forall \left\{m\in {\Psi}^{\mathrm{MNI}},v\in {\Psi}^{\mathrm{NDV}},r\sim \mathrm{U}\left\{1,2,\dots, \mathrm{HMS}\right\},{\Psi}^{\mathrm{NDV}}={\Psi}^{\mathrm{NCDV}+\mathrm{NDDV}}\right\} $$
(3.7)

Equation (3.7) is employed for continuous and discrete decision-making variables. It is also important to point out that index r is a random integer with a uniform distribution through the set {1, 2,  … , HMS}—r ∼ U{1, 2,  … , HMS}. In other words, in Eq. (3.7), the value of index r is randomly determined through the set of allowable values illustrated by the set {1, 2,  … , HMS}. Determination of this index is represented in accordance with Eq. (3.8):

$$ r=\operatorname{int}\left(\mathrm{U}\left(0,1\right)\cdot \mathrm{HMS}\right)+1 $$
(3.8)

It should be pointed out that other distributions can be utilized for index r, such as (U(0, 1))². The use of this distribution gives rise to the selection of lower values for index r.
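The draw of index r in Eq. (3.8), together with the squared-uniform variant just mentioned, can be sketched in Python as follows (the function name and `biased` flag are illustrative, not from the text):

```python
import random

def sample_hm_index(hms: int, biased: bool = False) -> int:
    """Sample the harmony-memory row index r as in Eq. (3.8).

    With biased=True, the uniform draw is squared, which skews r
    toward the lower (better-ranked) rows of the sorted HM.
    """
    u = random.random()          # U(0, 1)
    if biased:
        u = u * u                # (U(0, 1))^2 favors smaller values of r
    return int(u * hms) + 1      # r in {1, 2, ..., HMS}
```

Because the HM is sorted best-first, biasing r toward smaller values concentrates memory consideration on the better-stored harmonies.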

Rule 2: In the pitch adjustment rule, the values of a new harmony vector, which are randomly selected through the existing harmony vectors in the HMm with the probability of the HMCR, are updated with the probability of the PAR to the available values in the neighborhood of the current values. Put another way, after the value of the first decision-making variable from a new harmony vector, \( {x}_{m,1}^{\mathrm{new}} \), is randomly selected from the available corresponding decision-making variable in the harmony vectors stored in the HMm with the probability of the HMCR, this decision-making variable is updated with the probability of the PAR to one of the available values in the neighborhood of its current value. The update process to one of the available values in the neighborhood for this decision-making variable is done by adding a specific value to its current value. The values for other decision-making variables are also selected in the same way. Applying the pitch adjustment rule to specify the value of the decision-making variable v from a new harmony vector, \( {x}_{m,v}^{\mathrm{new}} \), is carried out by using Eqs. (3.9) and (3.10):

$$ {x}_{m,v}^{\mathrm{new}}={x}_{m,v}^{\mathrm{new}}\pm \mathrm{U}\left(0,1\right)\cdot \mathrm{BW};\kern1em \forall \left\{m\in {\Psi}^{\mathrm{MNI}},v\in {\Psi}^{\mathrm{NCDV}}\right\} $$
(3.9)
$$ {\displaystyle \begin{array}{ll}{x}_{m,v}^{\mathrm{new}}=& {x}_v\left(y+t\right);\\ {}& \forall \left\{m\in {\Psi}^{\mathrm{MNI}},v\in {\Psi}^{\mathrm{NDDV}},y\sim \mathrm{U}\left\{{x}_v(1),\dots, {x}_v\left({w}_v\right),\dots, {x}_v\left({W}_v\right)\right\},t\sim \mathrm{U}\left\{-1,+1\right\}\right\}\end{array}} $$
(3.10)

Equations (3.9) and (3.10) are used for the continuous and discrete decision-making variables, respectively. In Eq. (3.10), t represents the neighborhood index. The neighborhood index t is a random integer with a uniform distribution through the set {−1, +1}—t ∼ U{−1, +1}. In other words, in Eq. (3.10), the value of index t is randomly determined through the set of allowable values illustrated by the set {−1, +1}.
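The two pitch-adjustment cases can be sketched in Python. This is a minimal illustration, assuming the discrete values are stored as an ordered list and that a move past either end of the list is clamped (a boundary rule the text does not specify; function names are ours):

```python
import random

def pitch_adjust_continuous(x_new: float, bw: float) -> float:
    """Eq. (3.9): shift a continuous value by a random fraction of BW,
    with a random sign."""
    sign = random.choice((-1.0, 1.0))
    return x_new + sign * random.random() * bw

def pitch_adjust_discrete(values: list, current_index: int):
    """Eq. (3.10): move to a neighboring entry of the ordered value set.

    t ~ U{-1, +1} is the neighborhood index; the result is clamped so it
    stays inside the allowable set (an assumed boundary rule).
    """
    t = random.choice((-1, 1))
    j = min(max(current_index + t, 0), len(values) - 1)
    return values[j]
```

For a continuous variable the neighborhood is a random offset within ±BW; for a discrete variable it is the adjacent entry of the allowable value set.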

Rule 3: In the random selection rule, the values of a new harmony vector are randomly chosen from the entire space of the nonempty feasible decision-making with the probability of 1 − HMCR. In simple terms, the value of the first decision-making variable from a new harmony vector, \( {x}_{m,1}^{\mathrm{new}} \), is randomly selected from the entire space of the nonempty feasible decision-making with the probability of 1 − HMCR. The values for other decision-making variables are also chosen in the same way. The point to be made here is that the random selection rule was already utilized in sub-stage 2.2 for initialization of the HM. Applying the random selection rule to characterize the value of the decision-making variable v from a new harmony vector, \( {x}_{m,v}^{\mathrm{new}} \), is done using Eqs. (3.11) and (3.12):

$$ {x}_{m,v}^{\mathrm{new}}={x}_v^{\mathrm{min}}+\mathrm{U}\left(0,1\right)\cdot \left({x}_v^{\mathrm{max}}-{x}_v^{\mathrm{min}}\right);\kern1em \forall \left\{m\in {\Psi}^{\mathrm{MNI}},v\in {\Psi}^{\mathrm{NCDV}}\right\} $$
(3.11)
$$ {x}_{m,v}^{\mathrm{new}}={x}_v(y);\kern1em \forall \left\{m\in {\Psi}^{\mathrm{MNI}},v\in {\Psi}^{\mathrm{NDDV}},y\sim \mathrm{U}\left\{{x}_v(1),\dots, {x}_v\left({w}_v\right),\dots, {x}_v\left({W}_v\right)\right\}\right\} $$
(3.12)

Equations (3.11) and (3.12) are used for the continuous and discrete decision-making variables, respectively. As further elucidation, assume that the parameter adjustments for the HMCR and PAR are considered to be 0.75 and 0.65, respectively. First, a random number with a uniform distribution between 0 and 1, U(0, 1), is generated. If the generated random number has a lower value than the value of the HMCR parameter (i.e., 0.75), the value of the first decision-making variable from a new harmony vector, \( {x}_{m,1}^{\mathrm{new}} \), is randomly selected from the available corresponding decision-making variable among the harmony vectors stored in the HMm, \( \left({x}_{m,1}^1,\dots, {x}_{m,1}^s,\dots, {x}_{m,1}^{\mathrm{HMS}}\right) \), with the probability of 0.75. Correspondingly, the value of the first decision-making variable from a new harmony vector, \( {x}_{m,1}^{\mathrm{new}} \), is randomly chosen from the entire space of the nonempty feasible decision-making with the probability of (1 – 0.75), provided that the random number generated has a value higher than the value of the HMCR parameter (0.75).

After the value of the first decision-making variable from a new harmony vector, \( {x}_{m,1}^{\mathrm{new}} \), has been randomly selected from the available corresponding decision-making variable in the harmony vectors stored in the HMm with the probability of 0.75, one more uniform random number between 0 and 1, U(0, 1), is generated. If this random number has a value lower than the value of the PAR parameter (0.65), the value of the first decision-making variable from a new harmony vector, \( {x}_{m,1}^{\mathrm{new}} \), is updated to one of the available values in the neighborhood of its current value chosen from the HMm with the probability of 0.65. Conversely, the value of the first decision-making variable from a new harmony vector, \( {x}_{m,1}^{\mathrm{new}} \), which was randomly selected from the available corresponding decision-making variable in the harmony vectors stored in the HMm, \( \left({x}_{m,1}^1,\dots, {x}_{m,1}^s,\dots, {x}_{m,1}^{\mathrm{HMS}}\right) \), with the probability of 0.75, is not changed if the generated random number has a value higher than the value of the PAR parameter (0.65).

As a general result, the probability that the value of the first decision-making variable from a new harmony vector, \( {x}_{m,1}^{\mathrm{new}} \), is determined by applying the harmony memory consideration, pitch adjustment, and random selection rules is equal to HMCR × (1 − PAR), HMCR × PAR, and 1 − HMCR, respectively. The values for other decision-making variables are also chosen in the same way. Table 3.5 presents the pseudocode pertaining to improvisation of a new harmony vector in the SS-HSA.

Table 3.5 Pseudocode pertaining to improvisation of a new harmony vector in the SS-HSA
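The three rules and their probabilities can be combined into a single per-variable improvisation routine. The Python sketch below handles continuous decision-making variables only and clips pitch-adjusted values back into bounds (an assumed boundary rule); names and signatures are illustrative, not taken from Table 3.5:

```python
import random

def improvise(hm, hmcr, par, bw, bounds):
    """Improvise one new harmony vector from harmony memory `hm`.

    hm:     list of HMS harmony vectors (each a list of floats),
            sorted best-first; continuous variables only.
    bounds: list of (x_min, x_max) pairs, one per variable.
    Per variable, the effective probabilities are: memory consideration
    HMCR*(1-PAR), pitch adjustment HMCR*PAR, random selection 1-HMCR.
    """
    new = []
    for v, (lo, hi) in enumerate(bounds):
        if random.random() < hmcr:
            # Rule 1: harmony memory consideration (Eq. 3.7)
            x = random.choice(hm)[v]
            if random.random() < par:
                # Rule 2: pitch adjustment (Eq. 3.9), clipped to bounds
                x += random.choice((-1.0, 1.0)) * random.random() * bw
                x = min(max(x, lo), hi)
        else:
            # Rule 3: random selection (Eq. 3.11)
            x = lo + random.random() * (hi - lo)
        new.append(x)
    return new
```

With HMCR = 0.75 and PAR = 0.65, as in the worked example above, each variable is copied unchanged from memory with probability 0.2625, pitch-adjusted with probability 0.4875, and drawn at random with probability 0.25.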

3.3.2 Sub-stage 3.2: Update of the HM

After finalization of sub-stage 3.1 and improvisation of a new harmony vector, the update process of the HMm must be carried out in sub-stage 3.2. In this sub-stage, a new harmony vector is evaluated and compared with the worst available harmony vector in the HMm—the harmony vector stored in the HMS row of the HMm—from the perspective of the objective function. If a new harmony vector has a better value than the worst available harmony vector in the HMm, from the perspective of the objective function, this new harmony vector replaces the worst harmony vector available in the HMm; the worst available harmony vector is then eliminated from the HMm. Table 3.6 shows the pseudocode related to the update of the HMm in the SS-HSA.

Table 3.6 Pseudocode related to update of the HMm in the SS-HSA

It should be pointed out that the update process of the HMm is not accomplished if the new harmony vector is not better than the worst available harmony vector in the HMm, from the standpoint of the objective function. After completion of this process, the harmony vectors stored in the HMm must be re-sorted in ascending order based on the value of the objective function—the fitness function. The pseudocode related to sorting the solution vectors stored in the HM was already provided in Table 3.4. Given the dependence of the HM on the improvisation/iteration index of the computational stage—index m—the aforementioned pseudocode must be rewritten according to Table 3.7.

Table 3.7 Pseudocode relevant to sorting the solution vectors stored in the HMm under the SS-HSA
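Sub-stage 3.2 can be sketched as a replace-worst-and-re-sort step. This is a minimal Python illustration for a minimization problem, assuming the HM is kept sorted ascending by objective value so the worst vector occupies the last row (names are ours, not from Tables 3.6 and 3.7):

```python
def update_hm(hm, fitness, new_vector, objective):
    """Replace the worst stored harmony if the new one is better.

    hm:      list of harmony vectors, sorted ascending by objective
             value, so the worst vector sits in the last (HMS-th) row.
    fitness: objective values aligned with `hm`.
    """
    f_new = objective(new_vector)
    if f_new < fitness[-1]:           # better than the worst stored vector
        hm[-1] = new_vector           # the worst vector is eliminated
        fitness[-1] = f_new
        # re-sort ascending so the best harmony is again in row 1
        order = sorted(range(len(hm)), key=lambda i: fitness[i])
        hm[:] = [hm[i] for i in order]
        fitness[:] = [fitness[i] for i in order]
```

Keeping `fitness` alongside `hm` avoids re-evaluating the objective for vectors that are already stored.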

3.3.3 Sub-stage 3.3: Check of the Stopping Criterion of the SS-HSA

After completion of sub-stage 3.2 and an update of the HM, the check process of the stopping criterion of the SS-HSA must be done in sub-stage 3.3. In this sub-stage, the computational efforts of the SS-HSA are terminated when its stopping criterion—the MNI—is satisfied. Otherwise, sub-stages 3.1 and 3.2 are repeated.

3.4 Stage 4: Selection Stage—Selection of the Final Optimal Solution—The Best Harmony

After finalization of stage 3, or accomplishment of the computational stage, the selection of the final optimal solution—the best harmony—must be performed in stage 4. In this stage, the best harmony vector stored in the HM, \( {\mathrm{x}}^1 \), is taken as the final optimal solution. Table 3.8 gives the pseudocode relevant to the selection of the final optimal solution in the SS-HSA. The pseudocode designed in the different stages and sub-stages of the SS-HSA is arranged in a regular sequence and forms the performance-driven architecture of this algorithm. Table 3.9 presents the pseudocode pertaining to the performance-driven architecture of the SS-HSA. Here, sub-stage 3.3—the check process of the stopping criterion of the SS-HSA—is defined by the WHILE loop in the pseudocode pertaining to the performance-driven architecture of the SS-HSA (see Table 3.9).

Table 3.8 Pseudocode relevant to the selection of the final optimal solution in the SS-HSA
Table 3.9 Pseudocode pertaining to performance-driven architecture of the SS-HSA
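The stages described above can be assembled into one loop. The Python sketch below is a rough rendering of the performance-driven architecture for continuous variables only, under a minimization objective and illustrative default parameter values; it is not the pseudocode of Table 3.9:

```python
import random

def ss_hsa(objective, bounds, hms=10, hmcr=0.75, par=0.65, bw=0.1, mni=1000):
    """Single-stage HSA sketch: initialize, improvise, update, select.

    Stage 2: fill the HM with HMS random vectors and sort it.
    Stage 3: improvise a new harmony per iteration and keep it if it
             beats the worst stored vector (the MNI is the stop rule).
    Stage 4: return the best harmony, stored in row 1.
    """
    # Sub-stage 2.2: random initialization of the HM
    hm = [[lo + random.random() * (hi - lo) for lo, hi in bounds]
          for _ in range(hms)]
    hm.sort(key=objective)
    for _ in range(mni):                       # sub-stage 3.3: WHILE loop
        new = []
        for v, (lo, hi) in enumerate(bounds):  # sub-stage 3.1
            if random.random() < hmcr:
                x = hm[random.randrange(hms)][v]
                if random.random() < par:
                    x += random.choice((-1.0, 1.0)) * random.random() * bw
                    x = min(max(x, lo), hi)
            else:
                x = lo + random.random() * (hi - lo)
            new.append(x)
        if objective(new) < objective(hm[-1]): # sub-stage 3.2
            hm[-1] = new
            hm.sort(key=objective)
    return hm[0]                               # stage 4: best harmony
```

Re-sorting the whole HM each time a vector is replaced is wasteful for large HMS but keeps the sketch close to the staged description in the text.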

4 Enhanced Versions of the Single-Stage Computational, Single-Dimensional Harmony Search Algorithm

As previously mentioned, the original SS-HSA was introduced in 2001. Readers interested in a comprehensive discussion of the different applications of the SS-HSA are referred to the work by Manjarres et al. [5]. From 2001 to the present, many enhanced versions of the original SS-HSA have been developed to solve a wide range of optimization problems in the engineering sciences (e.g., electrical, civil, computer, mechanical, and aerospace). In the related literature, different classifications for the enhanced versions of the SS-HSA have been presented. Providing a structural classification for the enhanced versions of the SS-HSA can greatly help interested readers understand how the SS-HSA has been enhanced. By investigating all enhanced versions of the SS-HSA, it can be concluded that all enhancements, from the perspective of implementation, can be broken down into three general categories, as follows:

  • Category 1: Enhancements applied on the SS-HSA from the perspective of the parameter adjustments. The most well-known existing enhanced version of this category is the IHSA.

  • Category 2: Enhancements accomplished on the SS-HSA from the standpoint of a combination of this algorithm with other meta-heuristic optimization algorithms. Enhanced versions of this category are divided into two subcategories.

    • Subcategory 2.1: Enhancements performed on the SS-HSA from the viewpoint of integration of some components associated with other meta-heuristic optimization algorithms in the architecture of the SS-HSA. The best known existing enhanced version of this subcategory is the global-best harmony search algorithm.

    • Subcategory 2.2: Enhancements carried out by the SS-HSA from the perspective of integration of some components pertaining to the SS-HSA in the architecture of other meta-heuristic optimization algorithms. The most well-known existing enhanced version of this subcategory is the adaptive GA using the SS-HSA.

  • Category 3: Enhancements implemented on the SS-HSA from the standpoint of architectural principles. The first existing enhanced version of this category is the MSA.

More detailed descriptions regarding the enhanced versions of the SS-HSA are beyond the scope of this chapter, but the interested reader may look to the work by Moh’d-Alia and Mandava [6] for a thorough discussion of these enhanced versions.

As the IHSA and MSA are widely employed in the second part of this book for comparison purposes, these two existing optimization techniques will be discussed extensively in Sects. 3.5 and 3.6 of this chapter, respectively.

5 Improved Harmony Search Algorithm

As previously mentioned, the IHSA, as the most well-known existing enhanced version of the SS-HSA, was developed by dynamically changing the parameter adjustments in each improvisation/iteration. The architecture of the IHSA is, therefore, quite similar to the architecture of the SS-HSA. In more detail, the IHSA has two main characteristics: a single-stage computational structure and a single-dimensional structure. With that in mind, the IHSA is referred to as the single-stage computational, single-dimensional improved harmony search algorithm (SS-IHSA). In the architecture of the SS-HSA, the PAR and BW parameters play a pivotal role in adjusting the convergence rate of the algorithm to achieve the final optimal solution. Accordingly, desirable performance of the SS-HSA is highly dependent on precise and proper adjustment of these parameters. In view of the fact that the BW parameter can have any value in the range of zero to positive infinity, fine-tuning this parameter is more difficult than the PAR parameter. In the SS-HSA, the values of the PAR and BW parameters are adjusted in stage 2.1 and cannot be changed during subsequent improvisations/iterations. Simply put, the SS-HSA employs invariant values for the PAR and BW parameters in all improvisations/iterations. The main disadvantage of these parameter adjustments appears in the number of iterations required by the SS-HSA to find the final optimal solution. Considering small values for the PAR parameter with large values for the BW parameter can generally bring about a poor performance for the SS-HSA and a significant increase in the number of iterations needed to reach the final optimal solution. 
Although considering smaller values for the BW parameter in the final improvisations/iterations strengthens the probability of more precise adjustment of the solution vectors, taking into account large values for the BW parameter in the initial improvisations/iterations is certainly a necessity for increasing diversity in the solution vectors of the SS-HSA. Similarly, considering large values for the PAR parameter with small values for the BW parameter can usually improve the solutions in the final improvisations/iterations in such a way that the SS-HSA converges towards the optimal solution vector.

In 2007, to overcome the difficulties associated with the invariant values of the BW and PAR parameters, the SS-IHSA was introduced and variant values were employed for the PAR and BW parameters [7]. Given the fact that the different stages in the SS-IHSA are virtually the same as the different stages in the SS-HSA, only the differences caused by the use of variant values for the PAR and BW parameters are referred to here. The major differences between the SS-IHSA and the SS-HSA appear only in sub-stage 2.1 (initialization of the parameters of the algorithm) and in sub-stage 3.1 (improvisation of a new harmony vector). In sub-stage 2.1, the parameter adjustments of the SS-HSA are characterized according to Table 3.1, which is presented in Sect. 3.3.2.1 of this chapter. As is clear from Table 3.1, the SS-HSA considers invariant values for the PAR and BW parameters. In this sub-stage, however, the SS-IHSA replaces the PAR parameter with the minimum and maximum pitch-adjusting rates (PARmin and PARmax) and the BW parameter with the minimum and maximum bandwidths (BWmin and BWmax). The other parameters presented in Table 3.1 remain unchanged for the SS-IHSA. As a result, the detailed descriptions relevant to the adjustment parameters of the SS-IHSA are thoroughly represented in Table 3.10.

Table 3.10 Adjustment parameters of the SS-IHSA

In sub-stage 3.1, unlike the SS-HSA, which uses invariant values for the PAR and BW parameters in the improvisation process of a new harmony vector, the SS-IHSA utilizes the updated values for the PAR and BW parameters in the improvisation process of a new harmony vector. In this sub-stage, the values associated with the PAR and BW parameters are dynamically changed and updated in each improvisation/iteration of the SS-IHSA by using Eqs. (3.13) and (3.14), respectively:

$$ {BW}_m={\mathrm{BW}}^{\mathrm{max}}\cdot \exp \left(\frac{\ln \left({\mathrm{BW}}^{\mathrm{min}}/{\mathrm{BW}}^{\mathrm{max}}\right)}{\mathrm{MNI}}\cdot m\right);\kern1em \forall \left\{m\in {\Psi}^{\mathrm{MNI}}\right\} $$
(3.13)
$$ {PAR}_m={\mathrm{PAR}}^{\mathrm{min}}+\left(\frac{{\mathrm{PAR}}^{\mathrm{max}}-{\mathrm{PAR}}^{\mathrm{min}}}{\mathrm{MNI}}\right)\cdot m;\kern1em \forall \left\{m\in {\Psi}^{\mathrm{MNI}}\right\} $$
(3.14)

In Eq. (3.13), the value of the BWm parameter is represented as an exponential function of the improvisation/iteration index—index m. In this equation, the value of the BWm parameter is exponentially decreased by increasing the value of the improvisation/iteration index.

That is to say, the value of the BWm parameter, as the improvisation/iteration index is altered from zero to the MNI, m ∈ {0 → MNI}, changes exponentially from the value of the BWmax parameter to the value of the BWmin parameter, BWm ∈ {BWmax → BWmin}. In Eq. (3.14), the value of the PARm parameter is expressed as a linear function of the improvisation/iteration index—index m. In this equation, the value of the PARm parameter increases linearly with the value of the improvisation/iteration index. Put simply, the value of the PARm parameter, as the improvisation/iteration index changes from zero to the MNI, m ∈ {0 → MNI}, is linearly altered from the value of the PARmin parameter to the value of the PARmax parameter, PARm ∈ {PARmin → PARmax}. Table 3.11 shows the revised pseudocode associated with improvisation of a new harmony vector in the SS-IHSA. Table 3.12 gives the pseudocode related to the performance-driven architecture of the SS-IHSA.

Table 3.11 Pseudocode associated with improvisation of a new harmony vector in the SS-IHSA
Table 3.12 Pseudocode related to the performance-driven architecture of the SS-IHSA
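The two dynamic schedules can be written directly as functions of the improvisation index, assuming BW decays exponentially from BWmax to BWmin and PAR grows linearly from PARmin to PARmax as described above (function names are illustrative):

```python
import math

def bw_m(m: int, mni: int, bw_min: float, bw_max: float) -> float:
    """BW schedule: exponential decay from bw_max at m=0 toward
    bw_min at m=MNI."""
    return bw_max * math.exp(math.log(bw_min / bw_max) / mni * m)

def par_m(m: int, mni: int, par_min: float, par_max: float) -> float:
    """PAR schedule: linear growth from par_min at m=0 to par_max
    at m=MNI."""
    return par_min + (par_max - par_min) / mni * m
```

Note that `math.log(bw_min / bw_max)` is negative whenever BWmin < BWmax, which is what makes the exponential factor shrink as m grows.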

6 Melody Search Algorithm

In a general sense, playing more than one musical note at a time is referred to as a harmony. The difference in the pitch between the two musical notes is called their interval. Given this definition, consider a few simple scenarios: two-note harmonies have one interval; three-note harmonies have three intervals; and four-note harmonies have six intervals. The impressiveness and diversity of each harmony increase geometrically with the addition of each musical note. More precisely, if the number of musical notes played at a given time increases, the richness and variety of harmony increase, owing to the fact that a combination of musical notes is utilized in order to create a beautiful and pleasant tone or complete song. Harmonies with three or more musical notes are called chords. Chords generally make a harmonic structure or a background mode for a piece of music. In these harmonies, intervals are considered structural blocks of the chords.

That is, in music, harmony is the use of simultaneous pitches or chords. The study of harmony thus involves chords, their construction, and chord progressions, together with the connection principles that link them to a melody. It is important to note that harmony refers to the vertical aspect of the music space, due to the simultaneous playing of the available musical notes in a harmony. In contrast to harmony, a melody is a linear sequence of musical notes that can be recognized by a listener as a single entity. More precisely, a melody consists of a linear sequence of individual pitches or musical notes, one following another in a certain order. An ordered combination of musical notes, then, makes up a song. The point to be made here is that melody refers to the horizontal aspect of the music space, because the available musical notes are played in a linear sequence and read mostly horizontally from left to right. Figure 3.2 depicts the major difference between the structures of harmony and melody. Harmony is able to convey different types of emotions, impulses, and coloring to the melody. Harmony therefore deepens and enriches the melody. Stated another way, if a melody is a boat, harmony is the river along which the boat floats. Where the river is deeper and without stones and obstacles, the boat can move, or flow, more easily—more fluently and beautifully. More detailed descriptions of the concepts pertaining to harmony and melody in music are beyond the scope of this chapter, but the interested reader may look to the works by Martineau [8] and Sturman [9] for an exhaustive discussion of these concepts.

Fig. 3.2
figure 2

The major difference between the structure of harmony and melody

According to what has been described in Sect. 3.4, the SS-HSA and its enhanced versions—categories 1 and 2—have a single-stage computational and one-dimensional structure. These characteristics cause the performance of the SS-HSA and its enhanced versions to deteriorate considerably when solving complicated, real-world, large-scale, non-convex, non-smooth optimization problems of a nonlinear, mixed-integer nature with big data; under such conditions, these algorithms cannot maintain their affordable performance. In order to tackle the disadvantages of the SS-HSA and its enhanced versions, a new meta-heuristic optimization algorithm, referred to as the MSA, was introduced in 2011 [10]. Subsequently, the completed version of this algorithm was presented in 2013 [11]. The MSA is an innovative, population-oriented, meta-heuristic optimization algorithm inspired by the phenomena and concepts of music as well as the principles employed in the SS-HSA.

This newly developed optimization technique basically has a different architecture compared to other meta-heuristic optimization algorithms, because it imitates the process of music performance and interactive relationships among members of a musical group, while each player is looking for the best set of pitches within a melody line. In such a musical group, the presence of multiple players with different tastes, ideas, styles, and experiences under interactive relationships among players can effectively result in attaining the most desirable sequence of pitches more quickly. This process is virtually the same as the optimization process in engineering sciences in which the optimal solution can be explored by evaluating the objective function. Table 3.13 shows the interdependencies of phenomena and concepts of music and the optimization problem modeled by the MSA. As set out in Table 3.13, the concepts of music are equivalently indicated with the concept of the optimization problem modeled by the MSA. With that in mind, each pitch in a particular melody played by a particular player in the musical group, the value of each pitch in a particular melody played by a particular player in the musical group, and the range of each pitch in a particular melody played by a particular player in the musical group are virtually the same as each decision-making variable, value of each decision-making variable, and value range of each decision-making variable, respectively. In the same way, the musical melody played by each existing player in the musical group, aesthetic standard of the audience, and time and practice invested by all existing players in the musical group refer to the solution vector, objective function, and iteration, respectively. 
Moreover, the experience of all existing players in the musical group, the best melody selected from among all melodies played by all existing players in the musical group, and improvisation of all existing players in the musical group are equivalent to the solution vectors matrix, global optimum point, and local and global optimum searches, respectively. By improving the musical melodies played by all existing players in the group at each practice, compared to before practice from the perspective of the aesthetic standard of audience, the solution vector pertaining to the optimization problem is also enhanced in each iteration, compared to the situation prior to each iteration from the standpoint of proximity to the optimal global point. Although the MSA was designed by employing the phenomena and concepts of music and the principles of the SS-HSA, its architecture is entirely different from the SS-HSA.

Table 3.13 Interdependencies of phenomena and concepts of music and the optimization problem modeled by the MSA

Unlike the SS-HSA, which employs a single-stage computational structure, the MSA utilizes a two-stage computational structure in order to achieve the optimal solution: (1) a single computational stage or single improvisation stage (SIS) and (2) a pseudo-group computational stage or pseudo-group improvisation stage (PGIS). In the SIS, each musician, or player, improvises the melody individually, without the influence of the other players in the group. In the PGIS, however, the MSA has a pseudo-group performance. More precisely, in this stage, each player improvises the melody interactively, under the influence of the other players in the group. The different melodies available in the group can direct the players to select better, albeit random, pitches and strengthen the probability of playing a better melody in the next improvisation/iteration. Furthermore, in contrast to the SS-HSA, which uses a single HM, the MSA employs multiple player memories (PMs). The multiple PMs together form a melody memory (MM). As a result, the SS-HSA is referred to as a single-stage (or single-level) computational, one-dimensional optimization technique, because it has a single improvisation stage and a single, or individual, memory. Conversely, the MSA is called a two-stage (or two-level) computational, multi-dimensional, single-homogeneous MSA (TMS-MSA), owing to the fact that it has two improvisation stages and multiple memories. The point to be made here is that understanding characteristics such as the single-homogeneous structure and the pseudo-group computational stage of the TMS-MSA first requires scrutiny of the features employed in the architectures of the proposed two-stage (or two-level) computational, multi-dimensional, single-homogeneous enhanced melody search algorithm (TMS-EMSA) and the proposed SOSA, which are described extensively in Sects. 4.3 and 4.4 of Chap. 4, respectively.
It is also necessary to state that, unlike the SS-HSA in which the feasible range of each continuous decision-making variable is not changed during different improvisations/iterations, the feasible range of each continuous decision-making variable in any improvisation/iteration associated with the PGIS of the TMS-MSA is updated only for random selection.

The performance-driven architecture of the TMS-MSA is generally broken down into five stages [11], as follows:

  • Stage 1—Definition stage: Definition of the optimization problem and its parameters.

  • Stage 2—Initialization stage.

    • Sub-stage 2.1: Initialization of the parameters of the TMS-MSA.

    • Sub-stage 2.2: Initialization of the MM.

  • Stage 3—Single computational stage or SIS.

    • Sub-stage 3.1: Improvisation of a new melody vector by each player.

    • Sub-stage 3.2: Update of each PM.

    • Sub-stage 3.3: Check of the stopping criterion of the SIS.

  • Stage 4—Pseudo-group computational stage or PGIS.

    • Sub-stage 4.1: Improvisation of a new melody vector by each player taking into account the feasible ranges of the updated pitches.

    • Sub-stage 4.2: Update of each PM.

    • Sub-stage 4.3: Update of the feasible ranges of pitches—continuous decision-making variables—for the next improvisation—only for random selection.

    • Sub-stage 4.4: Check of the stopping criterion of the PGIS.

  • Stage 5—Selection stage: Selection of the final optimal solution—the best melody.

6.1 Stage 1: Definition Stage—Definition of the Optimization Problem and its Parameters

In order to solve an optimization problem using the TMS-MSA, stage 1 is needed to precisely describe the optimization problem and its parameters. In mathematical terms, the standard form of an optimization problem can generally be expressed using Eqs. (1.1) and (1.2), which were presented in Sect. 1.2.1 of Chap. 1. However, because the original version of the TMS-MSA was developed to solve only single-objective optimization problems with continuous decision-making variables, the standard form of an optimization problem must be rewritten, as shown in Eqs. (3.15) and (3.16):

$$ {\displaystyle \begin{array}{l}\underset{\mathrm{x}\in \mathrm{X}}{\operatorname{Minimize}}\kern1.25em \mathrm{F}\left(\mathrm{x}\right)=\left[f\left(\mathrm{x}\right)\right]\\ {}\kern5.25em \mathrm{subject}\ \mathrm{to}:\\ {}\kern5em \mathrm{G}\left(\mathrm{x}\right)=\left[{g}_1\left(\mathrm{x}\right),\dots, {g}_b\left(\mathrm{x}\right),\dots, {g}_{\mathrm{B}}\left(\mathrm{x}\right)\right]=0;\kern1em \forall \left\{\mathrm{B}\ge 0\right\},\kern1em \forall \left\{b\in {\Psi}^{\mathrm{B}}\right\}\\ {}\kern5.25em \mathrm{H}\left(\mathrm{x}\right)=\left[{h}_1\left(\mathrm{x}\right),\dots, {h}_e\left(\mathrm{x}\right),\dots, {h}_{\mathrm{E}}\left(\mathrm{x}\right)\right]\le 0;\kern1em \forall \left\{\mathrm{E}\ge 0\right\},\kern1em \forall \left\{e\in {\Psi}^{\mathrm{E}}\right\}\end{array}} $$
(3.15)
$$ {\displaystyle \begin{array}{ll}\mathrm{x}=& \left[{x}_1,\dots, {x}_v,\dots, {x}_{\mathrm{NCDV}}\right];\kern1em \forall \left\{v\in {\Psi}^{\mathrm{NCDV}},\mathrm{x}\in \mathrm{X}\right\},\\ {}& \kern0.2em \forall \left\{\left.{x}_v^{\mathrm{min}}\le {x}_v\le {x}_v^{\mathrm{max}}\right|v\in {\Psi}^{\mathrm{NCDV}}\right\}\end{array}} $$
(3.16)

The explanations associated with the parameters and variables in Eqs. (3.15) and (3.16) were previously presented in Sect. 1.2.1 of Chap. 1. In the SIS and PGIS of the TMS-MSA, each player—without and with the influence of the other players in the musical group, respectively—explores the entire space of the nonempty feasible decision-making in order to find the optimal decision-making (solution) vector. The optimal decision-making vector has the lowest possible value for the objective function given in Eq. (3.15). Basically, each player in the group merely takes into account the objective function given in Eq. (3.15) in order to solve the optimization problem presented in Eqs. (3.15) and (3.16). Nevertheless, if the solution vector determined by the corresponding player results in any violation of any equality or inequality constraints provided in Eq. (3.15), the player would have to utilize one of the following two processes, depending on the standpoint of the decision maker in dealing with this solution vector:

  • First process: The corresponding player ignores the solution vector.

  • Second process: The corresponding player considers the solution vector by applying a specified penalty coefficient to the objective function of the optimization problem.
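The second process can be sketched as a wrapper that folds constraint violations into the objective. This is a minimal illustration; the penalty coefficient `rho` and the absolute-value/hinge violation measures are assumptions, since the text does not fix a specific penalty form:

```python
def penalized_objective(f, g_list, h_list, rho=1e6):
    """Wrap objective f with a penalty for constraint violations.

    f:      objective function f(x) to minimize.
    g_list: equality constraints, each g(x) = 0 when satisfied.
    h_list: inequality constraints, each h(x) <= 0 when satisfied.
    rho:    assumed penalty coefficient chosen by the decision maker.
    """
    def wrapped(x):
        violation = sum(abs(g(x)) for g in g_list)          # |g(x)|
        violation += sum(max(0.0, h(x)) for h in h_list)    # hinge on h(x)
        return f(x) + rho * violation
    return wrapped
```

A feasible vector is evaluated by f alone; an infeasible one is penalized in proportion to how far it violates the constraints, so the search is steered back toward the feasible region without discarding the vector outright (the first process).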

6.2 Stage 2: Initialization Stage

After finalization of stage 1 and a thorough mathematical description of the optimization problem, stage 2 must be processed. This stage is organized into two sub-stages: initialization of the parameters of the TMS-MSA and initialization of the MM, which are described at length below.

6.2.1 Sub-stage 2.1: Initialization of the Parameters of the TMS-MSA

In sub-stage 2.1, the parameter adjustments of the TMS-MSA should be initialized with specific values. Table 3.14 gives a detailed description of the parameter adjustments related to the TMS-MSA. In the TMS-MSA, the MM is a place for storing the solution vectors of all existing players in the musical group. The number of players (PN) parameter represents the number of existing players in the group. Each player in the group has a memory defined by a PM parameter. The memory of player p in the group is a place for storing the corresponding player’s solution vectors, and the multiple PMs together form the MM. The player memory size (PMS) describes the number of solution vectors stored in a player’s memory. In the improvisation process of a new melody vector by a particular player under sub-stages 3.1 and 4.1, the player memory considering rate (PMCR) is used to specify whether the value of a decision-making variable relevant to a new melody vector played by the corresponding player is derived from the player’s PM or from the entire space of the nonempty feasible decision-making. In other words, the PMCR indicates the rate at which the value of a decision-making variable from a new melody vector played by a particular player is randomly chosen according to its PM. Conversely, 1 − PMCR expresses the rate at which the value of a decision-making variable from a new melody vector played by a particular player is randomly selected from the entire space of the nonempty feasible decision-making. The PARmin and PARmax parameters are used to calculate the PAR parameter in iteration m of the SIS and PGIS—PARm.

Table 3.14 Adjustment parameters of the TMS-MSA

This parameter is dynamically changed and updated in each improvisation/iteration of the SIS and PGIS. On this basis, in the improvisation process of a new melody vector by a particular player under sub-stages 3.1 and 4.1, the PARm is employed to determine whether the value of a decision-making variable chosen from the corresponding PM needs an update to its neighbor’s value or not. Put simply, the PARm clarifies the rate at which the value of a decision-making variable selected with the PMCR rate from the corresponding PM is changed. Therefore, 1 − PARm addresses the rate at which the value of a decision-making variable chosen with the PMCR rate from the corresponding PM is not altered. The BWmin and the BWmax parameters are employed to determine the BW parameter in the iteration m of the SIS and PGIS—BWm. This parameter is dynamically changed and updated in each improvisation/iteration of the SIS and PGIS. The BWm is taken to be an arbitrary length and is merely defined for continuous decision-making variables. Detailed descriptions of the BWm were provided in sub-stage 2.1 of the SS-HSA (see Sect. 3.3.2.1 of this chapter). The NCDV depends on the optimization problem given in Eqs. (3.13) and (3.14). This parameter specifies the dimension of the melody vector in the TMS-MSA. The maximum number of improvisations/iterations of the SIS (MNI-SIS) denotes the number of times the single computational stage is repeated in the TMS-MSA. Similarly, the maximum number of improvisations/iterations of the PGIS (MNI-PGIS) signifies the number of times the pseudo-group computational stage is repeated in the TMS-MSA. It should be pointed out that each player in the musical group improvises one melody individually, without the influence of any other players, in each improvisation/iteration of the SIS. The corresponding player also improvises one melody interactively, with the influence of other players, in each improvisation/iteration of the PGIS.
The sum of the MNI-SIS and the MNI-PGIS is employed as a stopping criterion in the TMS-MSA.
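The adjustment parameters of Table 3.14 can be collected in a small container like the following sketch; the class and field names, as well as the example values, are illustrative assumptions, and only the parameter roles come from the text:

```python
from dataclasses import dataclass

@dataclass
class TMSMSAParams:
    PN: int          # number of players in the musical group
    PMS: int         # player memory size (solution vectors per PM)
    PMCR: float      # player memory considering rate
    PAR_min: float   # lower bound used to compute PAR_m
    PAR_max: float   # upper bound used to compute PAR_m
    BW_min: float    # lower bound used to compute BW_m
    BW_max: float    # upper bound used to compute BW_m
    NCDV: int        # number of continuous decision-making variables
    MNI_SIS: int     # maximum improvisations of the single computational stage
    MNI_PGIS: int    # maximum improvisations of the pseudo-group computational stage

    @property
    def MNI_total(self) -> int:
        # The sum of MNI-SIS and MNI-PGIS acts as the overall stopping criterion.
        return self.MNI_SIS + self.MNI_PGIS

# Example values are arbitrary, chosen only to show the shape of the settings.
params = TMSMSAParams(PN=5, PMS=10, PMCR=0.9, PAR_min=0.3, PAR_max=0.95,
                      BW_min=1e-4, BW_max=1.0, NCDV=2,
                      MNI_SIS=500, MNI_PGIS=1500)
```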

6.2.2 Sub-stage 2.2: Initialization of the MM

After completion of sub-stage 2.1 and parameter adjustments of the TMS-MSA, the MM must be initialized in sub-stage 2.2. As previously mentioned, the MM is composed of multiple PMs. With that in mind, Fig. 3.3 shows the architecture of the MM in the TMS-MSA. Given the above descriptions, the MM matrix with the dimensions of {PMS} ⋅ {(NCDV + 1) ⋅ PN} consists of multiple PM submatrices with the dimensions of {PMS} ⋅ {NCDV + 1}.

Fig. 3.3
figure 3

The architecture of the MM in the TMS-MSA

In the TMS-MSA, the number of PMs forming the MM is specified by the PN parameter. The MM matrix and PM submatrices are filled with a large number of solution vectors generated randomly and based on Eqs. (3.15) through (3.17):

$$ \mathrm{MM}=\left[{PM}_1\kern0.5em \cdots \kern0.5em {PM}_p\kern0.5em \cdots \kern0.5em {PM}_{\mathrm{PN}}\right];\kern1em \forall \left\{p\in {\Psi}^{\mathrm{PN}}\right\} $$
(3.15)
$$ {\displaystyle \begin{array}{l}{PM}_p=\left[\begin{array}{c}{\mathrm{x}}_p^1\\ {}\vdots \\ {}{\mathrm{x}}_p^s\\ {}\vdots \\ {}{\mathrm{x}}_p^{\mathrm{PMS}}\end{array}\right]=\left[\begin{array}{ccccccc}{x}_{p,1}^1& \cdots & {x}_{p,v}^1& \cdots & {x}_{p,\mathrm{NCDV}}^1& \mid & f\left({\mathrm{x}}_p^1\right)\\ {}\vdots & & \vdots & & \vdots & & \vdots \\ {}{x}_{p,1}^s& \cdots & {x}_{p,v}^s& \cdots & {x}_{p,\mathrm{NCDV}}^s& \mid & f\left({\mathrm{x}}_p^s\right)\\ {}\vdots & & \vdots & & \vdots & & \vdots \\ {}{x}_{p,1}^{\mathrm{PMS}}& \cdots & {x}_{p,v}^{\mathrm{PMS}}& \cdots & {x}_{p,\mathrm{NCDV}}^{\mathrm{PMS}}& \mid & f\left({\mathrm{x}}_p^{\mathrm{PMS}}\right)\end{array}\right];\\ {}\\ {}\forall \left\{p\in {\Psi}^{\mathrm{PN}},v\in {\Psi}^{\mathrm{NCDV}},s\in {\Psi}^{\mathrm{PMS}}\right\}\end{array}} $$
(3.16)
$$ {x}_{p,v}^s={x}_v^{\mathrm{min}}+\mathrm{U}\left(0,1\right)\cdot \left({x}_v^{\mathrm{max}}-{x}_v^{\mathrm{min}}\right);\kern1em \forall \left\{p\in {\Psi}^{\mathrm{PN}},v\in {\Psi}^{\mathrm{NCDV}},s\in {\Psi}^{\mathrm{PMS}}\right\} $$
(3.17)

Equation (3.15) denotes the MM. Equation (3.16) represents the memory relevant to existing player p (PMp) in the musical group. In Eq. (3.17), U(0, 1) indicates a random number with a uniform distribution between 0 and 1. Furthermore, Eq. (3.17) tells us that the value of the continuous decision-making variable v from melody vector s stored in the memory related to player p (\( {x}_{p,v}^s \)) is randomly specified by the set of candidate-admissible values for this decision-making variable, limited by lower bound \( {x}_v^{\mathrm{min}} \) and upper bound \( {x}_v^{\mathrm{max}} \). Table 3.15 gives the pseudocode relevant to initialization of the entire set of PMs or MM in the TMS-MSA. After filling all of the PMs or the MM with random solution vectors, the solution vectors stored in each PM must be sorted from the lowest value to the highest value—in an ascending order—with respect to the value of the objective function of the optimization problem. Table 3.16 illustrates the pseudocode pertaining to sorting the solution vectors stored in the PMs or MM in the TMS-MSA.

Table 3.15 Pseudocode relevant to initialization of the entire set of PMs or MM in the TMS-MSA
Table 3.16 Pseudocode pertaining to sorting the solution vectors stored in the PMs or MM in the TMS-MSA
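The initialization and sorting steps of Eqs. (3.15) through (3.17) and Tables 3.15 and 3.16 can be sketched in Python as follows; the function and variable names are illustrative assumptions, and each row stores the NCDV variable values followed by the objective value in the last column:

```python
import random

def init_mm(f, x_min, x_max, PN, PMS):
    """Build the MM as a list of PN player memories (Eqs. (3.15)-(3.17))."""
    NCDV = len(x_min)
    mm = []
    for _ in range(PN):                      # one PM per player (Eq. (3.15))
        pm = []
        for _ in range(PMS):                 # PMS solution vectors (Eq. (3.16))
            # Eq. (3.17): x = x_min + U(0,1) * (x_max - x_min), per variable
            x = [x_min[v] + random.random() * (x_max[v] - x_min[v])
                 for v in range(NCDV)]
            pm.append(x + [f(x)])            # objective value in the last column
        # Table 3.16: sort each PM ascending by objective value (minimization)
        pm.sort(key=lambda row: row[-1])
        mm.append(pm)
    return mm

# usage: minimize the sphere function on [-5, 5]^2
sphere = lambda x: sum(xi * xi for xi in x)
mm = init_mm(sphere, [-5.0, -5.0], [5.0, 5.0], PN=3, PMS=4)
```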

6.3 Stage 3: Single Computational Stage or SIS

After finalization of stage 2 and initialization of the parameters of the TMS-MSA and the MM, the single computational stage, or SIS, must be completed. This stage contains three sub-stages: (1) improvisation of a new melody vector by each player; (2) update of each PM; and (3) check of the stopping criterion of the SIS, which are described below.

The mathematical equations expressed at this stage must depend on the improvisation/iteration index—index m—due to the repeatability of the SIS in the TMS-MSA.

6.3.1 Sub-stage 3.1: Improvisation of a New Melody Vector by Each Player

In sub-stage 3.1, the improvisation process of a new melody vector by each player in the musical group must be carried out. In this sub-stage, each player improvises a new melody vector individually, without the influence of other players. In the TMS-MSA, a new melody vector or a new melody line played by player p—\( {\mathrm{x}}_{m,p}^{\mathrm{new}}=\left({x}_{m,p,1}^{\mathrm{new}},\dots, {x}_{m,p,v}^{\mathrm{new}},\dots, {x}_{m,p,\mathrm{NCDV}}^{\mathrm{new}}\right) \)—is generated through a new alternative improvisation procedure (AIP) established according to the main concepts of improvisation of a harmony vector in the SS-HSA. The AIP will be explained in Sect. 3.6.6. The improvisation process of a new melody vector is carried out by other players in the same way.

6.3.2 Sub-stage 3.2: Update of Each PM

After completion of sub-stage 3.1 and improvisation of a new melody vector by each player in the musical group, the update process of the PMs or MM must be done in sub-stage 3.2. To illustrate, consider the memory relevant to player p (PMm,p). In this sub-stage, a new melody vector played by player p—\( {\mathrm{x}}_{m,p}^{\mathrm{new}}=\left({x}_{m,p,1}^{\mathrm{new}},\dots, {x}_{m,p,v}^{\mathrm{new}},\dots, {x}_{m,p,\mathrm{NCDV}}^{\mathrm{new}}\right) \)—is evaluated and compared with the worst available melody vector in the PMm,p—the melody vector stored in the PMS row of the PMm,p—from the perspective of the objective function. If the new melody vector played by player p has a better value than the worst available melody vector in the PMm,p, from the standpoint of the objective function, this new melody vector replaces the worst available melody vector in the PMm,p; the worst available melody vector is then eliminated from the PMm,p. This process is also performed for other players in the group. Table 3.17 gives the pseudocode associated with the update of the memory of all existing players in the musical group or the update of the MMm. The update process of the PMm,p is not performed if the new melody vector played by player p in the musical group is not better than the worst available melody vector in its memory, from the standpoint of the objective function. After completion of this process, melody vectors stored in the memory of all existing players in the musical group or the MMm must be re-sorted based on the value of the objective function—fitness function—in an ascending order.

Table 3.17 Pseudocode associated with the update of the memory of all existing players in the musical group or the update of the MMm in the TMS-MSA

The pseudocode pertaining to sorting the solution vectors stored in the memory of all existing players in the musical group or the MM was formerly presented in Table 3.16. Given the dependence of each PM or more comprehensively the MM to the improvisation/iteration index of the SIS—index m—this pseudocode must be rewritten according to Table 3.18.

Table 3.18 Pseudocode pertaining to sorting the solution vectors stored in the PMs or MMm in the TMS-MSA
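The replace-the-worst update of Table 3.17 and the re-sort of Table 3.18 can be sketched as follows, assuming minimization and a row layout in which the objective value occupies the last column; the names are illustrative:

```python
def update_pm(pm, x_new, f_new):
    """Replace the worst stored melody with the new one only if it is better."""
    # Rows are kept ascending, so the worst vector sits in the last (PMS-th) row.
    if f_new < pm[-1][-1]:
        pm[-1] = x_new + [f_new]             # replace the worst melody vector
        pm.sort(key=lambda row: row[-1])     # re-sort ascending (Table 3.18)
    return pm

pm = [[0.0, 0.0, 1.0], [1.0, 1.0, 4.0]]      # two stored vectors, f in last column
update_pm(pm, [0.5, 0.5], 2.0)               # 2.0 < 4.0, so the worst row is replaced
```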

6.3.3 Sub-stage 3.3: Check of the Stopping Criterion of the SIS

After completion of sub-stage 3.2 and an update of all PMs, the process of checking the stopping criterion of the single computational stage must be accomplished. If the stopping criterion of the SIS—the MNI-SIS—is satisfied, its computational efforts are terminated. Otherwise, sub-stages 3.1 and 3.2 are repeated.

6.4 Stage 4: Pseudo-Group Computational Stage or PGIS

After finalization of stage 3, or accomplishment of the SIS, the pseudo-group computational stage or the PGIS must be performed. This stage consists of four sub-stages: (1) improvisation of a new melody vector by each player taking into account the feasible ranges of the updated pitches; (2) update of each PM; (3) update of the feasible ranges of pitches—continuous decision-making variables—for the next improvisation—only for random selection; and (4) check of the stopping criterion of the PGIS. The mathematical equations expressed at this stage must depend on the improvisation/iteration index—index m—due to the repeatability of the PGIS in the TMS-MSA.

6.4.1 Sub-stage 4.1: Improvisation of a New Melody Vector by Each Player Taking into Account the Feasible Ranges of the Updated Pitches

In sub-stage 4.1, the improvisation process of a new melody vector by each player in the group must be performed. In this sub-stage, each player improvises a new melody vector interactively with the influence of other players. In other words, in this sub-stage, player p improvises a new melody vector—\( {\mathrm{x}}_{m,p}^{\mathrm{new}}=\left({x}_{m,p,1}^{\mathrm{new}},\dots, {x}_{m,p,v}^{\mathrm{new}},\dots, {x}_{m,p,\mathrm{NCDV}}^{\mathrm{new}}\right) \)—by the AIP taking into account the feasible range of the pitches—continuous decision-making variables—which are updated in different improvisations/iterations of the PGIS. The improvisation process of a new melody vector is carried out by other players in the same way.

6.4.2 Sub-stage 4.2: Update of Each PM

After completion of sub-stage 4.1 and improvisation of a new melody vector by each player in the group, the update process of the PMs or MM must be performed. This process is similar to sub-stage 3.2 of the SIS, which was explained in Sect. 3.6.3.2.

6.4.3 Sub-stage 4.3: Update of the Feasible Ranges of Pitches—Continuous Decision-Making Variables—for the Next Improvisation—Only for Random Selection

This sub-stage is a major part of the architecture of the TMS-MSA, which can give rise to a remarkable difference between this optimization technique and the SS-HSA. In the SS-HSA, the feasible ranges of continuous decision-making variables in the harmony vector are not changed during different improvisations/iterations. In the TMS-MSA, however, the feasible ranges of continuous decision-making variables in the melody vector are altered and updated during each improvisation/iteration of the PGIS, but only for random selection. This means that the lower bound of the continuous decision-making variable v (\( {x}_v^{\mathrm{min}} \)) and the upper bound of the continuous decision-making variable v (\( {x}_v^{\mathrm{max}} \)) in the PGIS depend on the improvisation/iteration index of the PGIS and change in the form of \( {x}_{m,v}^{\mathrm{min}} \) and \( {x}_{m,v}^{\mathrm{max}} \), respectively. Figure 3.4 displays the process of updating the feasible ranges of continuous decision-making variables in the TMS-MSA. Table 3.19 provides the pseudocode relevant to the update of the feasible ranges of the continuous decision-making variables in the TMS-MSA.

Fig. 3.4
figure 4

Update of the feasible ranges of the continuous decision-making variables in the TMS-MSA

Table 3.19 Pseudocode relevant to the update of the feasible ranges of the continuous decision-making variables in the TMS-MSA
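The exact update rule for the feasible ranges is given in Fig. 3.4 and Table 3.19, which are not reproduced here. As a labeled assumption, the sketch below narrows each variable's range to the envelope of the best melody vectors across all players, which is one plausible reading of the pseudo-group interaction; the names are illustrative, and each PM is assumed sorted ascending so that its first row is the best melody:

```python
def update_ranges(mm, NCDV):
    """ASSUMED rule: bound each variable by the best melodies of all players."""
    x_min_m, x_max_m = [], []
    for v in range(NCDV):
        # value of variable v in each player's best stored melody (row 0)
        best_vals = [pm[0][v] for pm in mm]
        x_min_m.append(min(best_vals))
        x_max_m.append(max(best_vals))
    return x_min_m, x_max_m

mm = [[[0.2, 3.0, 1.0]], [[0.8, 2.0, 1.5]]]  # two players, one stored vector each
lo, hi = update_ranges(mm, NCDV=2)
```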

6.4.4 Sub-stage 4.4: Check of the Stopping Criterion of the PGIS

After finalization of sub-stage 4.3 and the update of the feasible ranges of the continuous decision-making variables for the next improvisation of the PGIS, the checking process of the stopping criterion of this computational stage must be accomplished. In this sub-stage, the computational efforts of the PGIS are terminated if its stopping criterion—the MNI-PGIS—is satisfied. Otherwise, sub-stages 4.1, 4.2, and 4.3 are repeated.

6.5 Stage 5: Selection Stage—Selection of the Final Optimal Solution—The Best Melody

After completion of stage 4, or accomplishment of the PGIS, the selection of the final optimal solution must be made in stage 5. In this stage, the best melody vector stored in the memory of each existing player in the musical group is determined. Then, the best melody vector is selected from among these melody vectors as the final optimal solution. Table 3.20 shows the pseudocode related to the selection of the final optimal solution in the TMS-MSA.

Table 3.20 Pseudocode related to the selection of the final optimal solution in the TMS-MSA
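The selection logic of Table 3.20 reduces to taking the best stored melody of each player and then the best among those. A minimal sketch, assuming minimization and ascending-sorted PMs with the objective value in the last column; the names are illustrative:

```python
def select_final(mm):
    """Stage 5: best melody of each player, then the best of those."""
    per_player_best = [pm[0] for pm in mm]       # row 0 is each player's best
    return min(per_player_best, key=lambda row: row[-1])

mm = [[[1.0, 2.0, 5.0]], [[0.0, 0.0, 0.5]], [[3.0, 1.0, 2.0]]]
best = select_final(mm)
```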

6.6 Alternative Improvisation Procedure

As indicated earlier, in the TMS-MSA, player p in the group improvises a new melody vector—\( {\mathrm{x}}_{m,p}^{\mathrm{new}}=\left({x}_{m,p,1}^{\mathrm{new}},\dots, {x}_{m,p,v}^{\mathrm{new}},\dots, {x}_{m,p,\mathrm{NCDV}}^{\mathrm{new}}\right) \)—using the AIP. This procedure was developed with regard to the fundamental concepts of improvisation of a harmony vector in the SS-HSA. Implementing the AIP by player p is carried out according to three rules: (1) player memory consideration; (2) pitch adjustment; and (3) random selection.

Rule 1: In consideration of a player’s memory, the values of the new melody vector for player p are randomly selected from the melody vectors stored in the PMm,p with the probability of the PMCR. In this rule, two principles are alternately employed, with each principle consisting of a linear combination of a decision-making variable chosen from the PMm,p and a ratio of the BWm. If the first principle is activated, the value of the first decision-making variable from the new melody vector played by player p, \( {x}_{m,p,1}^{\mathrm{new}} \), is randomly selected from the available corresponding continuous decision-making variable in the melody vectors stored in the PMm,p—\( \left({x}_{m,p,1}^1,\dots, {x}_{m,p,1}^s,\dots, {x}_{m,p,1}^{\mathrm{PMS}}\right) \)—with the probability of the PMCR and updated by the BWm parameter. Conversely, the value of the first decision-making variable from the new melody vector played by player p, \( {x}_{m,p,1}^{\mathrm{new}} \), is randomly chosen from the entire set of available continuous decision-making variables stored in the PMm,p—\( \left\{\left({x}_{m,p,1}^1,\dots, {x}_{m,p,1}^s,\dots, {x}_{m,p,1}^{\mathrm{PMS}}\right),\dots, \left({x}_{m,p,v}^1,\dots, {x}_{m,p,v}^s,\dots, {x}_{m,p,v}^{\mathrm{PMS}}\right),\dots, \left({x}_{m,p,\mathrm{NCDV}}^1,\dots, {x}_{m,p,\mathrm{NCDV}}^s,\dots, {x}_{m,p,\mathrm{NCDV}}^{\mathrm{PMS}}\right)\right\} \)—with the probability of the PMCR and updated by the BWm parameter, provided that the first principle is not activated or the second principle is activated. The values for other continuous decision-making variables are also selected in the same way.

Implementing the player memory consideration rule to specify the value of the continuous decision-making variable v from a new melody vector played by player p, \( {x}_{m,p,v}^{\mathrm{new}} \), is done using Eqs. (3.18) and (3.19):

$$ {\displaystyle \begin{array}{l}{x}_{m,p,v}^{\mathrm{new}}={x}_{m,p,v}^r\pm \mathrm{U}\left(0,1\right)\cdot {BW}_m;\\ {}\forall \left\{m\in {\Psi}^{\left(\mathrm{MNI}\hbox{-} \mathrm{SIS}\right)+\left(\mathrm{MNI}\hbox{-} \mathrm{PGIS}\right)},p\in {\Psi}^{\mathrm{PN}},v\in {\Psi}^{\mathrm{NCDV}},r\sim \mathrm{U}\left\{1,2,\dots, \mathrm{PMS}\right\}\right\}\end{array}} $$
(3.18)
$$ {\displaystyle \begin{array}{ll}{x}_{m,p,v}^{\mathrm{new}}=& {x}_{m,p,k}^r\pm \mathrm{U}\left(0,1\right)\cdot {BW}_m;\\ {}& \forall \left\{m\in {\Psi}^{\left(\mathrm{MNI}\hbox{-} \mathrm{SIS}\right)+\left(\mathrm{MNI}\hbox{-} \mathrm{PGIS}\right)},p\in {\Psi}^{\mathrm{PN}},v\in {\Psi}^{\mathrm{NCDV}},r\sim \mathrm{U}\left\{1,2,\dots, \mathrm{PMS}\right\},k\sim \mathrm{U}\left\{1,2,\dots, \mathrm{NCDV}\right\}\right\}\end{array}} $$
(3.19)

Equations (3.18) and (3.19) are used for the first and second principles of the player memory consideration rule, respectively. Here, index r is a random integer with a uniform distribution through the set {1, 2,  … , PMS}—r ∼ U{1, 2,  … , PMS}—and index k is a random integer with a uniform distribution through the set {1, 2,  … , NCDV}—k ∼ U{1, 2,  … , NCDV}. Put another way, in Eq. (3.18), the value of index r is randomly specified through the set of admissible values demonstrated by the set {1, 2,  … , PMS}. Determination of this index is elucidated on the basis of Eq. (3.20):

$$ r=\operatorname{int}\left(\mathrm{U}\left(0,1\right)\cdot \mathrm{PMS}\right)+1 $$
(3.20)

In Eq. (3.19), the value of index k is also randomly characterized through the set of permissible values displayed by the set {1, 2,  … , NCDV}. Determination of this index is described based on Eq. (3.21):

$$ k=\operatorname{int}\left(\mathrm{U}\left(0,1\right)\cdot \mathrm{NCDV}\right)+1 $$
(3.21)

The point to be made here is that other distributions can be employed for indexes r and k, such as \( {\left(\mathrm{U}\left(0,1\right)\right)}^2 \). The utilization of this distribution results in the selection of lower values for these indexes. In the player memory consideration rule, the first and second principles can effectively give rise to a more desirable convergence and a more substantial increase in the diversity of the generated solutions for the TMS-MSA. Applying the player memory consideration rule is also accomplished for other players in the same way.
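The two memory-consideration principles, Eqs. (3.18) through (3.21), can be sketched as follows; the names are illustrative assumptions, and the random indices are drawn 0-based here rather than 1-based as in Eqs. (3.20) and (3.21):

```python
import random

def memory_consideration(pm, v, BW_m, NCDV, PMS, first_principle=True):
    """Eqs. (3.18)/(3.19): draw a stored value and perturb it within BW_m."""
    r = int(random.random() * PMS)           # random row, cf. Eq. (3.20)
    if first_principle:
        base = pm[r][v]                      # Eq. (3.18): same variable, random row
    else:
        k = int(random.random() * NCDV)      # random variable, cf. Eq. (3.21)
        base = pm[r][k]                      # Eq. (3.19): random variable, random row
    # +/- U(0,1) * BW_m: perturb within the bandwidth in a random direction
    return base + random.choice([-1.0, 1.0]) * random.random() * BW_m

# usage: pm rows hold NCDV variable values plus the objective in the last column
```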

Rule 2: In the pitch adjustment rule, the values of a new melody vector played by player p, randomly selected from among the existing melody vectors in the PMm,p with the probability of the PMCR, are updated with the probability of the PARm. More precisely, after the value of the first continuous decision-making variable from a new melody vector by player p, \( {x}_{m,p,1}^{\mathrm{new}} \), is randomly chosen from the melody vectors stored in the PMm,p with the probability of the PMCR, this continuous decision-making variable is updated with the probability of the PARm. The update process for this continuous decision-making variable is performed by replacing it with the value of the first continuous decision-making variable from the best melody vector available in the PMm,p, \( {x}_{m,p,1}^{\mathrm{best}} \). The values for other continuous decision-making variables are also updated in the same way. Implementing the pitch adjustment rule to determine the value of the continuous decision-making variable v from a new melody vector played by player p, \( {x}_{m,p,v}^{\mathrm{new}} \), is done by using Eq. (3.22):

$$ {x}_{m,p,v}^{\mathrm{new}}={x}_{m,p,v}^{\mathrm{best}};\kern1em \forall \left\{m\in {\Psi}^{\left(\mathrm{MNI}\hbox{-} \mathrm{SIS}\right)+\left(\mathrm{MNI}\hbox{-} \mathrm{PGIS}\right)},p\in {\Psi}^{\mathrm{PN}},v\in {\Psi}^{\mathrm{NCDV}}\right\} $$
(3.22)

Applying the pitch adjustment rule is also carried out for other players in the same way.

Rule 3: In the random selection rule, the values of a new melody vector played by player p are randomly selected from the entire space of the nonempty feasible decision-making with the probability of 1 − PMCR. Here, the random selection rule is organized in accordance with two different principles. The first and second principles are activated in the SIS and PGIS, respectively. If the first principle of the random selection rule is activated, the value of the first continuous decision-making variable from the new melody vector played by player p, \( {x}_{m,p,1}^{\mathrm{new}} \), is randomly selected from the entire space of the nonempty feasible decision-making related to this decision-making variable with the probability of 1 − PMCR. Here, the entire space of the nonempty feasible decision-making relevant to the corresponding decision-making variable is characterized by an invariable lower bound, \( {x}_1^{\mathrm{min}} \), and an invariable upper bound, \( {x}_1^{\mathrm{max}} \), which are defined in the first stage of the TMS-MSA—definition of the optimization problem and its parameters—and unchanged in all improvisations/iterations of the SIS. The values for other continuous decision-making variables are also chosen in the same way. This principle of the random selection rule was previously used in sub-stage 2.2 for initialization of the MM.

Implementing the first principle of the random selection rule to specify the value of the continuous decision-making variable v from a new melody vector played by player p, \( {x}_{m,p,v}^{\mathrm{new}} \), is performed by using Eq. (3.23):

$$ {x}_{m,p,v}^{\mathrm{new}}={x}_v^{\mathrm{min}}+\mathrm{U}\left(0,1\right)\cdot \left({x}_v^{\mathrm{max}}-{x}_v^{\mathrm{min}}\right);\kern1em \forall \left\{m\in {\Psi}^{\left(\mathrm{MNI}\hbox{-} \mathrm{SIS}\right)},p\in {\Psi}^{\mathrm{PN}},v\in {\Psi}^{\mathrm{NCDV}}\right\} $$
(3.23)

Equation (3.23) tells us that, in the first principle of the random selection rule, player p determines the value of the continuous decision-making variable v of the new melody vector by drawing from the entire space of the nonempty feasible decision-making relevant to this decision-making variable, which is specified in the first stage of the TMS-MSA and remains unchanged in all improvisations/iterations of the SIS.

Conversely, the value of the first continuous decision-making variable from the new melody vector played by player p, \( {x}_{m,p,1}^{\mathrm{new}} \), is randomly chosen from the entire space of the nonempty feasible decision-making pertaining to this continuous decision-making variable with the probability of 1 − PMCR, provided that the first principle is not activated or the second principle is activated. Here, the entire space of the nonempty feasible decision-making associated with the corresponding decision-making variable is determined by a variable lower bound, \( {x}_{m,1}^{\mathrm{min}} \), and a variable upper bound, \( {x}_{m,1}^{\mathrm{max}} \), which are dynamically changed and updated in each improvisation/iteration of the PGIS. The values for other continuous decision-making variables are also chosen in the same way.

Implementing the second principle of the random selection rule to specify the value of the continuous decision-making variable v from a new melody vector played by player p, \( {x}_{m,p,v}^{\mathrm{new}} \), is performed by using Eq. (3.24):

$$ {x}_{m,p,v}^{\mathrm{new}}={x}_{m,v}^{\mathrm{min}}+\mathrm{U}\left(0,1\right)\cdot \left({x}_{m,v}^{\mathrm{max}}-{x}_{m,v}^{\mathrm{min}}\right);\kern1em \forall \left\{m\in {\Psi}^{\left(\mathrm{MNI}\hbox{-} \mathrm{PGIS}\right)},p\in {\Psi}^{\mathrm{PN}},v\in {\Psi}^{\mathrm{NCDV}}\right\} $$
(3.24)

Equation (3.24) tells us that, in the second principle of the random selection rule, player p specifies the value of the continuous decision-making variable v of the new melody vector by using only the nonempty feasible decision-making space related to this decision-making variable, which is updated in each improvisation/iteration of the PGIS. Applying the random selection rule is also accomplished for other players in the same way.

As a general consequence, the probability that the value of the continuous decision-making variable v from a new melody vector played by player p, \( {x}_{m,p,v}^{\mathrm{new}} \), is obtained by applying the player memory consideration, pitch adjustment, and random selection rules equals PMCR × (1 − PARm), PMCR × PARm, and 1 − PMCR, respectively. In order to provide a more favorable convergence, as well as a more significant increase in the diversity of solution vectors for the TMS-MSA, each player in the musical group employs the updated values of the PARm and BWm parameters in the improvisation process of its melody vector. The PARm and BWm parameters are updated in each improvisation/iteration of the SIS and the PGIS by using Eqs. (3.25) and (3.26), respectively:

$$ {BW}_m={\mathrm{BW}}^{\mathrm{max}}\cdot \exp \left(\frac{\ln \left({\mathrm{BW}}^{\mathrm{min}}/{\mathrm{BW}}^{\mathrm{max}}\right)}{\left(\mathrm{MNI}\hbox{-} \mathrm{SIS}\right)+\left(\mathrm{MNI}\hbox{-} \mathrm{PGIS}\right)}\cdot m\right);\kern1em \forall \left\{m\in {\Psi}^{\left(\mathrm{MNI}\hbox{-} \mathrm{SIS}\right)+\left(\mathrm{MNI}\hbox{-} \mathrm{PGIS}\right)}\right\} $$
(3.25)
$$ {PAR}_m={\mathrm{PAR}}^{\mathrm{min}}+\left(\frac{{\mathrm{PAR}}^{\mathrm{max}}-{\mathrm{PAR}}^{\mathrm{min}}}{\left(\mathrm{MNI}\hbox{-} \mathrm{SIS}\right)+\left(\mathrm{MNI}\hbox{-} \mathrm{PGIS}\right)}\right)\cdot m;\kern1em \forall \left\{m\in {\Psi}^{\left(\mathrm{MNI}\hbox{-} \mathrm{SIS}\right)+\left(\mathrm{MNI}\hbox{-} \mathrm{PGIS}\right)}\right\} $$
(3.26)

The update process of the PARm and BWm parameters by Eqs. (3.25) and (3.26) is virtually the same as the update process used in the SS-IHSA. Hence, further explanations related to the update process of these parameters are available in Sect. 3.5 of this chapter.
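The dynamic updates of Eqs. (3.25) and (3.26) can be sketched as follows, with BWm decaying exponentially from BWmax toward BWmin (as in the SS-IHSA) and PARm growing linearly from PARmin toward PARmax over the MNI-SIS + MNI-PGIS improvisations; the function name is illustrative:

```python
import math

def bw_par(m, MNI_total, BW_min, BW_max, PAR_min, PAR_max):
    """Eq. (3.25): exponential BW decay; Eq. (3.26): linear PAR growth."""
    BW_m = BW_max * math.exp(math.log(BW_min / BW_max) / MNI_total * m)
    PAR_m = PAR_min + (PAR_max - PAR_min) / MNI_total * m
    return BW_m, PAR_m
```

At m = 0 the parameters start at BWmax and PARmin; at m = MNI_total they reach BWmin and PARmax.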

Table 3.21 gives the pseudocode pertaining to the improvisation of a new melody vector by each player in the musical group of the TMS-MSA. The designed pseudocode in different stages and sub-stages of the TMS-MSA is located in a regular sequence and forms the performance-driven architecture of this algorithm.

Table 3.21 Pseudocode pertaining to improvisation of a new melody vector by each player in the musical group of the TMS-MSA
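A rough Python analog of the per-variable improvisation in Table 3.21 is sketched below. It combines the three AIP rules with the probabilities PMCR × (1 − PARm), PMCR × PARm, and 1 − PMCR, assumes minimization with ascending-sorted memories (row 0 holds the best melody), and uses illustrative names throughout:

```python
import random

def improvise_variable(pm, v, PMCR, PAR_m, BW_m, lo, hi, PMS):
    """One decision-making variable of a new melody vector for one player."""
    if random.random() < PMCR:
        # memory consideration: random stored value perturbed within BW_m
        r = int(random.random() * PMS)
        x = pm[r][v] + random.choice([-1.0, 1.0]) * random.random() * BW_m
        if random.random() < PAR_m:
            x = pm[0][v]          # pitch adjustment, Eq. (3.22): best melody's value
        return x
    # random selection: draw from the current feasible range (Eqs. (3.23)/(3.24))
    return lo[v] + random.random() * (hi[v] - lo[v])
```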

Table 3.22 also gives the pseudocode associated with the performance-driven architecture of the TMS-MSA. Here, sub-stages 3.3 and 4.4—the check process of the stopping criterion of the SIS and PGIS—are defined by the first and second WHILE loops in the pseudocode pertaining to the performance-driven architecture of the TMS-MSA (see Table 3.22).

Table 3.22 Pseudocode associated with the performance-driven architecture of the TMS-MSA

7 Conclusions

In this chapter, the music-inspired meta-heuristic optimization algorithms were reviewed from past to present, with a focus on the SS-HSA, SS-IHSA, and TMS-MSA. First, a brief review of the definition of music, its history, and the interdependencies of phenomena and concepts of music and the optimization problem was addressed. Second, the fundamental principles of the SS-HSA and its performance-driven architecture were rigorously described. In addition, a structural classification for the enhanced versions of the SS-HSA was provided. In this regard, the basic differences between the SS-IHSA, as a well-known enhanced version of the SS-HSA, and the SS-HSA were carefully examined in detail. Third, the fundamental principles of the TMS-MSA and its performance-driven architecture were meticulously expressed. Given related literature, and after presentation of the different versions of the music-inspired meta-heuristic optimization algorithms and their implementation on optimization problems in different branches of the engineering sciences (e.g., electrical, civil, computer, mechanical, and aerospace), it was observed that these optimization techniques may represent a reasonable and applicable method for solving complicated, real-world, large-scale, non-convex, non-smooth optimization problems having a nonlinear, mixed-integer nature with big data. This is due to the fact that the music-inspired meta-heuristic optimization algorithms have a distinctive and flexible architecture for facing optimization problems compared with other optimization techniques. It can be seen, then, that the willingness of specialists and researchers in different branches of the engineering sciences to employ music-inspired meta-heuristic optimization algorithms with the aim of overcoming difficulties in solving complicated, real-world, large-scale, non-convex, non-smooth optimization problems over recent years has been appreciably increasing.

As a result, this chapter can serve as an aid to using the music-inspired meta-heuristic optimization algorithms. Moreover, this chapter can effectively provide a precious background for explaining innovative versions of music-inspired meta-heuristic optimization algorithms, which will be discussed in the next chapter.