1 Introduction

Designing a DC-DC converter manually is often a time-consuming and costly process. This has led to the application of optimization methods to ease the burden of the design process. DC-DC buck converters have improved in performance over the years, with recent advances largely driven by progress in energy-storage elements, which have shrunk in size and exhibit lower associated losses. Power electronic switches have also been refined in terms of higher blocking voltage, lower on-state resistance, and the ability to withstand transient stresses. However, selecting optimal converter design parameters that satisfy the dimensional constraints and yield better efficiency is still an optimization problem [1].

The available literature suggests different methods to optimally select the converter design parameters. They vary in the objective functions and constraints considered, as well as in the methodologies used to implement the optimization process. Balachandran and Lee [2] discuss a practical optimization approach that selects a working design meeting the power-circuit performance requirements while concurrently optimizing the weight or total circuit losses, enabling a cost-effective design. In addition, their computer-aided method gives the designer options for trading off weight against efficiency, investigating the impact of component characteristics and converter requirements on the desired design, and finding the optimal system power configuration. A design approach for a monolithic DC-DC buck converter is presented in [3], where the decision variables include the voltage swing of the MOSFET gate driver, the switching frequency, and the minimization of the current ripple. Electromagnetic interference (EMI) minimization has also been addressed along with converter efficiency [4, 5], while the authors in [6] use Particle Swarm Optimization (PSO) for the control of DC-DC converters. In most cases, the optimization approach involves only one parameter. Other methods divide the process into several stages, considering only one or two parameters at each stage. This does not allow a larger set of constraints to be included in the optimization problem and hence cannot accurately account for all possible constraint selections in the design procedure.

Seeman and Sanders [7] applied the Lagrangian function to optimize a switched-capacitor converter, while the augmented Lagrangian method for the optimization of a half-bridge DC-DC converter was discussed by Wu et al. [8]. Quadratic programming [9] has also found application in the design of DC-DC converters. The above-mentioned methods all share the drawback of converging to a local optimum that depends on the initial starting point, and hence do not guarantee the global optimum. To ensure globally optimal results, the current work designs an optimal DC-DC buck converter using soft-computing techniques. A state-space dynamic model of the converter is considered along with the design parameters. The converter is optimized using the Particle Swarm Optimization (PSO), Simulated Annealing (SA) and Firefly Algorithm (FA) optimization algorithms, and a comparison of the performance of the algorithms is also presented.

The following sections present a brief introduction to the algorithms considered, followed by the state-space model of the DC-DC buck converter used to optimally select the design parameters for efficient performance. The optimization process is then presented, followed by the results and discussion. The conclusion and future work section summarizes the work carried out and draws conclusions.

2 Optimization Algorithms

The following sub-sections briefly cover the theory of the three optimization algorithms, namely PSO, SA and FA. The underlying governing principles are discussed, which form the basis for the optimal design of the DC-DC buck converter parameters in the current work.

2.1 Particle Swarm Optimization (PSO)

Kennedy and Eberhart introduced Particle Swarm Optimization in 1995 [10]. The algorithm mimics the natural behavior of bird flocks and fish schools, which make use of collective or swarm intelligence. PSO is adopted for optimization problems with multiple possible solutions in order to arrive at the desired optimum. The algorithm moves the particles of the swarm through a well-defined, bounded search space to locate the collective best position. In each iteration, the position and velocity of every particle are evaluated and updated to arrive at the global optimum. The position and velocity updates for each particle of the swarm are given as [11]:

$$ x_{i}^{k + 1} = x_{i}^{k} + v_{i}^{k + 1} $$
(1)
$$ v_{i}^{k + 1} = wv_{i}^{k} + c_{1} r_{1} (P_{best,i} - x_{i}^{k}) + c_{2} r_{2} (g_{best} - x_{i}^{k}) $$
(2)

where \( v_{i}^{k + 1} \) represents the velocity of the ith particle in the (k + 1)th iteration, \( P_{best,i} \) is the personal best position found by the particle, and \( g_{best} \) is the global best position found by the swarm. The inertia weight is w, the learning factors are c1 and c2, and r1, r2 are random values in the range (0, 1) [11]. Figure 1 gives a representation of the movement of the particles in the swarm.

Fig. 1. Movement of particles in a swarm in the PSO algorithm [11]
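To make the update rules of Eqs. (1) and (2) concrete, a minimal Python sketch of the PSO loop is given below. The objective function, bounds and parameter values in the example are illustrative assumptions, not the settings used later in this paper.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, n_iter=100,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize `objective` over the box `bounds` using the updates of Eqs. (1)-(2)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)

    x = rng.uniform(lo, hi, (n_particles, dim))       # particle positions
    v = np.zeros((n_particles, dim))                  # particle velocities
    p_best = x.copy()                                 # personal best positions
    p_cost = np.array([objective(p) for p in x])      # personal best costs
    g_best = p_best[p_cost.argmin()].copy()           # global best position

    for _ in range(n_iter):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)   # Eq. (2)
        x = np.clip(x + v, lo, hi)                                    # Eq. (1), kept in bounds
        cost = np.array([objective(p) for p in x])
        improved = cost < p_cost
        p_best[improved], p_cost[improved] = x[improved], cost[improved]
        g_best = p_best[p_cost.argmin()].copy()
    return g_best, p_cost.min()

# Example usage on a simple quadratic (illustrative only):
best_x, best_f = pso(lambda z: np.sum(z**2), np.array([[-5.0, 5.0]] * 3))
```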

2.2 Simulated Annealing (SA)

Simulated Annealing (SA) is a random search technique used to solve global optimization problems. The algorithm imitates the annealing process in which a metal cools and freezes into a crystalline state of minimum energy, forming large crystals with few defects. The application of SA to optimization was initiated by Kirkpatrick, Gelatt and Vecchi in 1983 [12]. Unlike gradient-based and deterministic search methods, which can become trapped in local minima, SA is able to escape local minima.

SA performs a random search in the form of a Markov chain, accepting improvements in the objective function while also retaining some changes that are not ideal. For example, in a minimization problem we ideally want to keep the changes in the iterative process that decrease the value of the objective function f. However, the algorithm also accepts solutions that increase the value of f, with a probability p called the transition probability, defined as:

$$ p = e^{{ - \frac{\Delta E}{{k_{B} T}}}} $$
(3)

where kB is the Boltzmann constant; for simplicity we take kB = 1. T is the temperature that regulates the annealing process, and ΔE is the change in energy level. The transition probability is based on the Boltzmann distribution, and ΔE is linked to Δf, the change in the objective function, by:

$$ \Delta E = \gamma \Delta f $$
(4)

where γ is a real constant, assumed to be unity for ease of computation. The probability thus becomes

$$ p(\Delta f,T) = e^{{ - \frac{\Delta f}{T}}} $$
(5)

A random number r in (0, 1) is usually used as the acceptance threshold: a non-improving change is accepted if p > r and rejected otherwise.
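A minimal Python sketch of the SA loop, using the acceptance rule of Eq. (5) with a simple geometric cooling schedule, is given below; the neighbourhood step, initial temperature and cooling rate are illustrative assumptions.

```python
import numpy as np

def simulated_annealing(objective, x0, step=0.1, t0=1.0, cooling=0.95,
                        n_iter=1000, seed=0):
    """Minimize `objective` starting from x0 using the acceptance rule of Eq. (5)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = objective(x)
    best_x, best_f = x.copy(), fx
    T = t0
    for _ in range(n_iter):
        x_new = x + step * rng.standard_normal(x.shape)   # random neighbour (Markov-chain move)
        f_new = objective(x_new)
        df = f_new - fx
        # Always accept improvements; accept worse moves with p = exp(-df / T), Eq. (5)
        if df < 0 or rng.random() < np.exp(-df / T):
            x, fx = x_new, f_new
            if fx < best_f:
                best_x, best_f = x.copy(), fx
        T *= cooling                                      # gradual temperature reduction
    return best_x, best_f
```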

2.3 Firefly Algorithm

Developed by Yang [13], the Firefly Algorithm (FA) exploits the attractiveness of fireflies, determined by their brightness. Light intensity decays exponentially with absorption and falls off with distance according to the inverse-square law. The algorithm models this variation of light intensity, or attractiveness, as a non-linear term. The FA update for the solution vector xi can be written as:

$$ x_{i}^{t + 1} = x_{i}^{t} + \beta_{0} e^{{ - \gamma r_{ij}^{2} }} (x_{j}^{t} - x_{i}^{t} ) + \alpha \varepsilon_{i}^{t} $$
(6)

where α is a scaling factor that controls the step size of the randomized walk, γ is the parameter that controls the visibility of the fireflies and hence the search mode, β0 is the attractiveness between fireflies at zero distance, and ε_i^t is a vector of random numbers at iteration t. rij denotes the distance between firefly i and firefly j, expressed in terms of their Cartesian coordinates. To speed up the overall convergence of the algorithm, the degree of randomness is usually reduced gradually using:

$$ \alpha = \alpha_{0} \theta^{t} $$
(7)
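A minimal Python sketch of the firefly updates of Eqs. (6) and (7) is shown below; the parameter values for β0, γ, α0 and θ are illustrative assumptions.

```python
import numpy as np

def firefly(objective, bounds, n_fireflies=25, n_iter=100,
            beta0=1.0, gamma=1.0, alpha0=0.25, theta=0.97, seed=0):
    """Minimize `objective` using the firefly update of Eqs. (6)-(7)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    x = rng.uniform(lo, hi, (n_fireflies, dim))
    intensity = np.array([objective(p) for p in x])    # lower cost = brighter firefly

    alpha = alpha0
    for t in range(n_iter):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if intensity[j] < intensity[i]:         # firefly i moves toward brighter j
                    r2 = np.sum((x[i] - x[j]) ** 2)     # squared Cartesian distance r_ij^2
                    beta = beta0 * np.exp(-gamma * r2)
                    x[i] = np.clip(x[i] + beta * (x[j] - x[i])
                                   + alpha * (rng.random(dim) - 0.5), lo, hi)   # Eq. (6)
                    intensity[i] = objective(x[i])
        alpha = alpha0 * theta ** (t + 1)               # Eq. (7): shrink randomness over time
    best = intensity.argmin()
    return x[best], intensity[best]
```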

3 State Space Modeling of DC-DC Buck Converter

The DC-DC buck converter performs well in power-point tracking of photovoltaic (PV) systems under low irradiation and temperature [14, 15]. This section presents the state-space dynamic model of the DC-DC buck converter, considering the system losses, the current and voltage ripples, and the other constraints intrinsic to the optimal converter design process. A buck converter circuit is shown in Fig. 2.

Fig. 2. Electrical circuit of a buck converter [14]

Using a state vector approach for the converter we have [16, 17]:

$$ x = \left( {\begin{array}{*{20}c} {i_{1} } \\ {v_{0} } \\ \end{array} } \right) $$
(8)
$$ \frac{di_{1}}{dt} = \frac{ - v_{0} }{L} + \frac{V_{i} }{L}u $$
(9)
$$ \frac{dv_{0}}{dt} = \frac{i_{1}}{C} - \frac{v_{0}}{RC} $$
(10)

where i1 represents the inductor current and v0 the output voltage. The equations depend on the control signal u, which drives the switching device to the ON (u = 1) or OFF (u = 0) state. The inductor, capacitor and load resistance values are given by L, C and R, and Vi represents the input voltage. For the design of the converter, the current ripple, the voltage ripple, the inductor size required for continuous conduction mode (CCM) and the constraint imposed by the bandwidth (BW) must be determined along with the state-space dynamic model. For the buck converter we have:

$$ \Delta i = \frac{{V_{0} }}{{Lf_{s} }}(1 - D) $$
(11)
$$ \Delta v_{o} = \frac{{V_{0} }}{{8Lf_{s}^{2} C}}(1 - D) $$
(12)
$$ Lf_{s} \ge \frac{{V_{0} }}{{2I_{0} }}(1 - D) $$
(13)
$$ BW > 2\pi (10\% f_{s} ) $$
(14)

where V0 is the output voltage and fs = 1/Ts is the switching frequency. The power calculations are important for the optimal design of the converter. They include the conduction losses due to parasitic resistances and the switching losses caused by parasitic capacitances. The loss components and the total power loss are given as:

$$ P_{Q1} = P_{ONQ1} + P_{SWQ1} $$
(15)
$$ P_{ONQ1} = \left( I_{0}^{2} + \frac{\Delta i_{1}^{2} }{12} \right) D R_{DS} $$
(16)
$$ P_{SWQ1} = V_{i} I_{0} (T_{swON} + T_{swOFF} )f_{s} $$
(17)
$$ P_{IND} = \left( I_{0}^{2} + \frac{\Delta i_{1}^{2} }{12} \right) R_{L} $$
(18)
$$ P_{CAP} = (\frac{{\Delta i_{1}^{2} }}{12})R_{C} $$
(19)
$$ P_{BUCK} = P_{ONQ1} + P_{SWQ1} + P_{IND} + P_{CAP} $$
(20)

The efficiency of the converter is given by:

$$ \eta = \frac{{P_{Load} }}{{P_{Load} + P_{Buck} }} $$
(21)

TswON and TswOFF represent the turn-on and turn-off transition times of the device, I0 is the average output current, and RDS is the on-state resistance of the MOSFET. RL represents the loss component (series resistance) of the inductor, while RC represents the equivalent series resistance of the capacitor. PLoad gives the average power delivered to the load.
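A minimal Python sketch of the design equations (11)–(21) is given below. The helper name `buck_losses` and the assumption PLoad = V0·I0 are illustrative, not taken from the paper.

```python
def buck_losses(L, C, fs, D, Vi, V0, I0, R_DS, R_L, R_C, T_on, T_off):
    """Evaluate ripple, losses and efficiency of a buck converter, Eqs. (11)-(21)."""
    d_i = V0 * (1 - D) / (L * fs)                  # inductor current ripple, Eq. (11)
    d_v = V0 * (1 - D) / (8 * L * C * fs ** 2)     # output voltage ripple, Eq. (12)
    ccm_ok = L * fs >= V0 * (1 - D) / (2 * I0)     # CCM condition, Eq. (13)

    p_on = (I0 ** 2 + d_i ** 2 / 12) * D * R_DS    # MOSFET conduction loss, Eq. (16)
    p_sw = Vi * I0 * (T_on + T_off) * fs           # MOSFET switching loss, Eq. (17)
    p_ind = (I0 ** 2 + d_i ** 2 / 12) * R_L        # inductor loss, Eq. (18)
    p_cap = (d_i ** 2 / 12) * R_C                  # capacitor ESR loss, Eq. (19)
    p_buck = p_on + p_sw + p_ind + p_cap           # total loss, Eq. (20)

    p_load = V0 * I0                               # assumed average load power
    eta = p_load / (p_load + p_buck)               # efficiency, Eq. (21)
    return d_i, d_v, ccm_ok, p_buck, eta
```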

4 Optimal Design of DC-DC Buck Converter

To set up the optimization problem, we need to frame the objective function subject to certain constraints. The current work designs the converter optimally by reducing the total power loss in the converter, i.e. by minimizing PBUCK, so that the efficiency of the converter is maximized. The constraints considered are the maximum and minimum values of the design variables, the admissible ripples, the CCM criterion (Eqs. 11–13) and the BW (Eq. 14). Mathematically this can be represented as:

Minimize PBUCK subject to

$$ L_{\min} \le L \le L_{\max} $$
$$ C_{\min} \le C \le C_{\max} $$
$$ f_{\min} \le f_{s} \le f_{\max} $$
$$ D_{\min} \le D \le D_{\max} $$
$$ \Delta i_{1} \le a\% \, I_{0} $$
$$ \Delta v_{0} \le b\% \, V_{0} $$

where a and b limit the current and voltage ripple as a percentage of the average output current and voltage, respectively. The parameters considered for the converter design are presented in Table 1, including the fixed converter parameters as well as the ranges of the design variables considered in the optimization process.
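To illustrate how this constrained minimization of PBUCK can be fed to the population-based algorithms of Sect. 2, a simple penalty formulation is sketched below. The numerical limits, penalty weights and the reuse of the hypothetical `buck_losses` helper from Sect. 3 are assumptions; the box constraints on L, C, fs and D are handled directly by the search bounds of the algorithms.

```python
def objective(params, Vi=24.0, V0=12.0, I0=2.0, a=0.10, b=0.01,
              R_DS=0.05, R_L=0.03, R_C=0.02, T_on=20e-9, T_off=20e-9):
    """Penalized total power loss of the buck converter (placeholder limits)."""
    L, C, fs, D = params
    d_i, d_v, ccm_ok, p_buck, _ = buck_losses(L, C, fs, D, Vi, V0, I0,
                                              R_DS, R_L, R_C, T_on, T_off)
    penalty = 0.0
    if d_i > a * I0:          # current-ripple constraint violated
        penalty += 1e3 * (d_i - a * I0)
    if d_v > b * V0:          # voltage-ripple constraint violated
        penalty += 1e3 * (d_v - b * V0)
    if not ccm_ok:            # continuous-conduction-mode constraint violated
        penalty += 1e3
    return p_buck + penalty
```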

Table 1. Design parameters considered for optimization.

5 Results and Discussion

To optimize the buck converter, the optimization algorithms are formulated as m-files in the MATLAB software environment. The algorithms are implemented on a PC with a 64-bit Windows 8 operating system, an Intel® Core i5 processor and 4.00 GB of RAM. As the algorithms are inherently random in nature, each algorithm is run for 100 iterations. The result of each iteration is recorded, and the best value, average and standard deviation obtained for each algorithm are summarized in Table 2.
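A short driver of the following form can collect such run statistics (a Python sketch rather than the MATLAB m-files used here); the `optimizer` callable is assumed to follow the PSO/FA signatures sketched in Sect. 2, while SA would instead take an initial point.

```python
import time
import numpy as np

def benchmark(optimizer, objective, bounds, n_runs=100):
    """Run an optimizer repeatedly and report best, mean, standard deviation and time."""
    costs = []
    t0 = time.perf_counter()
    for run in range(n_runs):
        _, f_best = optimizer(objective, bounds, seed=run)   # independent random seed per run
        costs.append(f_best)
    elapsed = time.perf_counter() - t0
    return min(costs), float(np.mean(costs)), float(np.std(costs)), elapsed
```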

Table 2. Optimized design parameters using PSO, SA and FA

Table 2 shows the results of the optimization process for the PSO, SA and FA algorithms. The execution times for 100 iterations were 1.56 s, 26.86 s and 1.88 s for PSO, SA and FA, respectively.

From the optimization results it is evident that the best design of an optimized buck converter, with the most efficient performance and minimum losses, is obtained by the PSO algorithm. The PSO algorithm also takes the least computational time to execute the 100 iterations, and the variation in the average power loss is 15% and 16% when compared to FA and SA, respectively. The variation in average efficiency is within 1% of both SA and FA. Thus the PSO algorithm outperforms both SA and FA in terms of computational time and in arriving at the best-optimized design configuration with minimum power loss.

6 Conclusion and Future Prospects

This paper discusses the problem of the optimal design of a DC-DC buck converter. A comparison is drawn between three popular optimization algorithms, namely PSO, SA and FA, with respect to their performance on the design optimization problem. It is found that PSO outperforms both SA and FA in arriving at the best converter design at minimal power loss. Additionally, the average results of the PSO algorithm give better design parameters at minimum losses, an improvement of 15% and 16% over the average results obtained using SA and FA, respectively.

In recent times, many new optimization algorithms have been developed. The results of the current work can be compared with algorithms such as Biogeography-Based Optimization (BBO), the Gravitational Search Algorithm (GSA), the Cuckoo Search algorithm (CS) and the Bat Algorithm (BA), which have shown good results in solving design optimization problems. Additionally, to establish statistical significance, the Wilcoxon rank-sum test can be performed to compare the algorithms.