
Entrainment Limit of Weakly Forced Nonlinear Oscillators

Chapter in Mathematical Approaches to Biological Systems

Abstract

Nonlinear oscillators exhibit entrainment (injection locking) to external periodic forcings. Despite the long history of entrainment studies and the wide utility of injection locking to date, it has remained an open problem to establish ideally efficient injection locking in a given oscillator. In this chapter, I identify a universal mechanism governing the entrainment limit under weak forcings, which clarifies how and why ideal injection locking is realized. Using this mechanism, I prove how the maximization of the entrainment range, or of the stability of entrainment, is realized and can be designed systematically for general forcings, including pulse trains, and I establish a fundamental limit of general m:n entrainment. These results provide a design principle for efficient injection locking that is readily applied to experimental systems in a real environment; this applicability is tested here on the Hodgkin–Huxley neuron model as an example.


Notes

  1. Besides the case for p = 2, the case for p > 2 can be useful for engineering nonlinear systems with generalized energy functions.

  2. The constraints (4.2) are generalized as \(\frac{1}{2\pi }\langle f(\theta )\rangle = C\) and \(\Vert f - C\Vert _{p} = M\), for which we define \(\bar{f}(\theta ) \equiv f(\theta ) - C\) and \(\bar{\omega } \equiv \omega + Cz_{0}\), with \(z_{0}\) being the dc part of Z(θ): \(z_{0} = \frac{1}{2\pi }\int _{-\pi }^{\pi }Z(m\theta )\,d\theta \). Then, from Eq. (4.1), we obtain \(\frac{d\phi }{dt} =\bar{\omega } -\frac{m}{n}\Omega +\bar{\varGamma }_{m/n}(\phi ) =\bar{\Delta \omega } +\bar{\varGamma }_{m/n}(\phi )\), where \(\bar{\Delta \omega } \equiv \bar{\omega } -\frac{m}{n}\Omega \) and \(\bar{\varGamma }_{m/n}(\phi ) \equiv \frac{1}{2\pi }\langle Z(m\theta +\phi )\bar{f}(n\theta )\rangle \). Thus, by replacing f(θ) with \(\bar{f}(\theta )\), the argument under the charge–balance constraint in Eq. (4.2) can be repeated for this equation, since it has the same form as Eq. (4.1) and \(\bar{f}(\theta )\) satisfies the charge–balance constraint \(\frac{1}{2\pi }\langle \bar{f}(\theta )\rangle = 0\). In this situation, the other constraint \(\Vert f\Vert _{p} = M\) becomes \(\Vert f - C\Vert _{p} = M\).

  3. These optimal forcings \(f_{\mathrm{opt},\ p}\) are obtained in \(L^{p}(S)\). This implies that these optima belong to a broader class of functions than the one usually considered in the calculus of variations. More precisely, in the calculus of variations the function space is usually assumed to consist of piecewise-smooth functions (or absolutely continuous functions, at best), which is a subset of \(L^{p}\). In this sense, the result here is stronger than the one obtained by the conventional calculus of variations.

  4. The reason we represent \(f_{\mathrm{opt},\ 1}\) in such an asymptotic form (rather than using a formal delta function) is that \(f_{\mathrm{opt},\ 1}\) belongs to \(L^{1}(S)\) in the context of Hölder's inequality, and that what counts here for the optimization is the resulting \(\varGamma _{0}(\phi )\) (rather than the form of \(f_{\mathrm{opt},\ 1}\) itself).

  5. We note that \(\Delta \phi _{\mathrm{max}}\) is close to, but slightly different from, the phase difference \(\Delta \phi _{Z}\) between the maximum and the minimum of Z(θ): \(\Delta \phi _{\mathrm{max}} \approx -1.360\) in Fig. 4.2a and \(\Delta \phi _{Z} \approx -1.342\) in Fig. 4.2c.

  6. Since we have assumed that Z is twice differentiable for 1 < p ≤ ∞, here we further assume that f (\(\in L^{p}(S)\)) is piecewise continuous, which implies that \(\langle Zf\rangle \) is piecewise smooth and can be obtained by integrating term by term.

  7. The construction is as follows. Starting from m copies of the optimal forcing with prime period \(T_{0}\) for 1:1 entrainment, add a small perturbation such that the m copies of the forcing become a single forcing with prime period \(mT_{0}\) while still satisfying the constraints (4.2). The resulting locking range becomes arbitrarily close to the ideal one (which is realized only in 1:1 entrainment) as the perturbation becomes smaller, since the associated \(\varGamma _{m/1}\) in Eq. (4.1) becomes arbitrarily close to the \(\varGamma _{1/1}\) of the best forcing for 1:1 entrainment. A numerical sketch of this construction is given right after these notes.
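A minimal numerical sketch of the construction described in note 7 (assuming a sampled 1:1-optimal forcing f0 over one period; the sinusoidal perturbation and the normalized \(L^{p}\) norm used in the last line are illustrative conventions, not the ones fixed in the main text):

    import numpy as np

    def m_copy_forcing(f0, m, eps=1e-3, p=2, M=1.0):
        # Tile m copies of a forcing sampled over one period T0, then add a small
        # zero-mean perturbation of period m*T0 so that the prime period becomes m*T0,
        # while re-imposing the charge-balance and norm constraints of Eq. (4.2).
        N = len(f0)
        f = np.tile(np.asarray(f0, dtype=float), m)      # m copies, still T0-periodic
        t = np.arange(m * N) / (m * N)                   # the long period rescaled to [0, 1)
        f = f + eps * np.sin(2 * np.pi * t)              # breaks T0-periodicity, keeps zero mean
        f -= f.mean()                                    # charge-balance constraint <f> = 0
        f *= M / (np.mean(np.abs(f) ** p) ** (1.0 / p))  # set the (normalized) L^p norm to M
        return f

As eps → 0, the returned forcing approaches the m-fold copy of the 1:1-optimal waveform, and the associated \(\varGamma _{m/1}\) approaches the \(\varGamma _{1/1}\) of that waveform, as stated in the note.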

References

  1. Winfree AT (2001) The geometry of biological time, 2nd edn. Springer, New York
  2. Kuramoto Y (1984) Chemical oscillations, waves and turbulence. Springer, Berlin
  3. Pikovsky AS, Rosenblum MG, Kurths J (2001) Synchronization: a universal concept in nonlinear sciences. Cambridge University Press, Cambridge
  4. Hoppensteadt FC, Izhikevich EM (1997) Weakly connected neural networks. Springer, New York
  5. Ermentrout GB, Terman DH (2010) Mathematical foundations of neuroscience. Springer, New York
  6. Kori H, Kawamura Y, Nakao H, Arai K, Kuramoto Y (2009) Collective-phase description of coupled oscillators with general network structure. Phys Rev E 80:036207
  7. Kawamura Y, Nakao H, Kuramoto Y (2011) Collective phase description of globally coupled excitable elements. Phys Rev E 84:046211
  8. Nakao H, Yanagita T, Kawamura Y (2012) Phase description of stable limit-cycle solutions in reaction-diffusion systems. Procedia IUTAM 5:227–233
  9. Sato M, Hubbard BE, Sievers AJ, Ilic B, Czaplewski DA, Craighead HG (2003) Observation of locked intrinsic localized vibrational modes in a micromechanical oscillator array. Phys Rev Lett 90:044102
  10. Feng J, Tuckwell HC (2003) Optimal control of neuronal activity. Phys Rev Lett 91:018101
  11. Forger D, Paydarfar D (2008) Starting, stopping, and resetting biological oscillators: in search of optimum perturbations. J Theor Biol 230:521–532
  12. Lebiedz D, Sager S, Bock HG, Lebiedz P (2005) Annihilation of limit-cycle oscillations by identification of critical perturbing stimuli via mixed-integer optimal control. Phys Rev Lett 95:108303
  13. Gintautas V, Hübler AW (2008) Resonant forcing of nonlinear systems of differential equations. Chaos 18:033118
  14. Bagheri N, Stelling J, Doyle FJ (2008) Circadian phase resetting via single and multiple control targets. PLoS Comput Biol 4:e1000104
  15. Gat O, Kielpinski D (2013) Frequency comb injection locking of mode locked lasers. New J Phys 15:033040
  16. Harada T, Tanaka HA, Hankins MJ, Kiss IZ (2010) Optimal waveform for the entrainment of a weakly forced oscillator. Phys Rev Lett 105:088301
  17. Zlotnik A, Chen Y, Kiss IZ, Tanaka HA, Li JS (2013) Optimal waveform for fast entrainment of weakly forced nonlinear oscillators. Phys Rev Lett 111:024102
  18. Takano K, Motoyoshi M, Fujishima M (2007) 4.8 GHz CMOS frequency multiplier with subharmonic pulse-injection locking. In: Proceedings of IEEE Asian solid-state circuits conference, Jeju, pp 336–339
  19. Feng XL, White CJ, Hajimiri A, Roukes ML (2008) A self-sustaining ultrahigh-frequency nanoelectromechanical oscillator. Nat Nanotechnol 3:342–346
  20. Jackson JC, Windmill JF, Pook VG, Robert D (2009) Synchrony through twice-frequency forcing for sensitive and selective auditory processing. Proc Natl Acad Sci USA 106:10177–10182
  21. Zlotnik A, Li JS (2014) Optimal subharmonic entrainment of weakly forced nonlinear oscillators. SIAM J Appl Dyn Syst 13:1654–1693
  22. Moehlis J, Shea-Brown E, Rabitz H (2006) Optimal inputs for phase models of spiking neurons. ASME J Comput Nonlinear Dyn 1:358–367
  23. Nabi A, Moehlis J (2012) Time optimal control of spiking neurons. J Math Biol 64:981–1004
  24. Kirk DE (1970) Optimal control theory: an introduction. Prentice-Hall, New Jersey
  25. Rudin W (1987) Real and complex analysis, 3rd edn. McGraw-Hill, New York
  26. Matheny MH, Grau M, Villanueva LG, Karabalin RB, Cross MC, Roukes ML (2014) Phase synchronization of two anharmonic nanomechanical oscillators. Phys Rev Lett 112:014101
  27. Yoshimura K, Arai K (2008) Phase reduction of stochastic limit cycle oscillators. Phys Rev Lett 101:154101
  28. Goldobin DS, Teramae JN, Nakao H, Ermentrout GB (2010) Dynamics of limit-cycle oscillators subject to general noise. Phys Rev Lett 105:154101
  29. Tsallis C (2009) Introduction to nonextensive statistical mechanics. Springer, New York
  30. Magnus JR, Neudecker H (1989) Matrix differential calculus with applications in statistics and econometrics, revised edn. Wiley, Chichester


Acknowledgements

The author is indebted to Dr. Y. Tsubo and Dr. T. Shimada for their enlightening suggestions and critical reading of the manuscript. This work was supported by MEXT (No. 23360047) and by the Support Center for Advanced Telecommunications Technology Research (SCAT).

Author information

Correspondence to Hisa-Aki Tanaka.

Appendices

Appendix 1 Assumptions on the Phase Response Function and Outlines of the Presented Proofs

In this appendix, we state the assumptions on the phase response function Z(θ) that are required to prove the existence of the entrainment limits, i.e., the optimal forcings \(f_{\mathrm{opt},\ p}\), and we outline the corresponding proofs of P1 and P2 in the main text, for the cases (1) 1 < p < ∞ and p = ∞, and (2) p = 1, respectively.

[Case of 1 < p < ∞ and p = ∞]

For 1 < p < ∞, we assume that Z(θ) satisfies the following assumptions (i)–(iv):

  (i) Z is twice differentiable (and hence, locally Lipschitz continuous);

  (ii) \(g(\theta ) =\bar{ Z}(\theta )+\lambda = 0\) has (a finite number of) isolated zeros \(\theta _{\ast}\);

  (iii) \(H(\Delta \phi,\lambda )\) has a finite number of isolated optimal solutions \((\Delta \phi _{\ast},\lambda _{\ast})\); and

  (iv) at each optimal solution of H, the bordered Hessian satisfies \(\vert \mathcal{H}(H)\vert \neq 0\), and \(\bar{Z}^{\prime}(\theta _{\ast})\neq 0\) for each \(\theta _{\ast}\) in (ii).

Namely, assume that Z is smooth and generic in the above sense; then the best forcing \((\equiv \overline{f}_{\mathrm{opt},\ p} \in L^{p}(S))\) is uniquely given by \(f_{\ast}\) in Eq. (4.4) in the main text, provided that the “best” solution \((\overline{\Delta \phi }_{\ast},\overline{\lambda }_{\ast})\), determined by the nonlinear equations (4.5) in the main text, exists; in this case, \(F(\overline{\Delta \phi }_{\ast},\overline{\lambda }_{\ast})\) is the largest among all possible optima \(F(\Delta \phi _{\ast},\lambda _{\ast})\) under the constraint \(G(\Delta \phi,\lambda ) = 0\).

The outline of the proof of the above statement is as follows. Firstly, \(f_{\ast} \in L^{p}(S)\) and \(g \in L^{q}(S)\) are satisfied by assumption (i). Then, from Hölder's inequality, we start from a candidate \(f_{\ast}\) of Eq. (4.4) in the main text, and the problem reduces to a finite-dimensional optimization of \(H(\Delta \phi,\lambda )\), as shown in the main text. This optimization of H is possible owing to the following facts. First, from assumption (iii), H has only isolated optimal points. Second, from assumptions (i), (ii), and (iv), the derivatives \(\frac{\partial ^{2}H}{\partial \Delta \phi ^{2}}\), \(\frac{\partial ^{2}H}{\partial \Delta \phi \partial \lambda }\), \(\frac{\partial ^{2}H}{\partial \lambda \partial \Delta \phi }\), and \(\frac{\partial ^{2}H}{\partial \lambda ^{2}}\) are continuous in a neighborhood of the optimal points; this is verified directly by the (ε, δ)-definition of the limit, using the following facts (a) and (b): (a) all required derivatives of H are given explicitly as in Eqs. (4.15) below, and (b) although some integrals in Eqs. (4.15) are singular, all of them are verified to have finite values.

Then, as the next step, we obtain the necessary and sufficient conditions for the existence of the optimal solutions of H. Finally, using these conditions, all (locally) optimal forcings \(f_{\mathrm{opt},\ p}\) are captured, and we verify that the best (optimal) forcing indeed maximizes the locking range in the phase equation (4.1) in the main text.
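For reference, the step above uses Hölder's inequality on S in the form (with the bracket \(\langle \cdot \rangle \) and the norms as in the main text; the normalization convention is assumed to be the one used there)

$$\displaystyle \langle \vert f(\theta )\,g(\theta )\vert \rangle \;\leq \;\Vert f\Vert _{p}\,\Vert g\Vert _{q},\qquad \frac{1} {p} + \frac{1} {q} = 1,$$

with equality, for 1 < p < ∞, if and only if \(\vert f(\theta )\vert ^{p}\) and \(\vert g(\theta )\vert ^{q}\) are proportional almost everywhere on S. This is what singles out a maximizer proportional to \(\mathrm{sig}[g(\theta )]\,\vert g(\theta )\vert ^{\frac{1}{p-1}}\), consistent with the limits of the candidate \(f_{\ast}\) of Eq. (4.4) computed in Appendix 4.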

As for the case of p = ∞, the same assumptions (i)–(iv) are required, and almost the same argument as above is repeated. First, \(f_{\mathrm{opt},\ \infty }(\theta ) \equiv M\mathrm{sig}[g(\theta )]\) satisfies equality in Hölder's inequality, \(2\pi J[f_{\mathrm{opt},\ \infty }] =\Vert f_{\mathrm{opt},\ \infty }\,g\Vert _{1} =\Vert f_{\mathrm{opt},\ \infty }\Vert _{\infty }\Vert g\Vert _{1} = M\Vert g\Vert _{1}\), which implies that \(f_{\mathrm{opt},\ \infty }\) is optimal in \(L^{\infty }(S)\). (We note that the uniqueness of \(f_{\mathrm{opt},\ \infty }\) can be proved, but the proof is omitted here.) Next, to determine the parameters \((\Delta \phi,\lambda )\), the same argument as in the case of P1 for 1 < p < ∞ is repeated, resulting in the following equations: \(\left \langle \mathrm{sig}[g(\theta )]Z^{\prime}(\theta +\Delta \phi )\right \rangle = 0\), \(\left \langle \mathrm{sig}[g(\theta )]\right \rangle = 0\), and \(\vert \mathcal{H}(H)\vert = \mathcal{H}_{13}(\mathcal{H}_{12}^{2} -\mathcal{H}_{13}\mathcal{H}_{22}) > 0\). We can verify that such \(f_{\mathrm{opt},\ \infty }\) indeed maximizes (minimizes) Γ at \(\phi _{+}\) (\(\phi _{-}\)).
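A minimal numerical check of this Hölder equality, assuming an illustrative Z(θ) and a trial pair (Δϕ, λ) (both hypothetical; the actual pair is determined by the equations just stated, and np.sign plays the role of sig[·]):

    import numpy as np

    theta = np.linspace(-np.pi, np.pi, 4001, endpoint=False)
    dth = theta[1] - theta[0]

    Z = lambda th: np.sin(th) + 0.3 * np.sin(2 * th)   # illustrative Z, not the Hodgkin-Huxley response
    dphi, lam, M = -1.6, 0.0, 1.0                      # trial (Delta phi, lambda); hypothetical values

    g = Z(theta + dphi) - Z(theta) + lam               # g(theta) = Zbar(theta) + lambda
    f_inf = M * np.sign(g)                             # f_opt,inf(theta) = M sig[g(theta)]

    lhs = np.sum(f_inf * g) * dth                      # <f_opt,inf g>
    rhs = M * np.sum(np.abs(g)) * dth                  # M ||g||_1
    print(lhs, rhs)                                    # equal, since f_inf * g = M |g| pointwise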

[Case of p = 1]

For p = 1, we assume that Z(θ) satisfies the following assumptions (i), (ii), and (iii):

  (i) Z is locally Lipschitz continuous,

  (ii) \(g(\theta ) =\bar{ Z}(\theta )+\lambda \) achieves its maximum and its minimum at some \(\theta =\theta _{\mathrm{max}}\) and \(\theta =\theta _{\mathrm{min}}\), respectively, and

  (iii) the difference between the maximum and the minimum of \(\bar{Z}(\theta )\) [\(=\bar{ Z}(\theta _{\mathrm{max}}) -\bar{ Z}(\theta _{\mathrm{min}})\)] is maximized for a particular value \(\Delta \phi = \Delta \phi _{\mathrm{max}}\), and this choice of \(\Delta \phi _{\mathrm{max}}\) (and its associated \(\theta _{\mathrm{max}}\), \(\theta _{\mathrm{min}}\)) is unique for a given Z.

Namely, assume that Z is continuous and generic in the above sense; then the pair of pulses

$$\displaystyle\begin{array}{rcl} f_{\mathrm{opt},\ 1}(\theta ) \equiv -M[\Delta (\theta +\Delta \phi _{\mathrm{max}}) - \Delta (\theta )],& &{}\end{array}$$
(4.7)

where

$$\displaystyle\begin{array}{rcl} \Delta (\theta ) = \frac{1} {2n\epsilon }\ (\mathrm{for}\ \vert \theta \vert \leq \epsilon ),\ 0\ (\mathrm{otherwise}),& &{}\end{array}$$
(4.8)

realizes the ideal largest locking range \(\frac{M} {2\pi } \Vert g\Vert _{\infty }\) as ε → +0 (in \(\Delta (\theta ) = \frac{1} {2n\epsilon }\)), where \(\Delta \phi _{\mathrm{max}}\) satisfies \(\Delta \phi _{\mathrm{max}} =\theta _{\mathrm{max}} -\theta _{\mathrm{min}}\).

The outline of the proof of the above statement is as follows. Firstly, \(g \in L^{\infty }(S)\) is satisfied by assumption (i). Then, we construct a candidate optimal forcing \(f_{\mathrm{opt},\ 1}\ (\in L^{1}(S))\) as shown in Eq. (4.7), taking care of the constraints (4.2) in the main text, from Hölder's inequality under assumptions (ii) and (iii). Next, we verify that this \(f_{\mathrm{opt},\ 1}\) realizes the best possible locking range \((= \frac{M}{2\pi }\Vert g\Vert _{\infty })\) obtained from Hölder's inequality as ε → +0, again using assumptions (i), (ii), and (iii).

More precisely, for \(f_{\mathrm{opt},\ 1}\), we assume that \(\bar{Z}(\theta ) \equiv Z(\theta +\Delta \phi ) - Z(\theta )\) has a maximum and a minimum at \(\theta =\theta _{\mathrm{max}}\) and \(\theta =\theta _{\mathrm{min}}\), respectively (assumption (ii)), and further, without loss of generality, we assume that the value of \(\bar{Z}(\theta _{\mathrm{max}}) -\bar{ Z}(\theta _{\mathrm{min}})\) (the maximum of \(\bar{Z}\) minus the minimum of \(\bar{Z}\)) is largest at some unique value \(\Delta \phi = \Delta \phi _{\mathrm{max}}\) (assumption (iii)). Note that this assumption is satisfied if Z is (locally Lipschitz) continuous (assumption (i)), and a simple mathematical argument shows that \(\Delta \phi _{\mathrm{max}} =\theta _{\mathrm{max}} -\theta _{\mathrm{min}}\) if such a \(\Delta \phi _{\mathrm{max}}\) exists. Then, we represent \(f_{\mathrm{opt},\ 1}\) as \(f_{\mathrm{opt},\ 1}(\theta ) = -M[\Delta (\theta +\Delta \phi _{\mathrm{max}}) - \Delta (\theta )]\), as shown above. (Intuitively, this \(f_{\mathrm{opt},\ 1}\) is an arbitrarily tall pair of pulses satisfying the constraints (4.2) in the main text.) For this \(f_{\mathrm{opt},\ 1}\), the equality in Hölder's inequality, \(2\pi J[f_{\mathrm{opt},\ 1}] =\Vert f_{\mathrm{opt},\ 1}\Vert _{1}\Vert g\Vert _{\infty } = M\Vert g\Vert _{\infty }\), is satisfied as ε → 0, which implies that \(f_{\mathrm{opt},\ 1}\) becomes optimal (in \(L^{1}(S)\)) as ε → 0. Also, the associated \(\varGamma (\phi ) \rightarrow \frac{M}{4\pi }[Z(\theta _{\mathrm{max}}+\phi ) - Z(\theta _{\mathrm{min}}+\phi )] \equiv \varGamma _{0}(\phi )\) (uniformly, as ε → 0) holds if Z is locally Lipschitz continuous. As in the above cases of 1 < p < ∞ and p = ∞, we can verify that the associated \(\varGamma _{0}(\phi )\) is maximized (minimized) at \(\phi _{+}\) (\(\phi _{-}\)), and \(\varGamma _{0}(\phi _{+}) -\varGamma _{0}(\phi _{-}) = \frac{M}{2\pi }\Vert g\Vert _{\infty }\), where \(\phi _{+} -\phi _{-}\) satisfies \(\phi _{+} -\phi _{-} = \Delta \phi _{\mathrm{max}}\), if \(\Delta \phi _{\mathrm{max}}\) exists.

So far, we have assumed the existence of the best choice \(\Delta \phi = \Delta \phi _{\mathrm{max}}\), which results in a possible optimal forcing \(f_{\mathrm{opt},\ 1}\). Now we are in a position to determine \(\Delta \phi _{\mathrm{max}}\) for a given Z. This \(\Delta \phi _{\mathrm{max}}\) is obtained numerically as follows (if Z is continuous): we plot the graph of \(\varGamma _{0}(\phi ) = M[Z(\phi +\Delta \phi ) - Z(\phi )] = M\bar{Z}(\phi )\) for a given \(\Delta \phi \in [-\pi,0]\) and then gradually vary this parameter, again plotting the graph of \(\varGamma _{0}(\phi )\) for each value. Thus, the locking range R = (the maximum of \(\varGamma _{0}\)) − (the minimum of \(\varGamma _{0}\)) is obtained as a function of \(\Delta \phi \). Note that \(R(\Delta \phi )\) is an even function, owing to the symmetry of the waveform of \(f_{\mathrm{opt},\ 1}\). Once \(\Delta \phi _{\mathrm{max}}\) is obtained, it determines \(\theta _{\mathrm{max}}\) and \(\theta _{\mathrm{min}}\) from the graph of \(\varGamma _{0}\).
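The scan just described can be written as a short numerical sketch (the Z below is an illustrative stand-in for the phase response, not the Hodgkin–Huxley one; constant factors are dropped since only the location of the maximum of R matters):

    import numpy as np

    theta = np.linspace(-np.pi, np.pi, 4000, endpoint=False)
    Z = lambda th: np.sin(th) + 0.3 * np.sin(2 * th)   # illustrative Z(theta)

    def R(dphi):
        Zbar = Z(theta + dphi) - Z(theta)              # Zbar(theta) = Z(theta + dphi) - Z(theta)
        return Zbar.max() - Zbar.min()                 # locking range, up to a constant factor

    dphis = np.linspace(-np.pi, 0.0, 1000)             # scan Delta phi over [-pi, 0]
    Rs = np.array([R(d) for d in dphis])
    dphi_max = dphis[np.argmax(Rs)]                    # Delta phi_max
    Zbar = Z(theta + dphi_max) - Z(theta)
    theta_max, theta_min = theta[np.argmax(Zbar)], theta[np.argmin(Zbar)]
    print(dphi_max, theta_max, theta_min)              # cf. Delta phi_max = theta_max - theta_min above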

Appendix 2 Derivation of the Nonlinear Equations Determining Optimal Forcings

First, note that \(\left (\frac{\partial G}{\partial \Delta \phi }, \frac{\partial G}{\partial \lambda } \right )\neq \mathbf{0}\) is always satisfied, since \(\frac{\partial G}{\partial \lambda } > 0\), as obtained in Eq. (4.13) below. Hence, the Lagrange multiplier rule applies: there exists some value of μ (which can be 0) in \(H = F +\mu G\) such that the associated optimal solution \((\Delta \phi _{\ast},\lambda _{\ast})\) of \(H(\Delta \phi,\lambda )\) satisfies

$$\displaystyle\begin{array}{rcl} \left (\frac{\partial H} {\partial \Delta \phi }, \frac{\partial H} {\partial \lambda } \right ) = \mathbf{0},& &{}\end{array}$$
(4.9)

if it exists. Namely, candidates for the optimal solution of the optimization of \(H(\Delta \phi,\lambda )\) are obtained by solving Eq. (4.9) for \(\Delta \phi _{\ast}\), \(\lambda _{\ast}\), and \(\mu _{\ast}\); the derivation process is as follows.

For conciseness of expressions, we start by defining \(\alpha = \frac{p} {p-1}\) ( > 0) and \(\beta = \frac{1} {p-1}\) ( > 0). Then, the derivatives of \(F(\Delta \phi,\lambda )\) are obtained as:

$$\displaystyle\begin{array}{rcl} & & \frac{\partial F} {\partial \Delta \phi } =\alpha \left \langle \mathrm{sig}[g(\theta )]\vert g(\theta )\vert ^{\beta }Z^{{\prime}}(\theta +\Delta \phi )\right \rangle \equiv F_{ 1}(\Delta \phi,\lambda ),{}\end{array}$$
(4.10)
$$\displaystyle\begin{array}{rcl} & & \frac{\partial F} {\partial \lambda } =\alpha \left \langle \mathrm{sig}[g(\theta )]\vert g(\theta )\vert ^{\beta }\right \rangle =\alpha G \equiv F_{2}(\Delta \phi,\lambda ),{}\end{array}$$
(4.11)

where \(g(\theta ) =\bar{ Z}(\theta )+\lambda\). Likewise, the derivatives of \(G(\Delta \phi,\lambda )\) are obtained as:

$$\displaystyle\begin{array}{rcl} \frac{\partial G} {\partial \Delta \phi }& =& \beta \left \langle \vert g(\theta )\vert ^{\beta -1}Z^{{\prime}}(\theta +\Delta \phi )\right \rangle,{}\end{array}$$
(4.12)
$$\displaystyle\begin{array}{rcl} \frac{\partial G} {\partial \lambda } & =& \beta \left \langle \vert g(\theta )\vert ^{\beta -1}\right \rangle > 0.{}\end{array}$$
(4.13)

The derivation of Eqs. (4.10)–(4.13) is straightforward and is omitted here (a brief sketch is given below). Note that in Eq. (4.11), \(F_{2}(\Delta \phi,\lambda ) =\alpha \left \langle \mathrm{sig}[g(\theta )]\vert g(\theta )\vert ^{\beta }\right \rangle = 0\) is simply the constraint \(G(\Delta \phi,\lambda ) = 0\). Since in Eq. (4.9) we have \(\frac{\partial H}{\partial \lambda } = \frac{\partial F}{\partial \lambda } +\mu \frac{\partial G}{\partial \lambda } = 0\), and since \(\frac{\partial F}{\partial \lambda } =\alpha G(\Delta \phi,\lambda ) = 0\) and \(\frac{\partial G}{\partial \lambda } > 0\) follow from the above arguments, μ is uniquely determined as \(\mu _{\ast} = 0\). (This may sound contradictory, as the μG term vanishes in \(H = F +\mu G\) if \(\mu =\mu _{\ast} = 0\); however, for \(\mu _{\ast} = 0\), Eq. (4.9) reduces to \(F_{1}(\Delta \phi,\lambda ) = F_{2}(\Delta \phi,\lambda ) = 0\), and the solutions of this system automatically satisfy the charge–balance constraint (4.2) in the main text. Hence, the situation here does not contradict the Lagrange multiplier rule.) Thus, the candidates for optimal solutions are obtained from \(F_{1} = F_{2} = 0\).
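For the reader's convenience, one way Eqs. (4.10) and (4.11) can be read off, under the assumption (consistent with the expressions above, although the precise definition of F is given in the main text) that \(F(\Delta \phi,\lambda ) = \langle \vert g(\theta )\vert ^{\alpha }\rangle \) with \(g(\theta ) =\bar{ Z}(\theta )+\lambda \) and \(\bar{Z}(\theta ) = Z(\theta +\Delta \phi ) - Z(\theta )\), is the chain rule:

$$\displaystyle \frac{\partial } {\partial \lambda }\vert g\vert ^{\alpha } =\alpha \,\mathrm{sig}[g]\,\vert g\vert ^{\alpha -1} =\alpha \,\mathrm{sig}[g]\,\vert g\vert ^{\beta },\qquad \frac{\partial } {\partial \Delta \phi }\vert g\vert ^{\alpha } =\alpha \,\mathrm{sig}[g]\,\vert g\vert ^{\beta }\,Z^{\prime}(\theta +\Delta \phi ),$$

since \(\alpha - 1 =\beta \) and \(\partial g/\partial \Delta \phi = Z^{\prime}(\theta +\Delta \phi )\); taking \(\langle \cdot \rangle \) of both expressions and integrating term by term (cf. footnote 6) gives Eqs. (4.10) and (4.11). Eqs. (4.12) and (4.13) follow in the same way from \(G(\Delta \phi,\lambda ) = \langle \mathrm{sig}[g(\theta )]\vert g(\theta )\vert ^{\beta }\rangle \), the identification used just above.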

Now we are in a position to distinguish optimal solutions from nonoptimal ones. For this purpose, the so-called bordered Hessian matrix of H [30] is introduced as:

$$\displaystyle\begin{array}{rcl} \mathcal{H}(H) = \left [\begin{array}{ccc} 0 &\mathcal{H}_{12} & \mathcal{H}_{13} \\ \mathcal{H}_{21} & \mathcal{H}_{22} & \mathcal{H}_{23} \\ \mathcal{H}_{31} & \mathcal{H}_{32} & \mathcal{H}_{33}\\ \end{array} \right ],& &{}\end{array}$$
(4.14)

whose elements are given by:

$$\displaystyle\begin{array}{rcl} & & \mathcal{H}_{12} = \mathcal{H}_{21} = \frac{\partial G} {\partial \Delta \phi } =\beta \left \langle \vert g(\theta )\vert ^{\beta -1}Z^{{\prime}}(\theta +\Delta \phi )\right \rangle,{}\end{array}$$
(4.15a)
$$\displaystyle\begin{array}{rcl} & & \mathcal{H}_{13} = \mathcal{H}_{31} = \frac{\partial G} {\partial \lambda } =\beta \left \langle \vert g(\theta )\vert ^{\beta -1}\right \rangle > 0,{}\end{array}$$
(4.15b)
$$\displaystyle\begin{array}{rcl} \mathcal{H}_{22} = \frac{\partial ^{2}F} {\partial \Delta \phi ^{2}}& =& \alpha \beta \left \langle \left \vert g(\theta )\right \vert ^{\beta -1}Z^{{\prime}}(\theta +\Delta \phi )^{2}\right \rangle \\ & +& \alpha \left \langle \mathrm{sig}[g(\theta )]\vert g(\theta )\vert ^{\beta }Z^{{\prime\prime}}(\theta +\Delta \phi )\right \rangle,{}\end{array}$$
(4.15c)
$$\displaystyle\begin{array}{rcl} & & \mathcal{H}_{23} = \frac{\partial ^{2}F} {\partial \Delta \phi \partial \lambda } = \alpha \beta \left \langle \vert g(\theta )\vert ^{\beta -1}Z^{{\prime}}(\theta +\Delta \phi )\right \rangle,{}\end{array}$$
(4.15d)
$$\displaystyle\begin{array}{rcl} \mathcal{H}_{32} = \frac{\partial ^{2}F} {\partial \lambda \partial \Delta \phi }& =& \alpha \frac{\partial } {\partial \Delta \phi }\left \langle \mathrm{sig}[g(\theta )]\vert g(\theta )\vert ^{\beta }\right \rangle \\ & =& \alpha \beta \left \langle \vert g(\theta )\vert ^{\beta -1}Z^{{\prime}}(\theta +\Delta \phi )\right \rangle,{}\end{array}$$
(4.15e)
$$\displaystyle\begin{array}{rcl} \mathcal{H}_{33} = \frac{\partial ^{2}F} {\partial \lambda ^{2}} & =& \alpha \frac{\partial } {\partial \lambda }\left \langle \mathrm{sig}[g(\theta )]\vert g(\theta )\vert ^{\beta }\right \rangle \\ & =& \alpha \beta \left \langle \vert g(\theta )\vert ^{\beta -1}\right \rangle =\alpha \mathcal{H}_{ 13} > 0,{}\end{array}$$
(4.15f)

where \(g(\theta ) =\bar{ Z}(\theta )+\lambda\). The Hessian \(\vert \mathcal{H}(H)\vert\) is then obtained as:

$$\displaystyle\begin{array}{rcl} \vert \mathcal{H}(H)\vert = \mathcal{H}_{13}(\alpha \mathcal{H}_{12}^{2} -\mathcal{H}_{ 13}\mathcal{H}_{22}),& &{}\end{array}$$
(4.16)

which turns out to be particularly useful because a solution \((\Delta \phi _{\ast},\lambda _{\ast})\) of \(F_{1}(\Delta \phi,\lambda ) = F_{2}(\Delta \phi,\lambda ) = 0\) is a maximum if it satisfies \(\vert \mathcal{H}(H)\vert > 0\), and a minimum if \(\vert \mathcal{H}(H)\vert < 0\) [30]. Hence, from the above calculations, an optimal solution \((\Delta \phi,\lambda )\) maximizing \(F(\Delta \phi,\lambda )\) under the charge–balance constraint (4.2) in the main text is found to exist if the following conditions are satisfied:

$$\displaystyle\begin{array}{rcl} & & \left \langle \mathrm{sig}[\bar{Z}(\theta )+\lambda ]\vert \bar{Z}(\theta )+\lambda \vert ^{\beta }Z^{{\prime}}(\theta +\Delta \phi )\right \rangle = 0,{}\end{array}$$
(4.17)
$$\displaystyle\begin{array}{rcl} & & \left \langle \mathrm{sig}[\bar{Z}(\theta )+\lambda ]\vert \bar{Z}(\theta )+\lambda \vert ^{\beta }\right \rangle = 0,{}\end{array}$$
(4.18)
$$\displaystyle\begin{array}{rcl} & & \vert \mathcal{H}(H)\vert = \mathcal{H}_{13}(\alpha \mathcal{H}_{12}^{2} -\mathcal{H}_{ 13}\mathcal{H}_{22}) >\ 0,{}\end{array}$$
(4.19)

as in the nonlinear equations (4.5) in the main article.
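A minimal numerical sketch of solving Eqs. (4.17)–(4.19), assuming an illustrative Z(θ) and p = 2 (so α = 2, β = 1); the waveforms discussed in Appendix 3 use the Hodgkin–Huxley phase response instead, and the initial guess below is arbitrary:

    import numpy as np
    from scipy.optimize import fsolve

    theta = np.linspace(-np.pi, np.pi, 4001, endpoint=False)
    dth = theta[1] - theta[0]
    avg = lambda h: np.sum(h) * dth                      # the bracket <.> as an integral over one period

    Z   = lambda th: np.sin(th) + 0.3 * np.sin(2 * th)   # illustrative Z, not the Hodgkin-Huxley response
    dZ  = lambda th: np.cos(th) + 0.6 * np.cos(2 * th)   # Z'
    ddZ = lambda th: -np.sin(th) - 1.2 * np.sin(2 * th)  # Z''
    p = 2.0
    alpha, beta = p / (p - 1), 1.0 / (p - 1)

    def eqs(x):                                          # Eqs. (4.17) and (4.18)
        dphi, lam = x
        g = Z(theta + dphi) - Z(theta) + lam
        return [avg(np.sign(g) * np.abs(g) ** beta * dZ(theta + dphi)),
                avg(np.sign(g) * np.abs(g) ** beta)]

    dphi_s, lam_s = fsolve(eqs, x0=[-1.6, 0.0])

    g = Z(theta + dphi_s) - Z(theta) + lam_s             # bordered-Hessian check, Eq. (4.19)
    H12 = beta * avg(np.abs(g) ** (beta - 1) * dZ(theta + dphi_s))
    H13 = beta * avg(np.abs(g) ** (beta - 1))
    H22 = (alpha * beta * avg(np.abs(g) ** (beta - 1) * dZ(theta + dphi_s) ** 2)
           + alpha * avg(np.sign(g) * np.abs(g) ** beta * ddZ(theta + dphi_s)))
    print(dphi_s, lam_s, H13 * (alpha * H12 ** 2 - H13 * H22) > 0)   # True for a maximizing solution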

Appendix 3 Detailed Information Regarding Optimal Forcings

Here we present detailed information regarding the optimal forcings for the example in Fig. 4.3 in the main text. These are obtained numerically by the algorithms related to P1 and P2 presented in the main text. We note that these optima are consistent with the results of a brute-force search for optimal forcings with a genetic algorithm, which are omitted here owing to space limitations.

Fig. 4.3 All the optimal forcings for various p (= 1, 1.01, 2, 5, ∞) obtained for the Hodgkin–Huxley neuron phase model [17]: (a) p = 1, 1.01; (b) p = 2, ∞; and (c) p = 5

All the solutions \((\Delta \phi _{\ast},\lambda _{\ast})\) of Eq. (4.9), the associated locking ranges, and the sign of the Hessian \(\vert \mathcal{H}(H)\vert \) are listed below for p = 1.01, 2, 5, ∞.

  (a) p = 1.01 (cf. Fig. 4.3a)

    solution 1: \((\Delta \phi,\lambda ) = (-\pi,0)\), locking range R = 2.2585700, \(\vert \mathcal{H}(H)\vert < 0\);
    solution 2: \((\Delta \phi,\lambda ) = (-1.3649363, 0.57040837)\), R = 2.7613265, \(\vert \mathcal{H}(H)\vert > 0\).

  (b) p = 2 (cf. Fig. 4.3b)

    solution 1: \((\Delta \phi,\lambda ) = (-\pi,0)\), locking range R = 1.3287487, \(\vert \mathcal{H}(H)\vert < 0\);
    solution 2: \((\Delta \phi,\lambda ) = (-1.6150653, 0.0000000)\), R = 1.4926698, \(\vert \mathcal{H}(H)\vert > 0\).

  (c) p = 5 (cf. Fig. 4.3c)

    solution 1: \((\Delta \phi,\lambda ) = (-\pi,0)\), locking range R = 1.21853096, \(\vert \mathcal{H}(H)\vert > 0\);
    solution 2: \((\Delta \phi,\lambda ) = (-2.5085617, -0.23864305)\), R = 1.209864499, \(\vert \mathcal{H}(H)\vert < 0\);
    solution 3: \((\Delta \phi,\lambda ) = (-1.948572843, -0.23837645)\), R = 1.2161630, \(\vert \mathcal{H}(H)\vert > 0\).

  (d) p = ∞ (cf. Fig. 4.3b)

    solution 1: \((\Delta \phi,\lambda ) = (-\pi,0)\), locking range R = 1.1784578, \(\vert \mathcal{H}(H)\vert > 0\).

These solutions and the corresponding optimal waveforms, namely those with positive \(\vert \mathcal{H}(H)\vert \) in the above list, are shown in Fig. 4.3.

For the case of p = 1, \(\Delta \phi _{\mathrm{max}}\) is (uniquely) obtained as \(\Delta \phi _{\mathrm{max}} \approx -1.36094\) from the algorithm regarding P2. Note that this value of \(\Delta \phi _{\mathrm{max}}\) corresponds to the optimal solution \(\Delta \phi \approx -1.364\) for p = 1.01 shown in Fig. 4.3a.

Appendix 4 Derivation of Optimal Forcings in Two Limits

Here, we consider the optimal forcings in the following two cases: (1) p → ∞ and (2) p → 1.

[Case of p → ∞]

First, we assume \(0 <\Vert g\Vert _{q} < \infty \) and \(0 \leq \vert g(\theta )\vert < \infty \), \(\forall \theta \in S\), which is natural since g(θ) is given as \(g(\theta ) =\bar{ Z}(\theta )+\lambda \). Then, it is obvious that \(0 \leq \vert g(\theta )\vert /\Vert g\Vert _{q} < \infty \), and hence, as p → ∞,

$$\displaystyle\begin{array}{rcl} \left (\frac{\vert g(\theta )\vert } {\Vert g\Vert _{q}} \right )^{ \frac{1} {p-1} } \rightarrow \left \{\begin{array}{ll} 1,&\mathrm{for}\ \vert g(\theta )\vert > 0\\ 0, &\mathrm{for }\ \vert g(\theta )\vert = 0. \end{array} \right.& &{}\end{array}$$
(4.20)

Thus, taking the limit of Eq. (4.20) for \(f_{\ast}\) in Eq. (4.4) in the main text, the limit \(f_{\ast}(\theta ) \rightarrow M\mathrm{sig}[g(\theta )]\) is obtained.

[Case of p → 1]

The limiting value of \((\vert g(\theta )\vert /\Vert g\Vert _{q})^{\frac{1}{p-1}}\) for \(f_{\ast}\) in Eq. (4.4) in the main text is obtained by the following calculations, at \(\theta =\theta _{\ast}\) and at \(\theta \neq \theta _{\ast}\). First, rescaling \(\vert g(\theta )\vert /\Vert g\Vert _{q}\) by using \(\bar{g}(\theta ) = g(\theta )/\vert g(\theta _{\ast})\vert \), we have \(\vert \bar{g}(\theta _{\ast})\vert ^{\frac{1}{p-1}} = 1\) (for any p), and for any θ we have:

$$\displaystyle\begin{array}{rcl} \left (\frac{\vert g(\theta )\vert } {\Vert g\Vert _{q}} \right )^{ \frac{1} {p-1} } = \left (\frac{\vert C\bar{g}(\theta )\vert } {\Vert C\bar{g}\Vert _{q}} \right )^{ \frac{1} {p-1} } ={\Bigl ( \frac{\vert \bar{g}(\theta )\vert } {\Vert \bar{g}\Vert _{q}} \Bigr )}^{ \frac{1} {p-1} },& &{}\end{array}$$
(4.21)

where C denotes \(\vert g(\theta _{\ast})\vert \ (< \infty )\). Here we assume the set of such \(\theta _{\ast}\) to have measure zero. Then, from \(\vert \bar{g}(\theta )\vert < 1\) and \(\frac{p}{p-1} \rightarrow \infty \) (p → 1), we have \(\vert \bar{g}(\theta )\vert ^{\frac{p}{p-1}} \rightarrow 0\) a.e. on S, and since \(\frac{1}{p} \rightarrow 1\), this results in \(\langle \vert \bar{g}(\theta )\vert ^{\frac{p}{p-1}}\rangle ^{\frac{1}{p}} \rightarrow 0\) (p → 1). Noting that \(\bigl (\vert \bar{g}(\theta _{\ast})\vert /\Vert \bar{g}\Vert _{q}\bigr )^{\frac{1}{p-1}}\) coincides (up to the normalization of the norm) with \(\langle \vert \bar{g}(\theta )\vert ^{\frac{p}{p-1}}\rangle ^{-\frac{1}{p}}\), we thus obtain, for \(\theta =\theta _{\ast}\), \(\left (\frac{\vert g(\theta _{\ast})\vert }{\Vert g\Vert _{q}} \right )^{\frac{1}{p-1}} \rightarrow +\infty \) (p → 1) from Eq. (4.21).

In contrast, for \(\theta \neq \theta _{\ast}\), we have \(\Vert \bar{g}\Vert _{q} \rightarrow \Vert \bar{g}\Vert _{\infty } = 1\ (q \rightarrow +\infty )\) and \(\vert \bar{g}(\theta )\vert < 1\). This implies \(\frac{\vert \bar{g}(\theta )\vert }{\Vert \bar{g}\Vert _{q}} \rightarrow \vert \bar{g}(\theta )\vert < 1\) (q → +∞). Then, taking the logarithm of Eq. (4.21), \(\log {\Bigl [{\Bigl (\frac{\vert \bar{g}(\theta )\vert }{\Vert \bar{g}\Vert _{q}} \Bigr )}^{\frac{1}{p-1}}\Bigr ]} = (q - 1)\log \left (\frac{\vert \bar{g}(\theta )\vert }{\Vert \bar{g}\Vert _{q}} \right ) \rightarrow -\infty \ (q \rightarrow +\infty )\), and hence \({\Bigl (\frac{\vert \bar{g}(\theta )\vert }{\Vert \bar{g}\Vert _{q}}\Bigr )}^{\frac{1}{p-1}} \rightarrow 0\) (p → 1). Thus, from these calculations, we obtain the limit of \(f_{\ast}(\theta )\) in Eq. (4.6b) in the main text.
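Both limits can also be observed numerically with a minimal sketch (the g below is an illustrative choice with a single maximizer \(\theta _{\ast} = 0\); the norm convention is an assumption, and only the qualitative behavior matters):

    import numpy as np

    theta = np.linspace(-np.pi, np.pi, 4001, endpoint=False)
    dth = theta[1] - theta[0]
    g = np.cos(theta)                                    # illustrative g(theta); maximizer theta_* = 0

    def shape(p):
        # (|g| / ||g||_q)^(1/(p-1)): the theta-dependence of f_* in Eq. (4.4)
        q = p / (p - 1)
        g_q = (np.sum(np.abs(g) ** q) * dth) ** (1.0 / q)
        return (np.abs(g) / g_q) ** (1.0 / (p - 1))

    i_far = np.argmin(np.abs(theta - 2.0))               # a sample point away from theta_*
    for p in [100.0, 10.0, 2.0, 1.1, 1.01]:
        s = shape(p)
        print(p, s.max(), s[i_far])
    # As p grows, the profile approaches 1 wherever |g| > 0 (the sig[g] limit of Eq. (4.20));
    # as p -> 1, it grows near theta_* and tends to 0 elsewhere, as derived above.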


Copyright information

© 2015 Springer Japan

About this chapter

Cite this chapter

Tanaka, HA. (2015). Entrainment Limit of Weakly Forced Nonlinear Oscillators. In: Ohira, T., Uzawa, T. (eds) Mathematical Approaches to Biological Systems. Springer, Tokyo. https://doi.org/10.1007/978-4-431-55444-8_4
