1 Introduction

1.1 Time scale calculus

We assume that the reader is familiar with the notion of time scales. Thus, note just that $\mathbb{T}$, $[a,b]_{\mathbb{T}}:=[a,b]\cap\mathbb{T}$ (resp. $(a,b)_{\mathbb{T}}:=(a,b)\cap\mathbb{T}$; similarly, we define any combination of right and left open or closed intervals), $[a,\infty)_{\mathbb{T}}:=[a,\infty)\cap\mathbb{T}$, $\sigma$, $\rho$, $\mu$ and $f^{\Delta}$ stand for the time scale, a finite time scale interval, an infinite time scale interval, the forward jump operator, the backward jump operator, the graininess and the $\Delta$-derivative of $f$, respectively. Further, the symbols $C(\mathbb{T})$, $C_{\mathrm{rd}}(\mathbb{T})$ and $C^{1}_{\mathrm{rd}}(\mathbb{T})$ stand for the classes of continuous functions, rd-continuous functions and functions with an rd-continuous $\Delta$-derivative, respectively. See [1], which is the initiating paper of the time scale theory, the thesis [2] and [3], which contain a lot of information on time scale calculus.
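For orientation, we add a standard illustration (not specific to this paper): on the two most common time scales these notions reduce to familiar objects, namely

$\mathbb{T}=\mathbb{R}\colon\ \sigma(t)=t,\ \mu(t)=0,\ f^{\Delta}(t)=f'(t);\qquad \mathbb{T}=\mathbb{Z}\colon\ \sigma(t)=t+1,\ \mu(t)=1,\ f^{\Delta}(t)=f(t+1)-f(t).$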

Now, we recall further aspects of time scale calculus which will be needed later; see, e.g., [3].

Definition 1 Let $\mathbb{T}$ be a time scale. A function $f\colon\mathbb{T}\times\mathbb{R}\to\mathbb{R}$ is called

(i) rd-continuous, if $g$ defined by $g(t):=f(t,y(t))$ is rd-continuous for any rd-continuous function $y\colon\mathbb{T}\to\mathbb{R}$;

(ii) bounded on a set $S\subset\mathbb{T}\times\mathbb{R}$, if there exists a constant $M>0$ such that

$\bigl|f(t,y)\bigr|\le M\quad\text{for all } (t,y)\in S;$

(iii) Lipschitz continuous on a set $S\subset\mathbb{T}\times\mathbb{R}$, if there exists a constant $L>0$ such that

$\bigl|f(t,y_1)-f(t,y_2)\bigr|\le L\,|y_1-y_2|\quad\text{for all } (t,y_1),(t,y_2)\in S.$

1.2 Delay dynamic equations on time scales

Let $\tau\colon\mathbb{T}\to\mathbb{T}$ be an increasing rd-continuous function satisfying $\tau(t)\le t$ for all $t\in\mathbb{T}$. Let the function $f\colon\mathbb{T}\times\mathbb{R}\to\mathbb{R}$ be rd-continuous. We consider the delay dynamic equation

$y^{\Delta}(t)=f\bigl(t,y(\tau(t))\bigr)$
(1)

on the time scale $\mathbb{T}$. (Note that $y(\tau(t))\in C_{\mathrm{rd}}(\mathbb{T})$ in view of Definition 1.)
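For instance (an illustration added for the reader's convenience), for $\mathbb{T}=\mathbb{Z}$ and $\tau(t)=t-k$ with a fixed $k\in\mathbb{N}_{0}$, equation (1) reduces to the delay difference equation

$y(t+1)-y(t)=f\bigl(t,y(t-k)\bigr),$

since $y^{\Delta}(t)=y(t+1)-y(t)$ on $\mathbb{Z}$.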

For a given $t_0\in\mathbb{T}$, a function $y\colon[\tau(t_0),\infty)_{\mathbb{T}}\to\mathbb{R}$ is said to be a solution of (1) on $[\tau(t_0),\infty)_{\mathbb{T}}$ provided $y\in C_{\mathrm{rd}}([\tau(t_0),\infty)_{\mathbb{T}})$, $y\in C^{1}_{\mathrm{rd}}([t_0,\infty)_{\mathbb{T}})$ and $y$ satisfies (1) for all $t\in[t_0,\infty)_{\mathbb{T}}$. If, moreover, an initial function $\varphi\in C([\tau(t_0),t_0]_{\mathbb{T}})$ is given and

$y(t)=\varphi(t),\quad t\in[\tau(t_0),t_0]_{\mathbb{T}},$
(2)

then we say that y is a solution of the initial value problem (IVP) (1) and (2).

1.3 Existence and uniqueness of solutions of delay dynamic equations

For our further study, it is important to verify the existence and uniqueness of solutions of the IVP (1) and (2). To this end, we use the following theorem, which (in a more general form) can be found in [[4], Theorem 2.1].

Theorem 1 (Picard-Lindelöf theorem)

Let $t_1\in\mathbb{T}$, $t_1>t_0$, $m>0$. Let

$Y_m:=\bigl\{y\in\mathbb{R}\colon |y-\varphi(t)|\le m\ \text{for all}\ t\in[\tau(t_0),t_0]_{\mathbb{T}}\bigr\},$

where the properties of $\varphi$ are described in Section 1.2 above. Assume that $f\in C_{\mathrm{rd}}([t_0,t_1]_{\mathbb{T}}\times Y_m)$ is bounded on $[t_0,t_1]_{\mathbb{T}}\times Y_m$ with a bound $M>0$, and Lipschitz continuous on $[t_0,t_1]_{\mathbb{T}}\times Y_m$. Then the initial value problem (1) and (2) has a unique solution $y$ on the interval $[\tau(t_0),\sigma(\xi)]_{\mathbb{T}}\subseteq[\tau(t_0),t_1]_{\mathbb{T}}$, where

$\xi:=\max\,[t_0,t_0+\delta]_{\mathbb{T}}$

and

$\delta:=\min\{t_1-t_0,\ m/M\}.$

By carefully tracing the proof of Theorem 1 in [4], it is easy to verify that, under the assumptions of Theorem 1, the solution of the IVP (1) and (2) depends continuously on the initial data.
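To illustrate the constructive character of Theorem 1, the following Python sketch (our illustration; it is not taken from [4]) computes the unique solution of the IVP (1) and (2) by forward stepping on a time scale consisting of isolated points only, where $y(\sigma(t))=y(t)+\mu(t)\,f(t,y(\tau(t)))$ holds for every right-scattered $t$. The function name `solve_delay_ivp`, the representation of the time scale as a finite increasing list `grid`, and the concrete data in the demonstration are our assumptions.

```python
# Illustrative sketch (not from [4]): step-by-step solution of the delay IVP
#   y^Delta(t) = f(t, y(tau(t))),   y(t) = phi(t) on [tau(t0), t0]_T,
# on a time scale given as a finite increasing list of isolated points.
# For a right-scattered t, y(sigma(t)) = y(t) + mu(t) * f(t, y(tau(t))).

import bisect


def solve_delay_ivp(grid, t0, f, tau, phi):
    """grid: sorted list of time-scale points covering [tau(t0), t1]_T."""
    i0 = grid.index(t0)
    # initial segment: y(t) = phi(t) on [tau(t0), t0]_T
    y = {t: phi(t) for t in grid[: i0 + 1]}
    for i in range(i0, len(grid) - 1):
        t, sigma_t = grid[i], grid[i + 1]
        mu = sigma_t - t                      # graininess at t
        # nearest grid point <= tau(t); on a genuine time scale tau(t) itself lies in the grid
        t_delayed = grid[bisect.bisect_right(grid, tau(t)) - 1]
        y[sigma_t] = y[t] + mu * f(t, y[t_delayed])
    return y


if __name__ == "__main__":
    # Demonstration: T = Z, tau(t) = t - 2, f(t, u) = -0.1 * u, phi(t) = 1.
    grid = list(range(-2, 21))
    sol = solve_delay_ivp(grid, 0, lambda t, u: -0.1 * u,
                          lambda t: t - 2, lambda t: 1.0)
    print({t: round(v, 4) for t, v in sorted(sol.items())[:6]})
```

On such a time scale the existence, uniqueness and continuous dependence of the solution on the initial function are visible directly from the recursion.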

2 Problem under consideration

Let $b,c\colon\mathbb{T}\to\mathbb{R}$ be rd-continuous functions such that $b(t)<c(t)$ for all $t\in[\tau(t_0),\infty)_{\mathbb{T}}$ and

$b(t)<\varphi(t)<c(t)\quad\text{for all}\ t\in[\tau(t_0),t_0]_{\mathbb{T}}.$
(3)

We define a set $\Omega\subset\mathbb{T}\times\mathbb{R}$ as

$\Omega:=\bigl\{(t,y)\colon t\in[\tau(t_0),\infty)_{\mathbb{T}},\ b(t)<y<c(t)\bigr\}.$

Then the closure $\overline{\Omega}$ equals

$\overline{\Omega}=\bigl\{(t,y)\colon t\in[\tau(t_0),\infty)_{\mathbb{T}},\ b(t)\le y\le c(t)\bigr\}$

and the boundary $\partial\Omega=\Omega_B\cup\Omega_C$, where

$\Omega_B:=\bigl\{(t,y)\colon t\in[\tau(t_0),\infty)_{\mathbb{T}},\ y=b(t)\bigr\}$

and

$\Omega_C:=\bigl\{(t,y)\colon t\in[\tau(t_0),\infty)_{\mathbb{T}},\ y=c(t)\bigr\}.$

Consider the delay dynamic equation (1) and the initial value problem (2). Let

$b^{*}:=\min\bigl\{b(t)\colon t\in[\tau(t_0),t_0]_{\mathbb{T}}\bigr\}$

and

$c^{*}:=\max\bigl\{c(t)\colon t\in[\tau(t_0),t_0]_{\mathbb{T}}\bigr\}.$

Let $t_1\in\mathbb{T}$, $t_1>t_0$. Throughout, we will assume that the function $f$ is bounded and Lipschitz continuous on a domain $S=S(t,y)\subset\mathbb{T}\times\mathbb{R}$ and

$\bigl\{[\tau(t_0),t_1]_{\mathbb{T}}\times[b^{*},c^{*}]\bigr\}\cup\overline{\Omega}\subset S.$

This condition guarantees, by Theorem 1, that every initial value problem (1) and (2) with $\varphi$ satisfying (3) has exactly one solution on the interval $[\tau(t_0),\sigma(\xi)]_{\mathbb{T}}$, $\sigma(\xi)>t_0$. It is also easy to show that this solution depends continuously on the initial function $\varphi$.

Our aim is to establish sufficient conditions on the right-hand side of equation (1) that guarantee the existence of at least one solution $y(t)$ of (1) defined on $[\tau(t_0),\infty)_{\mathbb{T}}$ such that $(t,y(t))\in\Omega$ for each $t\in[\tau(t_0),\infty)_{\mathbb{T}}$. The main result generalizes some previous results of the first author (and his co-authors) concerning the asymptotic behavior of solutions of discrete equations; see, e.g., [5–16]. To the best of our knowledge, the retract principle was extended to discrete equations in the papers [5, 7] (see [9–11] as well). In [6, 8, 14, 16], delayed discrete equations are treated by the retract technique, and in [12] the retract principle is given for discrete time scales. Papers [13, 15] are devoted to the extension of the retract principle to dynamic equations. In the present paper, we attempt to extend the retract principle to delayed dynamic equations.

For further consideration, it is convenient to establish the following concept.

Definition 2 A point $M=(t,b(t))\in\Omega_B$, $t\ge t_0$, is called a point of strict egress for the set $\Omega$ with respect to equation (1) if

$f\bigl(t,\psi(\tau(t))\bigr)<b^{\Delta}(t),$
(4)

where $\psi\colon[\tau(t),t]_{\mathbb{T}}\to\mathbb{R}$ is an arbitrary rd-continuous function such that $b(s)<\psi(s)<c(s)$ for every $s\in[\tau(t),t)_{\mathbb{T}}$ and $\psi(t)=b(t)$.

A point $M=(t,c(t))\in\Omega_C$, $t\ge t_0$, is called a point of strict egress for the set $\Omega$ with respect to equation (1) if

$f\bigl(t,\psi(\tau(t))\bigr)>c^{\Delta}(t),$
(5)

where $\psi\colon[\tau(t),t]_{\mathbb{T}}\to\mathbb{R}$ is an arbitrary rd-continuous function such that $b(s)<\psi(s)<c(s)$ for every $s\in[\tau(t),t)_{\mathbb{T}}$ and $\psi(t)=c(t)$.

Remark 1 The geometrical meaning of a point of strict egress is evident. If a point $(t^{*},b(t^{*}))\in\Omega_B$ is a point of strict egress for the set $\Omega$ with respect to (1), and $y(t)$ is a (unique) solution of (1) satisfying $y(t^{*})=b(t^{*})$, then, due to (4),

$(y-b)^{\Delta}(t^{*})=f\bigl(t^{*},\psi(\tau(t^{*}))\bigr)-b^{\Delta}(t^{*})<0.$

From the definition of the $\Delta$-derivative, we get $y(t)-b(t)<0$ (that is, $(t,y(t))\notin\overline{\Omega}$) for $t\in(t^{*},t^{*}+\delta)_{\mathbb{T}}$ with a small positive $\delta$ if $t^{*}$ is a right-dense point, and for $t=\sigma(t^{*})$ if $t^{*}$ is right-scattered.

By analogy, if $(t^{*},c(t^{*}))\in\Omega_C$ is a point of strict egress for the set $\Omega$ with respect to (1), and $y(t)$ is a (unique) solution of (1) satisfying $y(t^{*})=c(t^{*})$, then, due to (5),

$(y-c)^{\Delta}(t^{*})=f\bigl(t^{*},\psi(\tau(t^{*}))\bigr)-c^{\Delta}(t^{*})>0.$

From the definition of the $\Delta$-derivative, we get $y(t)-c(t)>0$ (that is, $(t,y(t))\notin\overline{\Omega}$) for $t\in(t^{*},t^{*}+\delta)_{\mathbb{T}}$ with a small positive $\delta$ if $t^{*}$ is a right-dense point, and for $t=\sigma(t^{*})$ if $t^{*}$ is right-scattered.
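To give a concrete reading of conditions (4) and (5) (an illustration of ours, not contained in the original text), take $\mathbb{T}=\mathbb{Z}$ and $\tau(t)=t-k$; since $b^{\Delta}(t)=b(t+1)-b(t)$ and $c^{\Delta}(t)=c(t+1)-c(t)$ on $\mathbb{Z}$, conditions (4) and (5) then read

$f\bigl(t,\psi(t-k)\bigr)<b(t+1)-b(t)\quad\text{and}\quad f\bigl(t,\psi(t-k)\bigr)>c(t+1)-c(t),$

i.e., at a boundary point the increment of a solution is strictly smaller than the increment of the lower boundary curve, respectively strictly larger than the increment of the upper boundary curve.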

Definition 3 ([17])

If $A\subset B$ are subsets of a topological space and $\pi\colon B\to A$ is a continuous mapping from $B$ onto $A$ such that $\pi(p)=p$ for every $p\in A$, then $\pi$ is said to be a retraction of $B$ onto $A$. When a retraction of $B$ onto $A$ exists, $A$ is called a retract of $B$.

3 Existence theorem

The proof of the following theorem is based on the retract method, which is well known for ordinary differential equations and goes back to Ważewski [18]. Below we will assume that, in addition to the conditions indicated explicitly, the function $f$ satisfies all the assumptions given in Section 2.

Theorem 2 Let $f\colon\mathbb{T}\times\mathbb{R}\to\mathbb{R}$. Let $b,c\colon\mathbb{T}\to\mathbb{R}$ be $\Delta$-differentiable functions such that $b(t)<c(t)$ for each $t\in[\tau(t_0),\infty)_{\mathbb{T}}$. If, moreover, every point $M\in\Omega_B\cup\Omega_C$ is a point of strict egress for the set $\Omega$ with respect to equation (1), then there exists an rd-continuous initial function $\varphi^{*}\colon[\tau(t_0),t_0]_{\mathbb{T}}\to\mathbb{R}$ satisfying

$b(t)<\varphi^{*}(t)<c(t)\quad\text{for all}\ t\in[\tau(t_0),t_0]_{\mathbb{T}},$

such that the initial problem

$y(t)=\varphi^{*}(t),\quad t\in[\tau(t_0),t_0]_{\mathbb{T}}$
(6)

defines a solution $y$ of (1) on the interval $[\tau(t_0),\infty)_{\mathbb{T}}$ satisfying

$b(t)<y(t)<c(t)\quad\text{for all}\ t\in[\tau(t_0),\infty)_{\mathbb{T}}.$
(7)

Proof The idea of the proof is simple. We suppose that the statement of the theorem is not valid. Then it is possible to prove that there exists a retraction of a segment $B:=[\alpha,\beta]$ with $\alpha<\beta$ onto the two-point set $A:=\{\alpha,\beta\}$. But it is well known that the boundary of a nonempty (closed) interval cannot be its retract (see [19]); indeed, the continuous image of the connected set $[\alpha,\beta]$ would have to be connected, while $\{\alpha,\beta\}$ is not. So, in our case, such a retractive mapping cannot exist because it is incompatible with continuity.

Throughout the proof we use, without special comment, the fact that the initial value problem in question has a unique solution and that solutions depend continuously on their initial data.

Suppose now that a function $\varphi^{*}$ satisfying the inequality

$b(t)<\varphi^{*}(t)<c(t)\quad\text{for all}\ t\in[\tau(t_0),t_0]_{\mathbb{T}}$

and generating the solution $y=y(t)$ which satisfies (7) for any $t\in[\tau(t_0),\infty)_{\mathbb{T}}$ does not exist. This means that for any rd-continuous initial function $\varphi_0$ satisfying the inequality

$b(t)<\varphi_0(t)<c(t)\quad\text{for all}\ t\in[\tau(t_0),t_0]_{\mathbb{T}},$
(8)

there exists $t_{0}^{*}\in\mathbb{T}$, $t_{0}^{*}>t_0$, such that for a corresponding solution $y=y_0(t)$ of the initial problem

$y_0(t)=\varphi_0(t),\quad t\in[\tau(t_0),t_0]_{\mathbb{T}},$

we have

$\bigl(t_{0}^{*},y_0(t_{0}^{*})\bigr)\notin\Omega$

and

$(t,y_0(t))\in\Omega\quad\text{for all}\ t\in[t_0,t_{0}^{*})_{\mathbb{T}}.$

Let us define auxiliary mappings $P_1$, $P_2$ and $P_3$.

First, define the mapping $P_1\colon B_1\to\mathbb{T}\times\mathbb{R}$, where

$B_1=\bigl\{(t_0,\varphi_0(t_0))\in\mathbb{T}\times\mathbb{R}\colon b(t_0)\le\varphi_0(t_0)\le c(t_0)\bigr\},$

such that

(i) for $\varphi_0(t)$ with $b(t)<\varphi_0(t)<c(t)$, $t\in[\tau(t_0),t_0]_{\mathbb{T}}$, we define

$P_1\colon \bigl(t_0,\varphi_0(t_0)\bigr)\mapsto\bigl(t_{0}^{*},y_0(t_{0}^{*})\bigr);$

(ii) for $\varphi_0(t)$ satisfying $b(t)<\varphi_0(t)<c(t)$, $t\in[\tau(t_0),t_0)_{\mathbb{T}}$, and $\varphi_0(t_0)=b(t_0)$, we put $t_{0}^{*}=t_0$ and define

$P_1\colon \bigl(t_0,\varphi_0(t_0)\bigr)\mapsto\bigl(t_{0}^{*},y_0(t_{0}^{*})\bigr)=\bigl(t_0,b(t_0)\bigr);$

(iii) for $\varphi_0(t)$ with $b(t)<\varphi_0(t)<c(t)$, $t\in[\tau(t_0),t_0)_{\mathbb{T}}$, and $\varphi_0(t_0)=c(t_0)$, we put $t_{0}^{*}=t_0$ and define

$P_1\colon \bigl(t_0,\varphi_0(t_0)\bigr)\mapsto\bigl(t_{0}^{*},y_0(t_{0}^{*})\bigr)=\bigl(t_0,c(t_0)\bigr).$

Second, we define the mapping $P_2\colon B_2\to\mathbb{T}\times\mathbb{R}$, where

$B_2=P_1(B_1)=\bigl\{(t_{0}^{*},y_0(t_{0}^{*}))\in\mathbb{T}\times\mathbb{R}\colon y_0(t_{0}^{*})\le b(t_{0}^{*})\text{ or }y_0(t_{0}^{*})\ge c(t_{0}^{*})\bigr\},$

as

$P_2\colon \bigl(t_{0}^{*},y_0(t_{0}^{*})\bigr)\mapsto\begin{cases}\bigl(t_{0}^{*},c(t_{0}^{*})\bigr) & \text{if } y_0(t_{0}^{*})\ge c(t_{0}^{*}),\\ \bigl(t_{0}^{*},b(t_{0}^{*})\bigr) & \text{if } y_0(t_{0}^{*})\le b(t_{0}^{*}).\end{cases}$

Third, we define the mapping $P_3\colon B_3\to\mathbb{T}\times\mathbb{R}$, where

$B_3=P_2(B_2)=\bigl\{(t_{0}^{*},\tilde{y})\in\mathbb{T}\times\mathbb{R}\colon \tilde{y}=b(t_{0}^{*})\text{ or }\tilde{y}=c(t_{0}^{*})\bigr\},$

as

$P_3\colon \bigl(t_{0}^{*},\tilde{y}\bigr)\mapsto\begin{cases}\bigl(t_0,c(t_0)\bigr) & \text{if } \tilde{y}=c(t_{0}^{*}),\\ \bigl(t_0,b(t_0)\bigr) & \text{if } \tilde{y}=b(t_{0}^{*}).\end{cases}$

We will show that the composite mapping

$P:=P_3\circ P_2\circ P_1,\qquad P\colon B_1\to A_1,$

where

$A_1=\bigl\{(t_0,b(t_0)),\,(t_0,c(t_0))\bigr\},$

is continuous with respect to the second coordinate $\varphi_0(t_0)$ of the point $(t_0,\varphi_0(t_0))\in B_1$. The definition of the mapping $P$ implies that only two resulting points are possible, namely either $P\bigl(t_0,\varphi_0(t_0)\bigr)=\bigl(t_0,c(t_0)\bigr)$ or $P\bigl(t_0,\varphi_0(t_0)\bigr)=\bigl(t_0,b(t_0)\bigr)$.

(I) We consider the first possibility, i.e., $P\bigl(t_0,\varphi_0(t_0)\bigr)=\bigl(t_0,c(t_0)\bigr)$. Let $b(t)<\varphi_0(t)<c(t)$ for all $t\in[\tau(t_0),t_0)_{\mathbb{T}}$. Then

$P_1\bigl(t_0,\varphi_0(t_0)\bigr)=\bigl(t_{0}^{*},y_0(t_{0}^{*})\bigr),\qquad \bigl(t_{0}^{*},y_0(t_{0}^{*})\bigr)\notin\Omega\quad\text{and}\quad y_0(t_{0}^{*})\ge c(t_{0}^{*}).$

Let $y_0(t_{0}^{*})>c(t_{0}^{*})$. Then $\rho(t_{0}^{*})<t_{0}^{*}$ and the continuity of the mapping $P_1$ is obvious. Indeed, if

$\varphi_{0,\varepsilon}(t),\quad t\in[\tau(t_0),t_0]_{\mathbb{T}},\qquad b(t)<\varphi_{0,\varepsilon}(t)<c(t),\quad t\in[\tau(t_0),t_0)_{\mathbb{T}}$
(9)

is an initial function defining the solution $y_{\varepsilon}(t)$, where $\varepsilon$ is a sufficiently small number, and

$\bigl|\varphi_{0,\varepsilon}(t)-\varphi_0(t)\bigr|<\varepsilon,\quad t\in[\tau(t_0),t_0]_{\mathbb{T}},$

then, due to the property of continuous dependence of solutions on their initial data, $t_{0,\varepsilon}^{*}=t_{0}^{*}$ and

$P_1\bigl(t_0,\varphi_{0,\varepsilon}(t_0)\bigr)=\bigl(t_{0,\varepsilon}^{*},y_{\varepsilon}(t_{0,\varepsilon}^{*})\bigr)=\bigl(t_{0}^{*},y_{\varepsilon}(t_{0}^{*})\bigr)\quad\text{with}\ y_{\varepsilon}(t_{0}^{*})>c(t_{0}^{*}).$

Consequently, $P\bigl(t_0,\varphi_{0,\varepsilon}(t_0)\bigr)=\bigl(t_0,c(t_0)\bigr)$.

Let $y_0(t_{0}^{*})=c(t_{0}^{*})$. By the assumption of the theorem, every boundary point of $\Omega$ is a point of strict egress for the set $\Omega$ with respect to equation (1). Then for the solution $y_{\varepsilon}(t)$ defined by (9), we have

$P_1\bigl(t_0,\varphi_{0,\varepsilon}(t_0)\bigr)=\bigl(t_{0,\varepsilon}^{*},y_{\varepsilon}(t_{0,\varepsilon}^{*})\bigr)$

either with $y_{\varepsilon}(t_{0,\varepsilon}^{*})>c(t_{0,\varepsilon}^{*})$ or with $t_{0,\varepsilon}^{*}=t_{0}^{*}$, $y_{\varepsilon}(t_{0,\varepsilon}^{*})=c(t_{0}^{*})$. (We do not describe all the possibilities for the occurrence of the first or of the second alternative.) In both alternatives we again get $P\bigl(t_0,\varphi_{0,\varepsilon}(t_0)\bigr)=\bigl(t_0,c(t_0)\bigr)$. Hence, the mapping $P$ is continuous in the case considered.

(II) We proceed analogously in the case $P\bigl(t_0,\varphi_0(t_0)\bigr)=\bigl(t_0,b(t_0)\bigr)$.

The continuity of the mapping $P$ was proved for initial functions $\varphi_0$ satisfying $b(t)<\varphi_0(t)<c(t)$ for all $t\in[\tau(t_0),t_0)_{\mathbb{T}}$ and $b(t_0)<\varphi_0(t_0)<c(t_0)$. The desired retraction $P_r$ can be defined as the mapping of the second coordinates realized by $P$. Then the mapping

$[b(t_0),c(t_0)]\ \xrightarrow{\,P_r\,}\ \{b(t_0),c(t_0)\}$

is continuous and

$\{b(t_0)\}\ \xrightarrow{\,P_r\,}\ \{b(t_0)\},\qquad \{c(t_0)\}\ \xrightarrow{\,P_r\,}\ \{c(t_0)\},$

i.e., the points $b(t_0)$ and $c(t_0)$ are fixed.

Thus we have proved that there exists a retraction $P_r$ of the set $B:=[b(t_0),c(t_0)]$ onto the two-point set $A:=\{b(t_0),c(t_0)\}$ (see Definition 3). By the fact mentioned above, this is impossible. Hence our supposition is false, and there exists an initial problem (6) such that the corresponding solution $y=y^{*}(t)$ satisfies the inequalities (7) for every $t\in[\tau(t_0),\infty)_{\mathbb{T}}$. The theorem is proved. □

4 Example

Let us consider the delay dynamic equation of type (1)

$y^{\Delta}=f\bigl(t,y(\tau(t))\bigr):=\frac{1}{t^{3}}\,y\bigl(\tau(t)\bigr)+\frac{\sin\bigl(y(\tau(t))\bigr)}{1+t^{3}}+\frac{\cos\bigl(y(\tau(t))\bigr)}{1+t^{3}}$
(10)

defined for each $t\in[a,\infty)_{\mathbb{T}}$ with $a\in\mathbb{T}$, $\tau(a)>1$ and $\mu(t)=O(t)$ (which means that there exists $d>0$ such that $\mu(t)\le d\,t$ for each $t\in\mathbb{T}$). Note that we have no further requirements on the function $\tau$ (except for those mentioned in Section 1.2). Moreover, let $t_0\in\mathbb{T}$, $t_0>a$, be sufficiently large. With the aid of Theorem 2, we will show that there exists an initial function

$\varphi^{*}(t)\in\bigl(-t^{-1},t^{-1}\bigr),\quad t\in[\tau(t_0),t_0]_{\mathbb{T}},$
(11)

which defines a solution $y(t)$ of the dynamic equation (10) for all $t\in[\tau(t_0),\infty)_{\mathbb{T}}$ satisfying

$\bigl|y(t)\bigr|<t^{-1}.$
(12)

We define $\Delta$-differentiable functions $b,c\colon[\tau(t_0),\infty)_{\mathbb{T}}\to\mathbb{R}$ satisfying $b(t)<c(t)$ for each $t\in[\tau(t_0),\infty)_{\mathbb{T}}$ as

$b(t):=-t^{-1},\qquad c(t):=t^{-1}.$
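For later use we record the $\Delta$-derivatives of $b$ and $c$ (a routine computation added here for the reader's convenience; it uses the standard formula $(1/\nu)^{\Delta}=-\nu^{\Delta}/(\nu\,\nu^{\sigma})$ with $\nu(t)=t$):

$c^{\Delta}(t)=\Bigl(\frac{1}{t}\Bigr)^{\Delta}=-\frac{1}{t\,\sigma(t)}=-\frac{1}{t\,(t+\mu(t))},\qquad b^{\Delta}(t)=\frac{1}{t\,(t+\mu(t))}.$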

We will verify that every point $M\in\partial\Omega=\Omega_B\cup\Omega_C$, where

$\Omega_B:=\bigl\{(t,y)\colon t\in[\tau(t_0),\infty)_{\mathbb{T}},\ y=-t^{-1}\bigr\},\qquad \Omega_C:=\bigl\{(t,y)\colon t\in[\tau(t_0),\infty)_{\mathbb{T}},\ y=t^{-1}\bigr\},$

is a point of strict egress for the set

$\Omega:=\bigl\{(t,y)\colon t\in[\tau(t_0),\infty)_{\mathbb{T}},\ -t^{-1}<y<t^{-1}\bigr\}$

with respect to the dynamic equation (10).

For an arbitrary function $\psi\colon[\tau(t),t]_{\mathbb{T}}\to\mathbb{R}$, $t\in[t_0,\infty)_{\mathbb{T}}$, such that $b(s)<\psi(s)<c(s)$, $s\in[\tau(t),t)_{\mathbb{T}}$, and $\psi(t)=b(t)$, we need (see (4))

$\frac{1}{t^{3}}\,\psi\bigl(\tau(t)\bigr)+\frac{\sin\bigl(\psi(\tau(t))\bigr)}{1+t^{3}}+\frac{\cos\bigl(\psi(\tau(t))\bigr)}{1+t^{3}}<\Bigl(-\frac{1}{t}\Bigr)^{\Delta},$
(13)

and, analogously, for an arbitrary function $\psi\colon[\tau(t),t]_{\mathbb{T}}\to\mathbb{R}$, $t\in[t_0,\infty)_{\mathbb{T}}$, such that $b(s)<\psi(s)<c(s)$, $s\in[\tau(t),t)_{\mathbb{T}}$, and $\psi(t)=c(t)$, we need (see (5))

$\frac{1}{t^{3}}\,\psi\bigl(\tau(t)\bigr)+\frac{\sin\bigl(\psi(\tau(t))\bigr)}{1+t^{3}}+\frac{\cos\bigl(\psi(\tau(t))\bigr)}{1+t^{3}}>\Bigl(\frac{1}{t}\Bigr)^{\Delta}.$
(14)

Inequalities (13) and (14) will be valid if

$\frac{1}{t^{3}}+\frac{2}{1+t^{3}}<\frac{1}{t^{2}(1+d)}.$

Indeed (note that the last inequality holds for $t_0$ sufficiently large: its left-hand side is smaller than $3/t^{3}$, and $3/t^{3}<1/(t^{2}(1+d))$ whenever $t>3(1+d)$), we have

$\biggl|\frac{1}{t^{3}}\,\psi\bigl(\tau(t)\bigr)+\frac{\sin\bigl(\psi(\tau(t))\bigr)}{1+t^{3}}+\frac{\cos\bigl(\psi(\tau(t))\bigr)}{1+t^{3}}\biggr|<\frac{1}{t^{3}}+\frac{2}{1+t^{3}}<\frac{1}{t^{2}(1+d)}\le\biggl|\Bigl(\frac{1}{t}\Bigr)^{\Delta}\biggr|=\frac{1}{t\,(t+\mu(t))}.$

Hence, in view of Definition 2, every point $M\in\partial\Omega$ is a point of strict egress for the set $\Omega$. Therefore, all the assumptions of Theorem 2 hold and there exists an initial function $\varphi^{*}$ with the property (11) such that the initial problem $y(t)=\varphi^{*}(t)$, $t\in[\tau(t_0),t_0]_{\mathbb{T}}$, defines a solution $y$ of (10) on the interval $[\tau(t_0),\infty)_{\mathbb{T}}$ satisfying the inequality (12) for every $t\in[\tau(t_0),\infty)_{\mathbb{T}}$. Note that this solution (due to (12)) tends to zero as $t\to\infty$.
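As a numerical complement (a sketch of ours, not part of the original example), the following Python code checks the above chain of inequalities on the concrete time scale $\mathbb{T}=q^{\mathbb{N}_0}$ with $q=2$, for which $\sigma(t)=qt$, $\mu(t)=(q-1)t$ and one may take $d=q-1$; the starting point $t_0=8$ and all names in the code are our choices.

```python
# Illustrative check (ours): strict-egress inequalities for equation (10)
# on T = q^{N_0} with q = 2, where sigma(t) = q*t and mu(t) = (q - 1)*t.

q = 2
d = q - 1          # mu(t) = (q - 1) * t <= d * t

def f_bound(t):
    # |f(t, psi(tau(t)))| <= 1/t^3 + 2/(1 + t^3), since |psi| < 1 on the
    # delayed interval (guaranteed by tau(a) > 1 and |psi(s)| < 1/s).
    return 1 / t**3 + 2 / (1 + t**3)

def delta_derivative_bound(t):
    # |(1/t)^Delta| = 1/(t * sigma(t)) = 1/(t * (t + mu(t)))
    return 1 / (t * (t + (q - 1) * t))

t = t0 = 8          # a sufficiently large starting point for q = 2
while t <= 2**20:
    lhs = f_bound(t)
    mid = 1 / (t**2 * (1 + d))
    assert lhs < mid <= delta_derivative_bound(t), f"fails at t = {t}"
    t *= q
print("strict-egress inequalities verified for t in [8, 2^20] on q^{N_0}, q = 2")
```

For $q=2$ the sufficient inequality fails at $t=4$ but holds from $t=8$ on, which illustrates why $t_0$ has to be chosen sufficiently large.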

Remark 2 In our example of equation (1) with a bounded solution vanishing at infinity, the graininess $\mu(t)$ plays an important role. Roughly speaking, the bigger the graininess is, the harder it is to construct an example of equation (1) with a bounded solution. This follows from formulas (4) and (5), where the function $f$ is squeezed between the $\Delta$-derivatives of two functions, and these derivatives decrease to zero as $\mu(t)$ goes to infinity. Nevertheless, a graininess satisfying $\mu(t)=O(t)$, as in the example above, is 'sufficiently big' and covers every well-known case of a time scale (e.g., $\mathbb{T}=\mathbb{R}$, $\mathbb{T}=\mathbb{Z}$, $\mathbb{T}=h\mathbb{Z}$, $h>0$, and $\mathbb{T}=q^{\mathbb{N}_0}$, $q>1$).