Journal of Molecular Modeling, 25:259

Role of electronic kinetic energy and resultant gradient information in chemical reactivity

  • Roman F. Nalewajski
Open Access
Original Paper
Part of the following topical collections:
  1. Zdzislaw Latajka 70th Birthday Festschrift


The role of the resultant gradient-information concept, reflecting the kinetic energy of electrons, in shaping the molecular electronic structure and reactivity preferences of open reactants is examined. This quantum-information descriptor combines contributions due to both the modulus (probability) and phase (current) components of electronic wavefunctions. The importance of resultant entropy/information concepts for distinguishing the bonded (entangled) and nonbonded (disentangled) states of molecular fragments is emphasized, the variational principle for the minimum of ensemble-average electronic energy is interpreted as a physically equivalent rule for the minimum of resultant gradient-information, and the information descriptors of charge-transfer (CT) phenomena are introduced. The in situ reactivity criteria, represented by the populational CT derivatives of the ensemble-average values of electronic energy or resultant information, are mutually related, giving rise to identical predictions of electron flows in acid(A)–base(B) reactive systems. The virial theorem decomposition of electronic energy is used to reveal changes in the resultant information content due to chemical bond formation and to rationalize the Hammond postulate of reactivity theory. The complementarity principle of structural chemistry is confronted with the regional hard (soft) acids and bases (HSAB) rule by examining the polarizational and relaxational flows in such acceptor–donor reactive systems, i.e., the responses to external-potential and CT displacements, respectively. The frontier-electron basis of the HSAB principle is reexamined, and the intra- and inter-reactant communications in A—B systems are explored.


Chemical reactivity Complementarity principle Hammond postulate HSAB rule Information theory Virial theorem 


The thermodynamic principle of the minimum electronic energy in molecules can be interpreted as a variational rule for the minimum of the ensemble-average resultant gradient-information [1, 2], related to the average kinetic energy of electrons in such (mixed) quantum states. In the grand-ensemble representation of externally open molecular systems, both principles determine the same set of optimum probabilities of the system (pure) stationary states. This equivalence resembles the identical predictions resulting from the minimum-energy and maximum-entropy principles in ordinary thermodynamics [3]. The energy and resultant gradient-information rules thus represent physically equivalent sources of reactivity criteria. Such an information transcription of the familiar energy principle allows one to reinterpret the criteria for charge transfer (CT) in reactive systems, the populational derivatives of electronic energy, as the associated derivatives of the overall measure of quantum information in molecular states, which combines the “classical” (modulus, probability) and “nonclassical” (phase, current) aspects of molecular wavefunctions. The proportionality between the resultant gradient-information and the system kinetic energy then allows one to use the molecular virial theorem [4] in general reactivity considerations [1, 2].

The classical information theory (IT) of Fisher and Shannon [5, 6, 7, 8, 9, 10, 11, 12] has been successfully applied to interpret in chemical terms the molecular probability distributions, e.g., [13, 14, 15, 16]. Information principles have been explored [17, 18, 19, 20, 21, 22] and density pieces attributed to atoms in molecules (AIM) have been approached [13, 17, 21, 22, 23, 24, 25], providing the information basis for the intuitive (stockholder) division of Hirshfeld [26]. Patterns of chemical bonds have been extracted from molecular electronic communications [13, 14, 15, 16, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37] and entropy/information distributions in molecules have been examined [13, 14, 15, 16, 38, 39]. The nonadditive Fisher information [13, 14, 15, 16, 40, 41] has been linked to electron localization function (ELF) [42, 43, 44] of modern density functional theory (DFT) [45, 46, 47, 48, 49, 50]. This analysis has also formulated the contragradience (CG) probe for localizing chemical bonds [13, 14, 15, 16, 51], and the orbital communication theory (OCT) of the chemical bond [27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37] has identified the bridge-bonds originating from the cascade propagations of information between AIM, which involve intermediate atomic orbitals [15, 16, 51, 52, 53, 54, 55, 56, 57].

In entropic theories of molecular electronic structure, one ultimately requires quantum extensions of the complementary classical measures of Fisher [5, 6] and Shannon [7, 8] of the information/entropy content in probability distributions, extensions appropriate for the complex probability amplitudes (wavefunctions) of quantum mechanics (QM). The IT distinction between the bonded (entangled) and nonbonded (disentangled) states of molecular subsystems also calls for their generalized (resultant) information descriptors [16, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70], which combine the classical (probability) and nonclassical (current) contributions. Probability distributions generate the classical entropy/information descriptors of electronic states. These contributions reflect only the wavefunction modulus, while the wavefunction phase, or its gradient determining the current density, gives rise to the corresponding nonclassical supplement in the resultant measure of the overall information content of molecular electronic states [16, 58, 59, 60]. The variational principles of such generalized entropy concepts have been used to determine the phase-equilibria in molecules and their constituent fragments [16, 61, 62, 63, 64, 65].

Paraphrasing Prigogine [71], one could regard the molecular probability distribution as determining an instantaneous structure of “being”, while the system’s current pattern generates the associated structure of “becoming”. Both of these levels of the system electronic organization contribute to the state overall entropy/information content. In quantum information theory (QIT), the classical information term, conceptually rooted in DFT, probes the entropic content of the incoherent (disentangled) local “events”, while it is the nonclassical supplement that provides the information contribution due to coherence (entanglement) of such local events. For example, resultant measures combining the probability and phase/current contributions allow one to distinguish the information content of states generating the same electron density but differing in their phase/current distributions [47, 72, 73].

The resultant Fisher-type gradient-information in a specified electronic state is proportional to the state average kinetic energy [1, 2, 13, 16, 18, 40]. This allows one to interpret the variational principle for electronic energy as an equivalent quantum-information rule. The latter forms a basis for a novel information-based treatment of reactivity phenomena [1, 2]. Various DFT-based approaches to classical issues in reactivity theory [74, 75, 76, 77, 78, 79, 80] use energy-centered arguments in justifying the observed reaction paths and relative yields of their products. Qualitative considerations on preferences in chemical reactions usually emphasize changes in energies of both reactants and of the whole reactive system, which are induced by displacements (perturbations) in parameters describing the relevant (real or hypothetical) electronic states. In such treatments, usually covering also the linear responses to these primary shifts, one explores reactivity implications of the electronic equilibrium and stability criteria [13, 15, 74, 75, 79]. For example, in charge sensitivity analysis (CSA) [74, 75], the energy derivatives with respect to the system external potential (v) and its overall number of electrons (N), and the associated charge responses of both the whole reactive systems and their constituent subsystems, have been explored as potential reactivity descriptors. In R = acid(A) ← base(B) ≡ A--B complexes, consisting of the coordinated electron-acceptor and electron-donor reactants, respectively, such responses can be subsequently combined into the corresponding in situ indices characterizing the B → A CT [74, 75]. These difference characteristics of polarized subsystems can be expressed in terms of elementary (principal) charge sensitivities of reactants [74, 75, 78, 79]. The nonclassical IT descriptors of polarized subsystems can be similarly combined into the corresponding in situ properties describing the whole reactive system.

In this work, the role of resultant gradient-information/kinetic-energy in shaping the chemical reactivity preferences will be explored and variations of the kinetic energy of electrons in the bond-forming/bond-breaking processes will be examined. The continuities of the principal physical degrees-of-freedom of electronic states, the modulus/probability and phase/current distributions, respectively, will be summarized and the virial theorem will be used to interpret, in information terms, the bond-formation process. The theorem implications for the Hammond [81] postulate of reactivity theory will also be explored. The frontier-electron approximation to molecular interactions will be adopted to extract the information perspective on Pearson’s [82] hard (soft) acids and bases (HSAB) principle of structural chemistry (see also [83]), and physical equivalence of the energy and information reactivity descriptors in the grand-ensemble representation of molecular thermodynamic equilibria will be stressed. The populational derivatives of resultant gradient-information will be examined and advocated as alternative indices of chemical reactivity, adequate in predicting both the direction and magnitude of electron flows in reactive systems. The phase-description of hypothetical stages of reactants in chemical reactions will be explored, the activation (“promotion”) of molecular substrates will be examined, and the in situ populational derivatives of resultant-information will be applied to determine the optimum amount of CT in donor–acceptor reactive systems.

Classical and nonclassical sources of the structure-information in molecular states

Consider, for reasons of simplicity, a single electron moving in the external potential v(r) created by the fixed nuclei of the molecule. Its quantum state at time t, |ψ(t)〉, is then described by the generally complex wavefunction
$$ \psi \left(\boldsymbol{r},t\right)=\left\langle \boldsymbol{r}|\psi (t)\right\rangle =R\left(\boldsymbol{r},t\right)\ \exp \left[\mathrm{i}\phi \left(\boldsymbol{r},t\right)\right], $$
where the (real) functions R(r, t) = p(r, t)^{1/2} ≥ 0 and ϕ(r, t) ≥ 0 stand for its modulus and phase components, respectively. They generate the state's two principal physical degrees-of-freedom: its instantaneous probability distribution,
$$ p\left(\boldsymbol{r},t\right)=\psi \left(\boldsymbol{r},t\right)\ \psi {\left(\boldsymbol{r},t\right)}^{\ast }=R{\left(\boldsymbol{r},t\right)}^2, $$
and the current density
$$ {\displaystyle \begin{array}{l}\boldsymbol{j}\left(\boldsymbol{r},t\right)=\left[\hslash /\left(2m\mathrm{i}\right)\right]\kern0.24em \left[\psi {\left(\boldsymbol{r},t\right)}^{\ast}\nabla \psi \left(\boldsymbol{r},t\right)-\psi \left(\boldsymbol{r},t\right)\nabla \psi {\left(\boldsymbol{r},t\right)}^{\ast}\right]\\ {}=\left(\hslash /m\right)p\left(\boldsymbol{r},t\right)\nabla \phi \left(\boldsymbol{r},t\right)\equiv p\left(\boldsymbol{r},t\right)\kern0.24em V\left(\boldsymbol{r},t\right),\end{array}} $$
where the current-per-particle V(r, t) = j(r, t)/p(r, t) determines an effective velocity field dr(t)/dt for the probability “fluid”. The probability descriptor of the (pure) molecular state thus measures a product of the conjugate states ψ and ψ*, while the phase component reflects their ratio:
$$ \phi \left(\boldsymbol{r},t\right)={\left(2\mathrm{i}\right)}^{-1}\ \ln \left[\psi \left(\boldsymbol{r},t\right)/\psi {\left(\boldsymbol{r},t\right)}^{\ast}\right], $$
In the current definition of Eq. (3), the probability “velocity”
$$ \boldsymbol{V}\left(\boldsymbol{r},t\right)=\boldsymbol{V}\left[\boldsymbol{r}(t),t\right]=\left(\hbar /m\right)\ \nabla \phi \left(\boldsymbol{r},t\right) $$
reflects the state phase-gradient ∇ϕ(r, t).
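The two equivalent forms of the current in Eq. (3) can be confirmed numerically. The following sketch is illustrative only (the units ħ = m = 1, the Gaussian modulus, and the linear phase are assumptions, not taken from the paper): it checks on a grid that the symmetric expression [ħ/(2mi)][ψ*∇ψ − ψ∇ψ*] coincides with p(ħ/m)∇ϕ.

```python
# Consistency check of the two forms of Eq. (3) for a model 1D state
# psi = R * exp(i*phi); units hbar = m = 1 are an illustrative assumption.
import numpy as np

hbar = m = 1.0
x = np.linspace(-8.0, 8.0, 4001)
dx = x[1] - x[0]
R = np.pi**-0.25 * np.exp(-x**2 / 2)     # normalized Gaussian modulus
phi = 1.3 * x                            # linear phase, i.e., constant velocity
psi = R * np.exp(1j * phi)

p = (psi * psi.conj()).real              # probability density p = R^2
dpsi = np.gradient(psi, x)
j_sym = (hbar / (2j * m)) * (psi.conj() * dpsi - psi * dpsi.conj())
j_phase = (hbar / m) * p * np.gradient(phi, x)

assert np.allclose(j_sym.real, j_phase, atol=1e-4)
```

For this constant-velocity phase, the divergence-free condition on V of Eq. (7) holds trivially, since Δϕ = 0.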

The physical descriptors p(r, t) and j(r, t) of a complex quantum state constitute independent sources of an overall information content of the molecular electronic structure: the probability distribution alone generates its classical contribution while the current (velocity) density determines its nonclassical complement in the resultant measure [16, 58, 59, 60].

The IT gradient descriptors extract the information contained in local inhomogeneities of these two principal physical distributions, reflected by their gradient and divergence, respectively. These gradient probes reflect the complementary facets of the state “structure” content: ∇p = 2R∇R extracts the spatial inhomogeneity of the probability density, the structure of “being”, while ∇⋅j = ∇p⋅V = (ħ/m) ∇p⋅∇ϕ uncovers the current structure of “becoming”. We have used above a direct implication of the probability-continuity,
$$ {\displaystyle \begin{array}{l} dp\left[\boldsymbol{r}(t),t\right]/ dt\equiv {\sigma}_p\left(\boldsymbol{r},t\right)=\partial p\left(\boldsymbol{r},t\right)/\partial t+\nabla \cdot \boldsymbol{j}\left(\boldsymbol{r},t\right)\\ {}=\partial p\left(\boldsymbol{r},t\right)/\partial t+\left[\partial p\left(\boldsymbol{r},t\right)/\partial \boldsymbol{r}\right]\ \left[d\boldsymbol{r}(t)/ dt\right]\\ {}=\partial p\left(\boldsymbol{r},t\right)/\partial t+\nabla p\left(\boldsymbol{r},t\right)\cdot \boldsymbol{V}\left(\boldsymbol{r},t\right)=0,\mathrm{or}\\ {}\partial p\left(\boldsymbol{r},t\right)/\partial t=-\nabla \cdot \boldsymbol{j}\left(\boldsymbol{r},t\right)\\ {}=-\left[\nabla p\left(\boldsymbol{r},t\right)\cdot \boldsymbol{V}\left(\boldsymbol{r},t\right)+p\left(\boldsymbol{r},t\right)\nabla \cdot \boldsymbol{V}\left(\boldsymbol{r},t\right)\right]\\ {}=-\nabla p\left(\boldsymbol{r},t\right)\cdot \boldsymbol{V}\left(\boldsymbol{r},t\right),\end{array}} $$
that divergence of the effective velocity field V(r, t), determined by the state phase-Laplacian, identically vanishes:
$$ \nabla \cdot \boldsymbol{V}\left(\boldsymbol{r},t\right)=\left(\hbar /m\right)\ \Delta \phi \left(\boldsymbol{r},t\right)=0. $$
Here, dp/dt ≡ σp and ∂p/∂t denote the total and partial time-derivatives of probability density p(r, t) = p[r(t), t], respectively. The local probability “source” (“production”) σp is reflected by the total derivative dp/dt, which measures the time rate of change in an infinitesimal volume element of the probability fluid flowing with the probability current, while the partial derivative ∂p/∂t represents the corresponding rate at the specified (fixed) point in space. One observes that the total time derivative of Eq. (6) expresses the sourceless continuity relation for electronic probability distribution: σp(r, t) = 0.
In a molecular scenario, one envisages the system electrons moving in the external potential v(r) due to the “frozen” nuclei of the familiar Born–Oppenheimer (BO) approximation. The mono-electronic system is then described by the Hamiltonian
$$ \hat{\mathrm{H}}\left(\boldsymbol{r}\right)=-\left({\hslash}^2/2m\right){\nabla}^2+v\left(\boldsymbol{r}\right)\kern0.40em \equiv \kern0.40em \hat{\mathrm{T}}\left(\boldsymbol{r}\right)+v\left(\boldsymbol{r}\right) $$
where \( \hat{\mathrm{T}}\left(\boldsymbol{r}\right) \) stands for its kinetic part. The dynamics of electronic wavefunction ψ(r, t) is determined by the Schrödinger equation (SE) of molecular quantum mechanics (QM),
$$ \mathrm{i}\hslash \kern0.28em \partial \psi \left(\boldsymbol{r},t\right)/\partial t=\hat{\mathrm{H}}\left(\boldsymbol{r}\right)\psi \left(\boldsymbol{r},t\right), $$
which further implies specific temporal evolutions of p(r, t) and ϕ(r, t).
The probability-velocity descriptor should also be attributed to the current concept associated with the state phase-component:
$$ \boldsymbol{J}\left(\boldsymbol{r},t\right)=\phi \left(\boldsymbol{r},t\right)\ \boldsymbol{V}\left(\boldsymbol{r},t\right). $$
The phase field ϕ(r, t) and its current J(r, t) then determine a nonvanishing source term σϕ(r, t) in the phase-continuity equation:
$$ {\displaystyle \begin{array}{l}{\sigma}_{\phi}\left(\boldsymbol{r},t\right)\equiv d\phi \left(\boldsymbol{r},t\right)/ dt=\partial \phi \left(\boldsymbol{r},t\right)/\partial t+\nabla \cdot \boldsymbol{J}\left(\boldsymbol{r},t\right)\kern0.5em \mathrm{or}\\ {}\partial \phi \left(\boldsymbol{r},t\right)/\partial t=-\nabla \cdot \boldsymbol{J}\left(\boldsymbol{r},t\right)+{\sigma}_{\phi}\left(\boldsymbol{r},t\right).\end{array}} $$
Using Eq. (7) gives the following expression for this phase-source:
$$ {\displaystyle \begin{array}{l} d\phi \left[\boldsymbol{r}(t),t\right]/ dt=\partial \phi \left[\boldsymbol{r}(t),t\right]/\partial t+d\boldsymbol{r}(t)/ dt\cdot \partial \phi \left[\boldsymbol{r}(t),t\right]/\partial \boldsymbol{r}\\ {}=\partial \phi \left(\boldsymbol{r},t\right)/\partial t+\boldsymbol{V}\left(\boldsymbol{r},t\right)\cdot \nabla \phi \left(\boldsymbol{r},t\right)\\ {}=\partial \phi \left(\boldsymbol{r},t\right)/\partial t+\left(\hslash /m\right)\kern0.28em {\left[\nabla \phi \left(\boldsymbol{r},t\right)\right]}^2.\end{array}} $$
The phase-dynamics from SE,
$$ \partial \phi /\partial t=\left[\hslash /(2m)\right]\left[{R}^{-1}\varDelta R-{\left(\nabla \phi \right)}^2\right]-v/\hslash, $$
ultimately identifies the state phase-production of Eq. (11):
$$ {\sigma}_{\phi }=\left[\hbar /(2m)\right]\ \left[{R}^{-1}\Delta R+{\left(\nabla \phi \right)}^2\right]-v/\hbar . $$
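The algebra leading from Eqs. (11) and (12) to the phase-source of Eq. (13) can be verified symbolically. The sketch below (my own check, not the paper's code) treats the 1D quantities R⁻¹ΔR and ∇ϕ as generic symbols and confirms that substituting the SE phase-dynamics into dϕ/dt reproduces the stated σϕ:

```python
# Symbolic check: sigma_phi = dphi/dt = (dphi/dt)_SE + (hbar/m)(grad phi)^2
# reproduces [hbar/(2m)][R^{-1} Lap R + (grad phi)^2] - v/hbar.
import sympy as sp

hbar, m, v = sp.symbols('hbar m v', positive=True)
R, gradphi, lapR = sp.symbols('R gradphi lapR')   # modulus, phase-gradient, Laplacian of R

dphi_dt_SE = hbar / (2 * m) * (lapR / R - gradphi**2) - v / hbar   # Eq. (12)
sigma_phi = sp.expand(dphi_dt_SE + (hbar / m) * gradphi**2)        # Eq. (11)
expected = sp.expand(hbar / (2 * m) * (lapR / R + gradphi**2) - v / hbar)

assert sp.simplify(sigma_phi - expected) == 0
```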

To summarize, the classical continuity relation of QM expresses a sourceless character of the electron probability distribution, while its nonclassical companion introduces a nonvanishing phase-source combining both the classical (modulus) and nonclassical (phase) inputs.

Resultant information and kinetic energy

Let us consider a fixed time t = t0 and, for simplicity, suppress this parameter in the list of state arguments. The average Fisher measure [5, 6] of the classical gradient information for locality events contained in the probability density p(r) = R(r)^2 is reminiscent of von Weizsäcker’s [84] inhomogeneity correction to the kinetic-energy functional:
$$ I\left[p\right]=\int p\left(\boldsymbol{r}\right){\left[\nabla \ln p\left(\boldsymbol{r}\right)\right]}^2d\boldsymbol{r}=\int \frac{{\left[\nabla p\left(\boldsymbol{r}\right)\right]}^2}{p\left(\boldsymbol{r}\right)}d\boldsymbol{r}\equiv \int p\left(\boldsymbol{r}\right)\ {I}_p\left(\boldsymbol{r}\right)\ d\boldsymbol{r}=4\int {\left[\nabla R\left(\boldsymbol{r}\right)\right]}^2d\boldsymbol{r}\equiv I\left[R\right]. $$
Here, p(r)Ip(r) denotes the functional’s overall density and Ip(r) stands for the associated density-per-electron. The amplitude form I[R] reveals that this classical descriptor reflects a magnitude of the state modulus-gradient. It characterizes an effective “narrowness” of the particle spatial probability distribution, i.e., a degree of determinacy in the particle position.
This classical functional of the gradient information in probability distribution generalizes into the corresponding resultant descriptor, functional of the quantum state |ψ(t)〉 itself, which combines the modulus (probability) and phase (current) contributions [40]. It is defined by the quantum expectation value of the Hermitian operator of the overall gradient information [16, 40], related to the kinetic energy operator \( \hat{T}\left(\boldsymbol{r}\right) \) of Eq. (8),
$$ \hat{I}\left(\boldsymbol{r}\right)=-4\varDelta ={\left(2\mathrm{i}\mathrm{\nabla}\right)}^2=\left(8m/{\hslash}^2\right)\kern0.28em \hat{T}\left(\boldsymbol{r}\right). $$
The integration by parts then gives the following expression for the state average (resultant) gradient-information:
$$ {\displaystyle \begin{array}{l}I\left[\psi \right]=\left\langle \psi \left|\hat{\mathrm{I}}\right|\psi \right\rangle =4\int {\left|\nabla \psi \left(\boldsymbol{r}\right)\right|}^2d\boldsymbol{r}=\int \frac{{\left[\nabla p\left(\boldsymbol{r}\right)\right]}^2}{p\left(\boldsymbol{r}\right)}d\boldsymbol{r}+4\int p\left(\boldsymbol{r}\right){\left[\nabla \phi \left(\boldsymbol{r}\right)\right]}^2d\boldsymbol{r}\\ {}\equiv I\left[p\right]+I\left[\phi \right]=I\left[p\right]+{\left(2m/\hslash \right)}^2\int \frac{\boldsymbol{j}{\left(\boldsymbol{r}\right)}^2}{p\left(\boldsymbol{r}\right)}d\boldsymbol{r}\equiv I\left[p\right]+I\left[\boldsymbol{j}\right]\equiv \int p\left(\boldsymbol{r}\right)\ {I}_{\psi}\left(\boldsymbol{r}\right)\ d\boldsymbol{r},\end{array}} $$
with p(r)Iψ(r) again denoting its overall density and Iψ(r) standing for the corresponding density-per-electron. This quantum-information concept, I[ψ] = I[p, ϕ] = I[p, j], is seen to combine the classical (probability) contribution I[p] of Fisher and the corresponding nonclassical (phase/current) supplement I[ϕ] = I[j]. It also reflects the particle average (dimensionless) kinetic energy T[ψ]:
$$ I\left[\psi \right]=\left(8m/{\hslash}^2\right)\left\langle \psi |\hat{\mathrm{T}}|\psi \right\rangle \equiv \left(8m/{\hslash}^2\right)\ T\left[\psi \right]\equiv \sigma\ T\left[\psi \right]. $$
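For a real (zero-phase) state, the resultant measure reduces to its classical Fisher part, so the proportionality I[ψ] = σT[ψ] can be checked numerically on a test orbital. The sketch below is illustrative only (the units ħ = m = 1 and the Gaussian modulus are my assumptions): the Fisher information of this variance-1/2 Gaussian equals 2, as does (8m/ħ²)T.

```python
# Numerical check of I[p] = (8m/hbar^2) T for a real Gaussian orbital;
# units hbar = m = 1 and the test function are illustrative assumptions.
import numpy as np

hbar = m = 1.0
x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]
R = np.pi**-0.25 * np.exp(-x**2 / 2)     # normalized modulus, p = R^2
p = R**2

I_p = np.sum(np.gradient(p, x)**2 / p) * dx                    # I[p] = int (p')^2/p dx
T = (hbar**2 / (2 * m)) * np.sum(np.gradient(R, x)**2) * dx    # kinetic energy
sigma = 8 * m / hbar**2

assert np.isclose(I_p, sigma * T, rtol=1e-3)
print(I_p)   # ≈ 2.0: Fisher information of a variance-1/2 Gaussian
```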
This one-electron development can be straightforwardly generalized to N-electron systems, described by the quantum state |Ψ(N)〉 exhibiting electron density ρ(r) = Np(r), where p(r) stands for its probability (shape) factor. The corresponding N-electron information operator then combines terms due to each particle,
$$ \hat{\mathrm{I}}(N)=\sum \limits_{i=1}^N\kern0.28em \hat{\mathrm{I}}\left({\boldsymbol{r}}_i\right)=\left(8m/{\hslash}^2\right)\kern0.28em \sum \limits_{i=1}^N\hat{\mathrm{T}}\left({\boldsymbol{r}}_i\right)\equiv \left(8m/{\hslash}^2\right)\kern0ex \hat{\mathrm{T}}(N), $$
and determines the state overall gradient-information,
$$ I(N)=\left\langle \Psi (N)|\hat{\mathrm{I}}(N)|\Psi (N)\right\rangle =\left(8m/{\hslash}^2\right)\left\langle \Psi (N)|\hat{\mathrm{T}}(N)|\Psi (N)\right\rangle =\left(8m/{\hslash}^2\right)T(N), $$
proportional to the associated expectation value T(N) of the system kinetic-energy operator \( \hat{\mathrm{T}}(N) \).
For example, in the one-determinantal representation of a single electron (orbital) configuration Ψ(N) = |ψ1ψ2…ψN|, e.g., in the familiar Hartree–Fock or Kohn–Sham theories, these N-electron descriptors combine the additive contributions due to the (singly) occupied, {ns = 1}, spin molecular orbitals (MO) ψ = (ψ1, ψ2, …, ψN) = {ψs}:
$$ T(N)={\sum}_s\ {n}_s\ \left\langle {\psi}_s|\hat{\mathrm{T}}|{\psi}_s\right\rangle \equiv {\sum}_s\ {n}_s\ {T}_s=\left({\hbar}^2/8m\right){\sum}_s\ {n}_s\ \left\langle {\psi}_s|\hat{\mathrm{I}}|{\psi}_s\right\rangle \equiv \left({\hbar}^2/8m\right){\sum}_s\ {n}_s\ {I}_s. $$
In the analytical LCAO MO representation, with the occupied MO expressed as linear combinations of (orthogonalized) atomic orbitals (AO) χ = (χ1, χ2, …, χk, …),
$$ \mid \boldsymbol{\psi} \left\rangle =\kern0.5em \mid \boldsymbol{\chi} \right\rangle\ \mathbf{C},\kern1.5em \mathbf{C}=\left\langle \boldsymbol{\chi} |\boldsymbol{\psi} \right\rangle =\left\{{C}_{k,s}=\left\langle {\chi}_k|{\psi}_s\right\rangle \right\}, $$
the average gradient information in the orbital configuration Ψ(N), for the unit matrix of MO occupations, n = {ns δs,s′ = δs,s′}, reads:
$$ I(N)={\sum}_s\ {n}_s\left\langle {\psi}_s|\hat{\mathrm{I}}|{\psi}_s\right\rangle ={\sum}_k{\sum}_l\left\{{\sum}_s\ {C}_{k,s}\ {n}_s\ {C_{s,l}}^{\ast}\right\}\left\langle {\chi}_l|\hat{\mathrm{I}}|{\chi}_k\right\rangle \equiv {\sum}_k{\sum}_l\ {\gamma}_{k,l}\ {I}_{l,k}=\mathrm{tr}\left(\boldsymbol{\upgamma} \mathbf{I}\right). $$
Here, the AO representation of the resultant gradient-information operator,
$$ \mathbf{I}=\left\{{I}_{k,l}=\left\langle {\chi}_k|\hat{I}|{\chi}_l\right\rangle \propto \left\langle {\chi}_k|\hat{\mathrm{T}}|{\chi}_l\right\rangle ={T}_{k,l}\right\}, $$
and the charge/bond-order (CBO) (density) matrix of LCAO MO theory,
$$ \boldsymbol{\upgamma} ={\mathbf{CnC}}^{\dagger }=\left\langle \boldsymbol{\chi} |\boldsymbol{\psi} \right\rangle\ \mathbf{n}\ \left\langle \boldsymbol{\psi} |\boldsymbol{\chi} \right\rangle \equiv \left\langle \boldsymbol{\chi} |{\hat{\mathrm{P}}}_{\psi }|\boldsymbol{\chi} \right\rangle, $$
is seen to provide the AO-representation of the projection onto the occupied MO-subspace,
$$ {\hat{\mathrm{P}}}_{\psi }=N\left[{\sum}_s\mid {\psi}_s\right\rangle \left({n}_s/N\right)\left\langle {\psi}_s\mid \right]\equiv N\left[{\sum}_s\mid {\psi}_s\right\rangle {p}_s\left\langle {\psi}_s\mid \right]\equiv N\hat{\mathrm{d}}, $$
proportional to the density operator \( \hat{\mathrm{d}} \) of the configuration MO “ensemble”.
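The trace expression I(N) = tr(γI) and the projector property of γ can be illustrated with a small numerical example. The matrices below are arbitrary stand-ins (random orthonormal LCAO coefficients and a random Hermitian AO matrix), not molecular data:

```python
# Toy check of I(N) = tr(gamma I) and of the idempotency of the CBO matrix
# for integer (0/1) MO occupations; all matrices are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(0)
k = 4                                            # number of (orthogonalized) AOs
C, _ = np.linalg.qr(rng.normal(size=(k, k)))     # orthonormal LCAO coefficients C = <chi|psi>
n = np.diag([1.0, 1.0, 0.0, 0.0])                # two singly occupied MOs, N = 2
B = rng.normal(size=(k, k))
I_ao = B + B.T                                   # stand-in Hermitian AO matrix of the information operator

gamma = C @ n @ C.conj().T                       # CBO (density) matrix
I_trace = np.trace(gamma @ I_ao)                 # tr(gamma I)
I_sum = sum(n[s, s] * C[:, s] @ I_ao @ C[:, s] for s in range(k))  # sum_s n_s <psi_s|I|psi_s>

assert np.isclose(I_trace, I_sum)
assert np.allclose(gamma @ gamma, gamma)         # AO representation of the idempotent MO projector
```

The second assertion reflects the observation above that γ is the AO representation of the projection onto the occupied-MO subspace.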

This average overall information thus assumes thermodynamic-like form, as the trace of the product of CBO matrix, the AO representation of the (occupation-weighted) MO projector, which establishes the configuration density operator, and the corresponding AO matrix of the Hermitian operator for the resultant gradient information, related to the system electronic kinetic energy. In this MO “ensemble”-averaging, the AO information matrix I constitutes the quantity-matrix, while the CBO (density) matrix γ provides the “geometrical” weighting factors in this MO “ensemble”, reflecting the system electronic state. It has been argued elsewhere [16, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37] that elements of the CBO matrix generate amplitudes of electronic communications between molecular AO “events”. This observation thus adds a new angle to interpreting this average-information expression: it is seen to represent the communication-weighted (dimensionless) kinetic energy of the system electrons.

The relevant separation of the modulus and phase components of general N-electron states calls for wavefunctions yielding the specified electron density [47]. It can be effected using the Harriman–Zumbach–Maschke (HZM) [83, 84] construction of DFT, which uses N (complex) equidensity orbitals, each generating the molecular probability distribution p(r) and exhibiting the density-dependent spatial phases that safeguard the MO orthogonality.

Bonded (entangled) and nonbonded (disentangled) states of reactants

The resultant entropy/information concepts of QIT have been applied to describe the substrate current activation and to distinguish the bonded and nonbonded states of molecular fragments [1, 2, 66, 67, 68, 69, 70]. In the course of a chemical reaction, one conventionally recognizes several of its hypothetical stages [1, 2, 13, 16, 72, 73] involving either the mutually closed [nonbonded (n), disentangled] or open [bonded (b), entangled] reactants, e.g., the electron acceptor and donor substrates in a bimolecular reactive system R = A----B involving the acidic (A) and basic (B) partners, respectively. The nonbonded status of such closed (polarized) subsystems in R+ ≡ (A+|B+), conserving the initial numbers of electrons of isolated reactants {α0}, {Nα+ = Nα0}, and, at a finite separation, relaxed in the presence of the reaction partner, {ρα+ ≠ ρα0}, is symbolized by the solid vertical line separating the two reactants at this polarization (+) stage. Only due to this mutual closure does the identity of the two substrates remain a meaningful concept. The overall electron density of R+ as a whole then reads:
$$ {\displaystyle \begin{array}{l}{\rho_{\mathrm{R}}}^{+}={\rho_{\mathrm{A}}}^{+}+{\rho_{\mathrm{B}}}^{+}\kern0.40em \equiv \kern0.40em {N_{\mathrm{A}}}^{+}{p_{\mathrm{A}}}^{+}+{N_{\mathrm{B}}}^{+}{p_{\mathrm{B}}}^{+}={N}_{\mathrm{R}}{p_{\mathrm{R}}}^{+},\\ {}{p_{\alpha}}^{+}={\rho_{\alpha}}^{+}/{N_{\alpha}}^{+},{N_{\alpha}}^{+}=\int {\rho_{\alpha}}^{+}d\boldsymbol{r}={N_{\alpha}}^0,\alpha =\mathrm{A},\mathrm{B},\end{array}} $$
$$ {p_{\mathrm{R}}}^{+}=\left({N_{\mathrm{A}}}^{+}/{N}_{\mathrm{R}}\right)\ {p_{\mathrm{A}}}^{+}+\left({N_{\mathrm{B}}}^{+}/{N}_{\mathrm{R}}\right)\ {p_{\mathrm{B}}}^{+}\equiv {P_{\mathrm{A}}}^{+}\ {p_{\mathrm{A}}}^{+}+{P_{\mathrm{B}}}^{+}\ {p_{\mathrm{B}}}^{+} $$
denotes the system global probability distribution, the shape-factor of ρR+, and the condensed probabilities {Pα+ = Nα+/NR = Nα0/NR = Pα0} reflect reactant “shares” in the overall number of electrons: NR+ = ∑α Nα+ = ∑α Nα0 = NR0 ≡ NR. These subsystems lose their “identity” in the bonded status, as mutually open parts of the externally closed (isoelectronic) reactive system R ≡ (A*¦B*) conserving NR, where the absence of a barrier for internal electron flows between the two substrates has been symbolically represented by the broken vertical line separating the two reactants. Indeed, in the absence of the dividing “wall”, each “part” physically exhausts the whole reactive system.
However, one can also contemplate external flows of electrons between the mutually nonbonded reactants and their separate (external, macroscopic) electron reservoirs. The formal mutual closure then implies the relevance of subsystem identities, while the external openness of the now macroscopic (composite) subsystems of the whole composite reactive system allows one to independently manipulate the chemical potentials of both parts, and hence also their ensemble-average electron densities {ρα* = Nα*pα*} and populations Nα* = ∫ρα* dr. In particular, the substrate chemical potentials equalized at the molecular level in both composite subsystems, {μα* = μR ≡ μR(NR)}, thus conserving the overall (ensemble-average) electron number 〈NR〉ens. = NR(μR), define the equilibrium macroscopic system in which one observes the equilibrium reactant distributions {ρα* = ρα(μR)} and the associated populations {Nα* = Nα(μR)} of the “bonded” molecular fragments {α*}. They must also characterize the equilibrium “bonded” (entangled) substrates in a hypothetical reactive system R* = (A*¦B*) ≡ R, corresponding to the equalized fragment chemical potentials at the molecular level and the conserved (ensemble-average) number of electrons: 〈NR〉ens. = NR(μR) = NR. These effective electron populations thus exhibit the equilibrium amount of the inter-reactant CT:
$$ {N}_{\mathrm{CT}}={N_{\mathrm{A}}}^{\ast }-{N_{\mathrm{A}}}^0={N_{\mathrm{B}}}^0-{N_{\mathrm{B}}}^{\ast }>0, $$
in the globally isoelectronic reaction:
$$ {N}_{\mathrm{R}}\equiv {N_{\mathrm{A}}}^{\ast }+{N_{\mathrm{B}}}^{\ast }={N_{\mathrm{A}}}^{+}+{N_{\mathrm{B}}}^{+}\equiv {N_{\mathrm{R}}}^{+}={N_{\mathrm{A}}}^0+{N_{\mathrm{B}}}^0\equiv {N_{\mathrm{R}}}^0 $$

To summarize, the fragment identity can be retained only for the mutually closed (nonbonded) status of the acidic and basic reactants, e.g., in the polarized reactive system Rn+ or in the equilibrium composite system. The subsystem electron densities {ρα = Nα pα} can be either “frozen”, e.g., in the promolecular reference R0 = (A0|B0) ≡ Rn0 consisting of the isolated-reactant distributions shifted to their actual positions in the molecular reactive system R, or “polarized”, i.e., relaxed in the presence of the reaction partner, as in R+ or R. The final equilibrium in R as a whole, combining the bonded subsystems {α*} after CT, accounts for the extra CT-induced polarization of reactants compared to R+. As we have argued above, descriptors of this state of the mutually bonded (formally open) reactants can be inferred only indirectly, by examining the chemical-potential equalization in the composite system. Similar external reservoirs are involved when one examines the independent population displacements on reactants, e.g., in defining the fragment chemical potentials and their hardness tensor.

In this chain of hypothetical reaction “events”, the polarized system R+ appears as the launching stage for the subsequent CT and the accompanying induced polarization, after the hypothetical barrier for the flow of electrons between the two subsystems has been effectively lifted. This density polarization is also accompanied by the subsystem current-promotion, reflected by the modified electron flow patterns in both substrates, compared to the promolecule R0, in accordance with their current equilibrium-phase distributions [16, 66, 67, 68, 69]. This nonclassical (current) activation of both subsystems complements the classical (probability) polarization of reactants in the presence of their reaction partners. The phase aspect is thus vital for accounting for the mutual coherence (entanglement) of reactants in the reactive system as a whole.

The fragment chemical potentials μR+ = {μα+} and the elements of the hardness matrix ηR+ = {ηα,β} of the polarized reactants represent the populational derivatives of the ensemble-average electronic energy 〈ER〉ens. in the reactant resolution, reflecting the mutually closed but externally open substrates {α(μα+)} in the composite subsystems of the macroscopic polarized system, calculated for the fixed external potential of the whole system, v = vA + vB, reflecting the “frozen” molecular geometry in R+:
$$ {\mu_{\alpha}}^{+}={\left(\partial {\left\langle {E}_{\mathrm{R}}\right\rangle}_{\mathrm{ens}.}/\partial {N}_{\alpha}\right)}_v,\kern1.5em {\eta}_{\alpha, \beta}={\left(\partial {\mu_{\alpha}}^{+}/\partial {N}_{\beta}\right)}_v={\left({\partial}^2{\left\langle {E}_{\mathrm{R}}\right\rangle}_{\mathrm{ens}.}/\partial {N}_{\alpha}\partial {N}_{\beta}\right)}_v. $$
The associated global properties of R = (A*¦B*) are defined by the corresponding derivatives with respect to the overall (ensemble-average) number of electrons NR in the R fragment of the combined system:
$$ {\mu}_{\mathrm{R}}=\partial {\left\langle E\right\rangle}_{ens.}/\partial {N}_{\mathrm{R}}\kern1em \mathrm{and}\kern1em {\eta}_{\mathrm{R}}=\partial {\mu}_{\mathrm{R}}/\partial {N}_{\mathrm{R}}. $$
The optimum amount of the (fractional) CT is then determined by the difference in the chemical potentials of the (equilibrium) polarized reactants in R+, the CT gradient
$$ {\mu}_{\mathrm{CT}}={\mu_{\mathrm{A}}}^{+}-{\mu_{\mathrm{B}}}^{+}<0, $$
and the effective in situ hardness (ηCT) or softness (SCT) for this process,
$$ {\displaystyle \begin{array}{c}{\eta}_{\mathrm{CT}}=\partial {\mu}_{\mathrm{CT}}/\partial {N}_{\mathrm{CT}}\\ {}=\left({\eta}_{\mathrm{A},\mathrm{A}}-{\eta}_{\mathrm{A},\mathrm{B}}\right)+\left({\eta}_{\mathrm{B},\mathrm{B}}-{\eta}_{\mathrm{B},\mathrm{A}}\right)\equiv \kern0.35em {\eta_{\mathrm{A}}}^{\mathrm{R}}+{\eta_{\mathrm{B}}}^{\mathrm{R}}={S_{\mathrm{CT}}}^{-1},\end{array}} $$
representing the CT Hessian and its inverse, respectively; here ηXR denotes the effective chemical hardness of the “embedded” reactant X in R [72, 73]. The optimum amount of CT,
$$ {N}_{\mathrm{CT}}=-{\mu}_{\mathrm{CT}}\ {S}_{\mathrm{CT}}, $$
then generates the associated (second-order) stabilization energy:
$$ {E}_{\mathrm{CT}}={\mu}_{\mathrm{CT}}\ {N}_{\mathrm{CT}}/2=-{\mu_{\mathrm{CT}}}^2\ {S}_{\mathrm{CT}}/2<0. $$
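The in situ CT descriptors above (μCT, ηCT, SCT, NCT, ECT) can be sketched numerically. The chemical potentials and hardness-tensor elements below are purely illustrative (hypothetical) values, not data for any real A—B system; only the algebra follows the equations of this section.

```python
# Sketch of the second-order charge-transfer (CT) energetics of this section,
# using illustrative (hypothetical) chemical potentials and hardness-tensor
# elements of the polarized acid (A) and base (B) reactants, in atomic units.

mu_A, mu_B = -0.30, -0.20                  # chemical potentials of A+ and B+ (assumed)
eta = {("A", "A"): 0.50, ("A", "B"): 0.10,
       ("B", "B"): 0.40, ("B", "A"): 0.10} # hardness matrix of R+ (assumed)

mu_CT = mu_A - mu_B                        # in situ CT gradient (< 0 for B -> A flow)
eta_CT = (eta[("A", "A")] - eta[("A", "B")]) + (eta[("B", "B")] - eta[("B", "A")])
S_CT = 1.0 / eta_CT                        # in situ softness
N_CT = -mu_CT * S_CT                       # optimum (fractional) amount of CT
E_CT = mu_CT * N_CT / 2.0                  # second-order stabilization energy (< 0)

print(f"mu_CT = {mu_CT:.3f}, eta_CT = {eta_CT:.3f}")
print(f"N_CT = {N_CT:.4f} electrons, E_CT = {E_CT:.5f} hartree")
```

With these assumed inputs the fractional transfer NCT ≈ 0.14 electron stabilizes the complex by ECT ≈ −0.007 hartree; any other choice of μ and η values would be processed identically.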

Grand-ensemble principle for thermodynamic equilibrium

The populational derivatives of the average electronic energy or of the resultant gradient-information call for the grand-ensemble representation [1, 2, 85, 86]. Indeed, only the average overall number of electrons ⟨N⟩ens. in the externally open molecular part 〈M(v)〉ens., identified by the system external potential v, of the equilibrium combined (macroscopic) system including the external electron reservoir exhibits a continuous (fractional) spectrum of values, thus justifying the very concept of the populational (⟨N⟩ens.) derivative itself. Here,
$$ \hat{\mathrm{N}}={\sum}_i\mid {\psi}_i\rangle\, {N}_i\, \langle {\psi}_i\mid ={\sum}_i{N}_i\left[{\sum}_j\mid {\psi_j}^i\rangle \langle {\psi_j}^i\mid \right] $$
stands for the particle-number operator in Fock’s space and the density operator
$$ \hat{\mathrm{D}}={\sum}_i{\sum}_j\mid {\psi_j}^i\rangle\, {P_j}^i\, \langle {\psi_j}^i\mid $$
identifies the statistical mixture of the system (pure) states {|ψi〉 ≡ |ψ(Ni)〉 = (ψji, j = 0, 1, …)}, defined for different (integer) numbers of electrons {Ni}, which appear in the ensemble with the external (thermodynamic) probabilities {Pji}. Such populational derivatives are involved in the definitions of the system reactivity criteria, e.g., its chemical potential (negative electronegativity) [74, 75, 85, 86, 87, 88, 89], the chemical hardness (softness) [74, 75, 90], and the Fukui function (FF) [74, 75, 91] descriptors.
These populational derivatives are thus definable only in the mixed electronic states, e.g., those corresponding to thermodynamic equilibria in the externally open molecule 〈M(v)〉ens.. In the grand-ensemble, this state is determined by the equilibrium density operator specified by the corresponding thermodynamic (externally imposed) intensive parameters: the chemical potential μ of the electron reservoir and the absolute temperature T of the heat bath. These intensities ultimately determine the relevant Legendre transform of the ensemble-average energy, the grand potential
$$ \Omega \left[\hat{\mathrm{D}}\right]={\left\langle E\right\rangle}_{ens.}-\mu {\left\langle N\right\rangle}_{ens.}-T\,S\left[\hat{\mathrm{D}}\right], $$
which reaches its minimum at the optimum ensemble probabilities for these thermodynamic conditions, {(Pji)opt. = Pji(μ, T; v)}.
The grand-potential corresponds to replacing the “extensive” state parameters, the particle number ⟨N⟩ens. and the thermodynamic entropy [92],
$$ S\left[\hat{D}\right]=\mathrm{tr}\left(\hat{\mathrm{D}}\hat{\mathrm{S}}\right)=-{k}_{\mathrm{B}}{\sum}_i{\sum}_j{P_j}^i\kern0.28em \ln {P}_j^i, $$
where kB denotes the Boltzmann constant, by their “intensive” conjugates, the chemical potential μ and the absolute temperature T, respectively. The Legendre transform (41) includes these “intensities” as Lagrange multipliers enforcing, at the grand-potential minimum, the constraints of the specified ensemble-average values of the conjugate “extensive” parameters: the overall number of electrons,
$$ {\left\langle N\right\rangle}_{ens.}=\mathrm{tr}\left(\hat{\mathrm{D}}\hat{\mathrm{N}}\right)={\sum}_i{\sum}_j{P_j}^i\,{N}_i, $$
and the average thermodynamic (von Neumann’s [92]) entropy,
$$ {\displaystyle \begin{array}{c}{\left\langle S\right\rangle}_{ens.}=S\left[{\hat{\mathrm{D}}}_{eq.}\right]={\left\langle S\left(\mu, T\right)\right\rangle}_{ens.}\\ {}=-{k}_B{\sum}_i{\sum}_j{P_j}^i\left(\mu, T;v\right)\kern0.28em \ln\ {P_j}^i\left(\mu, T;v\right)\\ {}=S\left[\mu, T;v\right]\equiv S.\end{array}} $$
This allows one to formally identify the (external) “intensive” parameters as partial derivatives of the average energy with respect to the corresponding constraint values:
$$ \mu =\partial {\left\langle E\right\rangle}_{ens.}/\partial {\left\langle N\right\rangle}_{ens.}\kern1em \mathrm{and}\kern1em T=\partial {\left\langle E\right\rangle}_{ens.}/\partial {\left\langle S\right\rangle}_{ens.}. $$
The externally imposed parameters μ and T thus determine the associated optimum probabilities of the (pure) stationary states {|ψji〉 ≡ |ψj[Ni,v]〉}, eigenstates of partial Hamiltonians,
$$ {\displaystyle \begin{array}{l}\hat{\mathrm{H}}\left({N}_i,v\right)\mid {\psi}_j\left[{N}_i,v\right]\left\rangle ={E}_j^i\mid {\psi}_j\left[{N}_i,v\right]\right\rangle, \\ {}{P}_j^i\left(\mu, T;v\right)={\varXi}^{-1}\exp \left[\beta \left(\mu {N}_i-{E}_j^i\right)\right],\end{array}} $$
which define the associated density operator of the (mixed) equilibrium state in the grand-ensemble:
$$ \hat{\mathrm{D}}\left(\mu, T;v\right)={\sum}_i{\sum}_j\mid {\psi}_j^i\rangle\, {P}_j^i\left(\mu, T;v\right)\, \langle {\psi}_j^i\mid \equiv {\hat{\mathrm{D}}}_{eq.} $$
Here, Ξ stands for the grand-ensemble partition-function and β = (kBT)−1. In the limit T → 0 such a mixture of molecular ground-states {|ψi〉 = |ψ[Ni, v]〉} corresponding to integer numbers of electrons {Ni} and energies
$$ {E_j}^i=\left\langle {\psi}_j^i|\hat{\mathrm{H}}\left({N}_i,v\right)|{\psi}_j^i\right\rangle ={E}_j\left[{N}_i,v\right], $$
appearing in the grand-ensemble with probabilities {Pji(μ, T → 0; v)}, represents an externally open molecule 〈M(μ, T → 0; v)〉ens. in these thermodynamic conditions.
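The equilibrium probabilities of Eq. (49) can be sketched for a toy spectrum. The few-level energies, μ, and β below are illustrative assumptions (not data for any molecule); the sketch only shows that the Gibbs-type weights normalize and yield a fractional ensemble-average electron number.

```python
import math

# Sketch of the grand-ensemble equilibrium probabilities,
# P_j^i = exp[beta*(mu*N_i - E_j^i)] / Xi, for an assumed toy spectrum of a
# few states per (integer) electron number N_i. Atomic units; beta = 1/(k_B*T).

levels = {9: [-5.00, -4.60], 10: [-5.40, -5.10], 11: [-5.30, -5.05]}  # {N_i: [E_j^i]} (assumed)
mu, beta = -0.25, 50.0          # reservoir chemical potential and inverse temperature (assumed)

weights = {(N, j): math.exp(beta * (mu * N - E))
           for N, Es in levels.items() for j, E in enumerate(Es)}
Xi = sum(weights.values())      # grand-ensemble partition function
P = {k: w / Xi for k, w in weights.items()}

N_avg = sum(P[(N, j)] * N for N, j in P)   # ensemble-average electron number <N>ens.
assert abs(sum(P.values()) - 1.0) < 1e-12  # probabilities are normalized
print(f"<N>ens. = {N_avg:.5f}")            # generally a fractional number
```

Because the N = 10 ground state dominates at this low temperature while the N = 9 states retain a small weight, ⟨N⟩ens. comes out slightly below 10, illustrating the continuous population spectrum that justifies the populational derivatives.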

Information descriptors of chemical reactivity

The ensemble-average value of the resultant gradient-information, given by the weighted expression in terms of the equilibrium probabilities in this thermodynamic (mixed) state,
$$ {\left\langle I\right\rangle}_{ens.}={\sum}_i{\sum}_j{P_j}^i\,I\left[{\psi_j}^i\right], $$
is related to the ensemble-average kinetic energy ⟨T⟩ens.:
$$ {\left\langle I\right\rangle}_{ens.}=\sigma {\left\langle T\right\rangle}_{ens.},\kern1em \sigma =8m/{\mathrm{\hslash}}^2. $$
Therefore, the thermodynamic rule of Eq. (43), for the minimum of the constrained average electronic energy, can be alternatively interpreted as the corresponding principle for the constrained average content of resultant gradient-information. Here, the ensemble-average value of the system overall potential energy,
$$ {\left\langle W\right\rangle}_{ens.}={\left\langle E\right\rangle}_{ens.}-{\left\langle T\right\rangle}_{ens.}, $$
combines contributions due to the electron–nuclear attraction as well as the electron and nuclear repulsions.
The information principle of Eq. (53) is seen to contain an additional constraint of the fixed overall potential energy, ⟨W⟩ens., multiplied by the corresponding Lagrange multiplier. It also includes the “scaled” intensities associated with the remaining constraints.
It should be stressed that the two conjugate thermodynamic principles, for the constrained minima of the ensemble-average energy and of the overall gradient-information, have the same optimum-probability solutions of Eq. (49). This manifests the physical equivalence of the energetic and entropic principles for determining the equilibrium states in thermodynamics [3].
Several populational derivatives of the ensemble-average electronic energy or resultant gradient-information define useful reactivity criteria [74, 75]. The physical equivalence of the energy and information principles in molecular thermodynamics indicates that such concepts are mutually related, both being capable of describing the CT phenomena in donor–acceptor systems [1, 2]. The ensemble interpretation also applies to the diagonal and mixed second derivatives of the electronic energy, which involve differentiation with respect to the electron-population variable ⟨N⟩ens.. For example, in the electronic energy representation the chemical hardness reflects the populational derivative of the chemical potential,
$$ \eta =\partial \mu /\partial {\left\langle N\right\rangle}_{ens.}, $$
while the information “hardness” measures the populational derivative of the information “potential” ξ:
$$ \partial \xi /\partial {\left\langle N\right\rangle}_{ens.},\kern1em \xi \equiv \partial {\left\langle I\right\rangle}_{ens.}/\partial {\left\langle N\right\rangle}_{ens.}. $$

The positive signs of these “diagonal” populational derivatives assure the external stability of 〈M(v)〉ens. with respect to hypothetical electron flows between the molecular system and its reservoir [74, 75]. Indeed, they imply an increase (a decrease) of the global energetic and information “intensities” (μ and ξ), which are coupled to ⟨N⟩ens., in response to the perturbation created by the primary electron inflow (outflow). This is in accordance with the familiar Le Châtelier and Le Châtelier-Braun principles of thermodynamics [3]: spontaneous responses in the system intensities to the initial population displacements diminish effects of such primary perturbations.

By the Maxwell cross-differentiation relation, the mixed second derivative of the system ensemble-average energy,
$$ f\left(\boldsymbol{r}\right)={\partial}^2{\left\langle E\right\rangle}_{ens.}/\partial {\left\langle N\right\rangle}_{ens.}\partial v\left(\boldsymbol{r}\right)=\partial \rho \left(\boldsymbol{r}\right)/\partial {\left\langle N\right\rangle}_{ens.}=\partial \mu /\partial v\left(\boldsymbol{r}\right), $$
measuring its global FF [91], can be alternatively interpreted as either the density response per unit populational displacement or the response in the global chemical potential per unit displacement of the external potential. The associated mixed derivative of the average resultant gradient-information then reads:
$$ {\partial}^2{\left\langle I\right\rangle}_{ens.}/\partial {\left\langle N\right\rangle}_{ens.}\partial v\left(\boldsymbol{r}\right)=\partial \xi /\partial v\left(\boldsymbol{r}\right). $$

Use of virial-theorem partitioning

It is of interest to examine the ground-state variations of the electronic resultant gradient-information in specific geometrical displacements ΔQ of the molecular or reactive systems. Its proportionality to the system kinetic-energy component calls for using the virial theorem [4] in the BO approximation of molecular QM,
$$ {\displaystyle \begin{array}{l}2\Delta T\left(\boldsymbol{Q}\right)+\Delta W\left(\boldsymbol{Q}\right)+\boldsymbol{Q}\cdot \left[\mathrm{\partial \Delta }E\left(\boldsymbol{Q}\right)/\partial \boldsymbol{Q}\right]=0,\\ {}\Delta E\left(\boldsymbol{Q}\right)=\Delta T\left(\boldsymbol{Q}\right)+\Delta W\left(\boldsymbol{Q}\right),\end{array}} $$
which allows one to extract changes in the kinetic, ΔT(Q), and potential, ΔW(Q), components of the overall electronic energy ΔE(Q) for the system current geometrical structure Q:
$$ {\displaystyle \begin{array}{l}\Delta T\left(\boldsymbol{Q}\right)=-\Delta E\left(\boldsymbol{Q}\right)-\boldsymbol{Q}\cdot \left[\mathrm{\partial \Delta }E\left(\boldsymbol{Q}\right)/\partial \boldsymbol{Q}\right]\ \mathrm{and}\\ {}\Delta W\left(\boldsymbol{Q}\right)=2\Delta E\left(\boldsymbol{Q}\right)+\boldsymbol{Q}\cdot \left[\mathrm{\partial \Delta }E\left(\boldsymbol{Q}\right)/\partial \boldsymbol{Q}\right].\end{array}} $$
These virial relations assume a particularly simple form in diatomics, for which the internuclear distance R uniquely specifies the molecular geometry,
$$ {\displaystyle \begin{array}{l}\Delta E(R)=\Delta T(R)+\Delta W(R),\\ {}2\Delta T(R)+\Delta W(R)+R\left[d\Delta E(R)/ dR\right]=0,\\ {}\mathrm{or}\\ {}\Delta T(R)=-\Delta E(R)-R\left[d\Delta E(R)/ dR\right]=-d\left[R\Delta E(R)\right]/ dR\kern0.28em \mathrm{and}\\ {}\Delta W(R)=2\Delta E(R)+R\left[d\Delta E(R)/ dR\right]={R}^{-1}d\left[{R}^2\Delta E(R)\right]/ dR.\end{array}} $$
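These diatomic virial relations can be verified numerically on an assumed Born–Oppenheimer curve. The Morse-type profile and its parameters De, a, Re below are illustrative choices, not data for a real molecule; the sketch only checks that the extraction formulas for ΔT(R) and ΔW(R) are mutually consistent.

```python
import math

# Numerical check of the diatomic virial relations, using an assumed Morse-type
# BO profile dE(R) = De*[(1 - exp(-a*(R - Re)))**2 - 1], measured from the
# separated-atom limit; De, a, Re are illustrative parameters (atomic units).

De, a, Re = 0.17, 1.0, 1.4       # well depth, range parameter, equilibrium distance

def dE(R):
    return De * ((1.0 - math.exp(-a * (R - Re)))**2 - 1.0)

def deriv(f, R, h=1e-6):         # central finite difference
    return (f(R + h) - f(R - h)) / (2.0 * h)

def dT(R):                       # dT(R) = -d[R*dE(R)]/dR
    return -deriv(lambda r: r * dE(r), R)

def dW(R):                       # dW(R) = R**-1 * d[R**2*dE(R)]/dR
    return deriv(lambda r: r * r * dE(r), R) / R

R = 2.0
# the virial sum 2*dT + dW + R*d(dE)/dR must vanish, and dT + dW must equal dE
assert abs(2 * dT(R) + dW(R) + R * deriv(dE, R)) < 1e-6
assert abs(dT(R) + dW(R) - dE(R)) < 1e-6
# at R = Re the force vanishes, so dT(Re) = -dE(Re) > 0, as discussed below
assert abs(dT(Re) + dE(Re)) < 1e-6 and dT(Re) > 0
print(f"dE({R}) = {dE(R):.5f}, dT({R}) = {dT(R):.5f}, dW({R}) = {dW(R):.5f}")
```

The last assertion reproduces the text’s observation that at the equilibrium separation, where the gradient term vanishes, the kinetic-energy (and hence resultant gradient-information) change rises above the separated-atom limit.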
Figure 1 presents qualitative plots, for a diatomic molecule, of the BO potential ΔE(R) and of its kinetic-energy component ΔT(R), which also reflects the ground-state resultant gradient-information ΔI(R). It follows from the figure that during the mutual approach of both atoms the kinetic energy/gradient information is first diminished relative to the separated-atom limit (SAL), due to the longitudinal Cartesian component of the kinetic energy associated with the “z” direction along the bond axis [93, 94, 95, 96]. Near the equilibrium distance Re the resultant information rises above the SAL value, due to an increase in the transverse components of the kinetic energy, corresponding to the “x” and “y” directions perpendicular to the bond axis. Therefore, at the equilibrium separation Re, for which ΔT(Re) = − ΔE(Re), the bond formation results in a net increase of the molecular resultant gradient-information relative to SAL, due to the generally more compact electron distribution in the field of both nuclei.
Fig. 1

Qualitative diagram of variations in electronic energy ΔE(R) (solid line) with the internuclear distance R in a diatomic molecule, and of its kinetic energy component ΔT(R) = −d/dR[RΔE(R)] (broken line) reflecting also the state resultant gradient-information ΔI(R) ∝ ΔT(R)

Another interesting case of variations in molecular geometry is the (intrinsic) reaction coordinate Rc, or the associated progress variable P measuring the arc length along this trajectory, for which the virial relations also assume the diatomic-like form [4]. Let us examine the virial-theorem decomposition of the energy profile along Rc in a typical bimolecular reaction
$$ \mathrm{A}+\mathrm{B}\to {\mathrm{R}}^{\ddagger}\to \mathrm{C}+\mathrm{D}, $$
where R‡ denotes the transition-state (TS) complex, to which the qualitative Hammond postulate [81] of the chemical reactivity theory applies (see Fig. 2). The virial-theorem application to extract qualitative plots of the resultant gradient-information from the energy profiles in the endo- and exo-ergic reactions (upper panel), and in the energy-neutral chemical processes on symmetric potential energy surfaces (PES) (lower panel), has been reported elsewhere [1, 2, 4]. These analyses have shown that this qualitative rule of chemical reactivity is fully explained by the sign of the P-derivative of the overall gradient-information measure at the TS complex.
Fig. 2

Variations of the electronic total (E) and kinetic (T) energies in the exo-ergic (ΔEr < 0) and endo-ergic (ΔEr > 0) reactions (upper panel). The lower panel provides qualitative plots for the symmetrical PES (ΔEr = 0)

The qualitative Hammond postulate emphasizes a relative resemblance of the TS complex R‡ to the reaction substrates (products) in the exo-ergic (endo-ergic) reactions, while for a vanishing reaction energy the TS complex is predicted to be located symmetrically between substrates and products. The activation barrier thus appears “early” in exo-ergic reactions, e.g., H2 + F → H + HF, with the reaction substrates being only slightly modified in the TS, R‡ ≈ [A---B], both electronically and geometrically. Accordingly, in an endo-ergic bond-breaking/bond-forming process, e.g., H + HF → H2 + F, the barrier is “late” along the reaction-progress coordinate P and the activated complex more closely resembles the reaction products: R‡ ≈ [C---D]. This qualitative statement has subsequently been given several more quantitative formulations and theoretical explanations using both energetic and entropic arguments [20, 97, 98, 99, 100, 101, 102, 103].

The energy profile along the reaction “progress” coordinate P, ΔE(P) = E(P) − E(Psub.), is directly “translated” by the virial theorem into the associated displacement ΔT(P) = T(P) − T(Psub.) in its kinetic-energy contribution, proportional to the corresponding change ΔI(P) = I(P) − I(Psub.) in the system resultant gradient-information, ΔI(P) = σ ΔT(P):
$$ \Delta T(P)=-\Delta E(P)-P\ \left[d\Delta E(P)/ dP\right]=-d\left[P\Delta E(P)\right]/ dP $$
The energy profile ΔE(P) in the endo- or exo-direction, for the positive and negative reaction energy ΔEr = E(Pprod.) - E(Psub.), respectively, thus uniquely determines the associated profiles of the kinetic-energy or resultant-information: ΔT(P) ∝ ΔI(P). A reference to qualitative plots in Fig. 2 shows that the latter distinguishes these two directions by the sign of its derivative at TS:
$$ {\displaystyle \begin{array}{l} endo- direction:{\left( dI/ dP\right)}_{\ddagger }>0\ and\ {\left( dT/ dP\right)}_{\ddagger }>0,\varDelta {E}_r>0;\\ {} energy- neutral:{\left( dI/ dP\right)}_{\ddagger }=0\ and\ {\left( dT/ dP\right)}_{\ddagger }=0,\varDelta {E}_r=0;\\ {} exo- direction:{\left( dI/ dP\right)}_{\ddagger }<0\ and\ {\left( dT/ dP\right)}_{\ddagger }<0,\varDelta {E}_r<0.\end{array}} $$

This observation demonstrates that the reaction-coordinate derivative of the resultant gradient-information at the TS complex, dI/dP|‡, proportional to dT/dP|‡, can serve as an alternative detector of the reaction energetic character: its positive/negative values respectively identify the endo-/exo-ergic reactions exhibiting the late/early activation barriers, with the neutral case (ΔEr = 0 or dT/dP|‡ = 0) exhibiting an equidistant position of the TS between the reaction substrates and products on a symmetrical potential energy surface, e.g., in the hydrogen exchange reaction H + H2 → H2 + H.
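This sign criterion can be checked numerically on an assumed analytic energy profile. The sketch below places P on [−1, 1] with the profile midpoint at P = 0, so that the TS of a symmetric PES sits at P = 0; the function `profile` and its parameters dEr (reaction energy) and Eb (barrier scale) are illustrative choices under this centering assumption, not a real PES.

```python
import math

# Numerical illustration of the Hammond-type detector dT/dP at the TS, for an
# assumed smooth profile on P in [-1, 1] (P = 0 at the midpoint, where a
# symmetric-PES TS would sit); dE(-1) = 0 (substrates), dE(+1) = dEr (products).

def profile(dEr, Eb=0.10):
    def dE(P):
        u = 0.5 * math.pi * P
        return 0.5 * dEr * (1.0 + math.sin(u)) + Eb * math.cos(u)**2
    return dE

def dT_slope_at_TS(dE, h=1e-5):
    # locate the TS (energy maximum) on a fine grid of P values
    P_ts = max((i / 10000.0 - 1.0 for i in range(20001)), key=dE)
    # dT(P) = -dE(P) - P*d(dE)/dP, the diatomic-like virial relation along P
    dT = lambda P: -dE(P) - P * (dE(P + h) - dE(P - h)) / (2 * h)
    return P_ts, (dT(P_ts + h) - dT(P_ts - h)) / (2 * h)

for dEr, label in ((+0.05, "endo"), (0.0, "neutral"), (-0.05, "exo")):
    P_ts, slope = dT_slope_at_TS(profile(dEr))
    print(f"{label:8s} dEr={dEr:+.2f}  P_ts={P_ts:+.3f}  dT/dP|TS={slope:+.4f}")
```

Under these assumptions the endo-ergic profile gives a late TS (P‡ > 0) with dT/dP|‡ > 0, the exo-ergic mirror image gives the opposite signs, and the symmetric case gives both zero, reproducing the classification above.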

The reaction energy ΔEr determines the corresponding change in the resultant gradient-information, ΔIr = I(Pprod.) - I(Psub.), proportional to ΔTr = T(Pprod.) - T(Psub.) = -ΔEr. The virial theorem thus implies a net decrease of the resultant gradient-information in endo-ergic processes, ΔIr(endo) < 0, its increase in exo-ergic reactions, ΔIr(exo) > 0, and a conservation of the overall gradient-information in energy-neutral chemical rearrangements, ΔIr(neutral) = 0. One also recalls that the classical part of this information displacement probes the average spatial inhomogeneity of the electronic density. Therefore, the endo-ergic processes, requiring a net supply of energy to R, give rise to relatively less compact electron distributions in the reaction products, compared with the substrates. Accordingly, the exo-ergic transitions, which release a net amount of energy from R, generate relatively more concentrated electron distributions in products, compared with substrates, and no such average change is predicted for the energy-neutral case.

Regional HSAB versus complementary coordinations

Some subtle preferences in chemical reactivity result from the induced (polarizational or relaxational) electron flows in reactive systems, reflecting responses to the primary or induced displacements in the electronic structure of the reaction complex, e.g., [16, 104]. Such flow patterns can be diagnosed, estimated, and compared by using either the energetic or the information reactivity criteria defined above. One such still-problematic issue is the optimum mutual arrangement of the acidic and basic parts of molecular reactants in donor–acceptor systems [16, 104, 105].

Consider the reactive complex A—B consisting of the basic reactant B = (aB|…|bB) ≡ (aB|bB) and the acidic substrate A = (aA|…|bA) ≡ (aA|bA), where aX and bX denote the acidic and basic parts of subsystem X, respectively. The acidic (electron acceptor) part is relatively harder, i.e., less responsive to external perturbation, exhibiting lower values of the fragment FF descriptor, while the basic (electron donor) fragment is relatively softer, more polarizable, as reflected by its higher density/population responses. The acidic part aX exerts an electron-accepting (stabilizing) influence on the neighboring part of the other reactant Y, while the basic fragment bX produces an electron-donor (destabilizing) effect on a fragment of Y in its vicinity.

There are two ways in which both reactants can mutually coordinate in the corresponding reactive complexes [16, 104, 105]. In the complementary (c) arrangement of Fig. 3,
$$ {\mathrm{R}}_c\equiv \left[\begin{array}{c}{a}_{\mathrm{A}}-{b}_{\mathrm{B}}\\ {}{b}_{\mathrm{A}}-{a}_{\mathrm{B}}\end{array}\right] $$
the reactants orient themselves in such a way that the geometrically accessible a-fragment of one reactant faces the geometrically accessible b-fragment of the other substrate. This pattern follows from the maximum complementarity (MC) rule [104] of chemical reactivity, which reflects the simple electrostatic preference that the electron-rich (repulsive, basic) fragment of one reactant prefers to face the electron-deficient (attractive, acidic) part of the reaction partner. In the alternative regional HSAB-type structure of Fig. 4, the acidic (basic) fragment of one reactant faces the like-fragment of the other substrate:
$$ {\mathrm{R}}_{\mathrm{HSAB}}\equiv \left[\begin{array}{c}{a}_{\mathrm{A}}-{a}_{\mathrm{B}}\\ {}{b}_{\mathrm{A}}-{b}_{\mathrm{B}}\end{array}\right] $$
Fig. 3

Polarizational {Pα = (aα → bα)}, (α, β) ∈ {A, B}, and charge-transfer, CT1 = (bB → aA) and CT2 = (bA → aB), electron flows involving the acidic A = (aA|bA) and basic B = (aB|bB) reactants in the complementary arrangement Rc of their acidic (a) and basic (b) parts, with the chemically “hard” (acidic) fragment of one substrate facing the chemically “soft” (basic) fragment of its reaction partner. The polarizational flows {Pα} (black arrows) in the mutually closed substrates, relative to the substrate “promolecular” references, preserve the overall numbers of electrons in isolated reactants {α0}, while the two partial CT fluxes (white arrows), from the basic fragment of one reactant to the acidic part of the other reactant, generate a substantial resultant B → A transfer of NCT = CT1 - CT2 electrons between the mutually open reactants. These electron flows in the “complementary complex” are seen to produce an effective concerted (“circular”) flux of electrons between the four fragments invoked in this regional “functional” partition, which precludes an exaggerated depletion or concentration of electrons on any fragment of this reactive system

Fig. 4

Polarizational {Pα = (bα → aα)}, (α, β) ∈ {A, B}, and charge-transfer, CT1 = (bB → bA) and CT2 = (aB → aA), electron flows involving the acidic A = (aA|bA) and basic B = (aB|bB) reactants in the HSAB complex RHSAB, in which the chemically hard (acidic) and soft (basic) fragments of one reactant coordinate to the like-fragments of the other substrate. The two partial CT fluxes (white arrows) now generate a moderate overall B → A transfer of NCT = CT1 + CT2 electrons between the mutually open reactants. These electron flows in the regional-HSAB complex are seen to produce a nonconcerted pattern of four elementary fluxes, producing an exaggerated outflow of electrons from bB and their accentuated inflow to aA. This electron removal/accumulation pattern of the charge reconstruction is predicted to be energetically less favorable compared with the concerted-flow model of Fig. 3

The complementary complex, in which the “excessive” electrons of bX are in the attractive field generated by the electron “deficiency” of aY, is expected to be electrostatically preferred since the other arrangement produces the regional repulsion either between two acidic or two basic sites of both reactants.

An additional rationale for this complementary preference over the regional HSAB alignment of reactants comes from examining the charge flows created by the dominating shifts in the site chemical potential due to the presence of the (“frozen”) coordinated site of the nearby part of the reaction partner. At finite separations between the two subsystems, these displacements trigger the polarizational flows {PX} shown in Figs. 3 and 4, which restore the internal equilibria in both subsystems, initially displaced by the presence of the other reactant.

In Rc, the harder (acidic) site aY initially lowers the chemical potential of the softer (basic) site bX, while bY raises the chemical potential level of aX. These shifts trigger the internal (polarizational) flows {aX → bX}, which enhance the acceptor capacity of aX and the donor ability of bX, thus creating more favorable conditions for the subsequent inter-reactant CT of Fig. 3. A similar analysis of RHSAB (Fig. 4) predicts the bX → aX polarizational flows, which lower the acceptor capacity of aX and the donor ability of bX, i.e., produce the electron accumulation on aX and the electron depletion on bX, thus creating less favorable conditions for the subsequent inter-reactant CT.

The complementary preference also follows from electronic stability considerations, in the spirit of the familiar Le Châtelier-Braun principle of ordinary thermodynamics [3]. In contrast to the analysis of Figs. 3 and 4, where the CT responses follow the internal polarizations of reactants, i.e., the equilibrium responses to displacements {ΔvX = vY} in the external potential on subsystems, one now assumes the primary (inter-reactant) CT displacements {ΔCT1, ΔCT2} of Figs. 3 and 4, in the internally closed but externally open reactants, and then examines the induced (secondary) relaxational responses {IX} to these perturbations.

Let us first examine the CT-displaced complementary complex Rc of Fig. 3, defined by the initial populational shifts:
$$ \left[\Delta \left({\mathrm{CT}}_1\right)=\Delta N\left({a}_{\mathrm{A}}\right)=-\Delta N\left({b}_{\mathrm{B}}\right)\right]>\left[\Delta \left({\mathrm{CT}}_2\right)=\Delta N\left({a}_{\mathrm{B}}\right)=-\Delta N\left({b}_{\mathrm{A}}\right)\right]. $$
In accordance with the Le Châtelier stability principle [3], an inflow (outflow) of electrons to (from) a given site c increases (decreases) the site chemical potential, as indeed reflected by the positive value of the site (diagonal) hardness descriptor
$$ {\eta}_{c,c}=\partial {\mu}_c/\partial {N}_c\equiv {\eta}_c>0. $$
The initial CT flows {ΔCTk} thus create the following shifts in the site chemical potentials, compared to the equalized levels in isolated reactants A0 = (aA0¦bA0) and B0 = (aB0¦bB0),
$$ {\displaystyle \begin{array}{l}\left[\varDelta {\mu}_{a_{\mathrm{A}}}\left({\mathrm{CT}}_1\right)>0\right]>\left[\varDelta {\mu}_{b_{\mathrm{A}}}\left({\mathrm{CT}}_2\right)<0\right]\;\mathrm{and}\\ {}\left[\varDelta {\mu}_{a_{\mathrm{B}}}\left({\mathrm{CT}}_2\right)>0\right]>\left[\varDelta {\mu}_{b_{\mathrm{B}}}\left({\mathrm{CT}}_1\right)<0\right]\end{array}}. $$
These CT-induced shifts in the fragment electronegativities thus trigger the following secondary, induced flows {IX} in RcCT:
$$ {a}_{\mathrm{A}}\overset{{\mathrm{I}}_{\mathrm{A}}}{\to }{b}_{\mathrm{A}}\kern0.53em \mathrm{and}\ \kern0.28em {a}_{\mathrm{B}}\overset{{\mathrm{I}}_{\mathrm{B}}}{\to }{b}_{\mathrm{B}} $$
which diminish effects of the initial CT perturbations by reducing the extra charge accumulations/depletions created by these primary CT displacements.
In the CT-displaced HSAB complex RHSAB of Fig. 4, the primary CT perturbations,
$$ \left[\Delta \left({\mathrm{CT}}_1\right)=\Delta N\left({b}_{\mathrm{A}}\right)=-\Delta N\left({b}_{\mathrm{B}}\right)\right]<\left[\Delta \left({\mathrm{CT}}_2\right)=\Delta N\left({a}_{\mathrm{A}}\right)=-\Delta N\left({a}_{\mathrm{B}}\right)\right], $$
where the inequality sign reflects magnitudes of the associated in situ chemical potentials,
$$ \left[|\mu \left({\mathrm{CT}}_1\right)|=\mu \left({b}_{\mathrm{B}}\right)-\mu \left({b}_{\mathrm{A}}\right)\right]<\left[|\mu \left({\mathrm{CT}}_2\right)|=\mu \left({a}_{\mathrm{B}}\right)-\mu \left({a}_{\mathrm{A}}\right)\right], $$
induce the internal relaxations in reactants,
$$ {a}_{\mathrm{A}}\overset{{\mathrm{I}}_{\mathrm{A}}}{\to }{b}_{\mathrm{A}}\kern0.53em \mathrm{and}\ \kern0.28em {b}_{\mathrm{B}}\overset{{\mathrm{I}}_{\mathrm{B}}}{\to }{a}_{\mathrm{B}}, $$
which further exaggerate the charge depletions/accumulations created by the primary perturbation, thus giving rise to a less stable reactive complex compared to RcCT.
The global CT equilibrium in the reactive complex as a whole is reached when reactants are both internally and mutually open, as a result of the hypothetical barrier for inter-subsystem flows of electrons being lifted,
R = (A*¦B*) = (aA*¦bA*¦aB*¦bB*) = (iA*¦iA′*¦ … ¦jB*¦jB′*¦ … ),
as symbolized by the broken vertical lines separating the two reactants and their constituent parts. This global equilibrium marks the chemical potential equalization throughout R:
$$ {\displaystyle \begin{array}{l}{\mu_{\mathrm{A}}}^{\ast }=\partial E\left(\mathrm{R}\right)/\partial {N_{\mathrm{A}}}^{\ast }=\left\{{\mu_i}^{\ast}\left(\mathrm{R}\right)=\partial E\left(\mathrm{R}\right)/\partial {N_i}^{\ast}\right\}\\ {}={\mu_{\mathrm{B}}}^{\ast }=\partial E\left(\mathrm{R}\right)/\partial {N_{\mathrm{B}}}^{\ast }=\left\{{\mu_j}^{\ast}\left(\mathrm{R}\right)=\partial E\left(\mathrm{R}\right)/\partial {N_j}^{\ast}\right\}\\ {}={\mu}_{\mathrm{R}}=\partial E\left(\mathrm{R}\right)/\partial {N}_{\mathrm{R}}.\end{array}} $$
The final electron densities {ρX*} of reactants, marking the equilibrium distributions in the reactive system as a whole, then account for the extra, CT-induced substrate polarizations {ΔρXCT = ρX* - ρX+} in the resultant displacements {ΔρX* = ρX* - ρX0 = ΔρX+ + ΔρXCT} relative to isolated reactants {X0}. They integrate to fractional, CT-displaced changes in electron populations of reactants {ΔNX* = NX* - NX0}. Here, ΔρX+ = ρX+ − ρX0 stands for the density shift of X+ due to the substrate internal polarization, in the polarized complex R+ = (A+|B+) consisting of the internally open but externally closed reactants.
The two partial CT-responses in Rc generate the overall CT between reactants,
$$ {\displaystyle \begin{array}{l}{N}_{\mathrm{CT}}=\mathrm{C}{\mathrm{T}}_1-\mathrm{C}{\mathrm{T}}_2={N_{\mathrm{A}}}^{\ast }-{N_{\mathrm{A}}}^0\equiv \Delta {N}_{\mathrm{A}}\\ {}={N_{\mathrm{B}}}^0-{N_{\mathrm{B}}}^{\ast}\equiv -\Delta {N}_{\mathrm{B}}>0,\end{array}} $$
where {NX* = NX0 + ΔNX} stand for the equilibrium electron populations in the molecular complex R = (A*¦B*) = (aA*¦bA*¦aB*¦bB*) consisting of the mutually open reactants {X*} and their constituent parts {aX* and bX*}. Its magnitude is determined by the in situ chemical potential of this reactive system, measured by the difference of the chemical potentials of the mutually closed (internally polarized) acidic and basic reactants in R+ = (A+|B+) = (aA+¦bA+|aB+¦bB+) [see Eq. (36)]:
$$ \Delta {\mu_{\mathrm{R}}}^{+}={\mu_{\mathrm{A}}}^{+}-{\mu_{\mathrm{B}}}^{+}\equiv {\mu}_{\mathrm{CT}}<0, $$
and the reactant-resolved hardness tensor of the polarized reactive system R+,
$$ {{\boldsymbol{\upeta}}_{\mathrm{R}}}^{+}=\left\{{\eta}_{\mathrm{X},\mathrm{Y}}=\partial {\mu}_{\mathrm{X}}/\partial {N}_{\mathrm{Y}};\kern1.0em \mathrm{X},\mathrm{Y}\in \left(\mathrm{A},\mathrm{B}\right)\right\}, $$
which generates the in situ hardness (ηCT) and softness (SCT) descriptors [74, 75] for this CT process [see Eq. (37)]:
$$ {\eta}_{\mathrm{CT}}=\partial {\mu}_{\mathrm{CT}}/\partial {N}_{\mathrm{CT}}=\left({\eta}_{\mathrm{A},\mathrm{A}}-{\eta}_{\mathrm{A},\mathrm{B}}\right)+\left({\eta}_{\mathrm{B},\mathrm{B}}-{\eta}_{\mathrm{B},\mathrm{A}}\right)\equiv {\eta_{\mathrm{A}}}^{+}+{\eta_{\mathrm{B}}}^{+}={S_{\mathrm{CT}}}^{-1}>0. $$
The resultant CT,
$$ {N}_{\mathrm{CT}}=-{\mu}_{\mathrm{CT}}/{\eta}_{\mathrm{CT}}=-{\mu}_{\mathrm{CT}}\ {S}_{\mathrm{CT}}>0, $$
then generates the associated (second–order) CT stabilization energy of Eq. (39):
$$ {E}_{\mathrm{CT}}={\mu}_{\mathrm{CT}}\ {N}_{\mathrm{CT}}/2=-{\left({\mu}_{\mathrm{CT}}\right)}^2\ {S}_{\mathrm{CT}}/2<0. $$

Frontier-electron and communication outlooks on HSAB principle

The physical equivalence of reactivity concepts formulated in the energy and resultant gradient-information representations also has direct implications [1, 2] for the communication theory of the chemical bond (CTCB) [13, 14, 15, 16, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37]. In OCT, the orbital realization of this theory, one treats a molecule as an information channel propagating signals of the AO origins of electrons through the bond system determined by the system’s occupied MO. It has been argued elsewhere [14, 15, 16] that elements of the CBO matrix γ = {γk,l} [Eq. (25)], the weighting factors in the expression of Eq. (23) for the average resultant gradient-information, determine the amplitudes of conditional probabilities defining the direct communications between AO. Entropic descriptors of this channel then generate the information bond orders and their covalent/ionic components, which ultimately facilitate an IT understanding of the molecular electronic structure in chemical terms.

The communication noise (orbital indeterminacy) in this network, measured by the channel conditional entropy, is due to the electron delocalization in the bond system of a molecule; it represents the overall bond “covalency”. The channel information capacity (orbital determinacy), reflected by the mutual information of this communication network, measures the resultant bond “ionicity”. Therefore, the more scattering (noisy) the molecular information system, the higher its covalent character; accordingly, a less noisy (more deterministic) channel represents a more ionic molecular system [13, 14, 15, 16, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37].
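These covalency/ionicity measures can be made concrete for a minimal two-AO channel. A sketch under simplifying assumptions (the symmetric binary channels and their probabilities are illustrative, not taken from the OCT development above):

```python
import math

def channel_descriptors(p_in, P_cond):
    """Conditional entropy S (bond covalency) and mutual information I (bond
    ionicity) of a discrete channel: p_in[i] are input probabilities and
    P_cond[i][j] = P(output j | input i)."""
    n_out = len(P_cond[0])
    p_joint = [[p_in[i] * P_cond[i][j] for j in range(n_out)]
               for i in range(len(p_in))]
    p_out = [sum(row[j] for row in p_joint) for j in range(n_out)]
    S = sum(-p_joint[i][j] * math.log2(P_cond[i][j])
            for i in range(len(p_in)) for j in range(n_out)
            if p_joint[i][j] > 0)
    H_out = sum(-q * math.log2(q) for q in p_out if q > 0)
    return S, H_out - S            # I = H(output) - S(output|input)

# Maximally noisy ("covalent") vs. deterministic ("ionic") two-AO channels:
S_cov, I_cov = channel_descriptors([0.5, 0.5], [[0.5, 0.5], [0.5, 0.5]])
S_ion, I_ion = channel_descriptors([0.5, 0.5], [[1.0, 0.0], [0.0, 1.0]])
print(S_cov, I_cov)   # noisy channel: S = 1 bit, I = 0 (purely "covalent")
print(S_ion, I_ion)   # deterministic channel: S = 0, I = 1 bit (purely "ionic")
```

The fully scattering channel carries maximal conditional entropy (covalency) and zero mutual information, while the deterministic channel reverses these roles, as the text asserts.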

In chemistry the bond covalency, a common possession of electrons by the interacting atoms, is indeed synonymous with the electron delocalization generating the communication noise. A classical example is provided by bonds connecting identical atoms, e.g., the hydrogens in H2 or the carbons in ethylene, for which the interacting AO exhibit identical energy levels in the familiar MO diagrams of chemistry. The bond ionicity accompanies large differences in atomic electronegativities, which generate a substantial CT; such bonds correspond to a wide separation of the interacting AO energies in these MO diagrams. The ionic bond component introduces more determinacy (less noise) into the molecular AO communications, thus representing a bond mechanism competitive with bond covalency [29, 106, 107, 108, 109, 110].

One of the celebrated (qualitative) rules of chemistry deals with stability preferences in molecular coordinations. The HSAB principle of Pearson [82] predicts that chemically hard (H) acids (A) prefer to coordinate hard bases (B) in the [HH]-complexes, and soft (S) acids prefer to coordinate soft bases in [SS]-complexes, whereas the “mixed” [HS]- or [SH]-complexes, of hard acids with soft bases or of soft acids with hard bases, respectively, are relatively unstable [83, 90]. As we have emphasized in the preceding section, this global preference is no longer valid regionally, between fragments of reactants, where the complementarity principle [104, 105] dictates the preferred arrangement between the acidic and basic sites of both reactants.

Little is known about the communication implications of the HSAB principle [1, 2]. The following questions arise in the reactivity context:
  • How does the [HH] or [SS] preference shape the intra- and inter-reactant communications in the whole reactive complex?

  • How is the H or S character of a substrate reflected by its internal communications?

  • How does the HSAB preference influence the inter-reactant propagations of information?

In the communication perspective on reactive systems [16, 111], the H and S reactants correspond to the internally ionic (deterministic) and covalent (noisy) reactant channels, respectively. The former involves localized orbital communications between chemically bonded atoms, while the latter corresponds to strongly delocalized information scattering among the AO basis states. A natural question then arises: what is the overall character of the communications responsible for the mutual interaction between reactants? Do the S-substrates in the [SS]-complex predominantly interact “covalently”, and the H-substrates of the [HH]-complex “ionically”?

In the frontier-electron (FE) approach [112, 113, 114] to molecular interactions and CT phenomena, the orbital energy of the substrate highest occupied MO (HOMO) determines its donor (basic) level of the chemical potential, while the lowest unoccupied MO (LUMO) energy establishes its acceptor (acidic) capacity (see Fig. 5). The HOMO-LUMO energy gap then reflects the molecular hardness. One also recalls that the interaction between reactant MO of comparable orbital energies is predominantly covalent (chemically “soft”) in character, while that between subsystem MO of distinctly different energies becomes mostly ionic (chemically “hard”). The qualitative diagram of Fig. 5 summarizes the alternative positions of the donor (HOMO) level of the basic reactant relative to the acceptor (LUMO) level of its acidic partner, for all admissible hardness combinations in the R = A---B reactive system. In view of the proportionality relations between the energetic and information reactivity criteria, these relative MO energy levels also reflect the corresponding information potential and hardness quantities of the subsystems, including the in situ derivatives driving the information transfer between reactants.
Fig. 5

Schematic diagram of the in situ chemical potentials μCT(B → A) ≡ μR(B → A), determining the effective internal CT from the basic (B) reactant to its acidic (A) partner in R = [A----B] complex, for their alternative hard (H) and soft (S) combinations. The subsystem hardnesses, measured by the HOMO-LUMO gaps in their MO energies, are also indicated

The magnitude of the ionic (CT) stabilization energy in A---B systems is then determined by the corresponding in situ populational derivatives in R,
$$ \Delta {\varepsilon}_{ion.}=\mid {E}_{\mathrm{CT}}\mid ={\mu_{\mathrm{CT}}}^2/\left(2{\eta}_{\mathrm{CT}}\right), $$
where μCT and ηCT stand for the effective chemical potential and hardness descriptors of R involving the relevant FE of reactants. Since the donor/acceptor properties of reactants are already implied by their (known) relative acidic or basic character, one applies the biased estimate of the CT chemical potential.
In this FE approximation the chemical potential difference μCT for the effective internal B → A CT thus reads (see Fig. 5):
$$ {\displaystyle \begin{array}{c}{\mu}_{\mathrm{CT}}\left(\mathrm{B}\to \mathrm{A}\right)={\mu_{\mathrm{A}}}^{\left(-\right)}-{\mu_{\mathrm{B}}}^{\left(+\right)}\\ {}={\varepsilon}_{\mathrm{A}}\left(\mathrm{LUMO}\right)-{\varepsilon}_{\mathrm{B}}\left(\mathrm{HOMO}\right)\approx {I}_{\mathrm{B}}-{A}_{\mathrm{A}}>0.\end{array}} $$
It determines the associated first-order energy change for this electron-transfer process:
$$ \Delta {E}_{\mathrm{B}\to \mathrm{A}}\left({N}_{\mathrm{CT}}\right)={\mu}_{\mathrm{CT}}\left(\mathrm{B}\to \mathrm{A}\right)\ {N}_{\mathrm{CT}}<0. $$
The CT chemical potential of Eq. (89) combines the electron-removal potential of the basic reactant, i.e., the negative of its ionization potential IB = E(B+1) - E(B0) > 0,
$$ {\mu_{\mathrm{B}}}^{\left(+\right)}={\varepsilon}_{\mathrm{B}}\left(\mathrm{HOMO}\right)\approx -{I}_{\mathrm{B}}, $$
and the electron-insertion potential of the acidic substrate, i.e., the negative of its electron affinity AA = E(A0) - E(A−1) > 0,
$$ {\mu_{\mathrm{A}}}^{\left(-\right)}={\varepsilon}_{\mathrm{A}}\left(\mathrm{LUMO}\right)\approx -{A}_{\mathrm{A}}. $$
The energy of the CT disproportionation process,
$$ {\displaystyle \begin{array}{c}\left[\mathrm{A}---\mathrm{B}\right]+\left[\mathrm{A}---\mathrm{B}\right]\kern1.50em \\ {}\to \left[{\mathrm{A}}^{-1}---{\mathrm{B}}^{+1}\right]+\left[{\mathrm{A}}^{+1}---{\mathrm{B}}^{-1}\right],\end{array}} $$
then generates the (unbiased) finite-difference measure of the effective hardness descriptor for this implicit CT [48, 74, 75]:
$$ {\displaystyle \begin{array}{l}{\eta}_{\mathrm{CT}}=\left({I}_{\mathrm{A}}-{A}_{\mathrm{A}}\right)+\left({I}_{\mathrm{B}}-{A}_{\mathrm{B}}\right)\\ {}\approx \left[{\varepsilon}_{\mathrm{A}}\left(\mathrm{LUMO}\right)-{\varepsilon}_{\mathrm{A}}\left(\mathrm{HOMO}\right)\right]+\left[{\varepsilon}_{\mathrm{B}}\left(\mathrm{LUMO}\right)-{\varepsilon}_{\mathrm{B}}\left(\mathrm{HOMO}\right)\right]\\ {}={\eta}_{\mathrm{A}}+{\eta}_{\mathrm{B}}>0.\end{array}} $$
These in situ populational derivatives ultimately determine the magnitude of the CT stabilization energy of Eq. (88), the ionic part of the overall interaction energy,
$$ \Delta {\varepsilon}_{ion.}={\mu_{\mathrm{CT}}}^2/\left(2{\eta}_{\mathrm{CT}}\right)={\left[{\varepsilon}_{\mathrm{A}}\left(\mathrm{LUMO}\right)-{\varepsilon}_{\mathrm{B}}\left(\mathrm{HOMO}\right)\right]}^2/\left[2\left({\eta}_{\mathrm{A}}+{\eta}_{\mathrm{B}}\right)\right]. $$
In the FE framework of Fig. 5, the CT (ionic) interaction energy is thus proportional to the squared gap between the LUMO orbital energy of the acidic reactant and the HOMO level of the basic substrate. This ionic interaction is thus predicted to be strongest in the [HH] pair of subsystems and weakest in the [SS]-arrangement, with the mixed [HS]- and [SH]-combinations representing the intermediate magnitudes of the ionic-stabilization effect [83].
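This trend, read off Fig. 5, can be checked directly with the FE formulas above. A minimal sketch with hypothetical orbital energies (a.u.): hard fragments are assigned a wide HOMO-LUMO gap and soft fragments a narrow one, centered on the same mean level:

```python
# FE estimate of the ionic (CT) stabilization for the four hardness combinations.
# Frontier orbital energies (eps_HOMO, eps_LUMO) are hypothetical, in a.u.
frontier = {
    "H": (-0.50, 0.30),            # hard: HOMO-LUMO gap 0.8
    "S": (-0.20, 0.00),            # soft: HOMO-LUMO gap 0.2
}

def ionic_stabilization(acid, base):
    eps_L_A = frontier[acid][1]                 # acceptor (LUMO) level of A
    eps_H_B = frontier[base][0]                 # donor (HOMO) level of B
    eta_A = frontier[acid][1] - frontier[acid][0]   # hardness = HOMO-LUMO gap
    eta_B = frontier[base][1] - frontier[base][0]
    mu_CT = eps_L_A - eps_H_B                   # biased FE estimate, > 0
    return mu_CT**2 / (2 * (eta_A + eta_B))     # ionic stabilization magnitude

for pair in ("HH", "SS", "HS", "SH"):
    print(pair, round(ionic_stabilization(pair[0], pair[1]), 4))
```

With these inputs the ordering [HH] > [HS] = [SH] > [SS] emerges, i.e., the ionic stabilization is strongest for the hard-hard pair and weakest for the soft-soft pair, as argued in the text.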
It should be realized, however, that the ionic and covalent energy contributions complement each other in the resultant bond energy. Therefore, the [SS]-complex, for which the energy gap between the interacting orbitals, εA(LUMO) - εB(HOMO), reaches its minimum value, implies the strongest covalent stabilization of the reactive complex. Indeed, the lowest (bonding) energy level εb of this FE interaction, corresponding to the bonding combination of the positively overlapping frontier MO of the subsystems, S = 〈φA(LUMO)|φB(HOMO)〉 > 0,
$$ {\varphi}_b={N}_b\ \left[{\varphi}_{\mathrm{B}}\left(\mathrm{HOMO}\right)+{\lambda \varphi}_{\mathrm{A}}\left(\mathrm{LUMO}\right)\right], $$
then exhibits the maximum bonding energy due to covalent effect:
$$ \Delta {\varepsilon}_{\operatorname{cov}.}={\varepsilon}_{\mathrm{B}}\left(\mathrm{HOMO}\right)-{\varepsilon}_b>0. $$
It follows from the familiar secular equations of the Ritz method that this covalent energy can be approximated by the limiting MO expression
$$ \Delta {\varepsilon}_{\operatorname{cov}.}\cong {\left(\beta -{\varepsilon}_b\ S\right)}^2/\left[{\varepsilon}_{\mathrm{A}}\left(\mathrm{LUMO}\right)-{\varepsilon}_{\mathrm{B}}\left(\mathrm{HOMO}\right)\right], $$
where the coupling matrix element of the system electronic Hamiltonian,
$$ \beta =\left\langle {\varphi}_{\mathrm{A}}\left(\mathrm{LUMO}\right)|\hat{H}|{\varphi}_{\mathrm{B}}\left(\mathrm{HOMO}\right)\right\rangle, $$
is expected to be proportional to the overlap integral S between the frontier MO.

It follows from Eq. (97) that the maximum covalent component of the inter-reactant chemical bond is expected in interactions between soft, strongly overlapping reactants [83], since then the numerator assumes the highest value while the denominator reaches its minimum. For the same reason one predicts the smallest covalent stabilization in interactions between the hard, weakly overlapping substrates, with the mixed hardness combinations giving rise to intermediate bond covalencies.
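This competition can be sketched in the same spirit as before. The proportionality β ∝ S noted above is adopted here as a simple Mulliken-type assumption, and all numbers (overlaps, gaps, the constant k, the bonding level εb) are hypothetical:

```python
def covalent_stabilization(S_overlap, gap, k=-1.0, eps_b=0.0):
    """Limiting MO estimate of Eq. (97): (beta - eps_b*S)^2 / gap, with the
    assumed Mulliken-type proportionality beta = k * S_overlap."""
    beta = k * S_overlap
    return (beta - eps_b * S_overlap) ** 2 / gap

# Soft pair: strong overlap, small FE gap; hard pair: weak overlap, wide gap.
d_SS = covalent_stabilization(S_overlap=0.4, gap=0.2)
d_HH = covalent_stabilization(S_overlap=0.1, gap=0.8)
print(d_SS, d_HH)   # the soft-soft pair dominates the covalent stabilization
```

The large numerator (strong overlap) and small denominator (narrow gap) of the [SS] pair jointly maximize the covalent stabilization, mirroring the verbal argument of this paragraph.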

To summarize, the [HH]-complex exhibits the maximum ionic-stabilization, the [SS]-complex the maximum covalent-stabilization, while the mixed combinations of reactant hardnesses in [HS]- and [SH]-coordinations exhibit a mixture of moderate covalent and ionic interactions between the acidic and basic subsystems [83]. Therefore, communications representing the inter-reactant bonds between the chemically soft (covalent) reactants are also expected to be predominantly “soft” (delocalized, indeterministic) in character, while those between the chemically hard (ionic) subsystems are predicted to be dominated by the “hard” (localized, deterministic) propagations in the communication system for R as a whole [1, 2].

The electron communications between reactants {α = A, B} in the acceptor–donor reactive system R = A----B are determined by the corresponding matrix of conditional probabilities in AO-resolution (or of their amplitudes), which can be partitioned into the corresponding intra-reactant (diagonal) parts, combining internal communications within individual substrates, and the inter-reactant (off-diagonal) blocks of external communications, between different subsystems,
$$ \left[\mathrm{R}\to \mathrm{R}\right]=\left\{\left[\alpha \to \beta \right]\right\}=\left\{\left[\alpha \to \alpha \right]\ {\delta}_{\alpha, \beta}\right\}+\left\{\left[\alpha \to \beta \right]\ \left(1-{\delta}_{\alpha, \beta}\right)\right\}=\left\{ intra\right\}+\left\{ inter\right\}. $$
The [SS]-complexes, combining the “soft” (noisy), delocalized internal blocks of such probability propagations, imply a similarly covalent character of the external blocks of electron AO communications between reactants, i.e., strongly indeterministic scatterings between subsystems:
$$ \left\{ intra-\mathrm{S}\right\}\Rightarrow \left\{ inter-\mathrm{S}\right\} $$
The “hard” (ionic) internal channels are similarly associated with the ionic (localized) external communications:
$$ \left\{ intra-\mathrm{H}\right\}\Rightarrow \left\{ inter-\mathrm{H}\right\} $$

This observation adds a communication angle to the classical HSAB principle of chemistry.
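The block partition of the conditional-probability matrix can be sketched for a small AO-resolved channel. The 2+2 AO split between reactants and all matrix entries below are illustrative assumptions:

```python
import numpy as np

# Illustrative 4x4 AO conditional-probability matrix P(j|i) for R = A---B,
# first two AO on reactant A, last two on B (each row sums to 1).
P = np.array([[0.45, 0.35, 0.15, 0.05],
              [0.35, 0.45, 0.05, 0.15],
              [0.15, 0.05, 0.45, 0.35],
              [0.05, 0.15, 0.35, 0.45]])
labels = ["A", "A", "B", "B"]

# Kronecker-delta partition into {intra} (same-reactant, diagonal blocks)
# and {inter} (different-reactant, off-diagonal blocks) communications:
delta = np.array([[1.0 if a == b else 0.0 for b in labels] for a in labels])
intra = P * delta
inter = P * (1.0 - delta)

assert np.allclose(intra + inter, P)           # the two blocks recover the channel
print("intra weight:", intra.sum() / P.sum())  # fraction of internal communications
```

A large intra weight corresponds to reactants that communicate mostly internally; growing inter-block probabilities signal the external, inter-reactant propagations discussed above.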


In this work, we have attempted a QIT description of the bimolecular donor–acceptor reactive system, including all the hypothetical stages that accompany the bond-breaking/bond-forming events of chemical reactions. The present (resultant) information analysis of reactivity phenomena complements earlier (classical) DFT-IT approaches, e.g., [115, 116, 117, 118, 119, 120, 121]. It should be emphasized, however, that the present resultant-information analysis has followed the standard thermodynamic approach to open microscopic systems, and does not imply any new “thermodynamic” transcription of DFT, see, e.g., [120, 121]. The continuities of the classical (modulus/probability) and nonclassical (phase/current) state parameters have been examined, and the contributions that these molecular degrees of freedom generate in the resultant gradient-information descriptor of a quantum state have been identified. The need for nonclassical (phase/current) complements of the classical (probability) measures of the information content in molecular electronic states has been reemphasized. It has been argued that the electron density alone reflects only the structure of “being”, missing the structure of “becoming” contained in the current distribution. Both of these manifestations of molecular “organization” ultimately contribute to the overall information content of the generally complex electronic wavefunctions, reflected by the resultant QIT concepts. Their importance in describing the mutual bonding and nonbonding status of reactants has been stressed, and the in situ populational derivatives in the energy and information representations have been examined.

The DFT-based theory of chemical reactivity distinguishes several intuitive, hypothetical stages involving either the mutually bonded (entangled) or nonbonded (disentangled) states of reactants for the same electron distribution in constituent subsystems. These two categories are discerned only by the phase aspect of the quantum entanglement between molecular fragments. The equilibrium phases and currents of reactants can be related to the relevant electron densities using the entropic principles of QIT. This generalized approach deepens our understanding of the molecular promotions of constituent fragments and provides a more precise framework for monitoring the reaction progress.

The grand-ensemble description of thermodynamic equilibria in externally open molecular systems has been outlined and used to demonstrate the physical equivalence of the variational principles for the electronic energy and the resultant gradient-information. The populational derivatives of the resultant gradient-information, related to the system average kinetic energy, have been suggested as reliable reactivity criteria, shown to predict both the direction and the magnitude of the electron flows in reactive systems. The virial theorem has been used to explain the qualitative Hammond postulate of the theory of chemical reactivity, and the information production in chemical reactions has been addressed. The ionic and covalent interactions between the frontier MO of the acidic and basic reactants have been examined to justify the HSAB principle of chemistry and to provide a communication perspective on the interaction between reactants. It has been argued that the internally soft and hard reactants prefer to communicate externally in a like manner, consistent with their internal communications. This preference should also be reflected by the predicted character of the inter-reactant bonds/communications in stable coordinations: covalent in [SS] and ionic in [HH] complexes.



  1. Nalewajski RF (2019) On entropy/information description of reactivity phenomena. In: Baswell AR (ed) Advances in mathematics research, vol 26. Nova Science Publishers, New York, pp 97–157
  2. Nalewajski RF (2019) Information description of chemical reactivity. Current Physical Chemistry, in press
  3. Callen HB (1962) Thermodynamics: an introduction to the physical theories of equilibrium thermostatics and irreversible thermodynamics. Wiley, New York
  4. Nalewajski RF (1980) Virial theorem implications for the minimum energy reaction paths. Chem Phys 50:127–136
  5. Fisher RA (1925) Theory of statistical estimation. Proc Cambridge Phil Soc 22:700–725
  6. Frieden BR (2004) Physics from the Fisher information – a unification. Cambridge University Press, Cambridge
  7. Shannon CE (1948) The mathematical theory of communication. Bell System Tech J 27:379–493, 623–656
  8. Shannon CE, Weaver W (1949) The mathematical theory of communication. University of Illinois, Urbana
  9. Kullback S, Leibler RA (1951) On information and sufficiency. Ann Math Stat 22:79–86
  10. Kullback S (1959) Information theory and statistics. Wiley, New York
  11. Abramson N (1963) Information theory and coding. McGraw-Hill, New York
  12. Pfeifer PE (1978) Concepts of probability theory. Dover, New York
  13. Nalewajski RF (2006) Information theory of molecular systems. Elsevier, Amsterdam
  14. Nalewajski RF (2010) Information origins of the chemical bond. Nova Science Publishers, New York
  15. Nalewajski RF (2012) Perspectives in electronic structure theory. Springer, Heidelberg
  16. Nalewajski RF (2016) Quantum information theory of molecular states. Nova Science Publishers, New York
  17. Nalewajski RF, Parr RG (2000) Information theory, atoms-in-molecules and molecular similarity. Proc Natl Acad Sci U S A 97:8879–8882
  18. Nalewajski RF (2003) Information principles in the theory of electronic structure. Chem Phys Lett 372:28–34
  19. Nalewajski RF (2003) Information principles in the loge theory. Chem Phys Lett 375:196–203
  20. Nalewajski RF, Broniatowska E (2003) Information distance approach to Hammond postulate. Chem Phys Lett 376:33–39
  21. Nalewajski RF, Parr RG (2001) Information-theoretic thermodynamics of molecules and their Hirshfeld fragments. J Phys Chem A 105:7391–7400
  22. Nalewajski RF (2002) Hirshfeld analysis of molecular densities: subsystem probabilities and charge sensitivities. Phys Chem Chem Phys 4:1710–1721
  23. Parr RG, Ayers PW, Nalewajski RF (2005) What is an atom in a molecule? J Phys Chem A 109:3957–3959
  24. Nalewajski RF, Broniatowska E (2007) Atoms-in-molecules from the stockholder partition of molecular two-electron distribution. Theoret Chem Acc 117:7–27
  25. Heidar-Zadeh F, Ayers PW, Verstraelen T, Vinogradov I, Vöhringer-Martinez E, Bultinck P (2018) Information-theoretic approaches to atoms-in-molecules: Hirshfeld family of partitioning schemes. J Phys Chem A 122:4219–4245
  26. Hirshfeld FL (1977) Bonded-atom fragments for describing molecular charge densities. Theoret Chim Acta (Berl) 44:129–138
  27. Nalewajski RF (2000) Entropic measures of bond multiplicity from the information theory. J Phys Chem A 104:11940–11951
  28. Nalewajski RF (2004) Entropy descriptors of the chemical bond in information theory: I. Basic concepts and relations. Mol Phys 102:531–546; II. Application to simple orbital models. Mol Phys 102:547–566
  29. Nalewajski RF (2004) Entropic and difference bond multiplicities from the two-electron probabilities in orbital resolution. Chem Phys Lett 386:265–271
  30. Nalewajski RF (2005) Reduced communication channels of molecular fragments and their entropy/information bond indices. Theoret Chem Acc 114:4–18
  31. Nalewajski RF (2005) Partial communication channels of molecular fragments and their entropy/information indices. Mol Phys 103:451–470
  32. Nalewajski RF (2011) Entropy/information descriptors of the chemical bond revisited. J Math Chem 49:2308–2329
  33. Nalewajski RF (2014) Quantum information descriptors and communications in molecules. J Math Chem 52:1292–1323
  34. Nalewajski RF (2009) Multiple, localized and delocalized/conjugated bonds in the orbital-communication theory of molecular systems. Adv Quant Chem 56:217–250
  35. Nalewajski RF, Szczepanik D, Mrozek J (2011) Bond differentiation and orbital decoupling in the orbital communication theory of the chemical bond. Adv Quant Chem 61:1–48
  36. Nalewajski RF, Szczepanik D, Mrozek J (2012) Basis set dependence of molecular information channels and their entropic bond descriptors. J Math Chem 50:1437–1457
  37. Nalewajski RF (2017) Electron communications and chemical bonds. In: Wójcik M, Nakatsuji H, Kirtman B, Ozaki Y (eds) Frontiers of quantum chemistry. Springer, Singapore, pp 315–351
  38. Nalewajski RF, Świtka E, Michalak A (2002) Information distance analysis of molecular electron densities. Int J Quantum Chem 87:198–213
  39. Nalewajski RF, Broniatowska E (2003) Entropy displacement analysis of electron distributions in molecules and their Hirshfeld atoms. J Phys Chem A 107:6270–6280
  40. Nalewajski RF (2008) Use of Fisher information in quantum chemistry. Int J Quantum Chem (Jankowski K issue) 108:2230–2252
  41. Nalewajski RF, Köster AM, Escalante S (2005) Electron localization function as information measure. J Phys Chem A 109:10038–10043
  42. Becke AD, Edgecombe KE (1990) A simple measure of electron localization in atomic and molecular systems. J Chem Phys 92:5397–5403
  43. Silvi B, Savin A (1994) Classification of chemical bonds based on topological analysis of electron localization functions. Nature 371:683–686
  44. Savin A, Nesper R, Wengert S, Fässler TF (1997) ELF: the electron localization function. Angew Chem Int Ed Engl 36:1808–1832
  45. Hohenberg P, Kohn W (1964) Inhomogeneous electron gas. Phys Rev 136:B864–B871
  46. Kohn W, Sham LJ (1965) Self-consistent equations including exchange and correlation effects. Phys Rev 140:A1133–A1138
  47. Levy M (1979) Universal variational functionals of electron densities, first-order density matrices, and natural spin-orbitals and solution of the v-representability problem. Proc Natl Acad Sci U S A 76:6062–6065
  48. Parr RG, Yang W (1989) Density-functional theory of atoms and molecules. Oxford University Press, New York
  49. Dreizler RM, Gross EKU (1990) Density functional theory: an approach to the quantum many-body problem. Springer, Berlin
  50. Nalewajski RF (ed) (1996) Density functional theory I–IV. Topics in Current Chemistry, vols 180–183
  51. Nalewajski RF, de Silva P, Mrozek J (2010) Use of nonadditive Fisher information in probing the chemical bonds. Theochem J Mol Struct 954:57–74
  52. Nalewajski RF (2011) Through-space and through-bridge components of chemical bonds. J Math Chem 49:371–392
  53. Nalewajski RF (2011) Chemical bonds from through-bridge orbital communications in prototype molecular systems. J Math Chem 49:546–561
  54. Nalewajski RF (2011) On interference of orbital communications in molecular systems. J Math Chem 49:806–815
  55. Nalewajski RF, Gurdek P (2011) On the implicit bond-dependency origins of bridge interactions. J Math Chem 49:1226–1237
  56. Nalewajski RF (2012) Direct (through-space) and indirect (through-bridge) components of molecular bond multiplicities. Int J Quantum Chem 112:2355–2370
  57. Nalewajski RF, Gurdek P (2012) Bond-order and entropic probes of the chemical bonds. Struct Chem 23:1383–1398
  58. Nalewajski RF (2016) Complex entropy and resultant information measures. J Math Chem 54:1777–1782
  59. Nalewajski RF (2014) On phase/current components of entropy/information descriptors of molecular states. Mol Phys 112:2587–2601
  60. Nalewajski RF (2017) Quantum information measures and their use in chemistry. Curr Phys Chem 7:94–117
  61. Nalewajski RF (2013) Exploring molecular equilibria using quantum information measures. Ann Phys (Leipzig) 525:256–268
  62. Nalewajski RF (2014) On phase equilibria in molecules. J Math Chem 52:588–612
  63. Nalewajski RF (2014) Quantum information approach to electronic equilibria: molecular fragments and elements of non-equilibrium thermodynamic description. J Math Chem 52:1921–1948
  64. Nalewajski RF (2015) Phase/current information descriptors and equilibrium states in molecules. Int J Quantum Chem 115:1274–1288
  65. Nalewajski RF (2015) Quantum information measures and molecular phase equilibria. In: Baswell AR (ed) Advances in mathematics research, vol 19. Nova Science Publishers, New York, pp 53–86
  66. Nalewajski RF (2018) Phase description of reactive systems. In: Islam N, Kaya S (eds) Conceptual density functional theory. Apple Academic Press, Waretown, pp 217–249
  67. Nalewajski RF (2017) Entropy continuity, electron diffusion and fragment entanglement in equilibrium states. In: Baswell AR (ed) Advances in mathematics research, vol 22. Nova Science Publishers, New York, pp 1–42
  68. Nalewajski RF (2016) On entangled states of molecular fragments. Trends Phys Chem 16:71–85
  69. Nalewajski RF (2017) Chemical reactivity description in density-functional and information theories. In: Liu S (ed) Chemical concepts from density functional theory. Acta Phys-Chim Sin 33:2491–2509
  70. Nalewajski RF (2018) Information equilibria, subsystem entanglement and dynamics of overall entropic descriptors of molecular electronic structure. J Mol Model (Chattaraj PK issue) 24:212–227
  71. Prigogine I (1980) From being to becoming: time and complexity in the physical sciences. WH Freeman & Co, San Francisco
  72. Harriman JE (1981) Orthonormal orbitals for the representation of an arbitrary density. Phys Rev A 24:680–682
  73. Zumbach G, Maschke K (1983) New approach to the calculation of density functionals. Phys Rev A 28:544–554; erratum: Phys Rev A 29:1585–1587
  74. Nalewajski RF, Korchowiec J, Michalak A (1996) Reactivity criteria in charge sensitivity analysis. In: Nalewajski RF (ed) Topics in Current Chemistry: density functional theory IV, vol 183, pp 25–141
  75. Nalewajski RF, Korchowiec J (1997) Charge sensitivity approach to electronic structure and chemical reactivity. World Scientific, Singapore
  76. Geerlings P, De Proft F, Langenaeker W (2003) Conceptual density functional theory. Chem Rev 103:1793–1873
  77. Chattaraj PK (ed) (2009) Chemical reactivity theory: a density functional view. CRC Press, Boca Raton
  78. Nalewajski RF (1994) Sensitivity analysis of charge transfer systems: in situ quantities, intersecting state model and its implications. Int J Quantum Chem 49:675–703
  79. Nalewajski RF (1995) Charge sensitivity analysis as diagnostic tool for predicting trends in chemical reactivity. In: Dreizler RM, Gross EKU (eds) Proceedings of the NATO ASI on density functional theory (Il Ciocco, 1993). Plenum, New York, pp 339–389
  80. Gatti C, Macchi P (2012) Modern charge-density analysis. Springer, Berlin
  81. 81.
    Hammond GS (1955) A correlation of reaction rates. J Am Chem Soc 77:334–338Google Scholar
  82. 82.
    Pearson RG (1973) Hard and soft acids and bases. Dowden, Hutchinson and Ross, StroudsburgGoogle Scholar
  83. 83.
    Nalewajski RF (1984) Electrostatic effects in interactions between hard (soft) acids and bases. J am Chem Soc 106:944-945; see also: Gazquez JL, Mendez F (1994) the hard and soft acids and bases principle: an atoms-in-molecules viewpoint. J Phys Chem 98:4591–4593Google Scholar
  84. 84.
    von Weizsäcker CF (1935) Zur theorie der kernmassen. Z Phys 96:431–458Google Scholar
  85. 85.
    Gyftopoulos EP, Hatsopoulos GN (1965) Quantum-thermodynamic definition of electronegativity. Proc Natl Acad Sci U S A 60:786–793Google Scholar
  86. 86.
    Perdew JP, Parr RG, Levy M, Balduz JL (1982) Density functional theory for fractional particle number: derivative discontinuities of the energy. Phys Rev Lett 49:1691–1694Google Scholar
  87. Mulliken RS (1934) A new electronegativity scale: together with data on valence states and on ionization potentials and electron affinities. J Chem Phys 2:782–793
  88. Iczkowski RP, Margrave JL (1961) Electronegativity. J Am Chem Soc 83:3547–3551
  89. Parr RG, Donnelly RA, Levy M, Palke WE (1978) Electronegativity: the density functional viewpoint. J Chem Phys 69:4431–4439
  90. Parr RG, Pearson RG (1983) Absolute hardness: companion parameter to absolute electronegativity. J Am Chem Soc 105:7512–7516; see also: Chattaraj PK, Lee H, Parr RG (1991) HSAB principle. J Am Chem Soc 113:1855–1856; Ayers PW (2007) The physical basis of the hard/soft acid/base principle. Faraday Discuss 135:161–190
  91. Parr RG, Yang W (1984) Density functional approach to the frontier-electron theory of chemical reactivity. J Am Chem Soc 106:4049–4050
  92. von Neumann J (1955) Mathematical foundations of quantum mechanics. Princeton University Press, Princeton
  93. Ruedenberg K (1962) The physical nature of the chemical bond. Rev Mod Phys 34:326–376; Kutzelnigg W (1973) The physical mechanism of the chemical bond. Angew Chem Int Ed 12:546–562
  94. Feinberg MJ, Ruedenberg K (1971) Paradoxical role of the kinetic-energy operator in the formation of the covalent bond. J Chem Phys 54:1495–1512
  95. Feinberg MJ, Ruedenberg K (1971) Heteropolar one-electron bond. J Chem Phys 55:5805–5818
  96. Bacskay GB, Nordholm S, Ruedenberg K (2018) The virial theorem and covalent bonding. J Phys Chem A 122:7880–7893
  97. Marcus RA (1968) Theoretical relations among rate constants, barriers, and Broensted slopes of chemical reactions. J Phys Chem 72:891–899
  98. Agmon N, Levine RD (1977) Energy, entropy and the reaction coordinate: thermodynamic-like relations in chemical kinetics. Chem Phys Lett 52:197–201
  99. Agmon N, Levine RD (1979) Empirical triatomic potential energy surfaces defined over orthogonal bond-order coordinates. J Chem Phys 71:3034–3041
  100. Miller AR (1978) A theoretical relation for the position of the energy barrier between initial and final states of chemical reactions. J Am Chem Soc 100:1984–1992
  101. Ciosłowski J (1991) Quantifying the Hammond postulate: intramolecular proton transfer in substituted hydrogen catecholate anions. J Am Chem Soc 113:6756–6761
  102. Nalewajski RF, Formosinho SJ, Varandas AJC, Mrozek J (1994) Quantum mechanical valence study of a bond breaking – bond forming process in triatomic systems. Int J Quantum Chem 52:1153–1176
  103. Dunning TH Jr (1984) Theoretical studies of the energetics of the abstraction and exchange reactions in H + HX, with X = F–I. J Phys Chem 88:2469–2477
  104. Nalewajski RF (2000) Manifestations of the maximum complementarity principle for matching atomic softnesses in model chemisorption systems. Top Catal 11:469–485
  105. Chandra AK, Michalak A, Nguyen MT, Nalewajski RF (1998) On regional matching of atomic softnesses in chemical reactions: two-reactant charge sensitivity study. J Phys Chem A 102:10182–10188
  106. Nalewajski RF, Köster AM, Jug K (1993) Chemical valence from the two-particle density matrix. Theoret Chim Acta (Berl) 85:463–484
  107. Nalewajski RF, Mrozek J (1994) Modified valence indices from the two-particle density matrix. Int J Quantum Chem 51:187–200
  108. Nalewajski RF, Mrozek J, Mazur G (1996) Quantum-mechanical valence indices from the one-determinantal difference approach. Can J Chem 100:1121–1130
  109. Nalewajski RF, Mrozek J, Michalak A (1997) Two-electron valence indices from Kohn–Sham orbitals. Int J Quantum Chem 61:589–601
  110. Nalewajski RF (2006) Atomic resolution of bond descriptors in the two-orbital model. Mol Phys 104:493–501
  111. Nalewajski RF (2014) Quantum information description of reactive systems. Indian J Chem (Ghosh SK issue) 53A:1010–1018
  112. Fukui K (1975) Theory of orientation and stereoselection. Springer-Verlag, Berlin
  113. Fukui K (1982) Role of frontier orbitals in chemical reactions. Science 218:747–754
  114. Fujimoto H, Fukui K (1974) Intermolecular interactions and chemical reactivity. In: Klopman G (ed) Chemical reactivity and reaction paths. Wiley-Interscience, New York, pp 23–54
  115. Sears SB, Parr RG, Dinur U (1980) On the quantum mechanical kinetic energy as a measure of the information in a distribution. Isr J Chem 19:165–173
  116. Hô M, Schmider HL, Weaver DF, Smith VH Jr, Sagar RP, Esquivel RO (2000) Shannon entropy of chemical changes: SN2 displacement reactions. Int J Quantum Chem 77:376–382
  117. López-Rosa S, Esquivel RO, Angulo JC, Antolin J, Dehesa JS, Flores-Gallegos N (2010) Fisher information study in position and momentum spaces for elementary chemical reactions. J Chem Theory Comput 6:145–154
  118. Esquivel RO, Liu SB, Angulo JC, Dehesa JS, Antolin J, Molina-Espiritu M (2011) Fisher information and steric effect: study of the internal rotation barrier in ethane. J Phys Chem A 115:4406–4415
  119. Liu SB (2016) Information-theoretic approach in density functional reactivity theory. Acta Phys-Chim Sin 32:98–118
  120. Ghosh SK, Berkowitz M, Parr RG (1984) Transcription of ground-state density functional theory into a local thermodynamics. Proc Natl Acad Sci U S A 81:8028–8031
  121. Ayers P, Parr RG, Nagy A (2002) Local kinetic energy and local temperature in the density functional theory of electronic structure. Int J Quantum Chem 90:309–326

Copyright information

© The Author(s) 2019

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  1. Department of Theoretical Chemistry, Jagiellonian University, Cracow, Poland
