Distribution of segments on Gaussian chain

This analysis focuses on the probability distribution function, $P_i(\mathbf{r}_i-\mathbf{r}_\mathrm{cm})$, of the $i$th segment relative to the center of mass in an ideal chain. An ideal chain is modeled as a multidimensional random walk with independent steps. Each step is assumed to be Gaussian with zero mean and mean-square length $b^2$: $P(\mathbf{r})\sim\mathcal{N}(0,b^2)$, i.e., each Cartesian component has variance $b^2/d$ in $d$ dimensions.

Let $\mathbf{b}_i=\mathbf{r}_{i+1}-\mathbf{r}_{i}$ represent the $i$th bond vector. Taking the first segment as the origin ($\mathbf{r}_1=\mathbf{0}$), the position of the $i$th segment is given by:

$\mathbf{r}_i=\sum_{j=1}^{i-1} \mathbf{b}_j$

The center of mass, $\mathbf{r}_\mathrm{cm}$, is calculated as:

$\mathbf{r}_\mathrm{cm}=\frac{1}{N}\sum_{i=1}^{N} \mathbf{r}_i = \frac{1}{N}\sum_{j=1}^{N-1}(N-j)\mathbf{b}_j$

Therefore, the displacement of the $i$th segment relative to the center of mass is:

$\mathbf{r}_i-\mathbf{r}_\mathrm{cm}=\sum_{j=1}^{i-1}\frac{j}{N}\mathbf{b}_j-\sum_{j=i}^{N-1}\frac{N-j}{N}\mathbf{b}_j$

If $X$ is a Gaussian random variable with variance $\sigma^2$, then $aX$ is also a Gaussian random variable with variance $a^2\sigma^2$. Using this property, we can write the characteristic function for $P_i(\mathbf{r}_i-\mathbf{r}_\mathrm{cm})$ of a $d$-dimensional ideal chain:

$\phi_i(\mathbf{q})=\prod_{j=1}^{N-1} \phi_{\mathbf{b}'_j}(\mathbf{q})=\exp\left(-\frac{1}{2}\mathbf{q}^T\left(\sum_{j=1}^{N-1}\Sigma_j\right)\mathbf{q}\right)$

where $\mathbf{b}'_j=\frac{j}{N}\mathbf{b}_j$ for $j\le i-1$ and $\mathbf{b}'_j=\frac{N-j}{N}\mathbf{b}_j$ for $i\le j \le N-1$ (an overall sign does not affect the Gaussian distribution). $\phi_{\mathbf{b}_j'}(\mathbf{q})=\exp(-\frac{1}{2}\mathbf{q}^T\Sigma_j\mathbf{q})$ is the characteristic function of the probability distribution of bond $j$. $\Sigma_j=\frac{j^2 b^2}{d N^2}\mathbb{I}_d$ for $j\le i-1$ and $\Sigma_j=\frac{(N-j)^2 b^2}{d N^2}\mathbb{I}_d$ for $i\le j \le N-1$, where $\mathbb{I}_d$ is the $d$-dimensional identity matrix. Setting $b=1$ for convenience, we obtain:

$\phi_i(\mathbf{q})=\exp \left(-\frac{\left(6 i^2-6 i (N+1)+2 N^2+3 N+1\right) \left(q_x^2+q_y^2+q_z^2\right)}{36 N}\right)$

The corresponding distribution is still a Gaussian distribution with $\Sigma=\frac{b_i^2}{3} \mathbb{I}_3$, where the equivalent bond length is $b_i^2=\frac{6 i^2-6 i (N+1)+2 N^2+3 N+1}{6 N}$. Since $\langle(\mathbf{r}_i-\mathbf{r}_\mathrm{cm})^2\rangle=b_i^2$, averaging over segments recovers the radius of gyration, $R_g^2=\frac{1}{N}\sum_{i=1}^N b_i^2=\frac{N^2-1}{6N}$; for $N=5$ this gives $0.8$, compared with the asymptotic value $Nb^2/6=5/6=0.8\overline{3}$. The sixth moment, averaged over segments, is $\frac{1}{N}\sum_{i=1}^N \langle(\mathbf{r}_i-\mathbf{r}_\mathrm{cm})^6\rangle=\frac{58 N^6-273 N^4+462 N^2-247}{1944 N^3}$. For large $N$, only the leading term, $\frac{29}{972} N^3$, is significant. For $N=20$, the result is $235.886$, which agrees with simulations.
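
As a quick consistency check (a minimal sketch; it uses the standard moment $\langle r^6\rangle=105\sigma^6$ for an isotropic 3-D Gaussian with per-component variance $\sigma^2$), the closed form can be compared against a direct average of the per-segment moments:

```python
import numpy as np

N = 20
i = np.arange(1, N + 1)
# Equivalent bond length b_i^2 of segment i (step length b = 1):
b2 = (6 * i**2 - 6 * i * (N + 1) + 2 * N**2 + 3 * N + 1) / (6 * N)

# For an isotropic 3-D Gaussian with per-component variance b2/3,
# <r^6> = 105 * (b2/3)**3; average over all segments:
m6 = np.mean(105 * (b2 / 3) ** 3)

# Closed-form expression quoted in the text:
m6_exact = (58 * N**6 - 273 * N**4 + 462 * N**2 - 247) / (1944 * N**3)

print(m6, m6_exact)  # both ≈ 235.886
```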

Here is the simulation code:

import numpy as np

ch = np.random.normal(size=(100000, 20, 3), scale=1/3**0.5)  # steps with <b^2> = 1
ch[:, 0, :] = 0                        # place the first segment at the origin
ch = ch.cumsum(axis=1)                 # segment positions r_i
ch -= ch.mean(axis=1, keepdims=True)   # displacements r_i - r_cm
m6 = np.mean(np.linalg.norm(ch, axis=-1)**6)  # sixth moment, ~235.9 for N=20

Eigenvalues of circulant matrices

A circulant matrix is defined as:

$C=\begin{bmatrix}c_{0}&c_{n-1}&\dots &c_{2}&c_{1}\\c_{1}&c_{0}&c_{n-1}&&c_{2}\\\vdots &c_{1}&c_{0}&\ddots &\vdots \\c_{n-2}&&\ddots &\ddots &c_{n-1}\\c_{n-1}&c_{n-2}&\dots &c_{1}&c_{0}\\\end{bmatrix}$

where $C_{j, k}=c_{(j-k) \bmod n}$. The $k$-th eigenvalue $\lambda_k$ and eigenvector $x_k$ satisfy $C\cdot x_k=\lambda_k x_k$, which can be expressed as $n$ equations:

$\sum_{j=0}^{m-1}c_{m-j}x_j+\sum_{j=m}^{n-1}c_{n-j+m}x_j=\lambda_k x_m\quad m=0,1,\dots,n-1$

with $c_n=c_0$, where $x_m$ is the $m$-th component of the eigenvector $x_k$. By changing the dummy summation variables ($j\to m-j$ and $j\to n-j+m$), we obtain:

$\sum_{j=1}^{m}c_j x_{m-j} +\sum_{j=m+1}^{n}c_j x_{n+m-j}=\lambda_k x_m$

with $m=0,1,2,\dots,n-1$. We can “guess” a solution where $x_j=\omega^j$, which transforms the equation into:

$\begin{align}&\sum_{j=1}^{m}c_j \omega^{m-j} +\sum_{j=m+1}^{n}c_j \omega^{n+m-j}=\lambda_k \omega^m\\ \leftrightarrow &\sum_{j=1}^{m}c_j \omega^{-j} +\omega^{n}\sum_{j=m+1}^{n}c_j \omega^{-j}=\lambda_k\end{align}$

Let $\omega$ be one of the $n$-th roots of unity, i.e., $\omega = \exp(2\pi\mathbb{i}/n)$; then $\omega^{n}=1$. This leads to the eigenvalue:

$\lambda = \sum_{j=0}^{n-1}\omega^{-j} c_j$

with the corresponding eigenvector:

$x= (1, \exp(2\pi\mathbb{i}/n), \exp(2\pi\mathbb{i}/n)^2,\dots,\exp(2\pi\mathbb{i}/n)^{n-1})^T$

The $k$-th eigenpair is generated from the root $\exp(2\pi k\mathbb{i}/n)$ with $k\in \{0,1,\dots,n-1\}$, resulting in the $k$-th eigenvalue and eigenvector:

$\lambda_k = \sum_{j=0}^{n-1}c_j \exp(-2\pi jk\mathbb{i}/n)$

and

$x_k = (1, \exp(2\pi k\mathbb{i}/n), \exp(2\pi k\mathbb{i}/n)^2,\dots,\exp(2\pi k\mathbb{i}/n)^{n-1})^T$

The matrix of eigenvectors is simply the DFT matrix (up to conjugation and normalization), so all circulant matrices share the same eigenvectors. It is easy to verify that circulant matrices possess the following properties:

If $A$ and $B$ are circulant matrices, then:

  1. $AB=BA=W^\ast\Gamma W$, where $W$ is the DFT matrix and $\Gamma=\Gamma_A\Gamma_B$, with $\Gamma_M$ denoting the diagonal matrix of the eigenvalues of $M$; $\Gamma_A = WAW^\ast$.
  2. $B+A=A+B=W^\ast\Omega W$, where $\Omega=\Gamma_A+\Gamma_B$.
  3. If $\mathrm{det}(A)\ne 0$, then $A^{-1}=W^\ast \Gamma_A^{-1}W$.

The proof is straightforward:

  1. $AB=W^\ast \Gamma_AW W^\ast\Gamma_BW=W^\ast\Gamma_A\Gamma_BW=W^\ast \Gamma_B\Gamma_AW=BA$
  2. $W(A+B)W^\ast=\Gamma_A+\Gamma_B$.
  3. $AW^\ast \Gamma_A^{-1}W=W^\ast \Gamma_AW W^\ast \Gamma_A^{-1}W=\mathbb{I}$
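
These results are easy to verify numerically; a minimal NumPy sketch (assuming the indexing convention $C_{j,k}=c_{(j-k)\bmod n}$, under which the eigenvalues are the DFT of the first column):

```python
import numpy as np

# Numerical check: for C[j, k] = c[(j - k) % n], the eigenvalues are the
# DFT of the first column c, and the k-th eigenvector has components
# exp(2*pi*1j*j*k/n).
n = 5
rng = np.random.default_rng(0)
c = rng.normal(size=n)
C = np.array([[c[(j - k) % n] for k in range(n)] for j in range(n)])

lam = np.fft.fft(c)                       # candidate eigenvalues
j = np.arange(n)
for k in range(n):
    x_k = np.exp(2j * np.pi * j * k / n)  # candidate eigenvector
    assert np.allclose(C @ x_k, lam[k] * x_k)

# Each DFT value matches an eigenvalue from direct diagonalization:
ev = np.linalg.eigvals(C)
assert all(np.min(np.abs(ev - l)) < 1e-8 for l in lam)
```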

Saddle point method and end-to-end distribution of polymer models

Introduction

Saddle point methods are widely used to estimate integrals of the form:

$I = \int \exp\left(-f(x)\right)\mathrm{d}x$

where the function $f(x)$ can be approximated by its second-order Taylor expansion around a point $x_0$:

$f(x)\approx f(x_0) + f^\prime (x_0)(x-x_0) + \frac{1}{2}f^{\prime\prime}(x_0)(x-x_0)^2$

The integral is then dominated by the neighborhood of the saddle point $x_0$, where $f^\prime (x_0)=0$ and $f^{\prime\prime}(x_0)>0$:

$\begin{align} I&\approx \int \exp\left(-f(x_0) - \frac{1}{2}f^{\prime\prime}(x_0)(x-x_0)^2\right) \mathrm{d}x\\ &=\exp(-f(x_0))\sqrt{\frac{2\pi}{f^{\prime\prime}(x_0)}} \end{align}$

Examples

  • Stirling’s Formula:

Using the definition of the Gamma function, we have:

$N!=\int_0^\infty \exp(-x)x^N\mathrm{d}x$

Let $f(x):=x-N\ln(x)$. Solving $f^\prime (x) = 1-N/x = 0$ gives $x_0=N$, with $f^{\prime\prime}(x_0)=N/x_0^2=1/N$. Substituting into the saddle-point formula, we obtain:

$N!\approx\exp(-N+N\ln(N))\sqrt{2\pi{}N}=\sqrt{2\pi{}N}\left(\frac{N}{e}\right)^N$
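
A minimal sketch comparing this saddle-point estimate with the exact factorial (the $1/(12N)$ error scaling is the known first Stirling-series correction):

```python
import math

# Saddle-point (Stirling) estimate of N! versus the exact value.
def stirling(n):
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

for n in (5, 20, 50):
    rel_err = abs(stirling(n) - math.factorial(n)) / math.factorial(n)
    print(n, rel_err)  # relative error shrinks roughly like 1/(12*n)
```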

  • Partition Function:

The partition function is given by:

$Z = \int \exp(-\beta U(\mathbf{x})) \mathrm{d}\mathbf{x}$

where $U(\mathbf{x})$ can be approximated by:

$U(\mathbf{x})\approx U(\mathbf{x}_0) + \frac{1}{2}(\mathbf{x}-\mathbf{x}_0)^T H[U](\mathbf{x}-\mathbf{x}_0)$

where $H[U]$ is the Hessian matrix of $U$ evaluated at the minimum $\mathbf{x}_0$. Carrying out the resulting Gaussian integral gives the harmonic approximation $Z\approx \exp(-\beta U(\mathbf{x}_0))\left(\frac{2\pi}{\beta}\right)^{d/2}\det\left(H[U]\right)^{-1/2}$ in $d$ dimensions.
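
As a small numerical illustration (an assumed toy example: for a purely quadratic potential the harmonic approximation $Z=(2\pi/\beta)^{d/2}\det(H)^{-1/2}$ is exact up to discretization error):

```python
import numpy as np

# Assumed example: 2-D harmonic potential U(x) = 0.5 * x^T H x, minimum at 0.
H = np.array([[2.0, 0.5],
              [0.5, 1.0]])
beta = 1.0

# Brute-force partition function on a grid.
x = np.linspace(-8.0, 8.0, 401)
X, Y = np.meshgrid(x, x, indexing="ij")
pts = np.stack([X, Y], axis=-1)
U = 0.5 * np.einsum("...i,ij,...j->...", pts, H, pts)
dx = x[1] - x[0]
Z_grid = np.sum(np.exp(-beta * U)) * dx**2

# Saddle-point (here: exact) result  Z = (2*pi/beta)^(d/2) * det(H)^(-1/2).
Z_saddle = (2 * np.pi / beta) / np.sqrt(np.linalg.det(H))
print(Z_grid, Z_saddle)
```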

End-to-End Distribution Function of Random Walk Model of Polymer Chains

For an $N$-step random walk model, the exact end-to-end vector distribution is:

$\begin{align} P(\mathbf{Y})&=\frac{1}{(2\pi)^3}\int \mathrm{d}\mathbf{k} \exp(-i\mathbf{k}\cdot\mathbf{Y})\tilde{\phi}^N\\ &=\frac{1}{2\pi^2 Y}\int_{0}^{\infty} k\sin(kY) \left(\frac{\sin(kb)}{kb}\right)^N \mathrm{d}k \end{align}$

where $\phi(\mathbf{x})=\frac{1}{4\pi b^2}\delta(|\mathbf{x}|-b)$ is the distribution of a single step vector (length=$b$) and $\tilde{\phi}$ is its characteristic function; $\mathbf{Y}:=\sum_{i=1}^N \mathbf{x}_i$ is the end-to-end vector. Let $s=kb$ and $f(s):=i\frac{Y}{Nb}s -\ln\frac{\sin(s)}{s}$. Then we have:

$\begin{align} P&=\frac{i}{4\pi^2 b^2 Y}\int_{-\infty}^{+\infty} s \exp\left(-is\frac{Y}{b}\right)\left(\frac{\sin(s)}{s}\right)^N\mathrm{d}s\\ &=\frac{i}{4\pi^2 b^2 Y} \int s \exp(-Nf(s)) \mathrm{d} s\end{align}$

The integral is extended to $(-\infty, +\infty)$ due to the symmetry of the sine and cosine functions. The first sine function is replaced with an exponential form using Euler’s formula: $\exp(ix)=\cos(x)+i\sin(x)$.

Solving $f^\prime(s)=0$, we find that $is$ satisfies:

$\coth(is)-\frac{1}{is}=\frac{Y}{Nb}$

The left-hand side is the Langevin function $L$, so $is_0=L^{-1}\left(\frac{Y}{Nb}\right)$. Therefore, we have:

$\begin{align} P &\approx\frac{is_0}{4\pi^2b^2Y}\sqrt{\frac{2\pi}{Nf^{\prime\prime}(s_0)}}e^{-Nf(s_0)}\\ &=\frac{1}{(2\pi{}Nb^2)^{3/2}}\frac{L^{-1}(x)^2}{x\left(1-\left(L^{-1}(x)\,\mathrm{csch}\left(L^{-1}(x)\right)\right)^2\right)^{1/2}}\\&\times\left(\frac{\sinh\left(L^{-1}(x)\right)}{L^{-1}(x)\exp\left(xL^{-1}(x)\right)}\right)^N\end{align}$

where $x:=\frac{Y}{Nb}$.
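
A numerical sanity check (a pure-NumPy sketch; the inverse Langevin function is obtained by bisection, and the "exact" distribution by direct quadrature of the $k$-integral, with the specific values $N=100$, $x=0.3$ chosen for illustration):

```python
import numpy as np

N, b = 100, 1.0
x = 0.3                         # reduced extension Y/(N*b)
Y = x * N * b

# "Exact" P(Y): quadrature of 1/(2 pi^2 Y) * Int k sin(kY) (sin(kb)/(kb))^N dk.
k = np.linspace(1e-9, 3.0, 300001)
f = k * np.sin(k * Y) * (np.sin(k * b) / (k * b)) ** N
dk = k[1] - k[0]
P_exact = np.sum(0.5 * (f[1:] + f[:-1])) * dk / (2 * np.pi**2 * Y)

# Inverse Langevin t = L^{-1}(x) by bisection, with L(t) = coth(t) - 1/t.
lo, hi = 1e-8, 50.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if 1.0 / np.tanh(mid) - 1.0 / mid < x:
        lo = mid
    else:
        hi = mid
t = 0.5 * (lo + hi)

# Saddle-point approximation from the formula above.
pref = (2 * np.pi * N * b**2) ** -1.5 * t**2 / (x * np.sqrt(1 - (t / np.sinh(t)) ** 2))
P_saddle = pref * (np.sinh(t) / (t * np.exp(x * t))) ** N
print(P_exact, P_saddle)  # agree to O(1/N)
```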

Steady compliance (linear viscoelasticity)

The Boltzmann superposition principle states that the strain response of a viscoelastic material is a superposition of the responses to all previous stress histories. Assuming no stress before time $t=0$, the constitutive equation can be written as:

$\begin{equation} \gamma(t)=\int_0^t J(t-t^\prime)\dot{\sigma}(t^\prime)\mathrm{d}t^\prime \end{equation}$

Applying the Laplace transform to this equation yields:

$\begin{align} \hat{\gamma}(s)&=\hat{J}(s)\hat{\dot{\sigma}}(s)\\ &=\hat{J}(s)\left(s\hat{\sigma}(s)-\sigma(0^{-})\right) \\ &=\hat{J}(s)\left(s\hat{G}(s)\hat{\dot{\gamma}}(s)-\sigma(0^{-})\right)\\ &=\hat{J}(s)\left(s\hat{G}(s)(s\hat{\gamma}(s)-\gamma(0^-))-\sigma(0^{-})\right)\\ &=s^2\hat{J}(s)\hat{\gamma}(s)\hat{G}(s)  \end{align}$

Here, $\hat{f}(s):=\mathcal{L}\lbrace f(t)\rbrace(s)$ denotes the Laplace transform of $f(t)$. Since the stress starts at time $t=0$, both $\gamma(0^-)$ and $\sigma(0^-)$ are zero. The second to third step is derived from the convolution relation $\sigma(t)=\int_0^t G(t-t^\prime)\dot{\gamma}(t^\prime)\mathrm{d}t^\prime$. Cancelling out $\hat{\gamma}(s)$ and rearranging the last equation gives:

$\begin{equation} \frac{1}{s^2}=\hat{J}(s)\hat{G}(s) \end{equation}$

Taking the inverse Laplace transform of this equation leads to the convolution relation:

$\begin{equation} t=\int_0^{t}J(t-t^\prime)G(t^\prime)\mathrm{d}t^\prime \end{equation}$

Substituting $J(t)=J_e + t/\eta$ into the convolution relation and taking the limit as $t\to\infty$ gives:

$\begin{equation} \begin{aligned} \color{blue}{t} &=\int_0^{t}\left(J_e+\frac{t-t^\prime}{\eta}\right)G(t^\prime)\mathrm{d}t^\prime\\ &= J_e\int_0^\infty G(t^\prime)\mathrm{d}t^\prime + {\color{blue}{\frac{t}{\eta}\int_0^\infty G(t^\prime)\mathrm{d}t^\prime}} - \int_0^\infty G(t^\prime)\frac{t^\prime}{\eta}\mathrm{d}t^\prime \end{aligned} \end{equation}$

Since $\eta:=\int_0^\infty G(t^\prime)\mathrm{d}t^\prime$, the blue terms cancel out, resulting in:

$\begin{equation} J_e\int_0^\infty G(t^\prime)\mathrm{d}t^\prime = J_e\eta= \int_0^\infty G(t^\prime)\frac{t^\prime}{\eta}\mathrm{d}t^\prime = \frac{1}{\eta} \int_0^\infty G(t^\prime)t^\prime\mathrm{d}t^\prime \end{equation}$

Therefore, we obtain:

$J_e = \frac{1}{\eta^2}\int_0^\infty tG(t)\mathrm{d}t$

$Q.E.D.$
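
As a check, consider a single-mode Maxwell model $G(t)=G_0\exp(-t/\tau)$ (an assumed example, with arbitrary parameter values): then $\eta=G_0\tau$ and $\int_0^\infty tG\,\mathrm{d}t=G_0\tau^2$, so the formula predicts $J_e=1/G_0$. A minimal numerical sketch:

```python
import numpy as np

# Assumed example: single-mode Maxwell model G(t) = G0 * exp(-t/tau),
# for which eta = G0*tau and Je = 1/G0.
G0, tau = 2.5, 0.8
t = np.linspace(0.0, 40.0 * tau, 200001)
G = G0 * np.exp(-t / tau)
dt = t[1] - t[0]

eta = np.sum(0.5 * (G[1:] + G[:-1])) * dt               # eta = Int G dt
Je = np.sum(0.5 * ((t * G)[1:] + (t * G)[:-1])) * dt / eta**2

print(eta, Je)  # eta ≈ 2.0 (= G0*tau), Je ≈ 0.4 (= 1/G0)
```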

Bending energy and persistence length

Persistence length, $L_p$, is a fundamental mechanical property that quantifies the bending stiffness of a polymer. It is defined as the characteristic length scale over which bond-orientation correlations decay. This correlation is expressed as the average cosine of the angle, $\theta$, between bonds separated by $s$ bonds along the chain (a contour distance $sl$, where $l$ is the bond length):

$\langle\cos(\theta(s))\rangle = \exp(-sl/L_p)$

To calculate $L_p$, we need to determine $\langle\cos(\theta(s))\rangle$. For adjacent bonds, where $s=1$, this simplifies to:

$\langle\cos(\theta)\rangle = \exp(-l/L_p)$

In simulations, the bending stiffness is often modeled using a bending energy, $U_b$. When $U_b$ is large, and excluded volume effects are negligible, we can calculate $\langle\cos(\theta)\rangle$ as:

$\langle\cos(\theta)\rangle=\frac{\int_0^\pi \cos(\theta)\sin(\theta)\exp(-\beta U_b)\mathrm{d}\theta}{\int_0^\pi\sin(\theta)\exp(-\beta U_b)\mathrm{d}\theta}$

The factor $\sin(\theta)$ represents a geometric weight. When two bonds form an angle $\theta$, the number of possible bond configurations in 3D space is proportional to $\sin(\theta)$.

A common example of bending energy is the harmonic potential:

$U_b(\theta)=\frac{1}{2}k \theta^2$

where $k$ is the stiffness constant in units of $k_BT$. For this potential, $\langle\cos(\theta)\rangle$ is given by:

$\frac{e^{\frac{3}{4 k}} \left(-2 \text{erf}\left(\frac{1}{\sqrt{k}}\right)+\text{erf}\left(\frac{1+i \pi  k}{\sqrt{k}}\right)+\text{erf}\left(\frac{1-i \pi  k}{\sqrt{k}}\right)\right)}{2 \left(-2 \text{erf}\left(\frac{1}{2 \sqrt{k}}\right)+\text{erf}\left(\frac{1+2 i \pi  k}{2 \sqrt{k}}\right)+\text{erf}\left(\frac{1-2 i \pi  k}{2 \sqrt{k}}\right)\right)}$

For large values of $k$ ($k\ge 6$), $\langle\cos(\theta)\rangle$ approaches the Langevin function, $L(k)$.
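
This convergence can be checked numerically (a minimal sketch using trapezoidal quadrature of the two integrals above; the specific $k$ values are chosen for illustration):

```python
import numpy as np

def mean_cos(k, n=200001):
    """<cos(theta)> for U_b = 0.5*k*theta^2 (k in units of kBT), by quadrature."""
    theta = np.linspace(0.0, np.pi, n)
    w = np.sin(theta) * np.exp(-0.5 * k * theta**2)  # Boltzmann * geometric weight
    num = np.cos(theta) * w
    # dtheta cancels in the ratio of the two trapezoidal sums:
    return np.sum(0.5 * (num[1:] + num[:-1])) / np.sum(0.5 * (w[1:] + w[:-1]))

def langevin(k):
    return 1.0 / np.tanh(k) - 1.0 / k

for k in (6.0, 10.0, 20.0):
    print(k, mean_cos(k), langevin(k))  # the two columns converge as k grows
```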
