Deconvolution of GPC data

In my previous post, I developed a GPC post-processing program that uses the continuous wavelet transform (CWT) to detect and correct baselines, and the Mark-Houwink (MH) equation to convert GPC data into a molecular weight distribution. In this follow-up work, I determine the molecular weight distributions in a two-step synthesis process. In the first step, polymers are pre-polymerized with a distribution denoted as $P_1(M)$. In the second step, these polymers continue to grow, ultimately yielding a final chain-length distribution $Q(M)$. The goal here is to extract the molecular weight distribution for the second step, denoted as $P_2(M)$.

I present a straightforward method to calculate the molecular weight distribution of the second block using the GPC data from the preceding block and the final diblock copolymer. Assuming that we already know the distribution of the preceding block, $P_1(n)$, it is generally expected that the growth of the second block depends on the preceding chain length $n$. Consequently, the joint probability density function (pdf) for the two blocks is given by
$$ Q(n, m) = P_1(n)P_2(m, n)$$
and the pdf of the final diblock copolymer is obtained as
$$ Q(x) = \int Q(n, x-n) \mathrm{d}n $$

It is reasonable to assume that $P_2(m, n)$ can be factorized into a product of two functions, say $f(m)g(n)$, by neglecting higher-order correlations between $m$ and $n$. In this factorization, $g(n)$ acts as a scaling factor on the distribution $P_1(n)$, and the calculation of $Q(x)$ becomes a convolution. The simplest case is $g(n)=1$, meaning that the growth of the second block is independent of the length of the preceding block. Alternatively, one might assume $g(n) \sim n^{-1}$ to account for diffusion effects, implying that shorter preceding chains tend to grow a longer second block.

The evaluation of the pdf for the second block proceeds in three steps:

  1. Determine the range of molecular weights for the second block as $(x_{min} - n_{max},\, x_{max} - n_{min})$, where $x$ is the chain length of the final diblock copolymer and $n$ is the length of the preceding block.
  2. Interpolate the GPC-derived distributions onto an evenly spaced molecular weight grid, setting any negative values to zero.
  3. Deconvolute $Q(x)$ with $P_1(n)$.

This method provides a straightforward pathway to isolate the molecular weight distribution of the second synthetic block from the overall diblock copolymer distribution.
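As a minimal sketch of steps 2 and 3, the deconvolution can be performed by FFT division with a small Wiener-style regularizer. The grid, the Gaussian test distributions, and the regularizer `eps` below are illustrative assumptions rather than values taken from real GPC traces:

```python
import numpy as np

def deconvolve(Q, P1, eps=1e-9):
    """Recover P2 from Q = P1 * P2 (discrete convolution on an even grid)."""
    n = len(Q)
    FQ = np.fft.rfft(Q, n)
    FP = np.fft.rfft(P1, n)
    # Regularized division avoids blow-up where |FP| is tiny.
    P2 = np.fft.irfft(FQ * np.conj(FP) / (np.abs(FP)**2 + eps), n)
    P2[P2 < 0] = 0.0        # clip unphysical negative values
    return P2 / P2.sum()    # renormalize to a pdf

# Synthetic check: convolve two Gaussians, then recover the second one.
x = np.arange(400)
g = lambda mu, s: np.exp(-0.5 * ((x - mu) / s)**2)
P1 = g(50, 5) / g(50, 5).sum()
P2_true = g(80, 8) / g(80, 8).sum()
Q = np.convolve(P1, P2_true)[:len(x)]  # final distribution peaks near 130
P2 = deconvolve(Q, P1)
print(np.argmax(P2))  # peak recovered near 80
```

In practice, `Q` and `P1` would be the interpolated GPC-derived distributions from step 2.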

Phenomenological relation for the coil-globule transition of a polymer chain

Consider a polymer chain dissolved in a solvent, where its behavior is influenced by the Flory parameter, denoted by $\chi$. As the interaction parameter $\chi$ varies, the chain undergoes several conformational transitions:

• For $\chi > \chi_c$, the polymer collapses into a dense, globular structure with a radius of gyration scaling as $R_g \sim N^{1/3}$.

• At the critical point—commonly known as the $\theta$ point—where $\chi \sim \chi_c$, the chain behaves like an ideal random coil, characterized by $R_g \sim N^{1/2}$.

• In a good solvent, when $\chi < \chi_c$, the polymer chain is swollen and stretched, with $R_g \sim N^{3/5}$.

A “universal” expression that captures how the radius of gyration $R_g$ changes with $\chi$ is given by

  $R_g(\chi) = R_g^\mathrm{glob} + \frac{R_g^\mathrm{coil}-R_g^\mathrm{glob}}{1 + \exp[(\chi-\chi_c)/\Delta \chi]}$,

where $\Delta \chi$ quantifies the width of the conformational transition region and $\chi_c$ locates the transition.
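As a quick numerical sketch of this crossover (all parameter values below are illustrative placeholders, not fitted quantities):

```python
import numpy as np

def Rg(chi, chi_c=0.5, d_chi=0.05, Rg_glob=2.0, Rg_coil=10.0):
    """Sigmoidal crossover of the radius of gyration across the theta point."""
    return Rg_glob + (Rg_coil - Rg_glob) / (1.0 + np.exp((chi - chi_c) / d_chi))

# Far below chi_c the coil value is recovered; far above, the globule value;
# at chi = chi_c the chain sits midway between the two.
print(Rg(-5.0), Rg(0.5), Rg(5.0))
```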

To estimate the window width $\Delta \chi$, we first define an order parameter that measures deviations of the polymer’s size from its value at the $\theta$ point. We write

  $R = R_0 (1 + m)$,

with $R_0 \sim N^{1/2}b$, where $b$ is the monomer size and $m$ represents the fractional deviation from the ideal size.

Next, we expand the free energy $F(m)$ in powers of $m$. Close to the $\theta$ point, the expansion takes the form

  $\frac{F(m)}{k_BT} \simeq C_1 N (\chi-\chi_c) m^2 + C_2 N m^4 + \cdots$,

where $C_1$ and $C_2$ are constants, and $k_BT$ is the thermal energy.

In the mean-field (infinite-chain) limit, the phase transition is sharp. For finite systems, however, fluctuations smear the transition, producing rounded behavior over a finite window $\Delta \chi$ in $|\chi-\chi_c|$.

At the edge of the transition, the quadratic and quartic terms in the free energy become comparable. Equating these contributions gives

  $C_1 N\,\Delta\chi\,m^2 \sim C_2 N\,m^4,$

which implies

  $m^2 \sim \frac{C_1}{C_2}\,\Delta\chi$.

Furthermore, the transition is significantly broadened when the free energy barrier, estimated by the quartic term, is of the order of unity (in units of $k_BT$). That is, when

  $N C_2\, m^4 \sim O(1).$

Substituting our earlier estimate for $m^2$ into this condition, we obtain

  $N C_2 \left(\frac{C_1}{C_2}\Delta\chi\right)^2 \sim O(1),$

which simplifies to

  $\Delta\chi^2 \sim \frac{1}{N}.$

Thus, the rounding of the transition in terms of $\chi$ scales as

  $\Delta\chi \sim \frac{1}{\sqrt{N}}.$
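Substituting the derived width back into both balance conditions confirms the scaling; a small sketch with arbitrary (hypothetical) constants $C_1$, $C_2$ and chain length $N$:

```python
# Plug the estimates back in and check that both conditions are O(1).
C1, C2, N = 1.7, 0.9, 10_000
d_chi = (C2 / (C1**2 * N))**0.5   # derived width, scales as N**-0.5
m2 = (C1 / C2) * d_chi            # m^2 from balancing quadratic and quartic terms
quad = C1 * N * d_chi * m2        # quadratic contribution at the transition edge
quart = C2 * N * m2**2            # quartic contribution (the barrier)
print(quad, quart)                # both equal 1 by construction
```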

HDR Moon 03-19-2025

Technical Approach:

Preparation of Source Images

  • Capture Three Photos:
    • Full Moon: Take a clear photograph of the full moon.
    • Two Waning Moons: Photograph the waning moon twice.
      • Overexposed Waning Moon: For one of the waning moon images, apply an exposure compensation of +3 levels to create an overexposed effect, which will enhance the halo effect around the moon.

Distribution of segments on Gaussian chain

This analysis focuses on the probability distribution function, $P_i(\mathbf{r}_i-\mathbf{r}_\mathrm{cm})$, of the $i$th segment relative to the center of mass in an ideal chain. An ideal chain is modeled as a multidimensional random walk with independent steps. Each step is assumed Gaussian with root-mean-square length $b$; in $d$ dimensions, each Cartesian component of a step is drawn from $\mathcal{N}(0, b^2/d)$, so that $\langle|\mathbf{b}_j|^2\rangle=b^2$.

Let $\mathbf{b}_i=\mathbf{r}_{i+1}-\mathbf{r}_{i}$ represent the $i$th bond vector. Taking $\mathbf{r}_1$ as the origin, the position of the $i$th segment is given by:

$\mathbf{r}_i=\sum_{j=1}^{i-1} \mathbf{b}_j$

The center of mass, $\mathbf{r}_\mathrm{cm}$, is calculated as:

$\mathbf{r}_\mathrm{cm}=\frac{1}{N}\sum_{i=1}^{N} \mathbf{r}_i = \frac{1}{N}\sum_{j=1}^{N-1}(N-j)\mathbf{b}_j$

Therefore, the displacement of the $i$th segment relative to the center of mass is:

$\mathbf{r}_i-\mathbf{r}_\mathrm{cm}=\sum_{j=1}^{i-1}\frac{j}{N}\mathbf{b}_j-\sum_{j=i}^{N-1}\frac{N-j}{N}\mathbf{b}_j$

(The minus sign on the second sum is immaterial for the distribution, since each $\mathbf{b}_j$ is symmetric about zero.)

If $X$ is a Gaussian random variable with variance $\sigma^2$, then $aX$ is also a Gaussian random variable with variance $a^2\sigma^2$. Using this property, we can write the characteristic function for $P_i(\mathbf{r}_i-\mathbf{r}_\mathrm{cm})$ of a $d$-dimensional ideal chain:

$\phi_i(\mathbf{q})=\prod_{j} \phi_{\mathbf{b}_j'}(\mathbf{q})=\exp\left(-\frac{1}{2}\mathbf{q}^T\left(\sum_{j=1}^{N-1}\Sigma_j\right)\mathbf{q}\right)$

where $\mathbf{b}'_j=\frac{j}{N}\mathbf{b}_j$ for $j\le i-1$ and $\mathbf{b}'_j=\frac{N-j}{N}\mathbf{b}_j$ (up to an immaterial sign) for $i\le j \le N-1$. Here $\phi_{\mathbf{b}_j'}(\mathbf{q})=\exp(-\frac{1}{2}\mathbf{q}^T\Sigma_j\mathbf{q})$ is the characteristic function of the distribution of the rescaled bond $j$, with $\Sigma_j=\frac{j^2 b^2}{d N^2}\mathbb{I}_d$ for $j\le i-1$ and $\Sigma_j=\frac{(N-j)^2 b^2}{d N^2}\mathbb{I}_d$ for $i\le j \le N-1$, where $\mathbb{I}_d$ is the $d$-dimensional identity matrix. Setting $b=1$ and $d=3$ for convenience, we obtain:

$\phi_i(\mathbf{q})=$ $\exp \left(-\frac{\left(6 i^2-6 i (N+1)+2 N^2+3 N+1\right) \left(q_x^2+q_y^2+q_z^2\right)}{36 N}\right)$

The distribution corresponding to this characteristic function is still Gaussian, with $\Sigma=\frac{b_i^2}{3} \mathbb{I}_3$, where the equivalent bond length $b_i$ satisfies $b_i^2=\frac{6 i^2-6 i (N+1)+2 N^2+3 N+1}{6 N}$. The 6th moment, averaged over segments, is $\frac{1}{N}\sum_{i=1}^N \langle(\mathbf{r}_i-\mathbf{r}_\mathrm{cm})^6\rangle=\frac{58 N^6-273 N^4+462 N^2-247}{1944 N^3}$. For large $N$, only the leading term, $\frac{29}{972} N^3$, survives. For $N=20$, the formula gives $235.886$, which agrees with simulations. As a further check, averaging the second moment over segments yields $R_g^2=\frac{N^2-1}{6N}$; for $N=5$ this gives $0.8$, compared with the continuous-chain value $Nb^2/6 = 5/6 \approx 0.833$.

Here is the simulation code:

import numpy as np

# 100,000 ideal chains of N=20 segments in 3D; per-component std 1/sqrt(3) gives b=1
ch = np.random.normal(size=(100000, 20, 3), scale=1/3**0.5)
ch[:, 0, :] = 0                        # place the first segment at the origin
ch = ch.cumsum(axis=1)                 # accumulate bond vectors into positions
ch -= ch.mean(axis=1, keepdims=True)   # displacement from the center of mass
m6 = np.mean(np.linalg.norm(ch, axis=-1)**6)  # ~235.9 for N=20
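The closed-form expression for the sixth moment can be evaluated directly; for $N=20$ it reproduces the value quoted above:

```python
# Exact 6th moment averaged over segments, from the closed-form expression.
N = 20
m6_exact = (58*N**6 - 273*N**4 + 462*N**2 - 247) / (1944 * N**3)
print(round(m6_exact, 3))   # 235.886
```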

Eigenvalues of circulant matrices

A circulant matrix is defined as:

$C=\begin{bmatrix}c_{0}&c_{n-1}&\dots &c_{2}&c_{1}\\c_{1}&c_{0}&c_{n-1}&&c_{2}\\\vdots &c_{1}&c_{0}&\ddots &\vdots \\c_{n-2}&&\ddots &\ddots &c_{n-1}\\c_{n-1}&c_{n-2}&\dots &c_{1}&c_{0}\\\end{bmatrix}$

where $C_{j, k}=c_{(j-k) \bmod n}$. The $k$-th eigenvalue $\lambda_k$ and eigenvector $x_k$ satisfy $C\cdot x_k=\lambda_k x_k$, which can be written componentwise as $n$ equations:

$\sum_{j=0}^{m-1}c_{m-j}x_j+\sum_{j=m}^{n-1}c_{n-j+m}x_j=\lambda_k x_m\quad m=0,1,\dots,n-1$

with $c_n=c_0$, where $x_m$ is the $m$-th component of the eigenvector $x_k$. By changing the dummy summing variables ($j\to m-j$ and $j\to n-j+m$), we obtain:

$\sum_{j=1}^{m}c_j x_{m-j} +\sum_{j=m+1}^{n}c_j x_{n+m-j}=\lambda_k x_m$

with $m=0,1,2,\dots,n-1$. We can “guess” a solution where $x_j=\omega^j$, which transforms the equation into:

$\begin{align}&\sum_{j=1}^{m}c_j \omega^{m-j} +\sum_{j=m+1}^{n}c_j \omega^{n+m-j}=\lambda_k \omega^m\\ \leftrightarrow &\sum_{j=1}^{m}c_j \omega^{-j} +\omega^{n}\sum_{j=m+1}^{n}c_j \omega^{-j}=\lambda_k\end{align}$

Let $\omega$ be an $n$-th root of unity, e.g., $\omega = \exp(-2\pi\mathbb{i}/n)$; then $\omega^{n}=1$. This leads to the eigenvalue:

$\lambda = \sum_{j=0}^{n-1}\omega^{-j} c_j$

with the corresponding eigenvector:

$x= (1, \exp(-2\pi\mathbb{i}/n), \exp(-2\pi\mathbb{i}/n)^2,\dots,\exp(-2\pi\mathbb{i}/n)^{n-1})^T = (1,\omega,\omega^2,\dots,\omega^{n-1})^T$

The $k$-th eigenpair is generated by choosing the root $\omega^{k} = \exp(-2\pi k\mathbb{i}/n)$ with $k\in [0,n-1]$, resulting in the $k$-th eigenvalue

$\lambda_k = \sum_{j=0}^{n-1}c_j \exp(2\pi kj\mathbb{i}/n)$

and

$x_k = (1, \exp(-2\pi k\mathbb{i}/n), \exp(-2\pi k\mathbb{i}/n)^2,\dots,\exp(-2\pi k\mathbb{i}/n)^{n-1})^T$

The eigenvector matrix is simply the DFT matrix, so all circulant matrices share the same set of eigenvectors. It is easy to verify that circulant matrices possess the following properties:

If $A$ and $B$ are circulant matrices, then:

  1. $AB=BA=W^\ast\Gamma W$ where $W$ is the DFT matrix and $\Gamma=\Gamma_A\Gamma_B$, with $\Gamma_i$ representing the diagonal matrix consisting of the eigenvalues of $i$; $\Gamma_A = WAW^\ast$.
  2. $B+A=A+B=W^\ast\Omega W$, where $\Omega=\Gamma_A+\Gamma_B$.
  3. If $\mathrm{det}(A)\ne 0$, then $A^{-1}=W^\ast \Gamma_A^{-1}W$.

The proof is straightforward:

  1. $AB=W^\ast \Gamma_AW W^\ast\Gamma_BW=W^\ast\Gamma_A\Gamma_BW=W^\ast \Gamma_B\Gamma_AW=BA$
  2. $W(A+B)W^\ast=\Gamma_A+\Gamma_B$.
  3. $AW^\ast \Gamma_A^{-1}W=W^\ast \Gamma_AW W^\ast \Gamma_A^{-1}W=\mathbb{I}$
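These formulas and properties can be checked numerically; a minimal NumPy sketch (the size $n$ and the entries of $c$ are arbitrary):

```python
import numpy as np

def circulant(c):
    """Build C with entries C[j, k] = c[(j - k) % n]."""
    n = len(c)
    j, k = np.indices((n, n))
    return c[(j - k) % n]

n = 8
rng = np.random.default_rng(0)
cA, cB = rng.random(n), rng.random(n)
A, B = circulant(cA), circulant(cB)

# Eigenpairs: lambda_k = sum_j c_j exp(2*pi*i*k*j/n) = n * ifft(c)[k],
# with eigenvector components x_j = exp(-2*pi*i*k*j/n).
k = 3
x = np.exp(-2j * np.pi * k * np.arange(n) / n)
lam = n * np.fft.ifft(cA)
print(np.allclose(A @ x, lam[k] * x))  # True

# Circulant matrices commute, since they share the DFT eigenvectors.
print(np.allclose(A @ B, B @ A))       # True
```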