As mentioned in the MUSIC FAQ, the MUSIC algorithm is based on the eigenstructure of the Cross Spectral Matrix (CSM) and its separation into two subspaces. The number $$N$$ of eigenvalues associated with the noise follows from an estimate of the signal-subspace dimension $$D = M - N$$, where $$M$$ is the number of array microphones. This estimate must be provided by the user.

The MUSIC approach begins with the following model assumption, which characterizes the incident signals at the $$M$$ array microphones as $$X = AF + W.$$ The vector $$X$$ represents the incident, Fourier-transformed signals. $$A$$ ($$M \times D$$) is the matrix containing the steering vectors $$g(\vec x_t,\omega_k)$$ for the potential source location $$\vec x_t$$ and the frequency $$\omega_k$$. $$F$$ ($$D$$ entries) is the vector representing the incident signals with amplitude and phase at an arbitrary reference point (e.g. the coordinate origin), and $$W$$ is the vector containing internal or external microphone noise.

Assuming the microphone noise is uncorrelated with the incident signals, this model leads to the Cross Spectral Matrix (CSM):

$$C \widehat{=} \overline{XX^{\ast}} = A \overline{FF^{\ast}} A^{\ast} + \overline{WW^{\ast}}$$

With uncorrelated microphone noise, the corresponding covariance matrix $$\overline{WW^{\ast}}$$ can be written as $$\overline{WW^{\ast}} = \sigma^2I$$, with $$\sigma^2$$ representing the noise variance and $$I$$ the identity matrix. As with OBF, the eigendecomposition $$C = V \Lambda V^{\ast}$$ is conducted, resulting in the diagonal matrix of eigenvalues $$\Lambda = \mathrm{diag}\{\lambda_1,\lambda_2,...,\lambda_M\}$$ and the matrix $$V = [V_1,...,V_M]$$, as well as its conjugate transpose $$V^{\ast}$$, containing the eigenvectors. Instead of using the signal eigenvalues and eigenvectors to describe the signal subspace, MUSIC uses only the eigenvectors spanning the noise subspace. As mentioned before, the number $$N$$ of all noise-related eigenvalues must be estimated to determine this subspace. All remaining eigenvalues of $$C$$ can then be assigned to the incident signals. A separation of the eigendecomposition is now possible:

$$C = \underbrace{\sum_s V_s \Lambda_s V^{\ast}_s}_{\text{signal part}} + \underbrace{\sum_n V_n \Lambda_n V^{\ast}_n}_{\text{noise part}}$$
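This separation can be sketched in numpy. The CSM here is built from made-up random data, and the signal-subspace dimension $$D$$ is simply assumed; `eigh` returns the eigenvalues of a Hermitian matrix in ascending order, so the first $$M - D$$ eigenvectors span the noise subspace:

```python
import numpy as np

rng = np.random.default_rng(0)
M, D = 8, 2                       # microphones, assumed signal-subspace dimension

# Hypothetical Hermitian CSM from random snapshots, for illustration only
X = rng.standard_normal((M, 100)) + 1j * rng.standard_normal((M, 100))
C = X @ X.conj().T / 100

# Eigendecomposition; eigenvalues come back in ascending order
lam, V = np.linalg.eigh(C)
Vn = V[:, :M - D]                 # noise eigenvectors (smallest eigenvalues)
Vs = V[:, M - D:]                 # signal eigenvectors (largest eigenvalues)

# The two partial sums reassemble the full CSM
C_noise  = Vn @ np.diag(lam[:M - D]) @ Vn.conj().T
C_signal = Vs @ np.diag(lam[M - D:]) @ Vs.conj().T
assert np.allclose(C_signal + C_noise, C)
```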

The spatial spectrum and thus the result of the MUSIC algorithm can be defined with the $$N$$ noise eigenvectors $$V_n$$:

$$b(\vec{x}_t,\omega_k) = \frac{1}{\sum_n g^{\ast}(\vec{x}_t,\omega_k)V_n V^{\ast}_n g(\vec{x}_t,\omega_k)}$$

If the steering vector points towards an actual sound source, the sum in the denominator reaches a local minimum, because the steering vector is then orthogonal to the noise-subspace eigenvectors. This results in a local maximum in the acoustic map.
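The full chain, from CSM to spatial spectrum, can be sketched for a simulated far-field linear array. The geometry (half-wavelength spacing), the source angle, the noise level, and the assumed signal-subspace dimension $$D$$ are all made-up assumptions for this illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
M, D = 12, 1            # microphones and assumed signal-subspace dimension
N = M - D               # number of noise eigenvalues

# Hypothetical linear array with half-wavelength spacing; the steering
# vector g depends only on the look angle theta in this far-field model
mic = np.arange(M)
def g(theta):
    return np.exp(-1j * np.pi * mic * np.sin(theta)) / np.sqrt(M)

# Simulate a CSM for one source at theta0 plus uncorrelated noise
theta0, snapshots = 0.3, 500
F = rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)
W = 0.1 * (rng.standard_normal((M, snapshots))
           + 1j * rng.standard_normal((M, snapshots)))
X = np.outer(g(theta0), F) + W
C = X @ X.conj().T / snapshots

# Noise subspace: eigenvectors of the N smallest eigenvalues
_, V = np.linalg.eigh(C)          # eigenvalues in ascending order
Vn = V[:, :N]

# MUSIC spatial spectrum b(theta) = 1 / (g* Vn Vn* g)
thetas = np.linspace(-np.pi / 2, np.pi / 2, 721)
b = np.array([1.0 / np.real(g(t).conj() @ Vn @ Vn.conj().T @ g(t))
              for t in thetas])

theta_hat = thetas[np.argmax(b)]  # the peak marks the source direction
```

At the true source angle the steering vector is (nearly) orthogonal to the noise subspace, so the denominator collapses and the spectrum peaks sharply, which is the superresolution property MUSIC is known for.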