pyroomacoustics.beamforming module

class pyroomacoustics.beamforming.Beamformer(R, fs, N=1024, Lg=None, hop=None, zpf=0, zpb=0)

Bases: pyroomacoustics.beamforming.MicrophoneArray

At some point, in some nice way, the design methods should also go here. Probably with generic arguments.

Parameters:
  • R (numpy.ndarray) – The microphone positions
  • fs (int) – Sampling frequency
  • N (int, optional) – Length of FFT, i.e. number of frequency-domain (FD) beamforming weights, equally spaced. Defaults to 1024.
  • Lg (int, optional) – Length of time-domain filters. Defaults to N.
  • hop (int, optional) – Hop length for frequency domain processing. Defaults to N/2.
  • zpf (int, optional) – Front zero padding length for frequency domain processing. Default is 0.
  • zpb (int, optional) – Back zero padding length for frequency domain processing. Default is 0.
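
A minimal construction sketch (the array geometry and sampling rate below are arbitrary; circular_2D_array is documented further down this page):

    import pyroomacoustics as pra

    fs = 16000
    # hypothetical 6-microphone circular array, radius 5 cm, centered at (2, 3)
    R = pra.beamforming.circular_2D_array(center=[2., 3.], M=6, phi0=0., radius=0.05)
    mics = pra.beamforming.Beamformer(R, fs, N=1024, Lg=1024)
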
far_field_weights(phi)

This method computes the beamforming weights for a source in the far field, i.e. at infinity.

Parameters:phi (float) – direction of the beam

filters_from_weights(non_causal=0.0)

Compute time-domain filters from frequency domain weights.

Parameters:non_causal (float, optional) – ratio of filter coefficients used for non-causal part
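
As a sketch, the frequency-domain weights set by far_field_weights above can be converted to time-domain filters (the steering angle and array geometry are arbitrary):

    import numpy as np
    import pyroomacoustics as pra

    fs = 16000
    R = pra.beamforming.circular_2D_array(center=[2., 3.], M=6, phi0=0., radius=0.05)
    mics = pra.beamforming.Beamformer(R, fs, N=512, Lg=512)

    mics.far_field_weights(np.pi / 4.)        # steer a far-field beam towards 45 degrees
    mics.filters_from_weights(non_causal=0.)  # convert the FD weights to time-domain filters
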
plot(sum_ir=False, FD=True)
plot_beam_response()
plot_response_from_point(x, legend=None)
process(FD=False)
rake_delay_and_sum_weights(source, interferer=None, R_n=None, attn=True, ff=False)
rake_distortionless_filters(source, interferer, R_n, delay=0.03, epsilon=0.005)

Compute time-domain filters of a beamformer minimizing noise and interference while forcing a distortionless response towards the source.

rake_max_sinr_filters(source, interferer, R_n, epsilon=0.005, delay=0.0)

Compute the time-domain filters of an SINR-maximizing beamformer.

rake_max_sinr_weights(source, interferer=None, R_n=None, rcond=0.0, ff=False, attn=True)

This method computes a beamformer focusing on a number of specific sources and ignoring a number of interferers.

Parameters:
  • source – source locations
  • interferer – interferer locations
rake_max_udr_filters(source, interferer=None, R_n=None, delay=0.03, epsilon=0.005)

Compute directly the time-domain filters maximizing the Useful-to-Detrimental Ratio (UDR).

This beamformer is not practical. It maximizes the UDR in the time domain directly, without imposing a flat response towards the source of interest. This results in severe distortion of the desired signal.

Parameters:
  • source (pyroomacoustics.SoundSource) – the desired source
  • interferer (pyroomacoustics.SoundSource, optional) – the interfering source
  • R_n (ndarray, optional) – the noise covariance matrix, it should be (M * Lg)x(M * Lg) where M is the number of sensors and Lg the filter length
  • delay (float, optional) – the signal delay introduced by the beamformer (default 0.03 s)
  • epsilon (float) –
rake_max_udr_weights(source, interferer=None, R_n=None, ff=False, attn=True)
rake_mvdr_filters(source, interferer, R_n, delay=0.03, epsilon=0.005)

Compute the time-domain filters of the minimum variance distortionless response beamformer.
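
A minimal usage sketch, assuming the ShoeBox room class from the rest of the package; the room geometry, source positions, noise power, and the mics.M / mics.Lg attribute names are assumptions here, with R_n taken as a scaled identity of size (M * Lg) x (M * Lg) as described for the other rake methods:

    import numpy as np
    import pyroomacoustics as pra

    fs = 16000
    room = pra.ShoeBox([6., 4.], fs=fs, max_order=4)  # hypothetical 2D room
    room.add_source([1.0, 1.5])   # desired source
    room.add_source([4.5, 3.0])   # interferer

    R = pra.beamforming.circular_2D_array(center=[3., 2.], M=6, phi0=0., radius=0.05)
    mics = pra.beamforming.Beamformer(R, fs, N=1024, Lg=512)
    room.add_microphone_array(mics)
    room.compute_rir()  # populates the image sources used by the rake methods

    sigma2_n = 1e-6  # hypothetical noise power
    R_n = sigma2_n * np.eye(mics.M * mics.Lg)  # (M * Lg) x (M * Lg) noise covariance
    mics.rake_mvdr_filters(room.sources[0], room.sources[1], R_n, delay=0.03)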

rake_one_forcing_filters(sources, interferers, R_n, epsilon=0.005)

Compute the time-domain filters of a beamformer with unit response towards multiple sources.

rake_one_forcing_weights(source, interferer=None, R_n=None, ff=False, attn=True)
rake_perceptual_filters(source, interferer=None, R_n=None, delay=0.03, d_relax=0.035, epsilon=0.005)

Compute directly the time-domain filters for a perceptually motivated beamformer. The beamformer minimizes noise and interference, but relaxes the response of the filter within the 30 ms following the delay.

response(phi_list, frequency)
response_from_point(x, frequency)
snr(source, interferer, f, R_n=None, dB=False)
steering_vector_2D(frequency, phi, dist, attn=False)
steering_vector_2D_from_point(frequency, source, attn=True, ff=False)

Creates a steering vector for a particular frequency and source

Parameters:
  • frequency
  • source – location in cartesian coordinates
  • attn – include attenuation factor if True
  • ff – uses a far-field distance if True
Returns:

A 2x1 ndarray containing the steering vector.

udr(source, interferer, f, R_n=None, dB=False)
weights_from_filters()
pyroomacoustics.beamforming.H(A, **kwargs)

Returns the conjugate (Hermitian) transpose of a matrix.
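
A trivial sketch:

    import numpy as np
    import pyroomacoustics as pra

    A = np.array([[1. + 2.j, 3.],
                  [4.j,      5.]])
    print(pra.beamforming.H(A))  # conjugate (Hermitian) transpose of A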

class pyroomacoustics.beamforming.MicrophoneArray(R, fs)

Bases: object

Microphone array class.

record(signals, fs)

This simulates the recording of the signals by the microphones. In particular, if the microphones and the room simulation do not use the same sampling frequency, down/up-sampling is done here.

Parameters:
  • signals – An ndarray with as many rows as there are microphones.
  • fs – the sampling frequency of the signals.
to_wav(filename, mono=False, norm=False, bitdepth=np.float)

Save all the signals to wav files.

Parameters:
  • filename (str) – the name of the file
  • mono (bool, optional) – if True, only the center channel floor(M / 2) is saved (default False)
  • norm (bool, optional) – if True, the signal is normalized to fit in the dynamic range (default False)
  • bitdepth (int, optional) – the format of the output samples [np.int8/16/32/64 or np.float (default)]
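
A short sketch combining record and to_wav (the array geometry and the random test signals are hypothetical):

    import numpy as np
    import pyroomacoustics as pra

    fs = 16000
    R = pra.beamforming.linear_2D_array(center=[2., 1.5], M=4, phi=0., d=0.1)
    mics = pra.beamforming.MicrophoneArray(R, fs)

    signals = np.random.randn(4, fs)  # one row per microphone, 1 s of noise
    mics.record(signals, fs)          # same fs here, so no resampling is needed
    mics.to_wav('array_output.wav', norm=True, bitdepth=np.int16)
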
pyroomacoustics.beamforming.circular_2D_array(center, M, phi0, radius)

Creates an array of uniformly spaced points on a circle in 2D

Parameters:
  • center (array_like) – The center of the array
  • M (int) – The number of points
  • phi0 (float) – The counterclockwise rotation of the first element in the array (from the x-axis)
  • radius (float) – The radius of the array
Returns:

The array of points

Return type:

ndarray (2, M)
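
For example (arbitrary values):

    import pyroomacoustics as pra

    # 8 points on a circle of radius 10 cm centered at the origin
    R = pra.beamforming.circular_2D_array(center=[0., 0.], M=8, phi0=0., radius=0.1)
    print(R.shape)  # (2, 8)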

pyroomacoustics.beamforming.distance(x, y)

Computes the distance matrix E.

E[i,j] = sqrt(sum((x[:,i]-y[:,j])**2)). x and y are D x N ndarrays containing N D-dimensional vectors.
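
For example (random points, D = 2):

    import numpy as np
    import pyroomacoustics as pra

    x = np.random.randn(2, 4)  # 4 two-dimensional points
    y = np.random.randn(2, 4)  # 4 two-dimensional points
    E = pra.beamforming.distance(x, y)
    print(E.shape)  # (4, 4); E[i, j] is the distance between x[:, i] and y[:, j]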

pyroomacoustics.beamforming.fir_approximation_ls(weights, T, n1, n2)
pyroomacoustics.beamforming.linear_2D_array(center, M, phi, d)

Creates an array of uniformly spaced points on a line in 2D

Parameters:
  • center (array_like) – The center of the array
  • M (int) – The number of points
  • phi (float) – The counterclockwise rotation of the array (from the x-axis)
  • d (float) – The distance between neighboring points
Returns:

The array of points

Return type:

ndarray (2, M)
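
For example (arbitrary values):

    import pyroomacoustics as pra

    # 4 points spaced 10 cm apart along the x-axis, centered at (2, 1.5)
    R = pra.beamforming.linear_2D_array(center=[2., 1.5], M=4, phi=0., d=0.1)
    print(R.shape)  # (2, 4)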

pyroomacoustics.beamforming.mdot(*args)

Left-to-right associative matrix multiplication of multiple 2D ndarrays.
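
For example:

    import numpy as np
    import pyroomacoustics as pra

    A = np.random.randn(2, 3)
    B = np.random.randn(3, 4)
    C = np.random.randn(4, 5)
    D = pra.beamforming.mdot(A, B, C)  # same as A.dot(B).dot(C)
    print(D.shape)  # (2, 5)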

pyroomacoustics.beamforming.poisson_2D_array(center, M, d)

Creates an array of 2D positions drawn from a Poisson process.

Parameters:
  • center (array_like) – The center of the array
  • M (int) – The number of points
  • d (float) – The average distance between neighboring points
Returns:

The array of points

Return type:

ndarray (2, M)

pyroomacoustics.beamforming.spiral_2D_array(center, M, radius=1.0, divi=3, angle=None)

Generate an array of points placed on a spiral

Parameters:
  • center (array_like) – location of the center of the array
  • M (int) – number of microphones
  • radius (float) – microphones are contained within a circle of this radius (default 1)
  • divi (int) – number of rotations of the spiral (default 3)
  • angle (float) – the angle offset of the spiral (default random)
Returns:

The array of points

Return type:

ndarray (2, M)

pyroomacoustics.beamforming.square_2D_array(center, M, N, phi, d)

Creates an array of uniformly spaced grid points in 2D

Parameters:
  • center (array_like) – The center of the array
  • M (int) – The number of points in the first dimension
  • N (int) – The number of points in the second dimension
  • phi (float) – The counterclockwise rotation of the array (from the x-axis)
  • d (float) – The distance between neighboring points
Returns:

The array of points

Return type:

ndarray (2, M * N)
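
For example (arbitrary values):

    import pyroomacoustics as pra

    # a 3 x 4 grid with 10 cm spacing, centered at the origin and not rotated
    R = pra.beamforming.square_2D_array(center=[0., 0.], M=3, N=4, phi=0., d=0.1)
    print(R.shape)  # (2, 12)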

pyroomacoustics.beamforming.sumcols(A)

Sums the columns of a matrix (np.array).

The output is a 2D np.array of dimensions M x 1.
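
For example:

    import numpy as np
    import pyroomacoustics as pra

    A = np.arange(6.).reshape(2, 3)
    print(pra.beamforming.sumcols(A))  # [[ 3.], [12.]], shape (2, 1)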

pyroomacoustics.beamforming.unit_vec2D(phi)