Room Simulation

Room

The three main classes are pyroomacoustics.room.Room, pyroomacoustics.soundsource.SoundSource, and pyroomacoustics.beamforming.MicrophoneArray. At a high level, a simulation scenario is created by first defining a room to which a few sound sources and a microphone array are attached. The actual audio is attached to the sources as raw audio samples.

Then, a simulation method is used to create artificial room impulse responses (RIR) between the sources and microphones. The current default method is the image source model, which considers the walls as perfect reflectors. An experimental hybrid simulator based on the image source method (ISM) [1] and ray tracing (RT) [2], [3] is also available. Ray tracing better captures the late reflections and can also model effects such as scattering.

The microphone signals are then created by convolving the audio samples associated with the sources with the appropriate RIR. Since the simulation is done on discrete-time signals, a sampling frequency is specified for the room and the sources it contains. Microphones can optionally operate at a different sampling frequency; a rate conversion is done in this case.
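
For example, here is a minimal sketch of a microphone running at a lower rate than the room; it assumes that the rate conversion described above is triggered by the fs keyword of add_microphone(), which is documented further below.

import pyroomacoustics as pra

# the room and the sources it contains are simulated at 16 kHz
room = pra.ShoeBox([9, 7.5, 3.5], fs=16000)

# this microphone records at 8 kHz; its signal is resampled accordingly
room.add_microphone([4.5, 3.7, 1.2], fs=8000)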

Simulating a Shoebox Room with the Image Source Model

We will first walk through the steps to simulate a shoebox-shaped room in 3D. We use the ISM to find all image sources up to a maximum specified order, and the room impulse responses (RIR) are generated from their positions.

The code for the full example can be found in examples/room_from_rt60.py.

Create the room

So-called shoebox rooms are parallelepipedic rooms with 4 or 6 walls (in 2D and 3D, respectively), all at right angles. They are defined by a single vector that contains the lengths of the walls. They have the advantage of being simple to define and very efficient to simulate. In the following example, we define a 9 m x 7.5 m x 3.5 m room. In addition, we use Sabine’s formula to find the wall energy absorption and the maximum order of the ISM required to achieve a desired reverberation time (RT60, i.e. the time it takes for the RIR to decay by 60 dB).

import pyroomacoustics as pra

# The desired reverberation time and dimensions of the room
rt60 = 0.5  # seconds
room_dim = [9, 7.5, 3.5]  # meters

# We invert Sabine's formula to obtain the parameters for the ISM simulator
e_absorption, max_order = pra.inverse_sabine(rt60, room_dim)

# Create the room
room = pra.ShoeBox(
    room_dim, fs=16000, materials=pra.Material(e_absorption), max_order=max_order
)

The second argument is the sampling frequency at which the RIR will be generated. Note that the default value of fs is 8 kHz. The third argument is the material of the walls, which itself takes the energy absorption as a parameter. The fourth and last argument is the maximum number of reflections allowed in the ISM.

Note

Note that Sabine’s formula is only an approximation and that the actually simulated RT60 may differ from the target value by quite a bit.
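
As a quick check, the theoretical value can be compared with the target, and the simulated RIR can be measured afterwards; this sketch uses the rt60_theory() and measure_rt60() methods of Room documented further below.

# theoretical prediction from the room geometry and materials
print("Predicted RT60 (Sabine):", room.rt60_theory(formula="sabine"))

# after room.compute_rir() has been called, the actual decay can be measured
# rt60_measured = room.measure_rt60()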

Warning

Until recently, rooms would take an absorption parameter that was actually not the energy absorption we use now. The absorption parameter is now deprecated and will be removed in the future.

Randomized Image Method

In highly symmetric shoebox rooms, the regularity of image sources’ positions leads to a monotonic convergence in the time arrival of far-field image pairs. This causes sweeping echoes. The randomized image method adds a small random displacement to the image source positions, so that they are no longer time-aligned, thus reducing sweeping echoes considerably.

To use the randomized method, set the flag use_rand_ism to True while creating a room. Additionally, the maximum displacement of the image sources can be chosen by setting the parameter max_rand_disp. The default value is 8 cm. For a full example, see examples/randomized_image_method.py.

import pyroomacoustics as pra

# The desired reverberation time and dimensions of the room
rt60 = 0.5  # seconds
room_dim = [5, 5, 5]  # meters

# We invert Sabine's formula to obtain the parameters for the ISM simulator
e_absorption, max_order = pra.inverse_sabine(rt60, room_dim)

# Create the room
room = pra.ShoeBox(
    room_dim, fs=16000, materials=pra.Material(e_absorption), max_order=max_order,
    use_rand_ism=True, max_rand_disp=0.05
)

Add sources and microphones

Sources are fairly straightforward to create. They take their location as a single mandatory argument, and a signal and start time as optional arguments. Here we create a source located at [2.5, 3.73, 1.76] within the room, which will play the content of the wav file speech.wav starting at 1.3 s into the simulation. The signal keyword argument to the add_source() method should be a one-dimensional numpy.ndarray containing the desired sound signal.

# import a mono wavfile as the source signal
# the sampling frequency should match that of the room
from scipy.io import wavfile
_, audio = wavfile.read('speech.wav')

# place the source in the room
room.add_source([2.5, 3.73, 1.76], signal=audio, delay=1.3)

The locations of the microphones in the array should be provided in a numpy nd-array of size (ndim, nmics), that is, each column contains the coordinates of one microphone. Note that the sampling frequency of the microphones can differ from that of the room, in which case resampling will occur. Here, we create an array with two microphones placed at [6.3, 4.87, 1.2] and [6.3, 4.93, 1.2].

# define the locations of the microphones
import numpy as np
mic_locs = np.c_[
    [6.3, 4.87, 1.2],  # mic 1
    [6.3, 4.93, 1.2],  # mic 2
]

# finally place the array in the room
room.add_microphone_array(mic_locs)

A number of routines exist to create regular array geometries in 2D.
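
For example, here is a sketch using the circular_2D_array() helper (one of these routines); the 2D coordinates are stacked with a constant height to place the array in the 3D room defined above.

import numpy as np
import pyroomacoustics as pra

# 6 microphones on a circle of radius 10 cm centered at (6.3, 4.9)
circ_2d = pra.circular_2D_array(center=[6.3, 4.9], M=6, phi0=0, radius=0.1)

# lift the 2D array to the height z = 1.2 m, giving an array of shape (3, 6)
mic_locs_circ = np.vstack([circ_2d, 1.2 * np.ones(6)])
room.add_microphone_array(mic_locs_circ)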

Adding source or microphone directivity

The directivity pattern of a source or microphone can be conveniently set through the directivity keyword argument.

First, a pyroomacoustics.directivities.Directivity object needs to be created. As of Sep 6, 2021, only frequency-independent directivities from the cardioid family are supported, namely figure-eight, hypercardioid, cardioid, and subcardioid.

Below is how a pyroomacoustics.directivities.Directivity object can be created, for example a hypercardioid pattern pointing at an azimuth angle of 90 degrees and a colatitude angle of 15 degrees.

# create directivity object
from pyroomacoustics.directivities import (
    DirectivityPattern,
    DirectionVector,
    CardioidFamily,
)
dir_obj = CardioidFamily(
    orientation=DirectionVector(azimuth=90, colatitude=15, degrees=True),
    pattern_enum=DirectivityPattern.HYPERCARDIOID,
)

After creating a pyroomacoustics.directivities.Directivity object, it is straightforward to set the directivity of a source, microphone, or microphone array, namely by using the directivity keyword argument.

For example, to set a source’s directivity:

# place the source in the room
room.add_source(position=[2.5, 3.73, 1.76], directivity=dir_obj)

To set a single microphone’s directivity:

# place the microphone in the room
room.add_microphone(loc=[2.5, 5, 1.76], directivity=dir_obj)

The same directivity pattern can be used for all microphones in an array:

# place microphone array in the room
import numpy as np
mic_locs = np.c_[
    [6.3, 4.87, 1.2],  # mic 1
    [6.3, 4.93, 1.2],  # mic 2
]
room.add_microphone_array(mic_locs, directivity=dir_obj)

Or a different directivity can be used for each microphone by passing a list of pyroomacoustics.directivities.Directivity objects:

# place the microphone array in the room
room.add_microphone_array(mic_locs, directivity=[dir_1, dir_2])

Warning

As of Sep 6, 2021, setting directivity patterns for sources and microphones is only supported for the image source method (ISM). Moreover, source directivities are only supported for shoebox-shaped rooms.

Create the Room Impulse Response

At this point, the RIRs are created by calling compute_rir(), which runs the ISM to generate all the image sources up to the required order and uses them to build the RIRs. The RIRs are stored in the rir attribute of room, a list of lists in which the outer list runs over microphones and the inner list over sources.

room.compute_rir()

# plot the RIR between mic 1 and source 0
import matplotlib.pyplot as plt
plt.plot(room.rir[1][0])
plt.show()

Warning

The simulator uses a fractional delay filter that introduces a global delay in the RIR. The delay can be obtained as follows.

global_delay = pra.constants.get("frac_delay_length") // 2
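
For instance, here is a small sketch of removing this constant offset from a computed RIR (it assumes the RIRs are already available in room.rir).

import numpy as np

global_delay = pra.constants.get("frac_delay_length") // 2

# drop the offset introduced by the fractional delay filter
rir_compensated = np.array(room.rir[1][0])[global_delay:]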

Simulate sound propagation

Calling simulate() convolves the signal of each source (when it is not None) with the corresponding room impulse response. The outputs of the convolutions are summed up at the microphones. The result is stored in the signals attribute of room.mic_array, with each row corresponding to one microphone.

room.simulate()

# plot signal at microphone 1
plt.plot(room.mic_array.signals[1,:])

Full Example

This example is partly extracted from ./examples/room_from_rt60.py.

import numpy as np
import matplotlib.pyplot as plt
import pyroomacoustics as pra
from scipy.io import wavfile

# The desired reverberation time and dimensions of the room
rt60_tgt = 0.3  # seconds
room_dim = [10, 7.5, 3.5]  # meters

# import a mono wavfile as the source signal
# the sampling frequency should match that of the room
fs, audio = wavfile.read("examples/samples/guitar_16k.wav")

# We invert Sabine's formula to obtain the parameters for the ISM simulator
e_absorption, max_order = pra.inverse_sabine(rt60_tgt, room_dim)

# Create the room
room = pra.ShoeBox(
    room_dim, fs=fs, materials=pra.Material(e_absorption), max_order=max_order
)

# place the source in the room
room.add_source([2.5, 3.73, 1.76], signal=audio, delay=0.5)

# define the locations of the microphones
mic_locs = np.c_[
    [6.3, 4.87, 1.2],  # mic 1
    [6.3, 4.93, 1.2],  # mic 2
]

# finally place the array in the room
room.add_microphone_array(mic_locs)

# Run the simulation (this will also build the RIR automatically)
room.simulate()

room.mic_array.to_wav(
    f"examples/samples/guitar_16k_reverb_{args.method}.wav",
    norm=True,
    bitdepth=np.int16,
)

# measure the reverberation time
rt60 = room.measure_rt60()
print("The desired RT60 was {}".format(rt60_tgt))
print("The measured RT60 is {}".format(rt60[1, 0]))

# Create a plot
plt.figure()

# plot one of the RIR. both can also be plotted using room.plot_rir()
rir_1_0 = room.rir[1][0]
plt.subplot(2, 1, 1)
plt.plot(np.arange(len(rir_1_0)) / room.fs, rir_1_0)
plt.title("The RIR from source 0 to mic 1")
plt.xlabel("Time [s]")

# plot signal at microphone 1
plt.subplot(2, 1, 2)
mic_1_signal = room.mic_array.signals[1, :]
plt.plot(np.arange(len(mic_1_signal)) / room.fs, mic_1_signal)
plt.title("Microphone 1 signal")
plt.xlabel("Time [s]")

plt.tight_layout()
plt.show()

Hybrid ISM/Ray Tracing Simulator

Warning

The hybrid simulator has not been thoroughly tested yet and should be used with care. The exact implementation and default settings may also change in the future. Currently, the default behavior of Room and ShoeBox has been kept as in previous versions of the package. Bugs and user experience can be reported on github.

The hybrid ISM/RT simulator uses ISM to simulate the early reflections in the RIR and RT for the diffuse tail. Our implementation is based on [2] and [3].

The simulator has the following features.

  • Scattering: Wall scattering can be defined by assigning a scattering coefficient to the walls together with the energy absorption (see the sketch after this list).

  • Multi-band: The simulation can be carried out with different parameters for different octave bands. The octave bands go from 125 Hz to half the sampling frequency.

  • Air absorption: The frequency-dependent absorption of the air can be turned on by providing the keyword argument air_absorption=True to the room constructor.
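
Below is a sketch of a wall material that combines an energy absorption with a scattering coefficient, used together with ray tracing; the scalar arguments to Material follow the Wall Materials section further down.

import pyroomacoustics as pra

# 20% of the energy is absorbed, 10% of the reflected energy is scattered
m = pra.Material(energy_absorption=0.2, scattering=0.1)

room = pra.ShoeBox(
    [9, 7.5, 3.5], fs=16000, materials=m, max_order=3, ray_tracing=True
)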

Here is a simple example using the hybrid simulator. We suggest using max_order=3 with the hybrid simulator.

# Create the room
room = pra.ShoeBox(
    room_dim,
    fs=16000,
    materials=pra.Material(e_absorption),
    max_order=3,
    ray_tracing=True,
    air_absorption=True,
)

# Activate the ray tracing
room.set_ray_tracing()

A few example programs are provided in ./examples.

  • ./examples/ray_tracing.py demonstrates use of ray tracing for rooms of different sizes and with different amounts of reverberation

  • ./examples/room_L_shape_3d_rt.py shows how to simulate a polyhedral room

  • ./examples/room_from_stl.py demonstrates how to import a model from an STL file

Wall Materials

The wall materials are handled by Material objects. A material is defined by at least one absorption coefficient that represents the ratio of sound energy absorbed by a wall upon reflection. A material may have multiple absorption coefficients corresponding to different absorptions at different octave bands. When only one coefficient is provided, the absorption is assumed to be uniform at all frequencies. In addition, materials may have one or more scattering coefficients corresponding to the ratio of energy scattered upon reflection.
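
A multi-band material can also be built directly from a list of coefficients. The dictionary layout with "coeffs" and "center_freqs" keys used below is an assumption; see pyroomacoustics.parameters.Material for the exact interface.

import pyroomacoustics as pra

# hypothetical band-wise energy absorption, one value per octave band
m = pra.Material(
    energy_absorption={
        "coeffs": [0.10, 0.15, 0.20, 0.25, 0.30, 0.35],
        "center_freqs": [125, 250, 500, 1000, 2000, 4000],
    },
    scattering=0.1,
)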

The materials can be defined by providing the coefficients directly, or by choosing a material from the materials database [2].

import pyroomacoustics as pra
m = pra.Material(energy_absorption="hard_surface")
room = pra.ShoeBox([9, 7.5, 3.5], fs=16000, materials=m, max_order=17)

We can use different materials for different walls. In this case, the materials should be provided in a dictionary. For a shoebox room, this can be done as follows. We use the make_materials() helper function to create a dict of Material objects.

import pyroomacoustics as pra
m = pra.make_materials(
    ceiling="hard_surface",
    floor="6mm_carpet",
    east="brickwork",
    west="brickwork",
    north="brickwork",
    south="brickwork",
)
room = pra.ShoeBox(
    [9, 7.5, 3.5], fs=16000, materials=m, max_order=17, air_absorption=True, ray_tracing=True
)

For a more complete example see examples/room_complex_wall_materials.py.

Note

For shoebox rooms, the walls are labelled as follows:

  • west/east for the walls in the y-z plane with small/large x coordinates, respectively

  • south/north for the walls in the x-z plane with small/large y coordinates, respectively

  • floor/ceiling for the walls in the x-y plane with small/large z coordinates, respectively

Controlling the signal-to-noise ratio

It is in general necessary to scale the signals from different sources to obtain a specific signal-to-noise or signal-to-interference ratio (SNR and SIR, respectively). This can be done by passing some options to the simulate() function. Because the relative amplitude of signals will change at different microphones due to propagation, it is necessary to choose a reference microphone. By default, this will be the first microphone in the array (index 0). The simplest choice is to choose the variance of the noise \(\sigma_n^2\) to achieve a desired SNR with respect to the cumulative signal from all sources. Assuming that the signals from all sources are scaled to have the same amplitude (e.g., unit amplitude) at the reference microphone, the SNR is defined as

\[\mathsf{SNR} = 10 \log_{10} \frac{K}{\sigma_n^2}\]

where \(K\) is the number of sources. For example, an SNR of 10 decibels (dB) can be obtained using the following code

room.simulate(reference_mic=0, snr=10)

Sometimes, more challenging normalizations are necessary. In that case, a custom callback function can be provided to simulate(). For example, we can imagine a scenario with n_src sources, out of which n_tgt are targets and the rest are interferers. We will assume all targets have unit variance and all interferers have equal variance \(\sigma_i^2\) (at the reference microphone). In addition, there is uncorrelated noise with variance \(\sigma_n^2\) at every microphone. We will define the SNR and SIR with respect to a single target source:

\[ \begin{align}\begin{aligned}\mathsf{SNR} & = 10 \log_{10} \frac{1}{\sigma_n^2}\\\mathsf{SIR} & = 10 \log_{10} \frac{1}{(\mathsf{n_{src}} - \mathsf{n_{tgt}}) \sigma_i^2}\end{aligned}\end{align} \]

The callback function callback_mix takes as its first argument an nd-array premix of shape (n_src, n_mics, n_samples) that contains the microphone signals prior to mixing. The signal propagated from the k-th source to the m-th microphone is contained in premix[k,m,:]. Optional arguments can be passed to the callback via the callback_mix_kwargs argument. Here is the code implementing the example described above.

# the extra arguments are given in a dictionary
callback_mix_kwargs = {
        'snr' : 30,  # SNR target is 30 decibels
        'sir' : 10,  # SIR target is 10 decibels
        'n_src' : 6,
        'n_tgt' : 2,
        'ref_mic' : 0,
        }

def callback_mix(premix, snr=0, sir=0, ref_mic=0, n_src=None, n_tgt=None):

    # first normalize all separate recordings to have unit power at the reference microphone
    p_mic_ref = np.std(premix[:,ref_mic,:], axis=1)
    premix /= p_mic_ref[:,None,None]

    # now compute the power of interference signal needed to achieve desired SIR
    sigma_i = np.sqrt(10 ** (- sir / 10) / (n_src - n_tgt))
    premix[n_tgt:n_src,:,:] *= sigma_i

    # compute noise variance
    sigma_n = np.sqrt(10 ** (- snr / 10))

    # Mix down the recorded signals
    mix = np.sum(premix[:n_src,:], axis=0) + sigma_n * np.random.randn(*premix.shape[1:])

    return mix

# Run the simulation
room.simulate(
        callback_mix=callback_mix,
        callback_mix_kwargs=callback_mix_kwargs,
        )
mics_signals = room.mic_array.signals

In addition, it is sometimes desirable to obtain the microphone signals of the individual sources, prior to mixing. For example, this is useful to evaluate the output of blind source separation algorithms. In this case, the return_premix argument should be set to True.

premix = room.simulate(return_premix=True)

Reverberation Time

The reverberation time (RT60) is defined as the time needed for the energy of the RIR to decrease by 60 dB. It is a useful measure of the amount of reverberation. We provide a measure_rt60() function to measure the RT60 of recorded or simulated RIRs.

This functionality is also integrated directly in the Room object as the measure_rt60() method.

# assuming the simulation has already been carried out
rt60 = room.measure_rt60()

for m in range(room.n_mics):
    for s in range(room.n_sources):
        print(
            "RT60 between the {}th mic and {}th source: {:.3f} s".format(m, s, rt60[m, s])
        )

Free-field simulation

You can also use this package to simulate free-field sound propagation between a set of sound sources and a set of microphones, without considering room effects. To this end, you can use the pyroomacoustics.room.AnechoicRoom class, which simply corresponds to setting the maximum image source order of the room simulation to zero. This allows for early development and testing of various audio-based algorithms, without worrying about room acoustics at first. Thanks to the modular framework of pyroomacoustics, room acoustics can easily be added after this first testing stage for more realistic simulations.

Use this if you can neglect room effects (e.g. you operate in an anechoic room or outdoors), or if you simply want to test your algorithm in the best-case scenario. The code below shows how to create and simulate an anechoic room. For a more involved example (testing the DOA algorithm MUSIC in an anechoic room), see ./examples/doa_anechoic_room.py.

# Create anechoic room.
room = pra.AnechoicRoom(fs=16000)

# Place the microphone array around the origin.
mic_locs = np.c_[
    [0.1, 0.1, 0],
    [-0.1, 0.1, 0],
    [-0.1, -0.1, 0],
    [0.1, -0.1, 0],
]
room.add_microphone_array(mic_locs)

# Add a source. We use a white noise signal for the source, and
# the source can be arbitrarily far because there are no walls.
x = np.random.randn(2**10)
room.add_source([10.0, 20.0, -20], signal=x)

# run the simulation
room.simulate()

References

[1] J. B. Allen and D. A. Berkley, "Image method for efficiently simulating small-room acoustics," J. Acoust. Soc. Am., vol. 65, no. 4, p. 943, 1979.

[2] M. Vorlaender, Auralization, 1st ed. Berlin: Springer-Verlag, 2008, pp. 1-340.

[3] D. Schroeder, Physically based real-time auralization of interactive virtual environments. PhD Thesis, RWTH Aachen University, 2011.

class pyroomacoustics.room.AnechoicRoom(dim=3, fs=8000, t0=0.0, sigma2_awgn=None, sources=None, mics=None, temperature=None, humidity=None, air_absorption=False)

Bases: ShoeBox

This class provides an API for creating an Anechoic “room” in 2D or 3D.

Parameters
  • dim (int) – Dimension of the room (2 or 3).

  • fs (int, optional) – The sampling frequency in Hz. Default is 8000.

  • t0 (float, optional) – The global starting time of the simulation in seconds. Default is 0.

  • sigma2_awgn (float, optional) – The variance of the additive white Gaussian noise added during simulation. By default, none is added.

  • sources (list of SoundSource objects, optional) – Sources to place in the room. Sources can be added after room creation with the add_source method by providing coordinates.

  • mics (MicrophoneArray object, optional) – The microphone array to place in the room. A single microphone or microphone array can be added after room creation with the add_microphone_array method.

  • temperature (float, optional) – The air temperature in the room in degree Celsius. By default, set so that speed of sound is 343 m/s.

  • humidity (float, optional) – The relative humidity of the air in the room (between 0 and 100). By default set to 0.

  • air_absorption (bool, optional) – If set to True, absorption of sound energy by the air will be simulated.

get_bbox()

Returns a bounding box for the mics and sources, for plotting.

is_inside(p)

Overloaded function to eliminate testing if objects are inside “room”.

plot(**kwargs)

Overloaded function to issue warning when img_order is given.

plot_walls(ax)

Overloaded function to eliminate wall plotting.

class pyroomacoustics.room.Room(walls, fs=8000, t0=0.0, max_order=1, sigma2_awgn=None, sources=None, mics=None, temperature=None, humidity=None, air_absorption=False, ray_tracing=False, use_rand_ism=False, max_rand_disp=0.08)

Bases: object

A Room object has as attributes a collection of pyroomacoustics.wall.Wall objects, a pyroomacoustics.beamforming.MicrophoneArray array, and a list of pyroomacoustics.soundsource.SoundSource. The room can be two dimensional (2D), in which case the walls are simply line segments. A factory method pyroomacoustics.room.Room.from_corners() can be used to create the room from a polygon. In three dimensions (3D), the walls are two dimensional polygons, namely a collection of points lying on a common plane. Creating rooms in 3D is more tedious and for convenience a method pyroomacoustics.room.Room.extrude() is provided to lift a 2D room into 3D space by adding vertical walls and parallel floor and ceiling.

The Room is sub-classed by pyroomacoustics.room.ShoeBox which creates a rectangular (2D) or parallelepipedic (3D) room. Such rooms benefit from an efficient algorithm for the image source method.
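
A minimal sketch of this non-shoebox workflow is shown below; the material arguments are assumptions, see the from_corners() and extrude() signatures further down.

import numpy as np
import pyroomacoustics as pra

# an L-shaped floor plan, one column per corner, anti-clockwise oriented
corners = np.array([[0, 0], [6, 0], [6, 4], [3, 4], [3, 2], [0, 2]]).T

# build the 2D room, then lift it to 3D with a height of 2.5 m
room = pra.Room.from_corners(
    corners, fs=16000, materials=pra.Material(0.2), max_order=3
)
room.extrude(2.5, materials=pra.Material(0.2))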

Attribute walls

(Wall array) list of walls forming the room

Attribute fs

(int) sampling frequency

Attribute max_order

(int) the maximum computed order for images

Attribute sources

(SoundSource array) list of sound sources

Attribute mics

(MicrophoneArray) array of microphones

Attribute corners

(numpy.ndarray 2xN or 3xN, N=number of walls) array containing a point belonging to each wall, used for calculations

Attribute absorption

(numpy.ndarray size N, N=number of walls) array containing the absorption factor for each wall, used for calculations

Attribute dim

(int) dimension of the room (2 or 3 meaning 2D or 3D)

Attribute wallsId

(int dictionary) stores the mapping “wall name -> wall id (in the array walls)”

Parameters
  • walls (list of Wall or Wall2D objects) – The walls forming the room.

  • fs (int, optional) – The sampling frequency in Hz. Default is 8000.

  • t0 (float, optional) – The global starting time of the simulation in seconds. Default is 0.

  • max_order (int, optional) – The maximum reflection order in the image source model. Default is 1, namely direct sound and first order reflections.

  • sigma2_awgn (float, optional) – The variance of the additive white Gaussian noise added during simulation. By default, none is added.

  • sources (list of SoundSource objects, optional) – Sources to place in the room. Sources can be added after room creation with the add_source method by providing coordinates.

  • mics (MicrophoneArray object, optional) – The microphone array to place in the room. A single microphone or microphone array can be added after room creation with the add_microphone_array method.

  • temperature (float, optional) – The air temperature in the room in degree Celsius. By default, set so that speed of sound is 343 m/s.

  • humidity (float, optional) – The relative humidity of the air in the room (between 0 and 100). By default set to 0.

  • air_absorption (bool, optional) – If set to True, absorption of sound energy by the air will be simulated.

  • ray_tracing (bool, optional) – If set to True, the ray tracing simulator will be used along with image source model.

  • use_rand_ism (bool, optional) – If set to True, image source positions will have a small random displacement to prevent sweeping echoes

  • max_rand_disp (float, optional) – The maximum displacement of the image sources when the randomized image method is used (default: 0.08 m).

add(obj)

Adds a sound source or microphone to a room

Parameters

obj (SoundSource or Microphone object) – The object to add

Returns

The room is returned for further tweaking.

Return type

Room

add_microphone(loc, fs=None, directivity=None)

Adds a single microphone in the room.

Parameters
  • loc (array_like or ndarray) – The location of the microphone. The length should be the same as the room dimension.

  • fs (float, optional) – The sampling frequency of the microphone, if different from that of the room.

Returns

The room is returned for further tweaking.

Return type

Room

add_microphone_array(mic_array, directivity=None)

Adds a microphone array (i.e. several microphones) in the room.

Parameters

mic_array (array_like or ndarray or MicrophoneArray object) –

The array can be provided as an array of size (dim, n_mics), where dim is the dimension of the room and n_mics is the number of microphones in the array.

As an alternative, a MicrophoneArray can be provided.

Returns

The room is returned for further tweaking.

Return type

Room

add_soundsource(sndsrc, directivity=None)

Adds a pyroomacoustics.soundsource.SoundSource object to the room.

Parameters

sndsrc (SoundSource object) – The SoundSource object to add to the room

add_source(position, signal=None, delay=0, directivity=None)

Adds a sound source given by its position in the room. Optionally a source signal and a delay can be provided.

Parameters
  • position (ndarray, shape: (2,) or (3,)) – The location of the source in the room

  • signal (ndarray, shape: (n_samples,), optional) – The signal played by the source

  • delay (float, optional) – A time delay until the source signal starts in the simulation

Returns

The room is returned for further tweaking.

Return type

Room

compute_rir()

Compute the room impulse response between every source and microphone.

direct_snr(x, source=0)

Computes the direct Signal-to-Noise Ratio

extrude(height, v_vec=None, absorption=None, materials=None)

Creates a 3D room by extruding a 2D polygon. The polygon is typically the floor of the room and will have z-coordinate zero. The ceiling is obtained by translating the floor along the extrusion direction (see v_vec).

Parameters
  • height (float) – The extrusion height

  • v_vec (array-like 1D length 3, optional) – A unit vector. An orientation for the extrusion direction. The ceiling will be placed as a translation of the floor with respect to this vector (The default is [0,0,1]).

  • absorption (float or array-like, optional) – Absorption coefficients for all the walls. If a scalar, then all the walls will have the same absorption. If an array is given, it should have as many elements as there will be walls, that is the number of vertices of the polygon plus two. The two last elements are for the floor and the ceiling, respectively. It is recommended to use materials instead of absorption parameter. (Default: 1)

  • materials (dict) – Absorption coefficients for floor and ceiling. This parameter overrides absorption. (Default: {“floor”: 1, “ceiling”: 1})

classmethod from_corners(corners, absorption=None, fs=8000, t0=0.0, max_order=1, sigma2_awgn=None, sources=None, mics=None, materials=None, **kwargs)

Creates a 2D room by giving an array of corners.

Parameters
  • corners ((np.array dim 2xN, N>2)) – list of corners, must be anti-clockwise oriented

  • absorption (float array or float) – list of absorption factors for each wall or a single value for all walls (deprecated, use materials instead)

  • fs (int, optional) – The sampling frequency in Hz. Default is 8000.

  • t0 (float, optional) – The global starting time of the simulation in seconds. Default is 0.

  • max_order (int, optional) – The maximum reflection order in the image source model. Default is 1, namely direct sound and first order reflections.

  • sigma2_awgn (float, optional) – The variance of the additive white Gaussian noise added during simulation. By default, none is added.

  • sources (list of SoundSource objects, optional) – Sources to place in the room. Sources can be added after room creation with the add_source method by providing coordinates.

  • mics (MicrophoneArray object, optional) – The microphone array to place in the room. A single microphone or microphone array can be added after room creation with the add_microphone_array method.

  • kwargs (key, value mappings) – Other keyword arguments accepted by the Room class

Return type

Instance of a 2D room

get_bbox()

Returns a bounding box for the room

get_volume()

Computes the volume of the room

Returns

the volume of the room

Return type

float

get_wall_by_name(name)

Returns the instance of the wall by giving its name.

Parameters

name (string) – name of the wall

Returns

instance of the wall with this name

Return type

Wall

image_source_model()
is_inside(p, include_borders=True)

Checks if the given point is inside the room.

Parameters
  • p (array_like, length 2 or 3) – point to be tested

  • include_borders (bool, optional) – set true if a point on the wall must be considered inside the room

Return type

True if the given point is inside the room, False otherwise.

property is_multi_band
measure_rt60(decay_db=60, plot=False)

Measures the reverberation time (RT60) of the simulated RIR.

Parameters
  • decay_db (float) – This is the actual decay of the RIR used for the computation. The default is 60, meaning that the RT60 is exactly what we measure. In some cases, the signal may be too short to measure 60 dB decay. In this case, we can specify a lower value. For example, with 30 dB, the RT60 is twice the time measured.

  • plot (bool) – Displays a graph of the Schroeder curve and the estimated RT60.

Returns

An array that contains the measured RT60 for all the RIR.

Return type

ndarray (n_mics, n_sources)

property n_mics
property n_sources
plot(img_order=None, freq=None, figsize=None, no_axis=False, mic_marker_size=10, plot_directivity=True, ax=None, **kwargs)

Plots the room with its walls, microphones, sources and images

plot_rir(select=None, FD=False, kind=None)

Plot room impulse responses. Compute if not done already.

Parameters
  • select (list of tuples OR int) – List of RIR pairs (mic, src) to plot, e.g. [(0,0), (0,1)]. Or int to plot RIR from particular microphone to all sources. Note that microphones and sources are zero-indexed. Default is to plot all microphone-source pairs.

  • FD (bool, optional) – If True, the transfer function is plotted instead of the impulse response. Default is False.

  • kind (str, optional) – The value can be “ir”, “tf”, or “spec” which will plot impulse response, transfer function, and spectrogram, respectively. If this option is specified, then the value of FD is ignored. Default is “ir”.

Returns

  • fig (matplotlib figure) – Figure object for further modifications

  • axes (matplotlib list of axes objects) – Axes for further modifications
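
For example (a sketch, assuming the RIRs have been computed):

# plot the spectrogram of the RIR between microphone 0 and source 0
fig, axes = room.plot_rir(select=[(0, 0)], kind="spec")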

ray_tracing()
rt60_theory(formula='sabine')

Compute the theoretical reverberation time (RT60) for the room.

Parameters

formula (str) – The formula to use for the calculation, ‘sabine’ (default) or ‘eyring’

set_air_absorption(coefficients=None)

Activates or deactivates air absorption in the simulation.

Parameters

coefficients (list of float) – List of air absorption coefficients, one per octave band

set_ray_tracing(n_rays=None, receiver_radius=0.5, energy_thres=1e-07, time_thres=10.0, hist_bin_size=0.004)

Activates the ray tracer.

Parameters
  • n_rays (int, optional) – The number of rays to shoot in the simulation

  • receiver_radius (float, optional) – The radius of the sphere around the microphone in which to integrate the energy (default: 0.5 m)

  • energy_thres (float, optional) – The energy threshold at which rays are stopped (default: 1e-7)

  • time_thres (float, optional) – The maximum time of flight of rays (default: 10 s)

  • hist_bin_size (float) – The time granularity of bins in the energy histogram (default: 4 ms)
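
A short usage sketch (the parameter values are arbitrary):

# shoot more rays and integrate energy in a smaller sphere around each mic
room.set_ray_tracing(n_rays=100000, receiver_radius=0.25, hist_bin_size=0.002)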

set_sound_speed(c)

Sets the speed of sound unconditionally

simulate(snr=None, reference_mic=0, callback_mix=None, callback_mix_kwargs={}, return_premix=False, recompute_rir=False)

Simulates the microphone signal at every microphone in the array

Parameters
  • reference_mic (int, optional) – The index of the reference microphone to use for SNR computations. The default reference microphone is the first one (index 0)

  • snr (float, optional) –

    The target signal-to-noise ratio (SNR) in decibels at the reference microphone. When this option is used the argument pyroomacoustics.room.Room.sigma2_awgn is ignored. The variance of every source at the reference microphone is normalized to one and the variance of the noise \(\sigma_n^2\) is chosen

    \[\mathsf{SNR} = 10 \log_{10} \frac{ K }{ \sigma_n^2 }\]

    The value of pyroomacoustics.room.Room.sigma2_awgn is also set to \(\sigma_n^2\) automatically

  • callback_mix (func, optional) – A function that will perform the mix, it takes as first argument an array of shape (n_sources, n_mics, n_samples) that contains the source signals convolved with the room impulse response prior to mixture at the microphone. It should return an array of shape (n_mics, n_samples) containing the mixed microphone signals. If such a function is provided, the snr option is ignored and pyroomacoustics.room.Room.sigma2_awgn is set to None.

  • callback_mix_kwargs (dict, optional) – A dictionary that contains optional arguments for callback_mix function

  • return_premix (bool, optional) – If set to True, the function will return an array of shape (n_sources, n_mics, n_samples) containing the microphone signals with individual sources, convolved with the room impulse response but prior to mixing

  • recompute_rir (bool, optional) – If set to True, the room impulse responses will be recomputed prior to simulation

Returns

Depends on the value of return_premix option

Return type

Nothing or an array of shape (n_sources, n_mics, n_samples)

unset_air_absorption()

Deactivates air absorption in the simulation

unset_ray_tracing()

Deactivates the ray tracer

property volume
wall_area(wall)

Computes the area of a 3D planar wall.

Parameters

wall (Wall instance) – the wall object that is defined in 3D space

class pyroomacoustics.room.ShoeBox(p, fs=8000, t0=0.0, absorption=None, max_order=1, sigma2_awgn=None, sources=None, mics=None, materials=None, temperature=None, humidity=None, air_absorption=False, ray_tracing=False, use_rand_ism=False, max_rand_disp=0.08)

Bases: Room

This class provides an API for creating a ShoeBox room in 2D or 3D.

Parameters
  • p (array_like) – Length 2 (width, length) or 3 (width, length, height) depending on the desired dimension of the room.

  • fs (int, optional) – The sampling frequency in Hz. Default is 8000.

  • t0 (float, optional) – The global starting time of the simulation in seconds. Default is 0.

  • absorption (float) – Average amplitude absorption of walls. Note that this parameter is deprecated; use materials instead!

  • max_order (int, optional) – The maximum reflection order in the image source model. Default is 1, namely direct sound and first order reflections.

  • sigma2_awgn (float, optional) – The variance of the additive white Gaussian noise added during simulation. By default, none is added.

  • sources (list of SoundSource objects, optional) – Sources to place in the room. Sources can be added after room creation with the add_source method by providing coordinates.

  • mics (MicrophoneArray object, optional) – The microphone array to place in the room. A single microphone or microphone array can be added after room creation with the add_microphone_array method.

  • materials (Material object or dict of Material objects) – See pyroomacoustics.parameters.Material. If providing a dict, you must provide a Material object for each wall: ‘east’, ‘west’, ‘north’, ‘south’, ‘ceiling’ (3D), ‘floor’ (3D).

  • temperature (float, optional) – The air temperature in the room in degree Celsius. By default, set so that speed of sound is 343 m/s.

  • humidity (float, optional) – The relative humidity of the air in the room (between 0 and 100). By default set to 0.

  • air_absorption (bool, optional) – If set to True, absorption of sound energy by the air will be simulated.

  • ray_tracing (bool, optional) – If set to True, the ray tracing simulator will be used along with image source model.

  • use_rand_ism (bool, optional) – If set to True, image source positions will have a small random displacement to prevent sweeping echoes

  • max_rand_disp (float, optional) – The maximum displacement of the image sources when the randomized image method is used (default: 0.08 m).

extrude(height)

Overload the extrude method from 3D rooms

get_volume()

Computes the volume of a room

Return type

the volume in cubic unit

is_inside(pos)
Parameters

pos (array_like) – The position to test in an array of size 2 for a 2D room and 3 for a 3D room

Return type

True if pos is a point in the room, False otherwise.

pyroomacoustics.room.find_non_convex_walls(walls)

Finds the walls that are not in the convex hull

Parameters

walls (list of Wall objects) – The walls that compose the room

Returns

The indices of the walls not in the convex hull

Return type

list of int

pyroomacoustics.room.sequence_generation(volume, duration, c, fs, max_rate=10000)
pyroomacoustics.room.wall_factory(corners, absorption, scattering, name='')

Call the correct method according to wall dimension