Thank you very much for sharing this wonderful library with us! I would like to try Sparse Bayesian Learning (SBL) for room-acoustics localization, and I have already looked through the DOA examples you provide. I have two questions:
1. The MUSIC DOA estimation specifies the frequency range [300, 3500], and from my observation this parameter matters a lot. However, I could not find where the frequency of the source signal is specified in that example. If I want to generate the steering matrix of a uniform circular array, how do I determine the frequency (or the wavelength)? Is the signal a mix of multiple frequency components? If so, what are those frequencies? This is one of the input parameters of SBL.
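To make the question concrete, here is a minimal sketch of the narrowband steering vector I have in mind for a uniform circular array at a single frequency; the function name, geometry, and parameter values are my own assumptions, not part of pyroomacoustics:

```python
import numpy as np

def uca_steering_vector(n_mics, radius, freq, azimuth, c=343.0):
    """Narrowband steering vector of a uniform circular array (UCA).

    n_mics  : number of microphones equally spaced on the circle
    radius  : array radius in meters
    freq    : frequency in Hz of the narrowband component
    azimuth : source direction in radians, in the array plane
    c       : speed of sound in m/s
    """
    # Angular positions of the microphones around the circle
    mic_angles = 2 * np.pi * np.arange(n_mics) / n_mics
    # Plane-wave delay (in seconds) at each microphone relative to the center
    delays = radius * np.cos(azimuth - mic_angles) / c
    # Phase shift e^{-j 2 pi f tau} for each microphone
    return np.exp(-1j * 2 * np.pi * freq * delays)

# Hypothetical example: 8 mics, 5 cm radius, 1 kHz component, source at 45 deg
v = uca_steering_vector(8, 0.05, 1000.0, np.pi / 4)
```

So the steering matrix is frequency-dependent, which is why I am asking which frequency (or set of STFT-bin frequencies) the example implicitly uses.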
2. In that example, the received microphone signal (aroom.mic_array.signals) is first transformed into the time-frequency domain via the STFT. To use SBL, another input I need is the received signal with dimensions (#freq, #mic, #snapshot), and it must be steered correctly. I am not sure whether I should use the original time-domain signal (aroom.mic_array.signals) or the frequency-domain one. In either case, will the beam be steered correctly? Thank you very much!
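To illustrate the shape question, here is a hedged sketch of what I currently do, using scipy.signal.stft rather than the library's own STFT; the sampling rate, FFT size, and the random stand-in for aroom.mic_array.signals are all assumptions:

```python
import numpy as np
from scipy.signal import stft

fs = 16000    # sampling rate (assumed; in practice this would be aroom.fs)
nfft = 256    # STFT frame length (assumed)

# Stand-in for aroom.mic_array.signals: shape (n_mic, n_samples)
rng = np.random.default_rng(0)
signals = rng.standard_normal((8, fs))

# STFT of each channel; Zxx has shape (n_mic, n_freq, n_frames)
freqs, times, Zxx = stft(signals, fs=fs, nperseg=nfft)

# Rearrange to (#freq, #mic, #snapshot), treating each STFT frame as a snapshot
X = np.transpose(Zxx, (1, 0, 2))

# Keep only the bins inside the band of interest, e.g. [300, 3500] Hz
band = (freqs >= 300) & (freqs <= 3500)
X_band = X[band]
```

Is this (#freq, #mic, #snapshot) arrangement, restricted to the band, the right frequency-domain input, or should the time-domain signal be used directly?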