Can't reproduce simulation results with real world data #226

Open
PascalPolygon opened this issue Jun 14, 2021 · 3 comments
Comments

@PascalPolygon commented Jun 14, 2021

Hello,

I am using pyroomacoustics to estimate the direction of arrival (DOA) of a signal with a 3-mic array. The mics start recording within a short interval of each other, and I synchronize the waveforms at the end of the recording. I set up the 3-mic array in pyroomacoustics and placed the source at the same location as in the physical setup. I used an app on an Android device to measure the reverberation time of the room and used the inverse Sabine function provided by pra to get the absorption and max order. I also estimate the SNR using a statistical method. Despite the terrible SNR, SRP and MUSIC perform reasonably well in the simulation, but when I run them on audio from my actual recordings they do not work.
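For context, the simulation side boils down to roughly the following (a condensed sketch of the standard pyroomacoustics calls; the room dimensions, positions, RT60, and signal here are placeholders, not my exact values):

```python
import numpy as np
import pyroomacoustics as pra

fs = 16000
nfft = 256
rt60 = 0.5                      # placeholder: reverberation time measured with the Android app
room_dim = [6.0, 5.0, 3.0]      # placeholder room size

# Absorption and max reflection order from the measured reverberation time
e_absorption, max_order = pra.inverse_sabine(rt60, room_dim)

room = pra.ShoeBox(room_dim, fs=fs,
                   materials=pra.Material(e_absorption),
                   max_order=max_order)

# 3-mic array and source at (placeholder) positions matching the physical setup
mic_locs = np.c_[[2.0, 2.0, 1.2], [2.7, 2.0, 1.2], [2.35, 2.6, 1.2]]
room.add_microphone_array(pra.MicrophoneArray(mic_locs, fs))

source_signal = np.random.randn(2 * fs)          # placeholder 2 s test signal
room.add_source([4.0, 3.5, 1.5], signal=source_signal)

room.simulate(snr=0)                             # inject noise at the estimated (poor) SNR

# STFT of each channel, then SRP-PHAT (or MUSIC) restricted to azimuth
X = np.array([pra.transform.stft.analysis(sig, nfft, nfft // 2).T
              for sig in room.mic_array.signals])
doa = pra.doa.algorithms['SRP'](mic_locs[:2], fs, nfft, c=343.0, num_src=1)
doa.locate_sources(X, freq_range=[300.0, 3500.0])
print("estimated azimuth (deg):", np.degrees(doa.azimuth_recon))
```

On the real recordings I run the same STFT + DOA steps, just with the aligned waveforms in place of `room.mic_array.signals`.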
Simulation results: [simulation_doa plot]

Real-world results: [real_doa plot]

All the code is in this notebook:
https://github.com/PascalPolygon/pyroomacoustics_IRL/blob/master/ice_lab_doa_sim_v_real.ipynb

Note: I'm only interested in the azimuth angle.

Any suggestions?

Thank you!

@fakufaku (Collaborator) commented Jun 17, 2021

Hi @PascalPolygon, do I understand correctly that you are using unsynchronized microphones (i.e., they do not share the same clock for their A/D converters)? In that case it is not straightforward to apply DOA estimation: in addition to having no common time origin, the sampling frequencies of the different devices may be subtly different.
Doing DOA estimation with unsynchronized devices is a lot more challenging.
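For a sense of scale (illustrative numbers only, not a measurement of your phones): even a small sampling-clock tolerance accumulates a time offset over the length of a recording.

```python
# Illustrative only: how quickly a small sampling-rate mismatch accumulates
ppm_offset = 10                       # assume a 10 ppm clock tolerance between two phones
drift_per_second = ppm_offset * 1e-6  # seconds of drift per second of audio
print(f"drift after 60 s of recording: {1e3 * 60 * drift_per_second:.2f} ms")  # -> 0.60 ms
```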

@PascalPolygon (Author)

Hello @fakufaku

Thanks for the response. I use the built-in mics of 3 Android devices (one mic per phone). I save the system time on each phone at the start of recording and, knowing the offset between their clocks, compute the start-of-recording latency between the phones. I then shift the signals by these relative latencies; this is my method for synchronizing the waveforms (roughly as sketched below). However, I have no fix for subtle variations in the sampling rate. Do you have any thoughts on this approach?
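Concretely, the alignment step is roughly the following (a simplified sketch; the recordings and start times here are placeholders for what I actually log on the phones):

```python
import numpy as np

fs = 16000  # assumed common nominal sampling rate on all three phones

# Placeholders for the raw recordings and the (clock-offset-corrected) start times
rng = np.random.default_rng(0)
recordings = [rng.standard_normal(5 * fs) for _ in range(3)]   # ~5 s per phone
start_times = [12.003, 12.000, 12.017]                         # seconds, illustrative

# Align every signal to the phone that started recording last
t_ref = max(start_times)
aligned = []
for rec, t0 in zip(recordings, start_times):
    lag = int(round((t_ref - t0) * fs))   # samples to drop from an earlier-starting recording
    aligned.append(rec[lag:])

# Truncate to a common length so the channels can be stacked for DOA
n = min(len(a) for a in aligned)
signals = np.stack([a[:n] for a in aligned])   # shape (3, n)
```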

@fakufaku (Collaborator)

@PascalPolygon

I see, this is an interesting yet challenging setup 😄 I am not convinced that the phones' clocks are sufficiently precise for the task at hand. Have you tried assessing what the synchronization error is? You could do that by playing pulses (or a sine sweep) from a known location and checking whether the clock offset given by the phones' internal clocks matches the expected time delays due to propagation.
Also, the two closest microphones are 70 cm apart; if we assume the speed of sound is 343 m/s, the longest propagation delay is 0.7 / 343 ≈ 2 ms. So if the clocks can't give you millisecond-accurate synchronization, you won't be able to do DOA at all, and for good DOA estimation you will likely need sub-millisecond synchronization. You could also evaluate how the synchronization error affects the DOA estimate.
I think in your case synchronization is most likely the dominant source of error (compared to sampling frequency mismatch); nevertheless, if you are interested in sampling frequency offset evaluation, you could check this.
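As a rough recipe for that check (an illustrative sketch only; positions, sampling rate, and the recorded pulse below are placeholders), you could cross-correlate pairs of clock-aligned channels and compare the measured lag with the geometric one:

```python
import numpy as np
from scipy.signal import correlate, correlation_lags

fs = 16000          # placeholder sampling rate
c = 343.0           # speed of sound (m/s)

def measured_delay(x_i, x_j, fs):
    """Delay (s) of channel i relative to channel j, from the cross-correlation peak."""
    xc = correlate(x_i, x_j, mode="full")
    lags = correlation_lags(len(x_i), len(x_j), mode="full")
    return lags[np.argmax(np.abs(xc))] / fs

def expected_delay(src, mic_i, mic_j, c=343.0):
    """Geometric arrival-time difference (s) between two mics for a source at a known spot."""
    return (np.linalg.norm(src - mic_i) - np.linalg.norm(src - mic_j)) / c

# Placeholder geometry (metres) and clock-aligned recordings of a short test pulse
src = np.array([4.0, 3.5, 1.5])
mics = np.array([[2.0, 2.0, 1.2], [2.7, 2.0, 1.2], [2.35, 2.6, 1.2]])
pulse = np.r_[1.0, np.zeros(fs - 1)]
signals = np.vstack([np.roll(pulse, k) for k in (0, 33, 10)])  # stand-in for the real recordings

# Residual synchronization error between mic 0 and mic 1
err_ms = 1e3 * (measured_delay(signals[0], signals[1], fs)
                - expected_delay(src, mics[0], mics[1], c))
print(f"residual sync error mic0-mic1: {err_ms:.2f} ms")
```

If that residual stays well below the ~2 ms propagation delay of your array, the clock-based alignment is probably good enough to try DOA on.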
