Optimising Gong's use for in-person meetings

I have brought Gong (web app on laptop) into in-person meetings so that the meeting is recorded. I'm not fussed about the video but have recorded the audio. The challenge, however, is that there's only one device in the room (my laptop), so 100% of the audio is attributed to me. Is there any best practice for associating individuals, and what they're saying, with specific parts of the audio? I wondered whether there could be some sort of 'calibration' at the beginning of the meeting (verbal introductions by each participant) so that speech recognition during the meeting could associate the audio with the correct participant.
Best answer by Nisha Baxi
When everyone’s in the same room on one device, Gong only gets one audio stream, so it can’t reliably split and attribute what’s said to different people. A quick round of verbal intros at the start is useful context for humans reviewing the call, but it doesn’t change how Gong auto-tags speakers today. For true per-speaker analytics you’d need separate audio paths (each person on their own device/dial-in, or a Zoom Room/Zoom Native setup where Gong can separate room speakers and you can manually relabel them). For pure in-person meetings, we recommend using the Gong mobile app’s meeting recorder on a phone placed in the middle of the table; it will still capture/transcribe everything, but think of it as a shared-room transcript rather than precise per-person stats.
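For anyone curious how the "calibration" idea could work outside Gong, the underlying technique is speaker diarization: enroll a short reference clip per participant, then label each audio frame with the closest-matching speaker. Production systems use learned speaker embeddings (e.g. pyannote.audio); the sketch below is a toy stand-in that uses a dominant-frequency feature instead, just to make the enroll-then-attribute flow concrete. Every name and signal here is an illustrative assumption, not Gong functionality.

```python
# Toy "calibration then attribution" sketch: each speaker records a short
# intro; later frames are assigned to whichever intro they most resemble.
# A dominant-frequency feature stands in for a real speaker embedding.
import numpy as np

SR = 8000  # sample rate, Hz

def tone(freq, seconds):
    """Synthesize a sine 'voice' at a given pitch (stand-in for real speech)."""
    t = np.arange(int(SR * seconds)) / SR
    return np.sin(2 * np.pi * freq * t)

def dominant_freq(frame):
    """Crude per-frame feature: the frequency bin with the most energy."""
    spectrum = np.abs(np.fft.rfft(frame))
    return np.fft.rfftfreq(len(frame), 1 / SR)[np.argmax(spectrum)]

def enroll(intros):
    """Calibration step: compute one reference feature per named participant."""
    return {name: dominant_freq(clip) for name, clip in intros.items()}

def attribute(audio, profiles, frame_len=1024):
    """Label each frame of the meeting with the closest enrolled speaker."""
    labels = []
    for start in range(0, len(audio) - frame_len + 1, frame_len):
        f = dominant_freq(audio[start:start + frame_len])
        labels.append(min(profiles, key=lambda n: abs(profiles[n] - f)))
    return labels

# Two 'speakers' with different pitches introduce themselves...
profiles = enroll({"alice": tone(220, 0.5), "bob": tone(440, 0.5)})
# ...then the meeting alternates between them.
meeting = np.concatenate([tone(220, 0.25), tone(440, 0.25)])
labels = attribute(meeting, profiles)  # per-frame speaker labels
```

Real voices overlap and drift far more than two sine tones, which is why single-microphone diarization remains unreliable and why separate audio paths per participant are still the dependable route.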