Hello everyone,

I’m exploring how to build an event streaming integration that links our backend systems with Gong’s platform for real-time insights. Specifically, I’m looking for architectural patterns and tools for ingesting high-volume call data while keeping latency low and tolerating failures.
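For context, here’s roughly what I have in mind on the producer side. This is just a minimal sketch using confluent-kafka; the broker address, the "call-events" topic name, and the payload are all placeholders, not anything Gong-specific:

```python
# Minimal producer sketch: broker address, topic, and payload are placeholders.
import json
from confluent_kafka import Producer

producer = Producer({
    "bootstrap.servers": "localhost:9092",  # placeholder broker address
    "acks": "all",                # wait for all in-sync replicas (fault tolerance)
    "enable.idempotence": True,   # avoid duplicates when the client retries
    "linger.ms": 5,               # small batching window: trades a little latency for throughput
    "compression.type": "lz4",    # cheap compression for high-volume payloads
})

def delivery_report(err, msg):
    # Invoked once per message to surface broker-side delivery failures.
    if err is not None:
        print(f"Delivery failed for {msg.key()}: {err}")

event = {"call_id": "abc-123", "duration_sec": 347}  # hypothetical call event
producer.produce(
    "call-events",
    key=event["call_id"].encode(),  # key by call ID so one call stays on one partition
    value=json.dumps(event).encode(),
    callback=delivery_report,
)
producer.flush()  # block until outstanding messages are delivered
```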

On the Kafka consumer side, how are others setting up consumer groups, retention policies, and schema evolution? Are there established best practices or pitfalls to watch for, especially around managing offsets, handling retry logic, and monitoring throughput?
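My current thinking for the consumer is manual offset commits after processing, plus bounded retries and a dead-letter topic for poison messages. Again a rough sketch under the same assumptions; the group ID, topic names, and process_event() are placeholders:

```python
# Consumer-group sketch: commit offsets only after processing, with bounded
# retries and a dead-letter topic. Names are placeholders, not Gong-specific.
import json
from confluent_kafka import Consumer, Producer, KafkaError

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "call-ingest",       # instances sharing this ID split the partitions
    "enable.auto.commit": False,     # commit only after successful processing
    "auto.offset.reset": "earliest", # start from the beginning on first run
})
dlq = Producer({"bootstrap.servers": "localhost:9092"})

def process_event(event: dict) -> None:
    ...  # placeholder for the actual ingestion logic

consumer.subscribe(["call-events"])
try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None:
            continue
        if msg.error():
            if msg.error().code() != KafkaError._PARTITION_EOF:
                print(f"Consumer error: {msg.error()}")
            continue
        for attempt in range(3):  # bounded retries before giving up
            try:
                process_event(json.loads(msg.value()))
                break
            except Exception:
                if attempt == 2:
                    # Park the poison message on a dead-letter topic so one
                    # bad record doesn't block the whole partition.
                    dlq.produce("call-events-dlq", key=msg.key(), value=msg.value())
                    dlq.flush()
        # Commit only once the message is handled one way or the other.
        consumer.commit(message=msg)
finally:
    consumer.close()
```

Does commit-after-processing plus a dead-letter topic hold up at high volume, or do people prefer a different retry strategy here?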

I’d appreciate examples, tooling recommendations, or lessons learned that could help us build a scalable, reliable streaming pipeline.

Thanks in advance for any help!

