




Job Summary:
Design, develop, and maintain scalable Kafka-based data pipelines and event-driven applications, collaborating with cross-functional teams and resolving critical issues.

Key Highlights:
1. Design and development of scalable data pipelines with Kafka.
2. Collaboration with data, DevOps, and software engineers.
3. Resolution of critical Kafka issues and in-depth analysis.

Responsibilities:
* Design, develop, and maintain scalable Kafka-based data pipelines and event-driven applications (producers/consumers).
* Implement and manage Kafka connectors (JDBC, S3, etc.) for data integration.
* Work with Kafka streams, topics, schemas, and partitions.
* Integrate Kafka with other systems (databases, cloud services, Spark).
* Write code in Java, Scala, or Python for data processing.
* Collaborate with data, DevOps, and software engineers.
* Develop monitoring and alerting for Kafka infrastructure.
* Resolve critical and complex Kafka issues (L3/Tier 3).
* Conduct in-depth analysis to identify service failures and recurring incidents.
* Use tools such as Splunk, Grafana, and JMX for system health and observability.
* Deploy, configure, and maintain highly available Kafka clusters.
* Implement security best practices (SSL, Kerberos, Ranger).

Employment Type: Full-time
Salary: $45,000.00 per month

Application Question(s):
* English proficiency level
* Availability for hybrid work arrangement

Experience:
* Apache Kafka: 1 year (Preferred)

Work Location: Hybrid remote in Ciudad de México


