
Data Engineer

Indeed
Full-time
Onsite
No experience requirement
No degree requirement
Melchor Ocampo 125-97, Centro, 64000 Monterrey, N.L., Mexico

Description

Summary: We are seeking a Middle Data Engineer to build, optimize, and maintain scalable data pipelines for a complex data migration program in a cloud environment.

Highlights:
1. Design, build, and maintain ETL/ELT pipelines using AWS Glue and dbt
2. Collaborate with data architects and analysts on critical data migration
3. Work with large-scale data processing in a cloud environment

Project overview: The program is a multi-phase data migration initiative aimed at replacing legacy capital markets systems with a modern platform ecosystem. It includes the migration of critical datasets across custody, clearing and settlement, derivatives processing, and CCP operations. The project spans 18 months and follows an incremental approach with a strong emphasis on data integrity, reconciliation, and traceability.

Position overview: We are looking for a Middle Data Engineer to support a complex data migration program. The role focuses on building, optimizing, and maintaining data pipelines that enable the extraction, transformation, and loading of large volumes of data from legacy systems into a modern data platform. You will work closely with data architects and analysts to ensure data is processed efficiently, reliably, and in line with business and regulatory requirements. The position requires strong hands-on experience with data processing tools, attention to data quality, and the ability to work with complex datasets in a cloud environment.
Technology stack: SQL, AWS Glue, dbt, Oracle, AWS, Git

Responsibilities:
* Design, build, and maintain ETL/ELT pipelines using AWS Glue and dbt
* Extract and process data from legacy systems, ensuring efficient and scalable transformations
* Collaborate with data architects to implement target data models and transformation logic
* Work with data analysts to ensure data availability and correctness for validation and reporting
* Optimize data pipelines for performance, scalability, and cost efficiency
* Ensure data quality through validation checks, logging, and monitoring mechanisms
* Handle large volumes of structured data across multiple sources and systems
* Implement and maintain data workflows following best practices in version control and CI/CD
* Troubleshoot data issues and support root-cause analysis across the data pipeline
* Document data pipelines, transformations, and technical processes

Requirements:
* Strong experience with SQL for data transformation and querying
* Hands-on experience with AWS Glue and cloud-based data processing
* Experience building and maintaining ETL/ELT pipelines
* Experience with dbt or similar transformation frameworks
* Experience working with large-scale data processing and data pipelines
* Familiarity with Git and CI/CD practices
* Understanding of data modeling concepts and data warehouse architectures
* Strong problem-solving skills and attention to detail

Nice to have:
* Experience working in capital markets environments
* Experience with Oracle databases and legacy system integrations
* Familiarity with performance optimization in data pipelines
* Experience with monitoring, logging, and observability in data systems
* Exposure to data migration or large-scale transformation programs
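For candidates unfamiliar with the reconciliation work described above, a post-load validation check might look like the following minimal sketch. All table names and counts are hypothetical, and a real pipeline would pull these figures from Oracle and the target warehouse (for example via AWS Glue jobs or dbt tests) rather than hard-coding them:

```python
# Minimal sketch of a row-count reconciliation check between a legacy
# source and a migrated target, in the spirit of the responsibilities
# listed above. Table names and counts here are illustrative only.

def reconcile(source_counts: dict, target_counts: dict) -> list:
    """Return human-readable mismatch messages for tables whose
    post-migration row counts differ from the source."""
    mismatches = []
    for table, expected in sorted(source_counts.items()):
        actual = target_counts.get(table, 0)
        if actual != expected:
            mismatches.append(f"{table}: expected {expected}, got {actual}")
    return mismatches

if __name__ == "__main__":
    src = {"trades": 1_000_000, "settlements": 250_000}
    tgt = {"trades": 1_000_000, "settlements": 249_998}
    # Flags the settlements shortfall; an empty list means the load is clean.
    print(reconcile(src, tgt))
```

In practice such checks run after every load batch, and mismatches feed the logging and monitoring mechanisms the role is expected to maintain.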

Source: Indeed
Juan García
Indeed · HR

Company

Indeed
© 2025 Servanan International Pte. Ltd.