




**Join Stefanini!** At Stefanini, we are more than 30,000 brilliant minds, connected from 40 countries, doing what we love and co-creating a better future. You definitely don’t want to miss out!

The **Data Engineer** is responsible for designing, developing, and maintaining data integration and transformation (ETL) processes, ensuring data quality, availability, and integrity across the organization. They implement and optimize data pipelines on modern platforms, enabling advanced analytics, data mining, and the development of reports and dashboards for business and analytics users. They collaborate closely with other teams to ensure data remains accessible and trustworthy throughout its lifecycle.

**Responsibilities and Duties**

* Design, develop, and maintain data integration and transformation (ETL) processes using Python.
* Implement and optimize data pipelines on platforms such as Cloudera and Snowflake to ensure efficient information flow.
* Manage and transform large volumes of data stored in Data Lakes, enabling their use in reporting, dashboards, and advanced analytics.
* Develop and optimize high-performance SQL queries for data extraction, manipulation, and modeling.
* Collaborate with business and analytics teams to enable data mining projects and advanced analytics solutions.
* Ensure data quality, availability, and integrity throughout its lifecycle.
* Propose and execute continuous improvements to existing data processes and architectures.

**Requirements and Qualifications**

* 3–5 years of proven experience in data engineering or similar roles.
* Strong analytical ability and problem-solving orientation.
* Effective communication skills with both technical teams and business units.
* Autonomy and proactivity in proposing improvements to data processes and workflows.
* Ability to work collaboratively in multidisciplinary and remote/hybrid teams.
* Proficiency in developing ETL processes using Python.
* Experience managing and optimizing data pipelines on platforms such as Cloudera and Snowflake.
* Solid expertise in handling large-scale data in Data Lakes.
* Extensive experience using SQL for complex query design, tuning, and relational data modeling.
* Experience developing stored procedures, optimizing queries, and data modeling in Snowflake (highly valued).
* Desired: basic knowledge of Hadoop, Hive, and Spark.
* Desired: knowledge of advanced analytics.

**Additional Information**

Are you looking for a place where your ideas shine? With over 38 years of experience and a global presence, at Stefanini we transform tomorrow, together. Here, every action matters and every idea can make a difference. Join a team that values innovation, respect, and commitment. If you’re a disruptive individual who embraces continuous learning and has innovation embedded in your DNA, then we’re exactly what you’re looking for. Come and let’s build a better future, together!


