Job Description

Overview

Join to apply for the Data Engineer role at Emerald

We are seeking a highly motivated and technically proficient Data Engineer to join our growing data and analytics team. This role involves designing, developing, and optimizing scalable data pipelines and integrations across various cloud-based and third-party platforms. The ideal candidate will have hands-on experience with Databricks, Apache Spark, PySpark, and cloud computing, along with strong problem-solving skills and a solid understanding of data architecture and integration best practices.

Key Responsibilities
  • Develop and optimize data pipelines and workflows using Databricks, Apache Spark, PySpark, and cloud-native services.
  • Integrate data from internal systems and external platforms such as HubSpot, Salesforce, and other CRM systems via APIs.
  • Implement cloud-based data architectures following data mesh principles and best practices.
  • Collaborate on data modeling, transformation, and quality assurance for analytics and reporting purposes.
  • Build and maintain APIs; use Postman and Swagger for testing and documentation.
  • Write efficient and modular code in Python and leverage SQL for data processing.
  • Follow SDLC best practices including version control, CI/CD, and code reviews.
  • Ensure data security, integrity, and governance across the full data lifecycle.
  • Use AWS (or similar platforms like Azure or GCP) for compute and storage.
