Required Skills & Experience
* 10+ years of experience in a data engineering role, with expertise in designing and building data pipelines, ETL processes, and data warehouses
* 2+ years of experience working as a lead and developing/mentoring other engineers
* Strong proficiency in SQL, Python, and PySpark
* Strong experience with the Azure cloud platform
* Experience working in a retail environment
* Extensive experience working with Databricks and Azure Data Factory for data lake and data warehouse solutions
* Experience in implementing CI/CD pipelines for automating build, test, and deployment processes
* Hands-on experience with big data technologies (such as Hadoop, Spark, Kafka)
Nice to Have Skills & Experience
* Relevant certifications (e.g., Azure, Databricks, Snowflake)
* Experience working with Snowflake and/or Microsoft Fabric
Job Description
A client in Charlotte, NC is looking for a Lead Data Engineer to join their team. As the Technical Lead Data Engineer, you will lead the design, development, and implementation of data solutions that enable the organization to derive actionable insights from complex datasets. You will guide a team of data engineers, collaborate with cross-functional teams, and drive initiatives to strengthen our data infrastructure, CI/CD pipelines, and analytics capabilities. Responsibilities include:
* Apply advanced knowledge of Data Engineering principles, methodologies and techniques to design and implement data loading and aggregation frameworks across broad areas of the organization.
* Gather and process raw, structured, semi-structured and unstructured data using batch and real-time data processing frameworks.
* Implement and optimize data solutions in enterprise data warehouses and big data repositories, focusing primarily on movement to the cloud.
* Drive new and enhanced capabilities to Enterprise Data Platform partners to meet the needs of product / engineering / business.
* Build enterprise systems using Databricks, Snowflake, and cloud platforms such as Azure, AWS, and GCP.
* Leverage strong Python, Spark, SQL programming skills to construct robust pipelines for efficient data processing and analysis.
* Implement CI/CD pipelines for automating build, test, and deployment processes to accelerate the delivery of data solutions.
* Implement data modeling techniques to design and optimize data schemas, ensuring data integrity and performance.
* Drive continuous improvement initiatives to enhance performance, reliability, and scalability of our data infrastructure.
* Collaborate with data scientists, analysts, and other stakeholders to understand business requirements and translate them into technical solutions.
* Implement best practices for data governance, security, and compliance to ensure the integrity and confidentiality of our data assets.