POSITION: Manager - Azure Data Engineering
LOCATION: Charlotte, NC 28217 (Hybrid: onsite 3 days per week)
PAY RANGE: $85.00 - $90.00 per hour W2 (~$170k - $180k/year)
DURATION: 6-Month Contract-to-Hire (intended to convert to a permanent role)
MUST-HAVES:
- 12+ years of experience in a data engineering role, with expertise in Azure and in designing and building data pipelines, ETL processes, and data warehouses.
- 2+ years of managerial experience.
- Advanced proficiency in SQL, including window functions (e.g., RANK, AVG, STDDEV).
- Proficiency in Python and Spark (PySpark).
- Strong experience with the Azure cloud platform.
- Extensive experience working with Databricks and Azure Data Factory for data lake and data warehouse solutions.
- Experience in implementing CI/CD pipelines for automating build, test, and deployment processes.
- Hands-on experience with big data technologies (such as Hadoop, Spark, Kafka).
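For a sense of the SQL window-function proficiency described above, here is a minimal, self-contained sketch using Python's standard-library sqlite3 module (SQLite 3.25+ supports window functions). The table and column names are invented for illustration only; the role itself targets enterprise warehouses, not SQLite.

```python
# Hypothetical example: ranking and partition averages via SQL window functions.
# All table/column names (sales, region, rep, amount) are made up for this sketch.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, rep TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("East", "Ann", 500.0), ("East", "Bob", 300.0),
     ("West", "Cid", 700.0), ("West", "Dee", 700.0)],
)

# RANK() orders reps within each region; AVG() OVER computes the per-region mean.
rows = conn.execute("""
    SELECT region, rep, amount,
           RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rnk,
           AVG(amount) OVER (PARTITION BY region) AS region_avg
    FROM sales
    ORDER BY region, rnk
""").fetchall()

for row in rows:
    print(row)
```

Note that ties (Cid and Dee) receive the same rank, which is the kind of window-function behavior the role expects candidates to reason about.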
PLUSSES:
- Experience working in the retail industry/sector.
- Relevant certifications (e.g., Azure, Databricks, Snowflake, etc.).
- Experience working with Snowflake and/or Microsoft Fabric.
SUMMARY:
A top Insight Global client in Charlotte, NC is looking for an Azure Data Engineering Manager to join their team. As the Manager, you will lead the design, development, and implementation of data solutions that help the organization derive actionable insights from complex datasets. You will guide a team of onshore and offshore data engineers, collaborate with cross-functional teams, and drive initiatives to strengthen the company's data infrastructure, CI/CD pipelines, and analytics capabilities.
Responsibilities:
- Apply advanced knowledge of Data Engineering principles, methodologies and techniques to design and implement data loading and aggregation frameworks across broad areas of the organization.
- Deliver new and enhanced capabilities to Enterprise Data Platform partners to meet product, engineering, and business needs.
- Implement and optimize data solutions in enterprise data warehouses and big data repositories, focusing primarily on movement to the cloud.
- Build Azure enterprise systems using Databricks and Snowflake.
- Leverage strong SQL, Python, and Spark (PySpark) programming skills to construct robust pipelines for efficient data processing and analysis.
- Implement CI/CD pipelines for automating build, test, and deployment processes to accelerate the delivery of data solutions.
- Implement data modeling techniques to design and optimize data schemas, ensuring data integrity and performance.
- Drive continuous improvement initiatives to enhance performance, reliability, and scalability of our data infrastructure.
- Collaborate with data scientists, analysts, and other stakeholders to understand business requirements and translate them into technical solutions.
- Implement best practices for data governance, security, and compliance to ensure the integrity and confidentiality of our data assets.
Compensation:
$85.00 - $90.00/hr
Exact compensation may vary based on several factors, including skills, experience, and education.
Benefit packages for this role will start on the 31st day of employment and include medical, dental, and vision insurance, as well as HSA, FSA, and DCFSA account options, and 401k retirement account access with employer matching. Employees in this role are also entitled to paid sick leave and/or other paid time off as provided by applicable law.