One of our direct clients is in need of an Azure Data Engineer with a heavy focus on PySpark, Databricks, and ADF:
Job Duties:
Build and optimize data pipelines for efficient data ingestion, transformation and loading from various sources while ensuring data quality and integrity.
Design, develop, and deploy Spark programs in the Databricks environment to process and analyze large volumes of data.
Experience with Delta Lake, DWH, data integration, cloud, design, and data modeling.
Proficient in developing programs in Python and SQL.
Experience with data warehouse dimensional data modeling.
Working with event-based/streaming technologies to ingest and process data.
Working with structured, semi-structured, and unstructured data.
Optimize Databricks jobs for performance and scalability to handle big data workloads.
Monitor and troubleshoot Databricks jobs, identify and resolve issues or bottlenecks.
Implement best practices for data management, security, and governance within the Databricks environment.
Experience designing and developing Enterprise Data Warehouse solutions.