Position: Senior Data Engineer
Salary: up to $170,000
Location: Fully Remote, US
We are currently partnered with a global organization on their search for a Senior Data Engineer. The Senior Data Engineer will play a critical role in engineering data solutions that support reporting and analytics needs. As a key member of the Data Engineering team, you will work with diverse data technologies such as StreamSets, dbt, and DataOps tooling to build insightful, scalable, and robust data pipelines that feed our various analytics platforms.
Responsibilities
- Design and model data engineering pipelines that support reporting and analytics needs.
- Engineer efficient, adaptable, and scalable data pipelines for moving data from different sources into our Cloud Lakehouse.
- Understand and analyze business requirements and translate them into well-architected solutions that demonstrate the modern BI & Analytics platform.
- Participate in data modernization projects by providing direction on matters of overall design and technical direction, acting as the primary driver toward establishing guidelines and approaches.
- Develop and deploy performance optimization methodologies.
- Drive timely and proactive issue identification, escalation, and resolution.
- Collaborate effectively within Data Technology teams and Business Information teams to design and build optimized data flows from source to data visualization.
Key Skills & Requirements:
- In-depth data engineering experience, including execution of data pipelines, data operations, scripting, and SQL queries.
- Proven data modeling skills, with demonstrable experience designing models for data warehousing and modern analytics use-cases (e.g., from operational data store to semantic models).
- Experience with modern data architectures that support advanced analytics, such as Azure; experience with Snowflake and other cloud data warehousing / data lake technologies is preferred.
- Expertise in engineering data pipelines on large-scale data sets using ETL/ELT and big data technologies (Hive, Spark).
- Hands-on experience with data warehouse design, development, and data modeling best practices for modern data architectures.
- Highly skilled in data orchestration with experience in tools like Ctrl-M and Apache Airflow.
- Experience with StreamSets and dbt is preferred.
- Strong communication skills, with the ability to explain complex information in simple terms and maintain a strong customer-service approach with all users.