Pumex Computing LLC is thrilled to partner with a top-tier, U.S.-based Fortune 100 insurance company to find a skilled Data Engineer for a full-time role. With excellent benefits and a rewarding bonus structure, this position offers a unique opportunity to advance your career while working with state-of-the-art technology alongside a team of industry leaders.
About the Role
In this role, you'll join the AI & Data (AI&D) group as a key member of the Analytics Data Engineering team. Your primary responsibility will be designing, developing, and maintaining robust data pipelines using dbt (data build tool), while ensuring the delivery of clean, transformed data that fuels our data-driven initiatives. You will draw on your deep expertise in Python, SQL, ELT processes, AWS, Redshift, and Snowflake to create and optimize data solutions that adhere to software engineering best practices.
Responsibilities
- Innovate: Leverage your expertise in Python and SQL to design, build, and maintain highly functional, scalable, and sustainable data products using dbt.
- Transform: Use ELT processes to create and maintain scalable data transformations, delivering clean data ready for analytics and data science.
- Collaborate: Partner with stakeholders to understand data requirements and deliver impactful solutions that meet business needs.
- Optimize: Implement data models and structures on platforms like Redshift and Snowflake, fine-tuning pipelines for performance, reliability, and scalability in a cloud environment such as AWS.
- Ensure Quality: Uphold data quality and integrity through rigorous testing and the application of best practices in data engineering.
- Lead: Oversee code changes, manage pull requests in Git, and translate requirements into actionable Jira stories.
- Connect: Build strong relationships with IT and business stakeholders, engaging with data stewards across the organization.
- Stay Ahead: Continuously update your knowledge of Analytics and Data Engineering trends, particularly in AWS, Redshift, Snowflake, Python, SQL, and dbt, to enhance our technology stack.
Qualifications
- Experience: 7+ years in data engineering, with a proven track record of building data products using Python, SQL, ELT processes, and dbt.
- Expertise: Advanced knowledge of AWS, Redshift, and Snowflake platforms, with hands-on experience in cloud environments.
- Technical Proficiency: Mastery of Python and SQL for data processing and transformation, combined with deep familiarity with ELT processes.
- Best Practices: Strong command of Git, version control, and experience with Agile/Scrum methodologies.
- Education: A graduate-level degree in computer science, engineering, or a related field, or equivalent work experience.
- Communication: Exceptional communication skills, with the ability to effectively collaborate within a team and articulate complex ideas to diverse stakeholders.
- Industry Insight: Previous experience in the insurance industry.
Preferred Skills
- AI Enthusiast: Experience with Generative AI and working with unstructured data.
Join us and be at the forefront of data-driven innovation at a Fortune 100 company. Apply today to be part of a team where your skills will lead to transformative change and drive the future of technology integration!