As a Senior Product Owner, you will play a critical role in shaping the vision and execution of our data engineering products. You will be responsible for managing the end-to-end lifecycle of data products, focusing on the seamless ingestion, processing, and optimization of both structured and unstructured datasets. Collaborating closely with agile teams, stakeholders, and leadership, you will ensure that our data infrastructure meets the needs of our business and supports ongoing innovation. Hybrid in Denver, CO.
KEY RESPONSIBILITIES:
- Define and Prioritize Data Pipeline and ETL Development: Establish and prioritize the roadmap for designing, developing, and optimizing data pipelines and ETL processes capable of efficiently handling billions of records. Prioritize development efforts to align with business goals and data strategy.
- Ensure Quality Data Ingestion: Define and oversee processes for ingesting content from various sources, including structured and unstructured datasets. Set standards and best practices to maintain high data quality and integrity throughout the data lifecycle.
- Collaborate on Requirements and Project Prioritization: Work closely with data engineering and data science teams to define clear, actionable requirements. Prioritize projects based on business value, technical feasibility, and resource availability, ensuring successful implementation.
- Drive Workflow Optimization Strategies: Define and prioritize initiatives to optimize data processing workflows, focusing on reducing latency, improving system performance, and enhancing scalability to support growing data volumes.
- Leverage Machine Learning for Enhanced Data Processing: Identify opportunities to utilize machine learning technologies to enhance data processing capabilities, support predictive analytics, and drive automation within data workflows.
- Manage Agile Delivery: Oversee project timelines, cloud budgets, and resource allocation using agile methodologies. Ensure that development efforts are delivered on time, within scope, and meet quality standards.
- Monitor and Improve Data Pipeline Effectiveness: Continuously assess the performance and effectiveness of data pipelines. Define metrics for success, identify areas for improvement, and prioritize enhancements to drive efficiency and reliability.
QUALIFICATIONS:
- 5+ years of experience in data product management or a related role, with a strong focus on data pipelines, ETL, and workflow optimization.
- Proven experience managing large-scale datasets and data ingestion processes, including both structured and unstructured data.
- Solid understanding of data architecture, ETL frameworks, and integration techniques.
- Familiarity with cloud-based data services (e.g., AWS, Azure, Databricks, Snowflake).
- Knowledge of machine learning concepts and experience with ML frameworks (e.g., TensorFlow, PyTorch) is highly desirable; Python development skills are a plus.
- Strong analytical and problem-solving skills, with the ability to understand complex data systems and workflows.
- Demonstrated ability to collaborate with agile teams, manage multiple priorities, and deliver projects on time and within scope.
- Excellent communication skills, capable of articulating technical concepts to non-technical stakeholders and leadership.
- Bachelor’s or Master’s degree in Computer Science, Data Engineering, Information Systems, or a related field.