We are seeking an experienced Senior DevOps Engineer/Lead with a strong background in data warehousing and significant hands-on experience with Google Cloud Platform (GCP). In this role, you will play a critical part in designing, implementing, and maintaining the infrastructure and tooling that enable our data engineering and analytics teams to operate efficiently and effectively.
Responsibilities:
- Infrastructure as Code:
  - Design, build, and maintain scalable, reliable infrastructure on GCP using Infrastructure as Code (IaC) tools such as Terraform and Deployment Manager.
  - Automate the provisioning and management of cloud resources to ensure consistency and repeatability.
- Continuous Integration and Continuous Deployment (CI/CD):
  - Implement and manage CI/CD pipelines using tools such as Jenkins, GitLab CI, or Cloud Build to enable seamless code integration and deployment.
  - Integrate automated testing and monitoring into the CI/CD process to maintain high code quality and rapid delivery cycles.
- Data Pipeline Management:
  - Collaborate with data engineers to design and optimize data pipelines on GCP using tools such as Apache Airflow, Cloud Composer, and Cloud Dataflow.
  - Implement monitoring and alerting solutions to detect and resolve data pipeline issues promptly.
- Cloud Platform Expertise:
  - Use GCP services such as Cloud Storage, Cloud Run, and Cloud Functions to build scalable, cost-effective solutions.
  - Apply best practices for cloud security, cost management, and resource optimization.
- Collaboration and Communication:
  - Work closely with data engineers, data scientists, and other stakeholders to understand their requirements and provide the necessary infrastructure and tooling support.
  - Foster a culture of collaboration and continuous improvement within the team.
- Monitoring and Incident Management:
  - Implement robust monitoring, logging, and alerting solutions using tools such as Google Cloud's operations suite (formerly Stackdriver), Prometheus, and Grafana.
  - Manage and respond to incidents, ensuring minimal downtime and quick resolution of issues.
- Documentation and Training:
  - Create and maintain comprehensive documentation for infrastructure, CI/CD pipelines, and operational procedures.
  - Provide training and support to team members on DevOps best practices and GCP services.
Experience Required:
- Minimum of 5 years of experience in DevOps or infrastructure engineering, with a strong focus on data warehousing.
- At least 2 years of hands-on experience working with Google Cloud Platform (GCP).
- Proficiency in Infrastructure as Code (IaC) tools such as Terraform, Deployment Manager, or CloudFormation.
- Strong knowledge of CI/CD tools and practices, such as Jenkins, GitLab CI, or Cloud Build.
- Experience with data pipeline tools and frameworks such as Apache Airflow, Cloud Composer, and Cloud Dataflow.
- Familiarity with GCP services, including Cloud Storage, Cloud Run, Cloud Functions, and BigQuery.
- Proficiency in scripting languages such as Python, Bash, or PowerShell.
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration abilities.
- Ability to work independently and as part of a team in a fast-paced, dynamic environment.
Education Required:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
Education Preferred:
- Master's degree in a relevant field.