About Us:
Arch Energy Partners is an energy investment group based in Dallas, TX, focused on the acquisition of non-operated and mineral assets. We are looking to add a Data Engineer with a strong software engineering and DevOps background to our team.
Responsibilities:
- Design, build, and maintain robust data warehousing solutions that store and serve all enterprise data sources. Experience with BigQuery is a plus.
- Develop and manage ELT/ETL processes for new and existing data sources.
- Create and maintain dashboarding solutions for the enterprise, including type-curve fits, EUR calculations, and other metrics requested by engineering, as well as pay-status analysis for the Accounting team.
- Lead automation initiatives and streamline workflow execution.
- Manage existing cloud infrastructure and add new solutions using platforms such as Azure, GCP, and AWS. Experience with GCP is a plus.
- Develop and deploy containerized applications with Docker on Google Cloud services such as Cloud Run and Cloud Functions.
- Design effective data models and write efficient SQL queries. Experience with BigQuery or Snowflake is a plus.
- Craft resilient data pipelines in Spark-based systems. Experience in Databricks is a plus.
- Implement and manage infrastructure as code.
- Maintain and optimize Linux & Windows virtual machines.
- Build rich data sets that drive data-driven insights at scale across the company.
- Demonstrate broad, working knowledge of IT systems across the organization.
Qualifications:
- Bachelor's degree in Computer Science, Software Engineering, Information Technology, or a related discipline.
- 5+ years of experience in software engineering, data engineering, DevOps, or IT, preferably in the oil & gas industry.
- Proficiency in Python and SQL; experience with additional programming languages is a plus.
- Experience with Docker and infrastructure as code.
- Experience with big data tools such as Spark.
- Experience with data visualization tools such as Spotfire, Power BI, or similar software.
- Strong experience with web scraping, API-based data ingestion, and seamless system-to-system data replication.
- Proficiency in working with shapefiles; experience with tools such as GeoPandas and Shapely is a plus.
- Experience setting up and maintaining SQL databases, NoSQL databases, and data warehouses.
- Strong written and oral communication skills.
- Ability to collaborate effectively with cross-functional teams such as Operations, Land, Accounting, and Marketing.
- Familiarity with Spark and related data processing frameworks.
Preferred Skills:
- Experience with infrastructure as code tools (e.g., Terraform, Ansible).
- Familiarity with container orchestration tools (e.g., Kubernetes).
- Strong understanding of cloud-native architecture and best practices.
- Strong background in designing and maintaining data products and warehouses.
- Proven track record of developing resilient and scalable data pipelines.