Job Title: Senior Data Engineer
Skills: Snowflake, Data Analysis & Data Modeling
Experience: 8-10 Years
Location: Dallas, TX
Job Type: Full-time
We at Coforge are hiring a Senior Data Engineer with the following skillset:
- The Senior Data Engineer is proficient in all aspects of data processing, including data warehouse architecture/modeling and ETL development. The position focuses on the development and delivery of analytical solutions using tools such as AWS Glue, Coalesce, Airflow, Snowflake, and AWS.
- The Data Engineer must be able to work autonomously, with little guidance or instruction, to deliver business value.
- Bachelor’s degree in computer science or an MIS-related area required, or equivalent industry experience.
- 6-8 years of total experience in data engineering/cloud development.
- 1+ years of experience in the banking and financial domain is nice to have.
- Must be highly proficient in data warehouse ETL design/architecture and dimensional/relational data modeling.
- Experience in at least one ETL development project, including writing and analyzing complex stored procedures.
- Entry-level to intermediate experience in Python/PySpark: working knowledge of Spark and pandas DataFrames, Spark multi-threading, exception handling, familiarity with the various boto3 service clients, data transformation and ingestion methods, and the ability to write UDFs (see the PySpark sketch after this list).
- Familiarity with Snowflake: stages and external tables; commands such as COPY INTO and unloading data to/from S3; working knowledge of the VARIANT data type and flattening nested structures through SQL (see the Snowflake sketch after this list). Familiarity with marketplace integrations, role-based masking, pipes, data cloning, logs, and user and role management is nice to have.
- Familiarity with Coalesce/dbt is an added advantage.
- Experience integrating Collibra for data quality and governance in ETL.
- AWS – Should have hands-on experience with S3, Glue (jobs, triggers, workflows, catalog, connectors, crawlers), CloudWatch, RDS, and Secrets Manager (see the boto3 sketch after this list).
- Knowledge of AWS VPC, IAM, Lambda, SNS, SQS, and MWAA is a plus.
- Hands-on experience with version control tools like GitHub and working knowledge of configuring CI/CD pipelines using YAML and pip files.
- Streaming Services – Familiarity with Confluent Kafka, Spark Streaming, or Kinesis (or equivalent) is nice to have (see the streaming sketch after this list).
- Experience with Data Vault 2.0 (hubs, satellites, links) is a plus.
- Highly proficient in Publisher, PowerPoint, SharePoint, Visio, Confluence and Azure DevOps.
- Working knowledge of best practices in value-driven development (requirements management, prototyping, hypothesis-driven development).
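As a quick illustration of the Python/PySpark expectations above, here is a minimal sketch of a DataFrame transformation that uses a UDF with defensive exception handling. The data, column names, and parsing logic are hypothetical, chosen only to show the pattern.

```python
# Minimal PySpark sketch: a DataFrame transformation with a UDF and
# basic exception handling. All data and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

spark = SparkSession.builder.appName("udf-example").getOrCreate()

df = spark.createDataFrame(
    [("alice", "dallas,tx"), ("bob", None)], ["name", "location"]
)

@udf(returnType=StringType())
def extract_state(location):
    # Defensive parsing: return None rather than failing the whole job.
    try:
        return location.split(",")[1].strip().upper()
    except (AttributeError, IndexError):
        return None

df.withColumn("state", extract_state(df["location"])).show()
```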
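For the Snowflake bullet, the two core operations called out are loading staged S3 data with COPY INTO and flattening a nested VARIANT structure with LATERAL FLATTEN. The sketch below runs both through the snowflake-connector-python package; the connection parameters, stage, and table/column names are all placeholders.

```python
# Minimal Snowflake sketch: COPY INTO from an S3 external stage, then
# flatten a nested array in a VARIANT column. Names are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="...",
    warehouse="my_wh", database="my_db", schema="my_schema",
)
cur = conn.cursor()

# Load staged JSON files into a VARIANT column.
cur.execute("""
    COPY INTO raw_events (payload)
    FROM @my_s3_stage/events/
    FILE_FORMAT = (TYPE = JSON)
""")

# Flatten a nested array inside the VARIANT payload.
cur.execute("""
    SELECT e.payload:order_id::STRING AS order_id,
           i.value:sku::STRING        AS sku
    FROM raw_events e,
         LATERAL FLATTEN(input => e.payload:items) i
""")
print(cur.fetchall())
```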
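For the AWS bullet, a representative task is reading a credential from Secrets Manager and starting a Glue job run with boto3. The secret name, job name, and argument key below are hypothetical.

```python
# Minimal boto3 sketch: fetch a secret, then start a Glue job run.
# The secret name and job name are hypothetical placeholders.
import json
import boto3

secrets = boto3.client("secretsmanager")
secret = json.loads(
    secrets.get_secret_value(SecretId="prod/rds/credentials")["SecretString"]
)

glue = boto3.client("glue")
run = glue.start_job_run(
    JobName="nightly-ingest",
    Arguments={"--db_user": secret["username"]},
)
print("Started Glue run:", run["JobRunId"])
```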
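For the streaming bullet, here is a minimal Spark Structured Streaming sketch that subscribes to a Kafka topic. The broker address and topic are hypothetical, and the job assumes the spark-sql-kafka connector package is available on the classpath.

```python
# Minimal Structured Streaming sketch: read a Kafka topic and echo the
# raw payloads to the console. Broker and topic names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka-stream").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "orders")
    .load()
    .select(col("value").cast("string").alias("json_payload"))
)

# A real pipeline would parse the JSON and write to S3/Snowflake;
# the console sink here is just for inspection.
query = events.writeStream.format("console").start()
query.awaitTermination()
```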