Job Title: GCP Data Engineer (Hadoop/Spark/Python)
Location: Irving, TX
Duration: Long-term
Note: Hadoop/Spark/Python is a must-have skillset
Job Description:
- 10+ years of overall IT experience
- Big data expert with 6+ years of experience in the Hadoop big data ecosystem
- Spark batch and streaming development (Python, Scala)
- Hands-on experience with Apache Kafka
- Experience in cloud environments, especially GCP
- Experience in developing both batch and real-time streaming data pipelines (see the illustrative sketch after this list)
- Python and shell scripting
- Expertise in SQL
- Experience with Hive, Impala, Ozone, and Iceberg
- Hands-on experience with the Oozie and Airflow schedulers
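
For context, the core must-have skillset (a real-time pipeline in Spark Structured Streaming with Python, reading from Kafka) might look roughly like the minimal sketch below. The broker address, topic name, and console sink are placeholder assumptions for illustration, not project specifics.

    # Minimal sketch, assuming PySpark 3.x with the spark-sql-kafka connector available.
    # Broker address and topic name below are hypothetical placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col

    spark = (
        SparkSession.builder
        .appName("kafka-streaming-sketch")
        .getOrCreate()
    )

    # Read a real-time stream from Kafka and extract the message value as a string.
    events = (
        spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")
        .option("subscribe", "events")
        .load()
        .select(col("value").cast("string").alias("payload"))
    )

    # Write to the console for illustration; a production pipeline on GCP would
    # typically target a sink such as BigQuery or GCS instead.
    query = (
        events.writeStream
        .format("console")
        .outputMode("append")
        .start()
    )
    query.awaitTermination()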