Role: Java Developer (Hadoop & Spark)
Location: Boston, MA (Onsite)
Duration: Long-term
Job Description:
We are seeking an experienced Java Developer with strong expertise in Hadoop and Apache Spark to join our data engineering team. In this role, you will design, develop, optimize, and maintain large-scale data processing applications using Java, Hadoop, and Spark. You will build and improve ETL pipelines, ensure high performance in distributed computing environments, and collaborate with cross-functional teams to deliver data-driven solutions.
Key Requirements:
- 8+ years of Java development experience.
- Hands-on experience with Hadoop (HDFS, MapReduce) and Spark.
- Proficient in building scalable data processing pipelines.
- Strong understanding of distributed computing and performance optimization.
Preferred:
- Experience with cloud platforms (AWS, Azure) and Kafka.