Important Skills and Experience:
- Strong, hands-on coding skills
- Strong experience with Scala programming, Spark, and Azure
- Candidates with experience in another cloud platform who are willing to learn Azure may also be considered
- Local candidates willing to work in a hybrid setting will receive first preference; non-local candidates willing to relocate will also be considered
We’re excited about you if you have:
- 8-10 years of software development and deployment experience, with at least 5 years of hands-on experience with Hadoop applications (e.g., administration, configuration management, monitoring, debugging, and performance tuning)
- Strong experience building data ingestion (Extract, Transform, Load) pipelines and designing data warehouse or database architectures (see the first Spark sketch after this list)
- Hands-on development experience with open-source big data components such as Hadoop, Hive, Pig, Spark, HBase, HAWQ, Oozie, Mahout, Flume, Kafka, ZooKeeper, Sqoop, etc., preferably with the Hortonworks distribution
- Strong experience with data modeling, design patterns, and building highly scalable big data solutions and distributed applications
- Knowledge of cloud platforms, for example:
- Experience with Azure, AWS or equivalent cloud platforms
- Microsoft Azure: Experience designing, deploying, and administering scalable, available, and fault-tolerant systems on Microsoft Azure using HDInsight or Analytics Platform System (APS)
- Experience with Azure Management Portal, Azure Machine Learning, and Azure SQL Server
- Hadoop: Experience storing, joining, filtering, and analyzing data using Spark, Hive, and MapReduce (see the second sketch after this list)
- Experience working with continuous integration frameworks and building regression-testable code for data platforms using GitHub, Jenkins, and related tools
- Experience with programming/scripting languages such as Scala, Java, Python, or R (any combination)
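
To illustrate the kind of ingestion work the role involves, here is a minimal sketch of a Spark ETL pipeline in Scala. The input/output paths and the column names (event_id, event_ts) are illustrative assumptions, not a prescribed implementation:

```scala
import org.apache.spark.sql.{SparkSession, functions => F}

object IngestEvents {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("ingest-events")
      .getOrCreate()

    // Extract: read raw CSV landed by an upstream feed
    // ("/data/raw/events" and the columns are hypothetical).
    val raw = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("/data/raw/events")

    // Transform: drop malformed rows and derive a partition date.
    val cleaned = raw
      .filter(F.col("event_id").isNotNull)
      .withColumn("event_date", F.to_date(F.col("event_ts")))

    // Load: write partitioned Parquet for downstream warehouse queries.
    cleaned.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("/data/curated/events")

    spark.stop()
  }
}
```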
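
A second sketch shows the storing/joining/filtering/analyzing work mentioned above, using Spark against Hive tables. The sales.orders and sales.customers tables and their columns are hypothetical:

```scala
import org.apache.spark.sql.{SparkSession, functions => F}

object TopRegions {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("top-regions")
      .enableHiveSupport() // read Hive-managed tables
      .getOrCreate()

    // Illustrative Hive tables; names and columns are assumptions.
    val orders    = spark.table("sales.orders")
    val customers = spark.table("sales.customers")

    // Join, filter, and aggregate: total 2023 spend per customer region.
    val spendByRegion = orders
      .filter(F.col("order_year") === 2023)
      .join(customers, Seq("customer_id"))
      .groupBy("region")
      .agg(F.sum("amount").as("total_spend"))
      .orderBy(F.desc("total_spend"))

    spendByRegion.show(10)
    spark.stop()
  }
}
```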