Note:
- No recruiters, please.
- Must be in office 5 days a week.
- Minimum of 3 years of experience.
- Must come onsite for the interview.
Responsibilities
- Design and develop data aggregation pipelines, data models, and related components.
- Write high-quality, maintainable code following best practices.
- Apply analytical skills to solve complex FinOps problems.
- Develop efficient data models for storing analytical data.
Qualifications
- Bachelor's or Master's degree in Computer Science or a relevant field of study.
- Hands-on experience building batch or streaming production data pipelines.
- Experience developing applications using Python, Java, Go, or Rust.
- Must have working knowledge of at least one cloud platform (AWS, GCP, Azure) or Kubernetes.
- Must have working experience with SQL and NoSQL databases such as MySQL, PostgreSQL, Elasticsearch, ClickHouse, Redis, or Neo4j.
- Must have knowledge of distributed data systems such as Apache Kafka, Apache Flink, or Apache Spark.
Bonus
- Knowledge of data engineering tools and technologies.
- Knowledge of workflow orchestration engines such as Airflow or Dagster, and of data transformation tools such as dbt.
- Knowledge of AI and ML technologies.
About Us
Usage AI (usage.ai) helps companies of all sizes get significant savings on their cloud bills with zero code changes and zero downtime, all in under 5 minutes.