Job Description:
Pay Range: $67 – $72
- Experience in Hadoop systems.
- 5-8 years of experience in Spark is a must.
- 5-8 years of strong coding skills in Java or Scala (familiarity with both is needed, but deep strength in either one is fine).
- Understanding of the big picture at the solution level: how the components of a big data system work together.
- Hands-on programming experience in one or more programming languages such as Java, Scala, or Python.
- Good experience with distributed systems such as Hadoop, HDFS, and NoSQL databases.
- Experience in Spark, Hive, or Presto.
- Experience with AWS cloud services: EC2, EMR, Athena.
- Experience with AWS S3, DynamoDB, RDS.
- Hands-on experience with AWS Lambdas.
- Designing and developing dashboards using Client.
- Competent in designing and implementing for reliability, availability, scalability, and performance.
- Strong problem-solving skills: able to perform root cause analysis on internal and external data and processes, answer specific business questions, and identify opportunities for improvement.