Expected: Scala, Apache Spark, Hadoop, Jenkins, Impala, Jira, Bitbucket, Git
Optional: Kafka
About the project: candidates from Poland only, on a B2B contract.
Your responsibilities:
- Building a distributed, highly parallelized Big Data processing pipeline that processes massive amounts of data (both structured and unstructured) in near real time
- Leveraging Spark to enrich and transform corporate data to enable search, data visualization, and advanced analytics
- Working closely with DevOps, QA, and Product Management teams in a Continuous Delivery environment
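To give candidates a feel for the day-to-day work, here is a minimal sketch of the kind of Spark enrichment step described above. All table and column names are illustrative assumptions, not part of the actual project:

```scala
// Hypothetical Spark enrichment job; table/column names are invented for
// illustration only. Assumes Spark is running against a Hive metastore.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object EnrichmentJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("corporate-data-enrichment")
      .enableHiveSupport()
      .getOrCreate()

    // Raw structured records (e.g. ingested into Hive via Sqoop)
    val raw = spark.read.table("staging.transactions")

    // Reference data used to enrich the raw records
    val ref = spark.read.table("reference.counterparties")

    // Join on a shared key and add a derived column so the curated
    // table supports search, visualization, and analytics downstream
    val enriched = raw
      .join(ref, Seq("counterparty_id"), "left")
      .withColumn("processed_at", current_timestamp())

    enriched.write.mode("overwrite").saveAsTable("curated.transactions_enriched")
    spark.stop()
  }
}
```

In production such a job would typically run on YARN or EMR and be triggered from a Jenkins pipeline, matching the toolchain listed in this posting.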
Our requirements:
- Minimum experience: 6 years
- Experience in Scala and Spark
- Knowledge of the Hadoop stack: YARN, EMR, Sqoop, Hive, Impala
- Experience with Jenkins, JIRA, Bitbucket, and Git

Optional:
- Kafka (nice to have, not strictly required)
- Experience implementing data security capabilities such as encryption and anonymization
- Excellent communication skills and experience working in distributed global teams
- Data-driven thinking
- Experience using Agile methods
- Prior financial services experience is considered a plus
This is how we work:
- in house
- at the client's site
- you have influence on the choice of tools and technologies
- you have influence on the technological solutions applied
- you have influence on the product
Benefits:
- sharing the costs of sports activities
- private medical care
Recruitment stages:
1. Short interview with a recruiter
2. Technical interview with the Tech Lead (1 hour)
EXPINIT & KAMELAK SPÓŁKA JAWNA

Since 2016 we have been helping our clients run projects and maintain production environments 24 hours a day, 7 days a week, 365 days a year.

We help carry out deployments, integration, testing, and optimization at every stage of application development in test, pre-production, and production environments, and we also build those environments.

Our specialists are graduates of public and private universities in Poland, such as the Warsaw University of Technology, the University of Warsaw, and the Polsko-Japońska Wyższa Szkoła Technik Komputerowych.

Together we form a team of experts on the IT market (EXPerts in IT – EXPinIT).