Middle Big Data Engineer at KITRUM #vacancy #remote

Expected: Spark, SQL, Python

Optional: Kafka, Terraform, Airflow, Jira, JetBrains IDEs, Git, GitLab, Docker, Jenkins

About the project
The client is an American e-book and audiobook subscription service with a catalog of one million titles; its open publishing platform hosts 60 million documents. The platform:
— lets anyone share their ideas with the world;
— provides access to audiobooks;
— provides access to music published by composers from around the world;
— incorporates articles from private publishers and international magazines;
— offers access to exclusive content.

Core Platform provides robust, foundational software, increasing operational excellence to scale apps and data. We focus on building, testing, and deploying apps and infrastructure that help other teams rapidly scale, interoperate, integrate with real-time data, and incorporate machine learning into their products. Working with our customers in Data Science and Content Engineering, and with our peers on the Internal Tools and Infrastructure teams, we bring systems-level visibility and focus to our projects.

The client's goal is not total architectural or design perfection, but choosing the right trade-offs to strike a balance between speed, quality, and cost.

Your responsibilities
— Manage data quality and integrity
— Help build tools and technology that ensure downstream customers can trust the data they consume
— Work cross-functionally with the Data Science and Content Engineering teams to troubleshoot, process, or optimize business-critical pipelines
— Work with Core Platform to implement better processing jobs for scaling the consumption of streaming data sets (see the sketch after this list)
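To give a flavor of that last responsibility, below is a minimal sketch of a Spark Structured Streaming job that consumes a Kafka topic and lands it in object storage. It is not taken from the client's codebase: the broker address, topic name, schema, and S3 paths are placeholders, and it assumes the spark-sql-kafka connector is available on the cluster.

from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

# Schema of a hypothetical "document-events" topic; field names are illustrative.
event_schema = StructType([
    StructField("document_id", StringType()),
    StructField("event_type", StringType()),
    StructField("occurred_at", TimestampType()),
])

spark = SparkSession.builder.appName("streaming-consumption-sketch").getOrCreate()

# Read the raw Kafka stream; broker address and topic name are placeholders.
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "document-events")
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers the payload as bytes; parse the JSON value into typed columns.
events = (
    raw.select(from_json(col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

# Write parsed events to object storage with checkpointing, so the job
# can be restarted and scaled out safely.
query = (
    events.writeStream.format("parquet")
    .option("path", "s3a://example-bucket/events/")                       # placeholder path
    .option("checkpointLocation", "s3a://example-bucket/checkpoints/events/")
    .trigger(processingTime="1 minute")
    .start()
)

query.awaitTermination()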

Requirements
— 3+ years of experience in data engineering, creating or managing end-to-end data pipelines on large, complex datasets
— Proficiency in Spark (see the batch sketch after this list)
— Expertise in Scala and/or Python
— Fluency with at least one dialect of SQL
— English level: Upper-Intermediate
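As an illustration of the Spark, Python, and SQL combination above, the sketch below computes a daily engagement aggregate twice: once with the DataFrame API and once in Spark SQL. The table names, column names, and paths are hypothetical, not part of the client's data model.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("batch-pipeline-sketch").getOrCreate()

# Hypothetical input: one row per reading event, with a title_id and minutes read.
events = spark.read.parquet("s3a://example-bucket/reading_events/")  # placeholder path

# DataFrame API: daily engagement per title.
daily = (
    events
    .withColumn("event_date", F.to_date("occurred_at"))
    .groupBy("event_date", "title_id")
    .agg(
        F.sum("minutes_read").alias("total_minutes"),
        F.countDistinct("user_id").alias("unique_readers"),
    )
)

# The same aggregation expressed in SQL.
events.createOrReplaceTempView("reading_events")
daily_sql = spark.sql("""
    SELECT to_date(occurred_at)     AS event_date,
           title_id,
           SUM(minutes_read)        AS total_minutes,
           COUNT(DISTINCT user_id)  AS unique_readers
    FROM reading_events
    GROUP BY to_date(occurred_at), title_id
""")

# Persist the result partitioned by date; output path is a placeholder.
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3a://example-bucket/daily_title_engagement/"
)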

Optional
— Experience with streaming platforms, typically based around Kafka
— Experience with Terraform and Airflow (see the Airflow sketch after this list)
— Strong grasp of AWS data platform services and their strengths/weaknesses
— Strong experience using Jira, Slack, JetBrains IDEs, Git, GitLab, GitHub, Docker, Jenkins
— Experience using Databricks
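For the Airflow point, a minimal DAG that schedules a nightly Spark job might look like the sketch below, assuming Airflow 2.x. The DAG id, schedule, and spark-submit command are placeholders, not the client's actual pipeline.

from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

# Default task settings; retry counts and delays are illustrative.
default_args = {
    "owner": "data-engineering",
    "retries": 2,
    "retry_delay": timedelta(minutes=10),
}

# Nightly DAG that submits a hypothetical daily_title_engagement Spark job.
with DAG(
    dag_id="daily_title_engagement",
    default_args=default_args,
    start_date=datetime(2024, 1, 1),
    schedule_interval="0 3 * * *",  # 03:00 UTC every day
    catchup=False,
) as dag:
    run_spark_job = BashOperator(
        task_id="run_spark_job",
        bash_command=(
            "spark-submit --master yarn "
            "s3a://example-bucket/jobs/daily_title_engagement.py"
        ),
    )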

This is how we work
— at the client's site
— you have influence on the choice of tools and technologies
— you have influence on the technological solutions applied
— you have influence on the product
— agile

Development opportunities we offer
— development budget
— external training

What we offer
— High compensation according to your technical skills
— Long-term projects (12+ months) with great customers
— 5-day working week, 8-hour working day, flexible schedule
— Democratic management style and friendly environment
— WFH mode
— Annual paid vacation: 30 business days, plus unpaid vacation
— Paid sick leave: 6 business days per year
— Corporate perks: external training, English course, business speaking club, corporate events/team buildings
— Professional and personal growth

Recruitment stages
— Recruitment interview
— Technical interview
— Client interview

KITRUM
KITRUM is a one-stop custom software development company headquartered in sunny Florida, with tech hubs in Ukraine, Poland, Kazakhstan, and Mexico. With a pool of 300+ top-notch engineers, we help CxOs of VC-backed startups and fast-growing tech companies in the US, EU, and Australia build custom software engineering teams packed with top-tier talent.


Tags: Databricks, GitLab, Git, Agile, Scala, Python, Terraform, Apache Spark, Amazon Web Services (AWS), Apache Kafka, Big Data, Jira, Docker, SQL, Airflow, Jenkins
