Data Engineer with Java or Scala at emagine Polska

Remuneration: 150-180 PLN/h (depending on experience)

Remote work: hybrid from Warsaw (working from the Warsaw office a few days a week is mandatory)

Assignment type: B2B

Start: ASAP

Project length: 12 months, with possible extension

Project language: English

Job Description:

Are you a skilled data engineer with a strong software engineering background? Do you have a passion for building and optimising data pipelines and architectures? If so, we want you to apply! We are seeking a talented and experienced data engineer for one of our clients to help them build robust data solutions and drive innovation within their organisation.

RESPONSIBILITIES:

  • Build and optimise data pipelines and architectures.
  • Develop prototypes and proofs of concept.
  • Write automated unit tests and follow coding best practices (a minimal test sketch follows this list).
  • Work with git for version control.
  • Handle relational data sets using SQL.
  • Focus on DevOps and CI/CD practices.
  • Engage in data modeling and develop data solutions.
  • Work in an Agile environment with a team-focused mindset.
  • Collaborate directly with stakeholders to develop prototypes and solutions.
  • Adapt to the skill sets of team members and help them advance.
  • Communicate effectively with a friendly and outgoing attitude.
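
For a concrete flavour of the testing expectation above, here is a minimal sketch of an automated unit test, assuming ScalaTest 3.x on the test classpath; Totals.perCustomer is a hypothetical stand-in for real pipeline logic, not code from the project.

  // Minimal sketch, assuming ScalaTest 3.x on the test classpath; the
  // function under test is a hypothetical stand-in for pipeline logic.
  import org.scalatest.funsuite.AnyFunSuite

  object Totals {
    // Sum order amounts per customer (customerId -> total).
    def perCustomer(orders: Seq[(String, Double)]): Map[String, Double] =
      orders.groupBy(_._1).view.mapValues(_.map(_._2).sum).toMap
  }

  class TotalsSpec extends AnyFunSuite {
    test("sums amounts per customer") {
      val orders = Seq(("c-1", 120.0), ("c-1", 80.0), ("c-2", 200.0))
      assert(Totals.perCustomer(orders) == Map("c-1" -> 200.0, "c-2" -> 200.0))
    }
  }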

REQUIREMENTS:

  • Relevant education in Computer Science, Engineering, Mathematics, Economics, Information Management, or Statistics.
  • 4+ years of experience as a data engineer or in a software engineering role with a focus on data.
  • Experience with at least one JVM-based, object-oriented, or functional/scripting language (e.g. Scala or Java); Python is a plus.
  • Proficient in writing automated unit tests and hands-on with git.
  • Experience working with relational data sets using SQL.
  • Strong focus on DevOps and CI/CD practices.
  • Experience with Spark (see the sketch after this list).
  • Experience with Azure, AWS, or other cloud providers.
  • Experience in data modeling.
  • Agile mindset and team player.
  • Fluent in English.
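
To make the core stack (Scala, Spark, SQL) concrete, a minimal self-contained sketch follows, assuming Spark 3.x on the classpath; the dataset, schema, and app name are illustrative assumptions, not details from the project. It expresses the same aggregation once with the DataFrame API and once with plain SQL.

  // Minimal sketch, assuming Spark 3.x on the classpath; dataset, schema,
  // and app name are illustrative, not details from the project.
  import org.apache.spark.sql.{SparkSession, functions => F}

  object OrdersPipeline {
    def main(args: Array[String]): Unit = {
      val spark = SparkSession.builder()
        .appName("orders-pipeline")
        .master("local[*]") // local mode for the sketch; a cluster URL in practice
        .getOrCreate()
      import spark.implicits._

      // Hypothetical relational input: (orderId, customerId, amount).
      val orders = Seq(
        (1L, "c-1", 120.0),
        (2L, "c-1", 80.0),
        (3L, "c-2", 200.0)
      ).toDF("orderId", "customerId", "amount")

      // The same aggregation via the DataFrame API...
      val totals = orders
        .groupBy($"customerId")
        .agg(F.sum($"amount").as("totalAmount"))

      // ...and via plain SQL against a temporary view.
      orders.createOrReplaceTempView("orders")
      val totalsSql = spark.sql(
        "SELECT customerId, SUM(amount) AS totalAmount FROM orders GROUP BY customerId")

      totals.show()
      totalsSql.show()
      spark.stop()
    }
  }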

Nice to have:

  • Familiarity with big data tools such as Databricks and Hadoop.
  • Knowledge of data pipeline and workflow tools like Airflow, ADF, Luigi, etc.
  • Experience with infrastructure tools like Docker, Kubernetes, and Helm.
  • Familiarity with stream-processing systems such as Kafka (see the sketch below).
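
As a small illustration of the stream-processing item above, here is a sketch of publishing a single event to Kafka from Scala via the standard Java producer client; the broker address, topic name, and payload are assumptions for the example.

  // Minimal sketch using the Kafka Java producer client from Scala; the
  // broker address, topic name, and payload are assumptions for the example.
  import java.util.Properties
  import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

  object OrdersProducer {
    def main(args: Array[String]): Unit = {
      val props = new Properties()
      props.put("bootstrap.servers", "localhost:9092")
      props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
      props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")

      val producer = new KafkaProducer[String, String](props)
      try {
        // Publish one event; a downstream consumer (e.g. a Spark job) reads the topic.
        producer.send(new ProducerRecord[String, String]("orders", "order-1", """{"amount": 120.0}"""))
        producer.flush()
      } finally {
        producer.close()
      }
    }
  }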

WE OFFER:

  • Technical growth, including education and certifications
  • International projects in Scandinavian business culture
  • Long-term cooperation across multiple projects and sectors
  • Relationships built transparently on trust and fair play
  • Co-financed benefit package (private healthcare, Multisport card)
