Data Engineer at hubQuest #vacancy #remote

We are a team of experts bringing together the best talent in IT and analytics. Our mission is to deliver solutions through our flagship service, which includes forming tech teams from scratch and growing existing units, all tailored to help our partners become truly data-driven organizations.

We are currently looking for a Data Engineer to support our partner in developing a Global Analytics unit: a global, centralized team with the ambition to strengthen data-driven decision-making and to develop smart data products for day-to-day operations.

The team’s essence is an innovative spirit that permeates the company, nurturing a data-first approach in every facet of the business. From sales and logistics to marketing and purchasing, our smart data products have been pivotal to rapid growth and operational excellence. As the team expands its analytics solutions on a global scale, we are on the lookout for an experienced Data Engineer.

The Global Analytics team is a diverse collective of Data Scientists, Data Engineers, Business Intelligence Specialists, and Analytics Translators, with a footprint across three continents and five countries. Its ethos revolves around fostering collaboration, driving innovation, and ensuring reliability. Together, we’re committed to transforming the whole organization into a leader in data-driven decision-making, leveraging global diversity to tackle challenges and create value.

The team has a lot of freedom to shape its work, especially in its choice of tools and technology, but also by introducing new concepts, solutions, and ways of working.

If you want to:

  • take part in the development and implementation of a complex system of smart data solutions
  • have the opportunity to work on bleeding-edge projects
  • have a chance to see your ideas come to life
  • work with the world’s top IT professionals
  • carry out projects which address real business challenges
  • work in a diverse team with global reach
  • have a real impact on the projects you work on and the environment you work in
  • have a chance to propose innovative solutions and initiatives,

it’s probably a good match.

Moreover, if you like:

  • flexible working hours
  • a casual working environment with no corporate bureaucracy
  • having access to benefits such as a Multisport card and private medical care
  • working in a modern office in the centre of Warsaw with good transport links, or working remotely as much as you want
  • a relaxed atmosphere at work where your passions and commitment are appreciated
  • vast opportunities for self-development (e.g. online courses and a library, exchanging experience with colleagues around the world, partially funded certifications),

it’s certainly a good match!

If you join us, your responsibilities will include:

  • structure end-to-end data processes, including extraction, transformation, and storage, using serverless Azure services to deploy models and analytical solutions
  • manage activities such as quality assurance, data migration, integration, and solution deployment to maximize business value
  • write and maintain ETL processes in Python, design database systems, and develop tools for real-time and offline analytic processing
  • implement, maintain, and enhance Python packages for ETL processes, data lineage, and operator inputs, focusing on building robust logic
  • troubleshoot software and processes to ensure data consistency and integrity
  • integrate large-scale data from various sources using Databricks and other tools to enable business partners to generate insights and make informed decisions
  • design and implement data flows, and conduct unit and integration tests of Python modules
  • participate in mission-critical phases of the data pipeline and provide technical support in developing smart data products


We expect:

  • significant commercial experience in a similar position
  • strong data analytics skills using Python, including experience with PySpark
  • excellent software engineering skills, including unit testing, integration testing, and object-oriented programming (OOP)
  • proficiency with Azure services, particularly in deploying and managing data solutions
  • familiarity with Databricks for big data processing and analytics
  • solid SQL skills and fluency in extracting information from databases
  • ability to work independently and effectively manage tasks and timelines
  • passion for data science and continuous learning in the field
  • excellent communication skills to collaborate effectively with team members and stakeholders
  • strong team player with the ability to contribute to a collaborative work environment
  • fluent English communication skills for effective collaboration and documentation

Nice to have:

  • experience with DBT (Data Build Tool) for transforming data in analytics pipelines
  • experience in building and releasing Infrastructure as Code, with working knowledge of tools such as Terraform

If you are interested, please let us get to know you by sending your CV using the “Apply” button.

Please add to your CV the following clause:

“I hereby agree to the processing of my personal data included in my job application by hubQuest spółka z ograniczoną odpowiedzialnością located in Warsaw for the purpose of the current recruitment process.”

If you want to be considered in the future recruitment processes please add the following statement:

“I also agree to the processing of my personal data for the purpose of future recruitment processes.”

