Data Engineer at Inuits #vacancy #remote

Embark on a journey with Hypersonix as we search for a proficient Data Engineer to enhance their advanced AI platform. ProfitGPT empowers enterprises with actionable insights derived from their vast array of databases and application systems. Intrigued? Read on for more details.

Hypersonix specializes in creating an AI-driven platform that turns vast amounts of data into clear, actionable insights. ProfitGPT is a cutting-edge AI tool by Hypersonix that enhances business profitability by providing smart insights and recommendations. It dives deep into business data to identify growth opportunities and areas for cost savings, offering actionable advice to improve decision-making and boost the bottom line.


About the project:

As a Data Engineer, you will collaborate closely with our Engineering, Product, and Implementation teams to develop state-of-the-art machine learning solutions tailored for our business-to-business (B2B) software as a service (SaaS) applications.


Main tasks:

  • Design, implement, and manage large-scale data pipelines, ensuring data quality and accessibility;
  • Develop and optimize ETL processes to integrate, transform, and load data from diverse sources;
  • Utilize parallel computing libraries such as PySpark and Dask to efficiently process and analyze large datasets (a PySpark sketch follows this list);
  • Apply statistical and machine learning techniques to derive deep insights from data, contributing to strategic planning and innovation;
  • Manage version control using Git and deploy scalable data infrastructures on cloud platforms like AWS;
  • Automate repetitive tasks through scripting (Bash) and orchestrate complex data workflows with tools like Airflow (an Airflow sketch also follows this list).
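
To give a concrete flavor of the pipeline work described above, here is a minimal PySpark sketch of an ETL aggregation step. The bucket paths, column names, and schema are illustrative assumptions, not Hypersonix's actual data model.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Read raw order events (hypothetical S3 location and schema)
orders = spark.read.parquet("s3://example-bucket/raw/orders/")

# Clean and aggregate: drop malformed rows, then compute daily revenue per product
daily_revenue = (
    orders
    .filter(F.col("order_total").isNotNull())
    .withColumn("order_date", F.to_date("created_at"))
    .groupBy("order_date", "product_id")
    .agg(F.sum("order_total").alias("revenue"))
)

# Write partitioned output for downstream analytics consumers
daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/daily_revenue/"
)

spark.stop()
```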
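And for the orchestration side, a minimal Airflow DAG sketch showing how two pipeline stages can be chained. This assumes Airflow 2.x; the DAG name, schedule, and script paths are hypothetical.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# A hypothetical daily ETL workflow: extract raw data, then transform it
with DAG(
    dag_id="daily_orders_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = BashOperator(
        task_id="extract",
        bash_command="python /opt/etl/extract_orders.py",  # hypothetical script
    )
    transform = BashOperator(
        task_id="transform",
        bash_command="spark-submit /opt/etl/transform_orders.py",  # hypothetical script
    )

    # Run the transform only after extraction succeeds
    extract >> transform
```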

What you already have:

  • A minimum of 3 years of experience in Data Engineering, with demonstrated expertise in handling big data projects;
  • Strong programming skills in Python and proficiency in SQL for complex data manipulation and analysis;
  • Familiarity with parallel computing libraries (e.g., PySpark, Dask) and hands-on experience in large-scale data processing;
  • Solid understanding of statistics, machine learning algorithms, and analytical methods for data analysis;
  • Experience with Git for version control and cloud platforms (AWS preferred) for data infrastructure management;
  • Competence in scripting (Bash) and workflow management tools (e.g., Airflow) for efficient data operation orchestration;
  • A degree in Computer Science, Engineering, Operations Research, or a related field; an advanced degree is preferred;
  • Exceptional analytical and problem-solving skills, with a knack for innovative solutions to complex data engineering challenges.

In exchange for your skills, we offer:

  • Supportive relationships, built on transparency and a flat structure, in a diverse and multinational team;
  • An office in the center of historic Kraków, where your dog is always welcome;
  • Flexibility when it comes to working from the office or home;
  • Perks including Multikafeteria, Generali group life insurance, Luxmed, Multisport, and 1:1 language lessons;
  • Sports and other events, including weekly running, squash, and team lunches on the house;
  • Tea, coffee, and all-you-can-eat fruits and nuts in the office.

The recruitment process includes:

  • A 30-minute HR screening call via Zoom;
  • A one-hour technical interview;
  • A general call with the team leader;
  • An offer meeting.

Tags: Git, PySpark, Statistics, SQL, Airflow, Python, Amazon Web Services (AWS), Machine Learning, Bash, Dask, Data Engineering
