Our partner is an energy provider that uses renewable resources and high-efficiency energy production systems. Its objective is to grow by creating value and to become an innovative yet accessible energy service provider.
Responsibilities
– Design, develop and maintain scalable data processing pipelines using Python, Apache Spark and/or Hadoop.
– Implement complex data transformations and aggregations to support big data analytics and reporting.
– Optimise data retrieval and processing performance in large-scale environments.
– Implement machine learning models and algorithms in collaboration with data science teams.
– Ensure data integrity and system stability through comprehensive testing and error handling.
– Lead the integration of big data technologies with existing systems and infrastructure.
– Mentor and guide junior engineers and team members.
Requirements
– 4+ years of Python programming experience.
– Knowledge of big data frameworks such as Apache Spark or Hadoop.
– Experience with workflow orchestration tools such as Dagster.
– Familiarity with messaging systems, especially Apache Kafka.
– Understanding of caching solutions such as Redis.
– Strong analytical and problem-solving skills.
– Excellent communication and teamwork skills.
– Must have: Hungarian language skills
– Nice to have: English language skills
Benefits
– Work remotely (hybrid available)
– Join us as an employee or contractor
– Professional challenges and development opportunities in a stable market environment
– Work with several international partners
– Environmentally friendly company
– Premium-listed company
– Extra days off