We’re looking for someone with the following background:
- Master’s degree in computer science, engineering, software development or a related field, with at least 5 years’ experience working with contemporary cloud technologies, preferably AWS.
- Experience deploying with CI/CD and working with modern build, test and deployment tools, ideally in large organisations
- Strong skills with Python and at least one more of the following programming languages: TypeScript, Go, Java, Scala, C++, C# or Rust.
- Strong skills in SQL and experience with modern data warehouses and data modelling principles
- Experience with containerisation using Docker or similar tools
- Experience with Spark (preferably on AWS Glue)
- Experience with software development best practices for writing, reviewing and refactoring code
- The ability to understand complex technical matters, form an educated opinion, and communicate with impact to a variety of stakeholders with different levels of technical expertise.
- Experience working in a regulated industry would be an advantage
For our client, a company in the pharmaceutical sector, we are conducting a recruitment process for the position of Senior Data Engineer.
Are you passionate about data and technology? Do you seek out best practices and developments in data, software, and cloud engineering, and bring them into your work? Do you love collaborating with your colleagues, sparring and reviewing each other’s code, and bringing them along in your own professional development?
We’re looking for a Senior Data Engineer to help us drive improvements in our data pipeline development for stakeholders across the organisation. You’ll be a part of the Data Management Centre of Excellence within Digital, Data & IT, where we focus on creating high value, innovative data products.
Behind the position
You will be anchored in our data platform product team, where you will join a team forming partnerships with the line of business. You will participate in projects, workshops and PoCs with data engineers and product teams. You will bring your expertise to our partners, with the possibility of more long-term assignments to specific products.
As part of the data platform team, you will be responsible for tools, templates and data engineering practices of strategic importance to data products, and for mentoring less experienced colleagues.
Since you will be working in a global environment, you should expect to travel to Copenhagen an estimated 1-2 times per quarter (5-10 days) and to occasionally attend meetings outside Polish working hours (1-2 meetings per quarter, until 18:00). Fluent spoken and written English is a prerequisite.
About the department
You will join our team of architects, developers, analysts, and data engineers in a global, high-performing department with a variety of technical, analytical, and multinational backgrounds. We are expanding our offerings to the global organisation, so we need additional people to succeed on our journey and realise our ambitious vision.
We do data acquisition and ingestion; data analysis and data engineering; evangelising the use of data and advanced analytics; and supporting data science users of our platform. We are cloud native and work primarily on AWS, delivering solutions using the Scrum methodology. We have built, and now operate, the enterprise data lake, data mesh and data science platform on AWS that support our vision: to deliver world-class data management.
As Senior Data Engineer, you will join a new team of data engineers in Poland driving ongoing development and delivery of our data management solutions. You will be working in an Agile environment with regular sprint meetings and a daily focus on building new data solutions on AWS, Azure and other platforms. In this role, you will be:
- Providing technical expertise and guidance for the components that make up our data ecosystem on AWS
- Leading and defining the end-to-end architecture of our data pipeline solutions, in collaboration with your colleagues
- Taking the role of hands-on developer, tester, and integration and deployment specialist on data pipelines
- Building, evangelising and improving data pipeline templates that colleagues can use to build validated solutions in a streamlined fashion
- Driving continuous improvement using behaviour-driven design principles, DevOps practices, and awareness of compliance requirements
- Coaching and sharing knowledge with colleagues through demos, co-development and sparring, ensuring that expertise is spread across our data engineers to enable collaborative ways of working
Requirements: Python, TypeScript, Java, Scala, C++, AWS, Rust, SQL, data warehouses, data modelling, Docker, Spark, Glue
Additionally: Private healthcare, sport subscription, international projects