Mid Big Data Engineer @ Capco Poland #vacancy #remote

SKILLS & EXPERIENCES YOU NEED TO GET THE JOB DONE

  • Min. 3-4 years of experience as a Data Engineer / Big Data Engineer 
  • University degree in computer science, mathematics, natural sciences, or a similar field, plus relevant working experience 
  • Excellent SQL skills, including advanced concepts 
  • Very good programming skills in Python or Scala 
  • Experience in Spark and Hadoop 
  • Experience in OOP 
  • Experience using agile frameworks like Scrum 
  • Interest in financial services and markets 
  • Experience with or knowledge of GCP 
  • Fluent English communication and presentation skills 
  • Sense of humor and positive attitude 

WHY JOIN CAPCO?

  • Employment contract or B2B (Business-to-Business) – whichever you prefer 
  • Possibility to work remotely 
  • Speaking English on a daily basis, mainly with international stakeholders and peers 
  • Multiple employee benefits packages (MyBenefit Cafeteria, private medical care, life-insurance) 
  • Access to a platform with 3,000+ business courses (Udemy) 
  • Access to required IT equipment 
  • Paid Referral Program 
  • Participation in charity events, e.g. Szlachetna Paczka 
  • Ongoing learning opportunities to help you acquire new skills or deepen existing expertise 
  • Being part of the core squad focused on the growth of the Polish business unit 
  • A flat, non-hierarchical structure that will enable you to work with senior partners and directly with clients 
  • A work culture focused on innovation and creating lasting value for our clients and employees 

ONLINE RECRUITMENT PROCESS STEPS

  • Screening call with the Recruiter (30 mins)
  • Technical interview: first stage (1 hour)
  • Client Interview (1 hour)
  • Feedback/Offer 

KEY RESPONSIBILITIES

  • Design, develop, and maintain robust data pipelines using SQL, Python, and Spark for batch and streaming data processing 
  • Collaborate with cross-functional teams to understand data requirements and design efficient solutions that meet business needs 
  • Implement data ingestion, transformation, and storage processes leveraging GCP services such as BigQuery, Dataflow, Dataproc, and Pub/Sub 
  • Optimize Spark jobs and data processing workflows for performance, scalability, and reliability 
  • Ensure data quality, integrity, and security throughout the data lifecycle 
  • Troubleshoot and resolve data pipeline issues in a timely manner to minimize downtime and impact on business operations 
  • Stay updated on industry best practices, emerging technologies, and trends in big data processing and analytics 
  • Document design specifications, deployment procedures, and operational guidelines for data pipelines and systems 
  • Provide technical guidance and mentorship to new joiners 

Requirements: Big Data, Scala, Python, Kafka, degree, SQL, Spark, Hadoop, OOP, GCP. Additionally: private healthcare, employee referral bonus, MyBenefit, Udemy for Business.

