Mid+/Senior Data Engineer (GCP) at Capco Poland #vacancy #remote

*We are looking for a Poland-based candidate. The job is remote but may require occasional business trips.

Joining Capco means joining an organisation that is committed to an inclusive working environment where you’re encouraged to #BeYourselfAtWork. We celebrate individuality and recognise that diversity and inclusion, in all forms, are critical to success. It’s important to us that we recruit and develop as diverse a range of talent as we can, and we believe that everyone brings something different to the table – so we’d love to know what makes you different. Such differences may mean we need to make changes to our process to give you the best possible platform to succeed, and we are happy to accommodate any reasonable adjustments you may require. You will find a section to let us know about these at the bottom of your application form, or you can mention them directly to your recruiter at any stage and they will be happy to help.

Capco Poland is a global technology and management consultancy specializing in driving digital transformation across the financial services industry. We are passionate about helping our clients succeed in an ever-changing industry.

Currently, we’re seeking a skilled Data Engineer with expertise in Google Cloud Platform (GCP) to join our team. The ideal candidate will be responsible for designing, implementing, and maintaining scalable data pipelines and solutions across on-premises, migration, and cloud projects for large-scale data processing and analytics.

THINGS YOU WILL DO

  • Design, develop and maintain robust data pipelines using SQL, Python/Scala, and Spark for batch and streaming data processing
  • Collaborate with cross-functional teams to understand data requirements and design efficient solutions that meet business needs
  • Implement data ingestion, transformation, and storage processes leveraging GCP services such as BigQuery, Dataflow, Dataproc, and Pub/Sub
  • Optimize Spark jobs and data processing workflows for performance, scalability and reliability
  • Ensure data quality, integrity and security throughout the data lifecycle
  • Troubleshoot and resolve data pipeline issues in a timely manner to minimize downtime and impact on business operations
  • Stay updated on industry best practices, emerging technologies, and trends in big data processing and analytics
  • Document design specifications, deployment procedures, and operational guidelines for data pipelines and systems
  • Provide technical guidance and mentorship to new joiners

TECH STACK: Python or Scala, Spark, GCP, SQL, Hadoop, Pub/Sub, BigQuery, Kafka

SKILLS & EXPERIENCES YOU NEED TO GET THE JOB DONE

  • University degree in computer science, mathematics, natural sciences, or a similar field, and relevant working experience
  • Excellent SQL skills, including advanced concepts
  • Very good programming skills in Python or Scala
  • Experience in Spark, Hadoop, ETL, and big data technologies such as Hive and Impala
  • Experience in CI/CD
  • Experience in Kafka
  • Experience using agile frameworks like Scrum
  • Interest in financial services and markets
  • Experience with GCP is a nice-to-have
  • Fluent English communication and presentation skills
  • Sense of humor and positive attitude

WHY JOIN CAPCO?

  • Employment contract and/or Business-to-Business (B2B) – whichever you prefer
  • Possibility to work remotely
  • Speaking English on a daily basis, mainly in contact with international stakeholders and peers
  • Multiple employee benefits packages (MyBenefit Cafeteria, private medical care, life insurance)
  • Access to a platform with 3,000+ business courses (Udemy)
  • Access to required IT equipment
  • Paid Referral Program
  • Participation in charity events e.g. Szlachetna Paczka
  • Ongoing learning opportunities to help you acquire new skills or deepen existing expertise
  • Being part of the core squad focused on the growth of the Polish business unit
  • A flat, non-hierarchical structure that will enable you to work with senior partners and directly with clients
  • A work culture focused on innovation and creating lasting value for our clients and employees

ONLINE RECRUITMENT PROCESS STEPS*

  • Screening call with the Recruiter
  • Technical interview: first stage
  • Client Interview
  • Feedback/Offer

*The recruitment process may be modified

