Currently our customer is gearing up to revolutionize our data landscape by building a cutting-edge Enterprise Data Lakehouse Platform. We are forming multiple teams that will spearhead the creation of the platform’s foundational components. These teams go beyond traditional data ingestion; they are architects of a microservices-driven platform, providing abstractions that empower other teams to seamlessly extend the platform.
We are seeking a dynamic and highly skilled Data Engineer with extensive experience building self-service, enterprise-scale data platforms on a microservices architecture to lead these foundational efforts. This role demands someone who not only possesses a profound understanding of the data engineering landscape but also has a very strong software engineering background, especially in building microservices frameworks and architectures. The ideal candidate will serve as both an individual contributor and a technical lead, contributing significantly to platform development and actively shaping our data ecosystem.
Requirements:
▪ Prior experience architecting and building successful self-service, enterprise-scale data platforms in a greenfield environment with a microservices-based architecture.
▪ Proficiency in building end-to-end data platforms and data services in GCP is a must.
▪ Proficiency in tools and technologies: BigQuery, Cloud Functions, Cloud Run, Dataform, Dataflow, Dataproc, SQL, Python, Airflow, PubSub.
▪ Experience with microservices architectures – Kubernetes, Docker. Our microservices are built on the TypeScript, NestJS, Node.js stack; candidates with this experience are preferred.
▪ Experience building semantic layers.
▪ Experience architecting, designing, and developing batch and real-time streaming infrastructure and workloads.
▪ Solid experience architecting and implementing metadata management, including data catalogs, data lineage, data quality, and data observability for big data workflows.
▪ Hands-on experience with GCP ecosystem and data lakehouse architectures.
▪ Strong understanding of data modeling, data architecture, and data governance principles.
▪ Extensive experience with DataOps principles and test automation.
▪ Extensive experience with observability tooling: Grafana, Datadog.
We offer:
- Flexible working format – remote, office-based, or hybrid
- A competitive salary and good compensation package
- Personalized career growth
- Professional development tools (mentorship program, tech talks and training sessions, centers of excellence, and more)
- Active tech communities with regular knowledge sharing
- Education reimbursement
- Memorable anniversary presents
- Corporate events and team-building activities
- Other location-specific benefits