Lead Data Engineer at N-iX #vacancy #remote

Requirements:

  • Prior experience architecting and building successful self-service, enterprise-scale data platforms in a greenfield environment with a microservices-based architecture.
  • Proficiency in building end-to-end data platforms and data services in GCP is a must.
  • Proficiency in tools and technologies: BigQuery, Cloud Functions, Cloud Run, Dataform, Dataflow, Dataproc, SQL, Python, Airflow, PubSub.
  • Experience with microservices architectures – Kubernetes, Docker. Our microservices are built using the TypeScript, NestJS, NodeJS stack; candidates with this experience are preferred.
  • Experience building semantic layers.
  • Proficiency in architecting, designing, and developing batch and real-time streaming infrastructure and workloads.
  • Solid experience architecting and implementing metadata management, including data catalogues, data lineage, data quality, and data observability for big data workflows.
  • Hands-on experience with GCP ecosystem and data lakehouse architectures.
  • Strong understanding of data modeling, data architecture, and data governance principles.
  • Excellent experience with DataOps principles and test automation.
  • Excellent experience with observability tooling: Grafana, Datadog.

Our customer is gearing up to revolutionise their data landscape by building a cutting-edge Enterprise Data Lakehouse Platform. They are forming multiple teams that will spearhead the creation of the platform’s foundational components. These teams go beyond traditional data ingestion; they are architects of a microservices-driven platform, providing abstractions that empower other teams to seamlessly extend the platform.

We are seeking a dynamic and highly skilled Data Engineer with extensive experience building self-service, enterprise-scale data platforms with microservices architecture to lead these foundational efforts. This role demands someone who not only possesses a profound understanding of the data engineering landscape but also has a very strong software engineering background, especially in building microservices frameworks and architectures. The ideal candidate will act as both an individual contributor and the technical lead, contributing significantly to platform development and actively shaping our data ecosystem.

Responsibilities:

  • Design and build self-service, enterprise-scale data platforms using a microservices-based architecture from scratch in a greenfield environment.
  • Architect and implement microservices architectures. Create microservices frameworks and components that provide abstractions for seamless extension by other teams.
  • Build and integrate semantic layers within the data platform to enhance functionality and efficiency.
  • Design and develop infrastructure for batch and real-time streaming data workloads, ensuring efficient and scalable data processing.
  • Implement comprehensive metadata management solutions, including data catalogues, data lineage, data quality, and data observability for big data workflows.
  • Utilize hands-on experience with the GCP ecosystem to build and maintain data lakehouse architectures that support the company’s data strategy.
  • Implement DataOps principles and automate testing processes to ensure continuous integration, delivery, and deployment of data solutions.
  • Develop and use observability tools like Grafana and Datadog to monitor platform performance and ensure system reliability.
  • Act as both an individual contributor and a technical lead, guiding the development of the platform and shaping the data ecosystem. Provide mentorship and direction to other engineers in the team.
  • Work closely with multiple teams to align platform development with business goals and technical requirements. Facilitate seamless integration and extension of platform components by other teams.

Key skills: GCP, BigQuery, Microservices architecture, Python, Docker, SQL, Airflow, Kubernetes, TypeScript, NestJS

Tools: Agile, Scrum

Additionally: Flexible working hours and remote work possibility, Mentoring program, Life insurance, Training budget, English lessons, Compensation of Certifications, Active tech community, International team, Referral program, Cafeteria, Modern office, Free coffee, Kitchen, Friendly atmosphere.

