Big Data Engineer @ Kontakt.io #vacancy #remote

We are looking for a Big Data Engineer who:

  • designs and implements scalable, reliable, and efficient data processing pipelines for ingesting, transforming, and storing large volumes of data.
  • develops and maintains ETL (Extract, Transform, Load) processes to ensure data quality and consistency.
  • optimizes data storage and retrieval processes for performance and cost efficiency.
  • monitors and troubleshoots data processing jobs to ensure system stability and reliability.
  • works closely with cross-functional teams to integrate Big Data solutions into existing systems and applications.
  • acknowledges that simple does not mean easy.
  • understands that being a product company means a strong customer focus: our financial success depends on the product.
  • is not afraid to take ownership and takes pride in their contribution to the product.
  • has a high level of commitment to driving things to completion (delivering real value).
  • enjoys technical challenges, is eager to explore new technologies and solutions, and is driven by curiosity.

Mission Statement:

We help businesses deploy resources and processes efficiently and make their customers and staff feel seen and valued.

Kontakt.io is a leader in location IoT. Our mission is to simplify the delivery of location and sensor data insights. We create the data foundation that drastically improves and automates decision making in resource planning, operations, and customer experience workflows.

Our portfolio of complete IoT and location solutions combines hardware, software, and cloud to bring real-time visibility, analytics, and AI to operations. Today, we serve over 2,000 customers of diverse sizes across industries, from transportation and logistics to manufacturing, healthcare, airports, governments, and public spaces. They use Kontakt.io to reduce emergency incident time, decrease asset search times, introduce activity-based costing, automate manual processes, digitize physical order traceability, and prevent machine downtime.

Responsibilities:

  • Leading the scaling and execution of the big data architecture.
  • Providing technical guidance and leadership to the team on Big Data, ensuring the creation of scalable and fault-tolerant data processing pipelines for ingesting, transforming, and storing data.
  • Collaborating with the Apps Leads to understand data requirements and design scalable, efficient software solutions; ensuring adherence to design principles, maintainable code, and performance optimization.
  • Creating a framework for Big Data to streamline processes and enhance efficiency.
  • Troubleshooting and resolving complex technical issues.

Requirements: Python, Java, Kafka, AWS, microservices architecture, MongoDB, PostgreSQL, Prometheus, Kubernetes, Big Data, Kotlin, Scala.

Tools: Confluence, Wiki, GitHub, Git, Jenkins, Agile, Scrum.

Additionally: Sport subscription, training budget, private healthcare, flat structure, small teams, international projects, free coffee, bike parking, shower, free parking, no dress code, startup atmosphere.

