Python Spark Support Data Engineer – REMOTE or HYBRID at NTT DATA #vacancy #remote

Req ID: 283296

NTT DATA Services strives to hire exceptional, innovative, and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Python Spark Support Data Engineer – REMOTE or HYBRID to join our team in Chicago, Illinois (US-IL), United States (US).

Day to Day Responsibilities:
- Troubleshoot issues related to platform infrastructure performance and ensure its stability, including server architecture, networking, security protocols, API details, process flow, and scalability.
- Effectively triage and manage L3 support tickets, ensuring proper categorization, documentation, and tracking of reported issues.
- Understand data ingestion and Extraction, Transformation, and Loading (ETL) processes, which is crucial for resolving issues related to data pipeline activities (see the pipeline sketch after the qualifications below).
- Apply data visualization and reporting concepts to assist end users with issues on visualization and data components.
- Analyze complex issues, break them down into manageable components, and propose effective solutions.
- Take ownership of critical incidents, conduct thorough investigations, identify root causes, and perform in-depth problem analysis to prevent recurrence.
- Drive incident resolution through coordination with various internal teams.
- Conduct in-depth root cause analysis for major incidents and recurring issues, documenting findings and recommending long-term solutions or process improvements to prevent future occurrences.
- Communicate clearly and concisely, understanding user queries and relaying technical information effectively, whether providing step-by-step guidance or documenting issues and resolutions.
- Offer comprehensive user guidance and assistance, ensuring users can navigate the UI product and maximize its features.
- Collaborate with internal teams to escalate complex issues, communicate bug reports and usability concerns, and contribute to feature enhancement discussions.
- Conduct deep analysis and troubleshooting for L3 support, investigating complex UI-related problems, analyzing logs, and performing root cause analysis.
- Identify opportunities to enhance support processes, tools, and workflows; proactively seek ways to improve service quality, enhance customer satisfaction, and optimize support team efficiency.
- Prepare incident reports, post-mortems, and service outage notifications to keep stakeholders informed about incidents, their root causes, and the steps taken to mitigate future risks.
- Familiarity with incident management tools such as Jira, Zendesk, or ServiceNow is beneficial; understanding the incident management process ensures proper tracking, prioritization, and resolution of support tickets related to data and platform features.

Basic Qualifications:
- Bachelor's degree in Information Science, Computer Science, Mathematics, Statistics, or a quantitative discipline in science, business, or social science.
- 4+ years of experience as a data developer/support engineer working with Python, Spark, and Azure SQL Server.
- 4+ years of experience building and maintaining data pipelines on Azure Databricks, and well versed in CI/CD and DevOps processes.
- 4+ years of coding experience in languages such as Python, SQL, and Java, essential for understanding, troubleshooting, and fixing data and platform features.
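To illustrate the kind of ETL pipeline work referenced in the responsibilities above, here is a minimal PySpark sketch of an extract-transform-load job. It is an illustrative example only, not code from NTT DATA: the paths, table, and column names (raw_events, event_id, event_ts) are hypothetical placeholders.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Minimal ETL sketch: read raw data, clean it, and write a curated output.
# All paths and column names below are illustrative placeholders.
spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: load raw events from a landing zone (path is hypothetical).
raw = spark.read.json("/mnt/landing/raw_events/")

# Transform: basic deduplication and typing, the kind of step a support
# engineer would trace when a pipeline produces unexpected results.
curated = (
    raw.dropDuplicates(["event_id"])
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .filter(F.col("event_ts").isNotNull())
)

# Load: persist the curated output for downstream reporting (format/path assumed).
curated.write.mode("overwrite").parquet("/mnt/curated/events/")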
Desired Skills:
- Hands-on support engineer who is curious about technology, able to quickly adapt to change, and who understands the technologies supporting areas such as cloud computing (AWS, Azure (preferred), etc.), data concepts, data debugging, microservices, UI features, and data security.
- A strong understanding of Databricks and Apache Spark is preferred for troubleshooting data pipeline issues, including knowledge of Spark job execution, data ingestion, transformations, and optimizations.
- Proficiency in SQL is important for querying and manipulating data within Databricks (see the query sketch at the end of this section). Additionally, knowledge of programming languages like Python or Scala enables support personnel to analyze and debug code related to data pipeline activities.
- Strong understanding of UI product features, functionality, and user experience.
- Proficient understanding of JavaScript and React JS, including the ability to identify and debug issues using logs and provide detailed information.
- Strong attention to detail, organizational skills, and the ability to prioritize tasks effectively.
- Team player: reliable, self-motivated, and self-disciplined, capable of executing multiple projects simultaneously in a fast-paced environment while working with cross-functional teams.

Where required by law, NTT DATA provides a reasonable range of compensation for specific roles. The starting pay range for this remote role is $91,548 – $177,076. This range reflects the minimum and maximum target compensation for the position across all US locations. Actual compensation will depend on a number of factors, including the candidate's actual work location, relevant experience, technical skills, and other qualifications.
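As a hedged illustration of the SQL-within-Databricks skill listed above, the following sketch runs a Spark SQL query from Python and inspects its plan, a common first step when triaging slow or incorrect queries. The table name daily_orders and its columns are hypothetical, not taken from the posting.

from pyspark.sql import SparkSession

# Sketch of querying data with Spark SQL in a Databricks-style environment.
# The table "daily_orders" and its columns are hypothetical placeholders.
spark = SparkSession.builder.appName("sql-sketch").getOrCreate()

summary = spark.sql("""
    SELECT order_date,
           COUNT(*)         AS order_count,
           SUM(order_total) AS revenue
    FROM daily_orders
    GROUP BY order_date
    ORDER BY order_date
""")

# Inspecting the physical plan helps when investigating query performance issues.
summary.explain()
summary.show(10)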

About NTT DATA Services

NTT DATA Services is a recognized leader in IT and business services, including cloud, data, and applications, headquartered in Texas. As part of NTT DATA, a $30 billion trusted global innovator with a combined global reach of over 80 countries, we help clients transform through business and technology consulting, industry and digital solutions, applications development and management, managed edge-to-cloud infrastructure services, BPO, systems integration, and global data centers. We are committed to our clients' long-term success. Visit nttdata.com or LinkedIn to learn more.

NTT DATA Services is an equal opportunity employer and considers all applicants without regard to race, color, religion, citizenship, national origin, ancestry, age, sex, sexual orientation, gender identity, genetic information, physical or mental disability, veteran or marital status, or any other characteristic protected by law. We are committed to creating a diverse and inclusive environment for all employees. If you need assistance or an accommodation due to a disability, please inform your recruiter so that we may connect you with the appropriate team.

Python Apache Spark Data Engineering
