Remote Data Engineer – INTL at Insight Global #vacancy #remote

An enterprise utilities client is seeking a Data Engineer to join their team remotely.

Responsibilities include:

Design, develop, and maintain data pipelines using Python and PySpark.

Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and transform raw data into actionable insights.

Implement ETL processes to extract, transform, and load data from various sources into our data lake or warehouse.

Optimize data pipelines for performance, scalability, and reliability.

Work with Databricks or similar big data platforms to process and analyze large datasets.

Develop and maintain data transformation logic for data movement within the data warehouse.

Ensure data quality, security, and governance best practices are followed.

This role is fully remote, working a Pacific Standard Time schedule.

We are a company committed to creating diverse and inclusive environments where people can bring their full, authentic selves to work every day. We are an equal opportunity/affirmative action employer that believes everyone matters. Qualified candidates will receive consideration for employment regardless of their race, color, ethnicity, religion, sex (including pregnancy), sexual orientation, gender identity and expression, marital status, national origin, ancestry, genetic factors, age, disability, protected veteran status, military or uniformed service member status, or any other status or characteristic protected by applicable laws, regulations, and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please send a request to .

To learn more about how we collect, keep, and process your private information, please review Insight Global’s Workforce Privacy Policy: .

Required Skills & Experience

Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field, or equivalent relevant experience.

7+ years of experience as a Data Engineer or similar role.

Strong programming skills in Python and expertise in PySpark for both batch and streaming data processing.

5+ years of hands-on experience with ETL tools and processes.

Familiarity with Azure-based data platforms.

Nice to Have Skills & Experience

Utilities industry experience

Experience with Terraform

.NET or Java experience

Power BI experience

Benefit packages for this role will start on the 31st day of employment and include medical, dental, and vision insurance, as well as HSA, FSA, and DCFSA account options, and 401k retirement account access with employer matching. Employees in this role are also entitled to paid sick leave and/or other paid time off as provided by applicable law.
