Top Skills (3) & Years of Experience:
- 3 years of advanced hands-on experience designing AWS data lake solutions.
- Experience integrating Redshift with other AWS services, such as DMS, Glue, Lambda, S3, Athena, and Airflow.
- Experience with PySpark and Glue ETL scripting, including functions like relationalize, performing joins, and transforming DataFrames with PySpark code.

Nice to Have:
- Proficiency in Python programming, with a focus on developing efficient Airflow DAGs and operators.
- Competency developing CloudFormation templates to deploy AWS infrastructure, including YAML-defined IAM policies and roles.
- Experience with Airflow DAG creation.
- Familiarity with debugging serverless applications using AWS tooling such as CloudWatch Logs and Logs Insights, CloudTrail, and IAM.

Job Overview:
We are seeking an experienced AWS Redshift Data Engineer to assist our team in designing, developing, and optimizing data pipelines for our AWS Redshift-based data lakehouse. Priority needs are CloudFormation and event-based data processing using SQS to support the ingestion and movement of data from Workday to AWS Redshift for downstream consumption and analytics.

Key Responsibilities:
- Collaborate with data engineering, business analysis, and development teams to design, develop, test, and maintain robust and scalable data pipelines from Workday to AWS Redshift.
- Architect, implement, and manage end-to-end data pipelines, ensuring data accuracy, reliability, quality, performance, and timeliness.
- Provide expertise in Redshift database optimization, performance tuning, and query optimization.
- Assist with the design and implementation of workflows using Airflow.
- Perform data profiling and analysis to troubleshoot data-related issues and build solutions that address them.
- Proactively identify opportunities to automate tasks and develop reusable frameworks.
- Work closely with the version control team to maintain a well-organized and documented repository of code, scripts, and configurations using Git/Bitbucket.
- Provide technical guidance and mentorship to fellow developers, sharing best practices, tips, and techniques for optimizing Redshift-based data solutions.

Required Qualifications and Skills:
- Advanced hands-on experience designing AWS data lake solutions.
- Experience integrating Redshift with other AWS services, such as DMS, Glue, Lambda, S3, Athena, and Airflow.
- Proficiency in Python programming, with a focus on developing efficient Airflow DAGs and operators.
- Experience with PySpark and Glue ETL scripting, including functions like relationalize, performing joins, and transforming DataFrames with PySpark code.
- Competency developing CloudFormation templates to deploy AWS infrastructure, including YAML-defined IAM policies and roles.
- Experience with Airflow DAG creation.
- Familiarity with debugging serverless applications using AWS tooling such as CloudWatch Logs and Logs Insights, CloudTrail, and IAM.
- Ability to work in a highly complex, object-oriented Python platform.
- Strong understanding of ETL best practices, data integration, data modeling, and data transformation.
- Proficiency in identifying and resolving performance bottlenecks and fine-tuning Redshift queries.
- Familiarity with version control systems, particularly Git, for maintaining a structured code repository.
- Strong coding and problem-solving skills, with attention to detail in data quality and accuracy.
- Ability to work collaboratively in a fast-paced, agile environment and to communicate technical concepts effectively to non-technical stakeholders.

Illustrative sketches of several of the skills above (Glue relationalize, Airflow DAGs, SQS-driven processing, CloudFormation-defined IAM, CloudWatch Logs Insights, and Redshift query tuning) follow below.
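First, a minimal sketch of a Glue job that uses relationalize to flatten nested records, then joins and transforms the result with plain PySpark. The catalog database, table names, join column, and S3 paths are placeholders, not part of this posting.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a nested source from the Glue Data Catalog (placeholder names).
workers = glue_context.create_dynamic_frame.from_catalog(
    database="hr_raw", table_name="workday_workers"
)

# relationalize() flattens nested structs/arrays into a collection of flat
# tables keyed by path; "root" holds the top-level records.
flattened = workers.relationalize("root", "s3://example-bucket/glue-tmp/")
root_df = flattened.select("root").toDF()

# From here it is ordinary PySpark: join to a reference table and transform.
depts_df = glue_context.create_dynamic_frame.from_catalog(
    database="hr_raw", table_name="departments"
).toDF()
enriched = root_df.join(depts_df, on="dept_id", how="left")

# Write Parquet that a Redshift COPY (or a Spectrum table) can pick up.
enriched.write.mode("overwrite").parquet("s3://example-bucket/curated/workers/")
job.commit()
```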
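For the Airflow DAG items, a minimal TaskFlow-style sketch, assuming Airflow 2.x; the schedule, names, and task bodies are illustrative stubs rather than the team's actual pipeline.

```python
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def workday_to_redshift():
    @task
    def extract() -> str:
        # Pull the Workday extract and land it in S3; return the object path.
        return "s3://example-bucket/landing/workers.parquet"

    @task
    def load(s3_path: str) -> None:
        # A real DAG would issue a Redshift COPY here, e.g. via a Redshift
        # hook or operator; stubbed for the sketch.
        print(f"COPY staging.workers FROM '{s3_path}' ...")

    # TaskFlow infers the extract >> load dependency from the data flow.
    load(extract())


workday_to_redshift()
```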
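The overview calls out event-based processing with SQS. One common shape, shown here purely as a sketch, is an SQS-triggered Lambda that reads each message and hands the referenced file to the ingestion step. The message schema and the start_ingestion helper are assumptions, not the team's actual design.

```python
import json


def handler(event, context):
    """Entry point for an SQS-triggered Lambda."""
    for record in event["Records"]:  # one Record per SQS message in the batch
        body = json.loads(record["body"])
        bucket = body["bucket"]  # hypothetical message fields
        key = body["key"]
        # Hand off to whatever starts the ingestion (e.g., a Glue job run
        # or an Airflow DAG trigger); stubbed as a hypothetical helper.
        start_ingestion(bucket, key)
    return {"status": "ok"}


def start_ingestion(bucket: str, key: str) -> None:
    # Placeholder: a real pipeline might call
    # boto3.client("glue").start_job_run(...) with the file location.
    print(f"would ingest s3://{bucket}/{key}")
```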
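For the CloudFormation item, a minimal sketch of a YAML-defined IAM role, embedded as a string and deployed with boto3; the stack, role, policy, and bucket names are all placeholders.

```python
import boto3

TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  GlueEtlRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: example-glue-etl-role
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal: {Service: glue.amazonaws.com}
            Action: sts:AssumeRole
      Policies:
        - PolicyName: s3-read
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action: ['s3:GetObject', 's3:ListBucket']
                Resource:
                  - arn:aws:s3:::example-bucket
                  - arn:aws:s3:::example-bucket/*
"""

cfn = boto3.client("cloudformation")
cfn.create_stack(
    StackName="example-glue-etl-iam",
    TemplateBody=TEMPLATE,
    Capabilities=["CAPABILITY_NAMED_IAM"],  # required for named IAM resources
)
```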
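For the serverless-debugging item, a sketch of querying CloudWatch Logs Insights from Python; the log group name and filter pattern are placeholders.

```python
import time

import boto3

logs = boto3.client("logs")

# Find recent errors in a Lambda's log group (hypothetical name).
query_id = logs.start_query(
    logGroupName="/aws/lambda/workday-ingest",
    startTime=int(time.time()) - 3600,  # last hour, epoch seconds
    endTime=int(time.time()),
    queryString=(
        "fields @timestamp, @message"
        " | filter @message like /ERROR/"
        " | sort @timestamp desc | limit 20"
    ),
)["queryId"]

# Poll until the query finishes, then print the matching log lines.
while True:
    result = logs.get_query_results(queryId=query_id)
    if result["status"] in ("Complete", "Failed", "Cancelled", "Timeout"):
        break
    time.sleep(1)

for row in result.get("results", []):
    print({field["field"]: field["value"] for field in row})
```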
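For the Redshift fine-tuning item, a sketch of a typical first diagnostic pass: read the optimizer's alert log, then EXPLAIN the suspect query to look for data-movement (DS_DIST_*/DS_BCAST) steps. Connection details and table names are placeholders, and redshift_connector is just one possible driver.

```python
import redshift_connector

# Recent optimizer alerts: missing statistics, nested-loop joins,
# very large broadcasts, and the suggested fixes.
ALERTS_SQL = """
SELECT query, TRIM(event) AS event, TRIM(solution) AS solution
FROM stl_alert_event_log
WHERE event_time > DATEADD(hour, -24, GETDATE())
ORDER BY event_time DESC
LIMIT 20;
"""

# EXPLAIN exposes the join plan; DS_DIST_*/DS_BCAST steps mean rows are
# shuffled between nodes, often fixable with matching DISTKEYs/SORTKEYs.
EXPLAIN_SQL = """
EXPLAIN
SELECT d.department_name, COUNT(*)
FROM fact_hours f
JOIN dim_worker d ON f.worker_sk = d.worker_sk
GROUP BY 1;
"""

conn = redshift_connector.connect(
    host="example-cluster.example.us-east-1.redshift.amazonaws.com",
    database="dev", user="etl_user", password="...",
)
with conn.cursor() as cur:
    for sql in (ALERTS_SQL, EXPLAIN_SQL):
        cur.execute(sql)
        for row in cur.fetchall():
            print(row)
```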
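Two further sketches relate to items listed under Additional Useful Experience below. First, Type-2 SCD handling in Redshift SQL: close out changed rows, then insert fresh current versions. Table and column names, including the attr_hash change-detection column, are placeholders; redshift_connector is again assumed as the driver.

```python
import redshift_connector

# Step 1: end-date the current dimension row for any worker whose
# attributes changed in this batch (attr_hash is a precomputed row hash).
CLOSE_OUT_SQL = """
UPDATE dim_worker
SET valid_to = GETDATE(), is_current = FALSE
FROM staging_worker s
WHERE dim_worker.worker_id = s.worker_id
  AND dim_worker.is_current
  AND dim_worker.attr_hash <> s.attr_hash;
"""

# Step 2: insert a new current row for every new or changed worker.
INSERT_SQL = """
INSERT INTO dim_worker (worker_id, name, department, attr_hash,
                        valid_from, valid_to, is_current)
SELECT s.worker_id, s.name, s.department, s.attr_hash,
       GETDATE(), NULL, TRUE
FROM staging_worker s
LEFT JOIN dim_worker d
  ON d.worker_id = s.worker_id AND d.is_current
WHERE d.worker_id IS NULL OR d.attr_hash <> s.attr_hash;
"""

conn = redshift_connector.connect(
    host="example-cluster.example.us-east-1.redshift.amazonaws.com",
    database="dev", user="etl_user", password="...",
)
with conn.cursor() as cur:
    cur.execute(CLOSE_OUT_SQL)  # both statements share one transaction
    cur.execute(INSERT_SQL)
conn.commit()
```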
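Second, a small illustration of Oracle-to-Redshift conversion on a hypothetical query. Two Oracle constructs Redshift does not accept are rewritten: ROWNUM becomes LIMIT, and the (+) outer-join notation becomes an ANSI LEFT JOIN; PL/SQL stored procedures would similarly be rewritten in Redshift's PL/pgSQL.

```python
# Hypothetical Oracle query using constructs Redshift rejects.
ORACLE_SQL = """
SELECT w.worker_id, NVL(d.department_name, 'UNKNOWN')
FROM workers w, departments d
WHERE w.dept_id = d.dept_id (+)
  AND ROWNUM <= 100
"""

# Redshift equivalent: ANSI LEFT JOIN replaces (+), LIMIT replaces ROWNUM.
# (NVL carries over unchanged; Redshift supports it.)
REDSHIFT_SQL = """
SELECT w.worker_id, NVL(d.department_name, 'UNKNOWN')
FROM workers w
LEFT JOIN departments d ON w.dept_id = d.dept_id
LIMIT 100
"""
```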
Additional Useful Experience:
- Docker
- Airflow server administration
- Parquet file formats
- AWS security
- Jupyter Notebooks
- API best practices, API Gateway, route structuring, and standard API authentication protocols, including tokens
- Git and Git flow best practices
- Release management and DevOps
- Shell scripting
- AWS certifications related to data engineering or databases are a plus
- Experience with DevOps technologies and processes
- Experience with complex ETL scenarios, such as CDC and SCD logic, and integrating data from multiple source systems (a Type-2 SCD pass is sketched above)
- Experience converting Oracle scripts and stored procedures to Redshift equivalents (see the conversion sketch above)
- Experience working with large-scale, high-volume data environments
- Exposure to higher education, finance, and/or human resources data is a plus
- Proficiency in SQL programming and Redshift stored procedures for efficient data manipulation and transformation

Beacon Hill is an Equal Opportunity Employer that values the strength diversity brings to the workplace. Individuals with Disabilities and Protected Veterans are encouraged to apply. If you would like to complete our voluntary self-identification form, please use the link provided. Completion of this form is voluntary and will not affect your opportunity for employment, or the terms or conditions of your employment. This form will be used for reporting purposes only and will be kept separate from all other records.

Company Profile:
Beacon Hill Technologies, a premier National Information Technology Staffing Group, provides world-class technology talent across all industries, utilizing a complete suite of staffing services. Beacon Hill Technologies' dedicated team of recruiting and staffing experts consistently delivers quality IT professionals to meet our customers' technical and business needs. Beacon Hill Technologies covers a broad spectrum of IT positions, including Project Management and Business Analysis, Programming/Development, Database, Infrastructure, Quality Assurance, Production/Support, and ERP roles.

Learn more about Beacon Hill Staffing Group and our specialty divisions, Beacon Hill Associates, Beacon Hill Financial, Beacon Hill HR, Beacon Hill Legal, Beacon Hill Life Sciences, and Beacon Hill Technologies, by visiting our website. We look forward to working with you.

Beacon Hill. Employing the Future (TM)