What you will do
Do you enjoy working with clients across industries to investigate complex business problems and design end-to-end analytical solutions that improve their existing processes and enable data-driven insights?
Aimpoint Digital is a dynamic and fully remote data and analytics consultancy. We work alongside the most innovative software providers in the data engineering space to solve our clients’ toughest business problems. At Aimpoint Digital, we believe in blending modern tools and techniques with tried-and-true principles to deliver optimal data engineering solutions.
You will:
- Become a trusted advisor to our clients, from data owners and analytics users to C-level executives
- Work independently as part of a small team to solve complex data engineering use cases across a variety of industries
- Design and develop the analytical layer, building cloud data warehouses, data lakes, ETL/ELT pipelines, and orchestration jobs
- Work with modern tools such as Snowflake, Databricks, Fivetran, and dbt, and validate your skills with certifications
- Write code in SQL, Python, and Spark, and use software engineering best practices such as Git and CI/CD
- Support the deployment of data science and ML projects into production
- Note: You will not be developing machine learning models or algorithms
Who you are:
We are building a diverse team of talented and motivated people who deeply understand business problems and enjoy solving them. You are a self-starter who loves working with data to build analytical tools that business users can leverage daily to do their jobs better. You are passionate about contributing to a growing team and establishing best practices.
As a Senior Data Engineer, you will be expected to work independently on client engagements, take part in the development of our practice, aid in business development, and contribute innovative ideas and initiatives to our company.
- Degree in Computer Science, Engineering, Mathematics, or a related field, or equivalent experience
- Experience managing stakeholders and collaborating with customers
- Strong written and verbal communication skills
- 3+ years working with relational databases and query languages
- 3+ years building production data pipelines and the ability to work across structured, semi-structured, and unstructured data
- 3+ years of data modeling (e.g., star schema, entity-relationship)
- 3+ years writing clean, maintainable, and robust code in Python, Scala, Java, or similar programming languages
- Ability to manage an individual workstream independently
- Expertise in software engineering concepts and best practices
- DevOps experience preferred
- Experience working with cloud data warehouses (Snowflake, Google BigQuery, AWS Redshift, Microsoft Synapse) preferred
- Experience working with cloud ETL/ELT tools (Fivetran, dbt, Matillion, Informatica, Talend, etc.) preferred
- Experience working with cloud platforms (AWS, Azure, GCP) and container technologies (Docker, Kubernetes) preferred
- Experience working with Apache Spark preferred
- Experience preparing data for analytics and following a data science workflow to drive business results preferred
- Consulting experience strongly preferred
- Willingness to travel
This position is fully remote within the United Kingdom.