Freelance Data Engineer
Location: Fully Remote within the EU and UK
Contract Type: Freelance, 6+ months
About the Company
A leading European organisation in the mobility and smart infrastructure sector is seeking a skilled Data Engineer to support the ongoing development of its modern, cloud-based data platform. The company operates large-scale digital systems powering connected parking, transport, and urban mobility solutions across multiple countries. This role will play a key part in transforming operational and customer data into actionable insights to drive efficiency and innovation.
Key Responsibilities
- Design, build, and maintain data pipelines and ETL/ELT workflows within the Azure Databricks environment.
- Develop and scale data platform components using Terraform and Infrastructure as Code (IaC) best practices.
- Manage and optimise data storage, cataloguing, and governance through Databricks Unity Catalog.
- Automate and orchestrate complex data workflows using Databricks Workflows and Delta Live Tables (DLT) for reliable, end-to-end pipeline management (see the sketch after this list).
- Own and maintain CI/CD and DevOps processes to ensure smooth deployment and infrastructure automation.
- Establish and uphold data modelling standards that support scalability, consistency, and data accessibility across systems.
- Ensure data quality, reliability, and performance across all stages of the data lifecycle.
- Work independently on multiple initiatives, delivering robust and scalable data engineering solutions.
- Contribute to the ongoing improvement of the data platform architecture and internal engineering best practices.
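To give a flavour of the pipeline work described above, the sketch below is a minimal, illustrative Delta Live Tables pipeline in Python. The table names, quality rule, and landing path (raw_parking_events, parking_events_clean, /mnt/landing/parking_events/) are hypothetical placeholders invented for the example, not details of the client's platform.

```python
# Minimal, illustrative DLT pipeline sketch; names and paths are hypothetical.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw parking events ingested from a hypothetical landing zone.")
def raw_parking_events():
    # `spark` is provided by the DLT runtime; Auto Loader picks up new JSON files.
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/mnt/landing/parking_events/")  # hypothetical location
    )

@dlt.table(comment="Cleaned events with a basic data-quality expectation enforced.")
@dlt.expect_or_drop("valid_event_id", "event_id IS NOT NULL")
def parking_events_clean():
    # Read the upstream DLT table as a stream and stamp ingestion time.
    return (
        dlt.read_stream("raw_parking_events")
        .withColumn("ingested_at", F.current_timestamp())
    )
```

In a real pipeline this pattern extends naturally: expectations capture data-quality rules declaratively, and DLT handles orchestration, retries, and lineage across the resulting tables.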
Required Skills & Experience
- 5-8 years of experience in data engineering or a related role.
- Hands-on expertise with the Azure ecosystem, particularly Azure Databricks, Azure Data Lake Storage, and Databricks Unity Catalog.
- Proven experience using Terraform and implementing Infrastructure as Code (IaC).
- Strong understanding of data platform engineering, pipeline design, and ETL/ELT processes.
- Solid grounding in data modelling (conceptual, logical, and physical) and building scalable data structures.
- Proficiency in PySpark and a strong understanding of Spark architecture, plus working knowledge of Scala and SQL for advanced data transformations (see the sketch after this list).
- Experience managing CI/CD pipelines and DevOps automation for data infrastructure.
- Excellent analytical, problem-solving, and communication skills.
- Comfortable working independently in a fast-paced, agile environment.
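As a hedged illustration of the PySpark proficiency listed above, the following sketch shows a typical ELT-style transformation: deduplicating records with a window function and aggregating to a reporting table. The catalog, schema, tables, and columns (mobility.parking_sessions, session_id, amount_eur, and so on) are assumptions made up for the example.

```python
# Illustrative PySpark transformation; all table and column names are hypothetical.
from pyspark.sql import SparkSession, Window, functions as F

spark = SparkSession.builder.appName("example-transform").getOrCreate()

events = spark.table("mobility.parking_sessions")  # hypothetical source table

# Keep only the latest record per session, a common deduplication step.
latest = Window.partitionBy("session_id").orderBy(F.col("updated_at").desc())

daily_revenue = (
    events.withColumn("rn", F.row_number().over(latest))
    .filter("rn = 1")
    .groupBy(F.to_date("started_at").alias("day"), "city")
    .agg(F.sum("amount_eur").alias("revenue_eur"))
)

# Persist the aggregate as a managed table for downstream reporting.
daily_revenue.write.mode("overwrite").saveAsTable("mobility.daily_revenue")
```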
