Microsoft Fabric Data Engineer

Description
The Microsoft Fabric Data Engineer is responsible for designing, building, and maintaining scalable, secure, and high-performance data pipelines and data models within Microsoft Fabric. This role focuses on ingesting data from multiple enterprise source systems into OneLake, transforming data across Bronze, Silver, and Gold layers, and enabling trusted, analytics-ready datasets for reporting, dashboards, and advanced analytics.
The ideal candidate has hands-on experience with Microsoft Fabric, Lakehouse architecture, and modern ELT patterns, and works closely with Power BI developers and business stakeholders to deliver reliable and governed data solutions.
Requirements
Data Engineering & Pipelines
- Design, develop, and maintain end-to-end data pipelines in Microsoft Fabric using Pipelines, Dataflows Gen2, and Notebooks
- Ingest structured and semi-structured data from SQL databases, APIs, SFTP, CSV, SaaS platforms, and file-based sources into OneLake
- Implement incremental and full-load ingestion strategies with error handling, logging, and monitoring
Lakehouse & Data Modeling
- Build and manage Lakehouse architectures following the Bronze, Silver, and Gold (medallion) data pattern
- Perform data cleansing, standardization, and transformation to produce analytics-ready fact and dimension tables
- Optimize data models to support Direct Lake and Power BI semantic models
Performance, Reliability & Optimization
- Optimize pipeline execution, storage layout, and query performance across Fabric workloads
- Monitor capacity usage and proactively identify performance bottlenecks
- Troubleshoot and resolve data quality, pipeline, and refresh failures
Security, Governance & Compliance
- Implement data security controls using Microsoft Entra ID, workspace roles, and access policies
- Support data governance, lineage, and cataloging through Microsoft Purview
- Follow enterprise standards for data retention, privacy, and compliance (HIPAA/HITRUST where applicable)
Collaboration & Delivery
- Partner with Power BI Developers to ensure datasets are optimized for reporting and analytics
- Work with business and analytics teams to understand data requirements and translate them into technical solutions
- Support UAT, production deployments, and ongoing enhancements
DevOps & Best Practices
- Participate in CI/CD processes using Azure DevOps or equivalent tooling for data assets
- Maintain technical documentation, data dictionaries, and pipeline design artifacts
- Contribute to data engineering standards, frameworks, and reusable patterns
Qualifications:
Education:
- Bachelor’s degree in Computer Science, Information Systems, Data Engineering, or a related field (or equivalent experience)
Experience:
- Hands-on experience with Microsoft Fabric (Lakehouse, Pipelines, Notebooks, OneLake)
- Strong proficiency in SQL and data transformation logic
- Experience with modern data architecture concepts (ELT, Lakehouse, dimensional modeling)
- Experience with Azure data services (Azure Data Factory, Synapse, SQL, Storage)
- Familiarity with Power BI and Direct Lake integration patterns
- Experience with CI/CD for data platforms
- Healthcare or regulated-industry data experience
- Microsoft Fabric, Azure Data Engineer, or related certifications
Skills:
- Strong analytical and problem-solving skills
- High attention to data accuracy, quality, and reliability
- Ability to work cross-functionally with technical and non-technical stakeholders
- Strong documentation and communication skills
- Ability to manage multiple data initiatives in parallel