JOB DETAILS

Data Engineer

Company: HART, INC.
Location: Bozeman, MT
Work Mode: On Site
Posted: May 7, 2026
About The Company
Hart is a healthcare data solutions company specializing in seamless data migration, archival, and integration. Founded in 2012 in Orange County, CA, we help healthcare organizations move, transform, and connect data with speed and efficiency. Whether migrating data between EHRs, retiring legacy systems to reduce costs, supporting M&A transitions, or continuously syncing EHR and non-EHR data into analytics, Hart delivers end-to-end support with a flexible, problem-solving, tech-first approach. Our patented process streamlines even the most complex data transitions—completing EHR migrations in as little as 12 weeks—while ensuring security, compliance, and interoperability. Hart is recognized for our collaborative approach, providing direct access to our experts and delivering clear, effective solutions to even the most complex healthcare data challenges. Data problems? Solved.
About the Role

Description


Hart is a healthcare technology company focused on clinical data integration and archiving. We build the infrastructure that connects disparate health systems, providing a unified source of information for providers and patients.


Job Summary

We are looking for a Data Engineer to build and maintain our data infrastructure. You will be responsible for developing robust ETL/ELT pipelines and managing data workflows within the Databricks ecosystem. We need a builder who values simplicity and pragmatism—someone who can navigate the necessary rigor of a regulated industry without over-engineering technical solutions. While we are open to exceptional remote talent, we have a strong preference for candidates who can work on-site with us in Bozeman, MT.


Key Responsibilities 

  • Data Pipeline Development: Design and implement scalable ETL/ELT pipelines to ingest and transform complex healthcare data.
  • Databricks Development: Build and optimize data models and processing workflows within the Databricks ecosystem.
  • Pragmatic Engineering: Focus on clean, maintainable code and simple solutions that solve the problem effectively without unnecessary complexity.
  • Performance Optimization: Write and refine high-signal Python and SQL code to improve system reliability and efficiency.
  • AI/ML Support: Develop the data foundation for machine learning workflows and assist in moving models into production.
  • Security & Compliance: Ensure all data flows meet HIPAA and security standards through intentional documentation and validation.

Culture

We handle high-stakes data where precision is mandatory, but we solve those problems with an engineering-first mindset. This isn’t a "corporate suit" environment—we are a meritocracy where the best idea wins, regardless of title.


We value high-signal, direct communication and prioritize technical execution over unnecessary fluff. Our goal is to maintain a disciplined, professional standard for our systems while keeping our team structure lean and focused. If you prefer building foundational technology in a simple, pragmatic, and fluff-free environment, you will fit in here.

Requirements

  • Professional Experience: Proven track record in data engineering or a software engineering role with a heavy focus on data.
  • Databricks: Hands-on experience with the Databricks platform.
  • Languages: High proficiency in Python and SQL.
  • ETL/ELT: Experience building and maintaining production-grade data pipelines.
  • Cloud Infrastructure: Experience working within AWS environments.


Preferred Qualifications

  • AI/ML: Experience with MLflow, model training, or integrating AI into data pipelines.
  • Healthcare Data: Familiarity with healthcare standards like HL7 and FHIR.
  • Software Fundamentals: Strong understanding of version control (Git) and CI/CD practices.

Work Authorization

Successful candidates must be able to provide proof of legal authorization to work in the United States without requiring sponsorship now or in the future.

Key Skills
Data Engineering, Databricks, Python, SQL, ETL, ELT, AWS, Data Pipelines, Data Modeling, Machine Learning, MLflow, Healthcare Data, HL7, FHIR, Git, CI/CD
Categories
Data & Analytics, Technology, Software, Healthcare, Engineering
Job Information
📋 Core Responsibilities
Design and implement scalable ETL/ELT pipelines to ingest and transform complex healthcare data within the Databricks ecosystem. Develop the data foundation for machine learning workflows and ensure all data processes comply with HIPAA and security standards.
📋 Job Type
Full time
📊 Experience Level
2-5 years
💼 Company Size
48
📊 Visa Sponsorship
No
💼 Language
English
🏢 Working Hours
40 hours