JOB DETAILS

Principal Engineer – Data Platforms & MLOps (Databricks)

Company: Codvo.ai
Location: Pune
Work Mode: On Site
Posted: March 29, 2026
About The Company
At Codvo.ai, we help enterprises move beyond AI experimentation to real AI execution. We design, train, and operate AI systems that integrate seamlessly into enterprise workflows—delivering reliable, governed, and measurable outcomes. Powered by Hybrid Teams and our NeIO 2.0 Operational AI Platform, we enable organizations to scale AI from pilots to production—transforming operations with intelligent automation, digital coworkers, and continuous optimization. Codvo.ai — Enterprise AI Execution
About the Role
Company Overview
At Codvo, software and people transformations go hand-in-hand. We are a global, empathy-led technology services company. Product innovation and mature software engineering are part of our core DNA. Respect, Fairness, Growth, Agility, and Inclusiveness are the core values we aspire to live by each day.

We continue to expand our digital strategy, design, architecture, and product management capabilities to offer expertise, outside-the-box thinking, and measurable results.

Role Overview
We are looking for a hands-on Principal Engineer with deep expertise in Databricks to design, build, and scale enterprise-grade data platforms and MLOps pipelines. You will be the technical authority on how enterprises adopt and maximize Databricks, from ingestion to governance to machine learning deployment, and a mentor who raises the bar for engineering excellence.

Key Responsibilities
- Platform Architecture: Design and implement end-to-end data architectures on Databricks Lakehouse, covering ingestion, transformation, storage, and analytics.
- Pipelines & Workflows: Build and optimize ETL/ELT pipelines with Delta Live Tables, Spark Structured Streaming, and workflow orchestration.
- Governance & Security: Implement Unity Catalog, fine-grained access controls, and compliance frameworks across enterprise data estates.
- MLOps at Scale: Operationalize ML models using MLflow, Model Registry, and CI/CD pipelines integrated with cloud DevOps tools.
- Performance & Cost Optimization: Tune Databricks clusters, jobs, and workflows for scale, speed, and efficiency across multi-cloud deployments.
- Client Advisory: Work closely with enterprise stakeholders to provide best practices, reference architectures, and accelerators tailored to their use cases.
- Mentorship & Standards: Guide engineers in Databricks best practices, enforce coding standards, and lead design/code reviews.

Qualifications
- 8+ years in large-scale data engineering / platform engineering, with 3+ years of hands-on Databricks experience.
- Deep expertise in:
  - Databricks Lakehouse Platform (Delta Lake, Delta Live Tables, Databricks SQL).
  - Governance & Security with Unity Catalog.
  - MLOps with MLflow and model lifecycle management.
- Strong programming skills in PySpark, SQL, and Python; experience with Scala is a plus.
- Hands-on experience with cloud integration (AWS, Azure, or GCP) and DevOps pipelines (Terraform, GitHub Actions, Azure DevOps, etc.).
- Proven track record of building and scaling Databricks workloads in production for enterprise clients.
Key Skills
Databricks Lakehouse Platform, Delta Lake, Delta Live Tables, Databricks SQL, Unity Catalog, MLOps, MLflow, Model Lifecycle Management, PySpark, SQL, Python, Scala, AWS, Azure, GCP, Terraform, GitHub Actions
Categories
Technology, Data & Analytics, Engineering, Software
Job Information
📋 Core Responsibilities
The Principal Engineer will design and implement end-to-end data architectures on Databricks Lakehouse, covering ingestion, transformation, storage, and analytics. They will also build and optimize ETL/ELT pipelines and operationalize ML models using MLflow, the Model Registry, and CI/CD pipelines.
📋 Job Type
Full-time
📊 Experience Level
10+
💼 Company Size
153
📊 Visa Sponsorship
No
💼 Language
English
🏢 Working Hours
40 hours