JOB DETAILS

Data Engineer, Azure Cloud Platform

Company: Sandisk
Location: Bengaluru
Work Mode: On Site
Posted: April 24, 2026
About The Company
Sandisk is a leading developer, manufacturer and provider of data storage devices and solutions based on NAND flash technology. With a differentiated innovation engine driving advancements in storage and semiconductor technologies, our broad and ever-expanding portfolio delivers powerful flash storage solutions for AI workloads in datacenters, edge devices, and consumers. Our technologies enable everyone from students, gamers and home offices, to the largest enterprises and public clouds to produce, analyze, and store data. Our solutions include a broad range of solid state drives, embedded products, removable cards, and universal serial bus drives.
Company Description

Sandisk understands how people and businesses consume data and we relentlessly innovate to deliver solutions that enable today’s needs and tomorrow’s next big ideas. With a rich history of groundbreaking innovations in Flash and advanced memory technologies, our solutions have become the beating heart of the digital world we’re living in and that we have the power to shape.

Sandisk meets people and businesses at the intersection of their aspirations and the moment, enabling them to keep moving and pushing possibility forward. We do this through the balance of our powerhouse manufacturing capabilities and our industry-leading portfolio of products that are recognized globally for innovation, performance and quality.

Sandisk has two facilities recognized by the World Economic Forum as part of the Global Lighthouse Network for advanced 4IR innovations. These facilities were also recognized as Sustainability Lighthouses for breakthroughs in efficient operations. With our global reach, we ensure the global supply chain has access to the Flash memory it needs to keep our world moving forward.

Job Description

Position Overview

We are seeking a results-oriented Data Engineer with at least two years of experience developing data pipelines in cloud environments. The successful candidate will design, build, and optimize Azure-based data ingestion and transformation pipelines using PySpark and Spark SQL. This role requires collaboration with cross-functional teams to deliver high-quality, reliable, and scalable data solutions.

Duties and Responsibilities

  • Design, develop, and maintain high-performance ETL/ELT pipelines using PySpark and Spark SQL.
  • Build and orchestrate data workflows in Azure.
  • Implement hybrid data integration between on-premises databases and Azure Databricks using tools such as ADF, HVR/Fivetran, and secure network configurations.
  • Optimize Spark jobs for performance, scalability, and cost efficiency.
  • Implement and enforce best practices for data quality, governance, and documentation.
  • Collaborate with data analysts, data scientists, and business users to define and refine data requirements.
  • Support CI/CD processes, automation tools, and version control systems such as Git.
  • Perform root cause analysis, troubleshoot issues, and ensure the reliability of data pipelines.

Qualifications

Required Qualifications

  • Bachelor's degree in Computer Science, Engineering, or related field.
  • 2+ years of hands-on experience in data engineering.
  • Proficiency in PySpark, Spark SQL, and distributed processing.
  • Strong knowledge of Azure cloud services including ADF, Databricks, and ADLS.
  • Experience with SQL, data modeling, and performance tuning.
  • Familiarity with Git, CI/CD pipelines, and agile practices.

Preferred Qualifications

  • Experience with orchestration tools such as Airflow or ADF pipelines.
  • Knowledge of real-time streaming tools (Kafka, Event Hub, HVR).
  • Exposure to APIs, data integrations, and cloud-native architectures.
  • Familiarity with enterprise data ecosystems.

Additional Information


  • Job Type (exemption status): Exempt position - Please see related compensation & benefits details below
  • Business Function: Business Applications
  • Work Location: Bangalore Cosmos Office
  • Key Skills: Azure, PySpark, Spark SQL, ETL, ELT, Data Engineering, Azure Databricks, ADF, SQL, Data Modeling, Git, CI/CD, Agile, Data Pipelines, Cloud Computing
  • Categories: Data & Analytics, Technology, Software, Engineering

Job Information

  • Core Responsibilities: The Data Engineer will design, construct, and optimize Azure-based data ingestion and transformation pipelines using PySpark and Spark SQL, and collaborate with cross-functional teams to deliver reliable, scalable, and high-quality data solutions.
  • Job Type: Full time
  • Experience Level: 2-5 years
  • Company Size: 7,938
  • Visa Sponsorship: No
  • Language: English
  • Working Hours: 40 hours