JOB DETAILS

Data Engineer

Company: Lusha
Location: Tel-Aviv
Work Mode: On Site
Posted: April 26, 2026
About The Company
Lusha is the leader in Sales Streaming – a new sales paradigm that streams top leads straight to salespeople and handles all the outreach, so they can escape the lead grind and just sell. Lusha’s Sales Streaming Platform is built around Sales Playlists that continuously fill up with their ideal prospects – think “Spotify for sales.” With AI doing the heavy lifting, Lusha uncovers great-fit leads salespeople never knew existed and executes tailored, perfectly timed cadences that get meetings booked. And the more you use Lusha, the smarter it gets. With Sales Streaming, salespeople spend most of their time face-to-face with relevant prospects, driving 4-6X more business.
About the Role

At Lusha, we’re building for builders. We build fast and AI-first, so we look for builders.

By a builder, we mean someone who turns “maybe” into “done”.

We’re looking for a Data Engineer to build and scale the data infrastructure behind our Sales Streaming platform. This role is about owning pipelines that process massive volumes of data and power real-time, AI-driven features used by millions of sales professionals.

You’ll work end to end — from data lakes to real-time streaming — collaborating closely with Data Science, ML, and Product teams to turn complex data into high-impact product capabilities.

This role is based in Tel Aviv. We work in a hybrid model, with 3 days a week in the office.

This might be for you if:

  • You enjoy building data systems that run at scale and serve real users
  • You like owning your work end to end, from design to production
  • You’re a problem solver who enjoys turning messy data into reliable systems
  • You value autonomy, impact, and fast decision-making
  • You’re comfortable working in dynamic, AI-forward environments

Requirements

  • 5+ years of experience building scalable data systems
  • Strong Python and SQL skills
  • Experience using GenAI for software development and improving work processes
  • Hands-on experience with modern data stacks (Spark, Airflow, AWS, Kubernetes)
  • Experience with batch and streaming data pipelines
  • A strong builder mindset, curiosity, and willingness to learn
Key Skills
Python, SQL, Data Engineering, Spark, Airflow, AWS, Kubernetes, Data Pipelines, Streaming Data, Batch Processing, GenAI, Data Lakes, Machine Learning, Data Infrastructure, System Design
Categories
Data & Analytics, Software, Technology, Engineering
Job Information
📋Core Responsibilities
You will own and scale data pipelines that process massive volumes of data to power real-time, AI-driven product features. You will collaborate with Data Science, ML, and Product teams to build end-to-end data infrastructure from data lakes to streaming systems.
📋Job Type
Full time
📊Experience Level
5-10 years
💼Company Size
366 employees
📊Visa Sponsorship
No
💼Language
English
🏢Working Hours
40 hours