JOB DETAILS
ML Engineer (ONSITE IN SF)
Company: PulseRise Technologies
Location: San Francisco
Work Mode: On Site
Posted: May 2, 2026

About The Company
PulseRise Technologies — The Pulse of Global IT Talent
At PulseRise Technologies, we don’t just build tech teams — we breathe life into innovation. We are an international IT outstaffing company based in Cyprus, helping ambitious organizations across Europe, LATAM, and the USA connect with exceptional technology professionals.
We are the pulse behind global success stories — keeping companies in motion, projects in sync, and ideas alive.
Because every thriving company needs one thing: a heartbeat that never stops.
Headquartered in Cyprus, PulseRise Technologies operates across Europe and Latin America, bridging businesses and talent through a multilingual, multicultural team. We understand local markets, respect global standards, and align perfectly with your working culture.
From emerging startups to enterprise giants, PulseRise Technologies is the silent rhythm that keeps your delivery on time, your teams inspired, and your goals alive.
Because when your company finds its pulse — everything flows.
Our essence is inspired by the word “pulse” — the rhythm, heartbeat, and energy that powers every connection. Just as a pulse gives life to the body, PulseRise Technologies keeps your business alive, vibrant, and constantly moving forward.
Our Services:
🔹 Software & Web Development – Scalable, secure, and tailored to your goals.
🔹 Cloud & DevOps Engineering – Fast, reliable, and automated for performance.
🔹 Data Analytics & BI – Turning numbers into decisions that drive growth.
🔹 QA & Automation Testing – Precision that ensures perfection.
🔹 UI/UX & Product Design – Interfaces that connect logic with emotion.
🔹 L1–L3 Technical Support – Always-on reliability that builds trust with your users.
We build high-performing remote tech teams and deliver end-to-end solutions that keep your business agile, competitive, and cost-efficient.
About the Role
Dear applicants, please keep in mind that applications without salary expectations and an active LinkedIn profile will not be considered. We hope for your understanding.

Location: San Francisco, CA (In-person)
Employment Type: Full-Time
Equity: 0.5% – 1%
Visa: Not available
Experience: 1+ years (exceptional new grads welcome)

We are hiring ML Engineers to implement research ideas reliably and operate full training pipelines end-to-end. This is not a research-only role; it is research engineering at scale. We are a seed-stage, research-driven ML company focused on mechanistic understanding of model architectures and optimizers.

The team studies:
- Optimizer–architecture co-design
- Orthogonalized optimizers and manifold-based training
- Sparse attention mechanics
- Data-efficient reasoning models
- Learning dynamics in data-sparse regimes

The environment blends academic rigor with industrial compute and speed. The team is deliberately long-term oriented and avoids premature commercialization pressure.

You will:
- Translate research papers into working PyTorch/JAX implementations
- Run distributed transformer training
- Debug divergence and instability
- Optimize throughput
- Build full pipelines (data → training → evaluation)
- Reason about learning dynamics and architecture tradeoffs

The bar is slope and research intuition, not years of experience.

What You’ll Own
- Reliable implementation of novel architectures
- Distributed transformer training at scale
- Training stability and performance debugging
- Evaluation frameworks
- Optimization reasoning alongside researchers

Must-Have Requirements
- Strong PyTorch or JAX proficiency
- Hands-on transformer training experience
- Experience with distributed training setups
- Debugging divergence and instability
- Ability to read and implement research papers
- Research intuition around optimization and learning dynamics
- High growth slope

Nice to Have
- Megatron-LM, DeepSpeed, xFormers
- End-to-end pipeline ownership
- Research-engineering team experience
- Mathematical depth (optimization, information theory, etc.)
- Competitive programming / theory-heavy background
Key Skills
PyTorch, JAX, Transformer training, Distributed training, Debugging, Optimization, Learning dynamics, Research implementation, Megatron-LM, DeepSpeed, xFormers, Pipeline development, Information theory
Categories
Technology, Software, Data & Analytics, Science & Research, Engineering
Benefits
Equity
Job Information
📋Core Responsibilities
You will translate research papers into functional PyTorch or JAX implementations and operate full-scale distributed transformer training pipelines. Additionally, you will be responsible for debugging training instability and optimizing throughput for research-driven model architectures.
📋Job Type
Full-time
📊Experience Level
0–2 years
💼Company Size
2
📊Visa Sponsorship
No
💼Language
English
🏢Working Hours
40 hours