Talent.com
LLM Systems Performance Engineer (CUDA)

Phinity · Bangalore, IN
20 hours ago
Job description

We look forward to the day when AI can discover the next quantum AI accelerator, or make RL far more compute-efficient. We want to enable AI to bootstrap its own intelligence and discover new computational paradigms. Just as AlphaEvolve found a 23% speedup in Gemini's critical kernels and a 32.5% improvement in FlashAttention, we're building the infrastructure that will let every AI model optimize its own compute stack. To automate algorithm and hardware discovery, however, we first need to break the data barrier: CUDA is a low-resource language, and kernel optimization depends heavily on context and hardware that models are simply not trained on.

Phinity is building the canonical training data infrastructure that will enable agentic hardware engineering and optimization, which will fuel algorithmic discovery. We are building environments where agents learn to write kernels from a spec, optimize them on specific hardware, and, eventually, discover new hardware breakthroughs. Our customers include one of the largest frontier model labs.

We're seeking top engineers for a contractor role who can optimize hardware for model training and inference workloads and bake their industry experience into a model. This is a hybrid systems engineering / AI research role: you will review and debug model reasoning traces and design the optimal CUDA problems to teach unreleased models to automate your work in industry. Please do not apply unless you have optimized kernels before.

Skill requirements:

Languages: CUDA, C++, Python

Frameworks: JAX/XLA, PyTorch, TensorFlow (at the C++ level), Pallas

Libraries: cuBLAS, cuDNN, CUTLASS, CUB, Thrust

Compiler Tools: NVCC, PTX assembly, MLIR/XLA understanding

Hardware Knowledge: SM architecture, tensor cores, memory hierarchies (HBM, L2, shared, registers)
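
To give a concrete flavor of the hardware knowledge listed above, here is a minimal, illustrative sketch (not part of the posting, and far from a production kernel): a shared-memory tiled SGEMM that walks data down the memory hierarchy from HBM into shared memory and accumulates in registers.

```cuda
// Illustrative sketch: tiled SGEMM showing the HBM -> shared -> register path.
#include <cstdio>
#include <cassert>
#include <cuda_runtime.h>

#define TILE 16

// C = A * B for square N x N matrices, N assumed to be a multiple of TILE.
__global__ void tiled_sgemm(const float* A, const float* B, float* C, int N) {
    __shared__ float As[TILE][TILE];  // tile of A staged in shared memory
    __shared__ float Bs[TILE][TILE];  // tile of B staged in shared memory
    int row = blockIdx.y * TILE + threadIdx.y;
    int col = blockIdx.x * TILE + threadIdx.x;
    float acc = 0.0f;                 // accumulator lives in a register
    for (int t = 0; t < N / TILE; ++t) {
        // cooperative, coalesced loads from global memory (HBM)
        As[threadIdx.y][threadIdx.x] = A[row * N + t * TILE + threadIdx.x];
        Bs[threadIdx.y][threadIdx.x] = B[(t * TILE + threadIdx.y) * N + col];
        __syncthreads();
        for (int k = 0; k < TILE; ++k)
            acc += As[threadIdx.y][k] * Bs[k][threadIdx.x];
        __syncthreads();
    }
    C[row * N + col] = acc;
}

int main() {
    const int N = 64;
    size_t bytes = N * N * sizeof(float);
    float *A, *B, *C;  // unified memory keeps the sketch short
    cudaMallocManaged(&A, bytes);
    cudaMallocManaged(&B, bytes);
    cudaMallocManaged(&C, bytes);
    for (int i = 0; i < N * N; ++i) { A[i] = 1.0f; B[i] = 2.0f; }
    dim3 block(TILE, TILE), grid(N / TILE, N / TILE);
    tiled_sgemm<<<grid, block>>>(A, B, C, N);
    cudaDeviceSynchronize();
    // every element of C should equal N * 1.0 * 2.0
    for (int i = 0; i < N * N; ++i) assert(C[i] == 2.0f * N);
    printf("ok\n");
    return 0;
}
```

The work described in this role goes well beyond this baseline: beating cuBLAS/CUTLASS on real workloads typically involves tensor-core instructions, double buffering, and careful occupancy and register tuning.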

Apply if you have:
  • Achieved >10x speedups on production ML workloads
  • Written kernels that outperform vendor libraries
  • Optimized attention, GEMM, or convolution at the assembly level
  • Built custom fusions that beat XLA/Triton compiler output
  • Published papers or open-source kernels used in production