Senior AI Platform Engineer, Core Cloud Engineering
Vultr
Software Engineering, Data Science
Remote
USD 110k-140k / year
Location
Remote - United States
Employment Type
Full time
Location Type
Remote
Department
Engineering
Who We Are
Vultr is on a mission to make high-performance cloud infrastructure easy to use, affordable, and locally accessible for enterprises and AI innovators around the world. With 32 global cloud data center locations, Vultr is trusted by hundreds of thousands of active customers across 185 countries for its flexible, scalable, global Cloud Compute, Cloud GPU, Bare Metal, and Cloud Storage solutions. In December 2024, Vultr announced an equity financing at a $3.5 billion valuation. Founded by David Aninowsky and self-funded for over a decade, Vultr has grown to become the world’s largest privately held cloud infrastructure company.
Vultr Cares
100% company-paid insurance premiums for employee medical, dental, and vision plans
401(k) plan that matches 100% up to 4%, with immediate vesting
Professional Development Reimbursement of $2,500 each year
11 Holidays + Paid Time Off Accrual + Rollover Plan
Commitment matters to Vultr! Increased PTO at 3-year and 10-year anniversaries + 1-month paid sabbatical every 5 years + Anniversary Bonus each year
$500 stipend for remote office setup in first year + $400 each following year
Internet reimbursement up to $75 per month
Gym membership reimbursement up to $50 per month
Company paid Wellable subscription
Join Vultr
Vultr is seeking a highly skilled and experienced AI Platform Engineer to own the strategy and execution for embedding AI into the day-to-day workflows of our software engineering organization. The ideal candidate combines strong software engineering fundamentals with hands-on experience deploying LLM inference infrastructure and a genuine passion for accelerating how engineers work. This is your opportunity to leave your mark on the future of cloud infrastructure and transform how Vultr builds.
Key Responsibilities
Evaluate and curate open-source models — Llama, Mistral, Qwen, DeepSeek, Kimi, and others — for fit across engineering use cases including code generation, review, test writing, and summarization.
Build and maintain MCP (Model Context Protocol) servers that expose internal context — codebases, runbooks, incident history, architecture docs, development environments, and testing suites — to AI assistants and coding agents.
Integrate AI capabilities directly into GitLab CI/CD pipelines: automated code review, test generation, changelog drafting, PR summarization, and anomaly detection in build output.
Own the model lifecycle: versioning, A/B routing, quantization tradeoffs, and performance benchmarking under real engineering workloads.
Drive AI adoption across the software engineering organization — identify high-leverage workflows, instrument usage, and iterate based on real data on time-savings and quality impact.
Build and configure IDE tooling integrations — Cursor, Continue, and Copilot alternatives — backed by internal inference endpoints, keeping code off third-party APIs wherever possible.
Produce documentation, internal workshops, and working examples that help engineers go from AI-curious to AI-reliant — including a shared library of prompts, system instructions, and RAG pipelines tuned for Vultr’s stack.
Collaborate closely with Software Engineers, SREs, and Network Engineers to ensure the AI platform layer serves all teams without becoming a bottleneck or single point of failure.
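By way of illustration only (not a specification of Vultr's internal systems): inference servers such as vLLM and TGI expose an OpenAI-compatible chat-completions API, so the kind of CI/CD integration described above, e.g. automated merge-request summarization, might look like the sketch below. The endpoint URL, model name, and function names are all hypothetical placeholders.

```python
# Hypothetical sketch: summarizing a merge-request diff by POSTing to an
# internal, OpenAI-compatible inference endpoint (the API shape vLLM and
# TGI expose). Endpoint URL and model name are placeholders, not real
# internal services.
import json
import urllib.request

INFERENCE_URL = "http://llm.internal.example/v1/chat/completions"  # placeholder
MODEL = "llama-3.1-70b-instruct"  # placeholder model name


def build_review_payload(diff_text: str, max_tokens: int = 512) -> dict:
    """Build an OpenAI-style chat-completion request for a diff summary."""
    return {
        "model": MODEL,
        "max_tokens": max_tokens,
        "messages": [
            {
                "role": "system",
                "content": "You are a code reviewer. Summarize this diff "
                           "and flag risky changes.",
            },
            {"role": "user", "content": diff_text},
        ],
    }


def summarize_diff(diff_text: str) -> str:
    """POST the payload to the internal endpoint and return the summary."""
    req = urllib.request.Request(
        INFERENCE_URL,
        data=json.dumps(build_review_payload(diff_text)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

In a GitLab pipeline, a script like this would typically run in a review stage with the diff piped in from `git diff`, posting the result back as a merge-request note.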
Qualifications
Hands-on experience deploying and operating LLM inference systems — vLLM, SGLang, TGI, or comparable — at non-trivial scale.
Strong Docker and container skills; comfortable owning the full container lifecycle from image build to production.
Deep familiarity with GitLab CI/CD — pipeline authoring, custom runners, artifact management, and integrating external tooling.
Working knowledge of MCP or similar context-injection patterns for grounding LLMs against private or internal data.
Demonstrated ability to evaluate open-source models for specific task fit — not just benchmarks, but real use-case performance against internal workloads.
Strong software engineering fundamentals — this role writes real code, not just configuration.
Experience with RAG pipelines — vector databases, chunking strategies, retrieval evaluation — especially over code or technical documentation.
GPU infrastructure familiarity — CUDA basics, multi-GPU serving, memory management under inference load.
Ability to communicate technical tradeoffs clearly to engineers, managers, and leadership; track record of moving organizations toward new practices.
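As context for the RAG qualification above: "chunking strategies" refers to how documents are split before embedding and retrieval. A toy sketch of the simplest approach, fixed-size windows with overlap, is shown below; real pipelines over code usually split on syntactic boundaries (functions, headings) instead, and all names here are illustrative.

```python
# Illustrative only: fixed-size chunking with character overlap, the
# simplest of the chunking strategies mentioned above. Overlap keeps
# context that straddles a chunk boundary retrievable from both sides.
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping windows of at most chunk_size characters."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap  # how far each window advances
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break  # last window already reaches the end of the text
    return chunks
```

Each chunk would then be embedded and stored in a vector database; retrieval evaluation measures whether the chunks fetched for a query actually contain the answer.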
Compensation
$110,000 - $140,000
Final compensation will vary depending on years of experience, background/skill set, location, and applicable laws.
Inclusion & Privacy
We are an equal opportunity employer and are committed to creating an inclusive environment for all employees. We welcome applications from individuals of all backgrounds and experiences, and we prohibit discrimination based on race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, veteran status, or any other protected status under applicable laws. Vultr will consider qualified applicants with arrest or conviction records in accordance with applicable laws and will not conduct a background check until after an offer of employment has been extended and accepted.
We also take your privacy seriously. We handle personal information responsibly and follow applicable laws, including U.S. privacy rules and India’s Digital Personal Data Protection Act, 2023. Your data is used only for legitimate business purposes and is protected with proper security measures.
Where allowed by law, applicants may request details about the data we collect, access or delete their information, withdraw consent for its use, and opt out of nonessential communications. For more details, please see our Privacy Policy.