Armada — AI & Machine Learning

AI Factory Customer Engineer

Australia
Full-time
Not Disclosed

Job Description

About the Company

Armada is a full-stack edge infrastructure company delivering compute, connectivity, and sovereign AI/ML to some of the world’s most remote places. Named one of Fast Company's Most Innovative Companies, Armada’s solutions are deployed in over 60 countries globally for organizations ranging from energy to defense.

With over $200 million in funding, Armada is backed by top investors including Microsoft's M12 and Founders Fund, and has strategic partnerships with Starlink, Skydio, and NVIDIA. We are looking for the most brilliant minds in the world to join us.

Working at Armada means taking ownership, driving autonomy, and delivering impact. You’ll tackle challenges that haven’t been solved before and help build something transformative from the ground up. What you do here will not only define your career but help further Armada’s mission to bridge the digital divide for customers around the world.

About the Role

The AI Factory Customer Engineer plays a pivotal role in bridging the gap between customers and Armada’s Product and Engineering teams. This role requires deep technical credibility, hands‑on infrastructure experience, and strong interpersonal skills to translate complex AI infrastructure and data center architectures into clear, practical customer solutions.

This role goes beyond traditional pre‑sales. The Customer Engineer acts as the primary technical interface between customers and Armada Engineering, ensuring customer requirements, constraints, and feedback are accurately represented in solution design and incorporated into the evolution of the AI Factory platform.

Armada’s AI Factory serves a broad and growing customer base across enterprise, industrial, and infrastructure‑heavy verticals. Strategic customer segments include AI data center co‑location providers, Neocloud operators, renewable energy operators, telecoms, MSPs, and land‑and‑power asset owners. These customers rely on Armada’s modular, liquid‑cooled AI Factory for rapid, scalable deployment of high‑density AI compute.

The ideal candidate brings an engineering‑first mindset; strong expertise in modular and liquid‑cooled data centers and GPU‑based systems; and the energy, curiosity, and positivity required to thrive in fast‑moving, ambiguous environments. This role is instrumental in driving adoption, trust, and long‑term customer success.

Key Responsibilities

  • Serve as the primary technical partner between customers and Armada’s Product and Engineering teams, translating real‑world requirements into actionable designs.
  • Provide hands‑on technical guidance on AI Factory solutions, including modular and liquid‑cooled data centers and NVIDIA‑based GPU systems.
  • Advise customers on workload suitability, rack‑level design, system architecture, and deployment tradeoffs.
  • Lead technical demos, proofs‑of‑concept, and working sessions tailored to customer environments and constraints.
  • Build trusted‑advisor relationships with engineering, infrastructure, IT, OT, and security stakeholders.
  • Enable Sales and field teams by distilling complex infrastructure topics into clear, outcome‑driven narratives.
  • Take end‑to‑end ownership of technical engagements, bringing curiosity, urgency, and a problem‑solving mindset.

Required Qualifications & Technical Expertise

  • Bachelor’s degree in Engineering, Computer Science, or related field (or equivalent hands‑on experience); advanced degrees a plus.
  • 5+ years of experience in data center engineering, infrastructure engineering, pre‑sales/sales engineering, or solution architecture.
  • Strong foundation in compute, networking, and storage, including GPU‑based AI systems (e.g., NVIDIA DGX, HGX, MGX).
  • Hands‑on experience with data center infrastructure, including MEP systems, cooling architectures, and rack‑level design.
  • Deep familiarity with modular and/or liquid‑cooled data center architectures, power density, and thermal management.
  • Ability to translate AI workload requirements into scalable, production‑ready infrastructure designs.
  • Working knowledge of cloud platforms, containerization, virtualization, and modern enterprise infrastructure.
  • Comfort operating in complex, live customer environments, with strong troubleshooting skills.
  • Ability to build credibility across IT and OT stakeholders, particularly in industrial or energy‑adjacent contexts.
  • Excellent communication skills, capable of engaging both deeply technical teams and executive audiences.
  • Willingness to travel as required.
