Enterprise Architect
Job description
- Define and own AI reference architecture across data ingestion, model orchestration, inference services and application integration
- Architect and deploy LLM solutions (e.g. GPT, BERT and other transformer-based models), including RAG pipelines and vector databases
- Lead LLMOps / MLOps strategy including model lifecycle, CI/CD for ML, model registry and monitoring
- Design scalable cloud-native AI solutions in Azure, AWS or GCP
- Ensure governance, Responsible AI, security, compliance and non-functional requirements (NFRs)
- Engage senior stakeholders and shape AI roadmaps from discovery through delivery
Requirements
- Proven experience as an AI Architect / Machine Learning Architect / GenAI Architect
- Hands-on expertise with LLMs, RAG, LangChain, LangGraph, prompt engineering
- Strong cloud experience: Azure OpenAI, AWS Bedrock/SageMaker or GCP Vertex AI
- Experience with Kubernetes, Docker, microservices, API integration
- Strong knowledge of Python, MLOps, LLMOps, CI/CD, model monitoring
- Experience delivering enterprise AI solutions at scale
Desirable
- Experience with vector databases (Pinecone, FAISS)
- AI governance, compliance and security architecture
- Azure AI / AWS ML / Kubernetes certifications
This role suits a technically hands-on architect who has delivered production AI platforms, not purely research or academic profiles.