Senior ML Engineer
Job Title: Senior ML Engineer – MLOps & Productionization
Level: Senior
Employment Type: Full-time
Experience: 7+ Years
Location: Islamabad
Role Overview
We are seeking a Senior ML Engineer to bridge the gap between experimental AI/ML prototypes and production-ready enterprise systems. As our AI/ML initiatives evolve from research to production, you will be responsible for operationalizing ML and LLM models, ensuring they run reliably at scale, and establishing robust engineering practices within our intelligence layer.
This role requires a hands-on engineer with deep Python expertise, strong MLOps experience, and the ability to deliver high-performance, containerized services that support large-scale streaming data environments.
Key Responsibilities
Production Transition & Microservices Development
Refactor Python-based ML models and LLM chains from experimental notebooks into production-ready, containerized microservices
Ensure robust versioning, logging, and monitoring of models in production
Collaborate with AI/ML researchers to translate R&D prototypes into scalable software
MLOps & Deployment Pipelines
Build automated CI/CD pipelines for model deployment, monitoring, and retraining
Implement model lifecycle management, including rollback strategies and reproducibility
Integrate ML models into high-velocity streaming pipelines with low-latency requirements
Performance, Reliability & Standards
Optimize ML inference logic for real-time and high-throughput environments
Define and enforce coding standards, testing frameworks, and best practices for the AI/ML engineering team
Ensure system stability, reliability, and observability across the intelligence layer
Technical Requirements
Advanced Python Expertise: beyond scripting, with hands-on experience in FastAPI, design patterns, testing frameworks, and writing clean, maintainable code
Containerization & Orchestration: Deep experience with Docker; familiarity with Kubernetes or equivalent orchestration for ML workloads
MLOps & Deployment: Proven experience with CI/CD, model versioning, automated retraining, and tools such as MLflow, Kubeflow, or similar
Production Operationalization: experience operationalizing ML/LLM models in production, with a strong focus on scalability and reliability
Nice to Have
Experience with cloud-native AI/ML platforms (AWS SageMaker, GCP Vertex AI, Azure ML)
Familiarity with streaming data platforms (Kafka, Pulsar, or similar)
Exposure to monitoring, logging, and alerting frameworks for ML systems
Knowledge of enterprise-grade security and compliance practices for AI
Soft Skills
Strong problem-solving and analytical thinking
Ability to translate research prototypes into robust engineering solutions
Excellent collaboration skills, capable of working across AI, data engineering, and DevOps teams
High ownership mentality with a focus on production stability and scalability
Why Join
Play a critical role in moving cutting-edge AI/ML into production
Shape the MLOps strategy and best practices for a growing intelligence platform
Work in a fast-moving, enterprise-scale AI/ML environment with real business impact