Job Brief
This role is focused on ensuring the stability, reliability, and production readiness of data and machine learning solutions delivered to our client. You’ll be responsible for:
- Bringing models and data pipelines into production environments.
- Ensuring uptime, performance, and observability of data/BI/ML services.
- Supporting integration health across internal APIs, data platforms, and business-facing systems.
You will collaborate with data scientists, engineers, and data analysts to ensure our solutions are repeatable, stable, and scalable.
VentureDive Overview
Founded in 2012 by veteran technology entrepreneurs from MIT and Stanford, VentureDive is the fastest-growing technology company in the region, developing and investing in products and solutions that simplify and improve the lives of people worldwide. We aspire to create a technology organization and an entrepreneurial ecosystem in the region that is recognized as second to none in the world.
Key Responsibilities:
In this role, you will:
1. Productionize ML and Data Solutions
- Deploy ML models into production within data warehouse or streaming pipelines.
- Containerize services using Docker and manage environment consistency across dev, staging, and prod.
2. Ensure Data & Pipeline Reliability
- Build observability into ETL and ML pipelines using tools like Airflow, MLflow, and Prometheus/Grafana.
- Monitor model inference jobs, scheduled reports, and data syncing integrations for latency, failure, and anomalies.
3. Support Deployment & Infrastructure Standards
- Implement CI/CD using Git-based workflows.
- Contribute to the configuration and automation of infrastructure (IaC) where applicable.
- Align with the client’s platform engineering team on resource usage, architecture, and scale-out plans.
4. Integration Stability & API Monitoring
- Validate and monitor end-to-end integrations between ML/Analytics services, business APIs, and third-party systems.
- Work with engineering stakeholders to troubleshoot service-to-service failures, data loss, and schema drift.
Required Qualifications & Experience:
- Bachelor’s degree in Computer Science, Data Engineering, or a related technical field.
- 5+ years in software/data engineering with at least 2 years in MLOps, DataOps, or reliability engineering.
- Strong command of:
- Python (automation, orchestration, or monitoring scripts)
- SQL for pipeline validation
- Airflow, MLflow, or similar tools
- Docker for containerization
- Solid understanding of cloud data services (GCP, AWS) and deployment pipelines.
- Solid understanding of, or the ability to quickly gain confidence with, on-premises data centers (stack: Vertica/Talend/QlikView, to be migrated progressively to the cloud).
Bonus Skills:
- Experience with Kubernetes, Terraform, or IaC-based environments.
- Experience with Databricks.
- Familiarity with REST APIs for ML model serving or system-to-system integrations.
- Exposure to monitoring tools like Prometheus, Datadog, or Grafana.
What we look for beyond required skills
To thrive at VentureDive, you
…are intellectually smart and curious
…have passion for and take pride in your work
…deeply believe in VentureDive’s mission, vision, and values
…have a no-frills attitude
…are a collaborative team player
…are ethical and honest
Are you ready to put your ideas into products and solutions that will be used by millions?
You will find VentureDive to be a fast-paced, high-standards, fun, and rewarding place to work. Not only will your work reach millions of users worldwide, but you will also be rewarded with competitive salaries and benefits. If you think you have what it takes to be a VenDian, come join us ... we're having a ball!
#LI-Hybrid