Job Brief
We are seeking a skilled Senior Data Engineer to support the migration of existing Talend-based ETL pipelines to a modern data engineering ecosystem built on Python, dbt, Kafka, Apache NiFi, and orchestration tools such as Airflow or Dagster.
You will work closely with senior engineers to migrate, build, test, and maintain high-quality data pipelines across the organization. This role is ideal for professionals with strong hands-on data engineering skills, a collaborative mindset, and an eagerness to work with modern data stack technologies.
VentureDive Overview
Founded in 2012 by veteran technology entrepreneurs from MIT and Stanford, VentureDive is the fastest-growing technology company in the region, developing and investing in products and solutions that simplify and improve the lives of people worldwide. We aspire to create a technology organization and an entrepreneurial ecosystem in the region that is recognized as second to none in the world.
Key Responsibilities:
Pipeline Migration & Development
- Assist in re-engineering legacy Talend pipelines into Python, dbt, and Airflow/Dagster workflows.
- Ensure pipeline logic, data mappings, and tests are accurately replicated and validated.
- Support both legacy and new pipeline environments during the transition period.
Data Ingestion & Processing
- Develop and maintain data ingestion flows using Kafka, Apache NiFi, and REST APIs.
- Work with batch and streaming data across structured, semi-structured, and unstructured formats.
- Implement data validation, quality checks, schema enforcement, and row-level transformations.
Transformation & Modeling
- Contribute to dbt development (models, tests, documentation, snapshots).
- Maintain transformation logic for accuracy, maintainability, and lineage.
Monitoring & Maintenance
- Monitor daily ETL/ELT workflows for failures, bottlenecks, or data quality issues.
- Perform root-cause analysis and escalate complex issues when needed.
- Optimize performance across data ingestion, processing, and transformation layers.
Documentation & Collaboration
- Maintain well-structured documentation for pipeline logic, migration work, and data flows.
- Collaborate with senior engineers, QA, data analysts, architects, and platform teams.
- Participate in Agile ceremonies: stand-ups, planning, reviews, and retrospectives.
Required Experience & Qualifications:
- 4–6 years of experience in Data Engineering or ETL development.
- Strong SQL and Python skills (file processing, APIs, data manipulation).
- Experience with:
  - dbt (models, tests, documentation)
  - Airflow or Dagster (DAG creation, scheduling, monitoring)
  - Kafka and/or Apache NiFi for data ingestion
- Familiarity with modern data formats: Parquet, Delta Lake, Iceberg, JSON, CSV.
- Understanding of schema management, data validation, and distributed storage systems.
- Hands-on experience with Git and CI/CD workflows.
- Ability to work in Agile/Scrum teams with strong communication and collaboration skills.
Preferred Skills:
- Experience with cloud platforms (AWS, GCP, or Azure).
- Exposure to DataOps/MLOps workflows is a plus.
- Working knowledge of containerization (Docker, Kubernetes).
- Prior experience in data pipeline migration or refactoring projects.
What we look for beyond required skills
To thrive at VentureDive, you
…are intellectually smart and curious
…have passion for and take pride in your work
…deeply believe in VentureDive’s mission, vision, and values
…have a no-frills attitude
…are a collaborative team player
…are ethical and honest
Are you ready to put your ideas into products and solutions that will be used by millions?
You will find VentureDive to be a fast-paced, high-standards, fun, and rewarding place to work. Not only will your work reach millions of users worldwide, but you will also be rewarded with competitive salaries and benefits. If you think you have what it takes to be a VenDian, come join us ... we're having a ball!
#LI-Hybrid