
Team Lead, Data Engineering

TripleLift

Data Science
Pune, Maharashtra, India
Posted on Apr 9, 2024

About TripleLift

We're TripleLift, an advertising platform on a mission to elevate digital advertising through beautiful creative, quality publishers, actionable data and smart targeting. Through over 1 trillion monthly ad transactions, we help publishers and platforms monetize their businesses. Our technology is where the world's leading brands find audiences across online video, connected television, display and native ads. Brand and enterprise customers choose us because of our innovative solutions, premium formats, and supportive experts dedicated to maximizing their performance.

As part of the Vista Equity Partners portfolio, we are NMSDC certified, qualify for diverse spending goals and are committed to economic inclusion. Find out how TripleLift raises up the programmatic ecosystem at triplelift.com.

The Team

The Data Platform Engineering (DPE) organization's mission is to ingest, process, store and surface the billions of events generated across our platform, and to make them available to our internal and external partners. We operate at a scale that peaks at over 200 billion events per day, and our platform needs to sustain that scale robustly while following SOLID principles and SDLC best practices.
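Purely as an illustration of this kind of stream-style event aggregation (the function and event names below are hypothetical, not TripleLift code; at this scale the work would run on frameworks such as Kafka Streams or Spark), a tumbling-window event count can be sketched in plain Python:

```python
from collections import Counter

def tumbling_window_counts(events, window_secs=60):
    """Count events per fixed (tumbling) time window.

    `events` is an iterable of (epoch_seconds, event_type) pairs --
    a toy stand-in for the ad events a stream processor would handle.
    Returns {(window_start, event_type): count}.
    """
    counts = Counter()
    for ts, event_type in events:
        window_start = ts - (ts % window_secs)  # bucket into fixed windows
        counts[(window_start, event_type)] += 1
    return dict(counts)

if __name__ == "__main__":
    sample = [
        (100, "impression"),
        (110, "impression"),
        (115, "click"),
        (130, "impression"),  # falls in the next 60-second window
    ]
    print(tumbling_window_counts(sample))
    # {(60, 'impression'): 2, (60, 'click'): 1, (120, 'impression'): 1}
```

A real implementation would additionally handle late-arriving events and out-of-order timestamps, which is precisely where the stream frameworks named above earn their keep.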

The Role

TripleLift is seeking a Senior Data Engineer to join the influential Data Engineering team within the Data Platform Engineering organization. This hire will be responsible for expanding and optimizing our distributed computing data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is a software engineer at heart and an experienced data pipeline builder who enjoys optimizing data infrastructure systems and building them from the ground up. The Senior Data Engineer will support our backend engineering teams, product managers, business intelligence analysts and data scientists on data initiatives, and will ensure that optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed, have a strong sense of ownership, and be comfortable supporting the data needs of multiple teams, systems and products. The right candidate will be excited by the prospect of optimizing, or even re-designing, our company’s data architecture to support our next generation of products and data initiatives.

Responsibilities

  • Create and maintain optimal data pipeline architecture.
  • Explore and assemble large, complex data sets that meet functional and non-functional business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build and maintain the data infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using Spark, EMR, S3, Kafka, Snowflake, Airflow, Druid, Kafka Streams and other big data technologies.
  • Work with stakeholders across different teams, including product managers, engineers and analysts, to assist with data-related technical issues and support their data infrastructure needs.
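As a minimal sketch of the extract/transform/load pattern the bullets above describe (every name here is hypothetical; in practice such steps would run on Spark/EMR and be orchestrated with Airflow, per the stack listed above), a miniature ETL flow in plain Python might look like:

```python
def extract(rows):
    """Extract: parse raw CSV-like records (this source format is invented for illustration)."""
    for row in rows:
        ts, placement_id, spend = row.split(",")
        yield {"ts": int(ts), "placement_id": placement_id, "spend": float(spend)}

def transform(records):
    """Transform: aggregate spend per placement."""
    totals = {}
    for rec in records:
        totals[rec["placement_id"]] = totals.get(rec["placement_id"], 0.0) + rec["spend"]
    return totals

def load(totals, sink):
    """Load: write aggregates to a sink (a plain dict here; S3 or Snowflake in a real pipeline)."""
    sink.update(totals)
    return sink

if __name__ == "__main__":
    raw = ["1000,pl_1,0.25", "1005,pl_2,0.10", "1010,pl_1,0.25"]
    warehouse = {}
    load(transform(extract(raw)), warehouse)
    print(warehouse)  # {'pl_1': 0.5, 'pl_2': 0.1}
```

Keeping extract, transform, and load as separate, composable steps is what lets an orchestrator schedule, retry, and monitor each stage independently.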

Basic Qualifications

  • Advanced experience with distributed processing frameworks such as Spark, Kafka, etc.
  • Advanced experience with a compiled language (Java, Scala, Go, etc.) and a scripting language, preferably Python.
  • Expert working experience with SQL.
  • Experience with cloud-based data technologies such as Snowflake, Databricks, Trino/Presto, or similar.
  • Experience with orchestration tools like Airflow, Luigi, etc.
  • Experience working with AWS services like S3, EC2, EMR, RDS, etc.
  • Strong working experience with relational databases as well as NoSQL databases.
  • Experience building and optimizing ‘big data’ pipelines, architectures and data sets.
  • Experience building robust data models that grow and evolve with the company’s business needs.
  • Experience supporting and working with cross-functional teams in a dynamic environment.

Preferred Qualifications

  • 5+ years of data engineering experience.
  • Bachelor’s degree in Computer Science, Computer Engineering, or Electrical Engineering, or equivalent work experience.
  • Experience with Elasticsearch or OpenSearch.
  • Experience with IaC tools like Terraform, etc.
  • Experience with CI/CD tools like GitHub Actions, etc.
  • Experience with Druid.
  • Experience with Kafka Streams, Flink, etc.


Life at TripleLift

At TripleLift, we’re a team of great people who like who they work with and want to make everyone around them better. This means being positive, collaborative, and compassionate. We hustle harder than the competition and are continuously innovating.

Learn more about TripleLift and our culture by visiting our LinkedIn Life page.

Diversity, Equity, Inclusion and Accessibility at TripleLift

At TripleLift, we believe in the power of diversity, equity, inclusion and accessibility. Our culture enables individuals to share their uniqueness and contribute as part of a team. With our DE&I initiatives, TripleLift is a place that works for you, and where you can feel a sense of belonging and support. At TripleLift, we will consider and champion all qualified applicants for employment without regard to race, creed, color, religion, national origin, sex, age, disability, sexual orientation, gender identity, gender expression, genetic predisposition, veteran, marital, or any other status protected by law. TripleLift is proud to be an equal opportunity employer.

Learn more about our DEI efforts at https://triplelift.com/diversity-equity-and-inclusion/

Privacy Policy

Please see our Privacy Policies on our TripleLift and 1plusX websites.

TripleLift does not accept unsolicited resumes from any type of recruitment search firm. Any resume submitted in the absence of a signed agreement will become the property of TripleLift and no fee shall be due.