We are seeking a candidate with proven experience working as a Data Engineer, Full Stack Software Engineer with a Data Focus, or similar role. The ideal candidate is comfortable in DevOps and Software Engineering.


  • Build Java Kafka Streams (KStreams) applications to create data pipelines in Kafka, using Kafka Connectors to ingest/sink data (Apache Debezium, Snowflake Sink, etc.)
  • Build Python micro-batching applications that consume data from, and produce data to, Kafka topics in the Avro format
  • Develop and evolve both Avro schemas (AVSC) in the schema registry and relational database schemas in Postgres and Snowflake to manage our data as it moves through the platform
  • Participate in the design and implementation of best practices in our Kafka cluster(s), applications, and other data infrastructure
  • Maintain, automate, and optimize reporting infrastructure, including cleaning, enriching, and restructuring datasets
  • Bachelor's degree in Computer Science or a related field required
  • 5+ years of experience working on data warehousing, data systems, machine learning, or big data problems
  • Analytical and Business Intelligence/visualization skills are a plus
  • 5+ years of Data Engineering or Software Engineering experience using Python, Scala, Java, etc.
  • Fluent in Python; Java KStreams application experience is a plus
  • Fluent in advanced SQL (DDL & DML), with data modeling, indexing, and database tuning experience on data warehouses, data lakes, or other distributed data systems; Postgres, Redshift, Snowflake, and Avro experience is a plus
  • Strong communication skills and experience communicating across business units
  • Experience working with Data Scientists and Data Analysts is a plus
  • Ability to deliver quick solutions to problems in a changing environment, iterating toward optimal solutions
  • Experience working on Scrum/Agile teams
Location: New York