Heitmeyer Consulting

Data Engineer

Heitmeyer Consulting has a banking client seeking a strong Data Engineer for its Chief Data Office to specialize in building and optimizing high-performance, real-time data pipelines. The role is central to leveraging Apache Kafka for event streaming and Apache Flink for complex, stateful stream processing and analytics. The ideal candidate will transform raw, high-velocity data into actionable, low-latency insights that drive core business functionality, working within the client's AWS-based data ecosystem, which uses S3 for storage. The role must be based in Dallas, TX or Tulsa, OK.
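
To give a concrete feel for the kind of pipeline this role builds, here is a minimal, illustrative sketch of a Flink job that consumes events from a Kafka topic and maintains per-key counts over one-minute windows. The broker address, topic name, and consumer group are hypothetical placeholders, the sketch assumes the flink-connector-kafka dependency is available, and a production job would write results to a sink such as S3 rather than printing them; this is not the client's actual codebase.

  import org.apache.flink.api.common.eventtime.WatermarkStrategy;
  import org.apache.flink.api.common.serialization.SimpleStringSchema;
  import org.apache.flink.api.common.typeinfo.Types;
  import org.apache.flink.api.java.tuple.Tuple2;
  import org.apache.flink.connector.kafka.source.KafkaSource;
  import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
  import org.apache.flink.streaming.api.datastream.DataStream;
  import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
  import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows;
  import org.apache.flink.streaming.api.windowing.time.Time;

  public class KafkaFlinkSketch {
      public static void main(String[] args) throws Exception {
          StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

          // Hypothetical Kafka source: broker, topic, and consumer group are placeholders.
          KafkaSource<String> source = KafkaSource.<String>builder()
                  .setBootstrapServers("broker:9092")
                  .setTopics("events")
                  .setGroupId("data-engineer-sketch")
                  .setStartingOffsets(OffsetsInitializer.latest())
                  .setValueOnlyDeserializer(new SimpleStringSchema())
                  .build();

          DataStream<String> events =
                  env.fromSource(source, WatermarkStrategy.noWatermarks(), "kafka-events");

          // Keyed, windowed aggregation: count occurrences of each value per one-minute window.
          DataStream<Tuple2<String, Long>> counts = events
                  .map(value -> Tuple2.of(value, 1L))
                  .returns(Types.TUPLE(Types.STRING, Types.LONG))
                  .keyBy(t -> t.f0)
                  .window(TumblingProcessingTimeWindows.of(Time.minutes(1)))
                  .sum(1);

          counts.print(); // A real job would write to S3, DynamoDB, or another downstream sink.
          env.execute("kafka-flink-sketch");
      }
  }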

Top Required Skills:

  1. Extensive experience as a Data Engineer building and maintaining production-grade data pipelines, with a focus on real-time systems.
  2. In-depth experience with Apache Kafka and hands-on experience with Apache Flink, including an understanding of state management, fault tolerance, and time semantics (see the sketch after this list).
  3. Significant cloud background with strong expertise in AWS data engineering services (S3, MSK, Glue, Athena, EMR, DynamoDB, etc.).
  4. Proficiency in programming/scripting languages such as Python, Java, or Scala.
  5. Familiarity with modern data stack and big data technologies (Spark, Airflow, Iceberg, SQL), along with an understanding of distributed systems and processing real-time data at scale.
  6. Excellent problem-solving skills and the ability to troubleshoot complex issues in distributed systems.
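
As a concrete illustration of the Flink concepts called out in item 2 (checkpoint-based fault tolerance and event-time semantics), the fragment below shows how a job might enable exactly-once checkpointing and assign watermarks for out-of-order events. The Event class, its timestampMillis field, the 60-second checkpoint interval, and the five-second out-of-orderness bound are illustrative assumptions, not details from the posting.

  import java.time.Duration;

  import org.apache.flink.api.common.eventtime.WatermarkStrategy;
  import org.apache.flink.streaming.api.CheckpointingMode;
  import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

  public class FlinkReliabilitySketch {

      // Hypothetical event type carrying its own event-time timestamp.
      public static class Event {
          public String key;
          public long timestampMillis;
      }

      public static void main(String[] args) {
          StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

          // Fault tolerance: snapshot operator state every 60 seconds with exactly-once semantics,
          // so a failed job can restart from the last checkpoint without double-counting.
          env.enableCheckpointing(60_000, CheckpointingMode.EXACTLY_ONCE);

          // Time semantics: treat the embedded timestamp as event time and tolerate
          // events arriving up to 5 seconds out of order before windows close.
          WatermarkStrategy<Event> watermarks = WatermarkStrategy
                  .<Event>forBoundedOutOfOrderness(Duration.ofSeconds(5))
                  .withTimestampAssigner((event, recordTimestamp) -> event.timestampMillis);

          // The strategy would be passed to env.fromSource(...) or applied via
          // stream.assignTimestampsAndWatermarks(watermarks) on an existing DataStream.
      }
  }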

Nice-to-have:

  1. Background in financial services is preferred but not required.

Heitmeyer Consulting is an equal opportunity employer, and we encourage all qualified candidates to apply. Qualified applicants will be considered without regard to minority status, gender, disability, veteran status or any other characteristic protected by law.

To Apply for this Job Click Here
