Lead Big Data & Cloud Operations Engineer

Austin, TX, USA

Striving for excellence is in our DNA. Since 1993, we have been helping the world's leading companies imagine, design, engineer, and deliver software and digital experiences that change the world. We are more than just specialists; we are experts.

DESCRIPTION
We are currently looking for a Lead Big Data & Cloud Operations Engineer for our Austin office to make the team even stronger.

The ideal candidate has a background in supporting on-premises and cloud data warehouse platforms, as well as the associated tools and frameworks for ETL processing, job scheduling, code deployment, end-user analytics, and process automation. We are looking for highly skilled, passionate individuals who are quick learners, excited about new technologies, and ready to support, maintain, and implement solutions on our Hadoop and cloud platforms.

Responsibilities

  • Lead the monitoring and troubleshooting of jobs, capacity, availability, and performance problems;
  • Support platforms and tools for analytics on Hadoop and in AWS (Druid, HUE);
  • Work closely with infrastructure and development teams on capacity planning;
  • Work with infrastructure teams on evaluating new types of hardware and software to improve system performance and capacity;
  • Partner with developers on evaluating or developing new tools to simplify and speed up data pipeline development and delivery;
  • Help automate and support code deployment processes;
  • Provide timely communication to stakeholders and users on issue status and resolution;
  • Work closely with offshore operations team by delegating routine and project tasks.

Requirements

  • Bachelor's Degree in computer science, computer engineering, or a related field;
  • 5+ years of experience in a Big Data operations engineer role at a large organization;
  • 2+ years of experience leading junior operations engineers or contractors;
  • Excellent written and verbal communication skills;
  • Ability to multitask;
  • Highly agile, with the ability to learn quickly;
  • Experience in monitoring large-scale Big Data ETL processes on Hadoop and in the cloud (AWS);
  • Expert in Hadoop (HDFS, Hive, HBase, HUE, Sqoop, Spark, Oozie, Cassandra);
  • Expert in AWS (Redshift, Druid, S3);
  • Experience in writing scripts for operations and monitoring (Python is preferred);
  • Ability to thrive in a fast-paced environment and a passion to make a difference.

Why EPAM?

EPAMers appreciate our flexible work environment, great benefits, and opportunities to thrive. 

Life@EPAM

Take a sneak peek at our life in and out of the office. We're more than teammates – we're a community of friends.