Striving for excellence is in our DNA. Since 1993, we have been helping the world’s leading companies imagine, design, engineer, and deliver software and digital experiences that change the world. We are more than just specialists; we are experts.
Currently we are looking for a Lead Big Data Engineer for our Hartford, CT office to make the team even stronger.
Our ideal candidate is a subject matter expert across a range of Big Data technologies and data science. The role spans architecture analysis, design, development, and related responsibilities.
EPAM occupies a unique position in the digital services industry. Part agency, part consultancy, part innovation lab, EPAM is dedicated to helping global enterprises thrive in the digital age. We innovate breakthrough strategies, experiences and products designed to reinvent the relationship between companies and their customers. We believe in testing and measuring, in continuous improvement, in the relentless pursuit of results.
Responsibilities:
Develop proposals for the implementation and design of scalable Big Data architectures;
Participate in customer workshops and present proposed solutions;
Develop scalable, production-ready data integration and processing solutions;
Convert large volumes of structured and unstructured customer data;
Design, implement, and deploy high-performance, custom applications at scale on Hadoop;
Define and develop network infrastructure solutions that enable partners and clients to scale NoSQL and relational database architectures to meet growing demand and traffic;
Define common business and development processes, platform and tools usage for data acquisition, storage, transformation, and analysis;
Develop roadmaps and implementation strategy around data science initiatives including recommendation engines, predictive modeling, and machine learning;
Review and audit existing solutions, designs, and system architectures;
Profile and troubleshoot existing solutions;
Create technical documentation.
Requirements:
Strong knowledge of programming in Java and Python;
Experience with major Big Data technologies and frameworks including but not limited to Hadoop, MapReduce, HDFS, AWS S3, HBase, Apache Storm, Apache Flink, Elasticsearch, Kibana, Kafka, AWS EMR, and Spark;
Experience with Big Data solutions developed on large cloud computing infrastructures such as Amazon Web Services, Elastic MapReduce, or Pivotal Cloud Foundry;
Experience in client-driven large-scale implementation projects;
Data Science and Analytics experience is a plus (Machine Learning, Recommendation Engines, Search Personalization);
Experience with major data integration technologies and frameworks including but not limited to Informatica Integration Hub;
Technical team leadership and team management experience; deep understanding of Agile (Scrum) and RUP development processes;
Strong experience in application design, development, and maintenance;
Solid knowledge of design patterns and refactoring concepts;
Practical expertise in performance tuning and optimization, including bottleneck analysis;
Solid technical expertise and troubleshooting skills;
Expertise in Object-Oriented Analysis and Design.