Job summary

Location:
Metro Detroit, United States, North America
Career Level:
Senior (5+ years of experience)
Education:
Bachelor's Degree
Job type:
Full time
Positions:
1
Salary:
$90K - $110K+
Apply before:
03 Nov, 2017

Hadoop Big Data Engineer


Our client is a fast-growing private health care provider based in Michigan, specializing in Medicaid, Medicare, and pharmacy benefit management and serving multiple states. The company has an immediate opening for a Hadoop Big Data Engineer, among other BI and reporting roles. This position offers an exciting opportunity to build out a leading-edge Hadoop and big data environment. The goal is to leverage modern and emerging platforms to house the company’s data for the organization’s many reporting and analytics needs. In this role you will be at the forefront of building out the environment, inventing new processes, configuring and developing components where necessary, and being an all-around Hadoop generalist. Full-time, permanent hire.

The following are high-level requirements for the role:

  • 2+ years of experience with Hadoop – Cloudera, Scala, Spark, Flume & Kafka
  • Extensive Background with Data Design & Database Development
  • Must be well versed in Big Data Strategies in Enterprise Environments (Healthcare Preferred)

Responsibilities:

  • Install and configure Hadoop components and related utilities
  • Develop processes for source data ingestion, transformation, and database loading
  • Develop processes for data quality monitoring
  • Develop processes to support “Data as a Service (DaaS)”
  • File system management, monitoring, support, and maintenance for HDFS, Kudu, and HBase
  • Create scalable and high-performance web services for data tracking
  • Assist with data lake folder design
  • Recommend and establish security policies and procedures for the Hadoop environment
  • Assist in the development and implementation of various strategic initiatives
  • Contribute to the development of architecture policies, standards, and governance for the Hadoop and big data environment
  • Conduct research and development with promising leading-edge big data technologies
  • Participate in data architecture design and review processes, including planning and monitoring efforts, reviewing deliverables, and communicating to management
  • Respond to change and engage in multiple projects simultaneously
  • Work with minimal guidance; seek guidance on only the most complex tasks

Experience

  • 7+ years of experience as an IT professional
  • 2+ years working with Hadoop (Cloudera, Scala, etc.)
  • 3+ years working with data design or database development
  • Bachelor's degree in a related field (required)
  • Experience with reporting tools such as Tableau, QlikView, Datameer, etc. (preferred)
  • Prior experience in a complex, highly integrated services environment
  • Working knowledge of Red Hat Linux
  • Good aptitude in multi-threading and concurrency concepts
  • Understanding of and experience developing in Hadoop
  • Working knowledge of Kafka, Flume, Hive, Spark, Impala, Sqoop, Oozie, HBase, ZooKeeper, and Hue
  • Expert level SQL knowledge and experience with a relational database
  • Working knowledge of Pig Latin, HiveQL, Python or Java
  • Substantial understanding of reporting and analytics tools
  • Experience working with data lakes
  • Pre- and post-installation of Hadoop software and a good understanding of Hadoop ecosystem dependencies
  • Implementing data ingress and egress: facilitating generic input/output and moving bulk data into and out of Hadoop
  • Expertise in setting up, configuring, and managing data security
  • Ongoing support for various Hadoop environments: DEMO, TEST, UAT, and PROD

Job keywords/tags: Hadoop, Big Data, Enterprise Environment, Scala, Java, Python, Spark, Flume, Kafka, Data Design, Database Development