We are looking for a motivated Cloud Hadoop Admin who will be responsible for the implementation and ongoing administration of our Hadoop Big Data infrastructure. You will support, implement, and maintain Big Data infrastructure and own end-to-end Hadoop cluster administration.
Halo believes in innovation by inclusion to solve digital problems. As an international agency of over 200 people specializing in interactive media strategy and development, we embrace equity and empowerment in a serious way. Our interdisciplinary teams of unique designers, developers and entrepreneurial minds with a variety of backgrounds, viewpoints, and skills connect to solve business challenges of every shape and size. We empathize to form deep, meaningful relationships with our clients, so they can do the same with their audience. Working at Halo feels like belonging. Learn more about our philosophy, benefits, and team at http://halopowered.com/.
We think everyone deserves a chance, and we care about what you can do, not where you’ve been. It’s about weighing a candidate’s competencies over their credentials. We encourage talented humans of all backgrounds to apply. This includes people who are self-taught, fresh from certification bootcamps, have spent time in military service, or who have walked other unconventional paths to acquire the skills relevant to this role.
- Responsible for the implementation and ongoing administration of Hadoop infrastructure initiatives.
- Working with data delivery teams to set up new Hadoop users.
- Setting up Linux users and Kerberos principals, and testing HDFS, Hive, Kafka, Spark, and MapReduce access for new users.
- Cluster maintenance as well as creation and removal of nodes using Hadoop Management Admin tools.
- Sound knowledge of Ranger, NiFi, Kafka, Atlas, Hive, Storm, Pig, Spark, Elasticsearch, Solr, Kyvos, HBase, and other Big Data tools.
- Performance tuning of Hadoop clusters and Hadoop MapReduce routines.
- Screen Hadoop cluster job performance and perform capacity planning.
- Monitor Hadoop cluster connectivity and security
- Manage and review Hadoop log files; perform file system management and monitoring.
- HDFS support and maintenance
- Should be able to deploy a Hadoop cluster running in the cloud (MS Azure/AWS), add and remove nodes, keep track of jobs, monitor critical parts of the cluster, configure NameNode high availability, schedule and configure the cluster, and take backups.
- Must know Hadoop and Big Data infrastructure
- Expert in Hadoop administration with knowledge of Hortonworks, Cloudera, or MapR Big Data management tools.
- Expert in implementing and troubleshooting Hive, Spark, Pig, Storm, Kafka, Nifi, Atlas, Kyvos, Elasticsearch, Solr, Splunk, HBase applications.
- Experience with cloud platforms such as Azure and AWS.
- Strong understanding of Linux environments.
What we offer:
- 100% remote work!
- Salary in USD paid directly in a US Bank Account!
- A signing loan, so you can start working with a Mac computer!
- Up to 3 weeks of Paid Time Off!