How to contribute a limited/specific amount of storage as a slave to a Hadoop cluster

Task 4.1

The Hadoop cluster is set up on top of the AWS cloud. We configured a namenode (master) as well as a datanode (slave).

We created an external hard disk (an EBS volume) and attached it to the datanode (slave-1).
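If the volume is created from the AWS CLI instead of the web console, it would look roughly like this (the volume ID, instance ID, and availability zone below are placeholders, not the ones used in this setup):

# create a small EBS volume (size in GiB) in the same AZ as the datanode instance
aws ec2 create-volume --availability-zone ap-south-1a --size 10 --volume-type gp2
# attach it to the slave; /dev/sdf typically shows up inside the instance as /dev/xvdf
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device /dev/sdf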

Why did we do this?

To contribute only a limited amount of storage to the namenode (if you want to contribute more, you can create a larger hard disk). The data on this volume also persists after a reboot, a crash, or OS corruption of the instance.

After attaching it to the datanode, we have to partition the external hard disk and then format it. Run fdisk /dev/xvdf and press the following keys at the fdisk prompt (a scripted version of the same steps is sketched after the list):

n to create a new partition.

p to make it a primary partition.

w to write (save) the partition table and exit.
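If you prefer not to type the keys interactively, the same sequence can be piped into fdisk. This is only a sketch, assuming the volume shows up as /dev/xvdf; at the last-sector prompt a size such as +5G can be given instead of the default so that only that much space is contributed:

# n = new partition, p = primary, 1 = partition number,
# the two blank lines accept the default first and last sector (replace the second with e.g. +5G to limit the size),
# w = write the partition table and exit
printf "n\np\n1\n\n\nw\n" | fdisk /dev/xvdf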

Format the new partition (it appears as /dev/xvdf1):
mkfs.ext4 /dev/xvdf1
Create a folder to mount it on:
mkdir /datanode
Mount the partition on the folder:
mount /dev/xvdf1 /datanode
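
To make the mount come back automatically after a reboot as well, an entry can be added to /etc/fstab. A minimal sketch, assuming the ext4 partition created above:

# device       mount point  fstype  options          dump  pass
echo "/dev/xvdf1  /datanode  ext4  defaults,nofail  0  2" >> /etc/fstab
# nofail lets the instance boot even if the volume is not attached
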
Confirm that the partition is mounted:
df -h
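
For the datanode to contribute exactly this folder (and therefore only this partition's capacity) to the cluster, its hdfs-site.xml must point at the mount point. The snippet below is a sketch of the relevant property inside the <configuration> block, assuming /datanode is used as the data directory, as created above:

<property>
	<name>dfs.data.dir</name>
	<value>/datanode</value>
</property>
<!-- on Hadoop 2.x and later the property is named dfs.datanode.data.dir -->
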
Start the datanode:
hadoop-daemon.sh start datanode
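
A quick way to confirm the daemon actually came up on the slave is jps, which lists the running Java processes:

jps
# the output should include a DataNode process
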
Check the cluster details:
hadoop dfsadmin -report
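
In the report, the storage contributed by the slave shows up under the capacity fields. To look at just those lines, something like this can be used (the field name is the one Hadoop prints in the report):

hadoop dfsadmin -report | grep -i "configured capacity"
# the datanode entry should show roughly the size of the mounted partition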

GitHub

https://github.com/neerajsinghnegi/Arth.git
