Why are blocks in HDFS huge?

1 Answer

By default, the size of an HDFS data block is 128 MB (set by the dfs.blocksize property). The reasons for choosing such a large block size are listed below, followed by a short code sketch:

  • To reduce the cost of seeks: when blocks are large, the time to transfer a block's data from disk is much longer than the time to seek to the start of the block. A large file made up of many such blocks is therefore read at close to the sustained disk transfer rate, rather than being dominated by seek time.
  • To limit metadata: if blocks were small, each file would be split into far more blocks, and the NameNode would have to keep metadata for every one of them in memory. Managing such a vast number of blocks creates overhead on the NameNode and leads to extra traffic on the network.
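For illustration, here is a minimal Java sketch of how the block size appears in the HDFS client API. It assumes a reachable HDFS cluster, the standard Hadoop client libraries on the classpath, and a hypothetical path /tmp/blocksize-demo.txt; it prints the configured default block size and then creates a file with an explicit 256 MB block size:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FSDataOutputStream;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class BlockSizeDemo {
      public static void main(String[] args) throws Exception {
          // Picks up core-site.xml / hdfs-site.xml from the classpath
          Configuration conf = new Configuration();
          FileSystem fs = FileSystem.get(conf);

          Path path = new Path("/tmp/blocksize-demo.txt"); // hypothetical path
          System.out.println("Default block size: "
                  + fs.getDefaultBlockSize(path) + " bytes");

          // Create a file with a 256 MB block size instead of the configured default
          long customBlockSize = 256L * 1024 * 1024;
          try (FSDataOutputStream out =
                  fs.create(path, true, 4096, (short) 3, customBlockSize)) {
              out.writeUTF("hello");
          }
          System.out.println("Block size of new file: "
                  + fs.getFileStatus(path).getBlockSize() + " bytes");
      }
  }

The default comes from dfs.blocksize in hdfs-site.xml (134217728 bytes, i.e. 128 MB); setting a larger value per file, as above, simply means fewer blocks and therefore less NameNode metadata for the same amount of data.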
