hadoop - HDFS: can I specify the replication factor per file to increase availability?
I'm a newbie with HDFS, sorry if the question is naive.
Suppose I store files in a Hadoop cluster. Some files are popular and requested more often than others (but not often enough to keep them in memory), so it may be worth keeping more copies (replicas) of those files.
Can I implement this in HDFS, or is there a best practice for tackling this task?
Yes, you can set the replication factor for the entire cluster, for a directory, or for a file individually.
You can change the replication factor (let's say to 3) on a per-file basis using the hadoop fs shell:
[sys@localhost ~]$ hadoop fs -setrep -w 3 /my/file
Alternatively, you can change the replication factor (again, say to 3) of all files under a directory:
[sys@localhost ~]$ hadoop fs -setrep -R -w 3 /my/dir
To change the replication of the entire HDFS to 1:
[sys@localhost ~]$ hadoop fs -setrep -R -w 1 /
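To confirm that a change took effect, you can query a file's current replication factor; a sketch, assuming a running cluster and the example path /my/file used above:

```shell
# Print the replication factor of a single file (%r is the stat format
# specifier for replication)
hadoop fs -stat '%r' /my/file

# Or inspect block-level health and replication details with fsck
hdfs fsck /my/file -files -blocks
```

Note that -setrep -w blocks until the target replication is actually reached, which can take a while on a busy cluster; without -w the command returns immediately and re-replication happens in the background.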
But the replication factor must lie between the dfs.replication.min and dfs.replication.max values configured for the cluster.
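For reference, those bounds, along with the cluster-wide default applied to newly created files, live in hdfs-site.xml; a minimal sketch with illustrative values (your cluster's settings may differ):

```xml
<configuration>
  <!-- Default replication factor for newly created files -->
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <!-- Maximal replication the NameNode will accept -->
  <property>
    <name>dfs.replication.max</name>
    <value>10</value>
  </property>
  <!-- Minimal block replication -->
  <property>
    <name>dfs.replication.min</name>
    <value>1</value>
  </property>
</configuration>
```

A setrep request outside the [dfs.replication.min, dfs.replication.max] range will be rejected by the NameNode.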