Hadoop's HA (High Availability) architecture removes the NameNode single point of failure: two NameNodes are grouped under one nameservice, one in the active state and the other in standby. YARN HA likewise needs two ResourceManager machines, one active and one standby.
7 virtual machines:
hadoopNode01  192.168.9.11  namenode, zkfc                                    memory 1.2G
hadoopNode02  192.168.9.12  namenode, zkfc                                    memory 1.2G
hadoopNode03  192.168.9.13  resourcemanager                                   memory 1G
hadoopNode04  192.168.9.14  resourcemanager                                   memory 1G
hadoopNode05  192.168.9.15  datanode, nodemanager, journalnode, zookeeper     memory 1.1G
hadoopNode06  192.168.9.16  datanode, nodemanager, journalnode, zookeeper     memory 1.1G
hadoopNode07  192.168.9.17  datanode, nodemanager, journalnode, zookeeper     memory 1.1G
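All of the hostnames above must be resolvable from every node. Assuming name resolution is handled through /etc/hosts rather than DNS, each machine would carry entries like these:
192.168.9.11 hadoopNode01
192.168.9.12 hadoopNode02
192.168.9.13 hadoopNode03
192.168.9.14 hadoopNode04
192.168.9.15 hadoopNode05
192.168.9.16 hadoopNode06
192.168.9.17 hadoopNode07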
On hadoopNode05, vi conf/zoo.cfg with the following content:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
#dataDir=/tmp/zookeeper
dataDir=/home/hadoop/app/zookeeper-3.4.12/data
dataLogDir=/home/hadoop/app/zookeeper-3.4.12/dataLog
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=hadoopNode05:2888:3888
server.2=hadoopNode06:2888:3888
server.3=hadoopNode07:2888:3888
cd /home/hadoop/app/zookeeper-3.4.12
mkdir data dataLog
cd data
echo 1 >myid
ZooKeeper installation on hadoopNode06 and hadoopNode07 is the same (only the id stored in the myid file differs):
On hadoopNode06: echo 2 > myid
On hadoopNode07: echo 3 > myid
(Note the space before >: echo 2>myid would redirect file descriptor 2 and leave myid empty.)
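As a quick sanity check, each of the three ZooKeeper nodes can be started and queried; one should report Mode: leader and the other two Mode: follower:
cd /home/hadoop/app/zookeeper-3.4.12
bin/zkServer.sh start
bin/zkServer.sh status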
(2) Modify core-site.xml
<configuration>
    <!-- Set the HDFS nameservice to ns1 -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://ns1/</value>
    </property>
    <!-- Hadoop temporary directory -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/app/hadoop-2.7.6/tmp</value>
    </property>
    <!-- ZooKeeper quorum address -->
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>hadoopNode05:2181,hadoopNode06:2181,hadoopNode07:2181</value>
    </property>
</configuration>

(3) Modify hdfs-site.xml
<configuration>
    <!-- Number of HDFS replicas (the default is 3) -->
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <!-- HDFS nameservice ns1; must match core-site.xml -->
    <property>
        <name>dfs.nameservices</name>
        <value>ns1</value>
    </property>
    <!-- ns1 has two NameNodes: nn1 and nn2 -->
    <property>
        <name>dfs.ha.namenodes.ns1</name>
        <value>nn1,nn2</value>
    </property>
    <!-- RPC address of nn1 -->
    <property>
        <name>dfs.namenode.rpc-address.ns1.nn1</name>
        <value>hadoopNode01:9000</value>
    </property>
    <!-- HTTP address of nn1 -->
    <property>
        <name>dfs.namenode.http-address.ns1.nn1</name>
        <value>hadoopNode01:50070</value>
    </property>
    <!-- RPC address of nn2 -->
    <property>
        <name>dfs.namenode.rpc-address.ns1.nn2</name>
        <value>hadoopNode02:9000</value>
    </property>
    <!-- HTTP address of nn2 -->
    <property>
        <name>dfs.namenode.http-address.ns1.nn2</name>
        <value>hadoopNode02:50070</value>
    </property>
    <!-- Where the NameNode shares its edit log on the JournalNodes -->
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://hadoopNode05:8485;hadoopNode06:8485;hadoopNode07:8485/ns1</value>
    </property>
    <!-- Local directory where each JournalNode stores its data -->
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/home/hadoop/app/hadoop-2.7.6/journaldata</value>
    </property>
    <!-- Enable automatic NameNode failover -->
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    <!-- Proxy provider HDFS clients use to locate the active NameNode -->
    <property>
        <name>dfs.client.failover.proxy.provider.ns1</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <!-- Fencing methods; multiple methods are separated by newlines, one per line -->
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence
shell(/bin/true)
        </value>
    </property>
    <!-- sshfence requires passwordless SSH; path to the private key -->
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/home/hadoop/.ssh/id_rsa</value>
    </property>
    <!-- sshfence connection timeout (ms) -->
    <property>
        <name>dfs.ha.fencing.ssh.connect-timeout</name>
        <value>30000</value>
    </property>
    <!-- DataNode dead-node timeout of 14 seconds (see the note below) -->
    <property>
        <name>dfs.namenode.heartbeat.recheck-interval</name>
        <value>2000</value>
    </property>
    <property>
        <name>dfs.heartbeat.interval</name>
        <value>1</value>
    </property>
    <!-- Block reports every 10 seconds, so over-replicated blocks are deleted quickly -->
    <property>
        <name>dfs.blockreport.intervalMsec</name>
        <value>10000</value>
    </property>
</configuration>
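For reference, the 14-second figure comes from the NameNode's dead-node formula: timeout = 2 × dfs.namenode.heartbeat.recheck-interval + 10 × dfs.heartbeat.interval = 2 × 2000 ms + 10 × 1 s = 14 s. Once the configuration has been distributed to all nodes (step 8), it can be spot-checked from any of them, for example:
hdfs getconf -confKey dfs.nameservices
hdfs getconf -namenodes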
(4) Modify mapred-site.xml
vi mapred-site.xml and modify as follows:
<configuration>
    <!-- Run MapReduce on YARN -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
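With mapreduce.framework.name set to yarn, jobs are submitted to the ResourceManager instead of running locally. After the whole cluster is up (end of this guide), a quick way to exercise this path is the example jar shipped with Hadoop, for example:
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.6.jar pi 2 5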
(5) Modify yarn-site.xml
vi yarn-site.xml and modify as follows:
<configuration>
    <!-- Enable ResourceManager HA -->
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>
    <!-- Cluster id for the RM pair -->
    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>yrc</value>
    </property>
    <!-- Logical names of the two RMs -->
    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
    </property>
    <!-- Hosts of the two RMs -->
    <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>hadoopNode03</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>hadoopNode04</value>
    </property>
    <!-- ZooKeeper ensemble address -->
    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>hadoopNode05:2181,hadoopNode06:2181,hadoopNode07:2181</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>

(6) Modify the slaves file (slaves are the DataNodes for HDFS and the NodeManagers for YARN)
hadoopNode05
hadoopNode06
hadoopNode07
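After HDFS and YARN are started, these three nodes should show up as live DataNodes and NodeManagers; this can be confirmed, for example, with:
hdfs dfsadmin -report
yarn node -list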
(7) Configure passwordless SSH among all 7 machines
Run the following on each of the 7 machines:
ssh-keygen -t rsa
ssh-copy-id hadoopNode01
ssh-copy-id hadoopNode02
ssh-copy-id hadoopNode03
ssh-copy-id hadoopNode04
ssh-copy-id hadoopNode05
ssh-copy-id hadoopNode06
ssh-copy-id hadoopNode07
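To confirm passwordless login works, run a remote command from any node; it should print the remote hostname without prompting for a password:
ssh hadoopNode02 hostname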
(8) Distribute the installation with scp and finish the remaining configuration
On hadoopNode01, first create the tmp directory referenced by hadoop.tmp.dir in core-site.xml:
cd app/hadoop-2.7.6/
mkdir tmp
Once the Hadoop configuration is complete, copy hadoop-2.7.6 from hadoopNode01 to the other 6 machines:
scp -r hadoop-2.7.6 hadoopNode02:/home/hadoop/app/
scp -r hadoop-2.7.6 hadoopNode03:/home/hadoop/app/
scp -r hadoop-2.7.6 hadoopNode04:/home/hadoop/app/
scp -r hadoop-2.7.6 hadoopNode05:/home/hadoop/app/
scp -r hadoop-2.7.6 hadoopNode06:/home/hadoop/app/
scp -r hadoop-2.7.6 hadoopNode07:/home/hadoop/app/
Configure the environment variables on each of the 7 machines:
su root
vi /etc/profile
JAVA_HOME=/home/hadoop/app/jdk1.7.0_80
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
HADOOP_HOME=/home/hadoop/app/hadoop-2.7.6
PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export JAVA_HOME
export HADOOP_HOME
export CLASSPATH
export PATH

Save the file, then reload it:
source /etc/profile
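To confirm the variables took effect, the commands below should print the Hadoop 2.7.6 and JDK 1.7 versions on each machine:
hadoop version
java -version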
Copy the dfs directory produced by formatting the NameNode on hadoopNode01 to the other NameNode (hadoopNode02):
scp -r tmp/dfs hadoopNode02:/home/hadoop/app/hadoop-2.7.6/tmp/
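Once all daemons are running, the active/standby roles can be checked from the command line; killing the active NameNode or ResourceManager should then make the standby take over automatically via ZooKeeper:
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2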