Hadoop HA cluster setup (5 nodes)

  

yum install ntpdate lrzsz -y

  

systemctl stop firewalld
systemctl disable firewalld
setenforce 0

  

mkdir /home/myapps && cd /home/myapps

  

Hosts file configuration:

```
cat >> /etc/hosts << EOF
192.168.163.129 node1
192.168.163.131 node2
192.168.163.132 node3
192.168.163.133 node4
192.168.163.128 node5
EOF
```

  

Time synchronization configuration:

/usr/sbin/ntpdate ntp1.aliyun.com
crontab -e

```
30 */1 * * * /usr/sbin/ntpdate ntp1.aliyun.com
```

  

SSH mutual-trust configuration: ssh-keygen

```
for ip in 132 129 131 133 128; do ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.163.$ip; done
```
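All five hosts also need the identical /etc/hosts file prepared earlier; once key trust is in place it can be pushed out with scp. A minimal dry-run sketch (it only prints the commands; remove the `echo` to actually copy the file):

```shell
# Dry-run: print one scp command per remote node.
# Remove "echo" to actually push /etc/hosts (requires the key trust set up above).
for h in node2 node3 node4 node5; do
  echo scp /etc/hosts "root@$h:/etc/hosts"
done
```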

  

Java environment variable configuration:

vim /etc/profile

```
export JAVA_HOME=/home/myapps/java
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
```

source /etc/profile
java -version

  

ZooKeeper cluster setup:
vim /etc/profile

```
export ZOOKEEPER_HOME=/home/myapps/zookeeper-3.4.11
export PATH=$PATH:$ZOOKEEPER_HOME/bin
```
  

cd /home/myapps/zookeeper-3.4.11/

mkdir data logs
vim conf/zoo.cfg

  
```
dataDir=/home/myapps/zookeeper-3.4.11/data
dataLogDir=/home/myapps/zookeeper-3.4.11/logs
server.1=node1:2888:3888
server.2=node2:2888:3888
server.3=node3:2888:3888
```
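The lines above are only the entries to add or change; a working zoo.cfg also needs the standard timing and client-port settings. A complete minimal example (the first four values are the ZooKeeper defaults from zoo_sample.cfg, adjust as needed):

```
tickTime=2000
initLimit=10
syncLimit=5
clientPort=2181
dataDir=/home/myapps/zookeeper-3.4.11/data
dataLogDir=/home/myapps/zookeeper-3.4.11/logs
server.1=node1:2888:3888
server.2=node2:2888:3888
server.3=node3:2888:3888
```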
  
Push the modified configuration to the other nodes:

```
scp -r zookeeper-3.4.11 root@node2:/home/myapps/
scp -r zookeeper-3.4.11 root@node3:/home/myapps/
```
  

Set myid:
Under the directory specified by dataDir, create a myid file containing a single number that identifies the current host:
[root@node1 zookeeper-3.4.11]# echo "1" > /home/myapps/zookeeper-3.4.11/data/myid
[root@node2 zookeeper-3.4.11]# echo "2" > /home/myapps/zookeeper-3.4.11/data/myid
[root@node3 zookeeper-3.4.11]# echo "3" > /home/myapps/zookeeper-3.4.11/data/myid
Start the cluster:
[root@node1 zookeeper-3.4.11]# /home/myapps/zookeeper-3.4.11/bin/zkServer.sh start
[root@node2 zookeeper-3.4.11]# /home/myapps/zookeeper-3.4.11/bin/zkServer.sh start
[root@node3 zookeeper-3.4.11]# /home/myapps/zookeeper-3.4.11/bin/zkServer.sh start
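After all three have started, each node's role can be verified with `zkServer.sh status`; one node should report itself leader and the other two follower. A sketch of the expected transcript (paths per this install; exact banner lines may differ by version):

```
[root@node1 zookeeper-3.4.11]# bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /home/myapps/zookeeper-3.4.11/bin/../conf/zoo.cfg
Mode: follower
```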

  

Hadoop cluster setup:
cd /home/myapps/hadoop/hadoop-2.7.5/etc/hadoop

vim hadoop-env.sh (line 25):

```
export JAVA_HOME=/home/myapps/java
```

vim yarn-env.sh (line 23):

```
export JAVA_HOME=/home/myapps/java
```
  

vim hdfs-site.xml

```
<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn1</name>
    <value>node1:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn2</name>
    <value>node2:8020</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.nn1</name>
    <value>node1:50070</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.nn2</name>
    <value>node2:50070</value>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://node3:8485;node4:8485;node5:8485/mycluster</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/root/.ssh/id_rsa</value>
  </property>
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/home/myapps/hadoop/node/local/data</value>
  </property>
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
</configuration>
```
vim core-site.xml

```
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://mycluster</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
```
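With dfs.ha.automatic-failover.enabled set to true, core-site.xml must also point the failover controllers at the ZooKeeper ensemble via ha.zookeeper.quorum. A sketch matching the three ZooKeeper nodes set up above:

```
<property>
  <name>ha.zookeeper.quorum</name>
  <value>node1:2181,node2:2181,node3:2181</value>
</property>
```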
