Overview of HBase Installation Modes

Standalone mode
1> HBase does not use HDFS; it uses the local filesystem only
2> ZooKeeper and HBase run in the same JVM

Distributed mode
– Pseudo-distributed mode
1> All processes run on a single node, but each in its own JVM
2> Well suited to experimentation and testing
– Fully distributed mode
1> Processes run across a cluster of servers
2> Distributed mode depends on HDFS, so a working HDFS cluster must be in place before HBase is deployed

Preparing the Linux Environment

Disable the firewall and SELinux

# service iptables stop

# chkconfig iptables off

# vim /etc/sysconfig/selinux

SELINUX=disabled
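The edit to /etc/sysconfig/selinux only takes effect after a reboot. A sketch of also switching SELinux off for the running system (CentOS 6 style, assuming SELinux is currently enforcing):

```shell
# Apply immediately, without waiting for a reboot:
setenforce 0    # put SELinux into permissive mode for the running kernel
getenforce      # confirm the current mode
```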

Configure the hostname and hostname resolution

# vim /etc/sysconfig/network

NETWORKING=yes
HOSTNAME=hbase

# vim /etc/hosts

192.168.244.30 hbase
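The settings in /etc/sysconfig/network are only read at boot; to pick up the new hostname immediately and confirm it resolves, something like the following can be used (a sketch for CentOS 6):

```shell
hostname hbase          # set the hostname for the running system
getent hosts hbase      # should resolve to 192.168.244.30 via /etc/hosts
```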

Passwordless SSH login

# ssh-keygen

Press Enter at each prompt

# ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.244.30
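As a quick check that passwordless login works (a sketch; the address matches the host configured above):

```shell
# Should print the remote hostname without prompting for a password:
ssh root@192.168.244.30 hostname
```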

Install the JDK

# tar xvf jdk-7u79-linux-x64.tar.gz -C /usr/local/

# vim /etc/profile

export JAVA_HOME=/usr/local/jdk1.7.0_79
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

# source /etc/profile
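To confirm the JDK environment took effect in the current shell (a sketch):

```shell
echo $JAVA_HOME     # should print /usr/local/jdk1.7.0_79
java -version       # should report java version "1.7.0_79"
```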

Installing and Deploying Hadoop

Download page: http://hadoop.apache.org/releases.html

Version 2.5.2 is used here.

# wget http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-2.5.2/hadoop-2.5.2.tar.gz

# tar xvf hadoop-2.5.2.tar.gz -C /usr/local/

# cd /usr/local/hadoop-2.5.2/

# vim etc/hadoop/hadoop-env.sh

export JAVA_HOME=/usr/local/jdk1.7.0_79

Configure HDFS

# mkdir /usr/local/hadoop-2.5.2/data/

# vim etc/hadoop/core-site.xml

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://192.168.244.30:8020</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/hadoop-2.5.2/data/tmp</value>
    </property>
</configuration>

# vim etc/hadoop/hdfs-site.xml

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>

Configure YARN

# mv etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml

# vim etc/hadoop/mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

# vim etc/hadoop/yarn-site.xml

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>

Start HDFS

Format the filesystem

# bin/hdfs namenode -format

...
16/09/25 20:33:02 INFO common.Storage: Storage directory /tmp/hadoop-root/dfs/name has been successfully formatted.
...

Output like the above indicates that the filesystem was formatted successfully.

Start the NameNode and DataNode processes

# sbin/start-dfs.sh

// :: WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [hbase]
The authenticity of host 'hbase (192.168.244.30)' can't be established.
RSA key fingerprint is 1a::f5:e3:5d:e1:2c:5c:8c:::ba::1c:ac:ba.
Are you sure you want to continue connecting (yes/no)? yes
hbase: Warning: Permanently added 'hbase' (RSA) to the list of known hosts.
hbase: starting namenode, logging to /usr/local/hadoop-2.5.2/logs/hadoop-root-namenode-hbase.out
The authenticity of host 'localhost (::1)' can't be established.
RSA key fingerprint is 1a::f5:e3:5d:e1:2c:5c:8c:::ba::1c:ac:ba.
Are you sure you want to continue connecting (yes/no)? yes
localhost: Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
localhost: starting datanode, logging to /usr/local/hadoop-2.5.2/logs/hadoop-root-datanode-hbase.out
Starting secondary namenodes [0.0.0.0]
The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
RSA key fingerprint is 1a::f5:e3:5d:e1:2c:5c:8c:::ba::1c:ac:ba.
Are you sure you want to continue connecting (yes/no)? yes
0.0.0.0: Warning: Permanently added '0.0.0.0' (RSA) to the list of known hosts.
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop-2.5.2/logs/hadoop-root-secondarynamenode-hbase.out
// :: WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
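Before moving on, a quick HDFS smoke test can confirm the filesystem is actually usable (a sketch; the /test path is arbitrary):

```shell
bin/hdfs dfs -mkdir -p /test    # create a directory in HDFS
bin/hdfs dfs -ls /              # list the root; /test should appear
```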

Start YARN

# sbin/start-yarn.sh

starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop-2.5.2/logs/yarn-root-resourcemanager-hbase.out
localhost: starting nodemanager, logging to /usr/local/hadoop-2.5.2/logs/yarn-root-nodemanager-hbase.out

Use jps to check whether each process started successfully

# jps

NodeManager
ResourceManager
NameNode
DataNode
SecondaryNameNode
Jps

You can also visit http://192.168.244.30:50070/ to check whether HDFS came up successfully.
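The jps check above can also be scripted. A minimal sketch (the function name check_daemons is made up for illustration) that reports any expected daemon missing from jps output:

```shell
# Report any expected pseudo-distributed daemon missing from jps output.
check_daemons() {
  jps_output=$1
  for proc in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
    # grep -w avoids matching NameNode inside SecondaryNameNode
    echo "$jps_output" | grep -qw "$proc" || echo "missing: $proc"
  done
}

# usage: check_daemons "$(jps)"
```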

Pseudo-Distributed Installation of HBase

Installing and Deploying HBase

Download link: http://mirror.bit.edu.cn/apache/hbase/1.2.3/hbase-1.2.3-bin.tar.gz

Version 1.2.3 is used here.

For the mapping between HBase and Hadoop versions, see the official documentation:

http://hbase.apache.org/book/configuration.html#basic.prerequisites

# wget http://mirror.bit.edu.cn/apache/hbase/1.2.3/hbase-1.2.3-bin.tar.gz

# tar xvf hbase-1.2.3-bin.tar.gz -C /usr/local/

# cd /usr/local/hbase-1.2.3/

# vim conf/hbase-env.sh

export JAVA_HOME=/usr/local/jdk1.7.0_79

Configure HBase

# mkdir /usr/local/hbase-1.2.3/data

# vim conf/hbase-site.xml

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://192.168.244.30:8020/hbase</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/usr/local/hbase-1.2.3/data/zookeeper</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
</configuration>
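One pitfall worth checking: the scheme://host:port prefix of hbase.rootdir must match fs.defaultFS in Hadoop's core-site.xml, or HBase will not be able to find HDFS. A small sketch of that comparison (the function name check_rootdir is made up for illustration):

```shell
# Compare hbase.rootdir against fs.defaultFS; the former must live under the latter.
check_rootdir() {
  fs_default=$1    # value of fs.defaultFS from core-site.xml
  hbase_root=$2    # value of hbase.rootdir from hbase-site.xml
  case "$hbase_root" in
    "$fs_default"/*) echo "OK: $hbase_root is under $fs_default" ;;
    *)               echo "Mismatch: fs.defaultFS=$fs_default vs hbase.rootdir=$hbase_root" ;;
  esac
}

check_rootdir "hdfs://192.168.244.30:8020" "hdfs://192.168.244.30:8020/hbase"
```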

# vim conf/regionservers

192.168.244.30

Start HBase

# bin/hbase-daemon.sh start zookeeper

starting zookeeper, logging to /usr/local/hbase-1.2.3/bin/../logs/hbase-root-zookeeper-hbase.out

# bin/hbase-daemon.sh start master

starting master, logging to /usr/local/hbase-1.2.3/bin/../logs/hbase-root-master-hbase.out

# bin/hbase-daemon.sh start regionserver

starting regionserver, logging to /usr/local/hbase-1.2.3/bin/../logs/hbase-root-regionserver-hbase.out

Use jps to view the newly added Java processes

# jps

NodeManager
HQuorumPeer
HRegionServer
HMaster
ResourceManager
NameNode
DataNode
SecondaryNameNode
Jps

As shown, three new processes have appeared: HQuorumPeer, HRegionServer, and HMaster.

The RegionServer web UI is available at http://192.168.244.30:16030/; in HBase 1.x the HMaster web UI listens on port 16010 (http://192.168.244.30:16010/).
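As a final smoke test, a table can be created and read back in the HBase shell (a sketch; the table name t1 and column family cf are arbitrary):

```shell
bin/hbase shell <<'EOF'
create 't1', 'cf'
put 't1', 'row1', 'cf:a', 'value1'
scan 't1'
disable 't1'
drop 't1'
EOF
```

The scan should show row1 with cf:a=value1; disable/drop just cleans up the test table.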


At this point, the pseudo-distributed HBase setup is complete.

References

1. http://hadoop.apache.org/docs/r2.5.2/hadoop-project-dist/hadoop-common/SingleCluster.html