Installing Hadoop 1.0.0 is not covered here; see http://blog.csdn.net/ylqmf/article/details/7250235, which already describes it in detail.
Building on that setup, we now install HBase 0.92.0.
[root@da-free-test1 ~]# cd /opt/hadoop/
Step 1: Download HBase. Because some releases have known bugs, pick the exact version to suit your needs; here we use 0.92.0 as the example:
[root@da-free-test1 hadoop]# wget http://labs.renren.com/apache-mirror//hbase/hbase-0.92.0/hbase-0.92.0.tar.gz
Unpack it, then rename the directory and point an hbase symlink at it:
[root@da-free-test1 hadoop]# tar zxvf hbase-0.92.0.tar.gz
[root@da-free-test1 hadoop]# mv hbase-0.92.0 hbase_0_92_0
[root@da-free-test1 hadoop]# ln -s hbase_0_92_0 hbase
Step 2: Edit the configuration files
[root@da-free-test1 hadoop]# vi /etc/profile
Add HBASE_HOME to the environment:
export HBASE_HOME=/opt/hadoop_1_0_0/hbase_0_92_0
export PATH=$PATH:$HBASE_HOME/bin
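After editing /etc/profile, reload it and confirm the variable is visible. A minimal sketch (the exports are repeated inline so the snippet is self-contained; on the real machine `source /etc/profile` alone picks up the edit):

```shell
# Apply the new environment and verify HBASE_HOME took effect
export HBASE_HOME=/opt/hadoop_1_0_0/hbase_0_92_0
export PATH="$PATH:$HBASE_HOME/bin"
echo "$HBASE_HOME"
```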
Next, edit HBase's own configuration files:
[root@da-free-test1 hadoop]# cd hbase/conf/
1 hbase-env.sh
[root@da-free-test1 conf]# vi hbase-env.sh
Set the following parameters:
export JAVA_HOME=/soft/jdk1.6.0_30
export HBASE_OPTS="$HBASE_OPTS -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps"
export HBASE_PID_DIR=/opt/hadoop_1_0_0/hbase_0_92_0/pids
export HBASE_MANAGES_ZK=true
2 hbase-site.xml
[root@da-free-test1 conf]# vi hbase-site.xml
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://da-free-test1:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>da-free-test1,da-free-test2,da-free-test3,da-free-test4</value>
  </property>
  <property>
    <name>zookeeper.session.timeout</name>
    <value>60000</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>21818</value>
  </property>
</configuration>
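A malformed hbase-site.xml fails in confusing ways at startup, so it is worth checking that the edited file is well-formed XML before copying it around. A small sketch (a fragment of the configuration above is written to a temp file so the example is self-contained; on the real node, point the parser at conf/hbase-site.xml instead):

```shell
# Write a copy of the configuration fragment to a temp file and check it parses
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://da-free-test1:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
</configuration>
EOF
python3 -c "import sys, xml.etree.ElementTree as ET; ET.parse(sys.argv[1]); print('well-formed')" "$cfg"
```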
3 regionservers
[root@da-free-test1 conf]# vi regionservers
da-free-test2
da-free-test3
da-free-test4
Step 3: With configuration done, copy HBase to the other nodes
[root@da-free-test1 conf]# scp -r /opt/hadoop/hbase_0_92_0 root@192.168.60.152:/opt/hadoop/
[root@da-free-test1 conf]# scp -r /opt/hadoop/hbase_0_92_0 root@192.168.60.153:/opt/hadoop/
[root@da-free-test1 conf]# scp -r /opt/hadoop/hbase_0_92_0 root@192.168.60.154:/opt/hadoop/
Don't forget to recreate the hbase symlink on each of those nodes.
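The copy-and-relink step can be scripted across all three slaves. A sketch, assuming passwordless root SSH to the IPs used above (the scp/ssh lines are commented out so the sketch is safe to run anywhere; uncomment them on the real cluster):

```shell
# Push the configured tree to each slave and recreate the hbase symlink there
slaves="192.168.60.152 192.168.60.153 192.168.60.154"
for ip in $slaves; do
  echo "syncing $ip"
  # scp -r /opt/hadoop/hbase_0_92_0 root@$ip:/opt/hadoop/
  # ssh root@$ip 'ln -sfn /opt/hadoop/hbase_0_92_0 /opt/hadoop/hbase'
done
```

`ln -sfn` replaces any stale link in place, so the loop is safe to re-run after an upgrade.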
Step 4: Start the cluster. First bring up Hadoop:
[root@da-free-test1 conf]# /opt/hadoop/bin/start-all.sh
Warning: $HADOOP_HOME is deprecated.
starting namenode, logging to /opt/hadoop_1_0_0/libexec/../logs/hadoop-root-namenode-da-free-test1.out
da-free-test2: starting datanode, logging to /opt/hadoop_1_0_0/libexec/../logs/hadoop-root-datanode-da-free-test2.out
da-free-test3: starting datanode, logging to /opt/hadoop_1_0_0/libexec/../logs/hadoop-root-datanode-da-free-test3.out
da-free-test4: starting datanode, logging to /opt/hadoop_1_0_0/libexec/../logs/hadoop-root-datanode-da-free-test4.out
da-free-test1: starting secondarynamenode, logging to /opt/hadoop_1_0_0/libexec/../logs/hadoop-root-secondarynamenode-da-free-test1.out
starting jobtracker, logging to /opt/hadoop_1_0_0/libexec/../logs/hadoop-root-jobtracker-da-free-test1.out
da-free-test4: starting tasktracker, logging to /opt/hadoop_1_0_0/libexec/../logs/hadoop-root-tasktracker-da-free-test4.out
da-free-test3: starting tasktracker, logging to /opt/hadoop_1_0_0/libexec/../logs/hadoop-root-tasktracker-da-free-test3.out
da-free-test2: starting tasktracker, logging to /opt/hadoop_1_0_0/libexec/../logs/hadoop-root-tasktracker-da-free-test2.out
[root@da-free-test1 conf]# jps
6453 JobTracker
6529 Jps
6371 SecondaryNameNode
6207 NameNode
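On the master, jps should list NameNode, SecondaryNameNode, and JobTracker (the DataNode and TaskTracker daemons live on the slaves). A quick check, fed here with the sample jps output above so the snippet is self-contained; on a live node, substitute `jps_output=$(jps)`:

```shell
# Verify the expected master-side daemons appear in the jps output
jps_output='6453 JobTracker
6371 SecondaryNameNode
6207 NameNode'
for d in NameNode SecondaryNameNode JobTracker; do
  echo "$jps_output" | grep -qw "$d" && echo "$d is running"
done
```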
Hadoop is up; now start HBase:
[root@da-free-test1 conf]# /opt/hadoop/hbase/bin/start-hbase.sh
da-free-test1: starting zookeeper, logging to /opt/hadoop/hbase/bin/../logs/hbase-root-zookeeper-da-free-test1.out
da-free-test4: starting zookeeper, logging to /opt/hadoop/hbase/bin/../logs/hbase-root-zookeeper-da-free-test4.out
da-free-test2: starting zookeeper, logging to /opt/hadoop/hbase/bin/../logs/hbase-root-zookeeper-da-free-test2.out
da-free-test3: starting zookeeper, logging to /opt/hadoop/hbase/bin/../logs/hbase-root-zookeeper-da-free-test3.out
da-free-test1: 2012-02-13T13:21:23.622+0800: [GC [DefNew: 3200K->283K(3584K), 0.0057370 secs] 3200K->283K(11520K), 0.0057930 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
starting master, logging to /opt/hadoop_1_0_0/hbase_0_92_0/logs/hbase-root-master-da-free-test1.out
da-free-test4: starting regionserver, logging to /opt/hadoop/hbase/bin/../logs/hbase-root-regionserver-da-free-test4.out
da-free-test3: starting regionserver, logging to /opt/hadoop/hbase/bin/../logs/hbase-root-regionserver-da-free-test3.out
da-free-test2: starting regionserver, logging to /opt/hadoop/hbase/bin/../logs/hbase-root-regionserver-da-free-test2.out
Open the HBase shell:
[root@da-free-test1 conf]# /opt/hadoop/hbase/bin/hbase shell
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.92.0, r1231986, Mon Jan 16 13:16:35 UTC 2012
If something went wrong with the installation, errors will appear at this point; the classic one is:
java.lang.NoClassDefFoundError: org/slf4j/impl/StaticLoggerBinder
hbase(main):003:0> list
TABLE
0 row(s) in 0.0300 seconds
OK — with the steps above, HBase is up and running.
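Once the shell comes up cleanly, a quick smoke test confirms the master and regionservers can actually serve requests. These are HBase shell commands, typed at the hbase(main)> prompt; the table and column-family names are illustrative:

```
create 't1', 'f1'
put 't1', 'r1', 'f1:c1', 'v1'
scan 't1'
disable 't1'
drop 't1'
```

The scan should return the single row written by put; disable/drop clean up afterwards.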
You can also check the status in the web UI:
http://i752.photobucket.com/albums/xx166/ntudou/dev/hbase01.png