
Installing and Deploying the FastDFS Distributed File System on a Server Cluster

Overview

So far we have ten servers at our disposal. We have long planned to build a distributed file system on them, and the conditions are finally ripe. Most of these servers run Ubuntu 16.04 LTS; a few run Ubuntu 14.04 LTS. For this deployment we picked five of them: one server, feagen (118.138.241.39), acts as the tracker, and four servers (Bastion3 (118.138.240.146), Bastion4 (118.138.233.74), Bastion6 (118.138.233.26), POSSUM (118.138.233.27)) act as storage nodes. This post records the details of installing and deploying FastDFS on Ubuntu 16.04.

FastDFS must be installed on all five servers; only the configuration files differ slightly.

1. Prerequisites

  • FastDFS
    Download the source code from GitHub: FastDFS
  • libfastcommon
    This one is also downloaded as source from GitHub: libfastcommon

Both zip packages can be fetched with wget or downloaded locally and then uploaded to the servers.

2. Installing libfastcommon

First unpack the libfastcommon-master.zip package:

unzip libfastcommon-master.zip

Enter the directory:

cd libfastcommon-master/

Enter the following commands in order:

sudo ./make.sh
sudo ./make.sh install

After the second command, you will see output like this:

mkdir -p /usr/lib64
mkdir -p /usr/lib
install -m 755 libfastcommon.so /usr/lib64
install -m 755 libfastcommon.so /usr/lib
mkdir -p /usr/include/fastcommon
install -m 644 common_define.h hash.h chain.h logger.h base64.h shared_func.h pthread_func.h ini_file_reader.h _os_define.h sockopt.h sched_thread.h http_func.h md5.h local_ip_func.h avl_tree.h ioevent.h ioevent_loop.h fast_task_queue.h fast_timer.h process_ctrl.h fast_mblock.h connection_pool.h fast_mpool.h fast_allocator.h fast_buffer.h skiplist.h multi_skiplist.h flat_skiplist.h skiplist_common.h system_info.h fast_blocked_queue.h php7_ext_wrapper.h id_generator.h char_converter.h char_convert_loader.h /usr/include/fastcommon

You can see that libfastcommon.so has been installed to /usr/lib64.
Next, create soft links to the two shared libraries in the directories where the FastDFS binaries will look for them (libfdfsclient.so does not exist yet; it is installed by the FastDFS build in the next section, after which these links resolve):

sudo ln -s /usr/lib64/libfastcommon.so /usr/local/lib/libfastcommon.so
sudo ln -s /usr/lib64/libfastcommon.so /usr/lib/libfastcommon.so
sudo ln -s /usr/lib64/libfdfsclient.so /usr/local/lib/libfdfsclient.so
sudo ln -s /usr/lib64/libfdfsclient.so /usr/lib/libfdfsclient.so
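Once FastDFS itself is installed and libfdfsclient.so exists, the links can be checked with readlink -e, which resolves a link and fails if it dangles. A minimal sketch of that check, demonstrated on a throwaway file rather than the real libraries:

```shell
# readlink -e prints the fully resolved target of a link, or fails if the
# link dangles; handy for verifying the four ln -s commands above.
# Demonstrated with a throwaway file instead of the real shared libraries.
mkdir -p /tmp/lnk-demo
touch /tmp/lnk-demo/libfastcommon.so
ln -sf /tmp/lnk-demo/libfastcommon.so /tmp/lnk-demo/link.so
readlink -e /tmp/lnk-demo/link.so
```

On the real machine the equivalent check would be, e.g., readlink -e /usr/lib/libfastcommon.so.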

3. Installing FastDFS

Unpack the fastdfs-master.zip file:

unzip fastdfs-master.zip

Enter the FastDFS source root directory:

cd fastdfs-master

Enter the following two commands in order:

sudo ./make.sh
sudo ./make.sh install

After installation, all the executables are placed under /usr/bin/, which you can check with:

ls /usr/bin/fdfs*

It shows 14 files in total:

/usr/bin/fdfs_appender_test   
/usr/bin/fdfs_download_file  
/usr/bin/fdfs_test1
/usr/bin/fdfs_appender_test1  
/usr/bin/fdfs_file_info      
/usr/bin/fdfs_trackerd
/usr/bin/fdfs_append_file     
/usr/bin/fdfs_monitor        
/usr/bin/fdfs_upload_appender
/usr/bin/fdfs_crc32           
/usr/bin/fdfs_storaged       
/usr/bin/fdfs_upload_file
/usr/bin/fdfs_delete_file     
/usr/bin/fdfs_test

4. Configuring FastDFS

The configuration files are in the /etc/fdfs directory; take a look:

ls /etc/fdfs

The output is as follows:

client.conf.sample  storage.conf.sample  storage_ids.conf.sample  tracker.conf.sample

4.1 Create the tracker and storage node configuration files

cd /etc/fdfs
sudo cp tracker.conf.sample tracker.conf
sudo cp storage.conf.sample storage.conf

4.2 Edit the tracker configuration file

On the feagen (118.138.241.39) server, edit the tracker configuration file as follows:

sudo vim /etc/fdfs/tracker.conf

You will see the following:

# the base path to store data and log files
base_path=/home/yuqing/fastdfs

Change this line to:

base_path=/feagen/fastdfs/tracker

Note that in this type of configuration file there must be no spaces around the equals sign, and the directory above must actually exist.
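Since the tracker will not start if the base_path directory is missing, create it first. A minimal sketch (it defaults to a scratch path so it is safe to try anywhere; on the real tracker the path is /feagen/fastdfs/tracker and the commands need sudo):

```shell
# The directory named in base_path must exist before fdfs_trackerd starts.
# FDFS_BASE defaults to a throwaway path here; on the real server it would
# be /feagen/fastdfs/tracker and the commands would be run with sudo.
FDFS_BASE=${FDFS_BASE:-/tmp/fastdfs/tracker}
mkdir -p "$FDFS_BASE"
ls -ld "$FDFS_BASE"
```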
Start the tracker:

sudo fdfs_trackerd /etc/fdfs/tracker.conf start

Check the listening port:

sudo netstat -unltp|grep fdfs

If the output looks like this, the tracker started successfully:

tcp        0      0 0.0.0.0:22122           0.0.0.0:*               LISTEN      6951/fdfs_trackerd

4.3 Edit the storage configuration file

On the Bastion3 (118.138.240.146) server, edit the storage configuration file as follows:

sudo vim /etc/fdfs/storage.conf

You will see the following:

group_name=group1
base_path=/home/yuqing/fastdfs
store_path0=/home/yuqing/fastdfs
tracker_server=192.168.209.121:22122

Change it to:

group_name=group1
base_path=/bastion3_cache/fastdfs/storage
store_path0=/bastion3_cache/fastdfs/storage
tracker_server=118.138.241.39:22122

On the Bastion4 (118.138.233.74) server, edit the storage configuration file as follows:

sudo vim /etc/fdfs/storage.conf

You will see the following:

group_name=group1
base_path=/home/yuqing/fastdfs
store_path0=/home/yuqing/fastdfs
tracker_server=192.168.209.121:22122

Change it to:

group_name=group1
base_path=/bastion4_cache/fastdfs/storage
store_path0=/bastion4_cache/fastdfs/storage
tracker_server=118.138.241.39:22122

On the Bastion6 (118.138.233.26) server, edit the storage configuration file as follows:

sudo vim /etc/fdfs/storage.conf

You will see the following:

group_name=group1
base_path=/home/yuqing/fastdfs
store_path0=/home/yuqing/fastdfs
tracker_server=192.168.209.121:22122

Change it to:

group_name=group1
base_path=/bastion6_cache/fastdfs/storage
store_path0=/bastion6_cache/fastdfs/storage
tracker_server=118.138.241.39:22122

On the POSSUM (118.138.233.27) server, edit the storage configuration file as follows:

sudo vim /etc/fdfs/storage.conf

You will see the following:

group_name=group1
base_path=/home/yuqing/fastdfs
store_path0=/home/yuqing/fastdfs
tracker_server=192.168.209.121:22122

Change it to:

group_name=group1
base_path=/possum/fastdfs/storage
store_path0=/possum/fastdfs/storage
tracker_server=118.138.241.39:22122

Start the storage service:

sudo fdfs_storaged /etc/fdfs/storage.conf start

Output like the following means the daemon is up and running:

process fdfs_storaged already running, pid: 28250

Check the listening port:

sudo netstat -unltp|grep fdfs

If the output looks like this, storage started successfully:

tcp        0      0 0.0.0.0:23000           0.0.0.0:*               LISTEN      28250/fdfs_storaged

If nothing is shown, the storage daemon may have failed to start.

Taking the Bastion6 server as an example, let's inspect the startup log:

tail /bastion6_cache/fastdfs/storage/logs/storaged.log

Output like the following means the start failed:

[2018-05-13 19:56:31] ERROR - file: storage_ip_changed_dealer.c, line: 186, connect to tracker server 118.138.240.146:22122 fail, errno: 110, error info: Connection timed out

That is a connection timeout.
Output like the following means the start succeeded:

[2018-05-14 12:59:40] INFO - file: tracker_client_thread.c, line: 310, successfully connect to tracker server 118.138.240.146:22122, as a tracker client, my ip is 118.138.233.26
[2018-05-14 13:00:10] INFO - file: tracker_client_thread.c, line: 1263, tracker server 118.138.240.146:22122, set tracker leader: 118.138.240.146:22122
[2018-05-14 13:03:06] INFO - file: storage_sync.c, line: 2733, successfully connect to storage server 118.138.233.74:23000, continuous fail count: 16
[2018-05-14 13:03:41] INFO - file: storage_sync.c, line: 2733, successfully connect to storage server 118.138.233.74:23000

The connection succeeded.

About the timeout:
When I first discussed this with Chris, we assumed that since these servers all sit in the Monash cloud centre they were on an internal network, so the ports between them would not need to be opened; experiment showed otherwise. Only after asking Jerico to open three ports did the connections succeed. The three ports are 22122, 8888 and 23000:

  • 22122: the tracker service port;
  • 8888: the HTTP port, needed for uploading and downloading files over the web;
  • 23000: the storage service port.
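A quick way to confirm the three ports are actually reachable from another machine is a plain TCP connect test. A sketch, assuming nc(1) is installed; the host below is this deployment's tracker and should be replaced with your own:

```shell
# check_port prints open/closed depending on whether a TCP connect succeeds
# within 2 seconds; the host and ports are the ones used in this deployment.
check_port() {
  if nc -z -w 2 "$1" "$2" 2>/dev/null; then
    echo "$1:$2 open"
  else
    echo "$1:$2 closed"
  fi
}
for p in 22122 8888 23000; do
  check_port 118.138.241.39 "$p"
done
```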

With that, the storage nodes are installed successfully.

4.4 Inspect all storage nodes

Once all the storage nodes are up, run the command below on any one of them to view the cluster status:

sudo /usr/bin/fdfs_monitor /etc/fdfs/storage.conf

The output looks like this:

[2018-05-24 01:27:18] DEBUG - base_path=/bastion6_cache/fastdfs/storage, connect_timeout=10, network_timeout=60, tracker_server_count=1, anti_steal_token=0, anti_steal_secret_key length=0, use_connection_pool=0, g_connection_pool_max_idle_time=3600s, use_storage_id=0, storage server id count: 0

server_count=1, server_index=0

tracker server is 118.138.241.39:22122

group count: 1

Group 1:
group name = group1
disk total space = 100665 MB
disk free space = 72339 MB
trunk free space = 0 MB
storage server count = 4
active server count = 4
storage server port = 23000
storage HTTP port = 8888
store path count = 1
subdir count per path = 256
current write server index = 0
current trunk file id = 0

    Storage 1:
        id = 118.138.233.26
        ip_addr = 118.138.233.26 (vm-118-138-233-26.erc.monash.edu.au)  ACTIVE
        http domain = 
        version = 5.12
        join time = 2018-05-13 19:55:46
        up time = 2018-05-20 00:51:17
        total storage = 483679 MB
        free storage = 452554 MB
        upload priority = 10
        store_path_count = 1
        subdir_count_per_path = 256
        storage_port = 23000
        storage_http_port = 8888
        current_write_path = 0
        source storage id = 
        if_trunk_server = 0
        connection.alloc_count = 256
        connection.current_count = 3
        connection.max_count = 3
        total_upload_count = 1
        success_upload_count = 1
        total_append_count = 0
        success_append_count = 0
        total_modify_count = 0
        success_modify_count = 0
        total_truncate_count = 0
        success_truncate_count = 0
        total_set_meta_count = 0
        success_set_meta_count = 0
        total_delete_count = 0
        success_delete_count = 0
        total_download_count = 0
        success_download_count = 0
        total_get_meta_count = 0
        success_get_meta_count = 0
        total_create_link_count = 0
        success_create_link_count = 0
        total_delete_link_count = 0
        success_delete_link_count = 0
        total_upload_bytes = 87026
        success_upload_bytes = 87026
        total_append_bytes = 0
        success_append_bytes = 0
        total_modify_bytes = 0
        success_modify_bytes = 0
        stotal_download_bytes = 0
        success_download_bytes = 0
        total_sync_in_bytes = 11288
        success_sync_in_bytes = 11288
        total_sync_out_bytes = 0
        success_sync_out_bytes = 0
        total_file_open_count = 5
        success_file_open_count = 5
        total_file_read_count = 0
        success_file_read_count = 0
        total_file_write_count = 5
        success_file_write_count = 5
        last_heart_beat_time = 2018-05-24 01:26:54
        last_source_update = 2018-05-17 01:27:29
        last_sync_update = 2018-05-17 01:06:46
        last_synced_timestamp = 1970-01-01 10:00:00 (never synced)
    Storage 2:
        id = 118.138.233.27
        ip_addr = 118.138.233.27 (vm-118-138-233-27.erc.monash.edu.au)  ACTIVE
        http domain = 
        version = 5.12
        join time = 2018-05-17 00:39:17
        up time = 2018-05-20 00:52:30
        total storage = 2015737 MB
        free storage = 1689979 MB
        upload priority = 10
        store_path_count = 1
        subdir_count_per_path = 256
        storage_port = 23000
        storage_http_port = 8888
        current_write_path = 0
        source storage id = 
        if_trunk_server = 0
        connection.alloc_count = 256
        connection.current_count = 3
        connection.max_count = 3
        total_upload_count = 0
        success_upload_count = 0
        total_append_count = 0
        success_append_count = 0
        total_modify_count = 0
        success_modify_count = 0
        total_truncate_count = 0
        success_truncate_count = 0
        total_set_meta_count = 0
        success_set_meta_count = 0
        total_delete_count = 0
        success_delete_count = 0
        total_download_count = 0
        success_download_count = 0
        total_get_meta_count = 0
        success_get_meta_count = 0
        total_create_link_count = 0
        success_create_link_count = 0
        total_delete_link_count = 0
        success_delete_link_count = 0
        total_upload_bytes = 0
        success_upload_bytes = 0
        total_append_bytes = 0
        success_append_bytes = 0
        total_modify_bytes = 0
        success_modify_bytes = 0
        stotal_download_bytes = 0
        success_download_bytes = 0
        total_sync_in_bytes = 98314
        success_sync_in_bytes = 98314
        total_sync_out_bytes = 0
        success_sync_out_bytes = 0
        total_file_open_count = 5
        success_file_open_count = 5
        total_file_read_count = 0
        success_file_read_count = 0
        total_file_write_count = 5
        success_file_write_count = 5
        last_heart_beat_time = 2018-05-24 01:27:08
        last_source_update = 1970-01-01 10:00:00
        last_sync_update = 2018-05-21 16:32:36
        last_synced_timestamp = 2018-05-17 01:06:38 (20m:51s delay)
    Storage 3:
        id = 118.138.233.74
        ip_addr = 118.138.233.74 (vm-118-138-233-74.erc.monash.edu.au)  ACTIVE
        http domain = 
        version = 5.12
        join time = 2018-05-13 20:03:06
        up time = 2018-05-20 00:53:40
        total storage = 100665 MB
        free storage = 72339 MB
        upload priority = 10
        store_path_count = 1
        subdir_count_per_path = 256
        storage_port = 23000
        storage_http_port = 8888
        current_write_path = 0
        source storage id = 
        if_trunk_server = 0
        connection.alloc_count = 256
        connection.current_count = 3
        connection.max_count = 3
        total_upload_count = 2
        success_upload_count = 2
        total_append_count = 0
        success_append_count = 0
        total_modify_count = 0
        success_modify_count = 0
        total_truncate_count = 0
        success_truncate_count = 0
        total_set_meta_count = 2
        success_set_meta_count = 2
        total_delete_count = 0
        success_delete_count = 0
        total_download_count = 0
        success_download_count = 0
        total_get_meta_count = 0
        success_get_meta_count = 0
        total_create_link_count = 0
        success_create_link_count = 0
        total_delete_link_count = 0
        success_delete_link_count = 0
        total_upload_bytes = 11190
        success_upload_bytes = 11190
        total_append_bytes = 0
        success_append_bytes = 0
        total_modify_bytes = 0
        success_modify_bytes = 0
        stotal_download_bytes = 0
        success_download_bytes = 0
        total_sync_in_bytes = 174142
        success_sync_in_bytes = 87026
        total_sync_out_bytes = 0
        success_sync_out_bytes = 0
        total_file_open_count = 3
        success_file_open_count = 3
        total_file_read_count = 0
        success_file_read_count = 0
        total_file_write_count = 3
        success_file_write_count = 3
        last_heart_beat_time = 2018-05-24 01:27:13
        last_source_update = 2018-05-17 01:06:38
        last_sync_update = 2018-05-21 16:32:54
        last_synced_timestamp = 2018-05-17 01:27:30 (-1s delay)
    Storage 4:
        id = 118.138.240.146
        ip_addr = 118.138.240.146 (vm-118-138-240-146.erc.monash.edu.au)  ACTIVE
        http domain = 
        version = 5.12
        join time = 2018-05-20 00:53:13
        up time = 2018-05-20 00:53:13
        total storage = 483679 MB
        free storage = 449872 MB
        upload priority = 10
        store_path_count = 1
        subdir_count_per_path = 256
        storage_port = 23000
        storage_http_port = 8888
        current_write_path = 0
        source storage id = 118.138.233.26
        if_trunk_server = 0
        connection.alloc_count = 256
        connection.current_count = 3
        connection.max_count = 3
        total_upload_count = 0
        success_upload_count = 0
        total_append_count = 0
        success_append_count = 0
        total_modify_count = 0
        success_modify_count = 0
        total_truncate_count = 0
        success_truncate_count = 0
        total_set_meta_count = 0
        success_set_meta_count = 0
        total_delete_count = 0
        success_delete_count = 0
        total_download_count = 0
        success_download_count = 0
        total_get_meta_count = 0
        success_get_meta_count = 0
        total_create_link_count = 0
        success_create_link_count = 0
        total_delete_link_count = 0
        success_delete_link_count = 0
        total_upload_bytes = 0
        success_upload_bytes = 0
        total_append_bytes = 0
        success_append_bytes = 0
        total_modify_bytes = 0
        success_modify_bytes = 0
        stotal_download_bytes = 0
        success_download_bytes = 0
        total_sync_in_bytes = 98314
        success_sync_in_bytes = 98314
        total_sync_out_bytes = 0
        success_sync_out_bytes = 0
        total_file_open_count = 5
        success_file_open_count = 5
        total_file_read_count = 0
        success_file_read_count = 0
        total_file_write_count = 5
        success_file_write_count = 5
        last_heart_beat_time = 2018-05-24 01:27:03
        last_source_update = 1970-01-01 10:00:00
        last_sync_update = 2018-05-21 16:32:54
        last_synced_timestamp = 2018-05-17 01:27:30 (-1s delay)

All four storage nodes show the ACTIVE state, meaning they all started successfully.

5. Testing file upload

One thing to note: our upcoming feature-extraction program will run on the feagen server, so for now we install the client on that server. We then test whether a file uploaded from this server can be found, as a replica, on all the storage servers.

5.1 Edit the client configuration on the tracker server

Enter the following commands in order:

cd /etc/fdfs
sudo cp client.conf.sample client.conf
sudo vim client.conf

Change the base_path and tracker_server entries to:

base_path=/feagen/fastdfs/client
tracker_server=118.138.241.39:22122

Enter the following command in the shell:

sudo /usr/bin/fdfs_upload_file /etc/fdfs/client.conf fastdfs-master.zip

A return value like the following indicates a successful upload:

group1/M00/00/00/dorpG1td37yAFSYPAAZ-Sk23ivY904.zip

In perl, this return value can be captured as follows:

my $file_name = qx(sudo /usr/bin/fdfs_upload_file /etc/fdfs/client.conf fastdfs-master.zip);
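The returned file ID is a group name followed by a path on the storage node, and splitting it is often useful (note that the perl qx() result above also carries a trailing newline worth chomping). A shell sketch of the split, using the example ID from above:

```shell
# Split a FastDFS file ID into its group name and its path on the storage
# node using parameter expansion; file_id is the example value from above.
file_id="group1/M00/00/00/dorpG1td37yAFSYPAAZ-Sk23ivY904.zip"
group=${file_id%%/*}     # part before the first '/'
remote=${file_id#*/}     # part after the first '/'
echo "$group"
echo "$remote"
```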

Check any storage server; taking Bastion3 as an example, go to:

/bastion3_cache/fastdfs/storage/data/00/00

You will find the following file there:

dorpG1td37yAFSYPAAZ-Sk23ivY904.zip

which confirms the upload succeeded.

6. Testing file download

In the current directory, run the following:

sudo /usr/bin/fdfs_download_file /etc/fdfs/client.conf group1/M00/00/00/dorpG1td37yAFSYPAAZ-Sk23ivY904.zip

The file dorpG1td37yAFSYPAAZ-Sk23ivY904.zip can then be found in the current directory, which means the download succeeded.

With that, our distributed file server is set up.

We consulted the following articles; our thanks to their authors!
FastDFS分布式文件系统集群安装与配置
FastDFS--原理篇
分布式文件系统FastDFS原理介绍
Ubuntu下安装并配置FastDFS

Installing gearman and its perl extension packages on ubuntu 14.04

Overview

After experimentation and for security reasons, we decided to drop kafka for the Bastion4 project in favour of the gearman message-queue framework. The detailed analysis will follow in a later post; here we only record the gearman-related installation.

1. Downloading and installing gearman

The latest gearman release is gearmand-1.1.12. The steps below download it to the local home directory and unpack it:

sudo apt-get update
wget https://launchpad.net/gearmand/1.2/1.1.12/+download/gearmand-1.1.12.tar.gz
tar zxvf gearmand-1.1.12.tar.gz
cd gearmand-1.1.12/

Inside the gearmand-1.1.12 directory, running

./configure

directly will fail with errors about the following missing dependencies:

configure: error: could not find boost
configure: error: Could not find a version of the library
configure: error: could not find gperf
configure: error: Unable to find libevent
configure: error: Unable to find libuuid

So install all of these dependencies first:

sudo apt-get install libboost-dev
sudo apt-get install libboost-all-dev
sudo apt-get install gperf
sudo apt-get install libevent-dev
sudo apt-get install uuid-dev

Once they are in place, run ./configure again (it should now succeed), then the following two commands, still inside the gearmand-1.1.12 directory; compilation takes quite a while:

sudo make
sudo make install

If an error occurs during this process, run the command below to clean out the executables and object files (files with the .o extension) left by the previous build:

sudo make clean

then configure and build again:

./configure
sudo make
sudo make install

If there are no errors, install the gearman job server:

sudo apt-get install gearman-job-server

Once installed, try running gearman:

gearman

It reports an error:

gearman: error while loading shared libraries: libgearman.so.8: cannot open shared object file: No such file or directory

This means the directory containing libgearman.so.8 cannot be found. Open the /etc/ld.so.conf file:

sudo vim /etc/ld.so.conf

and add one line:

include /usr/local/lib

Save, exit, and run:

sudo /sbin/ldconfig

That resolves the error.
Start the job server:

gearmand -d

It fails with:

gearmand: Could not open log file "/usr/local/var/log/gearmand.log", from "/home/young/gearmand-1.1.12", switching to stderr. (No such file or directory)

The fix: under /usr/local/ create a var subdirectory, inside it a log subdirectory, and inside that an empty gearmand.log file. After that the problem disappears.
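The same fix as a sketch. PREFIX stands in for /usr/local (where the commands need sudo); it defaults to a scratch directory so the sketch is harmless to try:

```shell
# gearmand tries to open $PREFIX/var/log/gearmand.log; create it beforehand.
PREFIX=${PREFIX:-/tmp/gearman-demo}
mkdir -p "$PREFIX/var/log"
touch "$PREFIX/var/log/gearmand.log"
ls "$PREFIX/var/log"
```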
Run the command below with sudo:

sudo gearmand -d -L 127.0.0.1 -p 4730

-d runs it as a daemon in the background;
-L sets the IP address to listen on (default: localhost);
-p sets the port to listen on (default: 4730).

With that, gearman is installed on ubuntu 14.04.

2. Installing the perl extension packages

Three extension packages are needed on the perl side:

Gearman::Server
Gearman::Client
Gearman::Worker

You can first install CPAN using the method described in Chris's earlier post BioPerl(一):安装BioPerl.
Then open CPAN with sudo privileges:

sudo cpan

and install the three packages in turn:

install Gearman::Server
install Gearman::Client
install Gearman::Worker

After they are all installed, you may reboot the machine.
After it starts up, watch port 4730 with the following command to check whether the gearman job server is running (i.e. whether the gearmand -d command took effect):

sudo lsof -i:4730

Nothing is shown without sudo, so it is advisable to prefix these linux commands with sudo.

You should see:

COMMAND  PID    USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
gearmand 576 gearman    9u  IPv4  12861      0t0  TCP *:4730 (LISTEN)
gearmand 576 gearman   10u  IPv6  12862      0t0  TCP *:4730 (LISTEN)

On the server it may only show:

COMMAND  PID    USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
gearmand 576 gearman    9u  IPv4  12861      0t0  TCP *:4730 (LISTEN)

This shows the server is started at boot.

3. Running the test perl scripts

client.pl sends messages to the server and acts as the producer;
worker.pl processes these messages and returns the results to client.pl,
acting as the consumer. We just observe the output.
client.pl is as follows:

#!/usr/bin/perl
use strict;
use warnings;
use Gearman::Client;
use Storable; 
use Storable qw(freeze);
use Storable qw(thaw);
use IO::All;

# fork this process
my $pid = fork();
if ($pid == 0)
{
    # do this in the child
    print "start new client \n";
    my $client = Gearman::Client->new;
    print "finish new client \n";
    print "start job_servers \n";
    $client->job_servers('127.0.0.1',4730);
    print "finish job_servers \n";
    # set up an asynchronous task set
    print "start new_task_set \n";
    my $tasks = $client->new_task_set; 
    print "finish new_task_set \n";
    print "start add_task \n";
    #handle database
    my @rows=('hello','byebye');
    $tasks->add_task(
        # start the task; multiple arguments
        showMessage => freeze(\@rows), 
        # register the completion callback
        { on_complete => \&complete },  
    );  
    print "finish add_task \n";
    print "start wait \n";
    # wait for the task to finish
    $tasks->wait;
    print "finish wait \n";
    exit;
}

print "The background task will be finished shortly.\n";
 
sub complete{   
    my $ret = ${ $_[0] };
    #io("complete.txt")->print($ret);
    print $ret, "\n";
} 
 

worker.pl is as follows:

#!/usr/bin/perl
use strict;
use warnings;
use Gearman::Worker;
use Storable qw(thaw);
use Storable qw(freeze);

print "start new worker \n";
my $worker = Gearman::Worker->new;
print "finish new worker \n";
print "start job_servers \n";
$worker->job_servers('127.0.0.1',4730);
print "finish job_servers \n";
# register the function this worker provides
print "start register_function \n";
$worker->register_function( showMessage => \&showMessage );  
print "finish register_function \n";
# wait for incoming jobs
print "start work \n";

$worker->work while 1;  
print "finish work \n";
sub showMessage{
    my @row=@{ thaw($_[0]->arg) };

    my $job = \@row;
    print "\n";
    print "$row[0] \n";
    print "$row[1] \n";   
    print "start sleep \n";
    my $date = &getTime();  
    print  $date->{date}," ",$date->{hour},":",$date->{minute},":",$date->{second};
    print "\n";
    sleep(10);
    print "finish sleep \n";
    $date = &getTime();  
    print  $date->{date}," ",$date->{hour},":",$date->{minute},":",$date->{second};
    print "\n";

    my $ret = "hello world";
    return $ret;
}

sub getTime
{
    my $time = shift || time();
    my ($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst) = localtime($time);

    $year += 1900;
    $mon ++;

    $min  = '0'.$min  if length($min)  < 2;
    $sec  = '0'.$sec  if length($sec)  < 2;
    $mon  = '0'.$mon  if length($mon)  < 2;
    $mday = '0'.$mday if length($mday) < 2;
    $hour = '0'.$hour if length($hour) < 2;
    
    my $weekday = ('Sun','Mon','Tue','Wed','Thu','Fri','Sat')[$wday];

    return { 'second' => $sec,
             'minute' => $min,
             'hour'   => $hour,
             'day'    => $mday,
             'month'  => $mon,
             'year'   => $year,
             'weekNo' => $wday,
             'wday'   => $weekday,
             'yday'   => $yday,
             'date'   => "$year-$mon-$mday"
          };
}

Run worker.pl first:

sudo perl worker.pl

then check port 4730 again:

sudo lsof -i:4730

Two new connections have appeared:

COMMAND   PID    USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
gearmand  576 gearman    9u  IPv4  12861      0t0  TCP *:4730 (LISTEN)
gearmand  576 gearman   10u  IPv6  12862      0t0  TCP *:4730 (LISTEN)
gearmand  576 gearman   33u  IPv4 191760      0t0  TCP localhost:4730->localhost:37151 (ESTABLISHED)
perl     4508    root    3u  IPv4 192605      0t0  TCP localhost:37151->localhost:4730 (ESTABLISHED)

Now run several client.pl instances:

sudo perl client.pl

Each client.pl started adds two more entries:

COMMAND   PID    USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
gearmand  576 gearman    9u  IPv4  12861      0t0  TCP *:4730 (LISTEN)
gearmand  576 gearman   10u  IPv6  12862      0t0  TCP *:4730 (LISTEN)
gearmand  576 gearman   33u  IPv4 191760      0t0  TCP localhost:4730->localhost:37151 (ESTABLISHED)
gearmand  576 gearman   34u  IPv4 215493      0t0  TCP localhost:4730->localhost:37189 (ESTABLISHED)
perl     4508    root    3u  IPv4 192605      0t0  TCP localhost:37151->localhost:4730 (ESTABLISHED)
perl     4709    root    3u  IPv4 214688      0t0  TCP localhost:37189->localhost:4730 (ESTABLISHED)

Each run of worker.pl starts a new consumer. If you start just one consumer and several producers, you can observe the queueing effect nicely: one task is always fully processed before the next one begins.
That concludes installing and using gearman and its perl extension packages.

This post drew mainly on the following articles:

Gearman Job Server
Gearman
使用 Gearman 实现分布式处理
Gearman 安装使用 以及 问题处理
Ubuntu下Gearman安装搭建

Configuring an Apache2 server to run Perl programs via CGI

Overview

For this round of development the Bastion4 server uses a JAVA + Perl architecture: the backend uses Perl as the server providing the webservice, while the JAVA framework Struts receives and handles user requests and then talks to the Perl server.
We use Apache2 as the Perl server. Since Apache2 does not support Perl by default, a little configuration is needed so that Apache2 runs Perl via CGI. We consulted some web pages during the configuration, but none of them described the whole process completely (at least not for our server), so here we record our entire configuration process.

1. Basic Apache2 configuration

We requested a new cloud server for Bastion4, a NeCTAR Ubuntu 16.04 LTS x86_64 image that ships with Apache2. Some Apache2 basics:

  • /etc/apache2/: Apache2's configuration directory
  • /var/www/: the directory where Apache2's websites are stored
  • /var/log/apache2/: the directory where Apache2's logs are stored
  • /etc/apache2/apache2.conf: Apache2's main configuration file. In earlier Apache versions the main configuration file was named httpd.conf, a name many pages still use, which causes a lot of confusion if you are unaware of this. Although apache2.conf is the main configuration file, most of the concrete settings are not written in it: several folders under /etc/apache2 hold the configuration of individual modules, and apache2.conf imports and ties them together.
  • /etc/apache2/sites-enabled/: as the name suggests, Apache2's site configuration files are generally placed in this directory; 000-default.conf here is the file we will edit most often.

We assume that the root directory where Apache stores websites is /var/www/. If you are not sure (the default seems to be /var/www/html), open the /etc/apache2/apache2.conf file and look at this configuration:

<Directory /var/www/>
    Options Indexes FollowSymLinks
    AllowOverride None
    Require all granted
</Directory>

If your configuration instead reads

<Directory /var/www/html/>

change it to

<Directory /var/www/>

This way, when you drop a website named Mywebsite (usually a folder) into /var/www, you can reach it at http://localhost/Mywebsite/.

If your configuration is <Directory /var/www/html/>, you must put your website into /var/www/html before http://localhost/Mywebsite/ works. This is because <Directory> sets Apache's root directory: when you visit http://****/Mywebsite, Apache looks for /Mywebsite under the current root.

Apache's port defaults to 80, and URLs without a port number are directed to port 80 by default, so no port number is needed here. If you have manually changed Apache's port to something else (say 8888), you must use http://localhost:8888/Mywebsite/ to reach the site.

2. Configuring Apache2 to support Perl CGI programs

2.1 Create a Perl website

  1. Create the website's directory under /var/www; as an example, we create a website named cgi-bin:

    mkdir /var/www/cgi-bin
    

    Add sudo yourself to any command that requires administrator privileges.

  2. Create a cgi_test.pl script in /var/www/cgi-bin with the following content:

    #!/usr/bin/perl -w
    use warnings;
    use CGI qw(:standard);
    #! must use 'my' to define a variable
    print header;
    my $now_string = localtime();
    print "<b>Hello, CGI using Perl!</b><br/>It's $now_string NOW!<br />";
    
  3. Make cgi_test.pl executable:

    chmod +x /var/www/cgi-bin/cgi_test.pl
    
  4. Run cgi_test.pl from the command line:

    /var/www/cgi-bin/cgi_test.pl
    

    You will get the following result:

    Content-Type: text/html; charset=ISO-8859-1
    
    <b>Hello, CGI using Perl!</b><br/>It's Mon Aug  1 03:35:42 2016 NOW!<br />
    

    Running this perl script successfully requires the perl CGI module to be installed first.

  5. Although we can now run this perl script locally from the command line, it still cannot run in the server. If you visit http://localhost:8888/cgi-bin/cgi_test.pl, the browser simply displays the entire script:

    #!/usr/bin/perl -w
    use warnings;
    use CGI qw(:standard);
    #! must use 'my' to define a variable
    print header;
    my $now_string = localtime();
    print "<b>Hello, CGI using Perl!</b><br/>It's $now_string NOW!<br />";
    

    rather than only the script's printed output. So next we continue configuring the Apache server.

2.2 Configure the Apache server

  1. Open the /etc/apache2/sites-enabled/000-default.conf configuration file and find the following configuration:

    <VirtualHost *:8080>
    ...
    
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www
        
    ...
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
        
    </VirtualHost>
    

    We need to add some configuration; afterwards, 000-default.conf reads:

    <VirtualHost *:8080>
    ...
    
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www
    
    ScriptAlias /cgi-bin/ /var/www/cgi-bin/
    <Directory "/var/www/cgi-bin">
             AllowOverride all
             Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
             Order allow,deny
             Allow from all
             AddHandler cgi-script .cgi .pl
    </Directory>
    ...
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
       
    </VirtualHost>
    

    As is easy to see, this added configuration mainly gives Apache support for perl.

  2. By default, Apache does not enable the CGI module, which we can confirm from /etc/apache2/mods-enabled/:

    ls -l /etc/apache2/mods-enabled/ | grep cgi
    

    The result is empty.

    ls -l /etc/apache2/mods-available/ | grep cgi
    

    shows the following result:

    -rw-r--r-- 1 root root   74 Mar 19 09:48 authnz_fcgi.load
    -rw-r--r-- 1 root root   58 Mar 19 09:48 cgi.load
    -rw-r--r-- 1 root root  115 Mar 19 09:48 cgid.conf
    -rw-r--r-- 1 root root   60 Mar 19 09:48 cgid.load
    -rw-r--r-- 1 root root   89 Mar 19 09:48 proxy_fcgi.load
    -rw-r--r-- 1 root root   89 Mar 19 09:48 proxy_scgi.load
    

    As the names suggest, mods-enabled holds the modules that are enabled and mods-available holds all the available modules, so enabling the CGI module is simple: create soft links in mods-enabled to the relevant files in mods-available. (On Debian/Ubuntu, sudo a2enmod cgid automates exactly this.)

  3. Create soft links in mods-enabled pointing to the cgid.* files in mods-available:

    ln -s /etc/apache2/mods-available/cgid.load /etc/apache2/mods-enabled/
    ln -s /etc/apache2/mods-available/cgid.conf /etc/apache2/mods-enabled/
    

    Check mods-enabled again:

    ls -l /etc/apache2/mods-enabled/ | grep cgi
    

    The result is:

    lrwxrwxrwx 1 root root 37 Jul 24 11:55 cgid.conf -> /etc/apache2/mods-available/cgid.conf
    lrwxrwxrwx 1 root root 37 Jul 24 11:55 cgid.load -> /etc/apache2/mods-available/cgid.load
    
  4. Restart the Apache server:

    service apache2 restart
    

    or reload the configuration file:

    service apache2 reload
    

    The safe option is to restart the Apache server rather than merely reload it; see 3.2.

  5. Visit http://localhost:8888/cgi-bin/cgi_test.pl in the browser and it will display:

    Hello, CGI using Perl!
    It's Mon Aug 1 03:48:37 2016 NOW!
    

You can see the perl script now running in the Apache server as a CGI script.

3. Errors you may encounter

In fact, if you follow the steps above strictly, you will hardly ever hit errors; still, some common ones are listed here so you can check quickly if you do run into them.

This section is largely taken from Perl/CGI script with Apache2, so the script name (echo.pl here), the script path (/var/cgi-bin rather than /var/www/cgi-bin) and the error messages all follow that article. They differ slightly from the script name, path and content used earlier in this post, but the ideas are much the same.

3.1 500 Internal Server Error

If you see a 500 Internal Server Error when accessing the perl script from the browser, open Apache's error log /var/log/apache2/error.log:

  1. If it shows the following error message:

    [Wed Mar 19 15:19:15.740781 2014] [cgid:error] [pid 3493:tid 139896478103424] (8)Exec format error: AH01241: exec of '/var/cgi-bin/echo.pl' failed
    [Wed Mar 19 15:19:15.741057 2014] [cgid:error] [pid 3413:tid 139896186423040] [client 192.120.120.120:62309] End of script output before headers: echo.pl
    

    the shebang line at the top of the script does not point correctly at the perl install path. Check your perl script and make sure the first line looks like this:

    #!/usr/bin/perl
    

    This error is actually very rare, unless you are a newcomer who hand-wrote an entire perl script yourself.

  2. If it shows the following error message:

    No such file or directory: AH01241: exec of '/var/cgi-bin/echo.pl' failed
    [Wed Mar 19 15:24:33.505429 2014] [cgid:error] [pid 3412:tid 139896261957376] [client 192.120.120.120:58087] End of script output before headers: echo.pl
    

    echo.pl is in DOS format rather than Unix format. If you mostly work on Windows, or like to write scripts on Windows and then upload them to a Unix/Linux server, this is a common and generic error. Convert the script's format with the dos2unix command:

    dos2unix /var/cgi-bin/echo.pl
    

    dos2unix may not be installed by default, so install it yourself; on Ubuntu, use sudo apt install dos2unix.

  3. If it shows the following error message:

    [Wed Mar 19 15:40:31.179155 2014] [cgid:error] [pid 4796:tid 140208841959296] (13)Permission denied: AH01241: exec of '/var/cgi-bin/echo.pl' failed
    [Wed Mar 19 15:40:31.179515 2014] [cgid:error] [pid 4702:tid 140208670504704] [client 192.120.120.120:60337] End of script output before headers: echo.pl
    

    the perl script lacks execute permission; chmod easily fixes this:

    chmod +x /var/cgi-bin/echo.pl
    
  4. If it shows the following error message:

    [Wed Mar 19 16:02:20.239624 2014] [cgid:error] [pid 4703:tid 140208594970368] [client 192.120.120.120:62841] malformed header from script 'echo.pl': Bad header: hi
    

    The script that triggers the error above is:

    #!/usr/bin/perl
    use strict;
    use warnings;
    
    print "hi\n";
    print qq(Content-type: text/plain\n\n);
    

    The perl script printed other characters before the Content-type line, and the browser treated those characters as the Content-type, hence the Bad header error. So do not output anything before print qq(Content-type: text/plain\n\n);.

  5. If it shows the following error message:

    [Wed Mar 19 16:08:00.342999 2014] [cgid:error] [pid 4703:tid 140208536221440] [client 192.120.120.120:59319] End of script output before headers: echo.pl
    

    this is again a header-related problem: your script hit an error or exception before printing Content-type, and error.log may carry other, more specific messages just above this line. So check the perl code that runs before Content-type is printed carefully.

    Regarding the End of script output before headers error, the original author suspects it has the same cause as Premature end of script headers.

3.2 503 Service Unavailable

If, after creating the soft links in mods-enabled in section 2.2, you merely reloaded apache, the following error is reported:

[Wed Mar 19 15:30:22.515457 2014] [cgid:error] [pid 3927:tid 140206699169536] (22)Invalid argument: [client 192.120.120.120:58349] AH01257: unable to connect to cgi daemon after multiple tries: /var/cgi-bin/echo.pl

My guess is that reload only re-reads the server configuration and does not start the CGI process; even though everything is configured, the CGI module has not been started, so the CGI daemon naturally cannot be reached.
In short: if you only changed configuration files, reload is enough; if you enabled or disabled a module, you need restart, because enabled modules are started during server startup.

The safe option is to restart the Apache server rather than merely reload it.

3.3 404 Not Found

If the following error message is shown:

[Wed Mar 19 15:35:13.487333 2014] [cgid:error] [pid 4194:tid 139911599433472] [client 192.120.120.120:58339] AH01264: script not found or unable to stat: /usr/lib/cgi-bin/echo.pl

echo.pl does exist here, so check whether DocumentRoot and ScriptAlias in /etc/apache2/sites-enabled/000-default.conf are configured correctly.

3.4 403 Forbidden

If you run into a 403 Forbidden error, the problem is usually also in /etc/apache2/sites-enabled/000-default.conf. Check the <Directory> block and make sure the permission-related directives inside it are configured correctly.
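For reference, a typical <Directory> block for the CGI directory used in this article might look like the sketch below. This is Apache 2.4 syntax and only a sketch; the exact directives depend on your setup:

```apache
<Directory "/var/cgi-bin">
    # Allow Apache to execute CGI scripts in this directory
    Options +ExecCGI
    AddHandler cgi-script .pl
    # Apache 2.4 access control; Apache 2.2 uses "Order allow,deny" and "Allow from all" instead
    Require all granted
</Directory>
```

If access is denied here, Apache returns 403 regardless of the file permissions on the script itself.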

4. Summary

Running perl via CGI is the most direct and most primitive approach. Once you have successfully run a perl script as CGI, you are encouraged to try the approaches below:

Reference articles

A simple kafka application in java

Overview

An earlier post, ubuntu14.04单机安装配置zookeeper和kafka, covered installing and configuring zookeeper and kafka and verified on the command line that a producer and a consumer work. In a real project, however, the code has to interact with java; users never touch the command line or the backend. This post records a simple interaction between java and kafka. In a web application the idea is the same; only the program entry point becomes an action.

1. Create a project and configure the environment

Open eclipse and click Window→Preferences→Java→Build Path→User Libraries. On the right, choose New and add a library for later reuse; I named mine kafka. Select kafka, click Add External JARs on the right, then browse to the directory where kafka was installed and open the libs folder. With the setup from the previous post, there should be 15 jar files, as shown below:

2016-07-12 17:16:06屏幕截图.png

Select them all and click OK. The library can now be reused in future projects.

Next, create an ordinary java project named testKafka in eclipse. Right-click the project, click Build Path→Add Libraries→User Library, select the kafka library, and click Finish. The environment is now ready.

2. Producer and consumer programs

Now write the test programs: a message producer and a message consumer.

2.1 Producer

package testKafka;

import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class MsgProducer {
    private final Producer<String, String> producer;
    private final Properties props = new Properties();

    public MsgProducer() {
        // list of brokers to connect to
        props.put("metadata.broker.list", "127.0.0.1:9092");
        // serializer class: messages must be serialized before they are sent
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        producer = new Producer<String, String>(new ProducerConfig(props));
    }

    public static void main(String[] args) {
        MsgProducer mProducer = new MsgProducer();
        // target topic
        String topic = "testkafka";

        // the message to send to the topic
        String mString = "Hello kafka!";

        // build the message object
        KeyedMessage<String, String> data = new KeyedMessage<String, String>(topic, mString);

        // push the message to the broker, then release the connection
        mProducer.producer.send(data);
        mProducer.producer.close();
    }
}

Note that the producer needs at least two configuration entries: metadata.broker.list set to 127.0.0.1:9092, and serializer.class set to kafka.serializer.StringEncoder. If you open the producer.properties file configured last time, you will see these two entries as metadata.broker.list=localhost:9092 and serializer.class=kafka.serializer.DefaultEncoder. The broker list must match, or an error is raised.
These entries are best kept in a configuration file, so they are easy to change when servers are added later.
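Following that advice, the two entries can be read from an external properties file instead of being hard-coded. A minimal sketch in plain Java; the temp file below only stands in for a real config file shipped with the application:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;
import java.util.Properties;

public class ProducerConfigLoader {
    // Load producer settings from a properties file so that broker
    // addresses can change without recompiling the producer.
    static Properties load(Path file) throws IOException {
        Properties props = new Properties();
        try (InputStream in = Files.newInputStream(file)) {
            props.load(in);
        }
        return props;
    }

    public static void main(String[] args) throws IOException {
        // For demonstration, write the two required entries to a temp file;
        // in a real project this file would live alongside the application.
        Path file = Files.createTempFile("kafka-producer", ".properties");
        Files.write(file, Arrays.asList(
                "metadata.broker.list=127.0.0.1:9092",
                "serializer.class=kafka.serializer.StringEncoder"));
        Properties props = load(file);
        System.out.println(props.getProperty("metadata.broker.list"));
    }
}
```

The loaded Properties object can be passed straight to ProducerConfig, so adding a broker later only means editing the file.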

2.2 Consumer

package testKafka;

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class MsgConsumer {
    private final ConsumerConnector consumer;
    private final String topic;

    public MsgConsumer(String zookeeper, String groupId, String topic) {
        Properties props = new Properties();
        // zookeeper connection string
        props.put("zookeeper.connect", zookeeper);
        // the consumer group this consumer belongs to
        props.put("group.id", groupId);
        props.put("zookeeper.session.timeout.ms", "500");
        props.put("zookeeper.sync.time.ms", "250");
        props.put("auto.commit.interval.ms", "1000");
        consumer = Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
        this.topic = topic;
    }

    public void testConsumer() {
        Map<String, Integer> topicCount = new HashMap<String, Integer>();
        // number of streams to create for the subscribed topic
        topicCount.put(topic, new Integer(1));
        // returns a map from each topic to its list of streams
        Map<String, List<KafkaStream<byte[], byte[]>>> consumerStreams = consumer.createMessageStreams(topicCount);
        // take the message streams for the topic we need
        List<KafkaStream<byte[], byte[]>> streams = consumerStreams.get(topic);
        for (final KafkaStream<byte[], byte[]> stream : streams) {
            ConsumerIterator<byte[], byte[]> consumerIte = stream.iterator();
            while (consumerIte.hasNext()) {
                System.out.println(new String(consumerIte.next().message()));
            }
        }
        // note: hasNext() blocks waiting for new messages, so this point is
        // only reached after the connector has been shut down from elsewhere
        if (consumer != null) {
            consumer.shutdown();
        }
    }

    public static void main(String[] args) {
        String topic = "testkafka";
        MsgConsumer mConsumer = new MsgConsumer("127.0.0.1:2181", "test-consumer-group", topic);
        mConsumer.testConsumer();
    }

}

Note that the consumer's configuration must correspond to the producer's. The two most important entries are zookeeper.connect and group.id, both of which can be found in consumer.properties.
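Since the old high-level consumer fails at startup if either key is missing, a small guard can report configuration mistakes early. A sketch under that assumption; the helper name is illustrative, not part of the kafka API:

```java
import java.util.Properties;

public class ConsumerConfigCheck {
    // Verify that the properties required by the high-level consumer are
    // present, throwing a descriptive error instead of failing deep inside
    // the kafka client.
    static void requireKeys(Properties props, String... keys) {
        for (String key : keys) {
            String value = props.getProperty(key);
            if (value == null || value.isEmpty()) {
                throw new IllegalStateException("missing required consumer property: " + key);
            }
        }
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "127.0.0.1:2181");
        props.put("group.id", "test-consumer-group");
        requireKeys(props, "zookeeper.connect", "group.id");
        System.out.println("consumer config ok");
    }
}
```

Calling requireKeys before Consumer.createJavaConsumerConnector turns a cryptic client failure into an obvious configuration error.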

3. Testing

First, start zookeeper and kafka from the command line.

Run the consumer program; the Console shows:

log4j:WARN No appenders could be found for logger (kafka.utils.VerifiableProperties).
log4j:WARN Please initialize the log4j system properly.

These are only log4j warnings, not errors, and can be ignored.
Next, run the producer; the Console shows:

log4j:WARN No appenders could be found for logger (kafka.utils.VerifiableProperties).
log4j:WARN Please initialize the log4j system properly.
Hello kafka!

With that, the programs run end to end.
This section mainly follows the official kafka examples: producer and consumer.

Monitoring message consumption status with KafkaOffsetMonitor

Overview

For this server project, we plan to add a message queue and show, on a web page, the processing status of submitted sequences and information about already-processed ones. The state of kafka consumers can be seen on the backend command line, but expecting visitors to read a command line is unrealistic, so we adopted the open-source tool KafkaOffsetMonitor. The Github download address: Kafka Offset Monitor.

1. Install jdk, zookeeper, and kafka

For this part, see the previous post: ubuntu14.04单机安装配置zookeeper和kafka.

2. Install and configure KafkaOffsetMonitor

Create a new folder; for example, I created a subfolder kafkaMonitor under the kafka folder. After downloading, put the KafkaOffsetMonitor-assembly-0.2.0.jar file into the kafkaMonitor folder. In the same location, create a file named kafkaMonitor.sh with the following content:

#! /bin/bash
java -cp KafkaOffsetMonitor-assembly-0.2.0.jar \
com.quantifind.kafka.offsetapp.OffsetGetterWeb \
--zk localhost:2181 \
--port 8089 \
--refresh 10.seconds \
--retain 7.days

These are the most important options; the remaining ones can be omitted and the tool still runs. Going through each option above:

  • --zk The addresses and ports of the zookeeper cluster nodes. They should match host.name and clientPort in zookeeper.properties under kafka's config folder. On this machine, host.name=localhost and clientPort=2181.
  • --port The port KafkaOffsetMonitor itself listens on. Avoid well-known port numbers such as 80 or 8080; I chose 8089.
  • --refresh How often the tool refreshes its data; pick an interval that is neither too short nor too long.
  • --retain How long the collected data is kept in the tool's database.

3. Start KafkaOffsetMonitor

  • First, start zookeeper:

    Switch to the /home/young/zookeeper/bin directory and run:

    ./zkServer.sh start
    
  • Then, start kafka:

    Switch to the /home/young/kafka directory and run:

    bin/kafka-server-start.sh config/server.properties
    
  • Finally, start KafkaOffsetMonitor:

    Switch to the /home/young/kafka/kafkaMonitor directory and run:

    ./kafkaMonitor.sh 
    

    If the output looks like the following, it worked:

    serving resources from: jar:file:/home/young/kafka/kafkaMonitor/KafkaOffsetMonitor-assembly-0.2.0.jar!/offsetapp
    SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
    SLF4J: Defaulting to no-operation (NOP) logger implementation
    SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
    2016-06-10 12:05:13.724:INFO:oejs.Server:jetty-7.x.y-SNAPSHOT
    log4j:WARN No appenders could be found for logger (org.I0Itec.zkclient.ZkConnection).
    log4j:WARN Please initialize the log4j system properly.
    log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
    2016-06-10 12:05:13.781:INFO:oejsh.ContextHandler:started o.e.j.s.ServletContextHandler{/,jar:file:/home/young/kafka/kafkaMonitor/KafkaOffsetMonitor-assembly-0.2.0.jar!/offsetapp}
    2016-06-10 12:05:13.802:INFO:oejs.AbstractConnector:Started SocketConnector@0.0.0.0:8089
    

4. Run KafkaOffsetMonitor

Rather than creating a new topic, we reuse testkafka from the previous post.
Switch to /home/young/kafka, open two terminal windows, and start a console producer and a console consumer:

bin/kafka-console-producer.sh --broker-list localhost:9092 --topic testkafka
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic testkafka --from-beginning

Type a message in the producer; if it shows up in the consumer, everything works:

Hello kafkaOffsetMonitor!

Now open a browser and go to the following address (matching the address and port configured earlier):

http://127.0.0.1:8089/

It shows the following:

1.png

Click the lower entry (the upper one is already dead):

2.png

Here logSize, Offset, and Lag stand for the total number of messages, the number of consumed messages, and the number of not-yet-consumed messages, respectively.
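In other words, for each partition the three columns satisfy Lag = logSize - Offset; a trivial sketch:

```java
public class LagDemo {
    // Lag is the number of messages produced but not yet consumed.
    static long lag(long logSize, long offset) {
        return logSize - offset;
    }

    public static void main(String[] args) {
        // e.g. 120 messages written to the partition, 100 already consumed
        System.out.println(lag(120L, 100L));
    }
}
```

A steadily growing Lag therefore means consumers are falling behind the producers.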
Click Topic List at the top to list all Topics:

3.png

Click the bottom Topic, testkafka, and keep clicking through; it shows a chart of processing progress over time:

5.png

There are not many messages here; with more, the chart would be quite impressive.
Click the last item in the title bar, Visualization, to view other consumers or the whole cluster (here we have only one consumer and one server):

6.png

That completes the installation; when using it in a real project, adjust the configuration as needed.
This post mainly drew on the official documentation and the following articles:
Kafka实战-KafkaOffsetMonitor
Apache Kafka监控之KafkaOffsetMonitor
apache kafka监控系列-KafkaOffsetMonitor