Deb Packaging

Introduction

  • The goal is to learn how to build a deb package for nutch.
  • However, most of the how-tos available apply to projects (C code and the like) that go through configure, make, and make install; for those, building a deb is as simple as the following steps (see the sketch after this list):
    • Name the directory pkgname-version, e.g. hadoop-0.19
    • Run dh_make inside the source tree
    • Edit the configuration files in the newly generated debian directory
    • Run dpkg-buildpackage -rfakeroot
  • hadoop and nutch, however, are Java code with no make or configure files, so the method above runs into errors.
  • This article therefore targets the case of packaging executables that are already compiled, bundled with prepared configuration files, for simple installation.
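
For comparison, a minimal sketch of that standard flow (foo-1.0 and the e-mail address are placeholders):

$ tar zxvf foo-1.0.tar.gz                           # unpack the upstream source as foo-1.0/
$ cd foo-1.0
$ dh_make -e you@example.com -f ../foo-1.0.tar.gz   # generate the debian/ templates
$ vi debian/control debian/rules                    # adjust the generated files
$ dpkg-buildpackage -rfakeroot                      # build ../foo_1.0-1_*.deb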

Steps

Create the deb packaging configuration files

$ mkdir -p ~/test/hadoop-0.19.1/debian
$ cd ~/test/hadoop-0.19.1/debian

Edit the files

  • These files are much the same as what dh_make generates, so they are not discussed here:
changelog
copyright
compat
  • The contents of the remaining files are listed below.

rules

#!/usr/bin/make -f

export DH_VERBOSE=0

all:

# No compile step: the tarball ships pre-built jars, so "install" only
# stages files and assembles the binary packages with debhelper.
install:
        dh_testdir
        dh_testroot
        dh_install -Xlicense.txt
        dh_installdocs
        dh_installchangelogs
        #dh_installexamples
        dh_compress
        dh_fixperms
        dh_installdeb
        dh_link
        dh_gencontrol
        dh_md5sums
        dh_builddeb

clean:
        dh_clean

binary: install

# Nothing to compile for the pure-Java, pre-built sources.
build:
binary-arch:
binary-indep:
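
Before a full build, the rules file can be exercised by hand (run from the top of hadoop-0.19.1, with fakeroot installed):

$ chmod +x debian/rules
$ fakeroot debian/rules clean     # remove leftovers from a previous build
$ fakeroot debian/rules binary    # stage files and produce ../hadoop_*.deb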

control

Source: hadoop
Section: devel
Priority: extra
Maintainer: Jazz Yao-Tsung Wang <jazzwang.tw@gmail.com>
Build-Depends: debhelper (>= 5)
Standards-Version: 3.7.2

Package: hadoop
Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends}, sun-java6-jre, sun-java6-bin
Suggests: sun-java6-jdk
Description: Apache Hadoop Core
  .
  Apache Hadoop Core is a software platform that lets one easily write and 
  run applications that process vast amounts of data.
  .
  Here's what makes Hadoop especially useful:
   * Scalable: Hadoop can reliably store and process petabytes.
   * Economical: It distributes the data and processing across clusters of 
                 commonly available computers. These clusters can number into 
                 the thousands of nodes.
   * Efficient: By distributing the data, Hadoop can process it in parallel on
                the nodes where the data is located. This makes it extremely 
                rapid.
   * Reliable: Hadoop automatically maintains multiple copies of data and 
               automatically redeploys computing tasks based on failures.
  .
  Hadoop implements MapReduce, using the Hadoop Distributed File System (HDFS).
  MapReduce divides applications into many small blocks of work. HDFS creates 
  multiple replicas of data blocks for reliability, placing them on compute 
  nodes around the cluster. MapReduce can then process the data where it is 
  located.
  .
  For more information about Hadoop, please see the Hadoop website.
  http://hadoop.apache.org/

Package: hadoop-src
Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends}, sun-java6-jdk, ant, gcc, g++, hadoop
Description: Apache Hadoop Core ( java source code and examples )
  .
  Apache Hadoop Core is a software platform that lets one easily write and
  run applications that process vast amounts of data.
  .
  This package includes the Java source code and examples from the original
  tarball. Install this package only when you need to rebuild the jar binary
  or want to run the 'Word Count' examples of MapReduce.

Package: hadoop-doc
Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends}
Description: Apache Hadoop Core Documents
  .
  Apache Hadoop Core is a software platform that lets one easily write and
  run applications that process vast amounts of data.
  .
  This package includes the HTML and PDF documents from the original tarball.
  Install this package only when you need these documents.
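
After a build, the control metadata and the finished package can be sanity-checked with lintian, if it is installed:

$ lintian ../hadoop_0.19.1-1_amd64.deb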

hadoop.install

conf/*        etc/hadoop
debian/conf/* etc/hadoop
bin           opt/hadoop
c++           opt/hadoop
contrib       opt/hadoop
lib           opt/hadoop
libhdfs       opt/hadoop
librecordio   opt/hadoop
webapps       opt/hadoop
*.jar         opt/hadoop
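
Each line above is a dh_install pair: a path in the unpacked tarball, then the target directory inside the package. One way to double-check where everything landed after a build:

$ dpkg-deb -c ../hadoop_0.19.1-1_amd64.deb | less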

hadoop.prerm

#!/bin/sh
# Stop all Hadoop daemons as hdfsadm before the package files are removed.
su - hdfsadm -c /opt/hadoop/bin/stop-all.sh

hadoop-doc.install

docs/*  usr/share/doc/hadoop
etc/hadoop  opt/hadoop/conf
usr/share/doc/hadoop opt/hadoop/docs
var/log/hadoop  opt/hadoop/logs

hadoop-src.install

src opt/hadoop
*.xml opt/hadoop
usr/share/doc/hadoop  opt/hadoop/docs

hadoop.postinst

#!/bin/sh
# Only act when the package is being configured.
echo "$1"
if [ "$1" != configure ]
then
  exit 0
fi
setup_hdfsadm_user() {
  if ! getent passwd hdfsadm >/dev/null; then
    useradd hdfsadm
    mkdir -p /home/hdfsadm/.ssh
    mkdir -p /var/log/hadoop
    # Passphrase-less key pair so start-all.sh can ssh to localhost unattended.
    ssh-keygen -t rsa -q -f /home/hdfsadm/.ssh/id_rsa -N ""
    cp /home/hdfsadm/.ssh/id_rsa.pub /home/hdfsadm/.ssh/authorized_keys
    chown hdfsadm:hdfsadm /var/log/hadoop
    chown -R hdfsadm:hdfsadm /home/hdfsadm/.ssh
    chown -R hdfsadm:hdfsadm /home/hdfsadm
    # Format HDFS and bring up all daemons as the new user.
    su - hdfsadm -c "/opt/hadoop/bin/hadoop namenode -format"
    su - hdfsadm -c /opt/hadoop/bin/start-all.sh
    echo "Please check via browsing following URLs:"
    echo "(1) http://localhost:50030 for Hadoop Map/Reduce Administration."
    echo "(2) http://localhost:50060 for Hadoop Task Tracker status"
    echo "(3) http://localhost:50070 for Hadoop Distributed File System status"
  fi
}
setup_hdfsadm_user
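
A quick smoke test of the passwordless SSH login that start-all.sh depends on (the echo is just a probe):

$ sudo su - hdfsadm -c "ssh -o StrictHostKeyChecking=no localhost echo ok"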

hadoop.docs

CHANGES.txt
LICENSE.txt
NOTICE.txt
README.txt

hadoop.postrm

#!/bin/sh
# Only clean up on real removal; keep the user and data on upgrades.
echo "$1"
if [ "$1" != remove ]
then
  exit 0
fi
remove_hdfsadm_user() {
  if ! getent passwd hdfsadm >/dev/null; then
    echo "no account found: 'hdfsadm'."
  else
    userdel hdfsadm
    rm -rf /home/hdfsadm
    rm -rf /var/log/hadoop
    rm -rf /tmp/hadoop-hdfsadm*
    rm -rf /tmp/hsperfdata_*
  fi
}
remove_hdfsadm_user

Add a conf directory

  • It holds the pre-edited Hadoop configuration files; their contents are Hadoop-specific and not covered again here (a small fragment is sketched below).
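
For illustration only, a fragment of a hypothetical debian/conf/hadoop-env.sh; the values are assumptions and should match your own setup:

# debian/conf/hadoop-env.sh (fragment, hypothetical values)
export JAVA_HOME=/usr/lib/jvm/java-6-sun   # matches the sun-java6-* dependencies in control
export HADOOP_LOG_DIR=/var/log/hadoop      # matches the log directory created in postinst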

Edit a Makefile

VERSION = 0.19.1
all: help
deb:
	@dpkg-buildpackage -rfakeroot -aamd64
	@dpkg-buildpackage -rfakeroot -ai386
clean:
	@debian/rules clean
source: 
	@wget http://ftp.twaren.net/Unix/Web/apache/hadoop/core/hadoop-${VERSION}/hadoop-${VERSION}.tar.gz
	@tar zxvf hadoop-${VERSION}.tar.gz -C ..
	@rm conf/hadoop-env.sh  
	@rm conf/hadoop-site.xml
	@chmod a+x `find . -name "configure"`
update:
	@scp ../hadoop*_amd64.deb www.classcloud.org:/var/www/hadoop/dists/unstable/main/binary-amd64/.
	@scp ../hadoop*_i386.deb www.classcloud.org:/var/www/hadoop/dists/unstable/main/binary-i386/.
	@ssh www.classcloud.org /var/www/hadoop/update-repository.sh
help:
	@echo "Usage:"
	@echo "make deb     - Build Debian Package."
	@echo "make clean   - Clean up Debian Package temparate files."
	@echo "make source  - download source tarball from hadoop mirror site."
	@echo "make update  - upload deb packages to classcloud.org."
	@echo "make help    - show Makefile options."
	@echo " "
	@echo "Example:"
	@echo "$$ make source; make deb; make clean"
  • Once the files above are in place, the result should match the attachment debain-hadoop-pkg.zip

Run the Makefile targets

$ make source; make deb; make clean
  • Afterwards, a file named hadoop_0.19.1-1_amd64.deb will appear under ~/test
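
To try the package on the build machine (run from ~/test/hadoop-0.19.1; root privileges needed):

$ sudo dpkg -i ../hadoop_0.19.1-1_amd64.deb   # postinst creates hdfsadm, formats HDFS, starts the daemons
$ sudo dpkg -r hadoop                         # prerm/postrm stop the daemons and remove the hdfsadm account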
