Timestamp: Aug 4, 2010, 10:39:30 AM
Author: jazz
Legend: Unmodified | Added | Removed | Modified

v8 | v9
2 | 2 | #!html |
3 | 3 | <div style="text-align: center;"><big |
4 | | style="font-weight: bold;"><big><big>實作一: Hadoop 0.20 單機安裝</big></big></big><br/><big><big>Lab1: Hadoop Installation: single node, presudo mode</big></big></div> |
5 | | }}} |
| 4 | style="font-weight: bold;"><big><big>實作一: Hadoop 0.20 單機安裝</big></big></big><br/><big><big>Lab1: Hadoop Installation: single node, pseudo-distributed </big></big></div> |
| 5 | }}} |
| 6 | |
6 | 7 | [[PageOutline]] |
7 | 8 | |
… |
… |
|
89 | 90 | }}} |
90 | 91 | |
| 92 | ---- |
| 93 | |
| 94 | [[PageOutline]] |
| 95 | |
91 | 96 | == Step 4: 設定 hadoop-env.sh == |
92 | 97 | == Step 4: Configure hadoop-env.sh == |
… |
… |
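The body of this step is elided in the diff above. For orientation only, the usual Hadoop 0.20 edit to conf/hadoop-env.sh sets JAVA_HOME; the exact JDK path below is an assumption, not taken from this page:

{{{
# conf/hadoop-env.sh -- illustrative sketch only; the JAVA_HOME path is an
# assumption and must match the JDK actually installed on your machine
export JAVA_HOME=/usr/lib/jvm/java-6-sun
}}}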
|
175 | 180 | == Step 6: Format HDFS == |
176 | 181 | |
177 | | * 以上我們已經設定好 Hadoop 單機測試的環境,接著讓我們來啟動 Hadoop 相關服務,格式化 namenode, secondarynamenode, tasktracker |
| 182 | * 以上我們已經設定好 Hadoop 單機測試的環境,接著讓我們來啟動 Hadoop 相關服務,首先必須格式化 namenode[[BR]] Now that we have configured Hadoop in single-node, pseudo-distributed mode, let's start the Hadoop-related services. First, we need to format the namenode.
178 | 183 | |
179 | 184 | {{{ |
… |
… |
|
181 | 186 | }}} |
182 | 187 | |
183 | | 執行畫面如: |
| 188 | 執行畫面如: [[BR]] You should see results like this: |
184 | 189 | |
185 | 190 | {{{ |
… |
… |
|
203 | 208 | }}} |
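The command itself is elided in the diff above; in a stock Hadoop 0.20 layout the format step is typically invoked as follows (a sketch, assuming Hadoop is installed under /opt/hadoop as elsewhere on this page):

{{{
/opt/hadoop$ bin/hadoop namenode -format
}}}

This initializes a fresh HDFS filesystem image. It is run once at setup time; re-running it on an existing installation wipes the HDFS metadata.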
204 | 209 | |
205 | | == step 7. 啟動Hadoop == |
206 | | |
207 | | * 接著用 start-all.sh 來啟動所有服務,包含 namenode, datanode, |
| 210 | == Step 7: 啟動 Hadoop == |
| 211 | == Step 7: Start Hadoop == |
| 212 | |
| 213 | * 接著用 start-all.sh 來啟動所有服務,包含 namenode, secondary namenode, datanode, jobtracker 及 tasktracker。[[BR]] After formatting the namenode, you can use '''start-all.sh''' to start all services, including the namenode, secondary namenode, datanode, jobtracker, and tasktracker.
208 | 214 | |
209 | 215 | {{{ |
… |
… |
|
211 | 217 | }}} |
212 | 218 | |
213 | | 執行畫面如: |
| 219 | 執行畫面如: [[BR]] You should see results like this: |
214 | 220 | |
215 | 221 | {{{ |
… |
… |
|
221 | 227 | }}} |
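Again the command is elided above; under the same /opt/hadoop assumption it is typically:

{{{
/opt/hadoop$ bin/start-all.sh
}}}

In Hadoop 0.20, start-all.sh is a convenience wrapper that runs start-dfs.sh (namenode, datanode, secondary namenode) followed by start-mapred.sh (jobtracker, tasktracker).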
222 | 228 | |
223 | | == step 8. 完成!檢查運作狀態 == |
224 | | |
225 | | * 啟動之後,可以檢查以下網址,來觀看服務是否正常。[http://localhost:50030/ Hadoop 管理介面] [http://localhost:50060/ Hadoop Task Tracker 狀態] [http://localhost:50070/ Hadoop DFS 狀態] |
226 | | |
227 | | * http://localhost:50030/ - Hadoop 管理介面 |
| 229 | == Step 8: 完成!檢查 Hadoop 運作狀態 == |
| 230 | == Step 8: Complete! Let's check the status of Hadoop ==
| 231 | |
| 232 | * 啟動之後,可以檢查以下網址,來觀看服務是否正常。[http://localhost:50030/ Hadoop 管理介面] [http://localhost:50060/ Hadoop Task Tracker 狀態] [http://localhost:50070/ Hadoop DFS 狀態] [[BR]] After running start-all.sh, you can visit the following URLs to verify that the services are running (a command-line alternative is sketched after this list). [http://localhost:50030/ Hadoop JobTracker Web Interface] [http://localhost:50060/ Hadoop TaskTracker Web Interface] [http://localhost:50070/ Hadoop NameNode Web Interface]
| 233 | |
| 234 | * http://localhost:50030/ - Hadoop 管理介面 - Hadoop JobTracker Web Interface |
228 | 235 | |
229 | 236 | ------ |
230 | | * http://localhost:50060/ - Hadoop Task Tracker 狀態 |
| 237 | |
| 238 | * http://localhost:50060/ - Hadoop Task Tracker 狀態 - Hadoop TaskTracker Web Interface |
231 | 239 | |
232 | 240 | ------ |
233 | | * http://localhost:50070/ - Hadoop DFS 狀態 |
234 | | |
| 241 | |
| 242 | * http://localhost:50070/ - Hadoop DFS 狀態 - Hadoop NameNode Web Interface |
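As noted above, the same web interfaces can also be probed from a terminal. This is a hypothetical quick check, not part of the original lab:

{{{
# each should return an HTML page if the corresponding daemon is up
~$ wget -qO- http://localhost:50030/ > /dev/null && echo "JobTracker OK"
~$ wget -qO- http://localhost:50060/ > /dev/null && echo "TaskTracker OK"
~$ wget -qO- http://localhost:50070/ > /dev/null && echo "NameNode OK"
}}}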
| 243 | |
| 244 | == DEBUG: 使用 jps 檢查 java 程序 == |
| 245 | == DEBUG: Use jps to check running Java processes ==
| 246 | |
| 247 | * 有些時候您需要使用 '''jps''' 指令來檢查目前系統裡面存在哪些 java 程序[[BR]]Sometimes it's useful to use the '''jps''' command to check which Java processes are currently running.
| 248 | {{{ |
| 249 | /opt/hadoop$ jps |
| 250 | }}} |
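On a healthy pseudo-distributed node you would expect jps to list all five Hadoop daemons plus jps itself; the PIDs below are made up for illustration:

{{{
/opt/hadoop$ jps
28091 NameNode
28213 DataNode
28341 SecondaryNameNode
28435 JobTracker
28557 TaskTracker
28679 Jps
}}}

If any daemon is missing from this list, check its log file under the Hadoop logs directory before proceeding.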