Timestamp:
    Sep 2, 2009, 8:01:36 PM
Author:
    jazz
Comment:
Changes between v4 and v5:
3 | 3 | * hadoop.nchc.org.tw system maintenance |
4 | 4 | * Kernel panics still occur frequently; the suspected main cause is insufficient memory. |
5 | | * [[Image(hadoop_kernel_panic.png,size=400)]] |
| 5 | * [[Image(hadoop_kernel_panic.png,width=400)]] |
6 | 6 | * [Trace 1] HADOOP_HEAPSIZE was once raised to 1500MB and has now been lowered to 1024MB - physical memory is only 2GB and the system itself already consumes 1.5GB; running Map/Reduce jobs with a 1.5GB heap on top would total 3GB. Even with 2GB of swap, that may still not help. |
7 | 7 | {{{ |
8 | 8 | export HADOOP_HEAPSIZE=1024 |
9 | 9 | }}} |
10 | | * [[Image(mem_usage_of_hadoop_cluster.png,size=400)]] |
| 10 | * [[Image(mem_usage_of_hadoop_cluster.png,width=400)]] |
11 | 11 | * [Trace 2] The processes that actually consume memory are: datanode, tasktracker, mapper, and reducer, so the number of mappers and reducers is also limited by the physical memory size |
12 | | * [[Image(hadoop_task_mem_usage.png,size=400)]] |
| 12 | * [[Image(hadoop_task_mem_usage.png,width=400)]] |
13 | 13 | * Test multi-core restore in the computer classroom - using the 20090831-karmic build |
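
The arithmetic behind Trace 1 and Trace 2 can be sketched as a quick per-node budget check. This is a hypothetical back-of-envelope calculation, not from the original page: the 512 MB reserved for the OS and the datanode/tasktracker daemons is an assumed figure.

```shell
#!/bin/sh
# Back-of-envelope memory budget for one Hadoop node (sketch, assumptions noted).
TOTAL_MB=2048          # physical memory on the node, per the page (2 GB)
RESERVED_MB=512        # assumed OS + datanode/tasktracker overhead (hypothetical)
HADOOP_HEAPSIZE=1024   # per-JVM heap, matching the hadoop-env.sh setting above

# Integer division: how many full task heaps fit in the remaining memory.
MAX_TASKS=$(( (TOTAL_MB - RESERVED_MB) / HADOOP_HEAPSIZE ))
echo "at most ${MAX_TASKS} concurrent 1 GB-heap task(s) per node"
```

Under these assumptions only one 1 GB-heap task fits, which is consistent with the page's observation that a 1.5 GB heap pushed the node past its 2 GB of RAM.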