Tuesday, March 31, 2009

RAID server configuration output

HP xw4400 server info, with Fedora 10 installed

[root@raid-server home]# df -k
        Filesystem           1K-blocks      Used Available Use% Mounted on
        /dev/md0              25197100   5042740  18874392  22% /
        tmpfs                  4155928        76   4155852   1% /dev/shm
        /dev/mapper/VG0--raid-lv0
                             453456384   2093972 428328148   1% /home

[root@raid-server home]# df -h
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/md0               25G  4.9G   19G  22% /
        tmpfs                 4.0G   76K  4.0G   1% /dev/shm
        /dev/mapper/VG0--raid-lv0
                              433G  2.0G  409G   1% /home


[root@raid-server home]# fdisk -l
        Disk /dev/sda: 500.1 GB, 500107862016 bytes
        255 heads, 63 sectors/track, 60801 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Disk identifier: 0x000d49c3

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *           1        3187    25599546   fd  Linux raid autodetect
        /dev/sda2            3188        3448     2096482+  fd  Linux raid autodetect
        /dev/sda3            3449       60801   460687972+  fd  Linux raid autodetect

        Disk /dev/sdb: 500.1 GB, 500107862016 bytes
        255 heads, 63 sectors/track, 60801 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Disk identifier: 0x000691f2

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   *           1        3187    25599546   fd  Linux raid autodetect
        /dev/sdb2            3188        3448     2096482+  fd  Linux raid autodetect
        /dev/sdb3            3449       60801   460687972+  fd  Linux raid autodetect

[root@raid-server home]# pvdisplay
          --- Physical volume ---
          PV Name               /dev/md2
          VG Name               VG0-raid
          PV Size               439.35 GB / not usable 2.50 MB
          Allocatable           yes (but full)
          PE Size (KByte)       16384
          Total PE              28118
          Free PE               0
          Allocated PE          28118
          PV UUID               Rk2rBa-LBE7-0mkC-6hz1-P79o-hZ2o-EkNptz

[root@raid-server home]# vgdisplay
          --- Volume group ---
          VG Name               VG0-raid
          System ID
          Format                lvm2
          Metadata Areas        1
          Metadata Sequence No  2
          VG Access             read/write
          VG Status             resizable
          MAX LV                0
          Cur LV                1
          Open LV               1
          Max PV                0
          Cur PV                1
          Act PV                1
          VG Size               439.34 GB
          PE Size               16.00 MB
          Total PE              28118
          Alloc PE / Size       28118 / 439.34 GB
          Free  PE / Size       0 / 0
          VG UUID               L2lNJW-H96E-yr2y-kQYB-dVgu-cNtc-Nkjint

[root@raid-server home]# lvdisplay
          --- Logical volume ---
          LV Name                /dev/VG0-raid/lv0
          VG Name                VG0-raid
          LV UUID                T0zlfw-90Ev-yimU-FMXF-iJW3-q27Z-SzKWI6
          LV Write Access        read/write
          LV Status              available
          # open                 1
          LV Size                439.34 GB
          Current LE             28118
          Segments               1
          Allocation             inherit
          Read ahead sectors     auto
          - currently set to     256
          Block device           253:0

[root@raid-server home]# mdadm --detail --scan
        ARRAY /dev/md0 level=raid1 num-devices=2 metadata=0.90 UUID=4986355b:6ff50e18:4f8021ce:19673fca
        ARRAY /dev/md1 level=raid1 num-devices=2 metadata=0.90 UUID=ff1e49cb:daaea683:1ed31a98:32961f73
        ARRAY /dev/md2 level=raid1 num-devices=2 metadata=0.90 UUID=c26b0201:2ada3aa0:ad476803:4025a76b
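
The same scan output can be appended to /etc/mdadm.conf so the arrays are assembled by name at boot. A small sketch (afterwards, review the file and remove any duplicate ARRAY lines):

        mdadm --detail --scan >> /etc/mdadm.conf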

[root@raid-server home]# ll /dev/md*
        brw-rw---- 1 root disk 9, 0 2009-03-04 13:16 /dev/md0
        brw-rw---- 1 root disk 9, 1 2009-03-04 22:16 /dev/md1
        brw-rw---- 1 root disk 9, 2 2009-03-04 13:16 /dev/md2

[root@raid-server home]# cat /proc/mdstat
        Personalities : [raid1] [raid6] [raid5] [raid4]
        md2 : active raid1 sda3[0] sdb3[1]
              460687872 blocks [2/2] [UU]
        md1 : active raid1 sda2[0] sdb2[1]
              2096384 blocks [2/2] [UU]
        md0 : active raid1 sda1[0] sdb1[1]
              25599424 blocks [2/2] [UU]

[root@raid-server fwiki]# cat /boot/grub/grub.conf
        # grub.conf generated by anaconda
        #
        # Note that you do not have to rerun grub after making changes to this file
        # NOTICE:  You do not have a /boot partition.  This means that
        #          all kernel and initrd paths are relative to /, eg.
        #          root (hd0,0)
        #          kernel /boot/vmlinuz-version ro root=/dev/md0
        #          initrd /boot/initrd-version.img
        #boot=/dev/md0
         default=0
         timeout=1
         splashimage=(hd0,0)/boot/grub/splash.xpm.gz
         hiddenmenu
         password --md5 $1$FoYUHc5D$I/KG.HEBwtG3kSVjkUsgY0
         title Fedora (2.6.27.5-117.fc10.i686.PAE)
                 root (hd0,0)
                 kernel /boot/vmlinuz-2.6.27.5-117.fc10.i686.PAE ro root=UUID=49d8a50b-bcab-49ed-a365-dffa6fdf8dd6 rhgb quiet
                 initrd /boot/initrd-2.6.27.5-117.fc10.i686.PAE.img
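
With /boot living on the mirror, a common extra step is to write GRUB into sdb's MBR as well, so the machine can still boot if sda dies. A sketch using the legacy grub shell (temporarily mapping sdb as hd0 is the usual trick; the mapping here is an assumption, so check it against your own /boot/grub/device.map):

        grub
        grub> device (hd0) /dev/sdb
        grub> root (hd0,0)
        grub> setup (hd0)
        grub> quit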

RAID and LVM overview

RAID and LVM
 : Get the concepts from the KLDP document, then install following Managing RAID and LVM with Linux. (Its explanations are very good.)

1. Logical Volume Manager HOWTO - KLDP
        http://wiki.kldp.org/wiki.php/DocbookSgml/LVM-HOWTO?action=edit
2. Managing RAID and LVM with Linux
        http://www.gagme.com/greg/linux/raid-lvm.php
3. Configuring software RAID with mdadm
        http://blog.naver.com/parkjy76/30034935057
4. LVM (logical volume manager), /etc/fstab
        http://blog.naver.com/lovexit/70030137960

RAID level references
1. RAID levels
        http://www.raid.co.kr/sub/study/level.asp
        http://blog.naver.com/morningz/150037655081
        http://www.redhat.com/docs/manuals/enterprise/RHEL-4-Manual/sysadmin-guide/s1-raid-levels.html

Installing Fedora on RAID1 (including the boot area)
        http://www.hanb.co.kr/board/view.php?ma_id=2087&mcf_id=28&pg=1&sfl=&sw=
      

RAID server installation overview - Fedora 10

Server (Linux software RAID) installation overview - Fedora 10

[Related posts...]

 1. Two 500G HDDs (SATA2) were already in the machine, so I configured software RAID (level 1).
        - With four disks it would have been level 5...

 2. The disks were split into three partitions.
   1) The three are /, /home, and swap.
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/md0               25G  4.9G   19G  22% /
        tmpfs                 4.0G   76K  4.0G   1% /dev/shm
        /dev/mapper/VG0--raid-lv0
                              433G  2.0G  409G   1% /home

   2) sda and sdb were tied together with RAID.
    [root@raid-server home]# cat /proc/mdstat
        Personalities : [raid1] [raid6] [raid5] [raid4]
        md2 : active raid1 sda3[0] sdb3[1]
              460687872 blocks [2/2] [UU]
        md1 : active raid1 sda2[0] sdb2[1]
              2096384 blocks [2/2] [UU]
        md0 : active raid1 sda1[0] sdb1[1]
              25599424 blocks [2/2] [UU]

 3. Layout
  1) Fedora was installed on /.
        - I thought about giving /boot its own partition, but decided it wasn't needed and kept everything in one.
  2) Data lives in /home.
        - Everything except / and swap was allocated to /home.
  3) The swap partition got 2G.
        - The old advice is to make it twice the RAM... this machine has 8G of RAM...
  4) Each partition pair was mirrored with RAID (a creation sketch follows this list).
      [root@raid-server home]# fdisk -l
        Disk /dev/sda: 500.1 GB, 500107862016 bytes
        255 heads, 63 sectors/track, 60801 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Disk identifier: 0x000d49c3

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *           1        3187    25599546   fd  Linux raid autodetect
        /dev/sda2            3188        3448     2096482+  fd  Linux raid autodetect
        /dev/sda3            3449       60801   460687972+  fd  Linux raid autodetect

        Disk /dev/sdb: 500.1 GB, 500107862016 bytes
        255 heads, 63 sectors/track, 60801 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Disk identifier: 0x000691f2

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   *           1        3187    25599546   fd  Linux raid autodetect
        /dev/sdb2            3188        3448     2096482+  fd  Linux raid autodetect
        /dev/sdb3            3449       60801   460687972+  fd  Linux raid autodetect

  5) /home was set up with LVM (see the sketch after this list).
        - Another 500G disk can be attached later; if one is added, it can simply be joined to this LVM.
        - Because the disks are large, the physical extent size has to be increased when the volume group is created.
                # vgcreate -s 16M <volume group name>
                From Managing RAID and LVM with Linux: The default value for the physical extent size can be too low for a large RAID array. In those cases you'll need to specify the -s option with a larger than default physical extent size. The default is only 4MB as of the version in Fedora Core 5. The maximum number of physical extents is approximately 65k so take your maximum volume size and divide it by 65k then round it to the next nice round number. For example, to successfully create a 550G RAID let's figure that's approximately 550,000 megabytes and divide by 65,000 which gives you roughly 8.46. Round it up to the next nice round number and use 16M (for 16 megabytes) as the physical extent size and you'll be fine:

6) Server HW info
  - HP xw4400
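
A minimal sketch of how this whole RAID+LVM stack could be built by hand (the Fedora installer actually did the work here, so this is only a reconstruction; device names, the 16M extent size, and the extent count come from the outputs above, while the mkfs.ext3 and mount steps are assumptions):

        # mirror each partition pair into an md device
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
        mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
        mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3

        # LVM on the big mirror, with 16M physical extents
        pvcreate /dev/md2
        vgcreate -s 16M VG0-raid /dev/md2
        lvcreate -l 28118 -n lv0 VG0-raid    # use all 28118 extents

        # filesystem and mount (ext3 is an assumption)
        mkfs.ext3 /dev/VG0-raid/lv0
        mount /dev/VG0-raid/lv0 /home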


Monday, March 30, 2009

tftp server and client - Fedora 5

I needed to run a UDP traffic test, and installing a tftp server on a Linux box turned out to be the easy way to do it.

So I installed tftp-server and the client on Fedora Core 5 as below....
1. Install the tftp server and client (on Fedora 5)
 a. rpm -ivh tftp-server-0.41-1.2.1.i386.rpm
 b. rpm -ivh tftp-0.41-1.2.1.i386.rpm

2. Server configuration
 a. Configure /etc/xinetd.d/tftp as below.
  - put didn't work until I added the -c option to server_args (-c lets the server create files that don't already exist; -s chroots it into /tftpboot).

service tftp
{
  socket_type = dgram
  protocol = udp
  wait = yes
  user = root
  server = /usr/sbin/in.tftpd
  server_args = -c -s /tftpboot
  disable = no
  per_source = 11
  cps = 100 2
  flags = IPv4
}
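
The directory passed to -s has to exist before transfers will work. A quick sketch (the wide-open permissions are only for a throwaway test box):

        mkdir -p /tftpboot
        chmod 777 /tftpboot    # writable so put can create files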

3. Using the tftp server and client
 a. Starting the server
  : service xinetd restart
  : confirm it with ps -ef | grep tftp

 b. Exchanging files from a Linux client
  - run tftp -v 192.168.x.x (the tftp server)
  - send and receive files with the put and get commands (a session sketch follows this list)

 c. tftp server and client programs for Windows
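
A hypothetical client session for step b above (the IP address and file name are made-up examples):

        $ tftp -v 192.168.0.10
        tftp> put test.txt        (upload; needs -c on the server side)
        tftp> get test.txt        (download)
        tftp> quit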



Wednesday, March 25, 2009

Fedora 10 installation

1. Installation
  ㄱ. Installed from the DVD.
  ㄴ. Trying a network installation from CD1 produced an error.

   a. https://bugzilla.redhat.com/show_bug.cgi?id=517171

   b. Possible workarounds include:

   * Boot and install from CD
   * Boot and install from DVD
   * Boot from PXE, install from network  


2. Things installed after the install

  ㄱ. yum -y update
  ㄴ. Installed with yum:
    a. vsftpd, wireshark, wireshark-gnome, bind, bind-utils, dhcp, vlc
        : for the VOD server setup using vlc, see here
  ㄷ. telnet server
     a. Install the following two packages.
      - rpm -ivh xinetd-2.3.14-21.fc10.i386.rpm
      - rpm -ivh telnet-server-0.17-42.fc9.i386.rpm
     b. Restarting xinetd brings up the telnet server.
      - service xinetd restart   # restart xinetd to start telnet-server
     c. Run the command below and the telnet server will come up after reboots as well (see the quick checks below).
      - chkconfig telnet on      # keep the telnet server enabled across reboots
     d. Installation files (yum could not install these)
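
A couple of quick checks that the telnet service is really enabled (port 23 is telnet's standard port; xinetd holds the listening socket):

        netstat -tln | grep :23     # should show a LISTEN entry
        chkconfig --list telnet     # should report telnet: on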