Red Hat Linux DNS Configuration Guide

Oracle 11g RAC introduced the SCAN IP, and one way to resolve a SCAN IP is through DNS. This article walks through the DNS configuration on Red Hat Linux 5.4 in detail.

Before configuring DNS, set the hostname:

[root@beiku1 etc]# hostname beiku1.sbyy.com
[root@beiku1 etc]# vi /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1               beiku1.sbyy.com localhost
::1             localhost6.localdomain6 localhost6
10.138.130.161 beiku1

[root@beiku1 etc]# vi /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=beiku1.sbyy.com
GATEWAY=10.138.130.254

1. Install the packages
The bind-related packages for the DNS service on Red Hat Linux 5.4 are:

bind-9.3.6-4.P1.el5 
bind-libbind-devel-9.3.6-4.P1.el5 
kdebindings-devel-3.5.4-6.el5 
kdebindings-3.5.4-6.el5 
bind-devel-9.3.6-4.P1.el5 
bind-utils-9.3.6-4.P1.el5 
bind-chroot-9.3.6-4.P1.el5 
ypbind-1.19-12.el5 
system-config-bind-4.0.3-4.el5 
bind-libs-9.3.6-4.P1.el5 
bind-sdb-9.3.6-4.P1.el5 

Use rpm -qa | grep bind to check whether these packages are already installed:

[root@beiku1 soft]# rpm -qa | grep bind
bind-chroot-9.3.6-4.P1.el5
kdebindings-3.5.4-6.el5
ypbind-1.19-12.el5
bind-libs-9.3.6-4.P1.el5
bind-9.3.6-4.P1.el5
system-config-bind-4.0.3-4.el5
bind-utils-9.3.6-4.P1.el5

Install any missing packages as follows:

[root@beiku1 soft]# rpm -ivh bind-9.3.6-4.P1.el5.i386.rpm
warning: bind-9.3.6-4.P1.el5.i386.rpm: Header V3 DSA signature: NOKEY, key ID 37017186
Preparing...                ########################################### [100%]
        package bind-9.3.6-4.P1.el5.i386 is already installed

[root@beiku1 soft]# rpm -ivh kdebindings-devel-3.5.4-6.el5.i386.rpm
warning: kdebindings-devel-3.5.4-6.el5.i386.rpm: Header V3 DSA signature: NOKEY, key ID 37017186
Preparing...                ########################################### [100%]
   1:kdebindings-devel      ########################################### [100%]
[root@beiku1 soft]# rpm -ivh bind-sdb-9.3.6-4.P1.el5.i386.rpm
warning: bind-sdb-9.3.6-4.P1.el5.i386.rpm: Header V3 DSA signature: NOKEY, key ID 37017186
Preparing...                ########################################### [100%]
   1:bind-sdb               ########################################### [100%]
[root@beiku1 soft]# rpm -ivh bind-libbind-devel-9.3.6-4.P1.el5.i386.rpm
warning: bind-libbind-devel-9.3.6-4.P1.el5.i386.rpm: Header V3 DSA signature: NOKEY, key ID 37017186
Preparing...                ########################################### [100%]
   1:bind-libbind-devel     ########################################### [100%]
[root@beiku1 soft]# rpm -ivh bind-devel-9.3.6-4.P1.el5.i386.rpm
warning: bind-devel-9.3.6-4.P1.el5.i386.rpm: Header V3 DSA signature: NOKEY, key ID 37017186
Preparing...                ########################################### [100%]
   1:bind-devel             ########################################### [100%]

You must also install the caching-nameserver-9.3.6-4.P1.el5 package manually; without it the named service cannot start and fails with an error like this:

[root@beiku1 ~]# service named start
Locating /var/named/chroot//etc/named.conf failed:
[FAILED]

[root@beiku1 soft]# rpm -ivh caching-nameserver-9.3.6-4.P1.el5.i386.rpm
warning: caching-nameserver-9.3.6-4.P1.el5.i386.rpm: Header V3 DSA signature: NOKEY, key ID 37017186
Preparing...                ########################################### [100%]
   1:caching-nameserver     ########################################### [100%]

[root@beiku1 soft]# service named start
Starting named: [  OK  ]

2. Copy the template files
Because the chroot environment is installed (bind-chroot), the main DNS configuration files live under /var/named/chroot/etc:

[root@beiku1 soft]# cd /var/named/chroot/
[root@beiku1 chroot]# ls
dev  etc  proc  var
[root@beiku1 chroot]# cd etc
[root@beiku1 etc]# ls
localtime  named.caching-nameserver.conf  named.rfc1912.zones  rndc.key
[root@beiku1 etc]#

The contents of named.caching-nameserver.conf are:

[root@beiku1 etc]# cat named.caching-nameserver.conf
//
// named.caching-nameserver.conf
//
// Provided by Red Hat caching-nameserver package to configure the
// ISC BIND named(8) DNS server as a caching only nameserver 
// (as a localhost DNS resolver only). 
//
// See /usr/share/doc/bind*/sample/ for example named configuration files.
//
// DO NOT EDIT THIS FILE - use system-config-bind or an editor
// to create named.conf - edits to this file will be lost on 
// caching-nameserver package upgrade.
//
options {
        listen-on port 53 { 127.0.0.1; };
        listen-on-v6 port 53 { ::1; };
        directory       "/var/named";
        dump-file       "/var/named/data/cache_dump.db";
        statistics-file "/var/named/data/named_stats.txt";
        memstatistics-file "/var/named/data/named_mem_stats.txt";

        // Those options should be used carefully because they disable port
        // randomization
        // query-source    port 53;
        // query-source-v6 port 53;

        allow-query     { localhost; };
        allow-query-cache { localhost; };
};
logging {
        channel default_debug {
                file "data/named.run";
                severity dynamic;
        };
};
view localhost_resolver {
        match-clients      { localhost; };
        match-destinations { localhost; };
        recursion yes;
        include "/etc/named.rfc1912.zones";
};

The header tells us not to edit this file directly but to create a named.conf and edit that instead; once named.conf exists, this file is no longer read. So copy named.caching-nameserver.conf to named.conf:

[root@beiku1 etc]# cp -p named.caching-nameserver.conf named.conf
[root@beiku1 etc]# ls
localtime  named.caching-nameserver.conf  named.conf  named.rfc1912.zones  rndc.key

named.conf has been created. Be sure to pass -p when copying so that ownership and permissions are preserved; otherwise named will fail to start with a permission-denied error.

3. Edit named.conf

[root@beiku1 etc]# vi named.conf
//
// named.caching-nameserver.conf
//
// Provided by Red Hat caching-nameserver package to configure the
// ISC BIND named(8) DNS server as a caching only nameserver
// (as a localhost DNS resolver only).
//
// See /usr/share/doc/bind*/sample/ for example named configuration files.
//
// DO NOT EDIT THIS FILE - use system-config-bind or an editor
// to create named.conf - edits to this file will be lost on
// caching-nameserver package upgrade.
//
options {
        listen-on port 53 { any; };
        listen-on-v6 port 53 { ::1; };
        directory       "/var/named";
        dump-file       "/var/named/data/cache_dump.db";
        statistics-file "/var/named/data/named_stats.txt";
        memstatistics-file "/var/named/data/named_mem_stats.txt";

        // Those options should be used carefully because they disable port
        // randomization
        // query-source    port 53;
        // query-source-v6 port 53;

        allow-query     { 10.138.130.0/24; };
        allow-query-cache { any; };
};
logging {
        channel default_debug {
                file "data/named.run";
                severity dynamic;
        };
};
view localhost_resolver {
        match-clients      { 10.138.130.0/24; };
        match-destinations { any; };
        recursion yes;
        include "/etc/named.rfc1912.zones";
};

What these directives mean:

options
        The global configuration block.
listen-on port 53 { any; };
        The DNS service listens on all interfaces.
listen-on-v6 port 53 { ::1; };
        IPv6 listens only on the loopback interface.
directory "/var/named";
        The directory holding the zone files, i.e. /var/named inside the chroot.
dump-file "/var/named/data/cache_dump.db";
        Where cache dumps are written.
statistics-file "/var/named/data/named_stats.txt";
        Query statistics.
memstatistics-file "/var/named/data/named_mem_stats.txt";
        Memory usage statistics.
allow-query { 10.138.130.0/24; };
        The clients allowed to query; here restricted to the local subnet.
allow-query-cache { any; };
        The clients allowed to query the cache; any means everyone.

The logging block sends the log to data/named.run, i.e. /var/named/chroot/var/named/data/named.run.

The view localhost_resolver block defines a view:
match-clients specifies which clients the view applies to;
match-destinations specifies which destination addresses it matches.

named.conf is now configured. The view ends with include "/etc/named.rfc1912.zones"; that file is configured next. You can also define additional views matching different groups of clients.
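For example, a second view serving a different group of clients could be sketched like this (the subnet 10.138.131.0/24 and the zone list /etc/named.branch.zones are hypothetical names used only for illustration):

```
view branch_resolver {
        match-clients      { 10.138.131.0/24; };   // hypothetical client subnet
        match-destinations { any; };
        recursion yes;
        include "/etc/named.branch.zones";         // hypothetical zone list for this view
};
```

Each view can then include a different zone list, so different clients see different answers for the same names.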

4. Define the zones

[root@beiku1 etc]# vi  named.rfc1912.zones
// named.rfc1912.zones:
//
// Provided by Red Hat caching-nameserver package 
//
// ISC BIND named zone configuration for zones recommended by
// RFC 1912 section 4.1 : localhost TLDs and address zones
// 
// See /usr/share/doc/bind*/sample/ for example named configuration files.
//
zone "." IN {
        type hint;
        file "named.ca";
};

zone "sbyy.com" IN {
        type master;
        file "sbyy.zone";
        allow-update { none; };
};

zone "130.138.10.in-addr.arpa" IN {
        type master;
        file "named.sbyy";
        allow-update { none; };
};

What these directives mean:

zone "." is the root zone; zone "sbyy.com" defines the forward-lookup zone; zone "130.138.10.in-addr.arpa" defines the reverse-lookup zone.
IN marks the Internet record class.
type hint: the root zone's type is hint.
type master: the zone is a primary (master) zone.
file "named.ca"; names the root zone file.
file "sbyy.zone"; names the forward-lookup zone file.
file "named.sbyy"; names the reverse-lookup zone file.
allow-update { none; }; controls whether clients may dynamically update the zone (disabled here, the default).
named.ca lists the 13 global root servers; sbyy.zone will hold the forward-lookup database and named.sbyy the reverse-lookup database.
The zone definitions are done; next, create the database files.

5. Create the database files from the templates

[root@beiku1 etc]# cd /var/named/chroot/var/named/
[root@beiku1 named]# ls
data  localdomain.zone  localhost.zone  named.broadcast  named.ca  named.ip6.local  named.local  named.zero  slaves

The chroot /var/named directory contains a number of template files. named.ca is the root zone database. Copy localhost.zone to sbyy.zone as the forward-lookup database, and named.local to named.sbyy as the reverse-lookup database. The file names must match those declared in /etc/named.rfc1912.zones.

[root@beiku1 named]# cp -p localhost.zone sbyy.zone
[root@beiku1 named]# cp -p named.local named.sbyy
[root@beiku1 named]# ls 
data              named.broadcast  named.local  sbyy.zone
localdomain.zone  named.ca         named.sbyy   slaves
localhost.zone    named.ip6.local  named.zero

The forward-lookup and reverse-lookup database files are now in place.

6. Populate the database files
1. The forward-lookup database

[root@beiku1 named]# vi sbyy.zone
$TTL    86400
@               IN SOA  beiku1.sbyy.com.       root.sbyy.com. (
                                        44              ; serial (d. adams)
                                        3H              ; refresh
                                        15M              ; retry
                                        1W              ; expiry
                                        1D )            ; minimum

@              IN NS           beiku1.sbyy.com.


beikuscan      IN A            10.138.130.167
beikuscan      IN A            10.138.130.168
beikuscan      IN A            10.138.130.169
beiku2         IN A            10.138.130.162
beiku1         IN A            10.138.130.161

Notes on each part of the forward-lookup database:

$TTL 86400
The default time-to-live is 86400 seconds (24 hours).

@ IN SOA beiku1.sbyy.com. root.sbyy.com. (
This is the SOA record; a zone may contain only one.
@ stands for the zone itself (sbyy.com).
IN is the Internet record class.
SOA is the Start of Authority record, naming the zone's primary DNS server.
root.sbyy.com. is the administrator's mailbox.

44 ; serial
3H ; refresh
15M ; retry
1W ; expiry
1D ) ; minimum

These values drive synchronization between the master and slave DNS servers:
44 is the serial number. Increment it every time the master's data changes; slaves compare it with their copy to decide whether to transfer the zone.
3H: the slave refreshes from the master every three hours.
15M: if a refresh fails, retry every 15 minutes.
1W: if no refresh succeeds within one week, the zone data expires and the slave stops answering for it.
1D: how long negative answers (records that do not exist) may be cached.
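The shorthand units in these timer values expand to seconds; a small sketch of the conversion (the helper name to_seconds is mine, not part of BIND):

```shell
# Expand BIND's time shorthand (M/H/D/W suffixes) to seconds.
to_seconds() {
    case "$1" in
        *M) echo $(( ${1%M} * 60 ));;
        *H) echo $(( ${1%H} * 3600 ));;
        *D) echo $(( ${1%D} * 86400 ));;
        *W) echo $(( ${1%W} * 604800 ));;
        *)  echo "$1";;                 # bare numbers are already seconds
    esac
}

to_seconds 3H    # refresh -> 10800
to_seconds 15M   # retry   -> 900
```

So the zone above refreshes every 10800 seconds and retries every 900 seconds after a failed refresh.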
@ IN NS beiku1.sbyy.com.

This is an NS record, naming beiku1.sbyy.com. as the zone's nameserver; every zone needs at least one NS record.

beiku1 IN A 10.138.130.161 maps beiku1 to 10.138.130.161, and beiku2 IN A 10.138.130.162 maps beiku2 to 10.138.130.162.

The three beikuscan records map one name to three addresses, 10.138.130.167, 10.138.130.168, and 10.138.130.169, which is exactly what the RAC SCAN name requires.

The forward-lookup database is complete; next, define the reverse-lookup database.

2. The reverse-lookup database

[root@beiku1 named]# vi named.sbyy
$TTL    86400
@       IN      SOA     beiku1.sbyy.com. root.sbyy.com.  (
                                      1997022702 ; Serial
                                      120      ; Refresh
                                      120      ; Retry
                                      3600000    ; Expire
                                      86400 )    ; Minimum
@        IN      NS     beiku1.sbyy.com.

167     IN      PTR     beikuscan.sbyy.com.
168     IN      PTR     beikuscan.sbyy.com.
169     IN      PTR     beikuscan.sbyy.com.
162     IN      PTR     beiku2.sbyy.com. 
161     IN      PTR     beiku1.sbyy.com.

The reverse-lookup database mirrors the forward one: swap the name and the address (using only the host part of the IP as the record name), and change A to PTR.
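The mapping from an address to its PTR owner name simply reverses the octets under in-addr.arpa; a quick sketch (the helper name ip_to_ptr is mine):

```shell
# Reverse an IPv4 address into its in-addr.arpa PTR owner name.
ip_to_ptr() {
    local IFS=.
    set -- $1                        # split the address on dots
    echo "$4.$3.$2.$1.in-addr.arpa"
}

ip_to_ptr 10.138.130.161             # -> 161.130.138.10.in-addr.arpa
```

Only the final octet (161) appears as the record name in the zone file, because the zone's origin already supplies 130.138.10.in-addr.arpa.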
The basic DNS configuration is now complete. To check whether DNS works, first restart the service:

[root@beiku1 etc]# service named restart
Stopping named: [  OK  ]
Starting named: [  OK  ]

The DNS service restarted successfully. Before querying, point the client at the DNS server in /etc/resolv.conf:

[root@beiku1 etc]# vi /etc/resolv.conf
search sbyy.com
nameserver       10.138.130.161



Parameters and their meaning:
nameserver gives a DNS server's IP address; there may be several nameserver lines, one address per line. Servers are queried in the order listed, and the next one is tried only when the previous one fails to respond.
domain declares the host's local domain name. Many programs, such as mail systems, use it, and it is applied when resolving unqualified host names. If it is absent, the domain is derived from the hostname by dropping everything up to and including the first dot.
search lists the domains, in order, to try when resolving an unqualified host name.
domain and search cannot coexist; if both appear, the one that appears last wins.
sortlist sorts the returned addresses according to the given network/netmask pairs, in any order.
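For instance, you can list the configured nameservers in query order with a one-line filter; this sketch feeds a resolv.conf-style here-document instead of the real /etc/resolv.conf:

```shell
# Print the nameserver addresses, in query order, from resolv.conf-style input.
awk '$1 == "nameserver" { print $2 }' <<'EOF'
search sbyy.com
nameserver 10.138.130.161
nameserver 10.138.130.162
EOF
```

Against a real system, replace the here-document with /etc/resolv.conf as the input file.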

Now test with the nslookup tool:

[root@beiku1 named]# nslookup beiku1.sbyy.com
Server:         10.138.130.161
Address:        10.138.130.161#53

Name:   beiku1.sbyy.com
Address: 10.138.130.161

[root@beiku1 named]# nslookup beiku2.sbyy.com
Server:         10.138.130.161
Address:        10.138.130.161#53

Name:   beiku2.sbyy.com
Address: 10.138.130.162

[root@beiku1 named]# nslookup beikuscan.sbyy.com
Server:         10.138.130.161
Address:        10.138.130.161#53

Name:   beikuscan.sbyy.com
Address: 10.138.130.169
Name:   beikuscan.sbyy.com
Address: 10.138.130.167
Name:   beikuscan.sbyy.com
Address: 10.138.130.168

[root@beiku1 named]# nslookup beiku1
Server:         10.138.130.161
Address:        10.138.130.161#53

Name:   beiku1.sbyy.com
Address: 10.138.130.161

[root@beiku1 named]# nslookup beiku2
Server:         10.138.130.161
Address:        10.138.130.161#53

Name:   beiku2.sbyy.com
Address: 10.138.130.162

[root@beiku1 named]# nslookup beikuscan
Server:         10.138.130.161
Address:        10.138.130.161#53

Name:   beikuscan.sbyy.com
Address: 10.138.130.168
Name:   beikuscan.sbyy.com
Address: 10.138.130.169
Name:   beikuscan.sbyy.com
Address: 10.138.130.167

[root@beiku1 named]# nslookup 10.138.130.161
Server:         10.138.130.161
Address:        10.138.130.161#53

161.130.138.10.in-addr.arpa     name = beiku1.sbyy.com.

[root@beiku1 named]# nslookup 10.138.130.162
Server:         10.138.130.161
Address:        10.138.130.161#53

162.130.138.10.in-addr.arpa     name = beiku2.sbyy.com.

[root@beiku1 named]# nslookup 10.138.130.167
Server:         10.138.130.161
Address:        10.138.130.161#53

167.130.138.10.in-addr.arpa     name = beikuscan.sbyy.com.

[root@beiku1 named]# nslookup 10.138.130.168
Server:         10.138.130.161
Address:        10.138.130.161#53

168.130.138.10.in-addr.arpa     name = beikuscan.sbyy.com.

[root@beiku1 named]# nslookup 10.138.130.169
Server:         10.138.130.161
Address:        10.138.130.161#53

169.130.138.10.in-addr.arpa     name = beikuscan.sbyy.com.

DNS resolution works in every case. So far only the master DNS server has been configured and verified; next, set up a slave DNS server.

Configuring the slave DNS server
The slave setup largely mirrors the master's.
1. Install the packages

 [root@beiku2 soft]# rpm -qa | grep bind
bind-chroot-9.3.6-4.P1.el5
kdebindings-3.5.4-6.el5
system-config-bind-4.0.3-4.el5
ypbind-1.19-12.el5
bind-libs-9.3.6-4.P1.el5
bind-9.3.6-4.P1.el5
bind-utils-9.3.6-4.P1.el5
[root@beiku2 soft]# rpm -ivh kdebindings-devel-3.5.4-6.el5.i386.rpm
warning: kdebindings-devel-3.5.4-6.el5.i386.rpm: Header V3 DSA signature: NOKEY, key ID 37017186
Preparing...                ########################################### [100%]
   1:kdebindings-devel      ########################################### [100%]
[root@beiku2 soft]# rpm -ivh caching-nameserver-9.3.6-4.P1.el5.i386.rpm
warning: caching-nameserver-9.3.6-4.P1.el5.i386.rpm: Header V3 DSA signature: NOKEY, key ID 37017186
Preparing...                ########################################### [100%]
   1:caching-nameserver     ########################################### [100%]
[root@beiku2 soft]# rpm -ivh bind-sdb-9.3.6-4.P1.el5.i386.rpm
warning: bind-sdb-9.3.6-4.P1.el5.i386.rpm: Header V3 DSA signature: NOKEY, key ID 37017186
Preparing...                ########################################### [100%]
   1:bind-sdb               ########################################### [100%]
[root@beiku2 soft]# rpm -ivh bind-libbind-devel-9.3.6-4.P1.el5.i386.rpm
warning: bind-libbind-devel-9.3.6-4.P1.el5.i386.rpm: Header V3 DSA signature: NOKEY, key ID 37017186
Preparing...                ########################################### [100%]
   1:bind-libbind-devel     ########################################### [100%]
[root@beiku2 soft]# rpm -ivh bind-devel-9.3.6-4.P1.el5.i386.rpm
warning: bind-devel-9.3.6-4.P1.el5.i386.rpm: Header V3 DSA signature: NOKEY, key ID 37017186
Preparing...                ########################################### [100%]
   1:bind-devel             ########################################### [100%]

2. Copy the template files

[root@beiku2 /]# cd /var/named/chroot/etc
[root@beiku2 etc]# ls -lrt
total 24
-rw-r--r-- 1 root root  3519 Feb 27  2006 localtime
-rw-r----- 1 root named  955 Jul 30  2009 named.rfc1912.zones
-rw-r----- 1 root named 1230 Jul 30  2009 named.caching-nameserver.conf
-rw-r----- 1 root named  113 Nov 15  2014 rndc.key

[root@beiku2 etc]# cp -p named.caching-nameserver.conf named.conf

3. Edit named.conf

[root@beiku2 etc]# vi named.conf
//
// named.caching-nameserver.conf
//
// Provided by Red Hat caching-nameserver package to configure the
// ISC BIND named(8) DNS server as a caching only nameserver
// (as a localhost DNS resolver only).
//
// See /usr/share/doc/bind*/sample/ for example named configuration files.
//
// DO NOT EDIT THIS FILE - use system-config-bind or an editor
// to create named.conf - edits to this file will be lost on
// caching-nameserver package upgrade.
//
options {
        listen-on port 53 { any; };
        listen-on-v6 port 53 { ::1; };
        directory       "/var/named";
        dump-file       "/var/named/data/cache_dump.db";
        statistics-file "/var/named/data/named_stats.txt";
        memstatistics-file "/var/named/data/named_mem_stats.txt";

        // Those options should be used carefully because they disable port
        // randomization
        // query-source    port 53;
        // query-source-v6 port 53;

        allow-query     { 10.138.130.0/24; };
        allow-query-cache { any; };
};
logging {
        channel default_debug {
                file "data/named.run";
                severity dynamic;
        };
};
view localhost_resolver {
        match-clients      { 10.138.130.0/24; };
        match-destinations { any; };
        recursion yes;
        include "/etc/named.rfc1912.zones";
};

This is identical to the master's configuration.

4. Define the zones

[root@beiku2 etc]# vi named.rfc1912.zones
// named.rfc1912.zones:
//
// Provided by Red Hat caching-nameserver package
//
// ISC BIND named zone configuration for zones recommended by
// RFC 1912 section 4.1 : localhost TLDs and address zones
//
// See /usr/share/doc/bind*/sample/ for example named configuration files.
//

zone "sbyy.com" IN {
        type slave;
        masters {10.138.130.161;};
        file "slaves/sbyy.com";
};

zone "0.138.10.in-addr.arpa" IN {
        type slave;
        masters {10.138.130.161;};
        file "slaves/named.sbyy";
};

The slave's zone definitions differ from the master's in a few ways:
type is slave instead of master;
masters { 10.138.130.161; }; must name the master DNS server's IP address;
the zone files, file "slaves/sbyy.com"; and file "slaves/named.sbyy";, are placed under the slaves directory.
Why the slaves directory? It is owned by the named user and group, and named is the only account allowed to write there while the service is running, so the transferred zone files must live there:

[root@beiku2 etc]# cd /var/named/chroot/var/named/
[root@beiku2 named]# ls -lrt
total 44
drwxrwx--- 2 named named 4096 Jul 27  2004 slaves
drwxrwx--- 2 named named 4096 Aug 26  2004 data
-rw-r----- 1 root  named  427 Jul 30  2009 named.zero
-rw-r----- 1 root  named  426 Jul 30  2009 named.local
-rw-r----- 1 root  named  424 Jul 30  2009 named.ip6.local
-rw-r----- 1 root  named 1892 Jul 30  2009 named.ca
-rw-r----- 1 root  named  427 Jul 30  2009 named.broadcast
-rw-r----- 1 root  named  195 Jul 30  2009 localhost.zone
-rw-r----- 1 root  named  198 Jul 30  2009 localdomain.zone
[root@beiku2 named]# cd slaves
[root@beiku2 slaves]# ls -lrt
total 0

The slaves directory is owned by the named user and group, and it is currently empty. Now restart the DNS service:

[root@beiku2 slaves]# service named restart
Stopping named: [  OK  ]
Starting named: [  OK  ]

The service started successfully. Check the log to see what happened during startup:

[root@beiku2 slaves]# tail /var/log/messages
Aug 25 23:41:49 beiku2 named[30421]: the working directory is not writable
Aug 25 23:41:49 beiku2 named[30421]: running
Aug 25 23:41:49 beiku2 named[30421]: zone 0.138.10.in-addr.arpa/IN/localhost_resolver: Transfer started.
Aug 25 23:41:49 beiku2 named[30421]: transfer of '0.138.10.in-addr.arpa/IN' from 10.138.130.161#53: connected using 10.138.130.162#44647
Aug 25 23:41:49 beiku2 named[30421]: zone 0.138.10.in-addr.arpa/IN/localhost_resolver: transferred serial 1997022700
Aug 25 23:41:49 beiku2 named[30421]: transfer of '0.138.10.in-addr.arpa/IN' from 10.138.130.161#53: end of transfer
Aug 25 23:41:49 beiku2 named[30421]: zone sbyy.com/IN/localhost_resolver: Transfer started.
Aug 25 23:41:49 beiku2 named[30421]: transfer of 'sbyy.com/IN' from 10.138.130.161#53: connected using 10.138.130.162#56490
Aug 25 23:41:49 beiku2 named[30421]: zone sbyy.com/IN/localhost_resolver: transferred serial 42
Aug 25 23:41:49 beiku2 named[30421]: transfer of 'sbyy.com/IN' from 10.138.130.161#53: end of transfer

The log shows the master-to-slave zone transfers starting and completing, including the serial numbers transferred. Now look in the slaves directory again:

[root@beiku2 slaves]# ls -lrt
total 8
-rw-r--r-- 1 named named 414 Aug 25 23:41 sbyy.com
-rw-r--r-- 1 named named 451 Aug 25 23:41 named.sbyy

Where the slaves directory was empty a moment ago, it now holds two files, sbyy.com and named.sbyy, the ones named in the zone definitions above. The file names themselves are arbitrary, but their contents match the master's zones. Inspect them:

[root@beiku2 slaves]# cat sbyy.com
$ORIGIN .
$TTL 86400      ; 1 day
sbyy.com                IN SOA  sbyy.com. root.sbyy.com. (
                                42         ; serial
                                10800      ; refresh (3 hours)
                                900        ; retry (15 minutes)
                                604800     ; expire (1 week)
                                86400      ; minimum (1 day)
                                )
                        NS      sbyy.com.
                        A       127.0.0.1
                        AAAA    ::1
$ORIGIN sbyy.com.
beiku1                  A       10.138.130.161
beikuscan1              A       10.138.130.167
beikuscan2              A       10.138.130.168
beikuscan3              A       10.138.130.169
beiku2                  A       10.138.130.162

[root@beiku2 slaves]# cat named.sbyy
$ORIGIN .
$TTL 86400      ; 1 day
0.138.10.in-addr.arpa   IN SOA  localhost. root.localhost. (
                                1997022700 ; serial
                                28800      ; refresh (8 hours)
                                14400      ; retry (4 hours)
                                3600000    ; expire (5 weeks 6 days 16 hours)
                                86400      ; minimum (1 day)
                                )
                        NS      localhost.
$ORIGIN 0.138.10.in-addr.arpa.
1                       PTR     localhost.
161                     PTR     beiku1.sbyy.com
167                     PTR     beikuscan1.sbyy.com
168                     PTR     beikuscan2.sbyy.com
169                     PTR     beikuscan3.sbyy.com
162                     PTR     beiku2.sbyy.com

Their contents match the master's zones and have been neatly reformatted; BIND generates these files automatically. Now test that both the master and the slave resolve correctly.

[root@beiku2 slaves]# vi /etc/resolv.conf
search sbyy.com
nameserver 10.138.130.161
nameserver 10.138.130.162

With both the master and the slave listed, test with nslookup:

[root@beiku2 slaves]# nslookup beiku1
Server:         10.138.130.161
Address:        10.138.130.161#53

Name:   beiku1.sbyy.com
Address: 10.138.130.161

 [root@beiku2 slaves]# nslookup beiku2
Server:         10.138.130.161
Address:        10.138.130.161#53

Name:   beiku2.sbyy.com
Address: 10.138.130.162

Resolution works, still answered by the master at 10.138.130.161. Next, stop the master and see whether the slave at 10.138.130.162 keeps working:

[root@beiku1 named]# service named stop
Stopping named: [  OK  ]

Test with nslookup:

[root@beiku2 slaves]# nslookup beiku1
Server:         10.138.130.162
Address:        10.138.130.162#53

Name:   beiku1.sbyy.com
Address: 10.138.130.161

Resolution still succeeds, but now it is served by the slave at 10.138.130.162 rather than the master. When the master goes down, the slave carries on working. This also enables simple load balancing: point half of the clients at 10.138.130.161 as primary DNS with 10.138.130.162 as secondary, and the other half at 10.138.130.162 as primary with 10.138.130.161 as secondary. Both servers then answer queries in normal operation, and if either one goes down the other continues to serve all clients. At this point both the master and the slave DNS servers are configured and working.
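As a sketch, the two client groups would differ only in the order of their /etc/resolv.conf nameserver lines:

```
# group A clients
search sbyy.com
nameserver 10.138.130.161
nameserver 10.138.130.162

# group B clients
search sbyy.com
nameserver 10.138.130.162
nameserver 10.138.130.161
```

Each group queries its first-listed server and falls back to the other only when the first does not respond.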

Next, an experiment: add a record on the master and check that the slave picks it up. (Adding records on the slave would be pointless, as the master never pulls changes from a slave.)

[root@beiku1 named]# vi sbyy.zone
$TTL    86400
@               IN SOA  @       root (
                                        43              ; serial (d. adams)
                                        2M              ; refresh
                                        2M              ; retry
                                        1W              ; expiry
                                        1D )            ; minimum

                IN NS           @
                IN A            127.0.0.1
                IN AAAA         ::1


beiku1          IN A            10.138.130.161
beikuscan      IN A            10.138.130.167
beikuscan      IN A            10.138.130.168
beikuscan      IN A            10.138.130.169
beiku2          IN A            10.138.130.162
www             IN A            10.138.130.170

This adds the record www IN A 10.138.130.170. After any change on the master you must increment the zone's serial number; otherwise the slave will never transfer the update. The serial has been incremented here. Also, the default refresh interval of 3H makes the effect slow to observe, so refresh and retry are both set to 2M: the slave now syncs every two minutes and retries two minutes after any failure. Now update the reverse zone as well:
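As an aside, many administrators use a date-based serial of the form YYYYMMDDnn rather than a plain counter, so every day's edits produce a strictly larger value; a small sketch (the helper name make_serial is mine):

```shell
# Generate a date-based zone serial: YYYYMMDD plus a two-digit revision.
make_serial() {
    printf '%s%02d\n' "$(date +%Y%m%d)" "$1"
}

make_serial 1    # e.g. 2015082501 for the first edit of that day
```

Any scheme works as long as the number only ever increases; slaves compare serials numerically to decide whether to transfer.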

[root@beiku1 named]# vi named.sbyy
$TTL    86400
@       IN      SOA     beiku1.sbyy.com. root.sbyy.com.  (
                                      1997022703 ; Serial
                                      120      ; Refresh
                                      120      ; Retry
                                      3600000    ; Expire
                                      86400 )    ; Minimum
@        IN      NS     beiku1.sbyy.com.

167     IN      PTR     beikuscan.sbyy.com.
168     IN      PTR     beikuscan.sbyy.com.
169     IN      PTR     beikuscan.sbyy.com.
162     IN      PTR     beiku2.sbyy.com.
161     IN      PTR     beiku1.sbyy.com.
170     IN      PTR     www.sbyy.com.

The reverse zone is updated as well. Now restart the DNS service:

[root@beiku1 named]# service named restart
Stopping named: [  OK  ]
Starting named: [  OK  ]

The restart succeeded. Wait a few minutes, then check the slave's forward-lookup database file:

[root@beiku2 slaves]# cat sbyy.com
$ORIGIN .
$TTL 86400      ; 1 day
sbyy.com                IN SOA  beiku1.sbyy.com. root.sbyy.com. (
                                45         ; serial
                                120        ; refresh (2 minutes)
                                120        ; retry (2 minutes)
                                604800     ; expire (1 week)
                                86400      ; minimum (1 day)
                                )
                        NS      beiku1.sbyy.com.
$ORIGIN sbyy.com.
beiku1                  A       10.138.130.161
beiku2                  A       10.138.130.162
beikuscan               A       10.138.130.167
                        A       10.138.130.168
                        A       10.138.130.169
www                     A       10.138.130.170

The record added on the master has been transferred to the slave, along with the new serial, refresh, and retry values. Next, check the slave's reverse-lookup database file:

[root@beiku2 slaves]# cat named.sbyy
$ORIGIN .
$TTL 86400      ; 1 day
0.138.10.in-addr.arpa   IN SOA  localhost. root.localhost. (
                                1997022702 ; serial
                                28800      ; refresh (8 hours)
                                14400      ; retry (4 hours)
                                3600000    ; expire (5 weeks 6 days 16 hours)
                                86400      ; minimum (1 day)
                                )
                        NS      localhost.
$ORIGIN 0.138.10.in-addr.arpa.
1                       PTR     localhost.
161                     PTR     beiku1.sbyy.com
167                     PTR     beikuscan1.sbyy.com
168                     PTR     beikuscan2.sbyy.com
169                     PTR     beikuscan3.sbyy.com
162                     PTR     beiku2.sbyy.com
170                     PTR     www.sbyy.com

The slave has synchronized the reverse zone too. The DNS configuration is complete.

A boot failure caused by HP server console output settings

Today our HP disaster-recovery database server failed at startup with this message: "Warning: multiple console output devices are configured. If this message remains on the screen for more than a few minutes, then this is not the device in use by HP-UX as the console output device. If you would like this device to be the one used by HP-UX as the console output device, reboot and use the EFI boot manager or the EFI 'conconfig' command to select this device and deconfigure the others." In other words, more than one console output device is configured; if the warning stays on screen, the current device is not the one HP-UX is using for console output, and to make it the console you must reboot and select it (deconfiguring the others) through the EFI Boot Manager or the EFI conconfig command.

This server had always booted normally and nobody had changed anything recently, which was puzzling. Following the message, I rebooted and reconfigured the console.

First, enter the EFI Boot Manager during startup.

Select Boot Configuration.

Select Console Configuration.

The console configuration screen showed the primary console output device set to Serial Acpi, with VGA as the secondary device. Since this machine is managed through a KVM, the primary console output should be VGA. (A colleague said it used to boot anyway; why is left aside.) Set VGA as the primary console output device.

Save the change; the server reboots automatically.
With the console output set to VGA, the server boots normally again and the disaster-recovery drill can go ahead.

Setting up an NFS mount between two Red Hat Linux hosts

To test running Oracle RMAN's duplicate command on a different host with a PFILE parameter file, the auxiliary instance's pfile must be accessible to the host running RMAN, so NFS has to be configured. Only the most basic setup is described here.
1. Install NFS
Red Hat normally installs the NFS service by default. If it was deselected during installation, mount the ISO or download the packages and install them manually; that is not covered here.

2. Configure /etc/exports on the host sharing the directory
The directories NFS may export, and their access rules, are defined in /etc/exports. For example, to share the parameter-file directory /u01/app/oracle/product/10.2.0/db/dbs, append the line /u01/app/oracle/product/10.2.0/db/dbs *(rw,sync):

[root@jingyong1 /]# vi /etc/exports


/u01/app/oracle/product/10.2.0/db/dbs *(rw,sync)

Here:
/u01/app/oracle/product/10.2.0/db/dbs is the directory being shared;
* allows access from any network segment (for testing only; in production restrict access to specific IPs);
rw grants read-write access to the share;
sync writes data to memory and disk synchronously.
Other common options:
ro               read-only access
rw               read-write access
sync             commit all data to storage when the request demands it
async            let the server reply before data is written
secure           require requests from TCP/IP ports below 1024 (default)
insecure         accept requests from ports above 1024
wdelay           batch writes when several users write to the share (default)
no_wdelay        write immediately instead of batching; pointless with async
hide             do not export subdirectories of the share
no_hide          export subdirectories of the share
subtree_check    when exporting a subdirectory such as /usr/bin, force NFS to verify parent-directory permissions (default)
no_subtree_check the opposite; skip the parent-directory check
all_squash       map every UID and GID to the anonymous user; suited to public shares
no_all_squash    preserve the original UID and GID (default)
root_squash      map requests from root to the anonymous user (default)
no_root_squash   give root full root-level access
anonuid=xxx      the anonymous user's UID in the server's /etc/passwd
anongid=xxx      the anonymous user's GID
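For the RMAN test above, a slightly tighter export limited to the client's subnet might look like this (192.168.56.0/24 matches the client address used later in this article; adjust to your own network):

```
/u01/app/oracle/product/10.2.0/db/dbs 192.168.56.0/24(rw,sync,root_squash)
```

root_squash (the default) keeps a client's root user from acting as root on the exported files.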

3. Start the NFS service
portmap must be running before nfs is started; otherwise nfs fails like this:

[root@jingyong1 /]# service nfs start
Starting NFS services:                                     [  OK  ]
Starting NFS quotas: Cannot register service: RPC: Unable to receive; errno = Connection refused
rpc.rquotad: unable to register (RQUOTAPROG, RQUOTAVERS, udp).
                                                           [FAILED]
Starting NFS daemon:

The correct order is portmap first, then nfs; either the service command or the init script works:

service portmap start
service nfs start

/etc/init.d/portmap start
/etc/init.d/nfs start


[root@jingyong1 /]# service portmap start
Starting portmap: [  OK  ]

[root@jingyong1 /]# service nfs start
Starting NFS services:  [  OK  ]
Starting NFS quotas: [  OK  ]
Starting NFS daemon: [  OK  ]
Starting NFS mountd: [  OK  ]

4. Mount the shared directory on the client host
1. Start portmap on the client as well before mounting:

[root@oracle11g /]# service portmap start
Starting portmap: [  OK  ]

2. On the client, run showmount -e with the server's IP to list its exports:

[root@oracle11g /]# showmount -e 192.168.56.11
Export list for 192.168.56.11:
/u01/app/oracle/product/10.2.0/db/dbs *

3、在客户端建立jingyong1文件夹,并使用mount挂载命令:

[root@oracle11g net]# mkdir /jingyong1
[root@oracle11g /]# chown -R oracle:oinstall jingyong1
[root@oracle11g /]# chmod -R 777 jingyong1

[root@oracle11g /]# mount -t nfs 192.168.56.11:/u01/app/oracle/product/10.2.0/db/dbs /jingyong1
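如果希望客户端重启后自动挂载该目录,可以在/etc/fstab中追加一行(示意,字段取值按实际环境调整):

```shell
# /etc/fstab 追加示例:开机自动挂载该 NFS 目录
192.168.56.11:/u01/app/oracle/product/10.2.0/db/dbs /jingyong1 nfs defaults 0 0
```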

4、若无报错,则可使用df -h 查看到挂载情况:

[root@oracle11g /]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1              23G   12G  9.6G  55% /
/dev/sdb1             9.9G  5.6G  3.9G  60% /u02
tmpfs                 252M     0  252M   0% /dev/shm
192.168.56.11:/u01/app/oracle/product/10.2.0/db/dbs
                       17G   14G  2.3G  86% /jingyong1

五、在客户端卸载已挂载的目录

[root@oracle11g /]# umount /jingyong1

hp rx6600两台oracle双机互备服务器其中一台经常自动关机的故障诊断

hp rx6600两台oracle数据库双机互备服务器其中一台经常自动关机,刚好在做巡检时遇到了就顺便检查一下原因.检查经常出故障的一台小机日志信息如下:

rx6600-1:[/]#cat /var/adm/syslog/syslog.log
Nov  6 10:40:35 rx6600-1 syslogd: restart
Nov  6 10:40:35 rx6600-1 vmunix: Found adjacent data tr.  Growing size.  0x32a6000 -> 0x72a6000.
Nov  6 10:40:35 rx6600-1 vmunix: Pinned PDK malloc pool: base: 0xe000000100d5a000  size=117400K
Nov  6 10:40:35 rx6600-1 vmunix: Loaded ACPI revision 2.0 tables.
Nov  6 10:40:35 rx6600-1 vmunix: MMIO on this platform supports Write Coalescing.
Nov  6 10:40:35 rx6600-1 vmunix: 
Nov  6 10:40:35 rx6600-1 vmunix: MFS is defined: base= 0xe000000100d5a000  size= 5084 KB
Nov  6 10:40:35 rx6600-1 vmunix: Unpinned PDK malloc pool: base: 0xe000000108000000  size=393216K
Nov  6 10:40:35 rx6600-1 vmunix: NOTICE: cachefs_link(): File system was registered at index 5.
Nov  6 10:40:35 rx6600-1 vmunix: emcp:GPX:Info: GPX emcpgpx_install() success.
Nov  6 10:40:35 rx6600-1 vmunix: 
Nov  6 10:40:35 rx6600-1  above message repeats 2 times
Nov  6 10:40:35 rx6600-1 vmunix: emcp:GPX:Info: DM emcpgpx_dm_install() success.
Nov  6 10:40:35 rx6600-1 vmunix: emcp:GPX:Info: VLUMD emcpgpx_vlumd_install() success.
Nov  6 10:40:35 rx6600-1 vmunix: emcp:GPX:Info: XCRYPT emcpgpx_xcrypt_install() success.
Nov  6 10:40:35 rx6600-1 vmunix: NOTICE: nfs3_link(): File system was registered at index 8.
Nov  6 10:40:35 rx6600-1 vmunix: NOTICE: mod_fs_reg: Cannot retrieve configured loading phase from KRS for module: cifs. Setting to load at INIT
Nov  6 10:40:35 rx6600-1 vmunix: 
Nov  6 10:40:35 rx6600-1 vmunix: 0 sba
Nov  6 10:40:35 rx6600-1 vmunix: 0/0 lba
Nov  6 10:40:35 rx6600-1 vmunix: 0/0/1/0 rmp3f01
Nov  6 10:40:35 rx6600-1 vmunix: 0/0/1/1 rmp3f01
Nov  6 10:40:35 rx6600-1 vmunix: 0/0/1/2 asio0
Nov  6 10:40:35 rx6600-1 vmunix: 0/0/2/0 UsbOhci
Nov  6 10:40:35 rx6600-1 vmunix: NOTICE: USB device attached.  Identification String: 
Nov  6 10:40:35 rx6600-1 vmunix: Devices/Device/USB/Standard/hp/Unknown/0_1
Nov  6 10:40:35 rx6600-1 vmunix:  <2.1.3.10.1008.4390.1>
Nov  6 10:40:35 rx6600-1 vmunix: 0/0/2/0.0 UsbMiniBus
Nov  6 10:40:35 rx6600-1 vmunix: Devices/Keyboard/USB/Boot/hp/Unknown/0_1
Nov  6 10:40:35 rx6600-1 vmunix:  <2.305.3.100.1008.4390.1>
Nov  6 10:40:35 rx6600-1 vmunix: 0/0/2/0.0.0 UsbBootKeyboard
Nov  6 10:40:35 rx6600-1 vmunix: Devices/Mouse/USB/Standard/hp/Unknown/0_1
Nov  6 10:40:35 rx6600-1 vmunix:  <2.307.3.10.1008.4390.1>
Nov  6 10:40:35 rx6600-1 vmunix: 0/0/2/1 UsbOhci
Nov  6 10:40:35 rx6600-1 vmunix: Devices/Device/USB/Standard/hp/Multibay/0_a1
Nov  6 10:40:35 rx6600-1 vmunix:  <2.1.3.10.1008.294.161>
Nov  6 10:40:35 rx6600-1 vmunix: 0/0/2/1.0 UsbMiniBus
Nov  6 10:40:35 rx6600-1 vmunix: Devices/MassStorage-SCSI/USB/BulkOnly/hp/Multibay/0_a1
Nov  6 10:40:35 rx6600-1 vmunix:  <2.310.3.150.1008.294.161>
Nov  6 10:40:35 rx6600-1 vmunix: 0/0/2/1.0.0 UsbBulkOnlyMS
Nov  6 10:40:35 rx6600-1 vmunix: Devices/ScsiControllerAdaptor/USB/BulkOnly/hp/Multibay
Nov  6 10:40:35 rx6600-1 vmunix:  <2.1000.3.150.1008.294>
Nov  6 10:40:35 rx6600-1 vmunix: 0/0/2/1.0.16 UsbScsiAdaptor
Nov  6 10:40:35 rx6600-1 vmunix: NOTICE: USB device attached.  Identification String: 
Nov  6 10:40:36 rx6600-1  above message repeats 5 times
Nov  6 10:40:35 rx6600-1 vmunix: 0/0/2/1.0.16.0 tgt
Nov  6 10:40:35 rx6600-1 vmunix: 0/0/2/1.0.16.0.0 sdisk
Nov  6 10:40:35 rx6600-1 vmunix: 0/0/2/1.0.16.7 tgt
Nov  6 10:40:35 rx6600-1 vmunix: 0/0/2/1.0.16.7.0 sctl
Nov  6 10:40:35 rx6600-1 vmunix: NOTICE: USB device attached.  Identification String: 
Nov  6 10:40:35 rx6600-1 vmunix: Devices/Device/USB/Standard/Avocent/KVMAdaptor/1_0
Nov  6 10:40:35 rx6600-1 vmunix:  <2.1.3.10.1572.833.256>
Nov  6 10:40:35 rx6600-1 vmunix: 0/0/2/1.1 UsbMiniBus
Nov  6 10:40:35 rx6600-1 vmunix: Devices/Keyboard/USB/Boot/Avocent/KVMAdaptor/1_0
Nov  6 10:40:35 rx6600-1 vmunix:  <2.305.3.100.1572.833.256>
Nov  6 10:40:35 rx6600-1 vmunix: 0/0/2/1.1.0 UsbBootKeyboard
Nov  6 10:40:35 rx6600-1 vmunix: Devices/Mouse/USB/Boot/Avocent/KVMAdaptor/1_0
Nov  6 10:40:35 rx6600-1 vmunix:  <2.307.3.100.1572.833.256>
Nov  6 10:40:35 rx6600-1 vmunix: 0/0/2/1.1.1 UsbBootMouse
Nov  6 10:40:35 rx6600-1 vmunix: NOTICE: USB device attached.  Identification String: 
Nov  6 10:40:36 rx6600-1  above message repeats 2 times
Nov  6 10:40:35 rx6600-1 vmunix: 0/0/2/2 UsbEhci
Nov  6 10:40:35 rx6600-1 vmunix: 0/0/4/0 gvid_core
Nov  6 10:40:35 rx6600-1 vmunix: 0/1 lba
Nov  6 10:40:35 rx6600-1 vmunix: 0/2 lba
Nov  6 10:40:35 rx6600-1 vmunix: 0/2/1/0 PCItoPCI
Nov  6 10:40:35 rx6600-1 vmunix: fcd: Claimed HP AD193-60001 4Gb Fibre Channel port at hardware path 0/2/1/0/4/0 (FC Port 1 on HBA)
Nov  6 10:40:35 rx6600-1 vmunix: 0/2/1/0/4/0 fcd
Nov  6 10:40:35 rx6600-1 vmunix: 0/2/1/0/6/0 iether
Nov  6 10:40:35 rx6600-1 vmunix: 0/3 lba
Nov  6 10:40:35 rx6600-1 vmunix: 0/3/1/0 PCItoPCI
Nov  6 10:40:35 rx6600-1 vmunix: fcd: Claimed HP AD193-60001 4Gb Fibre Channel port at hardware path 0/3/1/0/4/0 (FC Port 1 on HBA)
Nov  6 10:40:35 rx6600-1 vmunix: 0/3/1/0/4/0 fcd
Nov  6 10:40:35 rx6600-1 vmunix: 0/3/1/0/6/0 iether
Nov  6 10:40:35 rx6600-1 vmunix: 0/4 lba
Nov  6 10:40:35 rx6600-1 vmunix: sasd: Claimed HP PCI/PCI-X SAS MPT adapter at hardware path 0/4/1/0 
Nov  6 10:40:35 rx6600-1 vmunix: 0/4/1/0 sasd
Nov  6 10:40:35 rx6600-1 vmunix: 0/4/2/0 iether
Nov  6 10:40:35 rx6600-1 vmunix: 0/4/2/1 iether
Nov  6 10:40:35 rx6600-1 vmunix: 0/5 lba
Nov  6 10:40:35 rx6600-1 vmunix: 0/5/1/0 PCItoPCI
Nov  6 10:40:35 rx6600-1 vmunix: fcd: Claimed HP AD193-60001 4Gb Fibre Channel port at hardware path 0/5/1/0/4/0 (FC Port 1 on HBA)
Nov  6 10:40:35 rx6600-1 vmunix: 0/5/1/0/4/0 fcd
Nov  6 10:40:35 rx6600-1 vmunix: 0/5/1/0/6/0 iether
Nov  6 10:40:35 rx6600-1 vmunix: 0/6 lba
Nov  6 10:40:35 rx6600-1 vmunix: 0/6/1/0 PCItoPCI
Nov  6 10:40:35 rx6600-1 vmunix: fcd: Claimed HP AD193-60001 4Gb Fibre Channel port at hardware path 0/6/1/0/4/0 (FC Port 1 on HBA)
Nov  6 10:40:35 rx6600-1 vmunix: 0/6/1/0/4/0 fcd
Nov  6 10:40:35 rx6600-1 vmunix: 0/6/1/0/6/0 iether
Nov  6 10:40:35 rx6600-1 vmunix: 0/7 lba
Nov  6 10:40:35 rx6600-1 vmunix: Initializing the Ultra320 SCSI Controller at 0/7/1/0. Controller firmware version is 01.03.35.70
Nov  6 10:40:35 rx6600-1 vmunix: 0/7/1/0 mpt
Nov  6 10:40:35 rx6600-1 vmunix: Initializing the Ultra320 SCSI Controller at 0/7/1/1. Controller firmware version is 01.03.35.70
Nov  6 10:40:35 rx6600-1 vmunix: 0/7/1/1 mpt
Nov  6 10:40:35 rx6600-1 vmunix: 120 processor
Nov  6 10:40:35 rx6600-1 vmunix: 121 processor
Nov  6 10:40:35 rx6600-1 vmunix: 122 processor
Nov  6 10:40:35 rx6600-1 vmunix: 123 processor
Nov  6 10:40:35 rx6600-1 vmunix: 124 processor
Nov  6 10:40:35 rx6600-1 vmunix: 125 processor
Nov  6 10:40:35 rx6600-1 vmunix: 126 processor
Nov  6 10:40:35 rx6600-1 vmunix: 127 processor
Nov  6 10:40:35 rx6600-1 vmunix: 250 pdh
Nov  6 10:40:35 rx6600-1 vmunix: 250/0 ipmi
Nov  6 10:40:35 rx6600-1 vmunix: 250/1 asio0
Nov  6 10:40:35 rx6600-1 vmunix: 250/2 acpi_node
Nov  6 10:40:35 rx6600-1 vmunix: 0/7/1/0.7 tgt
Nov  6 10:40:35 rx6600-1 vmunix: 0/7/1/0.7.0 sctl
Nov  6 10:40:35 rx6600-1 vmunix: 0/5/1/0/4/0.1 fcd_fcp
Nov  6 10:40:35 rx6600-1 vmunix: 0/5/1/0/4/0.1.9.0.0 fcd_vbus
Nov  6 10:40:35 rx6600-1 vmunix: 0/5/1/0/4/0.1.9.255.0 fcd_vbus
Nov  6 10:40:35 rx6600-1 vmunix: 0/5/1/0/4/0.1.13.255.0 fcd_vbus
Nov  6 10:40:35 rx6600-1 vmunix: 0/5/1/0/4/0.1.13.255.0.0 tgt
Nov  6 10:40:35 rx6600-1 vmunix: 0/5/1/0/4/0.1.13.255.0.0.0 sdisk
Nov  6 10:40:35 rx6600-1 vmunix: 0/5/1/0/4/0.1.9.0.0.0 tgt
Nov  6 10:40:35 rx6600-1 vmunix: 0/5/1/0/4/0.1.9.255.0.0 tgt
Nov  6 10:40:35 rx6600-1 vmunix: 0/5/1/0/4/0.1.9.0.0.0.0 sdisk
Nov  6 10:40:35 rx6600-1 vmunix: 0/5/1/0/4/0.1.9.255.0.0.0 sctl
Nov  6 10:40:35 rx6600-1 vmunix: 0/5/1/0/4/0.1.9.0.0.0.1 sdisk
Nov  6 10:40:35 rx6600-1 vmunix: 0/5/1/0/4/0.1.9.0.0.0.2 sdisk
Nov  6 10:40:35 rx6600-1 vmunix: 0/5/1/0/4/0.1.9.0.0.0.3 sdisk
Nov  6 10:40:35 rx6600-1 vmunix: 0/5/1/0/4/0.1.9.0.0.0.4 sdisk
Nov  6 10:40:35 rx6600-1 vmunix: 0/6/1/0/4/0.1 fcd_fcp
Nov  6 10:40:35 rx6600-1 vmunix: 0/6/1/0/4/0.1.9.0.0 fcd_vbus
Nov  6 10:40:35 rx6600-1 vmunix: 0/6/1/0/4/0.1.9.255.0 fcd_vbus
Nov  6 10:40:35 rx6600-1 vmunix: 0/6/1/0/4/0.1.13.0.0 fcd_vbus
Nov  6 10:40:35 rx6600-1 vmunix: 0/6/1/0/4/0.1.13.255.0 fcd_vbus
Nov  6 10:40:35 rx6600-1 vmunix: 0/6/1/0/4/0.1.13.0.0.0 tgt
Nov  6 10:40:35 rx6600-1 vmunix: 0/6/1/0/4/0.1.13.255.0.0 tgt
Nov  6 10:40:35 rx6600-1 vmunix: 0/6/1/0/4/0.1.9.0.0.0 tgt
Nov  6 10:40:35 rx6600-1 vmunix: 0/6/1/0/4/0.1.13.0.0.0.0 sdisk
Nov  6 10:40:35 rx6600-1 vmunix: 0/6/1/0/4/0.1.9.255.0.0 tgt
Nov  6 10:40:35 rx6600-1 vmunix: 0/6/1/0/4/0.1.13.255.0.0.0 sctl
Nov  6 10:40:35 rx6600-1 vmunix: 0/6/1/0/4/0.1.9.0.0.0.0 sdisk
Nov  6 10:40:35 rx6600-1 vmunix: 0/6/1/0/4/0.1.9.255.0.0.0 sctl
Nov  6 10:40:35 rx6600-1 vmunix: 0/6/1/0/4/0.1.13.0.0.0.1 sdisk
Nov  6 10:40:35 rx6600-1 vmunix: 0/6/1/0/4/0.1.13.0.0.0.2 sdisk
Nov  6 10:40:35 rx6600-1 vmunix: 0/6/1/0/4/0.1.13.0.0.0.3 sdisk
Nov  6 10:40:35 rx6600-1 vmunix: 0/6/1/0/4/0.1.9.0.0.0.1 sdisk
Nov  6 10:40:35 rx6600-1 vmunix: 0/6/1/0/4/0.1.13.0.0.0.4 sdisk
Nov  6 10:40:35 rx6600-1 vmunix: 0/6/1/0/4/0.1.9.0.0.0.2 sdisk
Nov  6 10:40:35 rx6600-1 vmunix: 0/6/1/0/4/0.1.9.0.0.0.3 sdisk
Nov  6 10:40:35 rx6600-1 vmunix: 0/6/1/0/4/0.1.9.0.0.0.4 sdisk
Nov  6 10:40:35 rx6600-1 vmunix: 0/7/1/1.7 tgt
Nov  6 10:40:35 rx6600-1 vmunix: 0/7/1/1.7.0 sctl
Nov  6 10:40:35 rx6600-1 vmunix: 0/4/1/0.0.0 sasd_vbus
Nov  6 10:40:35 rx6600-1 vmunix: 0/4/1/0.0.0.0 tgt
Nov  6 10:40:35 rx6600-1 vmunix: 0/4/1/0.0.0.0.0 sdisk
Nov  6 10:40:35 rx6600-1 vmunix: Boot device's HP-UX HW path is: 0/4/1/0.0.0.0.0
Nov  6 10:40:35 rx6600-1 vmunix: 
Nov  6 10:40:35 rx6600-1 vmunix:     System Console is on the Built-In Serial Interface
Nov  6 10:40:35 rx6600-1 vmunix: iether0: INITIALIZING HP AD193-60001 PCI/PCI-X 1000Base-T 4Gb FC/1000B-T Combo Adapter at hardware path 0/2/1/0/6/0
Nov  6 10:40:35 rx6600-1 vmunix: iether1: INITIALIZING HP AD193-60001 PCI/PCI-X 1000Base-T 4Gb FC/1000B-T Combo Adapter at hardware path 0/3/1/0/6/0
Nov  6 10:40:35 rx6600-1 vmunix: iether2: INITIALIZING HP AB352-60003 PCI/PCI-X 1000Base-T Dual-port Core at hardware path 0/4/2/0
Nov  6 10:40:35 rx6600-1 vmunix: iether4: INITIALIZING HP AD193-60001 PCI/PCI-X 1000Base-T 4Gb FC/1000B-T Combo Adapter at hardware path 0/5/1/0/6/0
Nov  6 10:40:35 rx6600-1 vmunix: iether5: INITIALIZING HP AD193-60001 PCI/PCI-X 1000Base-T 4Gb FC/1000B-T Combo Adapter at hardware path 0/6/1/0/6/0
Nov  6 10:40:35 rx6600-1 vmunix: iether3: INITIALIZING HP AB352-60003 PCI/PCI-X 1000Base-T Dual-port Core at hardware path 0/4/2/1
Nov  6 10:40:35 rx6600-1 vmunix: Logical volume 64, 0x3 configured as ROOT
Nov  6 10:40:35 rx6600-1 vmunix: Logical volume 64, 0x2 configured as SWAP
Nov  6 10:40:35 rx6600-1 vmunix: Logical volume 64, 0x2 configured as DUMP
Nov  6 10:40:35 rx6600-1 vmunix:     Swap device table:  (start & size given in 512-byte blocks)
Nov  6 10:40:35 rx6600-1 vmunix:         entry 0 - major is 64, minor is 0x2; start = 0, size = 16777216
Nov  6 10:40:35 rx6600-1 vmunix:     Dump device table:  (start & size given in 1-Kbyte blocks)
Nov  6 10:40:35 rx6600-1 vmunix:         entry 0000000000000000 - major is 31, minor is 0x30000; start = 2349940, size = 8388604
Nov  6 10:40:35 rx6600-1 vmunix: Starting the STREAMS daemons-phase 1
Nov  6 10:40:35 rx6600-1 vmunix: Create STCP device files
Nov  6 10:40:35 rx6600-1 vmunix: Starting the STREAMS daemons-phase 2
Nov  6 10:40:35 rx6600-1 vmunix:      $Revision: vmunix:    B11.23_LR FLAVOR=perf Fri Aug 29 22:35:38 PDT 2003 $
Nov  6 10:40:35 rx6600-1 vmunix: Memory Information:
Nov  6 10:40:35 rx6600-1 vmunix:     physical page size = 4096 bytes, logical page size = 4096 bytes
Nov  6 10:40:35 rx6600-1 vmunix:     Physical: 25133536 Kbytes, lockable: 18994328 Kbytes, available: 22051156 Kbytes
Nov  6 10:40:35 rx6600-1 vmunix: 
Nov  6 10:40:36 rx6600-1 nettl[832]: nettl starting up.
Nov  6 10:40:48 rx6600-1 sshd[986]: Server listening on :: port 22.
Nov  6 10:40:48 rx6600-1 sshd[986]: Server listening on 0.0.0.0 port 22.
Nov  6 10:40:49 rx6600-1 rpcbind: check_netconfig: Found CLTS loopback transport
Nov  6 10:40:49 rx6600-1 rpcbind: check_netconfig: Found COTS loopback transport
Nov  6 10:40:49 rx6600-1 rpcbind: check_netconfig: Found COTS ORD loopback transport
Nov  6 10:40:49 rx6600-1 rpcbind: init_transport: check binding for udp
Nov  6 10:40:49 rx6600-1 rpcbind: init_transport: check binding for tcp
Nov  6 10:40:49 rx6600-1 rpcbind: init_transport: check binding for ticlts
Nov  6 10:40:49 rx6600-1 rpcbind: init_transport: check binding for ticotsord
Nov  6 10:40:49 rx6600-1 rpcbind: init_transport: check binding for ticots
Nov  6 10:40:50 rx6600-1 inetd[1100]: Reading configuration
Nov  6 10:40:50 rx6600-1 inetd[1100]: ftp/tcp: Added service, server /usr/lbin/ftpd
Nov  6 10:40:50 rx6600-1 inetd[1100]: telnet/tcp: Added service, server /usr/lbin/telnetd
Nov  6 10:40:50 rx6600-1 inetd[1100]: tftp/udp: Added service, server /usr/lbin/tftpd
Nov  6 10:40:50 rx6600-1 inetd[1100]: login/tcp: Added service, server /usr/lbin/rlogind
Nov  6 10:40:50 rx6600-1 inetd[1100]: shell/tcp: Added service, server /usr/lbin/remshd
Nov  6 10:40:50 rx6600-1 inetd[1100]: exec/tcp: Added service, server /usr/lbin/rexecd
Nov  6 10:40:50 rx6600-1 inetd[1100]: ntalk/udp: Added service, server /usr/lbin/ntalkd
Nov  6 10:40:50 rx6600-1 inetd[1100]: auth/tcp: Added service, server /usr/lbin/identd
Nov  6 10:40:50 rx6600-1 inetd[1100]: printer/tcp: Added service, server /usr/sbin/rlpdaemon
Nov  6 10:40:51 rx6600-1 inetd[1100]: daytime/tcp: Added service, server internal
Nov  6 10:40:51 rx6600-1 inetd[1100]: daytime/udp: Added service, server internal
Nov  6 10:40:51 rx6600-1 inetd[1100]: time/tcp: Added service, server internal
Nov  6 10:40:51 rx6600-1 inetd[1100]: echo/tcp: Added service, server internal
Nov  6 10:40:51 rx6600-1 inetd[1100]: echo/udp: Added service, server internal
Nov  6 10:40:51 rx6600-1 inetd[1100]: discard/tcp: Added service, server internal
Nov  6 10:40:51 rx6600-1 inetd[1100]: discard/udp: Added service, server internal
Nov  6 10:40:51 rx6600-1 inetd[1100]: chargen/tcp: Added service, server internal
Nov  6 10:40:51 rx6600-1 inetd[1100]: chargen/udp: Added service, server internal
Nov  6 10:40:51 rx6600-1 inetd[1100]: kshell/tcp: Added service, server /usr/lbin/remshd
Nov  6 10:40:51 rx6600-1 inetd[1100]: klogin/tcp: Added service, server /usr/lbin/rlogind
Nov  6 10:40:51 rx6600-1 inetd[1100]: dtspc/tcp: Added service, server /usr/dt/bin/dtspcd
Nov  6 10:40:51 rx6600-1 inetd[1100]: recserv/tcp: Added service, server /usr/lbin/recserv
Nov  6 10:40:51 rx6600-1 inetd[1100]: swat/tcp: Added service, server /opt/samba/bin/swat
Nov  6 10:40:51 rx6600-1 inetd[1100]: registrar/tcp: Added service, server /etc/opt/resmon/lbin/registrar
Nov  6 10:40:51 rx6600-1 inetd[1100]: hacl-probe/tcp: Added service, server /opt/cmom/lbin/cmomd
Nov  6 10:40:51 rx6600-1 inetd[1100]: hacl-cfg/udp: Added service, server /usr/lbin/cmclconfd
Nov  6 10:40:51 rx6600-1 inetd[1100]: hacl-cfg/tcp: Added service, server /usr/lbin/cmclconfd
Nov  6 10:40:51 rx6600-1 inetd[1100]: instl_boots/udp: Added service, server /opt/ignite/lbin/instl_bootd
Nov  6 10:40:51 rx6600-1 inetd[1100]: omni/tcp: Added service, server /opt/omni/lbin/inet
Nov  6 10:40:51 rx6600-1 inetd[1100]: rpc.cmsd/udp: Added service, server /usr/dt/bin/rpc.cmsd
Nov  6 10:40:51 rx6600-1 inetd[1100]: rpc.ttdbserver/tcp: Added service, server /usr/dt/bin/rpc.ttdbserver
Nov  6 10:40:51 rx6600-1 inetd[1100]: Configuration complete
Nov  6 10:40:53 rx6600-1 EMCPP: emcpAudit: Info: cmd=powermt: restore  (user ID real=0 effective=0)
Nov  6 10:40:53 rx6600-1 EMCPP: emcpAudit: Info: cmd=powermt: config  (user ID real=0 effective=0)
Nov  6 10:40:53 rx6600-1 EMCPP: emcpAudit: Info: cmd=powermt: save  (user ID real=0 effective=0)
Nov  6 10:40:54 rx6600-1 su: + tty?? root-sfmdb
Nov  6 10:41:06 rx6600-1 cimserver[1706]: starting
Nov  6 10:41:29 rx6600-1 cimserver[1707]: PGS10026:  THE CIM SERVER IS LISTENING ON HTTPS PORT 5,989.
Nov  6 10:41:29 rx6600-1 cimserver[1707]: PGS10028: THE CIM SERVER IS LISTENING ON THE LOCAL CONNECTION SOCKET.
Nov  6 10:41:29 rx6600-1 cimserver[1707]: PGS10030:  STARTED HP-UX WBEM Services VERSION A.02.07.
Nov  6 10:41:32 rx6600-1 FontServer[1755]: Warning: Bad font path element: "/usr/lib/X11/fonts/hp_japanese/100dpi/"
Nov  6 10:41:32 rx6600-1 FontServer[1755]: Warning: Bad font path element: "/usr/lib/X11/fonts/hp_japanese/75dpi/"
Nov  6 10:41:32 rx6600-1 FontServer[1755]: Warning: Bad font path element: "/usr/lib/X11/fonts/hp_korean/75dpi/"
Nov  6 10:41:32 rx6600-1 FontServer[1755]: Warning: Cannot initialize font path element: "/usr/lib/X11/fonts/hp_chinese_t/75dpi/"
Nov  6 10:41:32 rx6600-1 FontServer[1755]: Warning: Bad font path element: "/usr/lib/X11/fonts/ttfjpn.st"
Nov  6 10:41:32 rx6600-1 FontServer[1755]: Warning: Bad font path element: "/usr/lib/X11/fonts/ifojpn.st"
Nov  6 10:41:34 rx6600-1 pwgrd: Started at Thu Nov  6 10:41:34 2014, pid = 1798
Nov  6 10:41:34 rx6600-1 diagmond[1833]: started
Nov  6 10:41:34 rx6600-1 /usr/sbin/envd[1837]: VXPBFt6/, 2"6A3vEdVCND< ~
Nov  6 10:41:50 rx6600-1 /usr/sbin/nfsd[2180]: Setting STREAMS-HEAD high water value to 131072.
Nov  6 10:41:50 rx6600-1 /usr/sbin/nfsd[2185]: nfsd do_one mpctl succeeded: ncpus = 8.
Nov  6 10:41:50 rx6600-1 /usr/sbin/nfsd[2185]: nfsd do_one pmap 2
Nov  6 10:41:50 rx6600-1 /usr/sbin/nfsd[2185]: nfsd do_one pmap 3
Nov  6 10:41:50 rx6600-1 /usr/sbin/nfsd[2190]: nfsd do_one bind 0
Nov  6 10:41:50 rx6600-1 /usr/sbin/nfsd[2191]: nfsd do_one bind 1
Nov  6 10:41:50 rx6600-1 /usr/sbin/nfsd[2192]: nfsd do_one bind 2
Nov  6 10:41:50 rx6600-1 /usr/sbin/nfsd[2193]: nfsd do_one bind 3
Nov  6 10:41:50 rx6600-1 /usr/sbin/nfsd[2194]: nfsd do_one bind 4
Nov  6 10:41:50 rx6600-1 /usr/sbin/nfsd[2195]: nfsd do_one bind 5
Nov  6 10:41:50 rx6600-1 /usr/sbin/nfsd[2185]: nfsd do_one bind 7
Nov  6 10:41:50 rx6600-1 /usr/sbin/nfsd[2195]: Return from t_optmgmt(XTI_DISTRIBUTE) 0
Nov  6 10:41:50 rx6600-1 /usr/sbin/nfsd[2195]: nfsd 5 1  sock 4
Nov  6 10:41:50 rx6600-1 /usr/sbin/nfsd[2197]: nfsd 5 0  sock 4
Nov  6 10:41:50 rx6600-1 /usr/sbin/nfsd[2193]: Return from t_optmgmt(XTI_DISTRIBUTE) 0
Nov  6 10:41:50 rx6600-1 /usr/sbin/nfsd[2192]: Return from t_optmgmt(XTI_DISTRIBUTE) 0
Nov  6 10:41:50 rx6600-1 /usr/sbin/nfsd[2193]: nfsd 3 1  sock 4
Nov  6 10:41:50 rx6600-1 /usr/sbin/nfsd[2192]: nfsd 2 1  sock 4
Nov  6 10:41:50 rx6600-1 /usr/sbin/nfsd[2200]: nfsd 2 0  sock 4
Nov  6 10:41:50 rx6600-1 /usr/sbin/nfsd[2191]: Return from t_optmgmt(XTI_DISTRIBUTE) 0
Nov  6 10:41:50 rx6600-1 /usr/sbin/nfsd[2194]: Return from t_optmgmt(XTI_DISTRIBUTE) 0
Nov  6 10:41:50 rx6600-1 /usr/sbin/nfsd[2191]: nfsd 1 1  sock 4
Nov  6 10:41:50 rx6600-1 /usr/sbin/nfsd[2201]: nfsd 1 0  sock 4
Nov  6 10:41:50 rx6600-1 /usr/sbin/nfsd[2199]: nfsd 3 0  sock 4
Nov  6 10:41:50 rx6600-1 /usr/sbin/nfsd[2194]: nfsd 4 1  sock 4
Nov  6 10:41:50 rx6600-1 /usr/sbin/nfsd[2202]: nfsd 4 0  sock 4
Nov  6 10:41:50 rx6600-1 /usr/sbin/nfsd[2185]: Return from t_optmgmt(XTI_DISTRIBUTE) 0
Nov  6 10:41:50 rx6600-1 /usr/sbin/nfsd[2185]: nfsd 7 1  sock 4
Nov  6 10:41:50 rx6600-1 /usr/sbin/nfsd[2219]: nfsd 7 0  sock 4
Nov  6 10:41:50 rx6600-1 /usr/sbin/nfsd[2196]: nfsd do_one bind 6
Nov  6 10:41:50 rx6600-1 /usr/sbin/nfsd[2190]: Return from t_optmgmt(XTI_DISTRIBUTE) 0
Nov  6 10:41:50 rx6600-1 /usr/sbin/nfsd[2196]: Return from t_optmgmt(XTI_DISTRIBUTE) 0
Nov  6 10:41:50 rx6600-1 /usr/sbin/nfsd[2190]: nfsd 0 1  sock 4
Nov  6 10:41:50 rx6600-1 /usr/sbin/nfsd[2220]: nfsd 0 0  sock 4
Nov  6 10:41:50 rx6600-1 /usr/sbin/nfsd[2196]: nfsd 6 1  sock 4
Nov  6 10:41:50 rx6600-1 /usr/sbin/nfsd[2221]: nfsd 6 0  sock 4
Nov  6 10:41:53 rx6600-1 krsd[2300]: Delay time is 300 seconds
Nov  6 10:41:53 rx6600-1 sfd[2301]: daemon already running.
Nov  6 10:41:54 rx6600-1 sfd[2314]: starting the daemon.
Nov  6 10:41:54 rx6600-1 emcp_mond: PP daemon: Info: New event pair [0] (2,4,60)
Nov  6 10:41:54 rx6600-1 emcp_mond: PP daemon: Info: New event pair [1] (20,40,300)
Nov  6 10:41:54 rx6600-1 emcp_mond: PP daemon: Info: SetLogMask:: EventLogMask set to 0x66 
Nov  6 10:41:54 rx6600-1 emcp_mond: PP daemon: Info: Using hostname localhost community public debug 0
Nov  6 10:41:54 rx6600-1 emcp_mond: PP daemon: Info: Daemon created successfully.  Starting it now
Nov  6 10:41:54 rx6600-1 emcp_mond: PP daemon: Info: SNMP trap processing disabled.
Nov  6 10:41:54 rx6600-1 emcp_mond: PP daemon: Info: PP Remote Management disabled.
Nov  6 10:45:17 rx6600-1 vmunix: emcp:Mpx:Info: PowerPath Auto Host Registration on VNX-FCN00125000137 is unavailable: incompatible initiator information received from the array
Nov  6 10:45:42 rx6600-1 /usr/sbin/envd[1837]: ***** 9} HH AY =g >/ 8f *****
Nov  6 10:45:42 rx6600-1 /usr/sbin/envd[1837]: NB6H3,9}U}3#9$WwAY=gV5, P^U}9}HHLu< ~!#
Nov  6 10:45:42 rx6600-1 EMS [2970]: ------ EMS Event Notification ------   Value: "MAJORWARNING (3)" for Resource: "/system/events/ia64_corehw/core_hw"     (Threshold:  >= " 3")   
 Execute the following command to obtain event details:   /opt/resmon/bin/resdata -R 194641922 -r /system/events/ia64_corehw/core_hw -n 194641921 -a 
Nov  6 10:49:14 rx6600-1 EMS [2928]: ------ EMS Event Notification ------   Value: "CRITICAL (5)" for Resource: "/system/events/ipmi_fpl/ipmi_fpl"     (Threshold:  >= " 3")    
Execute the following command to obtain event details:   /opt/resmon/bin/resdata -R 191889410 -r /system/events/ipmi_fpl/ipmi_fpl -n 191889409 -a 
Nov  6 18:48:12 rx6600-1 EMS [2970]: ------ EMS Event Notification ------   Value: "CRITICAL (5)" for Resource: "/system/events/ia64_corehw/core_hw"     (Threshold:  >= " 3")    
Execute the following command to obtain event details:   /opt/resmon/bin/resdata -R 194641922 -r /system/events/ia64_corehw/core_hw -n 194641922 -a 
Nov  6 19:00:00 rx6600-1 su: + tty?? root-oracle
Nov  7 08:00:00 rx6600-1 su: + tty?? root-root
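这类日志通常很长,排查时可以先用grep把EMS告警行筛出来再逐条分析。下面用两行样例文本演示筛选方法(实际使用时把输入换成/var/adm/syslog/syslog.log即可):

```shell
# 统计样例文本中包含 EMS 告警标记的行数(演示 grep 用法)
printf '%s\n' \
  'Nov  6 10:45:42 rx6600-1 EMS [2970]: ------ EMS Event Notification ------' \
  'Nov  6 10:40:35 rx6600-1 vmunix: 0 sba' \
  | grep -c 'EMS Event Notification'
```

上面两行样例中只有一行匹配,因此输出为1。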

从以下信息可以看到服务器已经出了问题,而且日志已经指出可以执行
/opt/resmon/bin/resdata -R 194641922 -r /system/events/ia64_corehw/core_hw -n 194641921 -a 命令来查看详细信息

Nov  6 10:41:54 rx6600-1 emcp_mond: PP daemon: Info: SNMP trap processing disabled.
Nov  6 10:41:54 rx6600-1 emcp_mond: PP daemon: Info: PP Remote Management disabled.
Nov  6 10:45:17 rx6600-1 vmunix: emcp:Mpx:Info: PowerPath Auto Host Registration on VNX-FCN00125000137 is unavailable: incompatible initiator information received from the array
Nov  6 10:45:42 rx6600-1 /usr/sbin/envd[1837]: ***** 9} HH AY =g >/ 8f *****
Nov  6 10:45:42 rx6600-1 /usr/sbin/envd[1837]: NB6H3,9}U}3#9$WwAY=gV5, P^U}9}HHLu< ~!#
Nov  6 10:45:42 rx6600-1 EMS [2970]: ------ EMS Event Notification ------   Value: "MAJORWARNING (3)" for Resource: "/system/events/ia64_corehw/core_hw"     (Threshold:  >= " 3")   
Execute the following command to obtain event details:   /opt/resmon/bin/resdata -R 194641922 -r /system/events/ia64_corehw/core_hw -n 194641921 -a 

执行/opt/resmon/bin/resdata -R 194641922 -r /system/events/ia64_corehw/core_hw -n 194641921 -a 命令来查看详细信息

rx6600-1:[/]#/opt/resmon/bin/resdata -R 194641922 -r /system/events/ia64_corehw/core_hw -n 194641921 -a 

ARCHIVED MONITOR DATA:

Event Time..........: Thu Nov  6 10:45:42 2014
Severity............: MAJORWARNING
Monitor.............: ia64_corehw
Event #.............: 101011              
System..............: rx6600-1

Summary:
     System temperature is out of normal range. 


Description of Error:

     The system temperature is not within normal operating range. It is higher
     than required operating range.

这个错误描述是说系统的温度超出了正常范围,下面信息说明了可能的原因

Probable Cause / Recommended Action:

     Something may be blocking the cooling intakes of the fans. Check for
     obstruction.
     One or more fans may be operating at lower speed than normal. Check the
     fan performance.

     Check for problems with the room air conditioning.

     If the problem is not fixed, the operating temperature may become
     non-recoverable, in which case there are chances that the hardware may be
     damaged.  At that temperature level, on Integrity servers, the firmware
     will shutdown the system automatically. However on HP 9000 servers, the
     action specified in the envd config file will be taken - which may be to
     shutdown the system automatically.

     For information on the sensor that generated this event, refer to FRU ID
     in Event Details section.

上面的信息是说,可能需要清理一下风机,或者风机性能出现了问题,或者需要检查机房空调情况;如果不是这些原因造成的,那么可能是硬件出了问题。下面是该诊断事件的数据:

Additional Event Data: 
     System IP Address...: 10.138.129.5
     Event Id............: 0x545ae0d600000000
     Monitor Version.....: B.01.00
     Event Class.........: System
     Client Configuration File...........:
     /var/stm/config/tools/monitor/default_ia64_corehw.clcfg 
     Client Configuration File Version...: A.01.00 
          Qualification criteria met.
               Number of events..: 1 
     Associated OS error log entry id(s): 
          None
     Additional System Data:
          System Model Number.............: ia64 hp server rx6600 
          EMS Version.....................: A.04.20 
          STM Version.....................: C.58.00 
          System Serial Number............: SGH48045VY 
     Latest information on this event:
          http://docs.hp.com/hpux/content/hardware/ems/ia64_corehw.htm#101011

v-v-v-v-v-v-v-v-v-v-v-v-v    D  E  T  A  I  L  S    v-v-v-v-v-v-v-v-v-v-v-v-v


Event Details :

     Event Date .............: Thu Nov  6 10:44:08 2014
     Sensor Number ..........: 0xdb
     Sensor Type ............: Temperature
     Sensor Class ...........: Threshold based
     Sensor Reading/Offset...: 0x07 (Offset)
     Event  Type.............: Assertion
     Entity ID ..............: 3
     Generic Message.........: 
       Temperature :  Upper non-critical - going high 
     Entity FRU Id Info......: 
       processor (Sensor ID: Processor 2)

从上面的Event Details信息可以看到,传感器类型是温度,传感器类别是基于阈值,事件类型是断言(Assertion),即2号cpu的温度已经超过了阈值。经检查排除了机房空调和通风口堵塞的问题,且平时cpu使用率只有10%,因此需要联系小机厂商来进一步检查是什么原因造成cpu温度超过阈值。

linux系统中的调度周期任务:cron

linux系统中的调度周期任务:cron
主要概念
cron工具用来调度经常重复的任务
crontab命令是编辑crontab文件的一个前端程序
crontab文件使用5个字段来规定计时信息
cron作业中的标准输出会作为邮件寄给用户

执行周期任务
人们经常发现自己会定期执行一些任务.在系统管理中,这些任务包括从/tmp目录下删除旧的,不使用的文件,或者经常
检查记录登录信息的文件以确保其不会变得过大.其他用户可能会有自己的任务,如检查不再使用的大型文件,或者查看
网站上是否公布了新的信息.

cron工具允许用户配置要定期运行的命令,如每隔十分钟,每周四一次,或每月两次.用户用crontab命令配置自己的任务
计划(cron table),指定何种命令在何时运行.这些任务由传统的linux(和unix)守护进程,即crond守护进程管理.

cron服务
crond守护进程是代表系统或个人用户执行周期任务的守护进程.通常这个守护进程随着系统的启动而启动,因此大多数
用户都不会注意到.通过列出所有进程且搜索crond,你可以确定crond守护进程有没有在运行.

[root@sidatabase /]# ps aux | grep crond
root      3204  0.0  0.0 117204  1368 ?        Ss   Aug09   0:11 crond
root      4687  0.0  0.0 103244   872 pts/0    S+   14:52   0:00 grep crond

如果crond守护进程没有在运行,系统管理员需要以根用户身份来启动crond守护进程.

crontab语法
用户通过配置一个称为”cron table”(经常缩写成”crontab”)的文件指定要运行哪些作业以及何时运行.下面列出了一个
crontab文件的例子.

30 23 * * 6  su - sybx -c "/sydata/app/db/bin/rman target / msglog=/sybak/bak0.log  cmdfile=/sybak/sybx_rman_script/bak0"
30 23 * * 0,1,2,3,4,5  su - sybx -c "/sydata/app/db/bin/rman target / msglog=/sybak/bak1.log  cmdfile=/sybak/sybx_rman_script/bak1"
30 2 * * 0,1,2,3,4,5,6  su - sybx -c "/sydata/app/db/bin/rman target / msglog=/sybak/bakarch.log  cmdfile=/sybak/sybx_rman_script/bakarch"
30 3 * * 0,1,2,3,4,5,6  su - sybx -c "/sydata/app/db/bin/rman target / msglog=/sybak/delbackup.log  cmdfile=/sybak/sybx_rman_script/delbackup"

crontab文件是一个以行为单位的配置文件,每行实现三种功能中的一种:
注释
首字符(非空格)是一个#的行被认为是注释,可忽略.

环境变量
具有name=value格式的所有行被用来定义环境变量

cron命令
其他的任何(非空)行被认为是cron命令,由下面描述的六个字段组成.
cron命令行包括六个用空白分隔的字段.前五个字段用来指定何时运行命令,剩余的第六个字段(包括所有在第五个字
段后的部分)指定要运行的命令.前五个字段指定下列时间信息:

minute      hour   day of month    month(1=January,....)   day of week (0=Sunday,....)    command to run

25          04     1               *                       *                              echo "HI"

前五种字段的每一种都必须含有一个使用下列语法的标记
crontab时间表示语法标记
标记 含义 例子 解释(如果用在第一个字段中)
* 每次 * 每分钟
n 在指定时间 10 在每小时过10分时
n,n,… 在任何指定时间 22,52 在每小时过22分和每小时过52分时
*/n 每隔n次 */15 每隔15分钟(在每个整点,一刻钟,半点,或差一刻整点时)
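按照上述语法,可以用awk把一条cron命令行拆成五个时间字段和命令部分,直观地看出每个字段的含义(纯文本处理示例,可在任何shell中运行):

```shell
# 拆分 crontab 行:前五个字段是时间,从第六个字段起是要运行的命令
line='30 23 * * 6 su - sybx -c "/sydata/app/db/bin/rman target /"'
echo "$line" | awk '{
    printf "minute=%s hour=%s dom=%s month=%s dow=%s\n", $1, $2, $3, $4, $5
    cmd = ""
    for (i = 6; i <= NF; i++) cmd = cmd (i > 6 ? " " : "") $i
    print "command=" cmd
}'
```

第一行输出为minute=30 hour=23 dom=* month=* dow=6,对应"每周六23点30分"。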

使用crontab命令
用户很少直接管理自己的crontab文件(甚至不知道crontab文件被保存在哪里),而是使用crontab命令来编辑,列出或者
删除它.
crontab {[-e] | [-l] | [-r]}
crontab file
编辑,列出或删除当前crontab文件,或者用file取代当前crontab文件.crontab命令行选项释义如下
crontab命令行选项
选项 作用
-e 编辑当前文件
-l 列出当前文件
-r 删除当前文件

直接编辑crontab文件
用户经常用crontab -e 直接编辑自己的crontab文件.crontab命令将把当前crontab配置打开到用户默认的编辑器中.
当用户编辑完文件并退出编辑器时,修改过的文件内容作为新的crontab配置被添加.

默认的编辑器是/bin/vi,然而crontab像其他许多命令一样,会检查EDITOR环境变量.如果该变量已经被设置,它将被用来
替代默认编辑器.

环境变量与cron
配置cron作业时,用户应该知道一个微妙的细节.当crond守护进程启动用户命令时,它没有从shell中运行命令,而是
直接对这个命令派生和执行(fork和exec).这有一个重要的含义:启动时被shell配置的任何环境变量或别名(alias),
例如在/etc/profile或.bash_profile中被定义的任何环境变量,不会在cron执行命令时出现.

如果用户想定义一个环境变量,需要在自己的crontab配置中定义该变量.
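例如,下面这个crontab配置片段先用name=value行定义了SHELL,PATH和一个自定义变量,再在cron命令中引用(片段仅为示意,其中的路径为假设值):

```shell
# crontab 配置片段示例:先定义环境变量,再在命令里使用
SHELL=/bin/bash
PATH=/usr/local/bin:/usr/bin:/bin
BAKDIR=/sybak
30 23 * * 6 /sydata/app/db/bin/rman target / msglog=$BAKDIR/bak0.log cmdfile=$BAKDIR/sybx_rman_script/bak0
```

cron会把这些变量放入作业的环境中,命令经由$SHELL -c执行时由shell完成变量展开。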

linux系统中的调度延迟任务:at 命令

linux系统中的调度延迟任务:at 命令
主要概念
at命令可以使命令稍后运行
batch命令可以让命令在机器负载较低的情况下运行
可以直接进入命令,或者以脚本形式提交命令
作业中的标准输出用邮件发送给用户
atq命令和atrm命令用来查看和删除当前的计划任务

linux的守护进程是那些在后台运行的进程,脱离控制终端,执行通常与键盘输入无关的任务.守护进程经常与
网络服务相关联,例如网页服务器(httpd)或ftp服务器(vsftpd).其他守护进程处理系统任务,例如日志守护进程(
syslogd)和电源管理守护进程(apmd).这里主要解释说明两个守护进程:一个允许用户延迟任务(atd);另一个允许
用户按固定间隔时间运行命令(crond).

守护进程像其他任何进程一样,通常作为系统启动序列的一部分被启动,或者由根用户启动.因此,除非你特意寻找
它们,否则可能一直不知道它们的存在.

[root@sidatabase /]# ps aux | grep crond
root      3204  0.0  0.0 117204  1368 ?        Ss   Aug09   0:11 crond
root     21399  0.0  0.0 103244   868 pts/0    S+   14:07   0:00 grep crond
[root@sidatabase /]# ps aux | grep atd
rpcuser   2800  0.0  0.0  23340  1204 ?        Ss   Aug09   0:00 rpc.statd
root      3215  0.0  0.0  21448   464 ?        Ss   Aug09   0:00 /usr/sbin/atd
root     21405  0.0  0.0 103244   872 pts/0    S+   14:07   0:00 grep atd

有些守护进程作为根用户运行,而有些守护进程为了安全起见,则以一个系统用户的身份运行.在上面,crond守护进程
作为根用户运行;而ntpd守护进程虽然由根用户启动,但通过-u ntp:ntp参数降权为系统用户ntp运行,如下所示.

[root@sidatabase /]# ps aux | grep ntpd
root     26538  0.0  0.0 103240   868 ?        14:22   0:00 ntpd -u ntp:ntp -p

atd守护进程
atd守护进程允许用户提交稍后运行的作业,如"at 14:13".atd守护进程必须在运行时才能使用,用户可以通过查看
运行的进程列表来确定atd是否在运行.

root@sidatabase /]# ps aux | grep atd
rpcuser   2800  0.0  0.0  23340  1204 ?        Ss   Aug09   0:00 rpc.statd
root      3215  0.0  0.0  21448   548 ?        Ss   Aug09   0:00 /usr/sbin/atd
root     28604  0.0  0.0 103244   872 pts/0    S+   14:24   0:00 grep atd

在上面的输出中第七列指出了与进程相关联的终端.对用户root的grep命令而言,终端是pts/0,这可能指网络shell
或X会话中的图形终端.注意,atd守护进程没有相关联的终端.守护进程的一个定义特征是,它会断开与启动它的终端之
间的联系.

用at命令提交作业
at命令用来向atd守护进程提交需要在特定时间运行的作业.要运行的命令可以作为脚本提交(用-f命令行选项),也可以
通过标准输入直接输入.命令的标准输出将用电子邮件的形式寄给用户
at [[-f filename] | [-m]] time

规定一天中的时间可以用HH:MM格式,后面附加"am"或"pm",也可以用"midnight","noon"和"teatime"等词语.日期也可以
用好几种格式规定,其中包括mm/dd/yy格式.

例如要在14:13这个时间生成一个名叫at.txt的文件并在文件中写入"hello I am JingYong"信息.注意要把整条命令通过管道交给at的标准输入;若写成echo "…" > at.txt | at 14:13,重定向会先把echo的输出写入文件,at收到的将是一个空任务:

[root@sidatabase /]# echo 'echo "hello I am JingYong" > /at.txt' | at 14:13
job 1 at 2013-08-23 14:13
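也可以把要执行的命令写进脚本文件,再用-f选项提交给at(示意,其中/tmp/mkfile.sh是假设的脚本路径):

```shell
# 先把任务写进脚本,再用 at -f 提交(需要 atd 守护进程在运行)
cat > /tmp/mkfile.sh <<'EOF'
echo "hello I am JingYong" > /tmp/at.txt
EOF
at -f /tmp/mkfile.sh 14:13
```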

查看作业

[root@sidatabase /]# atq
1       2013-08-23 14:13 a root

删除作业

[root@sidatabase /]# atrm 1

用batch延迟任务
batch命令与at命令一样,用来延迟任务.与at命令不同的是,batch作业不在特定时间运行,而是等到系统不忙于别的
任务时运行.如果提交作业时机器不繁忙,作业可以立即运行.atd守护进程会监控系统的平均负载(load average),
等待它降到0.8以下,然后开始运行作业.

batch命令的语法与at命令的语法一模一样,可以用标准输入规定作业,也可以用-f命令行选项把作业作为batch文件
来提交.如果规定了时间,batch会延迟到指定的时间开始观察机器,那时,atd将开始监控系统的平均负载,并且在系统
不繁忙时运行作业.

在linux系统中在后台以作业形式运行命令

在linux系统中在后台以作业形式运行命令
通过给命令行附加一个”&”字符,任何指定的命令也可以在后台运行.通常,只有那些不需要键盘输入而且不
会生成大量输出的长时间运行的命令才适合在后台运行.当bash shell在后台运行命令时,该命令被称为作
业(job),被分配一个作业号码.

[root@sidatabase oradata]# cp system20130708.dmp / > cp.txt 2> /dev/null &
[1] 20629

After starting a job in the background, bash reports two pieces of information: first the job number, in
square brackets, and then the process ID of the background job. The output above shows that the job number
is 1 and the PID of the cp command is 20629.
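The PID of the most recent background job is also available in the shell variable $!. A minimal sketch, using a short sleep in place of the cp above:

```shell
# Run a command in the background; bash assigns it a job number and a PID.
sleep 1 &

# $! holds the PID of the most recently started background job.
bgpid=$!
echo "background PID: $bgpid"

# wait blocks until the job finishes and returns its exit status.
wait "$bgpid"
status=$?
echo "exit status: $status"
```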

Listing current jobs with the jobs command

[root@sidatabase /]# jobs
[1]+  Running                 cp -i system20130708.dmp / > cp.txt 2> /dev/null &  (wd: /oracle/oradata)

Each background job is listed along with its job number. The most recently manipulated job is treated as the current job and is marked with a "+" in the jobs output.

Bringing a job to the foreground with fg
The fg built-in command brings a background job to the foreground. fg takes a job number as its argument;
if no job number is given, the current job is brought to the foreground.

[root@sidatabase oradata]# fg 1
cp -i system20130708.dmp / > cp.txt 2> /dev/null

cp -i system20130708.dmp / > cp.txt 2> /dev/null is now running in the foreground, so the shell does not
print a prompt while the process is still running.

Suspending a foreground job with Ctrl+Z
The Ctrl+Z key combination is one way to suspend a process. Watch bash's output closely when you suspend a
foreground command: bash treats any suspended foreground process as a job.

[root@sidatabase oradata]# fg 1
cp -i system20130708.dmp / > cp.txt 2> /dev/null
^Z
[1]+  Stopped                 cp -i system20130708.dmp / > cp.txt 2> /dev/null
[root@sidatabase oradata]# jobs
[1]+  Stopped                 cp -i system20130708.dmp / > cp.txt 2> /dev/null

[root@sidatabase oradata]# ps u
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root      3297  0.0  0.0   4056   544 tty2     Ss+  Aug09   0:00 /sbin/mingetty
root      3299  0.0  0.0   4056   540 tty3     Ss+  Aug09   0:00 /sbin/mingetty
root      3301  0.0  0.0   4056   540 tty4     Ss+  Aug09   0:00 /sbin/mingetty
root      3303  0.0  0.0   4056   540 tty5     Ss+  Aug09   0:00 /sbin/mingetty
root      3305  0.0  0.0   4056   544 tty6     Ss+  Aug09   0:00 /sbin/mingetty
root      3345  0.0  0.0 129680 25964 tty1     Ss+  Aug09   5:39 /usr/bin/Xorg :
root      6828  0.0  0.0 108452  1932 pts/0    Ss   08:46   0:00 -bash
root     25925 37.0  0.0 113636   896 pts/0    T    11:12   1:36 cp -i system201
root     27324 12.0  0.0 110232  1168 pts/0    R+   11:16   0:00 ps u

When a process is suspended (i.e. stopped), it is assigned a job number (if it does not already have one)
and placed in the background. The jobs command reports the job as "Stopped", and ps confirms that the
process is in the stopped (suspended) state, shown as T in the STAT column.

Restarting a suspended job in the background
A job suspended in the background can be restarted with the bg built-in command. Like fg, bg takes a job
number as its argument, or operates on the current job if no job number is given.

[root@sidatabase oradata]# bg 1
[1]+ cp -i system20130708.dmp / > cp.txt 2> /dev/null &
[root@sidatabase oradata]# jobs
[1]+  Running                 cp -i system20130708.dmp / > cp.txt 2> /dev/null &
[root@sidatabase oradata]#

Job number 1 is now in the running state again.
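The whole start/suspend/resume cycle can also be scripted. The sketch below uses a sleep job, and sends SIGSTOP with kill in place of pressing Ctrl+Z; "set -m" enables job control, which is off by default in non-interactive shells:

```shell
set -m                 # enable job control in a script

sleep 30 &             # start a background job; it becomes job %1
state1=$(jobs)         # jobs reports the job as Running

kill -STOP %1          # suspend the job, like Ctrl+Z on a foreground process
sleep 0.5              # give bash a moment to notice the state change
state2=$(jobs)         # jobs now reports the job as Stopped

bg %1                  # resume the job in the background
sleep 0.5
state3=$(jobs)         # jobs reports the job as Running again

kill %1                # clean up the sleep job
echo "$state1"; echo "$state2"; echo "$state3"
```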

An example of mounting an NTFS external drive on a Linux system

First, download ntfs-3g.

Download address: http://www.tuxera.com/community/ntfs-3g-download/
Step 1: unpack and install NTFS-3G.

tar -xvzf ntfs-3g_ntfsprogs-2013.1.13.tgz

cd ntfs-3g_ntfsprogs-2013.1.13
Run the installation as follows:
  ./configure
  make
  make install
  When the installation reports success, ntfs-3g can be used to read and write NTFS partitions.
Step 2: configure and mount the NTFS external drive

1. First, find the NTFS partition:

  sudo fdisk -l | grep NTFS
[root@node6 ~]# sudo fdisk -l | grep NTFS
/dev/sdd1   *           1      601099   312571136    7  HPFS/NTFS
[root@node6 ~]#

2. Create a mount point, then mount the partition with a command of the form mount -t ntfs-3g <device> <mount point>:

[root@node6 ~]# mkdir -p /jybackup

[root@node6 fuse-2.7.4]# mount -t ntfs-3g /dev/sdd1 /jybackup
This may fail with the following error:
FATAL: Module fuse not found.
ntfs-3g-mount: fuse device is missing, try 'modprobe fuse' as root

This means the fuse kernel module was not found. Download fuse from:

http://jaist.dl.sourceforge.net/sourceforge/fuse/fuse-2.7.4.tar.gz


#tar zxvf fuse-2.7.4.tar.gz

#cd fuse-2.7.4

#./configure --prefix=/usr

#make

#make install

#make clean

Note: do not forget the --prefix=/usr option when running ./configure. Otherwise fuse installs under /usr/local/lib, and you would then need to add /usr/local/lib to /etc/ld.so.conf and run /sbin/ldconfig, or the ntfs-3g installation will fail with errors.

Then try the mount again:

[root@node6 fuse-2.7.4]# mount -t ntfs-3g /dev/sdd1 /jybackup
[root@node6 fuse-2.7.4]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2             125G   13G  106G  11% /
/dev/sda1              99M   18M   76M  20% /boot
tmpfs                  28G     0   28G   0% /dev/shm
/dev/sdc1             111G  104M  105G   1% /backup
/dev/sdd1             299G   95G  204G  32% /jybackup

The output shows that /dev/sdd1 is now mounted on the system.

3. To mount the partition automatically at boot, add a line of the following form to /etc/fstab:
   /dev/sdd1  /jybackup  ntfs-3g  silent,umask=0,locale=zh_CN.utf8  0 0
  The locale option lets Chinese filenames on the NTFS partition display correctly.

4. Unmount the partition with umount, giving either the device or the mount point:
  umount /dev/sdd1   or   umount /jybackup

Copying files between Linux systems with the scp command

Commands for copying files or directories:

  Copying a file:
  (1) Copy a local file to a remote host:
  scp <file> <user>@<host or IP>:<remote path>
        scp /home/test.ora root@10.138.130.29:/home/root
  (2) Copy a remote file back to the local host:
  scp <user>@<host or IP>:<file> <local path>
       scp root@10.138.130.29:/home/root/test.ora /home/
  Copying a directory:
  (1) Copy a local directory to a remote host:
  scp -r <directory> <user>@<host or IP>:<remote path>
        scp -r /home/test.ora root@10.138.130.29:/home/root
  (2) Copy a remote directory back to the local host:
  scp -r <user>@<host or IP>:<directory> <local path>
       scp -r root@10.138.130.29:/home/root /home/

Network settings that let the host reach the virtual machine

NAT mode

 Right-click the network connection icon in the host's taskbar and open the network connections page.
 Enable the "VMware Network Adapter VMnet8" and "VMware Network Adapter VMnet1" connections.
 Right-click "Local Area Connection" and choose Properties to open the "Local Area Connection Properties" dialog. On the "Advanced" tab, check "Allow other network users to connect through this computer's Internet connection", then select VMware Network Adapter VMnet8 in the "Home networking connection" drop-down list.
 In VMware, right-click the virtual machine you want to configure and choose "Settings" (there may be more than one VM). On the "Hardware" tab, select the Ethernet device and, on the right, choose NAT: Used to share the host's IP address. Then click OK.
 Open VMware's Edit menu and choose Virtual Network Settings to open the virtual network editor. On the Automatic Bridging tab, disable automatic bridging and click Apply; on the DHCP tab, start the DHCP service and click Apply; on the NAT tab, start the NAT service and click Apply; then click OK to exit.
 Start the operating system in the virtual machine.
 Set the VM system's IP address to 192.168.100.200 and its default gateway to 192.168.100.1 (the IP address of VMware Network Adapter VMnet8); set its DNS servers to match the host's.
 Reload the network settings, or restart the system in the virtual machine.
Setting a static IP address
Command-line configuration (temporary only; lost after a reboot)
Set the IP address and netmask:
ifconfig <interface, e.g. eth0> <IP address> netmask <netmask>
Set the gateway:
route add default gw <default gateway>
Example: set eth0 to IP 192.168.100.200 with default gateway 192.168.100.1:
ifconfig eth0 192.168.100.200 netmask 255.255.255.0
route add default gw 192.168.100.1
Configuration by editing files (the setting survives a restart of the network interface)
To change the IP address, edit /etc/sysconfig/network-scripts/ifcfg-<interface>.
Assuming the interface name is eth0:
vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0 (the network interface name)
BOOTPROTO=static
ONBOOT=yes (whether to bring the interface up at system boot)
IPADDR=192.168.100.200 (the IP address)
PREFIX=24 (the length of the netmask prefix)
NETMASK=255.255.255.0
GATEWAY=192.168.100.1 (the gateway, i.e. the IP address of VMware Network Adapter VMnet8)
Save and exit.
  #/sbin/service network restart
  If the interface comes up without errors, the IP address has been set successfully. You can also run ifconfig eth0 to display the current IP address and confirm the setting.
  Alternatively, use:
  the /etc/init.d/network reload command, or service network <command>,
  to re-read the file and bring the network up.
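PREFIX and NETMASK in the file above express the same thing in two notations. As an illustration (the helper function name is made up for this sketch), a prefix length can be converted to the equivalent dotted netmask with plain shell arithmetic:

```shell
# Hypothetical helper: convert a prefix length (e.g. PREFIX=24) into the
# equivalent dotted-quad netmask (NETMASK=255.255.255.0).
prefix_to_netmask() {
  local prefix=$1
  # Set the top <prefix> bits of a 32-bit value, clear the rest.
  local mask=$(( 0xffffffff ^ ((1 << (32 - prefix)) - 1) ))
  # Print the four octets, most significant first.
  echo "$(( (mask >> 24) & 255 )).$(( (mask >> 16) & 255 )).$(( (mask >> 8) & 255 )).$(( mask & 255 ))"
}

prefix_to_netmask 24   # -> 255.255.255.0
prefix_to_netmask 16   # -> 255.255.0.0
```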
Disabling the Linux firewall
(1) Permanent (takes effect from the next reboot):
Enable: chkconfig iptables on
Disable: chkconfig iptables off
(2) Immediate (lost after a reboot):
Enable: service iptables start
Disable: service iptables stop
The firewall can also be configured while installing the operating system in the virtual machine.