
Network route configuration and the CentOS 7 network team implementation

2016-09-13

This post walks through static route configuration across several routers and then the CentOS 7 network team (teaming) implementation.

1. Route configuration

Routing is the core of any internetwork. A network without routes is an isolated island, so mastering route configuration is an essential skill for IT staff.

For example, suppose three hosts need to communicate: A and B are on the same network segment, C is on a different segment, and the two segments are separated by three routers. How do we make them reach each other?

[Topology diagram: Host A / Host B -- R1 -- R2 -- R3 -- Host C]

Host A: IP = 192.168.1.100/24

Host B: IP = 192.168.1.63/24

Host C: IP = 10.2.110.100/16

R1 interface 0: IP = 192.168.1.1/24; interface 1: IP = 110.1.24.10/24

R2 interface 0: IP = 110.1.24.20/24; interface 1: IP = 72.98.2.10/16

R3 interface 0: IP = 72.98.70.20/16; interface 1: IP = 10.2.0.1/16

By analysing the network layout above we can derive the routing information for R1, R2 and R3. Here a static routing table is written out for each router (a command-level sketch follows the three tables).

R1 routing table:

Destination        Gateway                        Interface
192.168.1.0/24     0.0.0.0 (directly connected)   eth0
110.1.24.0/24      0.0.0.0 (directly connected)   eth1
72.98.0.0/16       110.1.24.20                    eth1
10.2.0.0/16        110.1.24.20                    eth1
0.0.0.0/0          110.1.24.20                    eth1

R2 routing table:

Destination        Gateway                        Interface
192.168.1.0/24     110.1.24.10                    eth0
110.1.24.0/24      0.0.0.0 (directly connected)   eth0
72.98.0.0/16       0.0.0.0 (directly connected)   eth1
10.2.0.0/16        72.98.70.20                    eth1
0.0.0.0/0          upstream/external IP (not shown here)

R3 routing table:

Destination        Gateway                        Interface
192.168.1.0/24     72.98.2.10                     eth0
110.1.24.0/24      72.98.2.10                     eth0
72.98.0.0/16       0.0.0.0 (directly connected)   eth0
10.2.0.0/16        0.0.0.0 (directly connected)   eth1
0.0.0.0/0          72.98.2.10                     eth0
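Each gateway entry in these tables maps to one static route command on the corresponding router. A minimal sketch for R1's two indirect routes, written in iproute2 syntax (the legacy route command used in the lab below is equivalent; the interface name follows the node1 setup, where eth2 carries 110.1.24.10):

ip route add 72.98.0.0/16 via 110.1.24.20 dev eth2    # networks behind R2
ip route add 10.2.0.0/16 via 110.1.24.20 dev eth2     # Host C's network, also reached via R2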

Three CentOS systems are used as the routers here.

node1 acts as router R1:

[root@node1 ~]# ip addr show dev eth1
3: eth1: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:e2:96:7c brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.1/24 scope global eth1
    inet6 fe80::20c:29ff:fee2:967c/64 scope link
       valid_lft forever preferred_lft forever
[root@node1 ~]# ip addr show dev eth2
4: eth2: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:e2:96:86 brd ff:ff:ff:ff:ff:ff
    inet 110.1.24.10/24 scope global eth2
    inet6 fe80::20c:29ff:fee2:9686/64 scope link
       valid_lft forever preferred_lft forever
[root@node1 ~]# route add -net 10.2.0.0/16 gw 110.1.24.20 dev eth2
[root@node1 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 eth1
110.1.24.0      0.0.0.0         255.255.255.0   U     0      0        0 eth2
10.2.0.0        110.1.24.20     255.255.0.0     UG    0      0        0 eth2
72.98.0.0       110.1.24.20     255.255.0.0     UG    0      0        0 eth2
10.1.0.0        0.0.0.0         255.255.0.0     U     0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0
[root@node1 ~]# echo 1 > /proc/sys/net/ipv4/ip_forward
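Note that writing 1 to /proc/sys/net/ipv4/ip_forward enables forwarding only until the next reboot. A minimal sketch of making it persistent, assuming the stock /etc/sysctl.conf is in use:

echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf
sysctl -p    # reload the kernel parameters from /etc/sysctl.conf immediately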

node2 acts as router R2:

[root@node2 ~]# ip addr add 110.1.24.20/24 dev eth1
[root@node2 ~]# ip addr add 72.98.2.10/16 dev eth2
[root@node2 ~]# ip addr show dev eth1
3: eth1: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:00:90:24 brd ff:ff:ff:ff:ff:ff
    inet 110.1.24.20/24 scope global eth1
    inet6 fe80::20c:29ff:fe00:9024/64 scope link
       valid_lft forever preferred_lft forever
[root@node2 ~]# ip addr show dev eth2
4: eth2: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:00:90:2e brd ff:ff:ff:ff:ff:ff
    inet 72.98.2.10/16 scope global eth2
    inet6 fe80::20c:29ff:fe00:902e/64 scope link
       valid_lft forever preferred_lft forever
[root@node2 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
110.1.24.0      0.0.0.0         255.255.255.0   U     0      0        0 eth1
72.98.0.0       0.0.0.0         255.255.0.0     U     0      0        0 eth2
10.1.0.0        0.0.0.0         255.255.0.0     U     0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1003   0        0 eth1
[root@node2 ~]# route add -net 192.168.1.0/24 gw 110.1.24.10 dev eth1
[root@node2 ~]# route add -net 10.2.0.0/16 gw 72.98.70.20 dev eth2
[root@node2 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.1.0     110.1.24.10     255.255.255.0   UG    0      0        0 eth1
110.1.24.0      0.0.0.0         255.255.255.0   U     0      0        0 eth1
10.2.0.0        72.98.70.20     255.255.0.0     UG    0      0        0 eth2
72.98.0.0       0.0.0.0         255.255.0.0     U     0      0        0 eth2
10.1.0.0        0.0.0.0         255.255.0.0     U     0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1003   0        0 eth1
[root@node2 ~]# echo 1 > /proc/sys/net/ipv4/ip_forward

node3 acts as router R3:

[root@node3 ~]# ip addr add 72.98.70.20/16 dev eth1
[root@node3 ~]# ip addr add 10.2.0.1/16 dev eth2
[root@node3 ~]# ip addr show dev eth1
3: eth1: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:47:d8:e1 brd ff:ff:ff:ff:ff:ff
    inet 72.98.70.20/16 scope global eth1
    inet6 fe80::20c:29ff:fe47:d8e1/64 scope link
       valid_lft forever preferred_lft forever
[root@node3 ~]# ip addr show dev eth2
4: eth2: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:47:d8:eb brd ff:ff:ff:ff:ff:ff
    inet 10.2.0.1/16 scope global eth2
    inet6 fe80::20c:29ff:fe47:d8eb/64 scope link
       valid_lft forever preferred_lft forever
[root@node3 ~]# route add -net 110.1.24.0/24 gw 72.98.2.10 dev eth1
[root@node3 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.1.0     72.98.2.10      255.255.255.0   UG    0      0        0 eth1
110.1.24.0      72.98.2.10      255.255.255.0   UG    0      0        0 eth1
10.2.0.0        0.0.0.0         255.255.0.0     U     0      0        0 eth2
72.98.0.0       0.0.0.0         255.255.0.0     U     0      0        0 eth1
10.1.0.0        0.0.0.0         255.255.0.0     U     0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1003   0        0 eth1
[root@node3 ~]# echo 1 > /proc/sys/net/ipv4/ip_forward
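The static routes added with route add are likewise gone after a reboot. On CentOS they can be stored in a route-<interface> file that the ifup scripts read at boot; a sketch for node3, assuming the usual network-scripts path and the ip-style file format:

# /etc/sysconfig/network-scripts/route-eth1 on node3
192.168.1.0/24 via 72.98.2.10 dev eth1
110.1.24.0/24 via 72.98.2.10 dev eth1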

Host A:

[root@host1 ~]# ip addr add 192.168.1.100/24 dev eno33554984
[root@host1 ~]# ip route add default via 192.168.1.1
[root@host1 ~]# ip addr show dev eno33554984
3: eno33554984: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:2b:82:a6 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.100/24 scope global eno33554984
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe2b:82a6/64 scope link
       valid_lft forever preferred_lft forever
[root@host1 ~]# route -n
-bash: route: command not found
[root@host1 ~]# ip route show
10.1.0.0/16 dev eno16777736 proto kernel scope link src 10.1.70.171 metric 100
192.168.1.0/24 dev eno33554984 proto kernel scope link src 192.168.1.100
0.0.0.0 via 192.168.1.1 dev eno33554984
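The "route: command not found" message appears because a minimal CentOS 7 install no longer ships net-tools; ip route (used above) is its replacement. If the legacy tool is wanted anyway, a sketch:

yum install -y net-tools    # provides the legacy route / ifconfig / netstat commands
route -n                    # then prints the kernel routing table as on the other hosts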

Host B:

[root@host2 ~]# ip addr show dev eno33554984
3: eno33554984: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:aa:22:47 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.63/24 scope global eno33554984
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:feaa:2247/64 scope link
       valid_lft forever preferred_lft forever
[root@host2 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
10.1.0.0        0.0.0.0         255.255.0.0     U     100    0        0 eno16777736
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 eno33554984
0.0.0.0         192.168.1.1     255.255.255.255 UGH   0      0        0 eno33554984

Host C:

root@debian:~# ip addr show dev eth1
3: eth1: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:f1:04:08 brd ff:ff:ff:ff:ff:ff
    inet 10.2.110.100/16 scope global eth1
       valid_lft forever preferred_lft forever
root@debian:~# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
10.1.0.0        0.0.0.0         255.255.0.0     U     0      0        0 eth0
10.2.0.0        0.0.0.0         255.255.0.0     U     0      0        0 eth1
0.0.0.0         10.2.0.1        255.255.255.255 UGH   0      0        0 eth1
root@debian:~#
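To make Host C's address and gateway survive a reboot, Debian's classic ifupdown configuration can be used; a sketch of /etc/network/interfaces, assuming eth1 is managed by ifupdown rather than NetworkManager:

# excerpt from /etc/network/interfaces on Host C
auto eth1
iface eth1 inet static
    address 10.2.110.100
    netmask 255.255.0.0
    gateway 10.2.0.1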

At this point all the configuration is in place; disable the firewall and SELinux on every host, as sketched below.
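A minimal, non-persistent way to do that on the CentOS machines for the duration of the test:

systemctl stop firewalld    # CentOS 7; on a CentOS 6 node this would be: service iptables stop
setenforce 0                # put SELinux into permissive mode until the next reboot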

Testing:

On Host C:

root@debian:~# ping -I eth1 192.168.1.1
PING 192.168.1.1 (192.168.1.1) from 10.2.110.100 eth1: 56(84) bytes of data.
64 bytes from 192.168.1.1: icmp_seq=1 ttl=62 time=0.691 ms
64 bytes from 192.168.1.1: icmp_seq=2 ttl=62 time=1.17 ms
^C
--- 192.168.1.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.691/0.931/1.171/0.240 ms
root@debian:~# ping -I eth1 192.168.1.63
PING 192.168.1.63 (192.168.1.63) from 10.2.110.100 eth1: 56(84) bytes of data.
64 bytes from 192.168.1.63: icmp_seq=1 ttl=61 time=1.22 ms
64 bytes from 192.168.1.63: icmp_seq=2 ttl=61 time=0.927 ms
^C
--- 192.168.1.63 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.927/1.074/1.221/0.147 ms
root@debian:~# ping -I eth1 192.168.1.100
PING 192.168.1.100 (192.168.1.100) from 10.2.110.100 eth1: 56(84) bytes of data.
64 bytes from 192.168.1.100: icmp_seq=1 ttl=61 time=1.21 ms
64 bytes from 192.168.1.100: icmp_seq=2 ttl=61 time=1.78 ms
^C
--- 192.168.1.100 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 1.214/1.497/1.780/0.283 ms
root@debian:~#

On Host A:

[root@host1 ~]# ping -I eno33554984 10.2.110.100
PING 10.2.110.100 (10.2.110.100) from 192.168.1.100 eno33554984: 56(84) bytes of data.
64 bytes from 10.2.110.100: icmp_seq=1 ttl=61 time=0.985 ms
64 bytes from 10.2.110.100: icmp_seq=2 ttl=61 time=1.09 ms
64 bytes from 10.2.110.100: icmp_seq=3 ttl=61 time=1.89 ms
64 bytes from 10.2.110.100: icmp_seq=4 ttl=61 time=2.00 ms
^C
--- 10.2.110.100 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3005ms
rtt min/avg/max/mdev = 0.985/1.493/2.008/0.459 ms
[root@host1 ~]#

On Host B:

[root@host2 ~]# ping -I eno33554984 10.2.110.100
PING 10.2.110.100 (10.2.110.100) from 192.168.1.63 eno33554984: 56(84) bytes of data.
64 bytes from 10.2.110.100: icmp_seq=1 ttl=61 time=1.15 ms
64 bytes from 10.2.110.100: icmp_seq=2 ttl=61 time=1.93 ms
64 bytes from 10.2.110.100: icmp_seq=3 ttl=61 time=0.979 ms
^C
--- 10.2.110.100 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 0.979/1.355/1.930/0.412 ms
[root@host2 ~]# ping -I eno33554984 72.98.70.20
PING 72.98.70.20 (72.98.70.20) from 192.168.1.63 eno33554984: 56(84) bytes of data.
64 bytes from 72.98.70.20: icmp_seq=1 ttl=62 time=0.751 ms
64 bytes from 72.98.70.20: icmp_seq=2 ttl=62 time=0.807 ms
64 bytes from 72.98.70.20: icmp_seq=3 ttl=62 time=1.33 ms
^C
--- 72.98.70.20 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.751/0.964/1.335/0.264 ms
[root@host2 ~]# ping -I eno33554984 72.98.70.10    ### no reply: 72.98.70.10 is not assigned to any interface in this topology, so R2, which is directly connected to 72.98.0.0/16, answers Destination Host Unreachable
PING 72.98.70.10 (72.98.70.10) from 192.168.1.63 eno33554984: 56(84) bytes of data.
From 110.1.24.20 icmp_seq=1 Destination Host Unreachable
From 110.1.24.20 icmp_seq=2 Destination Host Unreachable
From 110.1.24.20 icmp_seq=3 Destination Host Unreachable
^C
--- 72.98.70.10 ping statistics ---
5 packets transmitted, 0 received, +3 errors, 100% packet loss, time 4002ms
pipe 4
[root@host2 ~]# ping -I eno33554984 110.1.24.20
PING 110.1.24.20 (110.1.24.20) from 192.168.1.63 eno33554984: 56(84) bytes of data.
64 bytes from 110.1.24.20: icmp_seq=1 ttl=63 time=0.556 ms
64 bytes from 110.1.24.20: icmp_seq=2 ttl=63 time=2.15 ms
64 bytes from 110.1.24.20: icmp_seq=3 ttl=63 time=0.972 ms
^C
--- 110.1.24.20 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 0.556/1.228/2.157/0.678 ms
[root@host2 ~]# ping -I eno33554984 110.1.24.10
PING 110.1.24.10 (110.1.24.10) from 192.168.1.63 eno33554984: 56(84) bytes of data.
64 bytes from 110.1.24.10: icmp_seq=1 ttl=64 time=0.282 ms
64 bytes from 110.1.24.10: icmp_seq=2 ttl=64 time=0.598 ms
64 bytes from 110.1.24.10: icmp_seq=3 ttl=64 time=0.367 ms
^C
--- 110.1.24.10 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.282/0.415/0.598/0.135 ms
[root@host2 ~]#
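Besides ping, traceroute can confirm which routers the packets actually pass through, assuming the traceroute package is installed on Host B:

traceroute -n 10.2.110.100    # -n skips DNS lookups; each line is one router on the way to Host C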

2. CentOS 7 network team implementation

A network team is similar to bonding in CentOS 6: several NICs share one IP address, which is one way of making the network more robust.

A network team aggregates multiple NICs to provide fault tolerance and higher throughput.

Teaming is not the same as the older bonding technique; it offers better performance and extensibility.

It is implemented by a kernel driver plus the teamd daemon; the package name is teamd.

Behaviour to keep in mind:

- Starting the team interface does not automatically start its port interfaces.
- Starting a port interface does not automatically start the team interface.
- Disabling the team interface automatically disables its port interfaces.
- A team interface without port interfaces can bring up static IP connections.
- With a DHCP connection, a team without port interfaces waits for ports to be added.

The available runner modes and their options are documented in man 5 teamd.conf.
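For reference, the nmcli command used in the next step accepts the other runner names documented there as well; a couple of sketches, where the connection names (team-rr, team-lacp) and the interface name team1 are only placeholders:

# round-robin load sharing across all ports
nmcli con add type team con-name team-rr ifname team1 config '{"runner":{"name":"roundrobin"}}'
# 802.3ad LACP; requires matching configuration on the switch
nmcli con add type team con-name team-lacp ifname team1 config '{"runner":{"name":"lacp"}}'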

Create the team interface:

[root@linux ~]# nmcli con add type team con-name test ifname team0 config '{"runner":{"name":"activebackup"}}'
Connection 'test' (5a3bfb26-993f-45ad-add6-246ff419e7bd) successfully added.

This creates a configuration file under the network-scripts directory:

[root@linux ~]# ls /etc/sysconfig/network-scripts/ifcfg-test
/etc/sysconfig/network-scripts/ifcfg-test
[root@linux ~]# nmcli dev show team0
GENERAL.DEVICE:       team0
GENERAL.TYPE:         team
GENERAL.HWADDR:       82:D0:69:2C:48:6E
GENERAL.MTU:          1500
GENERAL.STATE:        70 (connecting (getting IP configuration))
GENERAL.CONNECTION:   test
GENERAL.CON-PATH:     /org/freedesktop/NetworkManager/ActiveConnection/3
[root@linux ~]# nmcli con show
NAME         UUID                                  TYPE            DEVICE
eno33554984  fb67dbad-ec81-39b4-42b1-ebf975c3ff13  802-3-ethernet  eno33554984
eno16777736  d329fbf7-4423-4a10-b097-20b266c26768  802-3-ethernet  eno16777736
eno50332208  d2665055-8e83-58f1-e9e3-49a5fb133641  802-3-ethernet  eno50332208
test         5a3bfb26-993f-45ad-add6-246ff419e7bd  team            team0

Give team0 a static IP and make it start at boot:

[root@linux ~]# nmcli con mod test ipv4.method manual ipv4.addresses "10.1.70.24/16" connection.autoconnect yes
[root@linux ~]# cat /etc/sysconfig/network-scripts/ifcfg-test
DEVICE=team0
TEAM_CONFIG="{\"runner\":{\"name\":\"activebackup\"}}"
DEVICETYPE=Team
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
NAME=test
UUID=5a3bfb26-993f-45ad-add6-246ff419e7bd
ONBOOT=yes
IPADDR=10.1.70.24
PREFIX=16
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
[root@linux ~]#
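If the team also needs a default gateway and name servers, the same nmcli con mod approach works; a sketch with placeholder addresses (10.1.0.1 is only an example):

nmcli con mod test ipv4.gateway 10.1.0.1 ipv4.dns 10.1.0.1
nmcli con up test    # re-activate the connection so the modified settings take effect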

Create the two port interfaces:

[root@linux ~]# nmcli con add type team-slave con-name test-1 ifname eno33554984 master team0
Connection 'test-1' (234c3e91-d90d-421c-ae88-133deddfce94) successfully added.
[root@linux ~]# nmcli con add type team-slave con-name test-2 ifname eno50332208 master team0
Connection 'test-2' (116ef596-d983-456c-a6ae-a74a4f8c03dc) successfully added.
[root@linux ~]#
[root@linux ~]# cat /etc/sysconfig/network-scripts/ifcfg-test-1
NAME=test-1
UUID=234c3e91-d90d-421c-ae88-133deddfce94
DEVICE=eno33554984
ONBOOT=yes
TEAM_MASTER=team0
DEVICETYPE=TeamPort
[root@linux ~]# cat /etc/sysconfig/network-scripts/ifcfg-test-2
NAME=test-2
UUID=116ef596-d983-456c-a6ae-a74a4f8c03dc
DEVICE=eno50332208
ONBOOT=yes
TEAM_MASTER=team0
DEVICETYPE=TeamPort

Check the team status:

[root@linux ~]# teamdctl team0 stat
setup:
  runner: activebackup
runner:
  active port:

Neither port interface is active yet.

Bring up the port interfaces:

[root@linux ~]# nmcli con up test-1
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/5)
[root@linux ~]# nmcli con up test-2
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/7)
[root@linux ~]# teamdctl team0 stat
setup:
  runner: activebackup
ports:
  eno33554984
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 0
  eno50332208
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 0
runner:
  active port: eno33554984

Both port interfaces are now up.

[root@linux ~]# ping -I team0 10.1.70.172
PING 10.1.70.172 (10.1.70.172) from 10.1.70.24 team0: 56(84) bytes of data.
64 bytes from 10.1.70.172: icmp_seq=1 ttl=64 time=0.500 ms
64 bytes from 10.1.70.172: icmp_seq=2 ttl=64 time=0.804 ms
^C
--- 10.1.70.172 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.500/0.652/0.804/0.152 ms
[root@linux ~]#

The configuration works, and the currently active port is eno33554984. Next, test whether the team still works after that port is disabled:

[root@linux ~]# nmcli device disconnect eno33554984
Device 'eno33554984' successfully disconnected.
[root@linux ~]# ping -I team0 10.1.70.172
PING 10.1.70.172 (10.1.70.172) from 10.1.70.24 team0: 56(84) bytes of data.
The failover test does not succeed. After looking it up: when the activebackup runner is used, an extra parameter (hwaddr_policy) has to be added to the team config:
[root@linux ~]# nmcli con modify test team.config '{"runner":{"name":"activebackup","hwaddr_policy":"by_active"}}'
[root@linux ~]# cat /etc/sysconfig/network-scripts/ifcfg-test
DEVICE=team0
TEAM_CONFIG="{\"runner\":{\"name\":\"activebackup\",\"hwaddr_policy\":\"by_active\"}}"
DEVICETYPE=Team
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
NAME=test
UUID=5a3bfb26-993f-45ad-add6-246ff419e7bd
ONBOOT=yes
IPADDR=10.1.70.24
PREFIX=16
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
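After changing team.config the connection has to be re-activated before the new setting takes effect; a sketch of repeating the failover test with the names used above:

nmcli con up test                      # re-activate the team with the new hwaddr_policy
nmcli con up test-1 && nmcli con up test-2
nmcli device disconnect eno33554984    # take the active port down again
ping -I team0 10.1.70.172              # traffic should now fail over to eno50332208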