A Dual-Machine (HA) Solution with IBM VIOS (ibm VIOS实现双机的方案.pdf)
PDF, 59 pages, 1.60 MB, uploaded 2019-08-21.

A Discussion of Implementing Dual-Machine (HA) Clusters with VIOS
Gao Zhi Qiang

Agenda
- VIOS basics
- How many VIOS partitions does an HA pair of LPARs need?
- How many heartbeat links / network adapters / HBAs?
- Accessing storage: vSCSI or NPIV? Which performs better?

What is PowerVM?
Hardware and software that delivers industry-leading virtualization on IBM POWER processor-based servers for UNIX, Linux and i5/OS customers.

PowerVM Editions feature:
- Micro-Partitioning
- Virtual I/O Server
- Integrated Virtualization Manager
- Live Partition Mobility
- Active Memory Sharing
- Lx86 (formerly System p AVE)

PowerVM Editions:
- PowerVM Express Edition
- PowerVM Standard Edition
- PowerVM Enterprise Edition

IBM virtualization milestones - a 40-year tradition culminates with PowerVM
- 1967: IBM develops the hypervisor that would become VM on the mainframe
- 1973: IBM announces the first machines to do physical partitioning
- 1987: IBM announces LPAR on the mainframe
- 1999: IBM announces LPAR on POWER
- 2004: IBM introduces the POWER Hypervisor for System p and System i
- August 2007: IBM announces POWER6, the first UNIX servers with Live Partition Mobility
- 2008: IBM announces PowerVM

"... ensures that we are making the best possible use of hardware resources across our entire environment." - T N Rangarajan, VP of IT, Brakes India (client quote source: Brakes India case study published at http:/ )

PowerVM Edition comparison

                                 Express Edition   Standard Edition   Enterprise Edition
Servers supported                p520 / p550       p5, p6 and JS2X    p6 and JS22
Max LPARs                        3 / server        10 / core          10 / core
Management                       IVM               IVM & HMC          IVM & HMC
VIOS                             Yes               Yes                Yes
Live Partition Mobility          No                No                 Yes
Multiple Shared Processor Pools  No                Yes (HMC)          Yes (HMC)
Shared Dedicated Capacity        Yes               Yes                Yes
PowerVM Lx86                     Yes               Yes                Yes
Operating systems                AIX & Linux       AIX & Linux        AIX & Linux

Virtual I/O Server (VIOS) basics
- A Power LPAR-based I/O virtualization appliance
- Facilitates sharing physical I/O resources amongst LPARs
- Runs on Power5, Power6, Power7 and Blade systems
- VIOS serves the AIX, Linux, and IBM i operating systems
- Multiple VIOS instances per CEC
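The VIOS is administered from its restricted command line (the ioscli shell of the padmin user). As a minimal sketch, the following inspection commands show what a given VIOS instance is serving; device names in the output are examples only:

```shell
# Run as padmin in the VIOS restricted shell (ioscli).
ioslevel                 # VIOS software level
lsdev -type adapter      # physical and virtual adapters owned by this VIOS
lsdev -type disk         # physical volumes available for mapping
lsmap -all               # vhost (vSCSI server) adapters and their client mappings
lsmap -all -npiv         # virtual FC (NPIV) mappings, if any
```

lsmap ties each server adapter (vhost/vfchost) to its backing devices, which is the quickest way to audit a VIOS pair before planning a dual-machine layout.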

- Typically deployed in pairs
- Packaged with the PowerVM editions (an optional platform feature)
- Virtual I/O, storage: vSCSI (storage virtualizer), NPIV (pass-through)
- Virtual I/O, networking: Shared Ethernet Adapter (SEA), Integrated Virtual Ethernet (IVE)

Virtualization of POWER5 and POWER6 servers is accomplished using two layers of firmware:
- A thin core hypervisor that virtualizes processors, memory, and local networks
- One or more Virtual I/O Server partitions that virtualize I/O adapters and devices

Power Systems isolates I/O virtualization with the Virtual I/O Server
[Diagram: POWER server hardware (processors, memory, I/O expansion slots, service subsystem) runs the POWER Hypervisor firmware; Virtual I/O Servers turn physical adapters into virtual adapters, disks and networks for the AIX, i5/OS and Linux partitions; unassigned resources remain available on demand; the Hardware Management Console provides workload management and provisioning; the server connects to networks, networked storage, and local devices and storage.]

VIOS virtual storage
vSCSI (Virtual SCSI) and NPIV (N_Port ID Virtualization) are two different methods of virtualizing physical storage resources.
- vSCSI: storage virtualizer
- NPIV: pass-through

vSCSI specifics
- Storage virtualizer: FC, SCSI, iSCSI, SAS, SATA, USB
- SCSI target
- SCSI peripheral device types:
  - Disk (backed by a physical volume, logical volume, or file)
  - Optical (backed by a physical optical device, or a file)
  - Tape (backed by a physical tape device)
- Adapter- and device-level sharing

NPIV basics
N_Port ID Virtualization (NPIV) is a Fibre Channel industry-standard method for virtualizing a physical Fibre Channel port. NPIV allows multiple N_Port IDs to share a physical Fibre Channel port, enabling physical Fibre Channel HBAs to be shared across multiple guest operating systems in the virtualized Power environment. On POWER, NPIV allows logical partitions (LPARs) to have dedicated N_Port IDs, giving each OS a unique identity to the SAN, just as if it had its own dedicated physical HBA(s).

Virtual storage
[Diagram: a POWER5 server hosting AIX 5L V5.3 and Linux micro-partitions; the VIOS shares a Fibre Channel adapter and a SCSI adapter, exporting LUNs A1-A3 from external storage and B1-B3 to the clients over vSCSI and vLAN through the POWER Hypervisor.]

- Multiple LPARs can use the same or different physical disks
- Configured as logical volumes on the VIOS; each appears as an hdisk on the micro-partition
- An entire hdisk can also be assigned to a single client
- The VIOS owns the physical disk resources (LVM-based storage on the VIO Server)
- Physical storage can be SCSI or FC, local or remote
- A micro-partition sees the disks as vSCSI (Virtual SCSI) devices; virtual SCSI devices are added to a partition via the HMC
- LUNs on the VIOS are accessed as vSCSI disks
- The VIOS must be active for the client to boot
- Virtual I/O helps reduce hardware costs by sharing disk drives
- Available via the optional Advanced POWER Virtualization or POWER Hypervisor and VIOS features

Virtual SCSI basic architecture
[Diagram: an AIX partition's vSCSI client adapter talks through the POWER5 Hypervisor to a vSCSI server adapter on the Virtual I/O Server; vSCSI target devices map LV, PV, or optical backing devices (through the LVM, multi-path or disk drivers, and the physical FC/SCSI adapter drivers) to the client's hdisk and DVD/optical devices.]

Virtual SCSI optical devices
A DVD or CD device can be virtualized and assigned to virtual I/O clients. Only one virtual I/O client can have access to the drive at a time. The advantage of a virtual optical device is that you do not have to move the parent SCSI adapter between virtual I/O clients.

Virtual SCSI options (1): Single VIOS, LV VSCSI disks
- Complexity: simpler to set up and manage than dual VIOS; no specialized setup on the client
- Resilience: the VIOS, SCSI adapter and SCSI disks are potential single points of failure; the loss of a physical disk may impact more than one client
- Throughput/scalability: performance is limited by the single SCSI adapter and internal SCSI disks
- Notes: low-cost disk alternative

Virtual SCSI options (1) - details
[Diagram: VIOS 1 carves logical volumes from its internal A and B disks (scsi0) and exports them as vtscsi0/vhost0 and vtscsi1/vhost1; AIX client LPARs A and B each see hdisk0 through their vscsi0 adapter.]

Virtual SCSI options (2): Single VIOS, PV VSCSI disks
- Complexity: simpler to set up and manage than dual VIOS; no specialized setup on the client
- Resilience: the VIOS, SCSI adapter and SCSI disks are potential single points of failure; the loss of a single physical client disk affects only that client
- Throughput/scalability: performance is limited by the single SCSI adapter and internal SCSI disks

Virtual SCSI options (2) - details
[Diagram: as in option (1), but VIOS 1 exports the whole physical disks (A disk, B disk) as vtscsi0/vhost0 and vtscsi1/vhost1 to clients A and B.]

Virtual SCSI options (3): Single VIOS with Multi-Path I/O
- Complexity: simpler to set up and manage than dual VIO servers; requires Multi-Path I/O setup on the VIOS; no specialized setup on the client
- Resilience: the VIOS is a single point of failure
- Throughput/scalability: potential for increased bandwidth due to multi-path I/O; clients could be divided across independent VIOSs,
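Options (1) and (2) differ only in what backs the virtual target device. The mappings can be sketched from the VIOS command line as follows, assuming hypothetical names (datavg, lv_clientA, hdisk2/hdisk3, vhost0/vhost1):

```shell
# On the VIOS, as padmin.

# LV-backed vSCSI disk (option 1): carve a logical volume and export it.
mkvg -vg datavg hdisk2                      # volume group on an internal disk
mklv -lv lv_clientA datavg 20G              # logical volume for client A
mkvdev -vdev lv_clientA -vadapter vhost0 -dev vtscsi0

# PV-backed vSCSI disk (option 2): export a whole physical volume.
mkvdev -vdev hdisk3 -vadapter vhost1 -dev vtscsi1

lsmap -all                                  # verify the vtscsi -> vhost mappings
```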

  allowing more VIOS adapter bandwidth

Virtual SCSI options (3) - details
[Diagram: VIOS 1 reaches PV LUNs (A disk, B disk) through two FC adapters, fcs0 and fcs1, under a multi-path driver, and exports them as vtscsi0/vhost0 and vtscsi1/vhost1; AIX client LPARs 1 and 2 each see hdisk0 through vscsi0.]

Virtual SCSI options (4): AIX client mirroring with direct-attach SCSI and VIOS PV VSCSI disks
- Complexity: requires LVM mirroring to be set up on the VIOC, plus Multi-Path I/O setup on the VIOS; if a VIOS is rebooted, the mirrored disks must be resynchronized via a varyonvg on the VIOC; additional complexity due to the mix of disk types, the Multi-Path I/O setup, and the client mirroring
- Resilience: protection against the failure of a single adapter (or path) or disk; potential protection against FC adapter failures within the VIOS (if Multi-Path I/O is configured)
* Note: see the slide labeled "VIOS Multi-Path Options" for a high-level overview of MPATH options.
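The varyonvg resynchronization mentioned above happens on the AIX client, not on the VIOS. A hedged sketch of the client-side mirroring and resync, with a hypothetical volume group datavg mirrored across hdisk0 (direct-attach) and hdisk1 (VIOS-served):

```shell
# On the AIX client (VIOC); datavg and hdisk1 are illustrative names.
extendvg datavg hdisk1     # add the second disk to the volume group
mirrorvg datavg hdisk1     # mirror all logical volumes onto it

# After the VIOS (or the direct-attach path) returns from a reboot,
# reactivate the volume group so LVM resynchronizes stale partitions:
varyonvg datavg
syncvg -v datavg           # force-sync any copies still marked stale
```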

Virtual SCSI options (4) - details
[Diagram: AIX client LPARs 1 and 2 each mirror (LVM mirroring) a direct-attach SCSI disk (hdisk0 or hdisk1 on scsi0) with a VIOS-served PV VSCSI disk; VIOS 1 reaches the PV LUNs (A disk, B disk) through fcs0/fcs1 under a multi-path driver and exports them as vtscsi0/vhost0 and vtscsi1/vhost1.]

Virtual SCSI options (5): AIX client mirroring, single path in VIOS, PV VSCSI disks
- Complexity: more complicated than a single VIO server, but does not require SAN ports or setup; requires LVM mirroring to be set up on the client; if a VIOS is rebooted, the mirrored disks must be resynchronized via a varyonvg on the VIOC
- Resilience: protection against a single VIOS / SCSI disk / SCSI controller failure; the loss of a single physical disk affects only one client
- Throughput/scalability: VIOS performance is limited by the single SCSI adapter and internal SCSI disks

Virtual SCSI options (5) - details
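In the dual-VIOS mirroring layouts that follow, the mirrored volume group is often rootvg, in which case the client also needs a boot image and boot list covering both copies. A sketch, assuming hdisk0 and hdisk1 are the two vSCSI disks served by the two VIOSs:

```shell
# On the AIX client: make both mirror copies bootable.
mirrorvg rootvg hdisk1
bosboot -ad /dev/hdisk1               # write a boot image to the second disk
bootlist -m normal hdisk0 hdisk1      # allow booting from either disk
bootlist -m normal -o                 # display the resulting boot order
```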

AIX client mirroring, single path in VIOS, PV VSCSI disks
[Diagram: two VIO servers; AIX client LPAR A mirrors hdisk0 (from VIOS 1 via vscsi0) with hdisk1 (from VIOS 2 via vscsi1) using LVM mirroring; each VIOS exports its internal A and B disks (scsi0) as vtscsi0/vhost0 and vtscsi1/vhost1.]

Virtual SCSI options (6): AIX client mirroring, Multi-Path I/O in VIOS, LV or PV VSCSI FC disks
- Complexity: requires LVM mirroring to be set up on the VIOC, and Multi-Path I/O setup on the VIOS; if a VIOS is rebooted, the mirrored disks must be resynchronized via a varyonvg on the VIOC
- Resilience: protection against the failure of a single VIOS / FC adapter (or path); protection against FC adapter failures within a VIOS
- Throughput/scalability: potential for increased bandwidth due to multi-path I/O
- Notes: LUNs used for this purpose can only be assigned to a single VIOS; the LV VSCSI LUNs could also be PV VSCSI LUNs.*
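On the VIOS side, multipathed SAN LUNs are usually prepared by relaxing SCSI reserves and enabling fast failover on the FC protocol devices. The attribute values below are common practice rather than taken from this deck, and the device names are hypothetical:

```shell
# On each VIOS, as padmin; hdisk4 and fscsi0 are illustrative names.
chdev -dev hdisk4 -attr reserve_policy=no_reserve -perm
    # no_reserve lets more than one path (or VIOS) open the same LUN
chdev -dev fscsi0 -attr fc_err_recov=fast_fail dyntrk=yes -perm
    # fail broken paths quickly and track SAN address changes
lsdev -dev hdisk4 -attr reserve_policy      # verify the setting
```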

* Note: see the slide labeled "VIOS Multi-Path Options" for a high-level overview of MPATH options.

Virtual SCSI options (6) - details
[Diagram: two VIO servers each reach the same PV LUNs (A disk, B disk) through fcs0/fcs1 under a multi-path driver and export them as vtscsi0/vhost0 and vtscsi1/vhost1; AIX client LPARs 1 and 2 mirror hdisk0 (via vscsi0 from VIOS 1) with hdisk1 (via vscsi1 from VIOS 2) using LVM mirroring.]

NPIV specifics
- Fibre Channel industry standard for adapter sharing
- Pass-through model
- Unique WWPN generation (allocated in pairs)
- Each virtual FC HBA has a unique and persistent identity
- Each physical NPIV-capable FC HBA will support 64 virtual ports
- HMC-managed and IVM-managed servers

vSCSI versus NPIV
The vSCSI model for sharing storage resources is a storage virtualizer. Heterogeneous storage is pooled by the VIOS into a homogeneous pool of block storage and then allocated to client LPARs in the form of generic SCSI LUNs; the VIOS performs SCSI emulation and acts as the SCSI target. With NPIV, the VIOS's role is fundamentally different. The VIOS facilitates adapter sharing only; there is no device-level abstraction or emulation. Rather than a storage virtualizer, the VIOS serving NPIV is a pass-through, extending an FCP connection from the client LPAR to the SAN.
[Diagram: with vSCSI, the vio client sees generic SCSI disks served by the VIOS's FC HBAs from EMC and IBM 2105 arrays on the SAN; with NPIV, the client's virtual FC (VFC) adapters carry FCP end to end, so the client logs in to the SAN and sees the EMC and IBM 2105 controllers itself.]

Data flow using LRDMA for vSCSI and NPIV
[Diagram: for both vSCSI and VFC, control traffic flows from the client through the VIOS's physical adapter driver, while data moves directly between the client's data buffer and the physical adapter via logical redirected DMA (LRDMA) through the hypervisor (phyp). No data copy is required.]

Heterogeneous multipathing
[Diagram: an AIX LPAR reaches the same storage controller over two kinds of paths: an NPIV virtual FC path through VIOS #1 and its NPIV-capable Fibre HBA, and a directly attached Fibre HBA, each via SAN switch ports A, B, C, D; a pass-thru module sits in the AIX multipathing stack.]
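The NPIV pass-through model above is configured on the VIOS by binding each client's virtual FC server adapter to an NPIV-capable physical port. A sketch with illustrative adapter names (vfchost0, fcs0):

```shell
# On the VIOS, as padmin.
lsnports                              # physical FC ports, fabric support, free virtual ports
vfcmap -vadapter vfchost0 -fcp fcs0   # bind the client's virtual FC adapter to a physical port
lsmap -all -npiv                      # verify the mapping and the client's WWPNs
```

Because NPIV is pass-through, zoning and LUN masking are then done in the SAN against the client's own WWPN pair, not against the VIOS.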
