
IBM Flex System p24L, p260 and p460 Compute Nodes

Web Doc


Published on 11 April 2012, updated 29 May 2014



IBM Form #: TIPS0880


Authors: David Watts


    Abstract

    The IBM Flex System™ p24L, p260 and p460 Compute Nodes are servers based on IBM® POWER® architecture technologies. These compute nodes run in IBM Flex System Enterprise Chassis units to provide a high-density, high-performance compute node environment, using advanced processing technology. The p260 and p460 support IBM AIX, IBM i, and Linux operating environments and are designed to run a wide variety of workloads. The p24L is designed to run Linux. The p24L and p260 are half-wide compute nodes with two processors, and the p460 is a full-wide compute node with four processors.

    Note: The following models have been withdrawn from marketing and are no longer available for ordering from IBM:

    * IBM Flex System p24L Compute Node, 1457-7FL

    * IBM Flex System p460 Compute Node, 7895-42X

    Changes in the 29 May 2014 update:

    * The p460 supports AIX 5.3



    Introduction



    The IBM Flex System™ p24L, p260 and p460 Compute Nodes are servers based on IBM® POWER® architecture technologies. These compute nodes run in IBM Flex System Enterprise Chassis units to provide a high-density, high-performance compute node environment, using advanced processing technology. The p260 and p460 support IBM AIX, IBM i, and Linux operating environments; the p24L is designed to run Linux. The p24L and p260 are standard-width compute nodes with two POWER7 or POWER7+ processors, and the p460 is a double-width compute node with four POWER7 or POWER7+ processors.

    Figure 1 shows the IBM Flex System p260 Compute Node.

    Figure 1. The IBM Flex System p260 Compute Node

    Did you know?



    IBM Flex System is a new category of computing that integrates multiple server architectures, networking, storage, and system management capability into a single system that is easy to deploy and manage. IBM Flex System has full built-in virtualization support of servers, storage, and networking to speed provisioning and increase resiliency. In addition, it supports open industry standards, such as operating systems, networking and storage fabrics, virtualization, and system management protocols, to easily fit within existing and future data center environments. IBM Flex System is scalable and extendable with multi-generation upgrades to protect and maximize IT investments.

    Key features

    The IBM Flex System p24L, p260 and p460 Compute Nodes are high-performance servers based on POWER7 and POWER7+ processors and optimized for virtualization, performance, and efficiency. This section describes the key features of the servers.


    Scalability and performance

    The compute nodes offer numerous features to boost performance, improve scalability, and reduce costs:
    • The IBM POWER7 and POWER7+ processors, which improve productivity by offering superior system performance with AltiVec floating point and integer SIMD instruction set acceleration.
    • Integrated PowerVM technology, providing superior virtualization performance and flexibility.
    • Choice of processors, including an 8-core POWER7+ processor operating at 4.1 GHz with 80 MB of L3 cache (10 MB per core).
    • Up to four processors, 32 cores, and 128 threads to maximize the concurrent execution of applications.
    • Three levels of integrated cache including 10 MB (POWER7+) or 4 MB (POWER7) of L3 cache per core.
    • Up to 16 (p24L, p260) or 32 (p460) DDR3 ECC memory RDIMMs that provide a memory capacity of up to 256 GB (p260) or 512 GB (p460).
    • Support for Active Memory Expansion, which allows the effective maximum memory capacity to be much larger than the true physical memory through innovative compression techniques (the arithmetic is sketched in the example after this list).
    • The use of solid-state drives (SSDs) instead of traditional spinning drives (HDDs), which can significantly improve I/O performance. An SSD can support up to 100 times more I/O operations per second (IOPS) than a typical HDD.
    • Up to eight (p24L, p260) or 16 (p460) 10Gb Ethernet ports per compute node to maximize networking resources in a virtualized environment.
    • Includes two (p24L, p260) or four (p460) P7IOC high-performance I/O bus controllers to maximize throughput and bandwidth.
    • Support for high-bandwidth I/O adapters, up to two in each p24L or p260 Compute Node or up to four in each p460 Compute Node. Support for 10 Gb Ethernet, 8 Gb Fibre Channel, and QDR InfiniBand.
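
    As an illustration of the Active Memory Expansion arithmetic, the following Python sketch computes the effective capacity for a given expansion factor. The helper and the 1.5x factor are hypothetical examples, not IBM tooling; the compression actually achieved depends on how compressible the workload's data is.

    def effective_memory_gb(physical_gb: float, expansion_factor: float) -> float:
        """Effective capacity seen by the LPAR = physical memory x expansion factor."""
        return physical_gb * expansion_factor

    # Example: a 64 GB partition configured with a (hypothetical) 1.5x factor
    print(effective_memory_gb(64, 1.5))   # 96.0 GB effective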


    Availability and serviceability

    The p24L, p260 and p460 provide many features to simplify serviceability and increase system uptime:
    • ECC and Chipkill memory provide error detection and correction, reducing the likelihood of a non-correctable memory failure.
    • Tool-less cover removal provides easy access to upgrades and serviceable parts, such as drives, memory, and adapter cards.
    • A light path diagnostics panel and individual light path LEDs quickly lead the technician to failed (or failing) components. This simplifies servicing, speeds up problem resolution, and helps improve system availability.
    • Predictive Failure Analysis (PFA) detects when system components (for example, processors, memory, and hard disk drives) operate outside of standard thresholds and generates proactive alerts in advance of possible failure, therefore increasing uptime.
    • Available solid-state drives (SSDs) offer significantly better reliability than traditional mechanical HDDs for greater uptime.
    • A built-in Flexible Service Processor (FSP) continuously monitors system parameters, triggers alerts, and performs recovery actions in the case of failure to minimize downtime.
    • There is a front panel USB port for upgrades and local servicing tasks.
    • There is a three-year customer-replaceable unit (CRU) and onsite limited warranty with 9x5 next-business-day response. Optional service upgrades are available.


    Manageability and security

    Powerful systems management features simplify management of the p24L, p260 and p460:
    • Includes an FSP to monitor server availability and to provide diagnostics.
    • Integrates with the IBM Flex System™ Manager for proactive systems management. Flex System Manager offers comprehensive systems management for the entire IBM Flex System platform, which helps to increase uptime, reduce costs, and improve productivity through advanced server management capabilities.


    Energy efficiency

    The compute nodes offer the following energy-efficiency features to save energy, reduce operational costs, increase energy availability, and contribute to a greener environment:
    • The component-sharing design of the IBM Flex System chassis provides significant power and cooling savings.
    • Support for IBM EnergyScale to dynamically optimize processor performance versus power consumption and system workload.
    • Active Energy Manager provides advanced power management features with actual real-time energy monitoring, reporting, and capping features.
    • SSDs consume as much as 80% less power than traditional spinning 2.5-inch HDDs.
    • The servers use hexagonal ventilation holes, a part of IBM Calibrated Vectored Cooling™ technology. Hexagonal holes can be grouped more densely than round holes, providing more efficient airflow through the system.

    Locations of key components and connectors



    Figure 2 shows the front of the p260. The p24L is similar.

    Figure 2. Front view of the IBM Flex System p260 Compute Node

    Figure 3 shows the locations of key components inside the p260.

    Figure 3. Inside view of the IBM Flex System p260 Compute Node

    Figure 4 shows the front of the p460.

    Figure 4. Front view of the IBM Flex System p460 Compute Node

    Figure 5 shows the locations of key components inside the p460.

    Figure 5. Inside view of the IBM Flex System p460 Compute Node

    Standard specifications



    The following table lists the standard specifications.

    Table 1. Standard specifications

    Component | Specification
    Model numbers | p24L: 1457-7FL. p260: 7895-22X, 23A, and 23X. p460: 7895-42X and 43X.
    Form factor | p24L and p260: standard-width compute node. p460: double-width compute node.
    Chassis support | IBM Flex System Enterprise Chassis.
    Processor | p24L: two IBM POWER7 processors. p260: two IBM POWER7 (model 22X) or POWER7+ (models 23A and 23X) processors. p460: four IBM POWER7 (model 42X) or POWER7+ (model 43X) processors.
    | POWER7 processors: each is a single-chip module (SCM) with eight cores (up to 3.55 GHz, 32 MB L3 cache) or four cores (3.3 GHz, 16 MB L3 cache), with 4 MB of L3 cache per core. Each processor has an integrated memory controller with four memory channels, each operating at 6.4 Gbps, and one GX++ I/O bus connection. SMT4 mode enables four instruction threads to run simultaneously per core. 45 nm fabrication technology.
    | POWER7+ processors: each is an SCM with eight cores (up to 4.1 GHz or 3.6 GHz, 80 MB L3 cache), four cores (4.0 GHz, 40 MB L3 cache), or two cores (4.0 GHz, 20 MB L3 cache), with 10 MB of L3 cache per core. Each processor has an integrated memory controller with four memory channels, each operating at 6.4 Gbps, and one GX++ I/O bus connection. SMT4 mode enables four instruction threads to run simultaneously per core. 32 nm fabrication technology.
    Chipset | IBM P7IOC I/O hub.
    Memory | p24L and p260: 16 DIMM sockets. p460: 32 DIMM sockets. DDR3 RDIMM memory; integrated memory controller in each processor, each with four memory channels. Active Memory Expansion is supported with AIX 6.1 or later. All DIMMs operate at 1066 MHz. Both low-profile (LP) and very-low-profile (VLP) DIMMs are supported, although only VLP DIMMs are supported if internal HDDs are configured; the use of 1.8-inch solid-state drives allows both LP and VLP DIMMs.
    Memory maximums | p24L and p260: 512 GB using 16x 32 GB DIMMs. p460: 1 TB using 32x 32 GB DIMMs.
    Memory protection | ECC, Chipkill.
    Disk drive bays | Two 2.5-inch non-hot-swap drive bays supporting 2.5-inch SAS HDDs or 1.8-inch SATA SSDs. If LP DIMMs are installed, only 1.8-inch SSDs are supported; if VLP DIMMs are installed, both HDDs and SSDs are supported. An HDD and an SSD cannot be installed together.
    Maximum internal storage | 1.8 TB using two 900 GB SAS HDDs, or 354 GB using two 177 GB SSDs.
    RAID support | RAID support through the operating system.
    Network interfaces | None standard; optional 1 Gb or 10 Gb Ethernet adapters.
    PCI expansion slots | p24L and p260: two I/O connectors for adapters (PCIe 2.0 x16 interface). p460: four I/O connectors for adapters (PCIe 2.0 x16 interface).
    Ports | One external USB port.
    Systems management | FSP, Predictive Failure Analysis, light path diagnostics panel, automatic server restart, Serial over LAN support. IPMI compliant. Support for IBM Flex System Manager, IBM Systems Director, and Active Energy Manager.
    Security features | Power-on password, selectable boot sequence.
    Video | None. Remote management via Serial over LAN and IBM Flex System Manager.
    Limited warranty | 3-year customer-replaceable unit and onsite limited warranty with 9x5 next-business-day response.
    Operating systems supported | IBM AIX, IBM i, and Linux. See "Supported operating systems" for details.
    Service and support | Optional service upgrades are available through IBM ServicePacs®: 4-hour or 2-hour response time, 8-hour fix time, 1-year or 2-year warranty extension, remote technical support for IBM hardware and selected IBM and OEM software.
    Dimensions | p24L and p260: width 215 mm (8.5 in), height 51 mm (2.0 in), depth 493 mm (19.4 in). p460: width 437 mm (17.2 in), height 51 mm (2.0 in), depth 493 mm (19.4 in).
    Weight | p24L and p260: 7.0 kg (15.4 lb) maximum configuration. p460: 14.0 kg (30.6 lb) maximum configuration.

    The compute nodes are shipped with the following items:
    • Statement of Limited Warranty
    • Important Notices
    • Documentation CD that contains the Installation and User's Guide

    Chassis support



    The p24L, p260 and p460 are supported in the IBM Flex System Enterprise Chassis. Up to fourteen p24L or p260 compute nodes, or up to seven p460 compute nodes (or a combination of all three), can be installed in the chassis in 10U of rack space. The actual number of compute nodes that can be installed in a chassis depends on these factors:
    • The number of power supplies installed
    • The capacity of the power supplies installed (2100 W or 2500 W)
    • The power redundancy policy used (N+1 or N+N)

    The following table provides guidelines about how many compute nodes can be installed. For more exact guidance, use the Power Configurator, found at the following website: http://ibm.com/systems/bladecenter/resources/powerconfig.html

    In the table, a value of 14 (p24L and p260) or 7 (p460) means there is no restriction on the number of compute nodes that can be installed; a lower value means that some chassis bays must be left empty.

    Table 2. Maximum number of p24L, p260, and p460 Compute Nodes installable, based on the power supplies installed and the power redundancy policy used

    Power supplies installed | Redundancy policy | p24L | p260 | p460
    2100 W, 6 installed | N+1, N=5 | 14 | 14 | 7
    2100 W, 5 installed | N+1, N=4 | 12 | 12 | 6
    2100 W, 4 installed | N+1, N=3 | 9 | 9 | 4
    2100 W, 6 installed | N+N, N=3 | 10 | 10 | 5
    2500 W, 6 installed | N+1, N=5 | 14 | 14 | 7
    2500 W, 5 installed | N+1, N=4 | 14 | 14 | 7
    2500 W, 4 installed | N+1, N=3 | 12 | 12 | 6
    2500 W, 6 installed | N+N, N=3 | 13 | 13 | 6
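
    As a rough illustration, the following Python sketch encodes the limits from Table 2 in a lookup table. The helper is hypothetical and is not part of the IBM Power Configurator; use the Power Configurator for authoritative sizing.

    MAX_NODES = {
        # (node, PSU watts, redundancy policy): maximum nodes per chassis
        ("p24L", 2100, "N+1, N=5"): 14, ("p24L", 2100, "N+1, N=4"): 12,
        ("p24L", 2100, "N+1, N=3"): 9,  ("p24L", 2100, "N+N, N=3"): 10,
        ("p24L", 2500, "N+1, N=5"): 14, ("p24L", 2500, "N+1, N=4"): 14,
        ("p24L", 2500, "N+1, N=3"): 12, ("p24L", 2500, "N+N, N=3"): 13,
        ("p460", 2100, "N+1, N=5"): 7,  ("p460", 2100, "N+1, N=4"): 6,
        ("p460", 2100, "N+1, N=3"): 4,  ("p460", 2100, "N+N, N=3"): 5,
        ("p460", 2500, "N+1, N=5"): 7,  ("p460", 2500, "N+1, N=4"): 7,
        ("p460", 2500, "N+1, N=3"): 6,  ("p460", 2500, "N+N, N=3"): 6,
    }
    # The p260 limits are identical to the p24L limits.
    for key in list(MAX_NODES):
        if key[0] == "p24L":
            MAX_NODES[("p260", key[1], key[2])] = MAX_NODES[key]

    print(MAX_NODES[("p460", 2100, "N+N, N=3")])  # 5: two bays must stay empty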

    IBM PureFlex System



    IBM PureFlex™ System consists of pre-defined, pre-configured components to simplify client purchasing and provide the total IBM Flex System integrated value proposition.

    The IBM Flex System p24L, p260 and p460 Compute Nodes can be ordered as part of a PureFlex System solution. Two PureFlex System offerings are available:
    • IBM PureFlex System Single-chassis for smaller installations (Feature #EFD1)
    • IBM PureFlex System Multi-chassis for database or transactional systems (Feature #EFD3)

    A PureFlex System solution consists of the following components:
    • An IBM Flex System Compute Node, either the p260, p460, or x240
    • An IBM Flex System Enterprise Chassis (7893-92X)
    • An IBM Flex System Manager (7955-01M)
    • An IBM Storwize V7000 Disk System (2076-124)
    • Two IBM System Network 1455 Rack Switches G8264R Model 64C (with PureFlex System Enterprise only, with the p460)
    • Two IBM 2498 SAN24B-4 Express Model B24 (with PureFlex System Enterprise only, with the p460)
    • An IBM 42U rack (7953-94X)

    Notes:
    • The p260 and p24L cannot be used in an initial PureFlex System Multi-chassis configuration.
    • The IBM Flex System p460 cannot be used in an initial PureFlex System Single-chassis configuration.
    • An initial PureFlex System Multi-chassis requires at least two compute nodes.

    Additional compute nodes, chassis, and IBM 42U racks can be ordered after the basic requirements for the PureFlex System solution are met. These additional orders will be indicated by feature number EFD4 (Expansion Option) or EFD5 (Custom Expansion).

    Processor features



    The compute nodes support the processor features listed in the following table. The selected feature code includes two or four processors, as indicated.

    Table 3. Processor features

    Feature code | Processor description
    IBM Flex System p260 Compute Node, model 7895-23X:
    EPRD | 8-core 4.0 GHz POWER7+ Processor Module (two 4-core processors)
    EPRB | 16-core 3.6 GHz POWER7+ Processor Module (two 8-core processors)
    EPRA | 16-core 4.1 GHz POWER7+ Processor Module (two 8-core processors)
    IBM Flex System p260 Compute Node, model 7895-23A:
    EPRC | 4-core 4.0 GHz POWER7+ Processor Module (two 2-core processors)
    IBM Flex System p260 Compute Node, model 7895-22X:
    EPR1 | 8-core 3.3 GHz POWER7 Processor Module (two 4-core processors)
    EPR3 | 16-core 3.2 GHz POWER7 Processor Module (two 8-core processors)
    EPR5 | 16-core 3.55 GHz POWER7 Processor Module (two 8-core processors)
    IBM Flex System p460 Compute Node, model 7895-42X:
    EPR2 | 16-core 3.3 GHz POWER7 Processor Module (four 4-core processors)
    EPR4 | 32-core 3.2 GHz POWER7 Processor Module (four 8-core processors)
    EPR6 | 32-core 3.55 GHz POWER7 Processor Module (four 8-core processors)
    IBM Flex System p460 Compute Node, model 7895-43X:
    EPRK | 16-core 4.0 GHz POWER7+ Processor Module (four 4-core processors)
    EPRH | 32-core 3.6 GHz POWER7+ Processor Module (four 8-core processors)
    EPRJ | 32-core 4.1 GHz POWER7+ Processor Module (four 8-core processors)
    IBM Flex System p24L Compute Node, model 1457-7FL:
    EPR7 | 12-core 3.72 GHz POWER7 Processor Module (two 6-core processors)
    EPR8 | 16-core 3.2 GHz POWER7 Processor Module (two 8-core processors)
    EPR9 | 16-core 3.55 GHz POWER7 Processor Module (two 8-core processors)

    Memory features



    IBM DDR3 memory is compatibility tested and tuned for optimal performance and throughput. IBM memory specifications are integrated into the light path diagnostics for immediate system performance feedback and optimum system uptime. From a service and support standpoint, IBM memory automatically assumes the IBM system warranty, and IBM provides service and support worldwide.

    The compute nodes support low profile (LP) or very low profile (VLP) DDR3 memory RDIMMs. If LP memory is used, 2.5-inch HDDs are not supported in the system due to physical space restrictions. However, 1.8-inch SSDs are still supported. If VLP memory is used, either 2.5-inch HDDs or 1.8-inch SSDs are supported.

    The p24L and p260 support up to 16 DIMMs. The p460 supports up to 32 DIMMs. Each processor has four memory channels, with two DIMMs per channel. All supported DIMMs operate at 1066 MHz.

    The following table lists memory features available for the compute nodes. DIMMs are ordered and can be installed two at a time, but to maximize memory performance, install them in sets of eight (one for each of the memory channels).

    Table 4. Memory features

    Feature code | Description | Form factor | p24L 7FL | p260 22X | p260 23A | p260 23X | p460 42X | p460 43X | p270 24X
    EM04* | 2x 2 GB DDR3 RDIMM 1066 MHz | LP | Yes | Yes | No | No | Yes | No | No
    8196 | 2x 4 GB DDR3 RDIMM 1066 MHz | VLP | Yes | Yes | Yes | Yes | Yes | Yes | Yes
    8199 | 2x 8 GB DDR3 RDIMM 1066 MHz | VLP | Yes | Yes | No | No | Yes | No | No
    EEMD | 2x 8 GB DDR3 RDIMM 1066 MHz | VLP | Yes | Yes | Yes | Yes | Yes | Yes | Yes
    8145* | 2x 16 GB DDR3 RDIMM 1066 MHz | LP | Yes | Yes | No | No | Yes | No | No
    EEME* | 2x 16 GB DDR3 RDIMM 1066 MHz | LP | Yes | Yes | Yes | Yes | Yes | Yes | Yes
    EEMF* | 2x 32 GB DDR3 RDIMM 1066 MHz | LP | Yes | Yes | Yes | Yes | Yes | Yes | Yes
    * If 2.5-inch HDDs are installed, the low-profile DIMM features (EM04, 8145, EEME, and EEMF) cannot be used.
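
    The DIMM and drive compatibility rules above can be expressed as a simple check. The following Python sketch is a hypothetical helper, not an IBM configuration tool; the feature codes are transcribed from Table 4. It requires Python 3.10 or later.

    LP_FEATURES = {"EM04", "8145", "EEME", "EEMF"}   # low-profile DIMM features
    VLP_FEATURES = {"8196", "8199", "EEMD"}          # very-low-profile DIMM features

    def check_config(dimm_features: set[str], drive_type: str | None) -> None:
        """drive_type is '2.5-inch HDD', '1.8-inch SSD', or None (no drives)."""
        conflict = dimm_features & LP_FEATURES
        if drive_type == "2.5-inch HDD" and conflict:
            raise ValueError(f"2.5-inch HDDs require VLP DIMMs; remove {conflict}")
        # 1.8-inch SSDs (or no drives) place no restriction on DIMM height.

    check_config({"EEMD"}, "2.5-inch HDD")   # OK: VLP DIMMs with HDDs
    check_config({"EEMF"}, "1.8-inch SSD")   # OK: LP DIMMs with SSDs
    # check_config({"8145"}, "2.5-inch HDD") # would raise ValueError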

    Internal disk storage options



    The compute nodes have two non-hot-swap drive bays attached to the cover of the system. Either two 2.5-inch SAS HDDs or two 1.8-inch SATA SSDs can be installed. If 2.5-inch HDDs are installed, LP memory DIMMs are not supported; only VLP DIMMs can be used. The use of 1.8-inch SSDs places no restriction on the memory DIMMs used. SSDs and HDDs cannot be mixed.

    The following table lists the supported drive options.

    Table 5. Drive options for internal disk storage

    Feature code | Description | Maximum supported
    2.5-inch SAS hard disk drives (HDDs):
    8274 | 300 GB 10K RPM non-hot-swap 6 Gbps SAS | 2
    8276 | 600 GB 10K RPM non-hot-swap 6 Gbps SAS | 2
    8311 | 900 GB 10K RPM non-hot-swap 6 Gbps SAS | 2
    1.8-inch SATA solid-state drives (SSDs):
    8207 | 177 GB SATA non-hot-swap SSD | 2

    The choice of drive also determines what cover is used for the compute node, because the drives are attached to the cover. The following table lists the options.

    Table 6. Compute node cover options

    Feature code | Description
    IBM Flex System p24L and p260 Compute Nodes:
    7069 | Top cover with HDD connectors
    7068 | Top cover with SSD connectors
    7067 | Top cover for no drives
    IBM Flex System p460 Compute Node:
    7066 | Top cover with HDD connectors
    7065 | Top cover with SSD connectors
    7005 | Top cover for no drives

    Internal tape drives



    The server does not support an internal tape drive. However, it can be attached to external tape drives using Fibre Channel connectivity.

    Optical drives



    The server does not support an internal optical drive option. However, you can connect an external USB optical drive, such as IBM and Lenovo part number 73P4515 or 73P4516.

    I/O architecture



    The p24L and p260 have two I/O expansion connectors for attaching I/O adapter cards, and the p460 has four I/O expansion connectors. All I/O adapters are the same shape and can be used in any location.

    The following figures show the location of the I/O adapters in the p24L, p260 and p460.

    Figure 6. Location of the I/O adapter slots in the IBM Flex System p24L and p260 Compute Nodes

    Figure 7. Location of the I/O adapter slots in the IBM Flex System p460 Compute Node

    A compatible switch or pass-through module must be installed in the corresponding I/O bays in the chassis, as listed in Table 7. Installing two switches means that all ports of the adapter are enabled, which improves performance and network availability.

    Table 7. Adapter-to-I/O-bay correspondence

    I/O adapter slot in the server | Port on the adapter* | Corresponding I/O module bay in the chassis
    Slot 1 | Port 1 | Bay 1
    Slot 1 | Port 2 | Bay 2
    Slot 1 | Port 3 (4- and 8-port cards) | Bay 1
    Slot 1 | Port 4 (4- and 8-port cards) | Bay 2
    Slot 1 | Port 5 (8-port cards) | Bay 1
    Slot 1 | Port 6 (8-port cards) | Bay 2
    Slot 1 | Port 7 (8-port cards)** | Bay 1
    Slot 1 | Port 8 (8-port cards)** | Bay 2
    Slot 2 | Port 1 | Bay 3
    Slot 2 | Port 2 | Bay 4
    Slot 2 | Port 3 (4- and 8-port cards) | Bay 3
    Slot 2 | Port 4 (4- and 8-port cards) | Bay 4
    Slot 2 | Port 5 (8-port cards) | Bay 3
    Slot 2 | Port 6 (8-port cards) | Bay 4
    Slot 2 | Port 7 (8-port cards)** | Bay 3
    Slot 2 | Port 8 (8-port cards)** | Bay 4
    Slot 3 (p460 only) | Ports 1-8 | Same routing as Slot 1 (odd ports to Bay 1, even ports to Bay 2)
    Slot 4 (p460 only) | Ports 1-8 | Same routing as Slot 2 (odd ports to Bay 3, even ports to Bay 4)
    * The use of adapter ports 3, 4, 5, and 6 requires upgrades to the installed switches; see the supported switches listed in the Network adapters section. The EN4091 Pass-thru supports only ports 1 and 2 (and only when two Pass-thru modules are installed).
    ** Adapter ports 7 and 8 are reserved for future use. The chassis supports all eight ports, but no currently available switches connect to these ports.
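
    The routing pattern in Table 7 is regular: for slots 1 and 3, odd-numbered ports go to bay 1 and even-numbered ports to bay 2; for slots 2 and 4, odd ports go to bay 3 and even ports to bay 4. The following Python sketch is a hypothetical helper derived from the table, not an IBM tool.

    def chassis_bay(slot: int, port: int) -> int:
        """Return the chassis I/O module bay that serves a given adapter port."""
        if slot in (1, 3):            # slots 1 and 3 route to bays 1 and 2
            return 1 if port % 2 else 2
        if slot in (2, 4):            # slots 2 and 4 route to bays 3 and 4
            return 3 if port % 2 else 4
        raise ValueError("slot must be 1-4 (slots 3 and 4 exist on the p460 only)")

    assert chassis_bay(1, 1) == 1 and chassis_bay(1, 2) == 2
    assert chassis_bay(2, 3) == 3 and chassis_bay(4, 6) == 4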

    The following figure shows the location of the switch bays in the IBM Flex System Enterprise Chassis.

    Figure 8. Location of the switch bays in the IBM Flex System Enterprise Chassis

    The following figure shows how two-port adapters are connected to switches installed in the chassis.

    Figure 9. Logical layout of the interconnects between I/O adapters and I/O modules

    Network adapters



    The compute nodes have no networking integrated on the system board, which allows flexibility in choosing a network fabric. The following table lists the supported network adapters and the slots in which each is supported.

    Table 8. Network adapters

    Feature code | Description | Slot 1 | Slot 2 | Slot 3 (p460) | Slot 4 (p460)
    10 Gb Ethernet:
    EC24 | IBM Flex System CN4058 8-port 10Gb Converged Adapter | Yes | Yes | Yes | Yes
    EC26 | IBM Flex System EN4132 2-port 10Gb RoCE Adapter | No | Yes | Yes | Yes
    1762 | IBM Flex System EN4054 4-port 10Gb Ethernet Adapter | Yes | Yes | Yes | Yes
    1 Gb Ethernet:
    1763 | IBM Flex System EN2024 4-port 1Gb Ethernet Adapter | Yes | Yes | Yes | Yes
    InfiniBand:
    1761 | IBM Flex System IB6132 2-port QDR InfiniBand Adapter | No | Yes | No | Yes

    When adapters are installed in slots, ensure that compatible switches are installed in the corresponding bays of the chassis:
    • IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch (#ESW2)
    • IBM Flex System Fabric EN4093R 10Gb Scalable Switch (#ESW7)
    • IBM Flex System Fabric EN4093 10Gb Scalable Switch (#3593)
    • IBM Flex System Fabric SI4093 System Interconnect Module (#ESWA)
    • IBM Flex System EN4091 10Gb Ethernet Pass-thru (#3700)
    • IBM Flex System EN2092 1Gb Ethernet Scalable Switch (#3598)
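
    The slot restrictions in Table 8 can be captured in a small lookup for configuration checking. The following Python sketch is a hypothetical helper; the data is transcribed from the table and the shortened adapter labels are illustrative.

    SUPPORTED_SLOTS = {
        "EC24 CN4058 8-port 10Gb Converged": {1, 2, 3, 4},
        "EC26 EN4132 2-port 10Gb RoCE":      {2, 3, 4},
        "1762 EN4054 4-port 10Gb Ethernet":  {1, 2, 3, 4},
        "1763 EN2024 4-port 1Gb Ethernet":   {1, 2, 3, 4},
        "1761 IB6132 2-port QDR InfiniBand": {2, 4},
    }

    def can_install(adapter: str, slot: int) -> bool:
        """Return True if the adapter is supported in the given slot."""
        return slot in SUPPORTED_SLOTS[adapter]

    print(can_install("1761 IB6132 2-port QDR InfiniBand", 1))  # False: slots 2 and 4 only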

    Storage host bus adapters



    The following table lists the storage host bus adapters (HBAs) supported by the compute nodes.

    Table 9. Storage adapters

    Feature code | Description | Slot 1 | Slot 2 | Slot 3 (p460) | Slot 4 (p460)
    Fibre Channel:
    1764 | IBM Flex System FC3172 2-port 8Gb FC Adapter | No | Yes | No | Yes
    EC23 | IBM Flex System FC5052 2-port 16Gb FC Adapter | No | Yes | No | Yes
    EC2E | IBM Flex System FC5054 4-port 16Gb FC Adapter | No | Yes | No | Yes

    Power supplies



    Server power is derived from the power supplies installed in the chassis. There are no server options regarding power supplies.

    Integrated virtualization



    The compute nodes support PowerVM virtualization capabilities for AIX, IBM i, and Linux environments. PowerVM contains the following features:
    • Support for the following number of maximum virtual servers (or logical partitions, LPARs):
      • p24L: Up to 160 virtual servers
      • p260: Up to 160 virtual servers
      • p460: Up to 320 virtual servers
    • Role-based access control (RBAC)
    • RBAC brings an added level of security and flexibility to the administration of the Virtual I/O Server (VIOS), a part of PowerVM. With RBAC, you can create a set of authorizations for the user management commands, assign those authorizations to a role (for example, UserManagement), and give that role to other users. A user with this role can manage the users on the system but has no further access. RBAC gives the VIOS the capability of splitting management functions that presently can be done only by the padmin user, providing better security by granting users only the access they need, and simplifying management and auditing of system functions.

    • Suspend/resume
    • With suspend/resume, you can suspend a partition for a long term (greater than five to ten seconds), saving the partition state (that is, memory, NVRAM, and VSP state) to persistent storage. Suspension frees the server resources that the partition was using; resume restores the partition state and resumes operation of the partition and its applications, either on the same server or on another server.

    • Shared storage pools
    • VIOS allows the creation of storage pools that can be accessed by VIOS partitions that are deployed across multiple Power Systems servers. Therefore, an assigned allocation of storage capacity can be efficiently managed and shared. Up to four systems can participate in a Shared Storage Pool configuration. This can improve efficiency, agility, scalability, flexibility, and availability.

      The Storage Mobility feature allows data to be moved to new storage devices within Shared Storage Pools, while the virtual servers remain completely active and available. The VM Storage Snapshots/Rollback feature allows multiple point-in-time snapshots of individual virtual server storage, and these copies can be used to quickly roll back a virtual server to a particular snapshot image. The VM Storage Snapshots/Rollback functionality can be used to capture a VM image for cloning purposes or before applying maintenance.

    • Thin provisioning
    • VIOS supports highly efficient storage provisioning, whereby virtualized workloads in VMs can have storage resources from a shared storage pool dynamically added or released, as required.

    • VIOS grouping
    • Multiple VIOS partitions can utilize a common shared storage pool to more efficiently utilize limited storage resources and simplify the management and integration of storage subsystems.

    • Network node balancing for redundant Shared Ethernet Adapters (SEAs)
    • This is a useful function when multiple VLANs are being supported in a dual VIOS environment. The implementation is based on a more granular treatment of trunking, where there are different trunks defined for the SEAs on each VIOS. Each trunk serves different VLANs, and each VIOS can be the primary for a different trunk. This occurs with just one SEA definition on each VIOS.


    Light path diagnostics



    For quick problem determination when located physically at the server, the compute nodes offer a three-step guided path:
    1. The fault LED on the front panel
    2. The light path diagnostics panel, shown in the following figure
    3. LEDs next to key components on the system board

    The light path diagnostics panel is visible when you remove the server from the chassis. The panel is located on the top right-hand side of the compute node, as shown in the following figure.

    Figure 10. Location of light path diagnostics panel

    To illuminate the light path diagnostics LEDs, power off the compute node, slide it out of the chassis, and press the power button. The power button doubles as the light path diagnostics remind button when the server is removed from the chassis.

    The meanings of the LEDs in the light path diagnostics panel are listed in the following table.

    Table 10. Light path diagnostics panel LEDs

    LED | Meaning
    LP | The light path diagnostics panel is operational.
    S BRD | A system board error is detected.
    MGMT | There is an error with the FSP.
    D BRD | There is a fault with the disk drive board.
    DRV 1 | There is a drive 1 fault.
    DRV 2 | There is a drive 2 fault.
    ETE | A fault has been detected with the expansion unit (p260 only).

    If problems occur, the light path diagnostics LEDs assist in identifying the subsystem involved. To illuminate the LEDs with the compute node removed, press the power button on the front panel. This temporarily illuminates the LEDs of the troubled subsystem to direct troubleshooting efforts towards a resolution.

    Typically, an administrator has already obtained this information from the IBM Flex System Manager or Chassis Management Module before removing the node, but having the LEDs helps with repairs and troubleshooting if onsite assistance is needed.

    Supported operating systems



    The p460 and p260 model 22X support the following operating systems:
    • AIX V7.1 with the 7100-01 Technology Level with Service Pack 3 with APAR IV14284
    • AIX V7.1 with the 7100-01 Technology Level with Service Pack 4, or later
    • AIX V7.1 with the 7100-00 Technology Level with Service Pack 6, or later
    • AIX V6.1 with the 6100-07 Technology Level, with Service Pack 3 with APAR IV14283
    • AIX V6.1 with the 6100-07 Technology Level, with Service Pack 4, or later
    • AIX V6.1 with the 6100-06 Technology Level with Service Pack 8, or later
    • AIX V5.3 with the 5300-12 Technology Level with Service Pack 6, or later (AIX 5L V5.3 Service Extension is required)
    • IBM i 6.1 with i 6.1.1 machine code, or later; Requires VIOS
    • IBM i 7.1 TR4, or later; Requires VIOS
    • VIOS 2.2.1.4, or later
    • SUSE Linux Enterprise Server 11 Service Pack (SP) 2 for POWER
    • Red Hat Enterprise Linux 5.7 for POWER, or later
    • Red Hat Enterprise Linux 6.2 for POWER, or later

    The p24L supports the following operating systems:
    • Virtual I/O Server 2.2.1.4 or later
    • SUSE Linux Enterprise Server 11 SP 2 for POWER
    • Red Hat Enterprise Linux 5.7 for POWER
    • Red Hat Enterprise Linux 6.2 for POWER

    The p260 model 23X supports the following operating systems:
    • IBM i 6.1 with i 6.1.1 machine code, or later; Requires VIOS
    • IBM i 7.1 TR5, or later; Requires VIOS
    • VIOS 2.2.2.1 or later
    • VIOS 2.2.1.5 or later
    • AIX V7.1 with the 7100-02 Technology Level or later
    • AIX V7.1 with the 7100-01 Technology Level with Service Pack 7 or later
    • AIX V6.1 with the 6100-08 Technology Level or later
    • AIX V6.1 with the 6100-07 Technology Level, with Service Pack 7, or later
    • AIX V6.1 with the 6100-06 Technology Level with Service Pack 11, or later
    • AIX V5.3 with the 5300-12 Technology Level with Service Pack 7, or later (AIX 5L V5.3 Service Extension is required)
    • SUSE Linux Enterprise Server 11 Service Pack (SP) 2 for POWER
    • Red Hat Enterprise Linux 5.7 for POWER
    • Red Hat Enterprise Linux 6.2 for POWER

    The p260 model 23A and p460 model 43X support the following operating systems:
    • AIX V7.1 with the 7100-02 Technology Level with Service Pack 3 or later
    • AIX V6.1 with the 6100-08 Technology Level with Service Pack 3 or later
    • AIX V5.3 Technology Level Support offering with the Service Extension
    • VIOS 2.2.2.3 or later
    • IBM i 6.1 with i 6.1.1 machine code, or later; Requires VIOS
    • IBM i 7.1 TR6 or later; Requires VIOS
    • SUSE Linux Enterprise Server 11 Service Pack (SP) 2 for POWER
    • Red Hat Enterprise Linux 6.4 for POWER

    Note: Support by some of these operating system versions will occur after general availability. See the IBM ServerProven® website for the latest information about the specific versions and service levels supported and any other prerequisites:
    http://www.ibm.com/systems/info/x86servers/serverproven/compat/us/nos/matrix.shtml

    Physical specifications



    Dimensions and weight of the p24L and p260:

    Width: 215 mm (8.5”)
    Height: 51 mm (2.0”)
    Depth: 493 mm (19.4”)
    Maximum configuration: 7.0 kg (15.4 lb)

    Dimensions and weight of the p460:

    Width: 437 mm (17.2")
    Height: 51 mm (2.0”)
    Depth: 493 mm (19.4”)
    Maximum configuration: 14.0 kg (30.6 lb)

    Supported environment



    The IBM Flex System p24L, p260 and p460 compute nodes and the IBM Flex System Enterprise Chassis comply with ASHRAE Class A3 specifications.

    This is the supported operating environment:
    • 5 - 40 °C (41 - 104 °F) at 0 - 914 m (0 - 3,000 ft)
    • 5 - 28 °C (41 - 82 °F) at 914 - 3,050 m (3,000 - 10,000 ft)
    • Relative humidity: 8 - 85%
    • Maximum altitude: 3,050 m (10,000 ft)
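
    The following Python sketch is an illustrative helper, with limits transcribed from the list above, that checks whether an operating point falls within the supported envelope.

    def within_operating_env(temp_c: float, altitude_m: float, rh_pct: float) -> bool:
        """Return True if the operating point is inside the supported envelope."""
        if not 8 <= rh_pct <= 85 or altitude_m > 3050:
            return False
        if altitude_m <= 914:
            return 5 <= temp_c <= 40
        return 5 <= temp_c <= 28   # reduced temperature ceiling from 914 m to 3,050 m

    print(within_operating_env(35, 500, 50))    # True
    print(within_operating_env(35, 2000, 50))   # False: 28 °C maximum above 914 m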

    Warranty options



    The IBM Flex System p24L, p260 and p460 Compute Nodes have a three-year onsite warranty with 9x5 next-business-day terms. IBM offers warranty service upgrades through IBM ServicePac, discussed in this section. IBM ServicePac is a series of prepackaged warranty maintenance upgrades and post-warranty maintenance agreements with a well-defined scope of services, including service hours, response time, term of service, and service agreement terms and conditions.

    IBM ServicePac offerings are country-specific; each country might have its own service types, service levels, response times, and terms and conditions. Not all types of ServicePac are available in every country. For more information about the IBM ServicePac offerings available in your country, visit the IBM ServicePac Product Selector at https://www-304.ibm.com/sales/gss/download/spst/servicepac.

    Table 11 explains warranty service definitions in more detail.

    Table 11. Warranty service definitions

    Term | Description
    IBM onsite repair (IOR) | A service technician comes to the server's location for equipment repair.
    24x7x2 hour | A service technician is scheduled to arrive at the customer's location within two hours after remote problem determination is completed. Service is available 24 hours a day, every day, including IBM holidays.
    24x7x4 hour | A service technician is scheduled to arrive at the customer's location within four hours after remote problem determination is completed. Service is available 24 hours a day, every day, including IBM holidays.
    9x5x4 hour | A service technician is scheduled to arrive at the customer's location within four business hours after remote problem determination is completed. Service is available from 8:00 a.m. to 5:00 p.m. in the customer's local time zone, Monday through Friday, excluding IBM holidays. If it is determined after 1:00 p.m. that onsite service is required, the service technician arrives the morning of the following business day. For noncritical service requests, a service technician arrives by the end of the following business day.
    9x5 next business day | A service technician is scheduled to arrive at the customer's location on the business day after the call is received, following remote problem determination. Service is available from 8:00 a.m. to 5:00 p.m. in the customer's local time zone, Monday through Friday, excluding IBM holidays.

    In general, these are the types of IBM ServicePac warranty and maintenance service upgrades:
    • One, two, three, four, or five years of 9x5 or 24x7 service coverage
    • Onsite repair from next-business-day to four or two hours
    • One or two years of warranty extension

    Regulatory compliance



    The servers conform to the following standards:
    • ASHRAE Class A3
    • U.S.: FCC - Verified to comply with Part 15 of the FCC Rules Class A
    • Canada: ICES-004, issue 3 Class A
    • EMEA: EN55022: 2006 + A1:2007 Class A
    • EMEA: EN55024: 1998 + A1:2001 + A2:2003
    • Australia and New Zealand: CISPR 22, Class A
    • U.S.: (UL Mark) UL 60950-1 1st Edition
    • CAN: (cUL Mark) CAN/CSA22.2 No.60950-1 1st Edition
    • Europe: EN 60950-1:2006+A11:2009
    • CB: IEC60950-1, 2nd Edition
    • Russia: (GOST Mark) IEC60950-1


    Special Notices

    The material included in this document is in DRAFT form and is provided 'as is' without warranty of any kind. IBM is not responsible for the accuracy or completeness of the material, and may update the document at any time. The final, published document may not include any, or all, of the material included herein. Client assumes all risks associated with Client's use of this document.