Introduction
The IBM Flex System™ p260 and p460 Compute Nodes are servers based on IBM® POWER® architecture technologies. These compute nodes run in IBM Flex System Enterprise Chassis units to provide a high-density, high-performance compute node environment, using advanced processing technology. The nodes support IBM AIX, IBM i, or Linux operating environments and are designed to run a wide variety of workloads. The p260 and p24L are standard compute nodes with two POWER7 or POWER7+ processors, and the p460 is a double-wide compute node with four POWER7 or POWER7+ processors.
Figure 1 shows the IBM Flex System p260 Compute Node.
Figure 1. The IBM Flex System p260 Compute Node
Did you know?
IBM Flex System is a new category of computing that integrates multiple server architectures, networking, storage, and system management capability into a single system that is easy to deploy and manage. IBM Flex System has full built-in virtualization support of servers, storage, and networking to speed provisioning and increase resiliency. In addition, it supports open industry standards, such as operating systems, networking and storage fabrics, virtualization, and system management protocols, to easily fit within existing and future data center environments. IBM Flex System is scalable and extendable with multi-generation upgrades to protect and maximize IT investments.
Key features
The IBM Flex System p24L, p260 and p460 Compute Nodes are high-performance servers based on the IBM POWER7 and POWER7+ processors and optimized for virtualization, performance, and efficiency. This section describes the key features of the servers.
Scalability and performance
The compute nodes offer numerous features to boost performance, improve scalability, and reduce costs:
- The IBM POWER7 and POWER7+ processors, which improve productivity by offering superior system performance with AltiVec floating point and integer SIMD instruction set acceleration.
- Integrated PowerVM technology, providing superior virtualization performance and flexibility.
- Choice of processors, including an 8-core POWER7+ processor operating at 4.1 GHz with 80 MB of L3 cache (10 MB per core).
- Up to four processors, 32 cores, and 128 threads to maximize the concurrent execution of applications.
- Three levels of integrated cache including 10 MB (POWER7+) or 4 MB (POWER7) of L3 cache per core.
- Up to 16 (p24L, p260) or 32 (p460) DDR3 ECC memory RDIMMs that provide a memory capacity of up to 512 GB (p24L, p260) or 1 TB (p460).
- Support for Active Memory Expansion, which allows the effective maximum memory capacity to be much larger than the true physical memory through innovative compression techniques (a worked example of this arithmetic appears after this list).
- The use of solid-state drives (SSDs) instead of traditional spinning drives (HDDs), which can significantly improve I/O performance. An SSD can support up to 100 times more I/O operations per second (IOPS) than a typical HDD.
- Up to eight (p24L, p260) or 16 (p460) 10Gb Ethernet ports per compute node to maximize networking resources in a virtualized environment.
- Includes two (p24L, p260) or four (p460) P7IOC high-performance I/O bus controllers to maximize throughput and bandwidth.
- Support for high-bandwidth I/O adapters, up to two in each p24L or p260 Compute Node or up to four in each p460 Compute Node. Support for 10 Gb Ethernet, 8 Gb Fibre Channel, and QDR InfiniBand.
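As a rough illustration of the thread-count and Active Memory Expansion arithmetic behind the bullets above, the following Python sketch works through the numbers. The 1.6x expansion factor is a hypothetical value chosen only for illustration; the achievable expansion depends on how compressible the workload's memory is.

```python
# Illustrative arithmetic only. The expansion factor is a made-up example;
# real Active Memory Expansion gains depend on workload compressibility.

def hardware_threads(processors: int, cores_per_processor: int, smt: int = 4) -> int:
    """Concurrent hardware threads with SMT4 enabled (four threads per core)."""
    return processors * cores_per_processor * smt

def effective_memory_gb(physical_gb: float, expansion_factor: float) -> float:
    """Effective capacity seen by the OS when Active Memory Expansion is enabled."""
    return physical_gb * expansion_factor

if __name__ == "__main__":
    # p460 maximum configuration: four 8-core processors
    print(hardware_threads(4, 8))           # 128 threads
    # 256 GB of physical memory with a hypothetical 1.6x expansion factor
    print(effective_memory_gb(256, 1.6))    # 409.6 GB effective
```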
Availability and serviceability
The p24L, p260 and p460 provide many features to simplify serviceability and increase system uptime:
- ECC and chipkill provide error detection and recovery in the event of a non-correctable memory failure.
- Tool-less cover removal provides easy access to upgrades and serviceable parts, such as drives, memory, and adapter cards.
- A light path diagnostics panel and individual light path LEDs quickly lead the technician to failed (or failing) components. This simplifies servicing, speeds up problem resolution, and helps improve system availability.
- Predictive Failure Analysis (PFA) detects when system components (for example, processors, memory, and hard disk drives) operate outside of standard thresholds and generates proactive alerts in advance of possible failure, therefore increasing uptime.
- Available solid-state drives (SSDs) offer significantly better reliability than traditional mechanical HDDs for greater uptime.
- A built-in Integrated Flexible Service Processor (FSP) continuously monitors system parameters, triggers alerts, and performs recovery actions in the case of failures to minimize downtime.
- There is a front panel USB port for upgrades and local servicing tasks.
- There is a three-year customer-replaceable unit (CRU) and onsite limited warranty with 9x5 next-business-day response. Optional service upgrades are available.
Manageability and security
Powerful systems management features simplify management of the p260 and p460:
- Includes an FSP to monitor server availability and to provide diagnostics.
- Integrates with the IBM Flex System™ Manager for proactive systems management. The Flex System Manager offers comprehensive systems management for the entire IBM Flex System platform, which helps to increase uptime, reduce costs, and improve productivity through advanced server management capabilities.
Energy efficiency
The compute nodes offer the following energy-efficiency features to save energy, reduce operational costs, increase energy availability, and reduce environmental impact:
- The component-sharing design of the IBM Flex System chassis provides significant power and cooling savings.
- Support for IBM EnergyScale to dynamically optimize processor performance versus power consumption and system workload.
- Active Energy Manager provides advanced power management features with real-time energy monitoring, reporting, and capping.
- SSDs consume as much as 80% less power than traditional spinning 2.5-inch HDDs.
- The servers use hexagonal ventilation holes, a part of IBM Calibrated Vectored Cooling™ technology. Hexagonal holes can be grouped more densely than round holes, providing more efficient airflow through the system.
Locations of key components and connectors
Figure 2 shows the front of the p260. The p24L is similar.
Figure 2. Front view of the IBM Flex System p260 Compute Node
Figure 3 shows the locations of key components inside the p260.
Figure 3. Inside view of the IBM Flex System p260 Compute Node
Figure 4 shows the front of the p460.
Figure 4. Front view of the IBM Flex System p460 Compute Node
Figure 5 shows the locations of key components inside the p460.
Figure 5. Inside view of the IBM Flex System p460 Compute Node
Standard specifications
The following table lists the standard specifications.
Table 1. Standard specifications
Components | Specification |
Model numbers | IBM Flex System p24L Compute Node: 1457-7FL
IBM Flex System p260 Compute Node: 7895-22X, 23A, and 23X
IBM Flex System p460 Compute Node: 7895-42X and 43X. |
Form factor | p24L: Standard-width compute node.
p260: Standard-width compute node.
p460: Double-width compute node. |
Chassis support | IBM Flex System Enterprise Chassis. |
Processor | p24L: Two IBM POWER7 processors
p260: Two IBM POWER7 (model 22X) or POWER7+ (models 23A and 23X) processors.
p460: Four IBM POWER7 (model 42X) or POWER7+ (model 43X) processors.
POWER7 processors: Each processor is a single-chip module (SCM) that contains either eight cores (up to 3.55 GHz and 32 MB L3 cache) or four cores (3.3 GHz and 16 MB L3 cache). Each processor has 4 MB L3 cache per core. Integrated memory controller in each processor, each with four memory channels. Each memory channel operates at 6.4 Gbps. There is one GX++ I/O bus connection per processor. Supports SMT4 mode, which enables four instruction threads to run simultaneously per core. Uses 45 nm fabrication technology.
POWER7+ processors: Each processor is a single-chip module (SCM) that contains either eight cores (up to 4.1 GHz or 3.6 GHz and 80 MB L3 cache), four cores (4.0 GHz and 40 MB L3 cache), or two cores (4.0 GHz and 20 MB L3 cache). Each processor has 10 MB L3 cache per core. There is an integrated memory controller in each processor, each with four memory channels. Each memory channel operates at 6.4 Gbps. There is one GX++ I/O bus connection per processor. Supports SMT4 mode, which enables four instruction threads to run simultaneously per core. Uses 32 nm fabrication technology. |
Chipset | IBM P7IOC I/O hub. |
Memory | p24L: 16 DIMM sockets
p260: 16 DIMM sockets.
p460: 32 DIMM sockets.
RDIMM DDR3 memory supported. Integrated memory controller in each processor, each with four memory channels. Supports Active Memory Expansion with AIX 6.1 or later. All DIMMs operate at 1066 MHz. Both LP (low profile) and VLP (very low profile) DIMMs supported, although only VLP DIMMs are supported if internal HDDs are configured. The use of 1.8-inch solid-state drives allows the use of LP and VLP DIMMs. |
Memory maximums | p24L: 512 GB using 16x 32 GB DIMMs.
p260: 512 GB using 16x 32 GB DIMMs.
p460: 1 TB using 32x 32 GB DIMMs. |
Memory protection | ECC, chipkill. |
Disk drive bays | Two 2.5-inch non-hot-swap drive bays supporting 2.5-inch SAS HDD or 1.8-inch SATA SSD drives. If LP DIMMs are installed, then only 1.8-inch SSDs are supported. If VLP DIMMs are installed, then both HDDs and SSDs are supported. An HDD and an SSD cannot be installed together. |
Maximum internal storage | 1.8 TB using two 900 GB SAS HDD drives, or 354 GB using two 177 GB SSD drives. |
RAID support | RAID support via the operating system. |
Network interfaces | None standard. Optional 1Gb or 10Gb Ethernet adapters. |
PCI Expansion slots | p24L: Two I/O connectors for adapters. PCIe 2.0 x16 interface.
p260: Two I/O connectors for adapters. PCIe 2.0 x16 interface.
p460: Four I/O connectors for adapters. PCIe 2.0 x16 interface. |
Ports | One external USB port. |
Systems management | FSP, Predictive Failure Analysis, light path diagnostics panel, automatic server restart, Serial over LAN support. IPMI compliant. Support for IBM Flex System Manager, IBM Systems Director, and Active Energy Manager. |
Security features | Power-on password, selectable boot sequence. |
Video | None. Remote management via Serial over LAN and IBM Flex System Manager. |
Limited warranty | 3-year customer-replaceable unit and onsite limited warranty with 9x5/NBD. |
Operating systems supported | IBM AIX, IBM i, and Linux. See "Supported operating systems" for details. |
Service and support | Optional service upgrades are available through IBM ServicePacs®: 4-hour or 2-hour response time, 8-hour fix time, 1-year or 2-year warranty extension, remote technical support for IBM hardware and selected IBM and OEM software. |
Dimensions | p24L: Width 215 mm (8.5"), height 51 mm (2.0"), depth 493 mm (19.4").
p260: Width 215 mm (8.5"), height 51 mm (2.0"), depth 493 mm (19.4").
p460: Width 437 mm (17.2"), height 51 mm (2.0"), depth 493 mm (19.4"). |
Weight | p24L: Maximum configuration: 7.0 kg (15.4 lb).
p260: Maximum configuration: 7.0 kg (15.4 lb).
p460: Maximum configuration: 14.0 kg (30.6 lb). |
The compute nodes are shipped with the following items:
- Statement of Limited Warranty
- Important Notices
- Documentation CD that contains the Installation and User's Guide
Chassis support
The p24L, p260 and p460 are supported in the IBM Flex System Enterprise Chassis. Up to fourteen p24L or p260 compute nodes or up to seven p460 compute nodes (or a combination of all three) can be installed in the chassis in 10U of rack space. The actual number of compute nodes that can be installed in a chassis depends on these factors:
- The number of power supplies installed
- The capacity of the power supplies installed (2100 W or 2500 W)
- The power redundancy policy used (N+1 or N+N)
The following table provides guidelines on how many compute nodes can be installed; a simplified power-budgeting sketch follows the table. For more guidance, use the Power Configurator, found at the following website:
http://ibm.com/systems/bladecenter/resources/powerconfig.html
Table 2. Maximum number of compute nodes installable, based on the power supplies installed and the power redundancy policy used
Compute node | 2100 W: N+1, N=5 (6 power supplies) | 2100 W: N+1, N=4 (5 power supplies) | 2100 W: N+1, N=3 (4 power supplies) | 2100 W: N+N, N=3 (6 power supplies) | 2500 W: N+1, N=5 (6 power supplies) | 2500 W: N+1, N=4 (5 power supplies) | 2500 W: N+1, N=3 (4 power supplies) | 2500 W: N+N, N=3 (6 power supplies) |
p24L | 14 | 12 | 9 | 10 | 14 | 14 | 12 | 13 |
p260 | 14 | 12 | 9 | 10 | 14 | 14 | 12 | 13 |
p460 | 7 | 6 | 4 | 5 | 7 | 7 | 6 | 6 |
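A simplified way to think about these limits is as a power budget: the redundancy policy determines how many supplies can be counted toward node power, and the per-node draw divides into the remainder. The Python sketch below is a rough illustration only; the per-node wattage is a made-up placeholder, chassis and fan overhead are ignored, and the figures it produces will not reproduce Table 2. Use the IBM Power Configurator for real sizing.

```python
# Rough chassis power-budget sketch; not an IBM tool. The per-node wattage
# used below is a hypothetical placeholder, and chassis overhead is ignored.

def usable_power_w(psu_watts: int, psu_count: int, policy: str) -> int:
    """Power available to nodes after reserving supplies for redundancy.

    "N+1" keeps one supply in reserve; "N+N" keeps half of them in reserve.
    """
    if policy == "N+1":
        active = psu_count - 1
    elif policy == "N+N":
        active = psu_count // 2
    else:
        raise ValueError("unknown redundancy policy")
    return psu_watts * active

def max_nodes(psu_watts: int, psu_count: int, policy: str,
              node_watts: int, bay_limit: int) -> int:
    """Node count allowed by the power budget, capped by the available bays."""
    return min(bay_limit, usable_power_w(psu_watts, psu_count, policy) // node_watts)

if __name__ == "__main__":
    # Hypothetical 900 W per double-wide node; 7 double-wide bays in the chassis
    print(max_nodes(2100, 4, "N+1", node_watts=900, bay_limit=7))
    print(max_nodes(2500, 6, "N+N", node_watts=900, bay_limit=7))
```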
IBM PureFlex System
IBM PureFlex™ System consists of pre-defined, pre-configured components to simplify client purchasing and provide the total IBM Flex System integrated value proposition.
The IBM Flex System p24L, p260 and p460 Compute Nodes can be ordered as part of a PureFlex System solution. There are two PureFlex System offerings available:
- IBM PureFlex System Single-chassis for smaller installations (Feature #EFD1)
- IBM PureFlex System Multi-chassis for database or transactional systems (Feature #EFD3)
A PureFlex System solution consists of the following components:
- An IBM Flex System Compute Node, either the p260, p460, or x240
- An IBM Flex System Enterprise Chassis (7893-92X)
- An IBM Flex System Manager (7955-01M)
- An IBM Storwize V7000 Disk System (2076-124)
- Two IBM System Network 1455 Rack Switches G8264R Model 64C (with PureFlex System Enterprise only, with the p460)
- Two IBM 2498 SAN24B-4 Express Model B24 (with PureFlex System Enterprise only, with the p460)
- An IBM 42U rack (7953-94X)
Notes:
- The p260 and p24L cannot be used in an initial PureFlex System Multi-chassis configuration.
- The IBM Flex System p460 cannot be used in an initial PureFlex System Single-chassis configuration.
- An initial PureFlex System Multi-chassis requires at least two compute nodes.
Additional compute nodes, chassis, and IBM 42U racks can be ordered after the basic requirements for the PureFlex System solution are met. These additional orders will be indicated by feature number EFD4 (Expansion Option) or EFD5 (Custom Expansion).
Processor features
The compute nodes support the processor features listed in the following table. The selected feature code includes two or four processors, as indicated.
Table 3. Processor features
Feature code | Processor description |
IBM Flex System p260 Compute Node - model 7895-23X |
EPRD | 8-core 4.0 GHz POWER7+ Processor Module (two 4-core processors) |
EPRB | 16-core 3.6 GHz POWER7+ Processor Module (two 8-core processors) |
EPRA | 16-core 4.1 GHz POWER7+ Processor Module (two 8-core processors) |
IBM Flex System p260 Compute Node - model 7895-23A |
EPRC | 4-core 4.0 GHz POWER7+ Processor Module (two 2-core processors) |
IBM Flex System p260 Compute Node - model 7895-22X |
EPR1 | 8-core 3.3 GHz POWER7 Processor Module (two 4-core processors) |
EPR3 | 16-core 3.2 GHz POWER7 Processor Module (two 8-core processors) |
EPR5 | 16-core 3.55 GHz POWER7 Processor Module (two 8-core processors) |
IBM Flex System p460 Compute Node - model 7895-42X |
EPR2 | 16-core 3.3 GHz POWER7 Processor Module (four 4-core processors) |
EPR4 | 32-core 3.2 GHz POWER7 Processor Module (four 8-core processors) |
EPR6 | 32-core 3.55 GHz POWER7 Processor Module (four 8-core processors) |
IBM Flex System p460 Compute Node - model 7895-43X |
EPRK | 16-core 4.0 GHz POWER7+ Processor Module (four 4-core processors) |
EPRH | 32-core 3.6 GHz POWER7+ Processor Module (four 8-core processors) |
EPRJ | 32-core 4.1 GHz POWER7+ Processor Module (four 8-core processors) |
IBM Flex System p24L Compute Node - model 1457-7FL |
EPR7 | 12-core 3.72 GHz POWER7 Processor Module (two 6-core processors) |
EPR8 | 16-core 3.2 GHz POWER7 Processor Module (two 8-core processors) |
EPR9 | 16-core 3.55 GHz POWER7 Processor Module (two 8-core processors) |
Memory features
IBM DDR3 memory is compatibility tested and tuned for optimal performance and throughput. IBM memory specifications are integrated into the light path diagnostics for immediate system performance feedback and optimum system uptime. From a service and support standpoint, IBM memory automatically assumes the IBM system warranty, and IBM provides service and support worldwide.
The compute nodes support low profile (LP) or very low profile (VLP) DDR3 memory RDIMMs. If LP memory is used, 2.5-inch HDDs are not supported in the system due to physical space restrictions. However, 1.8-inch SSDs are still supported. If VLP memory is used, either 2.5-inch HDDs or 1.8-inch SSDs are supported.
The p260 and p24L support up to 16 DIMMs. The p460 supports up to 32 DIMMs. Each processor has four memory channels, and there are two DIMMs per channel. All supported DIMMs operate at 1066 MHz.
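The socket counts and the memory maximums quoted in Table 1 follow directly from this layout. A quick sketch of the arithmetic, for illustration only:

```python
# Socket-count and capacity arithmetic for the p24L/p260 (two processors)
# and p460 (four processors); mirrors the figures quoted in Table 1.

def dimm_sockets(processors: int, channels_per_processor: int = 4,
                 dimms_per_channel: int = 2) -> int:
    """Each POWER7/POWER7+ processor drives four memory channels with
    two DIMM sockets per channel."""
    return processors * channels_per_processor * dimms_per_channel

def max_memory_gb(processors: int, largest_dimm_gb: int = 32) -> int:
    """Capacity with the largest supported RDIMM (32 GB) in every socket."""
    return dimm_sockets(processors) * largest_dimm_gb

if __name__ == "__main__":
    print(dimm_sockets(2), max_memory_gb(2))  # p24L/p260: 16 sockets, 512 GB
    print(dimm_sockets(4), max_memory_gb(4))  # p460: 32 sockets, 1024 GB (1 TB)
    # Best performance: populate one DIMM per channel first, that is,
    # sets of eight DIMMs on a two-processor node.
```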
The following table lists memory features available for the compute nodes. DIMMs are ordered and can be installed two at a time, but to maximize memory performance, install them in sets of eight (one for each of the memory channels).
Table 4. Memory features
Feature code | Description | Form factor* | p24L 7FL | p260 22X | p260 23A | p260 23X | p460 42X | p460 43X | p270 24X |
EM04* | 2x 2 GB DDR3 RDIMM 1066 MHz | LP | Yes | Yes | No | No | Yes | No | No |
8196 | 2x 4 GB DDR3 RDIMM 1066 MHz | VLP | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
8199 | 2x 8 GB DDR3 RDIMM 1066 MHz | VLP | Yes | Yes | No | No | Yes | No | No |
EEMD | 2x 8 GB DDR3 RDIMM 1066 MHz | VLP | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
8145* | 2x 16 GB DDR3 RDIMM 1066 MHz | LP | Yes | Yes | No | No | Yes | No | No |
EEME* | 2x 16 GB DDR3 RDIMM 1066 MHz | LP | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
EEMF* | 2x 32 GB DDR3 RDIMM 1066 MHz | LP | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
* If 2.5-inch HDDs are installed, the low-profile DIMM features (EM04, 8145, EEME, and EEMF) cannot be used.
Internal disk storage options
The compute nodes have two non-hot-swap drive bays attached to the cover of the system. Either two 2.5-inch SAS HDDs or two 1.8-inch SATA SSDs can be installed. If 2.5-inch HDDs are installed, then LP memory DIMMs are not supported, only VLP DIMMs. The use of 1.8-inch SSDs does not limit the memory DIMMs used. SSDs and HDDs cannot be mixed.
The following table lists the supported drive options.
Table 5. Drive options for internal disk storage
Feature code | Description | Maximum supported |
2.5-inch SAS hard disk drives (HDDs) |
8274 | 300 GB 10K RPM non-hot-swap 6 Gbps SAS | 2 |
8276 | 600 GB 10K RPM non-hot-swap 6 Gbps SAS | 2 |
8311 | 900 GB 10K RPM non-hot-swap 6 Gbps SAS | 2 |
1.8-inch SATA solid-state drives (SSDs) |
8207 | 177 GB SATA non-hot-swap SSD | 2 |
The choice of drive also determines what cover is used for the compute node, because the drives are attached to the cover. The following table lists the options.
Table 6. Compute node cover options
Feature code | Description |
IBM Flex System p24L and p260 Compute Nodes |
7069 | Top cover with HDD connectors |
7068 | Top cover with SSD connectors |
7067 | Top cover for no drives |
IBM Flex System p460 Compute Node |
7066 | Top cover with HDD connectors |
7065 | Top cover with SSD connectors |
7005 | Top cover for no drives |
Internal tape drives
The server does not support an internal tape drive. However, it can be attached to external tape drives using Fibre Channel connectivity.
Optical drives
The server does not support an internal optical drive option. However, you can connect an external USB optical drive, such as IBM and Lenovo part number 73P4515 or 73P4516.
I/O architecture
The p24L and p260 have two I/O expansion connectors for attaching I/O adapter cards, and the p460 has four I/O expansion connectors. All I/O adapters are the same shape and can be used in any location.
The following figures show the location of the I/O adapters in the p24L, p260 and p460.
Figure 6. Location of the I/O adapter slots in the IBM Flex System p24L and p260 Compute Nodes
Figure 7. Location of the I/O adapter slots in the IBM Flex System p460 Compute Node
A compatible switch or pass-through module must be installed in the corresponding I/O bays in the chassis, as indicated in the following table. Installing two switches means that all ports of the adapter are enabled, which improves performance and network availability.
Table 7. Adapter to I/O bay correspondence
I/O adapter slot in the server | Port on the adapter* | I/O module bay 1 | I/O module bay 2 | I/O module bay 3 | I/O module bay 4 |
Slot 1 | Port 1 | Yes | No | No | No |
Port 2 | No | Yes | No | No |
Port 3 (for 4- and 8-port cards) | Yes | No | No | No |
Port 4 (for 4- and 8-port cards) | No | Yes | No | No |
Port 5 (for 8-port cards) | Yes | No | No | No |
Port 6 (for 8-port cards) | No | Yes | No | No |
Port 7 (for 8-port cards)** | Yes | No | No | No |
Port 8 (for 8-port cards)** | No | Yes | No | No |
Slot 2 | Port 1 | No | No | Yes | No |
Port 2 | No | No | No | Yes |
Port 3 (for 4- and 8-port cards) | No | No | Yes | No |
Port 4 (for 4- and 8-port cards) | No | No | No | Yes |
Port 5 (for 8-port cards) | No | No | Yes | No |
Port 6 (for 8-port cards) | No | No | No | Yes |
Port 7 (for 8-port cards)** | No | No | Yes | No |
Port 8 (for 8-port cards)** | No | No | No | Yes |
Slot 3 (p460 Compute Node only) | Port 1 | Yes | No | No | No |
Port 2 | No | Yes | No | No |
Port 3 (for 4- and 8-port cards) | Yes | No | No | No |
Port 4 (for 4- and 8-port cards) | No | Yes | No | No |
Port 5 (for 8-port cards) | Yes | No | No | No |
Port 6 (for 8-port cards) | No | Yes | No | No |
Port 7 (for 8-port cards)** | Yes | No | No | No |
Port 8 (for 8-port cards)** | No | Yes | No | No |
Slot 4 (p460 Compute Node only) | Port 1 | No | No | Yes | No |
Port 2 | No | No | No | Yes |
Port 3 (for 4- and 8-port cards) | No | No | Yes | No |
Port 4 (for 4- and 8-port cards) | No | No | No | Yes |
Port 5 (for 8-port cards) | No | No | Yes | No |
Port 6 (for 8-port cards) | No | No | No | Yes |
Port 7 (for 8-port cards)** | No | No | Yes | No |
Port 8 (for 8-port cards)** | No | No | No | Yes |
* The use of adapter ports 3, 4, 5, and 6 requires the appropriate port upgrades for the installed switches. The EN4091 Pass-thru supports only ports 1 and 2 (and only when two Pass-thru modules are installed).
** Adapter ports 7 and 8 are reserved for future use. The chassis supports all eight ports but there are currently no switches available that connect to these ports.
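The slot-and-port pattern in Table 7 can be stated compactly: slots 1 and 3 connect to I/O bays 1 and 2, slots 2 and 4 connect to bays 3 and 4, and within each pair the odd-numbered ports use the first bay and the even-numbered ports the second. The following Python sketch encodes that rule; the helper name is ours, for illustration only.

```python
# Encodes the adapter-port-to-I/O-bay pattern shown in Table 7; illustrative only.

def io_module_bay(slot: int, port: int) -> int:
    """Chassis I/O module bay that an adapter port connects to.

    Slots 1 and 3 map to bays 1-2; slots 2 and 4 map to bays 3-4.
    Odd ports use the first bay of the pair, even ports the second.
    """
    if slot not in (1, 2, 3, 4) or port not in range(1, 9):
        raise ValueError("slot must be 1-4 and port must be 1-8")
    first_bay = 1 if slot in (1, 3) else 3
    return first_bay if port % 2 == 1 else first_bay + 1

if __name__ == "__main__":
    print(io_module_bay(1, 1))  # bay 1
    print(io_module_bay(2, 4))  # bay 4
    print(io_module_bay(3, 7))  # bay 1 (ports 7 and 8 are reserved for future use)
```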
The following figure shows the location of the switch bays in the IBM Flex System Enterprise Chassis.
Figure 8. Location of the switch bays in the IBM Flex System Enterprise Chassis
The following figure shows how two-port adapters are connected to switches installed in the chassis.
Figure 9. Logical layout of the interconnects between I/O adapters and I/O modules
Network adapters
The compute nodes have no network ports integrated on the system board, which allows flexibility in choosing network connectivity through adapters. The following table lists the supported network adapters and the slots in which each is supported.
Table 8. Network adapters
Feature code | Description | Slot 1 | Slot 2 | Slot 3 (p460) | Slot 4 (p460) |
10 Gb Ethernet |
EC24 | IBM Flex System CN4058 8-port 10Gb Converged Adapter | Yes | Yes | Yes | Yes |
EC26 | IBM Flex System EN4132 2-port 10Gb RoCE Adapter | No | Yes | Yes | Yes |
1762 | IBM Flex System EN4054 4-port 10Gb Ethernet Adapter | Yes | Yes | Yes | Yes |
1 Gb Ethernet |
1763 | IBM Flex System EN2024 4-port 1Gb Ethernet Adapter | Yes | Yes | Yes | Yes |
InfiniBand |
1761 | IBM Flex System IB6132 2-port QDR InfiniBand Adapter | No | Yes | No | Yes |
When adapters are installed in slots, ensure that compatible switches are installed in the corresponding bays of the chassis:
- IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch (#ESW2)
- IBM Flex System Fabric EN4093R 10Gb Scalable Switch (#ESW7)
- IBM Flex System Fabric EN4093 10Gb Scalable Switch (#3593)
- IBM Flex System Fabric SI4093 System Interconnect Module (#ESWA)
- IBM Flex System EN4091 10Gb Ethernet Pass-thru (#3700)
- IBM Flex System EN2092 1Gb Ethernet Scalable Switch (#3598)
Storage host bus adapters
Table 9 lists the storage host bus adapters (HBAs) supported by the compute nodes.
Table 9. Storage adapters
Feature code | Description | Slot 1 | Slot 2 | Slot 3 (p460) | Slot 4 (p460) |
Fibre Channel |
1764 | IBM Flex System FC3172 2-port 8Gb FC Adapter | No | Yes | No | Yes |
EC23 | IBM Flex System FC5052 2-port 16Gb FC Adapter | No | Yes | No | Yes |
EC2E | IBM Flex System FC5054 4-port 16Gb FC Adapter | No | Yes | No | Yes |
Power supplies
Server power is derived from the power supplies installed in the chassis. There are no server options regarding power supplies.
Integrated virtualization
The compute nodes support PowerVM virtualization capabilities for AIX, IBM i, and Linux environments. PowerVM contains the following features:
- Support for the following number of maximum virtual servers (or logical partitions, LPARs):
- p24L: Up to 160 virtual servers
- p260: Up to 160 virtual servers
- p460: Up to 320 virtual servers
- Role-based access control (RBAC)
RBAC brings an added level of security and flexibility to the administration of the Virtual I/O Server (VIOS), which is part of PowerVM. With RBAC, you can create a set of authorizations for the user management commands, assign those authorizations to a role (for example, UserManagement), and give that role to other users. A user with this role can manage the users on the system but has no further access. RBAC allows the VIOS to split up management functions that currently can be performed only by the padmin user, providing better security by granting users only the access they need, and simplifying the management and auditing of system functions.
- Suspend/resume
Suspend/resume provides long-term suspension (greater than five to ten seconds) of partitions, saving partition state (that is, memory, NVRAM, and VSP state) to persistent storage. Suspending a partition frees the server resources that it was using; the partition state can later be restored and its operation (and its applications) resumed, either on the same server or on another server.
- Shared storage pools
VIOS allows the creation of storage pools that can be accessed by VIOS partitions that are deployed across multiple Power Systems servers. Therefore, an assigned allocation of storage capacity can be efficiently managed and shared. Up to four systems can participate in a Shared Storage Pool configuration. This can improve efficiency, agility, scalability, flexibility, and availability.
The Storage Mobility feature allows data to be moved to new storage devices within Shared Storage Pools, while the virtual servers remain completely active and available. The VM Storage Snapshots/Rollback feature allows multiple point-in-time snapshots of individual virtual server storage, and these copies can be used to quickly roll back a virtual server to a particular snapshot image. The VM Storage Snapshots/Rollback functionality can be used to capture a VM image for cloning purposes or before applying maintenance.
- Thin provisioning
VIOS supports highly efficient storage provisioning, whereby virtualized workloads in VMs can have storage resources from a shared storage pool dynamically added or released, as required.
- VIOS grouping
Multiple VIOS partitions can utilize a common shared storage pool to more efficiently utilize limited storage resources and simplify the management and integration of storage subsystems.
- Network node balancing for redundant Shared Ethernet Adapters (SEAs)
This is a useful function when multiple VLANs are being supported in a dual VIOS environment. The implementation is based on a more granular treatment of trunking, where there are different trunks defined for the SEAs on each VIOS. Each trunk serves different VLANs, and each VIOS can be the primary for a different trunk. This occurs with just one SEA definition on each VIOS.
Light path diagnostics
For quick problem determination when located physically at the server, the compute nodes offer a three-step guided path:
- The fault LED on the front panel
- The light path diagnostics panel, shown in the following figure
- LEDs next to key components on the system board
The light path diagnostics panel is visible when you remove the server from the chassis. The panel is located on the top right-hand side of the compute node, as shown in the following figure.
Figure 10. Location of light path diagnostics panel
To illuminate the light path diagnostics LEDs, power off the compute node, slide it out of the chassis, and press the power button. The power button doubles as the light path diagnostics remind button when the server is removed from the chassis.
The meanings of the LEDs in the light path diagnostics panel are listed in the following table.
Table 10. Light path diagnostics panel LEDs
LED | Meaning |
LP | The light path diagnostics panel is operational. |
S BRD | A system board error is detected. |
MGMT | There is an error with the FSP. |
D BRD | There is a fault with the disk drive board. |
DRV 1 | There is a drive 1 fault. |
DRV 2 | There is a drive 2 fault. |
ETE | A fault has been detected with the expansion unit (p260 only). |
If problems occur, the light path diagnostics LEDs assist in identifying the subsystem involved. To illuminate the LEDs with the compute node removed, press the power button on the front panel. This temporarily illuminates the LEDs of the troubled subsystem to direct troubleshooting efforts towards a resolution.
Typically, an administrator has already obtained this information from the IBM Flex System Manager or Chassis Management Module before removing the node, but having the LEDs helps with repairs and troubleshooting if onsite assistance is needed.
Supported operating systems
The p260 model 22X and p460 model 42X support the following operating systems:
- AIX V7.1 with the 7100-01 Technology Level with Service Pack 3 with APAR IV14284
- AIX V7.1 with the 7100-01 Technology Level with Service Pack 4, or later
- AIX V7.1 with the 7100-00 Technology Level with Service Pack 6, or later
- AIX V6.1 with the 6100-07 Technology Level, with Service Pack 3 with APAR IV14283
- AIX V6.1 with the 6100-07 Technology Level, with Service Pack 4, or later
- AIX V6.1 with the 6100-06 Technology Level with Service Pack 8, or later
- AIX V5.3 with the 5300-12 Technology Level with Service Pack 6, or later (AIX 5L V5.3 Service Extension is required)
- IBM i 6.1 with i 6.1.1 machine code, or later; Requires VIOS
- IBM i 7.1 TR4, or later; Requires VIOS
- VIOS 2.2.1.4, or later
- SUSE Linux Enterprise Server 11 Service Pack (SP) 2 for POWER
- Red Hat Enterprise Linux 5.7, for POWER, or later
- Red Hat Enterprise Linux 6.2, for POWER, or later
The p24L supports the following operating systems:
- Virtual I/O Server 2.2.1.4 or later
- SUSE Linux Enterprise Server 11 SP 2 for POWER
- Red Hat Enterprise Linux 5.7 for POWER
- Red Hat Enterprise Linux 6.2 for POWER
The p260 model 23X supports the following operating systems:
- IBM i 6.1 with i 6.1.1 machine code, or later; Requires VIOS
- IBM i 7.1 TR5, or later; Requires VIOS
- VIOS 2.2.2.1 or later
- VIOS 2.2.1.5 or later
- AIX V7.1 with the 7100-02 Technology Level or later
- AIX V7.1 with the 7100-01 Technology Level with Service Pack 7 or later
- AIX V6.1 with the 6100-08 Technology Level or later
- AIX V6.1 with the 6100-07 Technology Level, with Service Pack 7, or later
- AIX V6.1 with the 6100-06 Technology Level with Service Pack 11, or later
- AIX V5.3 with the 5300-12 Technology Level with Service Pack 7, or later (AIX 5L V5.3 Service Extension is required)
- SUSE Linux Enterprise Server 11 Service Pack (SP) 2 for POWER
- Red Hat Enterprise Linux 5.7 for POWER
- Red Hat Enterprise Linux 6.2 for POWER
The p260 model 23A and p460 model 43X support the following operating systems:
- AIX V7.1 with the 7100-02 Technology Level with Service Pack 3 or later
- AIX V6.1 with the 6100-08 Technology Level with Service Pack 3 or later
- AIX V5.3 Technology Level Support offering with the Service Extension
- VIOS 2.2.2.3 or later
- IBM i 6.1 with i 6.1.1 machine code, or later; Requires VIOS
- IBM i 7.1 TR6 or later; Requires VIOS
- SUSE Linux Enterprise Server 11 Service Pack (SP) 2 for POWER
- Red Hat Enterprise Linux 6.4 for POWER
Note: Support for some of these operating system versions will be post general availability. See the IBM ServerProven® website for the latest information about the specific versions and service levels supported and any other prerequisites:
http://www.ibm.com/systems/info/x86servers/serverproven/compat/us/nos/matrix.shtml
Physical specifications
Dimensions and weight of the p24L and p260:
Width: 215 mm (8.5”)
Height: 51 mm (2.0”)
Depth: 493 mm (19.4”)
Maximum configuration: 7.0 kg (15.4 lb)
Dimensions and weight of the p460:
Width: 437 mm (17.2")
Height: 51 mm (2.0”)
Depth: 493 mm (19.4”)
Maximum configuration: 14.0 kg (30.6 lb)
Supported environment
The IBM Flex System p24L, p260 and p460 compute nodes and the IBM Flex System Enterprise Chassis comply with ASHRAE Class A3 specifications.
This is the supported operating environment (an illustrative range check follows the list):
- 5 - 40 °C (41 - 104 °F) at 0 - 914 m (0 - 3,000 ft)
- 5 - 28 °C (41 - 82 °F) at 914 - 3,050 m (3,000 - 10,000 ft)
- Relative humidity: 8 - 85%
- Maximum altitude: 3,050 m (10,000 ft)
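The envelope above includes an altitude derating: above 914 m the maximum supported ambient temperature drops from 40 °C to 28 °C. A small illustrative check of those limits follows; the helper is ours, not an IBM tool.

```python
# Checks a set of ambient conditions against the supported envelope listed
# above; the thresholds mirror the specifications, the helper itself is only
# an illustration.

def within_operating_envelope(temp_c: float, altitude_m: float,
                              humidity_pct: float) -> bool:
    """True if the conditions fall inside the supported operating envelope."""
    if not 8 <= humidity_pct <= 85:
        return False
    if not 0 <= altitude_m <= 3050:
        return False
    max_temp_c = 40 if altitude_m <= 914 else 28   # derated above 914 m (3,000 ft)
    return 5 <= temp_c <= max_temp_c

if __name__ == "__main__":
    print(within_operating_envelope(35, 500, 50))    # True
    print(within_operating_envelope(35, 2000, 50))   # False: limit is 28 degC above 914 m
```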
Warranty options
The IBM Flex System p24L, p260 and p460 Compute Nodes have a three-year onsite warranty with 9x5 next-business-day terms. IBM offers warranty service upgrades through IBM ServicePac, discussed in this section. IBM ServicePac is a series of prepackaged warranty maintenance upgrades and post-warranty maintenance agreements with a well-defined scope of services, including service hours, response time, term of service, and service agreement terms and conditions.
IBM ServicePac offerings are country-specific. That is, each country might have its own service types, service levels, response times, and terms and conditions, and not all types of ServicePac are available in every country. For more information about the IBM ServicePac offerings available in your country, visit the IBM ServicePac Product Selector at
https://www-304.ibm.com/sales/gss/download/spst/servicepac.
Table 11 explains warranty service definitions in more detail.
Table 11. Warranty service definitions
Term | Description |
IBM onsite repair (IOR) | A service technician will come to the server's location for equipment repair. |
24x7x2 hour | A service technician is scheduled to arrive at your customer’s location within two hours after remote problem determination is completed. We provide 24-hour service, every day, including IBM holidays. |
24x7x4 hour | A service technician is scheduled to arrive at your customer’s location within four hours after remote problem determination is completed. We provide 24-hour service, every day, including IBM holidays. |
9x5x4 hour | A service technician is scheduled to arrive at your customer’s location within four business hours after remote problem determination is completed. We provide service from 8:00 a.m. to 5:00 p.m. in the customer's local time zone, Monday through Friday, excluding IBM holidays. If after 1:00 p.m. it is determined that on-site service is required, the customer can expect the service technician to arrive the morning of the following business day. For noncritical service requests, a service technician will arrive by the end of the following business day. |
9x5 next business day | A service technician is scheduled to arrive at your customer’s location on the business day after we receive your call, following remote problem determination. We provide service from 8:00 a.m. to 5:00 p.m. in the customer's local time zone, Monday through Friday, excluding IBM holidays. |
In general, these are the types of IBM ServicePac warranty and maintenance service upgrades:
- One, two, three, four, or five years of 9x5 or 24x7 service coverage
- Onsite repair from next-business-day to four or two hours
- One or two years of warranty extension
Regulatory compliance
The servers conform to the following standards:
- ASHRAE Class A3
- U.S.: FCC - Verified to comply with Part 15 of the FCC Rules Class A
- Canada: ICES-004, issue 3 Class A
- EMEA: EN55022: 2006 + A1:2007 Class A
- EMEA: EN55024: 1998 + A1:2001 + A2:2003
- Australia and New Zealand: CISPR 22, Class A
- U.S.: (UL Mark) UL 60950-1 1st Edition
- CAN: (cUL Mark) CAN/CSA22.2 No.60950-1 1st Edition
- Europe: EN 60950-1:2006+A11:2009
- CB: IEC60950-1, 2nd Edition
- Russia: (GOST Mark) IEC60950-1
Related publications and links
Announcement letters:
For more information, see the following resources: