IBM Software Defined Infrastructure for Big Data Analytics Workloads

An IBM Redbooks publication

Published 29 June 2015

ISBN-10: 0738440779
ISBN-13: 9780738440774
IBM Form #: SG24-8265-00
(178 pages)

Authors: Dino Quintero, Daniel de Souza Casali, Marcelo Correia Lima, Istvan Gabor Szabo, Maciej Olejniczak, Tiago Rodrigues de Mello, Nilton Carlos dos Santos

Abstract

This IBM® Redbooks® publication documents how IBM Platform Computing, with its IBM Platform Symphony® MapReduce framework, IBM Spectrum Scale (based upon IBM GPFS™), IBM Platform LSF®, and the Application Service Controller for Platform Symphony work together as an infrastructure to manage not just Hadoop-related offerings, but many popular industry offerings, such as Apache Spark, Storm, MongoDB, Cassandra, and so on.

It describes the different ways to run Hadoop in a big data environment, and demonstrates how IBM Platform Computing solutions, such as Platform Symphony and Platform LSF with its MapReduce Accelerator, can improve the performance and agility of running Hadoop on the distributed workload managers offered by IBM. This information is for technical professionals (consultants, technical support staff, IT architects, and IT specialists) who are responsible for delivering cost-effective cloud services and big data solutions on IBM Power Systems™ to help uncover insights in their clients' data so they can optimize product development and business results.

Table of contents

Chapter 1. Introduction to big data
Chapter 2. Big data, analytics, and risk calculation software portfolio
Chapter 3. IBM Platform Symphony with Application Service Controller
Chapter 4. Mixed IBM Power Systems and Intel environment for big data
Chapter 5. IBM Spectrum Scale for big data environments
Chapter 6. IBM Application Service Controller in a mixed environment
Chapter 7. IBM Platform Computing cloud services
