
As a charitable service-based nonprofit organization (NPO) coordinating individuals, businesses, academia and governments with interests in High Technology, Big Data and Cybersecurity, we bridge the global digital divide by providing supercomputing access, applied research, training, tools and other digital incentives “to empower the underserved and disadvantaged.”

PRESENTERS

Peter Cole, Senior Solutions Architect

Title: Networking Disaggregation - New Trends in the Commoditization of Networking within the Computer Industry
Time: 1130 Tuesday 20th

Abstract:
Networking disaggregation along two dimensions, (i) the data forwarding plane versus the control plane and (ii) hardware versus software, has benefited data center customers by (a) reducing capital and operational expenses, (b) preventing vendor lock-in, giving buyers more choice and greater control over their networks, and (c) increasing programmability and visibility into the network. This paradigm shift has produced a highly commoditized market for network hardware and a myriad of products and solutions in the networking software stack. The challenge brought about by disaggregation, however, is that customers must be able to design their networking fabric around off-the-shelf hardware, understanding the potential of programmable silicon and the protocol support needed to design, architect, and manage a network that is optimal and scales seamlessly to accommodate ever-growing networking needs.

In this paper, we take as a case study a specific set of networking requirements, such as port density, latency, and throughput characteristics, and identify a design paradigm that could be applied universally across different domains to build a network from off-the-shelf hardware with the right topology and a software stack offering the appropriate protocol support.
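
The design exercise the abstract alludes to can be made concrete with a small sizing calculation. The Python sketch below (not from the talk; the port counts and oversubscription ratio are illustrative assumptions) estimates how many identical off-the-shelf switches a two-tier leaf-spine fabric needs for a given server count.

```python
# Hypothetical leaf-spine (two-tier Clos) sizing helper built around identical
# commodity switches. All inputs are illustrative placeholders.
import math

def size_leaf_spine(servers, switch_ports=32, oversubscription=3.0):
    """Estimate leaf and spine switch counts for a two-tier Clos fabric."""
    # Split each leaf's ports between server downlinks and spine uplinks so that
    # downlinks / uplinks roughly matches the target oversubscription ratio.
    uplinks = max(1, round(switch_ports / (1 + oversubscription)))
    downlinks = switch_ports - uplinks

    leaves = math.ceil(servers / downlinks)
    spines = uplinks                 # one uplink from every leaf to every spine
    if leaves > switch_ports:
        raise ValueError("too many leaves per spine: a third tier is needed")
    return {"leaves": leaves, "spines": spines,
            "server_ports": leaves * downlinks,
            "oversubscription": downlinks / uplinks}

if __name__ == "__main__":
    print(size_leaf_spine(servers=600, switch_ports=32, oversubscription=3.0))
```

With 32-port switches and 3:1 oversubscription, 600 servers would need roughly 25 leaves and 8 spines; changing the switch SKU or the ratio simply reruns the arithmetic, which is the kind of trade-off the talk's design paradigm is meant to systematize.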



Eric Collins, CTO PCPC Direct

Title: A Year of Trends and Directions in High Performance Computing for Seismic Processing
Time: 1700 Monday 19th, 1330 Wednesday 21st

Abstract:
This technical overview will touch on the computing side of the seismic industry, covering processors, interconnect fabrics, and industry futures.



 

John Fragalla, Principal Engineer, Seagate Systems Group – HPC, Seagate

Title: Enterprise-Ready Parallel Storage with Leading Energy Efficiency & Performance
Time: 1400 Monday 19th & Tuesday 20th, 1300 Wednesday 21st

Abstract:
Many systems currently under deployment or in the planning stages are placing ever-increasing requirements on storage density, capacity, manageability, and power consumption. While performance remains of utmost concern for the bulk of Top500 systems, reliability, availability, and supportability are the new critical keywords for the industry. In addition, enterprise features involving data integrity, data management, and automated archiving are raising the bar for companies delivering fully integrated and engineered solutions. This talk will touch on some of the requirements recently put forward by the largest storage procurements in HPC in 2014, and on what Seagate has accomplished in designing and delivering a solution that maximizes density and performance while fitting into a restrictive power envelope. Seagate will describe how we can ship a 45 PB+ usable storage volume delivering around 500 GB/s using less than 200 kW of power, based on Lustre 2.5.x.
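
For context on those headline numbers, the short snippet below works out some simple efficiency ratios implied by the quoted figures (45 PB usable, ~500 GB/s, <200 kW). It is plain arithmetic; the 200 kW value is the quoted upper bound, not a measured draw.

```python
# Back-of-the-envelope ratios from the figures quoted in the abstract.
usable_pb     = 45     # usable capacity, PB
bandwidth_gbs = 500    # aggregate throughput, GB/s
power_kw      = 200    # power envelope, kW (abstract says "less than")

print(f"Throughput per kW : {bandwidth_gbs / power_kw:.2f} GB/s per kW")
print(f"Capacity per kW   : {usable_pb * 1000 / power_kw:.0f} TB per kW")
scan_s = usable_pb * 1e6 / bandwidth_gbs
print(f"Full-volume scan  : {scan_s / 3600:.1f} hours to read 45 PB once at 500 GB/s")
```

That works out to about 2.5 GB/s and 225 TB of usable capacity per kilowatt, and roughly a day to stream the entire volume once.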



Matt Garrison

Abstract:
In this talk, we will explore the advantages of CPU-based “Software-Defined Visualization” for large geospatial data. Ray tracing algorithms can tackle large and small data with roughly the same performance, and CPU memory is increasingly cheap. Consequently, an efficient CPU ray tracing framework can effectively replace GPUs for many gigascale and terascale workloads, without relying on simplification or data-parallel techniques. Ray tracing allows for interactive and production-quality rendering in the same software framework. Intel’s OSPRay ray tracer, based on the Intel SPMD Program Compiler (ISPC), provides a clean API for prototyping such problems. This talk discusses potential impacts on GIS software, considering work from the OSPRay team in large-scale volume visualization for fault/horizon modeling, and our own work at the University of Utah using OSPRay as an efficient backend for LIDAR and UAV data.



Claire Giordano &
Shawn Stephens

Title: 5 Ways That Object Storage Will Help You Get More, Higher, Bigger, Better
Time: 0900 Wednesday 21st

Claire Giordano is Senior Director for Emerging Storage Markets at Quantum, focused on solving the demands of big data use cases, including oil & gas, geospatial, and cybersecurity. Ms. Giordano has over 20 years of experience in product management, product marketing, and engineering. Ms. Giordano holds a Sc.B. degree from Brown University in Applied Mathematics and Computer Science.

Shawn Stephens is a Systems Engineer supporting oil & gas accounts for HGST. Prior to HGST, Shawn worked at a seismic processing company supporting a large HPC and storage environment, where he managed the acceleration of seismic processing workflow using SSDs, improving an I/O intensive workflow from 3 weeks to 1 day. Earlier in his career, Shawn managed over 4PB of globally-deployed storage for a top-tier oil company, along with a global deployment of HPC clusters at 20 sites and over 2,000 nodes. Shawn has been working with applications, storage and HPC in oil & gas exploration for over 16 years.

Abstract:
Seismic data holds an ocean of valuable information, if you can harness it. With massive increases in the amount of pre-stack data collected and advances in the software for processing, interpretation, and modeling, data growth in the oil & gas world is adding up. Today's upstream processes need storage infrastructure that can meet the needs of today's seismic workflows, which means oil & gas companies need object storage technology. Come learn five ways next-generation object storage helps make seismic workflows more efficient and more affordable.



Steve Gombosi, Senior Application Engineer, Altair

Title: Using Burst Buffers with Workflows to Speed Computations
Time: 0930 Tuesday 20th

Abstract:
PBS Professional is responsible for optimally scheduling jobs in all types of high-performance computing and data environments, which include elements such as Xeon Phi, GPUs, burst buffers, storage, and networks, alongside common resources like CPUs and memory. A burst buffer is a technology that allows for faster I/O, letting an application return to computation sooner. This presentation will explain how PBS Professional can be customized to use workflows to create the burst buffer environment, execute the user's job, and tear down the environment. Our HPC implementation expert will also explain the benefits of this workflow methodology, including improved resource utilization, better usability, and the flexibility for IT administrators to extend capabilities.
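
As a rough sketch of the setup/compute/teardown pattern described above (not Altair's actual implementation), the snippet below chains three PBS jobs with qsub job dependencies: a stage-in script prepares the burst buffer, the user's job runs only if staging succeeds, and a teardown job runs afterwards. The script names are hypothetical placeholders; real deployments would typically rely on PBS hooks and site-specific burst buffer tooling.

```python
# Sketch: chain burst-buffer setup, compute, and teardown as dependent PBS jobs.
# Script names (bb_setup.sh, user_job.sh, bb_teardown.sh) are hypothetical.
import subprocess

def qsub(script, depends_on=None, dep_type="afterok"):
    """Submit a PBS job, optionally dependent on another job, and return its ID."""
    cmd = ["qsub"]
    if depends_on:
        cmd += ["-W", f"depend={dep_type}:{depends_on}"]
    cmd.append(script)
    return subprocess.check_output(cmd, text=True).strip()

setup_id    = qsub("bb_setup.sh")                            # create/stage the burst buffer
compute_id  = qsub("user_job.sh", depends_on=setup_id)       # runs only if setup succeeds
teardown_id = qsub("bb_teardown.sh", depends_on=compute_id,
                   dep_type="afterany")                      # always clean up, even on failure

print(f"Submitted chain: {setup_id} -> {compute_id} -> {teardown_id}")
```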



Gerard Gorman &
Renato Miceli

Title: On the route to sustainable, performance-portable software for hydrocarbon energy exploration
Time: 1530 Tuesday 20th

Abstract:
Energy exploration relies on accurate subsurface imaging techniques such as Full Waveform Inversion. Such is the data and compute intensity of this problem that subsurface imaging is arguably the largest big data and high performance computing application in the private sector. Advancement in this area requires innovation on many fronts: inversion algorithms; advanced numerical discretizations (high-order methods, finite difference, finite element, and spectral element methods); more complete physics models (e.g., anisotropic elastic waves); and software optimization (sometimes meaning reimplementation) for modern low-power many-core and heterogeneous computing platforms.

These challenges stack up to bring the curse of dimensionality to innovation: the implementation overhead of introducing new numerical methods and algorithms can consume whole research teams and potentially involve modifying millions of lines of code. In our research we have abandoned the idea that humans have to write most of the code manually. High-level languages are combined with multiple layers of abstraction and domain-specific languages to maintain the flexibility and expressiveness of a high-level language like Python, while exploiting code generation and compiler technologies to generate high-performance native code for different computer architectures. The result is a separation of concerns in which new numerical approaches are readily evaluated and can outperform hand-tuned code by exploring implementation options that would be unfeasible or unsustainable to maintain manually.
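
As a toy illustration of this separation of concerns (and not the authors' actual toolchain), the sketch below states the 1D acoustic wave equation symbolically with SymPy, derives the explicit finite-difference update rule, and turns it into a vectorized NumPy kernel automatically; grid sizes and physical parameters are arbitrary.

```python
# Toy example: derive a finite-difference update symbolically, then auto-generate
# a numerical kernel from it (in the spirit of DSLs plus code generation).
import sympy as sp
import numpy as np

# Stencil values for the 1D acoustic wave equation u_tt = c^2 * u_xx.
u_prev, u_l, u_c, u_r, c, dt, dx = sp.symbols("u_prev u_l u_c u_r c dt dx")

# Second-order central differences in time and space, solved for u_next.
u_xx = (u_l - 2 * u_c + u_r) / dx**2
u_next = sp.simplify(2 * u_c - u_prev + (c * dt) ** 2 * u_xx)
print("update rule:", u_next)

# "Code generation": lambdify emits a vectorized NumPy function for the expression.
step = sp.lambdify((u_prev, u_l, u_c, u_r, c, dt, dx), u_next, "numpy")

# Tiny time loop driving the generated kernel (arbitrary toy parameters).
n, c_val, dx_val = 200, 1.0, 1.0
dt_val = 0.5 * dx_val / c_val                       # satisfies the CFL condition
now = np.exp(-0.1 * (np.arange(n) - n // 2) ** 2)   # initial Gaussian pulse
prev = now.copy()
for _ in range(100):
    nxt = now.copy()
    nxt[1:-1] = step(prev[1:-1], now[:-2], now[1:-1], now[2:], c_val, dt_val, dx_val)
    prev, now = now, nxt
print("max amplitude after 100 steps:", float(now.max()))
```

Swapping in a higher-order stencil or a different physics term changes only the symbolic expression; the numerical kernel is regenerated rather than hand-edited, which is the point the abstract makes about sustainability.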


Andrew Jones, Vice-President HPC Consulting & Services, Numerical Algorithms Group

Title: Lessons from 40+ HPC procurements, impartial service reviews and strategy projects
Time: 1730 Monday 19th, 1600 Tuesday 20th, 1530 Wednesday 21st

Abstract:
This talk will introduce a range of lessons learned by a team that has been involved in over 40 HPC projects spanning strategy development, requirements capture and planning, technology evaluation, procurement, commissioning, impartial review of service delivery, and so on. These projects have been delivered around the world, in industry, academia and government, and have covered small and large facilities. The talk will discuss a selection of the lessons in further detail. Attendees should walk away with at least one lesson useful to their planning for or use of HPC.



Tony Katz, Senior Engineer Oil & Gas, Seagate

Title: Building True Enterprise HPC Storage
Time: 1030 Monday 19th, Tuesday 20th & Wednesday 21st

Abstract:
Many systems currently under deployment or in the planning stages are placing ever-increasing requirements on storage density, capacity, manageability, and power consumption. While performance remains of utmost concern for the bulk of Top500 systems, reliability, availability, and supportability are the new critical keywords for the industry. In addition, enterprise features involving data integrity, data management, and automated archiving are raising the bar for companies delivering fully integrated and engineered solutions.




Kent Koeninger, IBM Global Spectrum Scale Business Development Leader 

Title: Data Management at Scale
Time: Tuesday 1500 October 20th

Abstract:
The energy industry is one of the most demanding when it comes to the challenges of working with structured and unstructured data. Most energy companies need to work with finer-grained levels of detail in exploration and larger volumes in reservoir modeling, and they see increasing requirements for data management at extreme scale. Analytics is also playing a role in upstream and downstream use cases as sensors, video surveillance, and other sources increase the amount of unstructured data being used. This session will share global trends in data growth, show how they have stressed traditional technologies, and review emerging approaches for grand-scale data challenges. The emerging technologies covered will include flash, tape, disk, and software-defined storage.




Andy Knoll, Research Scientist at the SCI Institute

Title: Visualizing Large LIDAR/UAV data with Intel OSPRay
Time: 1400 Wednesday 21st

Abstract:
In this talk, we will explore the advantages of CPU-based “Software-Defined Visualization” for large geospatial data. Ray tracing algorithms can tackle large and small data with roughly the same performance, and CPU memory is increasingly cheap. Consequently, an efficient CPU ray tracing framework can effectively replace GPUs for many gigascale and terascale workloads, without relying on simplification or data-parallel techniques. Ray tracing allows for interactive and production-quality rendering in the same software framework. Intel’s OSPRay ray tracer, based on the Intel SPMD Program Compiler (ISPC), provides a clean API for prototyping such problems. This talk discusses potential impacts on GIS software, considering work from the OSPRay team in large-scale volume visualization for fault/horizon modeling, and our own work at the University of Utah using OSPRay as an efficient backend for LIDAR and UAV data.
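
To make the idea of CPU-based rendering of point data concrete, here is a minimal sketch in Python/NumPy (deliberately not the OSPRay C API, and not code from the talk) that traces one camera ray per pixel against a toy cloud of spheres, roughly the way LIDAR points might be rendered. A production tracer would use ISPC-vectorized kernels and an acceleration structure such as a BVH instead of the brute-force loop shown here.

```python
# Minimal CPU "ray tracing" sketch: one pinhole-camera ray per pixel intersected
# with a cloud of small spheres (a stand-in for LIDAR points), entirely on the CPU.
import numpy as np

rng = np.random.default_rng(0)
centers = rng.uniform(-1, 1, size=(5000, 3)) + np.array([0.0, 0.0, 4.0])  # toy point cloud
radius = 0.02

w, h = 320, 240
ys, xs = np.mgrid[0:h, 0:w]
# Camera at the origin looking down +z; one normalized ray direction per pixel.
dirs = np.stack([(xs - w / 2) / w, (ys - h / 2) / h, np.ones_like(xs, float)], axis=-1)
dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)

depth = np.full((h, w), np.inf)
for c in centers:                       # brute force; real tracers use a BVH
    b = dirs @ c                        # per-pixel ray-sphere intersection terms
    disc = b * b - (c @ c - radius * radius)
    hit = disc > 0
    t = b - np.sqrt(np.where(hit, disc, 0.0))
    valid = hit & (t > 0) & (t < depth)
    depth[valid] = t[valid]             # keep the nearest hit per pixel

print("pixels hit:", int(np.isfinite(depth).sum()), "of", w * h)
```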



Steve Lutz, Verrex

Title: It was hidden in plain sight

Abstract:
Founded in 1947, Verrex is a global design-build integrator and managed services provider of complex video conferencing, telepresence, and digital media technologies. How do we stand out amongst other integrators? Superior performance: we offer the highest level of execution in AV systems design, integration, managed services, and onsite staffing. Here are some highlights about Verrex that I think you will find of most value:

  • The Verrex Process. Verrex’s proprietary quality management process breaks down every critical procedure and activity related to a project and/or service deployment. All processes are documented and all employees act according to them. This ensures the highest level of quality is repeated regardless of scope, location, or challenges.
  • Design & Engineering. Verrex ensures the integrity of our engineered systems by fully vetting and documenting those systems. Examples include audiovisual facilities and system documents required for construction, as well as specifications and signal flow schematics.
  • Enterprise-Level Solutions. Clients looking for standards throughout a single campus or multiple locations rely on Verrex’s enterprise approach. We focus on consistency for a client’s AV & conferencing systems. Examples include templates for a client’s own standard room types.
  • Comprehensive Services. From design & integration through to service & support, Verrex completes the full life-cycle of a client’s technology investment.
  • Global Reach. Our own strategically positioned full-service offices in three of North America’s top metros for global business, combined with our Allied Network, provide clients with a global resource for their AV needs.


Melinda McDade, Senior Technical Architect, Global Petroleum, IBM

Title: The Role of Seismic in Upstream Analytics
Time: 1600 Monday October 19th

Abstract:
Seismic data, traditionally used exclusively in exploration, is proving a true advantage both in and beyond exploration: reducing costs and non-productive time and optimizing well penetration and resource extraction while mitigating risk once a resource has been identified. This presentation will describe some of the new acquisition techniques being used today before detailing the challenges and considerations as E&P transitions from compute-centric to data-centric models for exascale computing. It will define the essential data requirements for providing timely, comprehensive information on which to make critical business decisions.



Dave McDonnell, Global IBM Spectrum Scale Business Development Leader, IBM

Title: Data Management at Scale
Time: Tuesday 1500 October 20th

Abstract:
The energy industry is one of the most demanding when it comes to the challenges of working with structured and unstructured data. Most energy companies need to work with finer-grained levels of detail in exploration and larger volumes in reservoir modeling, and they see increasing requirements for data management at extreme scale. Analytics is also playing a role in upstream and downstream use cases as sensors, video surveillance, and other sources increase the amount of unstructured data being used. This session will share global trends in data growth, show how they have stressed traditional technologies, and review emerging approaches for grand-scale data challenges. The emerging technologies covered will include flash, tape, disk, and software-defined storage.



Sharad Mehrotra, CEO, Saratoga Speed

Title: Developing innovative flash platforms for in-memory computing
Time: 1700 Tuesday 20th

Dr. Sharad Mehrotra is the founder and CEO of Saratoga Speed. He is a veteran technology executive and serial entrepreneur. In his 25+ years in the information technology industry, he has held a broad range of P&L, functional, and technical leadership roles in start-ups, and small and large public companies. He particularly enjoys the intersection of complex technology and business management.

Sharad was the founder and CEO of two prior startups: Procket Networks, which built carrier-class routers, and Fabric7 Systems, which built x86-based enterprise-class servers. He received his doctorate in Computer Science from the University of Illinois at Urbana-Champaign.

Abstract:
Saratoga Speed is a Silicon Valley startup incubated at Sanmina Corporation. The company is addressing data storage and processing challenges in working with “Big and Fast” data sets at sizes and speeds that are rapidly becoming the norm, but present impossible challenges in today’s computing environments. The initial products from the company offer the world's fastest, densest, and highest capacity all-flash storage arrays. Designed to ingest, process, and store hundreds of terabytes of data at tens of gigabytes per second transfer speeds, the arrays have very low space, power and cooling requirements.
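
To put "hundreds of terabytes at tens of gigabytes per second" in perspective, the snippet below does the simple arithmetic for one illustrative combination of those figures; the specific values chosen (200 TB, 20 GB/s) are assumptions for the example, not product specifications.

```python
# Illustrative arithmetic only: how long does it take to ingest a "Big and Fast"
# data set at the scales described in the abstract? Figures below are assumptions.
dataset_tb = 200    # "hundreds of terabytes" -> assume 200 TB
rate_gbs   = 20     # "tens of GB/s"          -> assume 20 GB/s

seconds = dataset_tb * 1000 / rate_gbs
print(f"Ingesting {dataset_tb} TB at {rate_gbs} GB/s takes about "
      f"{seconds / 3600:.1f} hours ({seconds:.0f} s)")
```

At those assumed rates a full ingest completes in under three hours, which is why sustained transfer speed, not just capacity, drives the platform design described below.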

There are two common platform design approaches prevalent in the storage or Big Data infrastructure space today. The first is to select inexpensive and lowest-common-denominator commodity hardware, avoid all custom hardware or firmware, and build a platform exclusively with software. The result is performance, capacity, and density all capped at lowest-common-denominator levels with admittedly more flexible integration and ease-of-use.

The second approach favors aggressive use of custom hardware with a thin layer of software for access. While higher levels of performance, capacity, and density can be realized with this approach, development cycles are longer. Saratoga Speed platforms start with commodity hardware and off-the-shelf operating systems. Aggressive design, implementation, and integration of proprietary hardware, firmware, and software in carefully chosen areas results in capabilities that go far beyond other currently available products.

The talk will describe the challenges in building out a line of products with this design and development approach. Flash packaging, RAID, caching, data path acceleration, service layering and offloads, FPGA integration, and application hosting are some of the specific areas that will be covered: what worked well, what did not, and how we had to adjust the strategy to technical and business realities as we went along.




Ulisses Mello, Director, IBM Research 

Title: Data Centric Computing in Oil and Gas
Time: Monday 1300 October 19th and Tuesday 1600 October 20th 

Abstract:
In this talk, Ulisses Mello, Director, IBM Research, will discuss the current transformation of High Performance Computing, which requires specific consideration of Big Data in supercomputer design. Ulisses will cover IBM's point of view on "data centric" computing for HPC and the evolution of our technical roadmap with respect to oil and gas applications.




Joshua Newman, Senior Application Engineer, Altair

Title: Simplifying Big Data Access with HP RGS and Altair Display Manager
Time: 1100 Wednesday 21st

Abstract:
PBS Works provides HP cluster users with a more efficient, reliable solution for HPC workload management. As an HP-integrated product, PBS Professional optimizes job scheduling on HP Apollo and ProLiant systems to achieve the highest levels of system utilization. Display Manager, PBS Works’ portal for remote visualization of applications and data, now supports HP Remote Graphics Software (RGS) for lightning-fast access to big data sets. Attend this presentation to learn about the integration between Display Manager and HP RGS, including usage details and sample scenarios.




Leo Reiter, CTO, Nimbix, Inc.

Title: Secure Storage Options for High Performance Computing on Demand
Time: 1300 Tuesday 20th

Abstract:
High performance computing workloads demand more than just large amounts of memory and vast parallel arrays of powerful CPUs and GPUs. Storage architecture can quickly and easily become a major bottleneck and/or liability if not designed carefully.
When considering HPC on demand, securing your data is more important than ever. Whether it’s for competitive, regulatory, and/or privacy reasons, encrypting data both at rest and in transit is of paramount importance. Blending security and performance is a fundamental requirement for HPC on demand, especially in public or hybrid cloud models.
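
As a small, generic illustration of at-rest encryption (not a description of the Nimbix platform or its architecture), the snippet below uses the widely available cryptography package to encrypt data before it touches shared storage and decrypt it on read. Key management, which is the hard part in practice, is reduced here to an in-memory key for brevity.

```python
# Generic at-rest encryption sketch using the `cryptography` package
# (pip install cryptography). Not specific to any HPC-on-demand provider.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice: fetch from a KMS/HSM, never store beside the data
f = Fernet(key)

plaintext = b"pre-stack seismic shot gather (placeholder bytes)"
ciphertext = f.encrypt(plaintext)    # authenticated symmetric encryption

with open("volume.bin.enc", "wb") as out:
    out.write(ciphertext)            # only ciphertext ever reaches shared storage

with open("volume.bin.enc", "rb") as inp:
    restored = f.decrypt(inp.read())
assert restored == plaintext
```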

This presentation will explore the various options for secure, high performance data storage on demand. We’ll focus on architectures, use cases, and best practices. As HPC on demand emerges as a viable alternative to expensive private clusters, we must ensure our data storage and security choices are up to the task.




Zhi Shang, Researcher at Louisiana State University

Title: High Performance Computing with the Intel Xeon Phi Coprocessor for Discrete Particle Model of OpenFOAM
Time: 1530 Monday 19th

Abstract:
The Discrete Particle Model (DPM) based on OpenFOAM was established on a high performance computing (HPC) platform (SuperMIC at LSU, with Intel Xeon Phi coprocessors). It couples a multiphysics simulation algorithm to serve both fundamental physics needs and efficient usage of HPC resources.


 

Kevin R. Tubbs, Ph.D., Director of Technology and Sales, Southeast Region

Title: Tundra Open HPC - Ushering OCP Efficiency and Innovation into the HPC World
Time: 1100 Monday 19th, 1130 Wednesday 21st

Abstract:
Turnkey optimized solutions integrated with various emerging technologies have redefined traditional HPC. These emerging technologies, coupled with new flexible form factors and disaggregated power supplies, reduce costs and increase efficiency while providing a truly custom solution to local and specific compute challenges.
Penguin's Tundra platform is a high-density computing platform that uses Facebook's Open Compute Project (OCP) standards with disaggregated components and technologies. Tundra provides flexible power components that can conform to whatever power architecture is available in the data center, as well as flexible cooling options that can be aligned with local ambient air temperatures. Tundra provides a high-density, open-source server environment based on a universal platform that allows organizations to select the cooling methods, compute and memory specializations, interconnect, and software-based management specific to their workload.

We provide case studies of applying Tundra solutions to specific technical requirements such as compute specialization (ARM, OpenPOWER, GPGPU, APU, Xeon Phi), interconnect (Ethernet, InfiniBand, others), memory specialization (NVM DIMMs, SSDs), and software-based management (OpenStack, Docker, SDN, SDS). We identify an approach that could be applied universally across different domains to design an optimized solution for specific technical computing workloads and data center challenges.




David Turek, Vice President, Exascale Computing, IBM

Title: Data Centric Computing in Oil and Gas
Time: Monday 1300 October 19th and Tuesday 1600 October 20th

Abstract:
In this talk, David Turek, Vice President, IBM Exascale Computing, will discuss the current transformation of High Performance Computing, which requires specific consideration of Big Data in supercomputer design. Dave will cover IBM's point of view on "data centric" computing for HPC and the evolution of our technical roadmap with respect to oil and gas applications.



Geert C. Wenes, Ph.D. Sr. Practice Leader, HPC Architect, Cray

Title: Architectures for emerging migration algorithms and workflows
Time: 1430 Monday 19th, 1330 Tuesday 20th, 1500 Wednesday 21st

Geert specializes in matching IT technologies to critical data and workflow processes in HPC to help address customers' most demanding applications. He holds a degree in Physics but has spent most of his professional career in HPC.

Abstract:
Production implementations of the RTM algorithm depend on a series of other processing steps that require multi-tiered data movement. For example, RTM is likely to be integrated with complementary migration schemes, visualization requirements, rapid diagnostics, post-image processing, etc. In this talk, we will address the impact of a wide range of implementation schemes and the tradeoffs related to system architectures.




Geert C. Wenes, Ph.D. Sr. Practice Leader, HPC Architect, Cray &
Ty McKercher, Director, Solution Architecture & Engineering, NVIDIA Corporation

Title: Implementing RTM on dense GPU platforms (joint presentation: Cray with NVIDIA)
Time: 1000 Monday 19th, Tuesday 20th & Wednesday 21st

Geert specializes in matching IT technologies to critical data and workflow processes in HPC to help address customers' most demanding applications. He holds a degree in Physics but has spent most of his professional career in HPC.

Ty leads a team that specializes in systems architecture across multiple industries. He often serves as a liaison between customer and product engineering teams, leading emerging technology evaluations. His passion is helping to solve problems that combine visualization and HPC challenges.

Abstract:
This presentation explains why and how dense GPU systems are deployed in production seismic processing centers. The presentation also explains how developers can take advantage of familiar software tools to unlock the power of massively parallel GPU-based systems.
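
As a generic illustration of the "familiar software tools" theme (not the specific stack covered in this talk), the sketch below writes the core RTM building block, a 2D acoustic finite-difference time step, as plain NumPy array code; because GPU array libraries such as CuPy mirror the NumPy API, the same kernel can be retargeted to a GPU largely by swapping the array module.

```python
# A 2D acoustic wave-propagation step written as array code. With an array library
# that mirrors NumPy (e.g. CuPy), `xp` could point at a GPU backend instead.
import numpy as np
xp = np   # swap for `import cupy as xp` to run the same kernel on a GPU

def step(u_prev, u_now, c, dt, dx):
    """One explicit finite-difference time step of u_tt = c^2 (u_xx + u_yy)."""
    lap = (u_now[:-2, 1:-1] + u_now[2:, 1:-1] +
           u_now[1:-1, :-2] + u_now[1:-1, 2:] - 4.0 * u_now[1:-1, 1:-1]) / dx**2
    u_next = u_now.copy()
    u_next[1:-1, 1:-1] = (2.0 * u_now[1:-1, 1:-1] - u_prev[1:-1, 1:-1]
                          + (c * dt) ** 2 * lap)
    return u_next

n, c, dx = 512, 1500.0, 10.0             # toy grid, velocity (m/s) and spacing (m)
dt = 0.4 * dx / c                         # within the 2D CFL stability limit
u_prev = xp.zeros((n, n)); u_now = xp.zeros((n, n))
u_now[n // 2, n // 2] = 1.0               # point source
for _ in range(200):
    u_prev, u_now = u_now, step(u_prev, u_now, c, dt, dx)
print("wavefield energy:", float((u_now ** 2).sum()))
```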



Victor Wright, Account Manager, Altair

Title: Simplifying the Management of HPC Application Environments
Time: 1130 Monday 19th

Abstract:
High-performance computing (HPC) is vital to staying competitive in the oil and gas industry, especially for upstream exploration and development. Exascale-level computing and data analytics requirements are being driven not only by established areas of study such as seismic processing and reservoir simulation, but also by growing disciplines such as computational fluid dynamics and geomechanics. To maximize user productivity, oil and gas companies need HPC resources operating reliably around the clock, and these systems need to be easy to use, manage, and scale while remaining cost-effective. Attend this session to learn how Altair's PBS Works addresses these challenges with the most comprehensive suite of integrated HPC workload management products available. PBS Works simplifies and streamlines the management of HPC resources with powerful policy-based scheduling, user-friendly portals for job submission and remote visualization, and deep analytics and reporting capabilities. The presentation will review the latest capabilities of PBS Professional 13.0, including million-core scalability and fast throughput, and will include a case study of an oil and gas HPC implementation.


 

 

 

 
