ICPE 2015 Tutorials

A Tutorial on Load Testing Large-Scale Software Systems
Author: Zhen Ming (Jack) Jiang, York University, Toronto, ON, Canada
Tutorial date/time: Saturday, January 31, 2015 (2 hours, 8am - 10am)

Abstract: Large-scale software systems, such as those operated by AT&T and eBay, must be load tested to ensure that they can handle thousands or even millions of requests simultaneously. In this tutorial, we will present current research and practice in load testing large-scale software systems. We will explain the techniques used in the three phases of a load test: (1) test design, (2) test execution, and (3) test analysis. This tutorial will be useful to load testing practitioners and to researchers interested in testing and analyzing the behavior of large-scale software systems under load.
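
To make the three phases concrete, here is a minimal, hypothetical load driver in Java (not a tool from the tutorial): the test design is a fixed number of virtual users and a target URL, both assumed for the example; the test execution issues HTTP requests from a thread pool and records latencies; and the test analysis reduces those latencies to simple percentile statistics.

import java.net.HttpURLConnection;
import java.net.URL;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Minimal closed-loop load driver: N virtual users repeatedly hit one URL,
// and the recorded latencies are summarized afterwards.
public class SimpleLoadTest {
    static final int USERS = 50;              // test design: concurrency level (assumed)
    static final int REQUESTS_PER_USER = 100; // test design: work per virtual user (assumed)
    static final String TARGET = "http://localhost:8080/"; // system under test (assumed)

    public static void main(String[] args) throws Exception {
        List<Long> latencies = Collections.synchronizedList(new ArrayList<>());
        ExecutorService pool = Executors.newFixedThreadPool(USERS);

        // Test execution: each virtual user issues requests and records latency.
        for (int u = 0; u < USERS; u++) {
            pool.submit(() -> {
                for (int i = 0; i < REQUESTS_PER_USER; i++) {
                    long start = System.nanoTime();
                    try {
                        HttpURLConnection c = (HttpURLConnection) new URL(TARGET).openConnection();
                        c.getResponseCode();   // wait for the response
                        c.disconnect();
                    } catch (Exception e) {
                        // a real load test would record errors separately
                    }
                    latencies.add(System.nanoTime() - start);
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES);

        // Test analysis: reduce the raw latencies to summary statistics.
        List<Long> sorted = new ArrayList<>(latencies);
        Collections.sort(sorted);
        long p50 = sorted.get(sorted.size() / 2);
        long p99 = sorted.get((int) (sorted.size() * 0.99));
        System.out.printf("requests=%d median=%.1fms p99=%.1fms%n",
                sorted.size(), p50 / 1e6, p99 / 1e6);
    }
}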


Dos and Don'ts of Conducting Performance Measurements in Java
Authors: Vojtech Horky, Peter Libic, Antonín Steinhauser, and Petr Tuma, Charles University, Czech Republic
Tutorial date/time: Saturday, January 31, 2015 (2 hours, 10:30am - 12:30pm)

Abstract: The tutorial aims at practitioners -- researchers or developers -- who need to execute small-scale performance experiments in Java. The goal is to provide the attendees with a compact overview of the issues that can hinder an experiment or mislead its evaluation, and to discuss the methods and tools that help avoid such issues. The tutorial will examine multiple elements of the software execution stack that impact performance, including common virtual machine mechanisms (just-in-time compilation and garbage collection, together with the associated runtime adaptation), some operating system features (timers), and hardware (memory). Although the focus is on Java, many of the take-away points apply to performance experiments in a more general context.
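
As one concrete illustration of the kind of pitfall covered, the sketch below (an assumption for this description, not taken from the tutorial material) separates a warm-up phase, which gives just-in-time compilation a chance to stabilize the measured code, from the measured repetitions, and keeps a side effect of the computation alive so the compiler cannot eliminate the work; timer resolution and garbage collection would still need separate treatment.

// Sketch of a warm-up-aware micro-measurement; workload() is a stand-in
// for whatever operation is being measured.
public class MicroMeasurement {
    static volatile long sink;  // consume results so the JIT cannot remove the work

    static long workload() {
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) {
            sum += i * 31L;
        }
        return sum;
    }

    public static void main(String[] args) {
        // Warm-up: let the JIT compile and optimize the hot code first.
        for (int i = 0; i < 1_000; i++) {
            sink = workload();
        }
        // Measurement: report per-iteration time from several repetitions,
        // not a single run, so run-to-run variation stays visible.
        final int reps = 20;
        for (int r = 0; r < reps; r++) {
            long start = System.nanoTime();
            sink = workload();
            long elapsed = System.nanoTime() - start;
            System.out.printf("iteration %d: %.3f ms%n", r, elapsed / 1e6);
        }
    }
}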


Platform Selection and Performance Engineering for Graph Processing on Parallel and Distributed Platforms
Authors: Ana Lucia Varbanescu, University of Amsterdam, NL; Alexandru Iosup, Mihai Capotă, Delft University of Technology, NL
Tutorial date/time: Saturday, January 31, 2015 (3 hours, 1:30pm - 5pm, incl. 30-minute coffee break)

Abstract: Graph processing, especially at large scale, is increasingly useful in a variety of business, engineering, and scientific domains. The challenge of enabling existing algorithms, and analytics pipelines of increasing complexity, to fit modern architectures and to scale to ever-larger graphs has led to the appearance of many graph processing platforms, such as the distributed Giraph and GraphLab and the GPU-enabled Totem and Medusa. In this tutorial, we will show how to evaluate and compare graph processing platforms using the GRAPHALYTICS benchmarking tools. The metrics targeted by GRAPHALYTICS include Vertices and Edges Processed Per Second (V/EPPS), various scalability and traditional performance metrics, and an estimate of cost normalized by performance.
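
For illustration, the short sketch below computes throughput numbers in the spirit of "vertices and edges processed per second" from a measured runtime; the input sizes, runtime, and normalization shown are assumptions for the example, not the GRAPHALYTICS definitions.

// Hypothetical throughput calculation for one graph-processing run:
// normalize the processed graph size by the measured processing time.
public class GraphThroughput {
    public static void main(String[] args) {
        long vertices = 1_000_000L;        // |V| of the input graph (example value)
        long edges = 30_000_000L;          // |E| of the input graph (example value)
        double processingSeconds = 2.5;    // measured algorithm runtime (example value)

        double vps = vertices / processingSeconds;            // vertices processed per second
        double eps = edges / processingSeconds;               // edges processed per second
        double evps = (vertices + edges) / processingSeconds; // combined size per second

        System.out.printf("VPS=%.0f EPS=%.0f combined=%.0f%n", vps, eps, evps);
    }
}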


Hybrid Machine Learning/Analytical Models for Performance Prediction: A Tutorial
Authors: Diego Didona and Paolo Romano, INESC-ID / Instituto Superior Técnico, Universidade de Lisboa
Tutorial date/time: Sunday, February 1, 2015 (3 hours, 8:30am - 12:00pm, incl. 30-minute coffee break)
Slides available on Diego's publication page

Abstract: Classical approaches to performance prediction rely on two, typically antithetic, techniques: Machine Learning (ML) and Analytical Modeling (AM). ML takes a black-box approach whose accuracy depends on the representativeness of the dataset used during the initial training phase; it typically achieves very good accuracy in the regions of the feature space that were sufficiently explored during training. Conversely, AM relies on a white-box approach, and its key advantage is that it requires little or no training, hence supporting prompt instantiation of the target system's performance model. However, to remain tractable, AM-based performance models typically rely on simplifying assumptions, so their accuracy suffers in scenarios that do not match those assumptions. This tutorial describes hybrid techniques that exploit AM and ML in synergy in order to get the best of both worlds. It surveys several such techniques and presents use cases spanning a wide range of application domains, from performance prediction in data centers to self-tuning of transactional platforms.
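
One common way to combine the two approaches, among those surveyed in the tutorial, is to let the analytical model produce a baseline prediction and train a regressor on its residuals. The toy sketch below uses an M/M/1-style response-time formula as the white-box baseline and a one-dimensional least-squares fit as a deliberately simplistic black-box corrector; the formula, the feature, and the numbers are assumptions for illustration only.

// Toy hybrid predictor: white-box M/M/1 baseline plus a linear correction
// trained on the residuals (observed minus analytically predicted response time).
public class HybridPredictor {
    static final double MU = 100.0;  // assumed service rate (requests/s)

    // Analytical model: M/M/1 mean response time, 1 / (mu - lambda).
    static double analytical(double lambda) {
        return 1.0 / (MU - lambda);
    }

    public static void main(String[] args) {
        // Training data: arrival rates and observed response times (made-up numbers).
        double[] lambdas = {10, 30, 50, 70, 90};
        double[] observed = {0.013, 0.016, 0.023, 0.038, 0.115};

        // Residuals the black-box part has to learn.
        double[] residual = new double[lambdas.length];
        for (int i = 0; i < lambdas.length; i++) {
            residual[i] = observed[i] - analytical(lambdas[i]);
        }

        // One-dimensional least-squares fit: residual ~= a * lambda + b.
        double n = lambdas.length, sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (int i = 0; i < lambdas.length; i++) {
            sx += lambdas[i]; sy += residual[i];
            sxx += lambdas[i] * lambdas[i]; sxy += lambdas[i] * residual[i];
        }
        double a = (n * sxy - sx * sy) / (n * sxx - sx * sx);
        double b = (sy - a * sx) / n;

        // Hybrid prediction = analytical baseline + learned correction.
        double lambdaNew = 80;
        double prediction = analytical(lambdaNew) + (a * lambdaNew + b);
        System.out.printf("predicted response time at lambda=%.0f: %.4f s%n",
                lambdaNew, prediction);
    }
}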


The CloudScale Method for Software Scalability, Elasticity, and Efficiency Engineering: A Tutorial
Authors: Sebastian Lehrig and Steffen Becker, Chemnitz University of Technology, Chemnitz, Germany
Tutorial date/time: Sunday, February 1, 2015 (3 hours, 1:30pm - 5pm)

Abstract: In cloud computing, software engineers design systems for virtually unlimited resources that cloud providers charge for on a pay-per-use basis. Elasticity management systems provision these resources autonomously to deal with changing workloads. Such workloads call for new objective metrics that allow engineers to quantify quality properties like scalability, elasticity, and efficiency. However, software engineers currently lack engineering methods that aid them in engineering their software with respect to such properties. The CloudScale project therefore developed tools for these engineering tasks. The tools cover reverse engineering of architectural models from source code, editors for manual design and adaptation of such models, and analysis of both modeled and operating software with respect to scalability, elasticity, and efficiency. All tools are interconnected via ScaleDL, a common architectural language, and by a method that guides engineers through the process. In this tutorial, we describe ScaleDL and walk through our method step by step, briefly introducing each tool along the way.


How to Build a Benchmark
Authors: Jóakim v. Kistowski, University of Würzburg, Germany; Jeremy A. Arnold, IBM Corporation; Karl Huppler, Paul Cao, Klaus-Dieter Lange, Hewlett-Packard Company; John L. Henning, Oracle
Tutorial date/time: Sunday, February 1, 2015 (3 hours, 1:30pm - 5pm)

Abstract: SPEC and TPC benchmarks are created under consortium confidentiality agreements, which give outside observers little opportunity to see the processes and concerns that drive benchmark development. This tutorial introduces the primary concerns of benchmark development from the perspectives of the SPEC and TPC committees. We outline the characteristics of a good benchmark and present the processes by which these characteristics can be ensured. We also provide specific examples by introducing the approaches of selected SPEC and TPC subcommittees to benchmark creation and workload selection. Specifically, we present the SPECpower approach to creating a new benchmark from scratch, as done for SPECpower_ssj2008 and SERT. The work on the SPEC CPU benchmarks serves as an example of selecting representative workloads from existing work. Finally, we introduce the TPC approach to benchmark creation, including specification-driven benchmark development.
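
Purely as an illustration of one mechanical ingredient that recurs in benchmark run rules (a warm-up period followed by a fixed measurement interval and a clearly defined throughput metric), the hypothetical Java sketch below times a placeholder workload; none of its parameters or structure are taken from an actual SPEC or TPC benchmark.

// Minimal sketch of a benchmark run: warm-up, then a fixed measurement
// interval, then a reproducible throughput report. The workload and the
// interval lengths are placeholders.
public class MiniBenchmark {
    static volatile long sink;  // keep the work observable to the JIT

    static void operation() {
        long sum = 0;
        for (int i = 0; i < 100_000; i++) {
            sum += i;
        }
        sink = sum;
    }

    public static void main(String[] args) {
        long warmupMillis = 10_000;   // placeholder warm-up length
        long measureMillis = 60_000;  // placeholder measurement interval

        // Warm-up interval: results are discarded.
        long end = System.currentTimeMillis() + warmupMillis;
        while (System.currentTimeMillis() < end) {
            operation();
        }

        // Measurement interval: count completed operations.
        long ops = 0;
        end = System.currentTimeMillis() + measureMillis;
        while (System.currentTimeMillis() < end) {
            operation();
            ops++;
        }
        System.out.printf("throughput: %.1f ops/s over %d s%n",
                ops * 1000.0 / measureMillis, measureMillis / 1000);
    }
}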