
Keynote Speakers

  • John Penix, Google
John Penix is a Software Engineer on the Developer Infrastructure Team at Google, where he has spent 8 years building, supporting and breaking various pieces of Google's internal developer platforms. Prior to joining Google, he was a member of the Automated Software Engineering Group at NASA Ames Research Center, where he worked on software verification tools. John received a Ph.D. in Computer Engineering from the University of Cincinnati. He is a member of the Steering Committees of the IEEE/ACM International Conference on Automated Software Engineering and the IEEE International Conference on Software Testing, Verification and Validation.

Title: The Design and Evolution of Google's Test Automation Project

Abstract: In 2008 Google began the development of a centralized, company-wide continuous integration system dubbed the Test Automation Project.  Over the past eight years, TAP has expanded in both scope and scale, evolving from a simple pipeline architecture to a collection of interacting services.  In this talk, I will describe how the requirements changed over time and how the architecture was modified to adapt to these changes.  I'll also provide glimpses into the internal workings of some of the services and describe how they have been generalized to support workflows beyond basic continuous integration.


  • Kim Herzig, Microsoft
Kim Herzig is a Software Development Engineer and Researcher on the Tools for Software Engineers team at Microsoft in Redmond. He collaborates closely with the Empirical Software Engineering (ESE) group at Microsoft Research. His work focuses on optimizing development and testing processes. Currently, Kim is analyzing the build, test, and verification processes of large Microsoft product groups, such as Windows, OneDrive and Bing, aiming to improve the effectiveness and reliability of these processes.

Title: Let's assume we had to pay for testing

Abstract: It should not come as a surprise that testing is not free: it costs money and effort. Test automation is supposed to help development teams "[...] reduce the cost and improve the effectiveness of software testing [...]" [1]. But what exactly are we automating, and does it actually help to solve a problem? Are we providing solutions that are actually worth the effort, and who is going to pay for them? In this talk, I provide insights into large-scale verification processes and tool chains that solved some pressing testing issues, but also introduced new challenges that call for sustainable, acceptable solutions worth automating.