07-Jun-2018, 09:15 – 10:15
First speaker: Dr. D. Paul Ralph, University of Auckland
Title: The Psychology of Software Development: Myths and Findings
Summary: Software development is typically a group process, rich in psychological and sociological phenomena. However, years of focusing on the technical aspects of software development have allowed serious misunderstandings of its human aspects to flourish. This talk summarises the results of several empirical studies that address common misunderstandings. Key questions include: “What does success mean for software professionals?”, “What kinds of design decisions do programmers make while coding?”, “How do software teams withstand disruption?”, “How does requirements engineering affect success?”, and “How exactly does iteration work in software development?”
Bio: Dr. D. Paul Ralph is an award-winning scientist, author, and consultant, and a senior lecturer in computer science at the University of Auckland. His research centers on empirical software engineering, game development, and project management. Dr. Ralph’s research has been published in premier software engineering and information systems outlets, including the International Conference on Software Engineering, IEEE Transactions on Software Engineering, the International Conference on Information Systems, the Journal of the Association for Information Systems, and Information and Software Technology. He has received funding from Google and the Natural Sciences and Engineering Research Council of Canada. Additionally, he has written editorials on technology, education, and design for influential outlets including Business Insider and The Conversation. Dr. Ralph is the founding director of the Auckland Game Lab, co-founder of the AIS Special Interest Group for Game Design and Research (SIGGAME), and a member of both the IEEE Technical Council on Software Engineering and the ACM Special Interest Group on Software Engineering. He holds a PhD in Management from the University of British Columbia.
07-Jun-2018, 11:15 – 12:15
Second speaker: Prof. Dr. Marco Vieira, University of Coimbra
Title: Benchmarking the Security of Software Systems: TO BEnchmark OR NOT TO BEnchmark
Summary: A benchmark is a standard procedure for comparing systems or components according to specific characteristics (e.g., performance, dependability, security). A security benchmark should provide a metric (or a small set of metrics) that characterizes the degree to which security goals are met in a given piece of code, allowing developers and administrators to make informed decisions. However, one of the biggest difficulties in designing such benchmarks is that security assessment usually depends much more on what is unknown about an application (e.g., unknown bugs, hidden vulnerabilities) than on what is known (e.g., documented features, existing security mechanisms). In fact, security metrics are hard to define and compute because they involve estimating the ability of an unknown individual (e.g., a hacker) to discover and maliciously exploit an unknown system characteristic (e.g., a vulnerability). Work on performance benchmarking started long ago: ranging from simple benchmarks targeting a very specific hardware system or component to complex benchmarks focusing on complex systems (e.g., database management systems, operating systems), performance benchmarks have helped improve successive generations of systems. Research on dependability benchmarking gained momentum at the beginning of the millennium, leading to several proposed dependability benchmarks developed by different groups following different approaches (e.g., experimental, modeling, fault injection). Given the increasing relevance of security concerns, security benchmarking is now an important research field. In this keynote we will discuss recent achievements in security benchmarking and the challenges that must be addressed to effectively compare alternative solutions from a security perspective.
In addition to metrics and the benchmarking procedure, we will discuss enabling techniques and tools to support the benchmark, with particular focus on vulnerability and attack injection and trustworthiness measurement.
Bio: Marco Vieira is a Full Professor at the University of Coimbra, Portugal, where he has been involved in research on dependable and secure computing since 2000. His research interests include dependability and security assessment and benchmarking, fault injection and vulnerability and attack injection, robustness and security testing, software verification and validation, online failure prediction, and resilience benchmarking, subjects on which he has authored or co-authored more than 170 papers in refereed conferences and journals. He has served on the program committees of the major conferences in the dependability area and has acted as a referee for many international conferences and journals. He is currently Program Committee Co-Chair of the 2018 IEEE/IFIP International Conference on Dependable Systems and Networks (DSN 2018), and was recently Program Committee Chair of the 12th European Dependable Computing Conference (EDCC 2016), Program Committee Co-Chair of the 7th Latin-American Symposium on Dependable Computing (LADC 2016), and Program Committee Co-Chair of the International Symposium on Software Reliability Engineering (ISSRE 2015). He is an Associate Editor of the IEEE Transactions on Dependable and Secure Computing (TDSC), and has guest-edited a special issue on Security and Dependability of Cloud Systems and Services for the IEEE Transactions on Services Computing (TSC) and a special issue on Software Reliability Engineering for the Journal of Systems and Software (JSS). He coordinates the EUBrasilCloudFORUM (H2020-EUB-2015-689495) and DEVASSES (PIRSES-GA-2013-612569) projects, and is the principal investigator at the University of Coimbra for the EUBRA-BIGSEA (H2020-EUB-2015-690116) and ATMOSPHERE (H2020-EUB-2017-777154) projects.