Accurate and Reliable What-If Analysis of Business Processes: Is it Achievable?
Prof. Marlon Dumas
University of Tartu, Estonia and Apromore, Australia
Marlon Dumas is Professor of Information Systems at the University of Tartu (Estonia) and co-founder of Apromore, a spin-off company that develops open-source solutions for process mining and optimization. His research focuses on data-driven methods for business process management, including process mining, predictive process monitoring, and data-driven process simulation. He is the recipient of an Advanced Grant from the European Research Council, with the mission of developing algorithms for the automated discovery and assessment of business process improvement opportunities from execution data.
Abstract
Business Process Management (BPM) is a cross-disciplinary field of study at the intersection between Informatics, Industrial Engineering, and Management Science. The goal of BPM is to provide conceptual frameworks, methods, and tools to enable organizations to continuously monitor and improve the way they perform work, in order to fulfill the ever-changing expectations of their customers and other stakeholders.
A central activity in the field of BPM is business process redesign: applying changes (a.k.a. interventions) to a business process with the aim of improving it with respect to one or more quantitative performance measures, such as cycle time, cost, or defect rate. Examples of interventions include automating part of a business process, adding or redeploying human resources, or changing the flow of activities in a process.
In this talk, we will discuss a decades-old problem in the field of BPM, namely "what-if process analysis". In simple terms, this problem can be posed as follows: How to reliably and accurately predict the impact of an intervention on a business process in terms of one or more business process performance measures? We will discuss the limitations of approaches based on discrete event simulation developed in the 1990s, which have been relatively successful in the context of repetitive manufacturing processes but have largely failed in the context of human-intensive processes. We will then present ongoing efforts to tackle this problem by combining observational data, experimental data, and domain knowledge using hybrid modeling methods drawing from the fields of discrete event simulation, machine learning, and causal inference.
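To make the flavour of such what-if analyses concrete, the sketch below (a minimal Python illustration, not the hybrid methods of the talk; the process structure, rates, and resource counts are all hypothetical) models a process as an M/M/c queue and uses discrete event simulation to estimate how one intervention, adding a resource, changes the mean cycle time:

    import random
    import statistics

    def mean_cycle_time(num_resources, arrival_rate, service_rate,
                        n_cases=20000, seed=1):
        """Simulate an M/M/c queue: cases arrive at Poisson rate
        arrival_rate and are served FIFO by num_resources identical
        resources with exponential service times. Returns the mean
        cycle time (waiting + processing) per case."""
        rng = random.Random(seed)
        free_at = [0.0] * num_resources  # when each resource next becomes free
        clock, totals = 0.0, []
        for _ in range(n_cases):
            clock += rng.expovariate(arrival_rate)   # next case arrives
            service = rng.expovariate(service_rate)  # its processing time
            i = min(range(num_resources), key=free_at.__getitem__)
            start = max(clock, free_at[i])           # wait if all resources busy
            free_at[i] = start + service
            totals.append(free_at[i] - clock)
        return statistics.mean(totals)

    baseline = mean_cycle_time(num_resources=2, arrival_rate=1.8, service_rate=1.0)
    what_if = mean_cycle_time(num_resources=3, arrival_rate=1.8, service_rate=1.0)
    print(f"baseline mean cycle time:      {baseline:.2f}")
    print(f"after adding a third resource: {what_if:.2f}")

The gap between such a model and reality (batching, multitasking, interruptions, dependencies between cases) is precisely why hand-crafted simulation models of human-intensive processes tend to be unreliable.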
Big Data in Genomics and Biomedicine
Alvis Brazma
European Bioinformatics Institute, European Molecular Biology Laboratory, EMBL-EBI Wellcome Genome Campus, Cambridge, UK
Dr Alvis Brazma is the Head of Omics Data Resources at the European Bioinformatics Institute and a Senior Scientist at the European Molecular Biology Laboratory (EMBL). He is a foreign member of the Latvian Academy of Science. He studied mathematics at the University of Latvia and obtained his PhD in mathematical cybernetics in 1987. After spending some time at the University of Latvia, New Mexico State University, and Helsinki University, he joined the EMBL in 1997, where in 2000 he became a Team Leader. In 2000 he founded the Microarray Gene Expression Data society and established the first international repository for gene expression data, ArrayExpress. He has been the Principal Investigator in several large international collaborative genomics and biomedical projects, including co-leading the RNA group of the Pan-Cancer Analysis of Whole Genomes of the International Cancer Genome Consortium. He has over 150 scientific publications.
Abstract
The sequencing of the first human genome about 20 years ago was an epic venture; nowadays the genome of a human individual can be sequenced in a couple of days. Genome sequences are now an important tool not only for biomedical research but also for medical diagnostics. However, non-trivial data analysis methods are needed to take advantage of genome data and, in fact, genome sequencing itself would not be possible without computing. The amount of medical genomics and other omics (e.g., proteomics and metabolomics) data is now growing faster than Moore's law. This is creating major challenges, but also opening new opportunities, in particular in cancer research. In this talk I will discuss modern computing applications in biomedical research, focusing in particular on genomics.
Finding Connected Components in Massive Graphs
Robert E. Tarjan; joint work with Sixue (Cliff) Liu and Siddhartha Jayanti
Robert Tarjan is the James S. McDonnell Distinguished University Professor of Computer Science at Princeton University. He has held academic positions at Cornell, Berkeley, Stanford, and NYU, and industrial research positions at Bell Labs, NEC, HP, Microsoft, and Intertrust Technologies. He has invented or co-invented many of the most efficient known data structures and graph algorithms. He was awarded the first Nevanlinna Prize from the International Mathematical Union in 1982 for “outstanding contributions to mathematical aspects of information science,” the Turing Award in 1986 with John Hopcroft for “fundamental achievements in the design and analysis of algorithms and data structures,” and the Paris Kanellakis Award in Theory and Practice in 1999 with Daniel Sleator for the invention of splay trees. He is a member of the U.S. National Academy of Sciences, the U.S. National Academy of Engineering, the American Academy of Arts and Sciences, and the American Philosophical Society.
Abstract
Finding connected components is a fundamental problem in algorithmic graph theory. Analysis of big data sets requires solving this problem on massive graphs, on which sequential algorithms are much too slow. We describe two classes of fast algorithms that rely on massive concurrency. Though the algorithms are simple, verifying that they are both correct and efficient requires careful and subtle analysis.
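As a rough illustration of the style of algorithm involved, here is a minimal sequential Python sketch of parent-pointer "hooking" and "shortcutting," in the spirit of Shiloach-Vishkin-style concurrent connectivity; it is not one of the specific algorithms of the talk:

    def connected_components(n, edges):
        """Each vertex keeps a parent pointer. Repeatedly 'hook' edges
        (point the larger of two root labels at the smaller) and
        'shortcut' (replace each parent by its grandparent) until
        nothing changes; every vertex then points at the smallest
        label in its component. Each pass uses only local reads and
        writes, which is what makes concurrent variants possible."""
        parent = list(range(n))
        changed = True
        while changed:
            changed = False
            for u, v in edges:                      # hook
                ru, rv = parent[u], parent[v]
                if ru < rv and parent[rv] == rv:
                    parent[rv] = ru
                    changed = True
                elif rv < ru and parent[ru] == ru:
                    parent[ru] = rv
                    changed = True
            for v in range(n):                      # shortcut
                if parent[parent[v]] != parent[v]:
                    parent[v] = parent[parent[v]]
                    changed = True
        return parent

    # two components, {0,1,2} and {3,4}
    print(connected_components(5, [(0, 1), (1, 2), (3, 4)]))  # [0, 0, 0, 3, 3]

The subtlety the abstract alludes to appears as soon as such passes run concurrently: both correctness and the number of rounds then depend delicately on the exact hooking and shortcutting rules.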
Innovation in the Era of the Citizen Inventor
Talal G. Shamoon
Chief Executive Officer, Intertrust Technologies
Talal Shamoon became Intertrust’s CEO in 2003. Under his leadership, Intertrust has grown from a small R&D and licensing company to a global leader in trusted computing products and services, licensing, and standardization. Today, Intertrust’s inventions enable billions of licensed products worldwide and its products are globally deployed. Shamoon joined Intertrust in 1997 as a member of the research staff, and then held a series of executive positions, including Executive Vice President for Business Development and Marketing. As an early pioneer of Digital Rights Management technology in the late 90s, he led Intertrust’s business and technology initiatives in the entertainment and media market, which established the company’s leadership in that space. He also presided over Intertrust’s record-setting growth as a licensing powerhouse, strategic investor and leading trusted distributed computing platform provider.
An electrical engineer and computer scientist by training, Shamoon was a researcher at the NEC Research Institute in Princeton, NJ, where he focused on digital signal processing and content security. Shamoon sits on several company boards – he is a member of the board of directors of Intertrust and InnerPlant. A recognized inventor, published author, and frequent public speaker, Shamoon holds B.S., M. Eng., and Ph.D. degrees in electrical engineering from Cornell University.
Abstract:
The last fifty years have witnessed one of the most disruptive leaps of innovation in human history. All told, there are more innovators and inventors alive today than in all of prior human history. It is both inspiring and maddening that humanity sits at the foot of a waterfall of opportunity, yet still hangs at the precipice of an extinction of its own creation. The distributed computing and Internet revolution has driven the most amazing transformation ever, bringing together 8 billion people to function as a village; the transparency that has come from connectivity has challenged national boundaries and the nature of corporations.
This talk analyses the evolution of innovation over the last 100 years and the emergence of the borderless Citizen Inventor. We discuss how people collaborate, innovate, and invent; the nature of where and how research takes place; how we protect and monetize our inventions; and how we might evolve to a better place.
Doctoral Consortium Keynote
3 July 2022
Data Management Systems: Evolution, State of the Art and Open Issues
Prof. Dr. Abdelkader Hameurlain
Pyramid Team, Institut de Recherche en Informatique de Toulouse (IRIT), Paul Sabatier University, France
Abdelkader Hameurlain was a full Professor in Computer Science at Paul Sabatier University (Toulouse, France) until August 31, 2021, and has been Emeritus Professor there since September 1, 2021. He is a member of the Institute of Research in Computer Science of Toulouse (IRIT). His current research interests are in query processing and optimization in parallel, cloud, and large-scale distributed environments, mobile databases, and database performance. Prof. Hameurlain has been the general chair of the International Conference on Database and Expert Systems Applications (DEXA 2002, 2011, 2017, and 2018). He is Co-Editor-in-Chief of the international journal "Transactions on Large-Scale Data- and Knowledge-Centered Systems" (LNCS, Springer). He was guest editor of three special issues of the "International Journal of Computer Systems Science and Engineering", on "Mobile Databases", "Data Management in Grid and P2P Systems", and "Elastic Data Management in Cloud Systems".
Abstract
My talk will be centred on the evolution of Data Management Systems (DMS) in different environments (Uniprocessor, Parallel, Distributed, Cloud).
The interest of this talk for PhD students is twofold:
- on the one hand, to give a synthetic view of the main characteristics of DMS, highlighting the concepts they introduced, the links between them, their objectives, and their target applications. These systems are at the foundation and at the heart of business decision support systems;
- on the other hand, in the field of ICT systems, the outline of my presentation illustrates the working methodology that PhD students follow up to the defence of their doctoral thesis (Scientific Context/Motivation/Problem Statement, State of the Art, Contributions, Open Issues).
For more than half a century, DMS have been intensively designed and developed in multiple environments and for widely varying classes of applications. Indeed, in the DMS landscape, data analysis systems (OLAP) and transaction processing systems (OLTP) are managed separately. The reason for this dichotomy is that the two classes of systems have very different functionalities, characteristics, requirements, and objectives. The presentation will be centred on the first class, OLAP systems, and will be structured as follows. First, I synthetically describe the main problems of data management, the underlying concepts, and the main characteristics of the proposed DMS. Next, parallel database systems and cloud data management systems (NoSQL systems, multi-tenant DBMS, multistore/polystore systems) are overviewed and compared, highlighting their advantages, their weaknesses, and the links between them. Then, I briefly describe the contributions of our Pyramid Team (IRIT Lab) to DMS in large-scale parallel and cloud environments. Lastly, with respect to the evolution of DMS, I point out some open issues that should be tackled to ensure the viability of the next generation of large-scale data management systems for big data applications.
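To make the OLAP/OLTP dichotomy concrete, here is a toy Python/sqlite3 sketch (the table and values are invented for illustration) contrasting the two workload styles that, as the abstract notes, are managed by separate systems:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, region TEXT, amount REAL)")
    conn.executemany("INSERT INTO orders (region, amount) VALUES (?, ?)",
                     [("north", 10.0), ("north", 20.0), ("south", 5.0)])

    # OLTP-style workload: short transactions that read/update individual rows
    with conn:  # commits (or rolls back) the enclosed statements atomically
        conn.execute("UPDATE orders SET amount = amount + 1 WHERE id = ?", (1,))

    # OLAP-style workload: read-mostly aggregations scanning many rows
    for region, total in conn.execute(
            "SELECT region, SUM(amount) FROM orders GROUP BY region"):
        print(region, total)

The access patterns differ so sharply (point lookups and updates under strict transactional guarantees versus large scans and aggregates) that storage layouts, execution engines, and optimizers are designed separately for each class.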