Upcoming Seminars

(page last updated on Tuesday 6 December 2016)

Tuesday 6 December 2016

On Bit Representations of Mixed-Integer Quadratic Programs
Abstract. A standard trick in integer programming is to replace each bounded general-integer variable with a small number of binary variables, using the bit representation of the given variable (see, e.g., Owen & Mehrotra, 2002; Coppersmith & Lee, 2005; Muldoon et al., 2013; Bonami & Margot, 2015). Recently, bit representation was found to be useful for convexifying quadratic problems (Billionnet et al., 2012) and for linearising bilinear problems (Gupte et al., 2013). We show that, in the case of mixed-integer quadratic programs, bit representation has an additional benefit: it can enable one to obtain stronger linear programming relaxations.
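
A minimal sketch of the bit-representation trick the abstract refers to (function names are our own illustration, not the speaker's):

```python
import math

def bit_coefficients(upper_bound):
    """Powers of two needed to write an integer variable x in
    [0, upper_bound] as x = sum_k 2**k * z_k with binary z_k.
    (A constraint x <= upper_bound is still needed in the model,
    since the binaries can reach 2**n_bits - 1.)"""
    n_bits = max(1, math.ceil(math.log2(upper_bound + 1)))
    return [2 ** k for k in range(n_bits)]

def decode(z, coeffs):
    """Recover the integer value from an assignment of the binaries."""
    return sum(zk * c for zk, c in zip(z, coeffs))

coeffs = bit_coefficients(10)        # x in [0, 10] needs 4 binaries
print(coeffs)                        # [1, 2, 4, 8]
print(decode([0, 1, 0, 1], coeffs))  # 2 + 8 = 10
```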

Speaker: Prof. Adam Letchford [Bio sketch]

Bio Sketch. Adam N. Letchford is known internationally for his research on exact solution methods for NP-hard optimisation problems. He has been the recipient of an IBM Faculty Award and an EPSRC Advanced Research Fellowship, and is a Fellow of the Operational Research Society. He has been on the editorial boards of six journals, including Mathematical Programming and Operations Research. From 2008 to 2014, he was the coordinator of the optimisation cluster of the LANCS Initiative. Since 2012, he has been the director of NATCOR, the UK National Taught Course Centre in Operational Research.

Affiliation: Management School - Lancaster University
Location: Huxley Building
Time: 3:00pm (duration: 1 hour)

Tuesday 13 December 2016

Smart Grids and Optimization: A Winning Combination
Abstract. A smart grid is the combination of a traditional electrical power system with information and energy both flowing back and forth between suppliers and consumers. This new paradigm introduces major challenges such as the integration of intermittent generation and storage, and the need for electricity consumers to play an active role in the operations of the system. We will summarize the opportunities provided by smart grids to the optimization community, and illustrate one such opportunity through some recent research on the optimal aggregation of energy resources (joint work with F. Gilbert, P. Marcotte, and G. Savard).

Speaker: Prof. Miguel Anjos [Bio sketch]

Bio Sketch. Miguel Anjos is a Professor at Polytechnique Montreal. He holds a Canada Research Chair and an Inria International Chair. He is also a licensed professional engineer in Ontario, Canada. His research is concerned with using mathematical optimization to provide guaranteed optimal or near-optimal solutions for important classes of large-scale discrete nonlinear optimization problems arising in engineering applications.

Affiliation: Polytechnique Montreal
Location: 217 Huxley Building
Time: 1:30pm (duration: 1 hour)

Past Seminars

Monday 5 December 2016

Solution of an old problem in vector optimisation: Support function of the generalised Jacobian
Abstract. Many optimisation problems are non-smooth in the sense that the objective function is not differentiable. This is invariably the case in many areas of application when the objective function contains simple constructs such as the maximum or minimum of several differentiable functions or contains, for example, the absolute value function. The notion of the Clarke sub-gradient of locally Lipschitz maps has been used successfully in the past decades to solve non-smooth non-convex optimisation problems by methods such as sub-gradient descent or the more powerful bundle method. The generalised Jacobian of a locally Lipschitz vector function, as introduced by Clarke in the mid-1970s, plays the same role in non-smooth vector (multi-objective) optimisation that the (Clarke) sub-gradient or sub-differential plays in the usual non-smooth scalar optimisation. The sub-gradient of a non-smooth objective function is a non-empty, compact and convex subset of the finite Euclidean space of the same dimension as the domain of the objective function. Similarly, the generalised Jacobian is a non-empty, compact and convex subset of the space of real m×n matrices, where n is the dimension of the input and m is the dimension of the output of the multi-objective function. However, until now, there has been a huge discrepancy in the way they have been defined. The sub-gradient of a scalar function at a point was originally defined constructively by its support function, which provides a simple expression for its boundary. In contrast, the generalised Jacobian has been defined non-constructively by using a theorem of Rademacher in analysis, which states that every Lipschitz map between finite-dimensional Euclidean spaces is differentiable for almost all its arguments.
The non-constructive nature of this definition has created a major obstacle to the application of the generalised Jacobian in vector optimisation and several other fields of computation, such as the non-smooth Newton method or the solution of parametric ODEs with non-smooth right-hand sides. There have been two attempts to tackle the problem of finding the support function of the generalised Jacobian. Hiriart-Urruty derived the support function of an approximation to the generalised Jacobian, namely its so-called plenary hull. Imbert used a Green-Stokes formula to obtain an intractable analytic expression for the support function of the generalised Jacobian that involves a limsup operation over surface integrals of the divergence of the derived function on a shrinking sequence of hypercubes. In this talk, I will derive a simple expression to compute the support function of the generalised Jacobian after 40 years. It determines the boundary of the non-empty compact convex set that the generalised Jacobian represents in the space of m×n matrices. The derivation is elementary, does not invoke Rademacher's theorem and is in the same spirit as that of the sub-gradient of a scalar objective function. Moreover, I will show that the same technique can be used to compute the generalised Jacobian of locally Lipschitz multi-objective functions defined on an infinite-dimensional Banach space, which allows optimisation over function spaces.
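
For reference, the non-constructive, Rademacher-based definition the abstract contrasts against is usually written (standard textbook notation, not taken from the talk) as

```latex
\partial F(x) \;=\; \operatorname{conv}
\Bigl\{ \lim_{i \to \infty} JF(x_i) \;:\; x_i \to x,\ x_i \notin \Omega_F \Bigr\},
```

where F : R^n -> R^m is locally Lipschitz, JF denotes the classical Jacobian and Ω_F is the Lebesgue-null set, guaranteed by Rademacher's theorem, on which F fails to be differentiable. In the scalar case m = 1, the constructive route mentioned in the abstract starts instead from the Clarke directional derivative, which is the support function of the sub-gradient.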

Speaker: Prof. Abbas Edalat [Bio sketch]

Bio Sketch. Abbas Edalat has been a professor of computer science and mathematics at the Department of Computing, Imperial College London, since 1997.

Affiliation: Department of Computing - Imperial College London
Location: 217 Huxley Building
Time: 3:00pm (duration: 1 hour)

Monday 7 November 2016

On Robust Selection Problems
Abstract.

Speaker: Dr Marc Goerigk [Bio sketch]

Bio Sketch. Marc Goerigk is a Lecturer in the Department of Management Science at Lancaster University. He studied mathematics and computer science at the University of Gottingen, where he also completed his PhD in applied mathematics in 2012. From 2012 to 2015, he worked as a Post-Doc at the University of Kaiserslautern. Besides robust optimisation, his research interests include disaster management and public transportation problems.

Affiliation: Department of Management Science - Lancaster University
Location: LT1 Business School
Time: 2:30pm (duration: 1 hour)

Thursday 3 November 2016

On the Passage From Local to Global in Optimization: New Challenges in Theory and Practice
Abstract. Large scale problems in the design of networks, energy systems, medicine and drug design, in finance, and engineering are modeled as optimization problems. Humans and nature are constantly optimizing to minimize costs or maximize profits, to maximize the flow in a network, or to minimize the probability of a blackout in the smart grid. Due to new algorithmic developments and the computational power of machines, optimization algorithms have been used to solve problems in a wide spectrum of applications in science and engineering. In this talk we are going to address new challenges in the theory and practice of optimization. First, we have to reflect back a few decades and see what has been achieved and then address the new research challenges and new directions.

Speaker: Prof. Panos M. Pardalos [Bio sketch]

Bio Sketch. Panos Pardalos is a Distinguished Professor in the Departments of Industrial and Systems and Biomedical Engineering at the University of Florida, and a world-renowned leader in Global Optimization, Mathematical Modeling, and Data Sciences. He is currently a Fellow of AAAS, AIMBE, and INFORMS and was awarded the 2013 Constantin Caratheodory Prize of the International Society of Global Optimization. In addition, Dr. Pardalos has been awarded the 2013 EURO Gold Medal prize bestowed by the Association of European Operational Research Societies. This medal is the preeminent European award given to Operations Research (OR) professionals for "scientific contributions that stand the test of time." Prof. Pardalos is also an elected Member of the New York Academy of Sciences, the Lithuanian Academy of Sciences, the Royal Academy of Spain, and the National Academy of Sciences of Ukraine. He is the Founding Editor of Optimization Letters and Energy Systems, and Co-Founder of the International Journal of Global Optimization. He has published over 500 papers, edited/authored over 200 books and organized over 80 conferences. His work has about 34,000 citations, an H-index of 81 and an i10-index of 472 (Google Scholar), and he has graduated 60 PhD students so far.

Affiliation: Departments of Industrial and Systems and Biomedical Engineering - University of Florida
Location: Room 145 Huxley Building
Time: 4:30pm (duration: 1 hour)

Tuesday 25 October 2016

Optimization with occasionally accurate data
Abstract. We present global rates of convergence for a general class of methods for nonconvex smooth optimization that include linesearch, trust-region and regularisation strategies, but that allow inaccurate problem information. Namely, we assume the local (first- or second-order) models of our function are only sufficiently accurate with a certain probability, and they can be arbitrarily poor otherwise. This framework subsumes certain stochastic gradient analyses and derivative-free techniques based on random sampling of function values. It can also be viewed as a robustness assessment of deterministic methods and their resilience to inaccurate derivative computation, such as that due to processor failure in a distributed framework. We show that, in terms of the order of accuracy, the evaluation complexity of such methods is the same as that of their counterparts that use deterministic, accurate models; the use of probabilistic models only increases the complexity by a constant, which depends on the probability of the models being good. Time permitting, we also discuss the case of inaccurate, probabilistic function value information, which arises in stochastic optimization. This work is joint with Katya Scheinberg (Lehigh University, USA).

Speaker: Coralia Cartis [Bio sketch]

Bio Sketch. Coralia Cartis (BSc Mathematics, Babeș-Bolyai University, Romania; PhD Mathematics, University of Cambridge (2005)) joined the Mathematical Institute at Oxford and Balliol College in 2013 as Associate Professor in Numerical Optimization. Previously, she worked as a research scientist at Rutherford Appleton Laboratory and as a postdoctoral researcher at Oxford University. Between 2007 and 2013, she was a (permanent) lecturer and then senior lecturer in the School of Mathematics, University of Edinburgh. Her research interests address the development, analysis and implementation of algorithms for linear and nonlinear non-convex optimization, suitable for large-scale problems. A particular focus of her recent research has been the complexity analysis/global rates of convergence of nonconvex optimization algorithms. Some of her methods have been included in the GALAHAD optimization software library. She has also worked on applications of optimization in compressed sensing, signal processing and parameter estimation in climate modelling.

Affiliation: Mathematical Institute - Oxford and Balliol College
Location: Room 218 Huxley Building
Time: 2:00pm (duration: 1 hour)

Tuesday 4 October 2016

CloseOut Risk Evaluation: an optimization-based approach to risk measurement for complex financial portfolios
Abstract. The standard approach used in financial risk management for measuring (market) risk of financial portfolios is based on statistical risk measures such as Value-at-Risk (a loss quantile) or CVaR (a tail conditional expectation) computed from the portfolio loss distribution; optimization methods are then used to find allocations minimizing such risk measures under constraints. Yet these statistical risk measures have little or no economic foundation and may bear no direct relation to the losses resulting from partial or total liquidation of the portfolio in stress scenarios; indeed, such losses depend on how the portfolio is liquidated.
We argue that, on the contrary, liquidation value and liquidity risk cannot be left out of a proper risk measurement framework. We propose a novel approach to risk measurement for complex financial portfolios, CloseOut Risk Evaluation, which integrates market risk and liquidity risk in a single risk measure, defined as the value function of a min-max stochastic optimization problem which has a clear financial interpretation in terms of funding requirements. The method is implementable using Linear Programming techniques, which makes it scalable to large-dimensional portfolios with thousands of risk factors, and leads to non-linear scaling of risk as a function of portfolio size. We illustrate the method using several examples, including the JP Morgan London Whale portfolio.
Joint work with: Marco AVELLANEDA (NYU Courant Institute).
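
For context, the two statistical risk measures named in the abstract are commonly defined, for a portfolio loss L and confidence level α ∈ (0, 1) (textbook definitions, not specific to the talk), as

```latex
\mathrm{VaR}_\alpha(L) \;=\; \inf\{\, \ell \in \mathbb{R} : \mathbb{P}(L \le \ell) \ge \alpha \,\},
\qquad
\mathrm{CVaR}_\alpha(L) \;=\; \frac{1}{1-\alpha} \int_\alpha^1 \mathrm{VaR}_u(L)\, du .
```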

Speaker: Prof. Rama Cont [Bio sketch]

Bio Sketch. Rama CONT is Professor of Mathematics and Chair in Mathematical Finance at Imperial College London, Director of the CFM-Imperial Institute of Quantitative Finance, and co-Director of the EPSRC Centre for Doctoral Training in Financial Analytics and Computing.
He joined Imperial College in 2012 after holding teaching and research positions at Ecole Polytechnique (France), Columbia University (New York) and Université Pierre & Marie Curie (Paris VI).
Rama Cont's research focuses on stochastic analysis, stochastic processes and mathematical modeling in finance, in particular the modeling of extreme market risks: discontinuities in market behavior, extreme risks, endogenous risk and systemic risk. He has participated in numerous consulting projects for financial institutions and regulators in the UK, Europe, US and Asia. He has co-authored more than 60 research publications, including the widely cited monograph Financial Modelling with Jump Processes (2003), and is the Editor-in-Chief of a major reference work, the Encyclopedia of Quantitative Finance (Wiley 2010). He served as Chair of the SIAM Activity Group on Financial Mathematics and Financial Engineering (2010-2012).
Prof. Cont was awarded the Louis Bachelier Prize by the French Academy of Sciences in 2010 for his research on mathematical modeling in finance.
He holds a Doctorat from Université de Paris Sud (Orsay), a Masters degree in Theoretical Physics from Ecole Normale Supérieure (Paris) and a BSc from Ecole Polytechnique (France).

Affiliation: CFM-Imperial Institute of Quantitative Finance
Location: Room 217 Huxley Building
Time: 11:00am (duration: 1 hour)

Monday 3 October 2016

Pooling Problems: Advances in Theory and Applications
Abstract. The pooling problem is a nonconvex nonlinear programming problem with important applications. The nonlinearities of the problem arise from bilinear constraints that capture the blending of raw materials. In this talk, we summarise our recent contributions to the problem, which fall into the following categories:
Formulations: we propose new multi-commodity flow formulations based on output, input, and (input, output) commodities, and evaluate their performance computationally.
Complexity: we show that the pooling problem with one pool and a bounded number of inputs can be solved in polynomial time.
Bounding the gap between the McCormick relaxation and the convex hull: we show that the so-called McCormick relaxation can be arbitrarily worse than the convex hull.
Convex hulls of bilinear functions: Padberg introduced new classes of inequalities that can significantly strengthen the McCormick relaxation. We study classes of bilinear functions where some of the Padberg inequalities characterise the convex hull, and evaluate computationally which of the inequalities are strongest.
We conclude the talk by studying an application of particular interest to Novocastrians: optimising coal blending operations at the port of Newcastle, the world's largest coal export port. This is joint work with my PhD supervisors, Dr Thomas Kalinowski, Prof Natashia Boland, and Prof Martin Savelsbergh.
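
The McCormick relaxation mentioned above replaces each bilinear term w = x*y by four linear inequalities; a small sketch (our own illustrative code) shows the under- and over-estimators they imply on the unit box:

```python
def mccormick_bounds(x, y, xL, xU, yL, yU):
    """Tightest lower/upper bounds on w = x*y implied by the four
    standard McCormick inequalities over the box [xL, xU] x [yL, yU]."""
    under = max(xL * y + x * yL - xL * yL,
                xU * y + x * yU - xU * yU)
    over = min(xL * y + x * yU - xL * yU,
               xU * y + x * yL - xU * yL)
    return under, over

# At the centre of the unit box the true product is 0.25, but the
# relaxation only pins w down to the interval [0.0, 0.5]:
lo, hi = mccormick_bounds(0.5, 0.5, 0.0, 1.0, 0.0, 1.0)
print(lo, hi)  # 0.0 0.5
```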

Speaker: Fabian Rigterink [Bio sketch]

Bio Sketch. Fabian Rigterink is a PhD candidate at the University of Newcastle, Australia. He is supervised by Dr Thomas Kalinowski, Prof Natashia Boland, and Prof Martin Savelsbergh. Prior to commencing his PhD, Fabian received his BSc and MSc in Industrial Engineering and Management from Karlsruhe Institute of Technology, Germany.

Affiliation: School of Mathematical and Physical Sciences - The University of Newcastle Australia
Location: Room 554 Huxley Building
Time: 3:00pm (duration: 1 hour)

Friday 16 September 2016

Mixed-integer convex optimization
Abstract. Mixed-integer convex optimization problems are convex problems with the additional (non-convex) constraints that some variables may take only integer values. Despite the past decades' advances in algorithms and technology for both mixed-integer *linear* and *continuous, convex* optimization, mixed-integer convex optimization problems have remained relatively more challenging and less widely used in practice. In this talk, we describe our recent algorithmic work on mixed-integer convex optimization which has yielded advances over the state of the art, including the globally optimal solution of open benchmark problems. Based on our developments, we have released Pajarito, an open-source solver written in Julia and accessible from popular optimization modeling frameworks. Pajarito is immediately useful for solving challenging mixed combinatorial continuous problems arising from engineering and statistical applications.

Speaker: Miles Lubin [Bio sketch]

Bio Sketch. Miles Lubin is a Ph.D. candidate in Operations Research at the Massachusetts Institute of Technology advised by Juan Pablo Vielma. His research interests span diverse areas of mathematical optimization, with a unifying theme of developing new methodologies for large-scale optimization drawing from motivating applications in renewable energy. He has published work in chance constrained optimization, mixed-integer conic optimization, robust optimization, stochastic programming, algebraic modeling, automatic differentiation, numerical linear algebra, and parallel computing techniques for large-scale problems. He is an author of the JuMP modeling package and co-founder of the JuliaOpt organization for optimization software written in Julia.

Affiliation: Massachusetts Institute of Technology
Location: Room 217 Huxley Building
Time: 3:00pm (duration: 1 hour)

Friday 1 July 2016

Scheduling Algorithms for Energy Efficiency in Computing Systems
Abstract. Energy consumption of computing devices has become an important issue nowadays. A major tool for efficient energy management at the system level is dynamic speed (frequency) scaling combined with job scheduling. In this context, the processing time of a job is not fixed, but depends on the speed at which it is processed, while the energy is a convex function of the speed. The main goal is to design efficient algorithms that compute good trade-off solutions with respect to performance and energy. Our focus will be on algorithmic techniques with provably good performance for fundamental problems of the area.
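
A toy version of the speed-scaling model in the abstract, assuming the common convex power function P(s) = s**alpha (the exponent value and names are our illustrative choices, not the speaker's):

```python
def energy(work, deadline, alpha=3.0):
    """Energy of finishing `work` units by `deadline` at the constant
    speed work/deadline, under the power model P(s) = s**alpha.
    For a single job, running at constant speed is optimal because
    the power function is convex (Jensen's inequality)."""
    speed = work / deadline
    return speed ** alpha * deadline

# Slowing down saves energy super-linearly:
print(energy(10, 5))   # speed 2.0 -> energy 40.0
print(energy(10, 10))  # speed 1.0 -> energy 10.0
```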

Speaker: Dimitrios Letsios [Bio sketch]

Bio Sketch. Dimitrios Letsios has very recently joined Imperial College as a postdoctoral researcher. Previously, he served as a postdoctoral researcher with teaching duties at the University of Nice - Sophia Antipolis (2015-2016), at the Technical University of Munich (2014-2015) and at University Pierre and Marie Curie (2013-2014). Before that, he obtained his PhD degree at the University of Evry in Paris (2010-2013) and his MSc and BSc degrees at Athens University of Economics and Business (2004-2010). His research interests lie in theoretical computer science and, more specifically, the design of algorithms with provable performance guarantees (approximation algorithms). He has mainly worked on scheduling problems taking into account the energy consumption and communication costs of computing systems.

Affiliation: Department of Computing - Imperial College London
Location: Huxley Building
Time: 3:00pm (duration: 1 hour)

Tuesday 31 May 2016

Symmetry Groups and Topological Structure of Optimisation Problems
Abstract.

Speaker: Georgia Kouyialis [Bio sketch]

Bio Sketch. Georgia Kouyialis is a PhD student in the Department of Computing (QUADS group) at Imperial College, under the supervision of Dr. Ruth Misener. She obtained the MSci (Hons) degree in Mathematics from University College London (UCL). She received EPSRC DTA funding and her research revolves around Mixed Integer Nonlinear Programming.

Affiliation: Department of Computing - Imperial College London
Location: Huxley Building
Time: 5:00pm (duration: 1 hour)
Asymptotic Error Bounds for Control Constrained Singularly Perturbed Linear Quadratic Optimal Control Problems
Abstract.

Speaker: Sei Howe [Bio sketch]

Bio Sketch. Sei Howe is a PhD student in the QUADS group at Imperial College. She received her B.A. in pure mathematics from Reed College, USA, in 2011 and her M.Sc. in pure mathematics from Imperial College in 2012. Her supervisor is Dr. Panos Parpas and her research focuses on stochastic optimization of multi-scale processes.

Affiliation: Department of Computing - Imperial College London
Location: Huxley Building
Time: 4:00pm (duration: 1 hour)

Tuesday 17 May 2016

A Parametric Approach to Solving the Pooling Problem
Abstract. We develop an algorithm solving specialised pooling problem instances and generating cutting planes for more generic instances. The approach parameterises the optimisation problem with respect to the pool concentration variables and uncovers embedded sparsity and polyhedral/topological properties for a variety of instances. The presentation generalises and extends recent work analysing computational complexity of the pooling problem [Boland et al. 2015, Haugland 2016]. Our analysis also integrates source-to-output streams and both upper and lower bounds on the network parameters.

Speaker: Radu Baltean Lugojan [Bio sketch]

Bio Sketch. Radu Baltean-Lugojan is a PhD student in the Department of Computing (QUADS group) at Imperial College London, under the supervision of Dr. Ruth Misener and Dr. Panos Parpas. He received EPSRC funding, and previously obtained the MEng Computing degree from Imperial College London.

Affiliation: Department of Computing - Imperial College London
Location: Huxley Building
Time: 5:00pm (duration: 1 hour)
On the Convergence of Galerkin Type Multilevel Optimization Methods
Abstract.

Speaker: Chin Pang Ho (Clint) [Bio sketch]

Bio Sketch. Chin Pang Ho (Clint) is a PhD student in the Department of Computing (QUADS group) at Imperial College, under the supervision of Dr Panos Parpas. He received a BS in Applied Mathematics from the University of California, Los Angeles and an MSc in Mathematical Modeling and Scientific Computing from the University of Oxford.

Affiliation: Department of Computing - Imperial College London
Location: Huxley Building
Time: 4:00pm (duration: 1 hour)

Tuesday 3 May 2016

Integrating Mixed Integer Optimisation and Logic with Satisfiability Modulo Theories
Abstract. Mixed integer optimisation problems, especially those involving design or organisation, often have an inherent logical structure. Existing frameworks to model and utilise such structure reformulate the problem into a mixed integer model or make use of specialised constraints. Using the application of two-dimensional bin packing, we explore Satisfiability Modulo Theories (SMT) as a means to exploit logical structure. The logical connectives and reasoning provided by SMT allow us to derive cuts to strengthen a Mixed Integer Linear Programming (MILP) solver and, by using unsatisfiability proofs, to identify new ways of traversing the search tree.

Speaker: Miten Mistry [Bio sketch]

Bio Sketch. Miten Mistry is a PhD student in the Department of Computing (QUADS group) at Imperial College London, under the supervision of Dr Ruth Misener. He received EPSRC HiPEDs CDT funding. He previously obtained a MEng Mathematics and Computer Science degree from Imperial College London.

Affiliation: Department of Computing - Imperial College London
Location: Room 418 Huxley Building
Time: 5:00pm (duration: 1 hour)
Multi-Level Accelerated Algorithm for Large-Scale Convex Composite Minimization
Abstract. We propose a multi-level algorithm for solving convex composite optimization problems. Our method exploits the fact that many applications that give rise to large-scale problems can be modelled using varying degrees of fidelity. We show that it converges to a minimizer with optimal rate. Using numerical experiments we show that on large-scale computer vision problems our algorithm is several times faster than the state of the art.

Speaker: Vahan Hovhannisyan [Bio sketch]

Bio Sketch. Vahan Hovhannisyan is a PhD student in the QUADS group at Imperial College, under the supervision of Dr Panos Parpas. He received a BS in Applied Mathematics from the State Engineering University of Armenia and an MSc in Applied Mathematics (with an application area in operations management) from ETH Zurich. His research interests are in convex and robust optimization with applications in machine learning.

Affiliation: Department of Computing - Imperial College London
Location: Room 418 Huxley Building
Time: 4:00pm (duration: 1 hour)

Wednesday 27 April 2016

Parallel Algorithms and Applications in Structured Large-scale Optimization
Abstract. Mathematical programming has proven to be an efficient tool for design and operation of chemical processes. However, engineering and scientific needs continue to push the boundaries of existing mathematical programming tools, often outstripping the capabilities of a single CPU workstation. Furthermore, computer chip manufacturers are no longer focusing on increasing clock speeds, and the free performance improvements that we have historically enjoyed will no longer be available, unless we develop algorithms that are capable of utilizing modern parallel architectures. This presentation discusses advances in parallel algorithms for structured nonlinear mathematical programming problems, along with a few applications of large-scale optimization. Large-scale optimization formulations arise from a number of different problem classes, including design and operations under uncertainty, optimization of complex networks, and optimization of discretized systems. In this presentation, I will outline applications in each of these areas. In design of process safety systems, we have developed advanced stochastic programming formulations for the optimal placement of gas detectors in chemical process facilities based on data from CFD simulations of leak dispersion. As well, we have partnered with both industry and federal agencies to develop a suite of tools for protecting drinking water distribution systems in the event of accidental or intentional contamination. Our research has focused on improved simulation capabilities, optimal placement of booster response units, real-time determination of contamination sources, and response optimization. Finally, we have been working with epidemiologists at Johns Hopkins University to develop improved models of infectious disease spread. These dynamic optimization formulations find seasonal patterns in inputs by solving inverse problems based on observed case counts. 
In particular, these results help quantify the importance of school-term holiday schedules on the spread of childhood infectious diseases.

Speaker: Prof. Carl Laird [Bio sketch]

Bio Sketch. Carl Laird is an associate professor in the School of Chemical Engineering at Purdue University. Dr. Laird's research interests include large-scale nonlinear optimization and parallel scientific computing. Focus areas include chemical process systems, homeland security applications, and large-scale infectious disease spread. Dr. Laird is the recipient of several research and teaching awards, including the CAST Division Outstanding Young Researcher Award, the National Science Foundation Faculty Early Career Development (CAREER) Award and the Montague Center for Teaching Excellence Award. He is also a recipient of the prestigious Wilkinson Prize for Numerical Software and the IBM Bravo award for his work on IPOPT, a software library for solving nonlinear, nonconvex, large-scale continuous optimization problems. Dr. Laird earned his Ph.D. in Chemical Engineering from Carnegie Mellon in 2006 and his Bachelor of Science in Chemical Engineering from the University of Alberta.

Affiliation: School of Chemical Engineering - Purdue University
Location: Lecture Theatre 2 ACEX Building
Time: 4:00pm (duration: 1 hour)

Monday 21 March 2016

On some tractable optimization models dealing with uncertainty
Abstract. In the first part of the talk we consider the minmax regret model for combinatorial optimization problems under uncertainty, which can be viewed as a zero-sum game played between an optimizing player and an adversary, where the optimizing player selects a solution and the adversary selects costs with the intention of maximizing the regret of the player. The conventional model considers only deterministic solutions/strategies, and minmax regret versions of most polynomially solvable problems are NP-hard. In this talk, we consider a randomized model where the optimizing player selects a probability distribution (corresponding to a mixed strategy) over solutions and the adversary selects costs with knowledge of the player’s distribution, but not its realization. We show that under this randomized model, the minmax regret version of any polynomially solvable combinatorial problem becomes polynomially solvable. This holds true for both interval and discrete scenario representations of uncertainty. In the second part of the talk we consider satisficing models, which, as an approach to decision-making under uncertainty, aim at achieving solutions that satisfy the problem’s constraints as much as possible. Mathematical optimization problems that are related to this form of decision-making include the P-model of Charnes and Cooper (1963), where satisficing is the objective, as well as chance-constrained and robust optimization problems, where satisficing is articulated in the constraints. In this talk, we introduce the most general framework of a satisficing model, termed the S-model, which seeks to maximize a satisficing decision criterion in its objective, and the corresponding satisficing-constrained optimization problem that generalizes robust optimization and chance-constrained optimization problems. We then focus on a tractable probabilistic S-model, termed the T-model, whose objective is a lower bound of the P-model.
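
Written out in standard notation (ours, not the speaker's), for a feasible set X and an uncertainty set U of cost vectors, the regret and the two minmax regret models described above are

```latex
\mathrm{Reg}(x, c) \;=\; c^\top x - \min_{y \in X} c^\top y,
\qquad
\min_{x \in X} \max_{c \in U} \mathrm{Reg}(x, c)
\quad \text{(deterministic)},
\qquad
\min_{p \in \Delta(X)} \max_{c \in U} \mathbb{E}_{x \sim p}\bigl[\mathrm{Reg}(x, c)\bigr]
\quad \text{(randomized)},
```

where Δ(X) denotes the set of probability distributions over X.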

Speaker: Prof. Patrick Jaillet [Bio sketch]

Bio Sketch. Patrick Jaillet is the Dugald C. Jackson Professor in the Department of Electrical Engineering and Computer Science and a member of the Laboratory for Information and Decision Systems at MIT. He is also one of the two Directors of the MIT Operations Research Center. Before MIT, he held faculty positions at the University of Texas at Austin and at the Ecole Nationale des Ponts et Chaussees, Paris. He received a Diplôme d'Ingénieur from France, and a PhD in Operations Research from MIT. His current research interests include on-line and data-driven optimization under uncertainty. He is a Fellow of INFORMS and a member of SIAM.

Affiliation: Laboratory for Information and Decision Systems - MIT
Location: LG19 seminar room, Business School
Time: 12:00pm (duration: 1 hour)

Wednesday 9 March 2016

On optimal selection of fixed-size populations: an application to tree breeding
Abstract. One of the problems that tree breeders face is the selection of a pedigree of trees with two aims: 1) conserving genetic diversity and 2) maximizing response to selection. We tackled the problem of selecting a fixed-size breeding population while imposing a constraint on the relatedness of the population members. The problem is expressed as a mixed-integer quadratically constrained optimization (MIQCO) problem, in which a function is maximized subject to nonlinear quadratic constraints and the discreteness of some variables, and solved using a variant of the branch-and-bound method that uses a linear relaxation of the original problem. I will discuss details of the problem and of the algorithm (including a fast heuristic to find feasible solutions). I will also illustrate case studies of the selection of breeding populations for Scots pine and loblolly pine (joint work with Tim Mullin, Skogforsk, the Swedish Forestry Research Institute).

Speaker: Dr. Pietro Belotti [Bio sketch]

Bio Sketch. Pietro Belotti received a PhD in Computer Engineering in 2003 from the Technical University of Milan with a dissertation on optimal network design under survivability constraints. He subsequently held a postdoctoral position at the Tepper School of Business, Carnegie Mellon University, a Visiting Professor post at the Department of Industrial and Systems Engineering, Lehigh University, and then an Assistant Professor position at the Department of Mathematical Sciences of Clemson University. He is currently working at Fair Isaac, in the development team of the Xpress Optimizer. His research interests lie primarily in mixed-integer nonlinear optimization, robust optimization, and discrete bi-objective optimization.

Affiliation: FICO
Location: CPSE seminar room (C615 Roderic Hill)
Time: 4:00pm (duration: 1 hour)

Friday 26 February 2016

Large-scale MILP and MINLP problems in power system planning
Abstract. Achieving the ambitious decarbonization goals set by governments worldwide will entail significant changes to the way electrical energy is generated, transmitted and used. The cost-effective integration of inflexible low-carbon plant within conventional energy systems constitutes a significant challenge. Furthermore, transmission planners are unable to make fully-informed decisions due to the increasing uncertainty that surrounds future system developments. Recently, it has been shown that stochastic system planning based on scenario trees enables the identification of strategic opportunities for the management of long-term uncertainty. However, these problems are described by large MILP models which contain thousands of binary variables and tens of millions of continuous variables and constraints. After reviewing the general characteristics of the stochastic multi-stage transmission planning problem, we will present two novel solution algorithms: one based on hierarchical decomposition and one on temporal problem splitting. We will demonstrate their computational benefits and highlight how efficient solutions can inform the real-world planning process. We will also present a typology of MILP and MINLP problems encountered in energy system planning and operation.

Speaker: Dr. Ioannis Konstantelos [Bio sketch]

Bio Sketch. Ioannis Konstantelos is a Research Associate in the Control and Power group, Electrical Engineering, Imperial College London. He obtained a PhD from Imperial College in 2013. His work has focused on the development of optimisation models for transmission and distribution system planning and operation aimed at valuing the benefit of new technologies, demonstrating the strategic value of storage and demand-side response and other flexible technologies when facing long-term uncertainty. His research interests include the application of decomposition and machine learning techniques to large-scale optimization problems for energy systems.

Affiliation: Department of Electrical and Electronic Engineering - Imperial College London
Location: Room 218 Huxley Building
Time: 4:00pm (duration: 1 hour)

Wednesday 24 February 2016

On the standard pooling problem and strong valid inequalities
Abstract. The focus of this talk will be on the standard pooling problem, i.e., a continuous, non-convex optimization problem arising in the chemical engineering context. First, we will introduce the problem, which consists of finding the optimal composition of final products obtained by blending in pools different percentages of raw materials. Bilinear terms arise from the requirements on the quality of certain attributes of the final products; the quality is a linear combination of the attributes of the raw materials and intermediate products that compose the final product. Three different classical formulations have been proposed in the literature, and their characteristics will be discussed and analysed. In the second part of the talk, strong relaxations for the pooling problem will be presented. In particular, we study a structured non-convex subset arising in some special cases and derive valid nonlinear convex inequalities that we conjecture (and prove for a particular case) define the convex hull of this non-convex subset. Preliminary computational results on instances from the literature are reported and demonstrate the utility of the inequalities when used in a global optimization solver. This is joint work with Jeff Linderoth (University of Wisconsin-Madison), James Luedtke (University of Wisconsin-Madison) and Jonas Schweiger (IBM).

Speaker: Dr. Claudia D'Ambrosio [Bio sketch]

Bio Sketch. Claudia D'Ambrosio is a research scientist (chargée de recherche) at CNRS, affiliated with LIX, Ecole Polytechnique (France). She holds a Master's degree in Computer Science Engineering and a PhD in Operations Research from the University of Bologna (Italy). Her research speciality is mixed-integer nonlinear programming. Throughout her career, she has been involved in both theoretical and applied research projects. She was awarded the EURO Doctoral Dissertation Award for her PhD thesis, supervised by Professor Andrea Lodi, and the 2nd "Prix Robert Faure" award (3 candidates are awarded every 3 years) granted by the ROADEF society.

Affiliation: LIX - Ecole Polytechnique
Location: Room 311 Huxley Building
Time: 4:00pm (duration: 1 hour)

Monday 8 February 2016

Revenue-Optimising Scheduling in Parallel Stochastic Networks
Abstract. Cloud applications are often deployed on multiple virtual machines (VMs) with heterogeneous compute capacities. In this setting, we consider the optimal static scheduling of users to application servers hosted in a set of parallel VMs. Our investigation seeks a revenue-maximizing solution subject to resource utilization constraints, multiple classes of users, and a stochastic queueing-based description of the latency experienced by users at the VMs. After overviewing the general characteristics of scheduling in queueing networks, and the underpinning optimization programs, I will show that under a limiting regime this problem reduces to a bilinear optimization program. I will then introduce a heuristic solution for this program and determine an optimality gap. I will also demonstrate the effectiveness of this heuristic in a real system implementation and in comparison to approximate solutions that rely on convex formulations.

Speaker: Dr. Giuliano Casale [Bio sketch]

Bio Sketch.

Affiliation: Department of Computing - Imperial College London
Location: Room 145 Huxley Building
Time: 4:00pm (duration: 1 hour)

Tuesday 19 January 2016

The Decision Rule Approach to Optimization Under Uncertainty: Theory and Applications
Abstract. Decision making under uncertainty has a long and distinguished history in operations research. However, most of the existing solution techniques suffer from the curse of dimensionality, which restricts their applicability to small and medium-sized problems, or they rely on simplifying modeling assumptions (e.g. absence of recourse actions). Recently, a new solution technique has been proposed, referred to as the decision rule approach. By approximating the feasible region of the decision problem, the decision rule approach aims to achieve tractability without changing the fundamental structure of the problem. Despite their success, existing decision rules (a) are typically constrained by their a priori design and (b) do not incorporate binary recourse decisions in their modeling. In this talk, we present a methodology for the near-optimal design of continuous and binary decision rules using mixed-integer optimization, and demonstrate its potential in operations management applications.

Speaker: Dr. Angelos Georghiou [Bio sketch]

Bio Sketch. Angelos Georghiou is a post-doctoral researcher with the Automatic Control Laboratory at ETH Zurich. He joined ETH in 2013, having previously been a post-doctoral researcher at the Process Systems Engineering Laboratory at MIT. He received the MSci degree in Mathematics in 2008 from Imperial College London, and the Ph.D. degree in Operations Research in 2012 from the Department of Computing at Imperial College London. Angelos's research focuses on the development of efficient computational methods for the solution of stochastic and robust optimization problems. His work is primarily application driven, the main application areas being energy systems, operations management, and control.

Affiliation: Automatic Control Laboratory - Swiss Federal Institute of Technology (ETH)
Location: Room 217 Huxley Building
Time: 3:00pm (duration: 1 hour)

Wednesday 13 January 2016

Ambiguous Joint Chance Constraints under Mean and Dispersion Information
Abstract. We study joint chance constraints where the distribution of the uncertain parameters is only known to belong to an ambiguity set characterized by the mean and support of the uncertainties and by an upper bound on their dispersion. This setting gives rise to pessimistic (optimistic) ambiguous chance constraints, which require the corresponding classical chance constraints to be satisfied for every (for at least one) distribution in the ambiguity set. We provide tight conditions under which pessimistic and optimistic joint chance constraints are computationally tractable, and we show numerical results that illustrate the power of our tractability results. This is joint work with Grani Hanasusanto, Vladimir Roitch and Wolfram Wiesemann.

Speaker: Prof. Daniel Kuhn [Bio sketch]

Bio Sketch. Daniel Kuhn holds the Chair of Risk Analytics and Optimization at EPFL. Before joining EPFL, he was a faculty member at Imperial College London (2007–2013) and a postdoctoral researcher at Stanford University (2005–2006). He received a Ph.D. in Economics from the University of St. Gallen in 2004 and an M.Sc. in Theoretical Physics from ETH Zürich in 1999. His research interests revolve around robust optimization and stochastic programming.

Affiliation: Risk Analytics and Optimization - Ecole Polytechnique Federale De Lausanne (EPFL)
Location: Room 217 Huxley Building
Time: 4:00pm (duration: 1 hour)

Tuesday 1 December 2015

Rescaled coordinate descent methods for Linear Programming
Abstract. Simple coordinate descent methods such as von Neumann's algorithm or the Perceptron, both developed in the 1950s, can be used to solve linear programming feasibility problems. Their convergence rate depends on the condition measure of the problem at hand, and is typically not polynomial. Recent work of Chubanov (2012, 2014), related to prior work of Betke (2004), has sparked renewed interest in the application of these methods to obtain polynomial-time algorithms for linear programming. We present two algorithms that fit into this line of research. Both our algorithms alternate between coordinate descent steps and rescaling steps, so that either the descent step leads to substantial progress towards convergence, or we can infer that the problem is ill-conditioned and rescale in order to improve the condition measure. In particular, both algorithms are based on the analysis of a geometric invariant of the LP problem, used as a proxy for the condition measure, that appears to be novel in the literature. This is joint work with Daniel Dadush (CWI) and László Végh (LSE).
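As a minimal sketch (the classical building block, not the rescaling algorithms of the talk), the perceptron method solves the strict feasibility problem "find y with a_i^T y > 0 for every row a_i" by repeatedly adding a violated, normalized row to y; its iteration count scales with 1/rho^2, where rho is the margin of the feasible cone, i.e. a condition measure rather than the input size.

```python
import numpy as np

def perceptron_feasibility(A, max_iter=10_000):
    """Find y with A @ y > 0 componentwise, if the system is strictly feasible.

    Classical perceptron update: add a violated (normalized) row to y.
    The number of iterations scales like 1/rho^2, where rho is the margin
    of the feasible cone -- a condition measure, not polynomial in the input.
    """
    rows = A / np.linalg.norm(A, axis=1, keepdims=True)
    y = np.zeros(A.shape[1])
    for _ in range(max_iter):
        violated = np.flatnonzero(rows @ y <= 0)
        if violated.size == 0:
            return y
        y += rows[violated[0]]
    raise RuntimeError("no strictly feasible point found (ill-conditioned?)")

# Hypothetical 3x2 system; the feasible cone is wide, so convergence is fast.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = perceptron_feasibility(A)
print(A @ y)  # all entries strictly positive
```

When rho is tiny the iteration count blows up, which is exactly the situation the rescaling steps described in the abstract are designed to detect and repair.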

Speaker: Dr. Giacomo Zambelli [Bio sketch]

Bio Sketch. Dr Zambelli is an Associate Professor (Reader) in the Department of Mathematics at the London School of Economics and Political Science, which he joined in September 2010. Previously he was Assistant Professor at the University of Padova. He completed his PhD in Algorithms, Combinatorics and Optimization at the Tepper School of Business, Carnegie Mellon University. He is a co-recipient of the 2015 INFORMS Lanchester Prize.

Affiliation: Department of Management - London School of Economics and Political Science
Location: SALC 10 Sherfield Building
Time: 4:00pm (duration: 2 hours)

Friday 16 October 2015

Families of Convex and Non-Convex Composite Optimization Problems for Signal Processing and Computer Vision
Abstract.

Speaker: Dr. Stefanos Zafeiriou [Bio sketch]

Bio Sketch. Stefanos Zafeiriou is a Senior Lecturer (equivalent to Associate Professor) in Pattern Recognition/Statistical Machine Learning for Computer Vision in the Department of Computing, Imperial College London. He was awarded one of the prestigious Junior Research Fellowships (JRF) from Imperial College London in 2011 to start his own independent research group. He has participated in more than 10 EU, British and Greek research projects. Dr. Zafeiriou currently serves as an Associate Editor of IEEE Transactions on Cybernetics and the Image and Vision Computing journal. He has been a guest editor for more than four special issues and has co-organized more than five workshops/special sessions at top venues such as CVPR, FG, ICCV and ECCV. He has co-authored more than 40 journal papers, mainly on novel statistical machine learning methodologies applied to computer vision problems such as 2D/3D face and facial expression recognition, deformable object tracking, and human behaviour analysis, published in the most prestigious journals in his field (such as IEEE T-PAMI, IJCV, IEEE T-IP, IEEE T-NNLS, IEEE T-VCG and IEEE T-IFS). His students are frequent recipients of prestigious and highly competitive fellowships such as the Google, Intel and Qualcomm fellowships. He has more than 2000 citations to his work and an h-index of 24.

Affiliation: Department of Computing - Imperial College London
Location: Room 217 - 218 Huxley Building
Time: 3:00pm (duration: 1 hour)

Wednesday 14 October 2015

Generating structured music with local search and machine learning
Abstract. Many state-of-the-art music generation/improvisation systems generate music that sounds good on a note-to-note level. However, these compositions often lack long-term structure or coherence. By treating music generation as an optimization problem, this research overcomes that limitation and generates music with a larger structure. A powerful variable neighbourhood search (VNS) algorithm was developed that is able to generate a range of musical styles based on its objective function, whilst constraining the music to a structural template. In the first stage of the project, an objective function based on rules from music theory was used to generate counterpoint. In this research, a machine learning approach is combined with the VNS in order to generate structured music for the bagana, an Ethiopian lyre. Different ways are explored in which a Markov model can be used to construct quality metrics that represent how well a fragment fits the chosen style (e.g. music for the bagana). Current research that aims to extend the objective function with models such as recursive neural networks is also briefly discussed. The approach followed in this research allows us to combine the power of machine learning methods with optimization algorithms.
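For intuition, here is a generic VNS skeleton with a deliberately toy objective (driving a pitch sequence towards a fixed target melody); the neighbourhoods, objective and constants are our illustrative assumptions, not the musical objective functions from the talk.

```python
import random

def vns(initial, objective, neighbourhoods, max_rounds=200, seed=0):
    """Minimal VNS sketch: shake in neighbourhood k, then first-improvement
    local search in the smallest neighbourhood; restart from k=0 on success."""
    rng = random.Random(seed)
    best = list(initial)
    for _ in range(max_rounds):
        k = 0
        while k < len(neighbourhoods):
            candidate = neighbourhoods[k](best, rng)       # shaking
            improved = True
            while improved:                                # local search
                improved = False
                for _ in range(50):
                    nxt = neighbourhoods[0](candidate, rng)
                    if objective(nxt) < objective(candidate):
                        candidate, improved = nxt, True
            if objective(candidate) < objective(best):
                best, k = candidate, 0                     # restart neighbourhoods
            else:
                k += 1                                     # widen the shaking
    return best

# Toy example: pull a flat pitch sequence towards a target melody.
target = [60, 62, 64, 65, 67]
obj = lambda s: sum(abs(a - b) for a, b in zip(s, target))

def change_one(s, rng):
    t = list(s); t[rng.randrange(len(t))] += rng.choice([-1, 1]); return t

def swap_two(s, rng):
    t = list(s); i, j = rng.sample(range(len(t)), 2); t[i], t[j] = t[j], t[i]; return t

result = vns([60] * 5, obj, [change_one, swap_two])
print(obj(result))  # objective of the best sequence found
```

The structural-template constraint from the talk would appear here as a feasibility check inside the neighbourhood moves, and the Markov-model quality metrics would replace the toy distance objective.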

Speaker: Dr. Dorien Herremans [Bio sketch]

Bio Sketch. Dorien Herremans is currently a Marie Skłodowska-Curie Postdoctoral Fellow at C4DM, Queen Mary University of London. She received her PhD in Operations Research on the topic of computer generation and classification of music, with the thesis "Compose: Compute - Generating and Classifying Music through Operations Research Methods". She graduated as a commercial engineer in management information systems at the University of Antwerp in 2005. After that, she worked as a Drupal consultant and was an IT lecturer at Les Roches University in Bluche, Switzerland. She also worked as a teaching and research assistant (mandaatassistent) at the University of Antwerp, in the domain of operations management, supply chain management and operations research (OR). Her current research focuses on applications of OR in the field of music.

Affiliation: School of Electronic Engineering and Computer Science - Queen Mary University of London
Location: LT 144 Huxley Building
Time: 3:30pm (duration: 1 hour)

Wednesday 2 September 2015

A two-phase proximal augmented Lagrangian method for large scale convex composite quadratic programming
Abstract. We consider an important class of high-dimensional convex composite quadratic optimization problems with large numbers of linear equality and inequality constraints. Our work is motivated by recent interest in convex quadratic conic programming problems, as well as by convex quadratic programming problems with dual block-angular structures, such as those arising from two-stage stochastic programming problems. In this talk, we first introduce a symmetric Gauss-Seidel (sGS) decomposition theorem for solving an unconstrained convex composite programming problem whose objective is the sum of a multi-block quadratic function and a non-smooth function involving only the first block. Then, based on the sGS decomposition theorem, we propose a two-phase proximal augmented Lagrangian method to efficiently solve the targeted problem to high accuracy. Specifically, in Phase I, we design an inexact sGS-based semi-proximal ADMM to generate a reasonably good initial point to warm-start the algorithm in Phase II, which is a semismooth Newton-CG based proximal augmented Lagrangian method capable of computing a high-accuracy solution efficiently.

Speaker: Prof. Kim Chuan Toh [Bio sketch]

Bio Sketch. Kim-Chuan Toh is a Professor at the Department of Mathematics, National University of Singapore (NUS). He obtained his Bachelor degree from NUS in 1990 and the PhD degree from Cornell University in 1996 under the guidance of Professor Nick Trefethen. He is currently an Area Editor for Mathematical Programming Computation, and an Associate Editor for the SIAM Journal on Optimization. His research focuses on designing efficient algorithms and software for convex programming, particularly large scale matrix optimization problems such as semidefinite programming (SDP) and convex quadratic SDP.

Affiliation: Department of Mathematics - National University of Singapore
Location: CPSE seminar room (C615 Roderic Hill)
Time: 11:00am (duration: 1 hour)

Tuesday 19 May 2015

Estimating variance matrices
Abstract.  This talk introduces a new method for estimating variance matrices. Starting from the orthogonal decomposition of the sample variance matrix, we exploit the fact that orthogonal matrices are never ill-conditioned and therefore focus on improving the estimation of the eigenvalues. We estimate the eigenvectors from just a fraction of the data, then use them to transform the data into approximately orthogonal series that deliver a well-conditioned estimator (by construction), even when there are fewer observations than dimensions. We also show that our estimator has lower error norms than the traditional one. Our estimator is design-free: we make no assumptions on the distribution of the random sample or on any parametric structure the variance matrix may have. Simulations confirm our theoretical results and they also show that our simple estimator does very well in comparison with other existing methods, especially when the data are generated from fat-tailed densities.
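A rough numerical sketch of the procedure as described in the abstract (our reading, not the author's code): estimate the eigenvectors from a fraction of the sample, rotate the remaining observations into those coordinates so the series are approximately orthogonal, and estimate only the component variances. All names and constants below are illustrative assumptions.

```python
import numpy as np

def split_sample_variance(X, frac=0.3, seed=0):
    """Sketch of a split-sample variance-matrix estimator (our assumptions).

    1. Eigenvectors Q come from the sample variance of a fraction of the rows.
    2. The remaining rows are rotated by Q into approximately orthogonal series.
    3. Component variances are estimated on the rotated data; the estimator
       Q diag(v) Q^T is well-conditioned by construction once v is floored.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    idx = rng.permutation(n)
    part1, part2 = idx[: int(frac * n)], idx[int(frac * n):]
    S1 = np.cov(X[part1], rowvar=False)
    _, Q = np.linalg.eigh(S1)            # orthogonal, hence never ill-conditioned
    Z = X[part2] @ Q                     # approximately orthogonal series
    v = np.maximum(Z.var(axis=0, ddof=1), 1e-12)  # floor keeps conditioning sane
    return Q @ np.diag(v) @ Q.T

# Hypothetical data: 200 draws of a 5-dimensional Gaussian with decaying scales.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5)) @ np.diag([3.0, 2.0, 1.0, 0.5, 0.1])
Sigma_hat = split_sample_variance(X)
```

Because Q is exactly orthogonal and the estimated eigenvalues are strictly positive, the resulting matrix is symmetric positive definite even when there are fewer observations than dimensions, which is the design-free property the abstract emphasizes.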

Speaker: Prof. Karim Abadir [Bio sketch]

Bio Sketch. Karim Abadir is the Chair of Financial Econometrics at Imperial College London. He obtained his DPhil from Oxford University. His MA (Economics) and BA (Major in Economics, Minor in Business) are from the American University in Cairo. He started his academic career as a lecturer in Economics at Lincoln College, Oxford. He then joined the University of Exeter as a Senior Lecturer in Statistics and Econometrics, rising to the position of Reader in Econometrics. From 1996 to 2005, he held a Chair in Econometrics and Statistics at the University of York, joint between the Departments of Mathematics and Economics. He is credited with having solved in his DPhil a major long-standing problem in mathematical statistics and time series that had been open since the 1950s. More recently, he predicted the timing of the 2008 recession a year in advance, and the different timings of the recoveries in various Western countries. He is a founding member of the liberal party Al Masreyeen Al Ahrrar (translates as Free/Liberal Egyptians), established in 2011.

Affiliation: Imperial College Business School
Location: Room 218 Huxley Building
Time: 2:00pm (duration: 1 hour)

Wednesday 13 May 2015

Computational Progress in Linear and Mixed Integer Programming
Abstract. We will look at the progress in linear and mixed-integer programming software over the last 25 years. As a result of this progress, modern linear programming codes are now capable of robustly and efficiently solving instances with multiple millions of variables and constraints. With these linear programming advances as a foundation, mixed-integer programming then provides the modeling framework and solution technology that enables the overwhelming majority of present-day business planning and scheduling applications, and is the key technology behind prescriptive analytics. The performance improvements in mixed-integer programming codes over the last 25 years have been nothing short of remarkable, well beyond those of linear programming, and have transformed this technology into an out-of-the-box tool with applications to an almost unlimited range of real-world problems.

Speaker: Dr. Robert Bixby [Bio sketch]

Bio Sketch.

Affiliation: Gurobi
Location: CPSE seminar room (C615 Roderic Hill)
Time: 11:00am (duration: 1 hour)

Thursday 7 May 2015

A bilevel programming problem occurring in smart grids
Abstract. A key property qualifying a power grid as "smart" is its real-time, fine-grained monitoring capability. For this reason, a variety of monitoring equipment must be installed on the grid. We look at the problem of fully monitoring a power grid by means of Phasor Measurement Units (PMUs), which is a graph covering problem with some equipment-specific constraints. We show that, surprisingly, a bilevel formulation turns out to provide the most efficient algorithm.

Speaker: Prof. Leo Liberti [Bio sketch]

Bio Sketch. Leo Liberti obtained his B.Sc. in Mathematics and his Ph.D. in Process Systems Engineering from Imperial College. He became a professor at Ecole Polytechnique (France), then a Research Staff Member at IBM Research (USA). He was recently appointed Research Director at CNRS and part-time professor back at Ecole Polytechnique. His research interests are Mixed-Integer Nonlinear Programming and Distance Geometry.

Affiliation: Ecole Polytechnique Paris
Location: CPSE seminar room (C615 Roderic Hill)
Time: 11:00am (duration: 1 hour)

Tuesday 5 May 2015

A cycle-based formulation and valid inequalities for DC power transmission problems with switching
Abstract. It is well known that optimizing network topology by switching on and off transmission lines improves the efficiency of power delivery in electrical networks. Many authors have studied the problem of determining an optimal set of transmission lines to switch off to minimize the cost of meeting a given power demand under the direct current (DC) model of power flow. This problem is known in the literature as the Direct-Current Optimal Transmission Switching Problem (DC-OTS). Most research on DC-OTS has focused on heuristic algorithms for generating quality solutions or on the application of DC-OTS to crucial operational and strategic problems. The mathematical theory of the DC-OTS problem is less well developed. In this work, we formally establish that DC-OTS is NP-hard, even if the power network is a series-parallel graph with at most one load/demand pair. Inspired by Kirchhoff's Voltage Law, we give a cycle-based formulation for DC-OTS, and we use the new formulation to build a cycle-induced relaxation. We characterize the convex hull of the cycle-induced relaxation, and the characterization provides strong valid inequalities that can be used in a cutting-plane approach to solve the DC-OTS. We give details of a practical implementation, and we show promising computational results on standard benchmark instances.
Co-authors: This is joint work with: Burak Kocuk, Santanu Dey, Andy Sun (Georgia Tech), Hyemin Jeon, and Jim Luedtke (Wisconsin)

Speaker: Prof. Jeff Linderoth [Bio sketch]

Bio Sketch. Jeff Linderoth is a Professor in the departments of Industrial and Systems Engineering and Computer Sciences (by courtesy) at the University of Wisconsin-Madison, joining both departments in 2007. Dr. Linderoth received his Ph.D. degree from the Georgia Institute of Technology in 1998. He was awarded an Early Career Development Award from the Department of Energy, and he has won the SIAM Activity Group on Optimization Prize and the INFORMS Computing Society ICS Prize. Dr. Linderoth currently serves on the editorial boards of four journals, including Operations Research and Mathematical Programming Computation.

Affiliation: Departments of Industrial and Systems Engineering and Computer Sciences (by courtesy) - University of Wisconsin-Madison
Location: LT 145 Huxley Building
Time: 3:00pm (duration: 1 hour)

Monday 13 April 2015

Are Targets for Renewable Portfolio Standards Too Low? The Impact of Market Structure on Energy Policy (joint work with Makoto Tanaka and Yihsu Chen)
Abstract. In order to limit climate change from greenhouse gas emissions, governments have introduced renewable portfolio standards (RPS) to incentivize renewable energy production. While the response of industry to exogenous RPS targets has been addressed in the literature, setting RPS targets from a policymaker's perspective has remained an open question. Using a bi-level model, we prove that the optimal RPS target for a perfectly competitive electricity industry is higher than that for a benchmark centrally planned one. Allowing for market power by the non-renewable energy sector within a deregulated industry lowers the RPS target vis-à-vis perfect competition. Moreover, to our surprise, social welfare under perfect competition with RPS is lower than that when the non-renewable energy sector exercises market power. In effect, by subsidizing renewable energy and taxing the non-renewable sector, RPS represents an economic distortion that over-compensates damage from emissions. Thus, perfect competition with RPS results in "too much" renewable energy output, whereas the market power of the non-renewable energy sector mitigates this distortion, albeit at the cost of lower consumer surplus and higher emissions. Hence, ignoring the interaction between RPS requirements and the market structure could lead to sub-optimal RPS targets and substantial welfare losses.

Speaker: Dr Afzal Siddiqui [Bio sketch]

Bio Sketch. Afzal Siddiqui is a Senior Lecturer in the Department of Statistical Science. Previously, he was a Lecturer in Statistics at UCL (2005-2010) and a College Lecturer in the Department of Banking and Finance at University College Dublin. After having completed his Ph.D. in Industrial Engineering and Operations Research from the University of California at Berkeley in 2002, Afzal served as a Visiting Assistant Professor in the Department of Industrial Engineering and Operations Research at UC Berkeley (2002) and a Visiting Post-doctoral Researcher at the Ernest Orlando Lawrence Berkeley National Laboratory (2002-2003). In addition, he is a Professor (20% time) at the Department of Computer and Systems Sciences of Stockholm University and a Visiting Professor at the Systems Analysis Laboratory of Aalto University.

Affiliation: Department of Statistical Science - University College London
Location: Room 217 Huxley Building
Time: 2:00pm (duration: 1 hour)

Wednesday 25 March 2015

The Complexity of Primal-Dual Fixed Point Methods for Ridge Regression
Abstract. We study the ridge regression (L2-regularized least squares) problem and its dual, which is also a ridge regression problem. We observe that the optimality conditions can be formulated in several different but equivalent ways, in the form of a linear system involving a structured matrix depending on a single "stepsize" parameter which we introduce for regularization purposes. This leads to the idea of studying and comparing, in theory and practice, the performance of the fixed point method applied to these reformulations. We compute the optimal stepsize parameters and uncover interesting connections between the complexity bounds of the variants of the fixed point scheme we consider. These connections follow from a close link between the spectral properties of the associated matrices. For instance, some reformulations involve purely imaginary eigenvalues; some involve real eigenvalues, and others have all eigenvalues on a circle in the complex plane. We show that the deterministic Quartz method (a special case of the randomized dual coordinate ascent method with arbitrary sampling recently developed by Qu, Richtarik and Zhang) can be cast in our framework, and achieves the best rate in theory and in numerical experiments among the fixed point methods we study. This is joint work with Peter Richtarik (Edinburgh).
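For concreteness (a basic sketch, not the reformulations analysed in the talk): the primal ridge optimality condition (A^T A + lambda*I) x = A^T b can be attacked by a plain fixed-point (Richardson) iteration, whose rate is governed by the extreme eigenvalues of the system matrix; the instance below is hypothetical.

```python
import numpy as np

def ridge_fixed_point(A, b, lam, n_iter=500):
    """Richardson fixed-point iteration for (A^T A + lam*I) x = A^T b.

    Uses the classical optimal stepsize omega = 2/(mu + L), where mu and L are
    the extreme eigenvalues of M = A^T A + lam*I; the per-iteration contraction
    factor is (L - mu)/(L + mu), so conditioning dictates the complexity.
    """
    M = A.T @ A + lam * np.eye(A.shape[1])
    g = A.T @ b
    eigs = np.linalg.eigvalsh(M)
    omega = 2.0 / (eigs[0] + eigs[-1])
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x - omega * (M @ x - g)      # fixed point of x -> x - omega*(Mx - g)
    return x

# Hypothetical small instance; compare against the direct solve.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 8))
b = rng.standard_normal(50)
lam = 0.5
x_fp = ridge_fixed_point(A, b, lam)
x_direct = np.linalg.solve(A.T @ A + lam * np.eye(8), A.T @ b)
print(np.linalg.norm(x_fp - x_direct))
```

The spectral link the abstract mentions is visible even here: the contraction factor depends only on the eigenvalues of M, so equivalent reformulations with better-placed spectra yield better complexity bounds.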

Speaker: Prof. Ademir Ribeiro [Bio sketch]

Bio Sketch. I have been an Associate Professor at the Department of Mathematics, Federal University of Parana, Brazil, since 1992. I obtained my undergraduate degree in Mathematics at the Federal University of Parana in 1989. In 1993 I finished my MSc in Mathematics at IMPA, the National Institute for Pure and Applied Mathematics. I received my PhD in Optimization at the Federal University of Parana in 2005.
My current research interests are applied mathematics and continuous optimization: global and local convergence of algorithms such as filter and trust-region methods for nonlinear programming and convex optimization, and the complexity of direct search methods, among others. I have published papers in journals such as Applied Mathematics and Computation, Applied Mathematical Modelling, Optimization, Computational Optimization and Applications, Mathematical Programming and SIAM Journal on Optimization. I have given talks at meetings such as the Optimization Conference (Porto 2007 and Guimarães 2014), the Brazilian Workshop on Continuous Optimization (Rio de Janeiro 2009 and Florianópolis 2014) and the International Symposium on Mathematical Programming (Rio de Janeiro 2006 and Berlin 2012). I have supervised 2 PhD and 6 MSc students.
In joint work with Elizabeth Wegner Karas, I have also published a book (in Portuguese), Continuous Optimization: Theoretical and Computational Aspects, Cengage Learning, Sao Paulo, Brazil, 2013.

Affiliation: Department of Mathematics - Federal University of Parana
Location: Room 218 Huxley Building
Time: 2:00pm (duration: 1 hour)

Tuesday 17 March 2015

Bifurcation Analysis Using Complete Search Methods
Abstract. When studying a non-linear dynamic system it is important to locate and characterise its equilibrium manifold and its bifurcation regions within a pre-specified computational domain. Except for a very few simple cases, an algebraic characterisation of such a manifold is impossible. In this seminar a methodology for locating the equilibrium manifold of non-linear dynamic systems defined by parametric ODEs is presented. This methodology is based on a set-inversion approach which uses state-of-the-art bounding techniques within a complete search algorithm. The efficacy of this approach is illustrated with a challenging non-linear model of an anaerobic digestion process.
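A minimal sketch of the set-inversion idea (the toy function, its interval extension and the tolerance are my own choices, not the speaker's method): bisect the domain and keep only the boxes whose interval bound cannot rule out an equilibrium.

```python
def f_interval(lo, hi):
    # Exact interval extension of f(x) = x*x - 2 on [lo, hi]:
    # x*x is monotone on each side of zero.
    cands = [lo * lo, hi * hi]
    if lo <= 0.0 <= hi:
        cands.append(0.0)
    return min(cands) - 2.0, max(cands) - 2.0

def set_invert(lo, hi, tol):
    """Return sub-boxes of [lo, hi] that may contain a zero of f."""
    stack, keep = [(lo, hi)], []
    while stack:
        a, b = stack.pop()
        fl, fu = f_interval(a, b)
        if fl > 0.0 or fu < 0.0:
            continue                     # box certainly contains no zero
        if b - a <= tol:
            keep.append((a, b))          # small box that may contain a zero
        else:
            m = 0.5 * (a + b)
            stack += [(a, m), (m, b)]
    return keep

# Every surviving box clusters around an equilibrium, here +sqrt(2) or -sqrt(2).
boxes = set_invert(-3.0, 3.0, 1e-6)
print(len(boxes))
```

The complete-search guarantee comes from the bound test: a discarded box provably contains no equilibrium, so nothing in the domain is missed.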

Speaker: Dr. Mario Eduardo Villanueva [Bio sketch]

Bio Sketch. Mario Eduardo Villanueva (MEV) graduated with a BEng in Biochemical Engineering from the Instituto Tecnologico de Veracruz, Mexico and an MSc in Advanced Chemical Engineering with Process Systems Engineering from Imperial College London. He is currently a PhD student in the Department of Chemical Engineering, under the supervision of Dr. B. Chachuat. His PhD project is concerned with the development of methods and tools for complete search in uncertain dynamic systems.

Affiliation: Centre for Process Systems Engineering at Imperial College
Location: Room 217 Huxley Building
Time: 2:00pm (duration: 1 hour)

Friday 13 March 2015

Supply Chain Optimization - from strategic to operational decision levels
Abstract. Supply chains are complex systems that pose challenging problems, which must be addressed to improve the efficiency and responsiveness of the systems involved. Optimization is one path to follow in answering such problems, aiming to build tools that can support decision makers on problems spanning the strategic to the operational levels. The scientific community has been exploring this path, but much more is required, especially given the range of newly emerging problems. This talk characterizes supply chain decisions and presents some of the work that has been done on the optimization of supply chains, detailing in particular the work developed by the Operations and Logistics Group of the Centre for Management Studies at Instituto Superior Técnico (IST) in Lisbon. We conclude with a discussion of trends and future challenges in the area.

Speaker: Prof. Ana Barbosa-Povoa [Bio sketch]

Bio Sketch. Ana Barbosa-Póvoa obtained her PhD in Engineering from Imperial College of Science, Technology and Medicine. She is currently a Full Professor of Operations and Logistics at the Department of Management and Engineering of Instituto Superior Técnico (IST), University of Lisbon, Portugal, where she is the director of the BSc and Master Programs in Engineering and Management. She is a member of the scientific council of IST and of the University Senate, and is also the Vice-President of the Portuguese Association for Operational Research. She has acted as a reviewer for several national and international scientific boards on research projects. Her research interests are in supply chain management, where both forward and reverse structures are included, and in the design, planning and scheduling of flexible systems. Ana has published widely in these areas and supervised several Master and PhD students. In 2008 she received the scientific award of the Technical University of Lisbon in the scientific area of Industrial Management.

Affiliation: University of Lisbon
Location: CPSE seminar room (C615 Roderic Hill)
Time: 3:00pm (duration: 1 hour)

Tuesday 3 March 2015

Formal Proofs for Nonlinear Optimization
Abstract. We present a formally verified global optimization framework. Given a semialgebraic or transcendental function f and a compact semialgebraic domain K, we use the nonlinear maxplus template approximation algorithm to provide a certified lower bound of f over K. This algorithm allows one to bound, in a modular way, some of the constituents of f by suprema of quadratic forms with a well-chosen curvature. Thus, we reduce the initial goal to a hierarchy of semialgebraic optimization problems, solved by semidefinite relaxations.
Our implementation tool interleaves semialgebraic approximations with sums of squares witnesses to form certificates. It is interfaced with Coq and thus benefits from the trusted arithmetic available inside the proof assistant. This feature is used to produce, from the certificates, both valid underestimators and lower bounds for each approximated constituent. The application range for such a tool is widespread; for instance Hales' proof of Kepler's conjecture yields thousands of multivariate transcendental inequalities. We illustrate the performance of our formal framework on some of these inequalities as well as on examples from the global optimization literature. http://cas.ee.ic.ac.uk/people/vmagron/slides/quads.pdf

Speaker: Dr. Victor Magron [Bio sketch]

Bio Sketch. Victor graduated from the Ecole Centrale Paris Engineering School in 2010, while also receiving an MSc from the Department of Systems Innovation, Tokyo University (double diploma). In 2013, he received his PhD in Computer Science at INRIA Saclay, Ecole Polytechnique. In 2014, he was a postdoctoral fellow in the MAC team at LAAS in Toulouse, France. He is currently a Research Assistant at Imperial College in the Circuits and Systems group, Department of Electrical and Electronic Engineering.

Affiliation: Department of Electrical and Electronic Engineering - Imperial College London
Location: Room 217 Huxley Building
Time: 2:00pm (duration: 1 hour)

Wednesday 11 February 2015

Worst-case complexity of nonlinear optimization: Where do we stand?
Abstract. We review the available results on the evaluation complexity of algorithms using Lipschitz-continuous Hessians for the approximate solution of nonlinear and potentially nonconvex optimization problems. Here, evaluation complexity is a bound on the largest number of evaluations of the problem functions (objective, constraints) and their derivatives that are needed before an approximate first-order critical point of the problem is guaranteed to be found. We start by considering the unconstrained case and examine classical methods (such as Newton's method) and the more recent ARC2 method, which we show is optimal under reasonable assumptions. We then turn to constrained problems and first analyze the case of convex constraints, showing that a suitable adaptation ARC2CC of the ARC2 approach also possesses remarkable complexity properties. We finally extend the results obtained in these simpler settings to the general equality- and inequality-constrained nonlinear optimization problem by constructing a suitable ARC2GC algorithm whose evaluation complexity exhibits the same remarkable properties.

Speaker: Prof. Philippe Toint [Bio sketch]

Bio Sketch. Philippe L. Toint (born 1952) received his degree in Mathematics from the University of Namur (Belgium) in 1974 and his Ph.D. in 1978 under the guidance of Prof. M.J.D. Powell. He was appointed lecturer at the University of Namur in 1979, where he became associate professor in 1987 and full professor in 1993. Since 1979, he has been the co-director of the Numerical Analysis Unit and director of the Transportation Research Group in this department. He was in charge of the University Computer Services from 1998 to 2000 and director of the Department of Mathematics from 2006 to 2009. He currently serves as Vice-rector for Research and IT for the university. His research interests include numerical optimization, numerical analysis and transportation research. He has published four books and more than 280 papers and technical reports. Elected a SIAM Fellow (2009), he was also awarded the Beale-Orchard-Hays Prize (1994, with Conn and Gould) and the Lagrange Prize in Continuous Optimization (2006, with Fletcher and Leyffer). He is the past Chairman (2010-2013) of the Mathematical Programming Society, the international scientific body gathering most researchers in mathematical optimization world-wide. Married and the father of two girls, he is a keen music and poetry lover as well as an enthusiastic scuba-diver.

Affiliation: Department of Mathematics – Université de Namur
Location: CPSE seminar room (C615 Roderic Hill)
Time: 11:00am (duration: 1 hour)

Friday 6 February 2015

McCormick Relaxations: Convergence Rate and Extension to Multivariate Outer Functions
Abstract. Optimization is a widely used tool in process systems engineering, but the optimization problems often have multiple suboptimal local minima. Deterministic global optimization algorithms can solve such problems, typically employing convex/concave relaxations of the objective and constraints. Several methods have been proposed for the construction of convergent relaxations, including the McCormick relaxations. These provide the framework for the computation of convex relaxations of composite functions. McCormick's relaxations are clearly a very important tool, but they have the limitation of allowing only univariate composition. Although most functions can be decomposed in a way that uses only univariate functions as building blocks, this often results in weak relaxations. Moreover, McCormick did not provide results for the convergence rate of these relaxations. We propose a reformulation of McCormick's composition theorem which, while equivalent to the original, suggests a straightforward generalization to multivariate outer functions. In addition to extending the framework, the multivariate McCormick relaxation is a useful tool for the proof of relaxations: by direct application to the product, division and minimum/maximum of two functions, we obtain improved relaxations compared with univariate McCormick. Furthermore, we generalize the theory for the computation of subgradients to the multivariate case, envisioning practical methods that utilize the framework. Further, we extend the notion of convergence order from interval extensions to convex relaxations in the pointwise metric and the Hausdorff metric. We develop theory for the McCormick relaxations by establishing convergence rules for the addition, multiplication and composition operations. The convergence order of the composite function depends on the convergence order of the relaxations of the factors.
No improvement in the order of convergence compared to that of the underlying bound calculation, e.g., via interval extensions, can be guaranteed unless the relaxations of the factors have pointwise convergence of high order, in which case at least quadratic convergence order can be guaranteed. Additionally, the McCormick relaxations are compared with the alphaBB relaxations by Floudas and coworkers, which also guarantee quadratic pointwise convergence. Finally, the convergence order of McCormick-Taylor models is addressed. Illustrative and numerical examples are given and hybrid methods are discussed. The implications of the results are discussed for practical bound calculations as well as for convex/concave relaxations of factors commonly found in process systems engineering models.
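The basic building block behind such relaxations can be illustrated on the bilinear term (a toy example of mine, not taken from the talk): the classical McCormick convex underestimator of x*y on a box is the pointwise maximum of two supporting planes, and it is exact at the corners of the box.

```python
def mccormick_under(x, y, xL, xU, yL, yU):
    # Maximum of the two classical underestimating planes of x*y
    # on the box [xL, xU] x [yL, yU].
    return max(xL * y + yL * x - xL * yL,
               xU * y + yU * x - xU * yU)

# Check on a grid: never above the true product, and tight at a corner.
xL, xU, yL, yU = -1.0, 2.0, 0.5, 3.0
gap = min(x * y - mccormick_under(x, y, xL, xU, yL, yU)
          for x in [xL + i * (xU - xL) / 20 for i in range(21)]
          for y in [yL + j * (yU - yL) / 20 for j in range(21)])
print(gap >= 0.0, mccormick_under(xL, yL, xL, xU, yL, yU) == xL * yL)
```

Underestimation follows from x*y minus the two planes equalling (x - xL)(y - yL) and (x - xU)(y - yU), both nonnegative on the box; the talk's multivariate composition rule generalizes exactly this kind of construction.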

Speaker: Prof. Alexander Mitsos [Bio sketch]

Bio Sketch. Alexander Mitsos is a Full Professor (W3) at RWTH Aachen University and the Director of the Laboratory for Process Systems Engineering (AVT.SVT), comprising 40 research and administrative staff. Mitsos received his Dipl-Ing from the University of Karlsruhe in 1999 and his Ph.D. from MIT in 2006, both in Chemical Engineering. Prior appointments include military service, freelance engineering, involvement in a start-up company, a junior research group leader position in the Aachen Institute of Computational Engineering Science, and the Rockwell International Assistant Professorship at MIT. Mitsos has over 60 publications in peer-reviewed journals and has received a number of awards. His research focuses on the optimization of energy and chemical systems and the development of enabling numerical algorithms.

Affiliation: Laboratory for Process Systems Engineering - RWTH Aachen University
Location: CPSE seminar room (C615 Roderic Hill)
Time: 3:00pm (duration: 1 hour)

Wednesday 4 February 2015

Bayesian Optimization
Speaker: Dr. Mike Osborne
Affiliation: Department of Engineering Science - University of Oxford
Location: Room 145 Huxley
Time: 2:00pm (duration: 2 hours)

Tuesday 13 January 2015

Markov Random Field Optimization in Medical Image Analysis
Abstract. Markov Random Fields (MRFs) are widely used for modelling image analysis problems, such as segmentation, denoising, and motion estimation. In this presentation, I will provide a brief overview of different applications of MRFs and some of the commonly used energy models and optimization methods. In particular, recent advances in higher-order MRF optimization seem a promising direction for modelling more complex and sophisticated energies. However, computational performance could be a limiting factor when scaling those models to large image databases. A potential solution could be to employ efficient discrete-continuous optimization methods.
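To make the energy-minimization view concrete (the chain model, the costs and the smoothing weight below are my own toy example, not from the talk): exact MAP labelling of a 1D chain MRF with a Potts pairwise term can be computed by dynamic programming, whereas the 2D grid models used in imaging generally require the approximate solvers the talk surveys.

```python
def chain_map(unary, lam):
    """Exact MAP labels for a 1D chain MRF: unary[i][l] is the data cost of
    label l at site i; a Potts penalty lam is paid when neighbours differ."""
    n, k = len(unary), len(unary[0])
    cost = list(unary[0])
    back = []
    for i in range(1, n):
        prev, cost, ptr = cost, [], []
        for l in range(k):
            best = min(range(k), key=lambda m: prev[m] + (0 if m == l else lam))
            cost.append(unary[i][l] + prev[best] + (0 if best == l else lam))
            ptr.append(best)
        back.append(ptr)
    l = min(range(k), key=lambda m: cost[m])
    labels = [l]
    for ptr in reversed(back):           # backtrack the optimal labelling
        l = ptr[l]
        labels.append(l)
    return labels[::-1]

# A noisy binary signal: smoothing flips the isolated outlier at site 2.
unary = [[0, 2], [0, 2], [2, 0], [0, 2], [2, 0], [2, 0]]
print(chain_map(unary, lam=1.0))  # -> [0, 0, 0, 0, 1, 1]
```

With lam = 0 the result is just the per-site argmin; increasing lam trades data fidelity for smoothness, which is the core tension in MRF-based denoising and segmentation.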

Speaker: Dr. Ben Glocker [Bio sketch]

Bio Sketch. Ben Glocker is a Lecturer in Medical Image Computing at the Department of Computing, Imperial College London. Before joining Imperial in October 2013, he worked as a post-doctoral researcher in the Machine Learning & Perception Group at Microsoft Research Cambridge, and was a Research Fellow of Darwin College, University of Cambridge, from 2010 to 2012. Ben received his doctoral degree from the Technical University of Munich in 2011.
Ben is a member of the Biomedical Image Analysis group. His research focus is on advanced methods and tools for biomedical image computing and computer vision. In particular, he is interested in semantic understanding and automatic analysis of images using machine learning techniques.

Affiliation: Department of Computing - Imperial College London
Location: Room 140 Huxley
Time: 2:00pm (duration: 1 hour)

Thursday 11 December 2014

Static and Dynamic Assortment Optimization
Abstract. Assortment optimization is the problem of offering an optimal assortment from a ground set of products. Customers choose one of the offered products or decide not to purchase at all. The dynamic version of this problem, with finite inventories and a finite time horizon, is a model for revenue management. Both problems are difficult when the customer population is heterogeneous (multiple segments). In this talk we present our research on computational approaches for static and dynamic optimization under a choice model as well as a matching model.
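The abstract does not name a specific choice model; as a hypothetical illustration (model, revenues and preference weights are my own), under a multinomial logit model the static problem can be solved by brute force for small ground sets:

```python
from itertools import combinations

def best_assortment(revenue, v):
    """Brute-force static assortment optimization under MNL choice: a customer
    buys product i in assortment S with probability v[i] / (1 + sum of v over S),
    and buys nothing with the remaining probability."""
    n = len(revenue)
    best, best_rev = (), 0.0
    for r in range(1, n + 1):
        for S in combinations(range(n), r):
            denom = 1.0 + sum(v[i] for i in S)
            rev = sum(revenue[i] * v[i] for i in S) / denom
            if rev > best_rev:
                best, best_rev = S, rev
    return best, best_rev

S, rev = best_assortment([10, 8, 6], [1.0, 1.5, 2.0])
print(S, round(rev, 4))  # offering only products 0 and 1 is optimal here
```

Note that adding the cheap product 2 dilutes demand away from the high-revenue items; enumeration is exponential in the number of products, which is why structural results for specific choice models matter in practice.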

Speaker: Prof. Kalyan Talluri [Bio sketch]

Bio Sketch. Kalyan Talluri is Professor of Operations Management at Imperial College London Business School, London. Previously he was an ICREA Research Professor in the Department of Economics and Business at the Universitat Pompeu Fabra in Barcelona. He received his Master's degree from Purdue University and a Ph.D. in Operations Research from MIT. He has taught at the Kellogg School of Management, Northwestern University; the Indian School of Business; the Tuck School of Business, Dartmouth; IESE; and INSEAD.
His research interests are in pricing of consumer goods and services and the operational implementation of pricing tactics. He has published in OR and Management journals and is the co-winner of the INFORMS Lanchester Prize for the year 2005. He is the co-author of the book “Theory and Practice of Revenue Management”.

Affiliation: Imperial College London Business School
Location: LT 311 Huxley Building
Time: 2:00pm (duration: 1 hour)

Thursday 30 October 2014

Visual-Inertial Odometry (VIO) Using Nonlinear Optimization
Abstract. Visual-inertial fusion for state estimation and mapping has recently drawn increased attention. The sensing modalities offer compelling complementary characteristics, since inertial measurements provide strong short-term temporal correlations, while visual correspondences in images form spatial (relative pose) correlations. Furthermore, inertial MEMS sensors have become increasingly small, cheap, and accurate. Traditionally, the visual-inertial odometry problem has been addressed with filtering formulations; in this seminar, however, an approach using nonlinear optimization is presented, inspired by recent work in the computer vision community solving large reconstruction problems using optimization. The full batch VIO problem quickly becomes intractable, yet mobile robotics needs to comply with real-time constraints. To this end, a framework using partial linearization of error terms along with marginalization (variable elimination) is suggested that allows for a bounded optimization window using the notion of keyframes, without compromising the inherent sparsity of the problem. We will go through the necessary mathematical machinery and present a quantitative evaluation as well as qualitative results from on-board Unmanned Aerial Systems (UAS).

Speaker: Dr. Stefan Leutenegger [Bio sketch]

Bio Sketch. Stefan Leutenegger obtained his PhD from ETH Zurich (Autonomous Systems Lab, ASL) in 2014, where he worked on solar airplane design from concept to realization and flight testing, as well as on related algorithms for navigation close to the terrain. His activities covered a broad range from structural, electrical and software engineering to the development of highly efficient, robust, and accurate algorithms for multi-sensor state estimation and mapping.
As part of his PhD work, Stefan spent three months in 2012 at the robotics company Willow Garage in Menlo Park, California, under the supervision of Dr. Kurt Konolige and Dr. Vincent Rabaud. Besides his involvement in engineering and science activities, Stefan held the ASL-internal lead in the European FP7 projects “ICARUS” and “SHERPA” from the proposal-writing phase onwards. He has furthermore been involved in BSc and MSc student project supervision, as well as in teaching part of ASL’s Master course on Unmanned Aerial Systems. In October 2014, Stefan started as a lecturer in robotics at Imperial College London, working in Andy Davison's Dyson Robotics Laboratory.

Affiliation: Dyson Robotics Lab - Imperial College London
Location: Room 217 Huxley Building
Time: 2:00pm (duration: 1 hour)

Friday 17 October 2014

Bayesian Optimization for Learning Robot Control
Abstract. Statistical machine learning has been a promising direction in control and robotics for more than a decade, since data-driven learning allows us to reduce the amount of engineering knowledge that is otherwise required. On real systems, such as robots, however, running many experiments can be impractical and time-consuming.
I will discuss Bayesian optimization, an approach to controller learning that is based on efficient global optimization of black-box (utility) functions, in the context of robot learning. I will demonstrate that this kind of learning is (a) practical and (b) very fast, i.e., it requires only a few experiments, to learn good controller parameterizations for a bipedal robot.

Speaker: Dr. Marc Deisenroth [Bio sketch]

Bio Sketch. Marc is PI of the SML group and an Imperial College Junior Research Fellow (tenure-track) with interests in statistical machine learning, robotics, control, time-series analysis, and signal processing. Marc joined the Department of Computing as a Research Fellow in September 2013. From December 2011 to August 2013 he was a Senior Research Scientist & Group Leader (Learning for Control) at TU Darmstadt (Germany). Marc is still adjunct researcher at TU Darmstadt. From February 2010 to December 2011, he was a full-time Research Associate at the University of Washington (Seattle). Marc completed his PhD at the Karlsruhe Institute for Technology (Germany) in 2009. He conducted his PhD research at the Max Planck Institute for Biological Cybernetics (2006–2007) and at the University of Cambridge (2007–2009).
Marc’s research interests center around methodologies from modern Bayesian machine learning and their application to autonomous control and robotic systems. Marc’s goal is to increase the level of autonomy in robotic and control systems by modeling and accounting for uncertainty in a principled way. Potential applications include intelligent prostheses, autonomous robots, and healthcare assistants.

Affiliation: Department of Computing - Imperial College London
Location: Room 217 Huxley Building
Time: 3:00pm (duration: 1 hour)

Wednesday 9 July 2014

High-precision solutions for linear programs over the rational numbers
Speaker: Ambros Gleixner
Affiliation: Zuse Institute Berlin
Location: CPSE seminar room (C615 Roderic Hill)
Time: 3:00pm (duration: 1 hour)

Thursday 3 July 2014

Medium-term planning for thermal electricity production
Abstract. We present a mid-term planning model for thermal power generation which is based on multistage stochastic optimization and involves stochastic electricity spot prices, a mixture of fuels with stochastic prices, the effect of CO2 emission prices, and various types of further operating costs. Going from data to decisions, the first goal was to estimate simulation models for the various commodity prices. We apply geometric Brownian motions with jumps to model gas, coal, oil and emission allowance spot prices. Electricity spot prices are modeled by a regime-switching approach which takes into account seasonal effects and spikes. Given the estimated models, we simulate scenario paths and then use a multiperiod generalization of the Wasserstein distance to construct the scenario trees used in the optimization model. Finally, we solve a 1-year planning problem for a fictitious configuration of thermal units producing against the markets. We use the implemented model to demonstrate the effect of CO2 prices on cumulative emissions and to apply the indifference pricing principle to simple electricity delivery contracts.
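The first modelling step can be sketched as follows (all parameters, the seed, and the Bernoulli approximation of jump arrivals over a small time step are my own illustrative choices, not the speaker's calibration):

```python
import math
import random

def gbm_jump_path(s0, mu, sigma, lam, jump, n, dt, rng):
    """One path of a geometric Brownian motion with jumps: log-Euler step plus
    a jump of fixed log-size, arriving with probability lam*dt per step."""
    s, path = s0, [s0]
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        s *= math.exp((mu - 0.5 * sigma**2) * dt + sigma * math.sqrt(dt) * z)
        if rng.random() < lam * dt:      # Bernoulli approximation of a jump
            s *= math.exp(jump)
        path.append(s)
    return path

rng = random.Random(7)
path = gbm_jump_path(100.0, 0.05, 0.2, 2.0, 0.3, 250, 1 / 250, rng)
print(len(path))  # 251: the initial price plus one year of daily steps
```

Scenario paths like this one would then be reduced, via the Wasserstein-distance-based discretization the abstract mentions, to the moderate-size trees the optimization model can handle.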

Speaker: Dr. Florentina Paraschiv [Bio sketch]

Bio Sketch. Dr. Florentina Paraschiv is an Assistant Professor at the University of St. Gallen, where she works at the Institute for Operations Research and Computational Finance (ior/cf-HSG). Her research interests include the econometrics of electricity, oil, and gas markets, quantification of risk in the electricity business, optimization of power production, and renewable energy.

Affiliation: Institute for Operations Research and Computational Finance - University of St. Gallen
Location: Room 218 Huxley Building
Time: 2:00pm (duration: 1 hour)

Friday 21 March 2014

Multi-stage Decision Optimization under Uncertainty
Abstract. In this talk we will review and discuss various aspects of multi-stage decision optimization problems under uncertainty - from problem formulation to modeling language support as well as the numerical solution of optimization problems. A stylized example is used to provide a complete walk-through using open-source software. In this presentation, the numerical solution is based on techniques from the field of multi-stage stochastic programming. A crucial issue is the proposed simplification of current modeling language approaches to annotate multi-stage optimization programs in order to e.g. allow for creating multi-stage optimization model libraries, which are independent of the underlying solution method. Selected multi-stage models from the area of Finance and Energy will be presented.

Speaker: Dr. Ronald Hochreiter [Bio sketch]

Bio Sketch. Ronald Hochreiter is a Docent at the Department of Finance, Accounting and Statistics at the WU Vienna University of Economics and Business. He loves optimization under uncertainty and enjoys thinking about how to simplify the modeling process of complex decision problems, as well as how to implement optimization modeling tools for specific application areas. Lately, he has also been conducting classical business analytics tasks for data-related problems from areas as diverse as finance, public policy, and health care.

Affiliation: WU Vienna University of Economics and Business
Location: Room 218 Huxley Building
Time: 2:00pm (duration: 1 hour)

Tuesday 28 January 2014

Revolutionizing Airline Planning & Scheduling with the Invention of Unified Optimization
Abstract. Airline planning and scheduling are complex mixed-integer problems. To reduce complexity, each of them has traditionally been split into stages. For example, scheduling is split into fleet assignment, aircraft scheduling, and crew scheduling, which are solved sequentially one after the other. These stages are, however, interdependent, so airlines have to deal with suboptimal results and constraint violations. Decisal has invented algorithms for Unified Optimization of these stages, greatly improving profit optimization and constraint satisfaction. The methodology for Unified Optimization includes several advancements of Benders decomposition. In some cases, both the Benders master problem and the Benders subproblem are themselves built from improved column generation algorithms.

Speaker: Dr. Nikolaos Papadakos [Bio sketch]

Bio Sketch. Dr Nikos (Nikolaos) Papadakos is the research and development director of Decisal. Prior to that he worked as a research associate at Imperial College London where he also received a PhD in operations research and an MSc in advanced computing. He also holds a BSc in mathematics from the University of Athens. Finally, his work experience includes the Bank of Attica and Biomex Epe, in software development, sales, and customer support.

Affiliation: Decisal Ltd
Location: Room 345 Huxley Building
Time: 2:00pm (duration: 1 hour)

Wednesday 15 January 2014

Numerical Aggregation of Trust Evidence: Its Analysis and Optimisation
Abstract. We have designed a language in which modellers can specify trust and distrust signals that, when present, generate a numerical score, and in which such scores can be combined with aggregation operators to express risk postures for trust-mediated interactions in IT systems. Signals may stem from heterogeneous sources such as geographical information, reputation, and threat levels. Aggregated scores then inform decisions by generating conditions that compare scores to threshold values of trustworthiness. We have developed a generic approach to analysing such conditions by automatically converting them into code for the Satisfiability Modulo Theories solver Z3 from Microsoft Research. This allows us to analyse automatically, e.g., whether a condition is sensitive to an increase of a trustworthiness threshold by a specified amount. We would now like to understand better whether such analysis questions can be expressed in known models used in optimisation. For example, let a condition say that the aggregated trust score has to be above 0.5. Solvers such as Z3 seem unable to compute the largest interval containing 0.5 such that any value in that interval could be chosen as the trustworthiness threshold without changing the behaviour of the condition. On the other hand, Z3 is perfect for reflecting logical dependencies or inconsistencies between (dis)trust signals that occur in such conditions and are quantifier-free formulas of first-order logic. A prototype implementation of the tool is available at http://delight.doc.ic.ac.uk:55555

Speaker: Prof. Michael Huth and Dr. Jim Huan-Pu Kuo
Affiliation: Department of Computing - Imperial College London
Location: Room 217 Huxley Building
Time: 3:15pm (duration: 1 hour)

Thursday 26 September 2013

Risk measures and time consistency
Abstract. In this talk we discuss the basic theory of risk measures and risk-averse optimization. Starting with the pioneering paper by Artzner et al., "Coherent measures of risk" (1999), this theory has gone through rapid development in recent years. We pay special attention to the multistage setting and, in particular, discuss the involved concepts of time consistency.

Speaker: Prof. Alexander Shapiro [Bio sketch]

Bio Sketch. Alexander Shapiro is a Professor in the School of Industrial and Systems Engineering at Georgia Institute of Technology. He has published more than 120 research articles in peer-reviewed journals and is a coauthor of several books (see below).
His research is widely cited and he was listed as an ISI Highly Cited Researcher in 2004 (ISI = Institute for Scientific Information); links to his research ID: http://www2.isye.gatech.edu/~ashapiro/research.html. He is on the editorial boards of several professional journals, such as Mathematics of Operations Research, ESAIM: Control, Optimization and Calculus of Variations, and Computational Management Science. He was an area editor (Optimization) of Operations Research, and is currently the Editor-in-Chief of the Mathematical Programming journal. He has given numerous invited keynote and plenary talks, including an invited section talk (section Control Theory & Optimization) at the International Congress of Mathematicians 2010, Hyderabad, India: http://www.icm2010.in/scientific-program/invited-speakers.
Published Books:
1. Rubinstein, R.Y. and Shapiro, A., Discrete Event Systems: Sensitivity Analysis and Stochastic Optimization by the Score Function Method, John Wiley and Sons, New York, 1993.
2. Bonnans, J. F. and Shapiro, A., Perturbation Analysis of Optimization Problems, Springer, New York, 2000.
3. Handbook on Stochastic Programming, edited by: A. Ruszczynski and A. Shapiro, North-Holland Publishing Company, Amsterdam, 2003.
4. Shapiro, A., Dentcheva, D. and Ruszczynski, A., Lectures on Stochastic Programming: Modeling and Theory, SIAM, Philadelphia, 2009.

Affiliation: School of Industrial and Systems Engineering - Georgia Institute of Technology
Location: Room 145 Huxley Building
Time: 2:00pm (duration: 1 hour)

Thursday 22 August 2013

Added value of scenario tree based stochastic optimization in long and medium term planning of hydro power systems
Abstract. The changes in the dynamics of power prices in Germany over the last few years have implied significant decreases in the revenues of pumped-storage hydro systems. The earnings from daily peak/offpeak price spreads have declined, while at the same time the long-term management of large reservoirs has become even more difficult due to hardly predictable power price and inflow evolutions. Stochastic optimization has been applied to hydro system management for a long time, but only recently have operators come to expect additional benefits, given the changed and even more uncertain power market situation. Besides stochastic dual dynamic programming, scenario tree based methods are now also applied, which so far have hardly been used for complex hydro systems due to the curse of dimensionality in stochastic programming. We apply efficient discretization methods for the generation of multi-dimensional scenario trees of power prices and inflows, based on moment matching and multinomial distributions. Furthermore, with increased hardware efficiency it is now possible to set up and solve scenario tree based stochastic optimization models even for complex pumped-storage systems. Ex post analyses have shown that an outperformance over deterministic optimization in the range of 1-5 percent is achievable.

Speaker: Dr. Georg Ostermaier [Bio sketch]

Bio Sketch. Georg Ostermaier is founder, owner and managing director of Decision Trees GmbH, a Munich-based firm focussing on the practical application of mathematical and stochastic programming in the European energy industry. Georg received his graduate degree in Electrical Engineering from the Technical University of Munich and his PhD in Operations Research from the University of St. Gallen, Switzerland. Since 2006 he and his team have been developing stochastic optimization software systems for thermal power generation portfolios, hydro power generation systems, gas storage and gas contract valuation, and gas procurement portfolio optimisation. Decision Trees GmbH today has a solid customer base in Germany, Austria, Switzerland, Norway and the United Kingdom, and has proven in practice to contribute to enhanced profits for energy producers through stochastic optimization.

Affiliation: tbd
Location: Room 217 Huxley Building
Time: 11:00am (duration: 1 hour)

Monday 12 August 2013

Managing with Incomplete Inventory Information
Abstract. A critical assumption in the vast literature on inventory management has been that the current level of inventory is known to the decision maker. Some of the most celebrated results such as the optimality of base-stock policies have been obtained under this assumption. Yet it is often the case in practice that the decision makers have incomplete or partial information about their inventory levels. The reasons for this are many: Inventory records or cash register information differ from actual inventory because of a variety of factors including transaction errors, theft, spoilage, misplacement, unobserved lost demands, and information delays. As a result, what are usually observed are some events or surrogate measures, called signals, related to the inventory level. These relationships can provide the distribution of current inventory levels. Therefore, the system state in the inventory control problems is not the current inventory level, but rather its distribution given the observed signals. Thus, the analysis for finding optimal production or ordering policies takes place generally in the space of probability distributions. The purpose of this talk is to review recent developments in the analysis of inventory management problems with incomplete information.
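The belief-state idea can be illustrated with a toy Bayes update (the discrete prior and the "demand was served" signal below are hypothetical examples, not from the talk): the system state is a probability distribution over inventory levels, and each observed signal reweights that distribution.

```python
from fractions import Fraction

def update_belief(prior, likelihood):
    """Bayes' rule: posterior(x) is proportional to prior(x) * P(signal | inventory = x)."""
    post = {x: p * likelihood(x) for x, p in prior.items()}
    z = sum(post.values())
    return {x: p / z for x, p in post.items()}

# Prior belief over the true inventory level (records are unreliable)
prior = {0: Fraction(1, 4), 1: Fraction(1, 4), 2: Fraction(1, 4), 3: Fraction(1, 4)}

# Signal: a customer asked for one unit and was served,
# so the true inventory level must have been at least 1.
served = lambda x: 1 if x >= 1 else 0
posterior = update_belief(prior, served)
```

The ordering decision is then made against `posterior` rather than a known inventory level, which is why the analysis moves to the space of probability distributions.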

Speaker: Prof. Suresh P. Sethi [Bio sketch]

Bio Sketch. Suresh P. Sethi is Eugene McDermott Professor of Operations Management and Director of the Center for Intelligent Supply Networks at The University of Texas at Dallas. He has written 7 books and published nearly 400 research papers in the fields of manufacturing and operations management, finance and economics, marketing, and optimization theory. He teaches a course on optimal control theory/applications and organizes a seminar series on operations management topics. He initiated and developed the doctoral programs in operations management at both University of Texas at Dallas and University of Toronto. He serves on the editorial boards of several journals including Production and Operations Management and SIAM Journal on Control and Optimization. He was named a Fellow of The Royal Society of Canada in 1994. Two conferences were organized and two books edited in his honor in 2005-6. Other honors include: IEEE Fellow (2001), INFORMS Fellow (2003), AAAS Fellow (2003), POMS Fellow (2005), IITB Distinguished Alum (2008), SIAM Fellow (2009), POMS President (2012).

Affiliation: Center for Intelligent Supply Networks - University of Texas at Dallas
Location: Room 217-218 Huxley Building
Time: 2:00pm (duration: 1 hour)

Thursday 8 August 2013

Risk neutral and risk averse approaches to multistage stochastic programming with applications to hydrothermal operation planning problems
Abstract. This talk gives an overview of risk neutral and risk averse approaches to multistage stochastic programming with applications to hydrothermal operation planning problems. In the first part of this talk, we discuss risk neutral and risk averse approaches using coherent risk measures to multistage (linear) stochastic programming problems based on the Stochastic Dual Dynamic Programming (SDDP) method. We give a general description of the algorithm and present computational studies related to planning of the Brazilian interconnected power system. In the second part of this talk, we discuss multistage programming with the data process subject to uncertainty. We consider a situation where the data process can be naturally separated into two components, one of which can be modeled as a random process with a specified probability distribution, while the other is treated from a robust (worst-case) point of view. We formulate this in a time consistent way and derive the corresponding dynamic programming equations. In order to solve the obtained multistage problem we develop a variant of the SDDP method. We give a general description of the algorithm and present computational studies related to planning of the Brazilian interconnected power system.

Speaker: Dr. Wajdi Tekaya [Bio sketch]

Bio Sketch. Wajdi Tekaya is currently an HPCfinance postdoctoral fellow at Cambridge Systems Associates. He obtained his B.S. in Industrial Engineering from Tunisia Polytechnic School, his M.S. in Operations Research from Paris IX University and the Georgia Institute of Technology, and his Ph.D. in Operations Research from the Georgia Institute of Technology. His research interests are in computational approaches to stochastic programming with applications in energy and finance.

Affiliation: Decision Trees GmbH
Location: Room 217 Huxley Building
Time: 2:00pm (duration: 1 hour)

Wednesday 3 July 2013

Interdiction Games on Markovian PERT Networks
Abstract. In a stochastic interdiction game a proliferator aims to minimize the expected duration of a nuclear weapons development project, while an interdictor endeavors to maximize the project duration by delaying some of the project tasks. We formulate static and dynamic versions of the interdictor's decision problem where the interdiction plan is either pre-committed or adapts to new information revealed over time, respectively. The static model gives rise to a stochastic program, while the dynamic model is formalized as a multiple optimal stopping problem in continuous time and with decision-dependent information. Under a Markov assumption, we prove that the static model reduces to a mixed-integer linear program, while the dynamic model reduces to a finite Markov decision process in discrete time that can be solved via efficient value iteration. We then generalize the dynamic model to account for uncertainty in the outcomes of the interdiction actions. We also discuss a crashing game where the proliferator can use limited resources to expedite tasks so as to counterbalance the interdictor's efforts. The resulting problem can be formulated as a robust Markov decision process.
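The value-iteration step mentioned for the dynamic model can be sketched generically. The snippet below is a toy two-state, cost-minimizing MDP (invented for illustration, not the interdiction game itself): state 1 is absorbing and free, state 0 incurs unit cost until the task finishes.

```python
import numpy as np

def value_iteration(P, c, gamma=0.95, tol=1e-8):
    """Value iteration for a finite MDP with transition tensor
    P[a, s, s'] and stage costs c[a, s]; minimizes expected
    discounted cost and returns (values, greedy policy)."""
    n = P.shape[1]
    v = np.zeros(n)
    while True:
        q = c + gamma * P @ v          # q[a, s] = c[a, s] + gamma * sum_s' P[a, s, s'] v[s']
        v_new = q.min(axis=0)
        if np.max(np.abs(v_new - v)) < tol:
            return v_new, q.argmin(axis=0)
        v = v_new

# Two states (0 = task running, 1 = finished/absorbing), two actions
P = np.array([[[0.5, 0.5], [0.0, 1.0]],    # action 0: finish w.p. 0.5
              [[0.8, 0.2], [0.0, 1.0]]])   # action 1: slower progress
c = np.array([[1.0, 0.0],                  # per-stage cost in each state
              [1.0, 0.0]])
v, policy = value_iteration(P, c)
```

The minimizer naturally picks the faster action in state 0; the talk's model works with a richer state space and an interdictor who maximizes rather than minimizes.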

Speaker: Eli Gutin [Bio sketch]

Bio Sketch. Eli Gutin completed his MEng in Computing at Imperial College in 2012. His prize-winning final year project on "Interdiction Games on Markovian PERT networks" was supervised by Daniel Kuhn & Wolfram Wiesemann. A year later, it was submitted for publication and is the subject of today's talk.

Affiliation: School of Industrial and Systems Engineering - Georgia Institute of Technology
Location: Room 218 Huxley Building
Time: 3:00pm (duration: 1 hour)

Thursday 20 June 2013

Robust Data-Driven Approach in Decision Making Under Uncertainty
Abstract. We investigate a robust data-driven approach to stochastic optimization problems where partial knowledge of the exogenous uncertainties is available to the decision maker. In contrast to the traditional model-based approach, a data-driven approach requires no assumptions on the underlying distribution of the exogenous uncertainties. Estimation of the conditional expectation is achieved using a kernel regression scheme which evaluates the cost function solely at historical observations. If only sparse historical observations are available, however, the estimation is inaccurate and the resulting decision performs poorly in out-of-sample tests. To alleviate this unfavourable outcome, we 'robustify' the decision against estimation errors by utilizing techniques from robust optimization. We show that the arising min-max problem can be reformulated as a tractable conic program. We further extend the proposed approach to multi-period settings and introduce an approximate dynamic programming framework that retains the tractability of the formulation and that is amenable to efficient parallel implementation. The proposed approach is tested across several application domains and is shown to outperform various non-robust schemes in terms of standard statistical benchmarks.
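A minimal sketch of the kernel regression step (Nadaraya-Watson form, with an invented toy data set; the talk's actual scheme and kernel choice may differ): the conditional expectation is estimated as a weighted average of historical costs, with weights decaying in the distance between the query point and each observation.

```python
import numpy as np

def kernel_regression(x_hist, y_hist, x_query, bandwidth):
    """Nadaraya-Watson estimate of E[Y | X = x_query] using a
    Gaussian kernel over historical observations only."""
    w = np.exp(-0.5 * ((x_hist - x_query) / bandwidth) ** 2)
    return np.sum(w * y_hist) / np.sum(w)

# Historical (covariate, cost) observations; true relation is y = 2x
x_hist = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y_hist = 2.0 * x_hist
estimate = kernel_regression(x_hist, y_hist, x_query=2.0, bandwidth=0.5)
```

With only five observations the estimate is fragile, which is exactly the sparse-data regime the robustification in the talk is designed for.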

Speaker: Grani Adiwena Hanasusanto [Bio sketch]

Bio Sketch. Grani Hanasusanto is a PhD student at the Department of Computing, Imperial College London, under the supervision of Dr. Daniel Kuhn. He obtained the BEng (Hons) degree in Electrical and Electronic Engineering from Nanyang Technological University, Singapore, and the MSc degree in Financial Engineering from National University of Singapore. His research interests are in numerical and computational methods and their applications.

Affiliation: Department of Computing - Imperial College London
Location: Room 301 William Penney
Time: 4:00pm (duration: 1 hour)
Parallel block coordinate descent methods for huge-scale partially separable problems
Abstract. In this work we show that randomized block coordinate descent methods can be accelerated by parallelization when applied to the problem of minimizing the sum of a partially block-separable smooth convex function and a simple block-separable convex function. We give a generic algorithm and several variants thereof based on the way parallelization is performed. In all cases we prove iteration complexity results, i.e., we give bounds on the number of iterations sufficient to approximately solve the problem with high probability. Our results generalize the intuitive observation that in the separable case the theoretical speedup caused by parallelization is equal to the number of processors. We show that the speedup increases with the number of processors and with the degree of partial separability of the smooth component of the objective function. Our analysis also covers the regime where the number of blocks updated at each iteration is random, which allows for modelling situations with a variable (busy or unreliable) number of processors. We conclude with some encouraging computational results on huge-scale LASSO and sparse SVM instances.
This is a joint work with Dr. Peter Richtarik, University of Edinburgh.
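A serial sketch of the underlying coordinate update, applied to LASSO (illustrative only; the talk's contribution is the parallel, block-separable analysis, which this toy single-coordinate version does not capture): each step exactly minimizes the objective in one randomly chosen coordinate via soft-thresholding.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t*|.|: shrink z toward zero by t."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_rcd(A, b, lam, iters=2000, seed=0):
    """Randomized coordinate descent for
    min_x 0.5*||A x - b||^2 + lam*||x||_1."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    x = np.zeros(n)
    col_sq = (A ** 2).sum(axis=0)      # ||A_j||^2, precomputed once
    r = b - A @ x                      # residual, maintained incrementally
    for _ in range(iters):
        j = rng.integers(n)
        # Exact minimizer of the objective in coordinate j, all else fixed
        rho = A[:, j] @ r + col_sq[j] * x[j]
        x_new = soft_threshold(rho, lam) / col_sq[j]
        r += A[:, j] * (x[j] - x_new)
        x[j] = x_new
    return x

def lasso_objective(A, b, lam, x):
    return 0.5 * np.sum((A @ x - b) ** 2) + lam * np.abs(x).sum()

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 8))
b = rng.standard_normal(30)
x = lasso_rcd(A, b, lam=0.5)
```

In the parallel setting analysed in the talk, several such blocks are updated simultaneously, and the degree of partial separability of A governs how much the updates interfere.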

Speaker: Martin Takac
Affiliation: School of Mathematics - University of Edinburgh
Location: CPSE Seminar room
Time: 2:00pm (duration: 1 hour)

Wednesday 19 June 2013

Performance-based regularization in mean-CVaR portfolio optimization
Abstract. Regularization is a technique widely used to improve the stability of solutions to statistical problems. We propose a new regularization concept, performance-based regularization (PBR), for data-driven stochastic optimization. The goal is to improve upon Sample Average Approximation (SAA) in finite-sample performance while maintaining minimal assumptions about the data. We apply PBR to mean-CVaR portfolio optimization, where we penalize portfolios with large variability in the constraint and objective estimations, which effectively constrains the probabilities that the estimations deviate from the respective true values. This results in a combinatorial optimization problem, but we prove that its convex relaxation is tight. We show via simulations that PBR substantially improves upon SAA in finite-sample performance for three different population models of stock returns. We also prove that PBR is asymptotically optimal, and further derive its first-order behavior by extending the asymptotic analysis of M-estimators. This is joint work with Noureddine El Karoui (UC Berkeley Statistics) and Andrew EB Lim (NUS Business School).

Speaker: Prof. Gah-Yi Vahn [Bio sketch]

Bio Sketch. Gah-Yi Vahn is an Assistant Professor of Management Science and Operations at London Business School. She has a BSc (1st Class Hons. with Univ. Medal) from the University of Sydney (2007), an MA in Statistics (2011) and a PhD in Operations Research (2012) from the University of California, Berkeley. Gah-Yi's research interest is data-driven decision-making, in particular optimization with complex, high dimensional, and/or highly uncertain data, with applications to finance and operations management.

Affiliation: Management Science and Operations - London Business School
Location: Room 145 Huxley
Time: 2:00pm (duration: 1 hour)

Friday 14 June 2013

Distributionally robust control of constrained stochastic systems
Abstract. We investigate the control of constrained stochastic linear systems when faced with limited information regarding the disturbance process, that is, when only the first and second-order moments of the disturbance distribution are known.
We employ two types of soft constraints to prevent the state from falling outside a prescribed target domain: distributionally robust chance constraints require the state to remain within the target domain with a given high probability, while distributionally robust conditional value-at-risk constraints impose an upper bound on the state's expected distance to the target domain conditional on that distance being positive.
The attribute 'distributionally robust' reflects the requirement that the constraints must hold for all disturbance distributions sharing the known moments. We argue that the design of controllers for systems accommodating these types of constraints is both computationally tractable and practically meaningful for both finite and infinite horizon problems.
The proposed methods are illustrated in the context of a wind turbine blade control design case study where flexibility issues play an important role and for which the distributionally robust constraints make sensible design objectives.
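The flavour of a moment-based distributionally robust constraint can be conveyed by Cantelli's inequality, which gives the worst-case tail probability over all distributions sharing a given mean and variance (a standard textbook bound used here for illustration; the talk's chance and CVaR constraints are richer):

```python
def worst_case_tail(mean, var, t):
    """Cantelli's inequality: for every distribution with the given
    mean and variance, P(X >= t) <= var / (var + (t - mean)^2) for t > mean.
    The bound is the worst case over all moment-matching distributions."""
    gap = t - mean
    assert gap > 0, "bound applies to thresholds above the mean"
    return var / (var + gap ** 2)

# Probability that a state variable with mean 0 and variance 1 exceeds 3,
# under *any* distribution with those moments:
bound = worst_case_tail(0.0, 1.0, 3.0)   # 1 / (1 + 9) = 0.1
```

A distributionally robust chance constraint then simply requires this worst-case probability (rather than the probability under one assumed distribution) to stay below the prescribed level.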

Speaker: Bart Van Parys [Bio sketch]

Bio Sketch. Bart holds a BA degree in electrical engineering and an MA degree in applied/engineering mathematics, both from the University of Leuven. Since September 2011 he has been a PhD student at the Swiss Federal Institute of Technology (ETH Zürich) under the supervision of Prof. Manfred Morari and Dr. Paul Goulart.

Affiliation: Automatic Control Laboratory at Swiss Federal Institute of Technology
Location: Room 217-218 Huxley Building
Time: 2:00pm (duration: 1 hour)

Tuesday 28 May 2013

CVaR Approximations for Minimax and Robust Convex Optimization
Abstract. Conditional value at risk (CVaR) has been widely used as a risk measure in finance. In this work, we propose to randomize the decision variables of a deterministic parametric maximization problem and approximate the optimal (maximum) value by the CVaR of the randomized function. One of the main advantages of this approach is that computing CVaR reduces to solving a one-dimensional convex optimization problem even when the original problem is multi-dimensional and nonconvex. We apply the approximation scheme to a minimax (robust) optimization problem and a convex optimization problem with semi-infinite constraints indexed by uncertain parameters. In the latter application we use CVaR to approximate the semi-infinite constraint and then apply the Monte Carlo sampling method to discretize the CVaR-approximated constraint. This approach is closely related to a popular randomization approach proposed by Calafiore and Campi, where the continuum of uncertainty parameters is discretized through sampling. Error bounds for the optimal solutions of the approximate problems are derived under some moderate conditions, and some numerical test results are reported. The proposed methods can be applied to distributional optimization where the distribution set is constructed through moments.
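The one-dimensional reduction mentioned above is the Rockafellar-Uryasev representation of CVaR, which for a finite sample can be evaluated directly (the loss values below are invented for the example):

```python
import numpy as np

def cvar(samples, alpha):
    """CVaR_alpha via the Rockafellar-Uryasev formula:
    CVaR_alpha(X) = min_t  t + E[(X - t)+] / (1 - alpha),
    a one-dimensional convex problem regardless of the dimension
    of whatever produced the samples."""
    samples = np.asarray(samples, dtype=float)

    def objective(t):
        return t + np.mean(np.maximum(samples - t, 0.0)) / (1.0 - alpha)

    # The objective is piecewise linear and convex with kinks at the
    # sample points, so scanning the samples finds the minimum.
    return min(objective(t) for t in samples)

losses = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0])
c = cvar(losses, alpha=0.8)   # average of the worst 20%: (9 + 10) / 2 = 9.5
```

The same formula is what makes the CVaR approximation of a semi-infinite constraint amenable to Monte Carlo discretization.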

Speaker: Prof. Huifu Xu [Bio sketch]

Bio Sketch. Huifu Xu is a Professor of Operational Research in the School of Engineering and Mathematical Sciences at City University of London. Before joining City University, he was a Senior Lecturer in Operational Research in the School of Mathematics at the University of Southampton. His expertise is in continuous optimization and operational research, including developing numerical methods and underlying theory for continuous optimization problems, particularly those involving uncertain data and/or equilibrium constraints. Over the past ten years, he has been actively working on stochastic mathematical programs with equilibrium constraints and has recently developed an interest in robust approaches to stochastic optimization and equilibrium problems. Huifu has published about 60 papers, most of which are in international journals of optimization and operational research.

Affiliation: School of Engineering and Mathematical Sciences at City University of London
Location: Room 218 Huxley Building
Time: 2:00pm (duration: 2 hours)

Wednesday 22 May 2013

Hard-to-Solve Bimatrix Games
Abstract. A bimatrix game is a two-player game in strategic form, a basic model in game theory. A Nash equilibrium is a pair of (possibly randomized) strategies, one for each player, so that no player can do better by unilaterally changing his or her strategy. In this talk, which will introduce the main concepts and geometric tools, we show that the commonly used Lemke-Howson algorithm for finding one equilibrium of a bimatrix game is exponential. The algorithm is a pivoting method similar to the simplex algorithm for linear programming. We present a class of square bimatrix games for which the shortest Lemke-Howson path grows exponentially in the dimension d of the game. We construct the games using pairs of dual cyclic polytopes with 2d facets in d-space. The paths are recursively composed so that their lengths grow like Fibonacci numbers. We also mention subsequent results and open problems in the area.

Speaker: Prof. Bernhard von Stengel [Bio sketch]

Bio Sketch. Diploma in Mathematics from Aachen, MSc in Computer Sciences at Austin/Texas (student of Edsger W. Dijkstra), PhD in Passau, Habilitation in Munich, Postdoc at Berkeley, Tilburg and ETH Zurich (with a Heisenberg grant), at LSE Mathematics since 1998. Interested in algorithmic game theory longer than the research area has that name.

Affiliation: Department of Mathematics - London School of Economics and Political Science
Location: CPSE seminar room (C615 Roderic Hill)
Time: 2:00pm (duration: 1 hour)

Thursday 16 May 2013

Alternating Maximization: Unifying Framework for 8 Sparse PCA Formulations
Abstract. Given a multivariate data set, sparse principal component analysis (SPCA) aims to extract several linear combinations of the variables that together explain as much of the variance in the data as possible, while controlling the number of nonzero loadings in these combinations. In this paper we consider 8 different optimization formulations for computing a single sparse loading vector; these are obtained by combining the following factors: we employ two norms for measuring variance (L2, L1) and two sparsity-inducing norms (L0, L1), which are used in two different ways (constraint, penalty). Three of our formulations, notably the one with the L0 constraint and L1 variance, have not been considered in the literature. We give a unifying reformulation, which we propose to solve via a natural alternating maximization (AM) method. Besides this, we provide a package which contains implementations for various parallel architectures, and briefly discuss how these algorithms can be used to achieve better object recognition in challenging data sets.
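One simple instance of such an alternating scheme is the truncated power iteration for the L0-constrained, L2-variance formulation (a sketch of that single formulation only, with an invented covariance matrix; the talk covers eight formulations and a unifying reformulation):

```python
import numpy as np

def sparse_pc(sigma, k, iters=100):
    """Sketch of alternating maximization for
    max x' Sigma x  s.t.  ||x||_2 = 1, ||x||_0 <= k:
    multiply by Sigma, keep the k largest entries in magnitude,
    renormalize, and repeat."""
    x = np.ones(sigma.shape[0]) / np.sqrt(sigma.shape[0])
    for _ in range(iters):
        y = sigma @ x
        keep = np.argsort(np.abs(y))[-k:]   # indices of the k largest |y_i|
        x = np.zeros_like(y)
        x[keep] = y[keep]
        x /= np.linalg.norm(x)
    return x

# Toy covariance with one dominant 2-variable factor plus weak noise
sigma = np.diag([0.1, 0.1, 0.1, 0.1])
sigma[0, 0] = sigma[1, 1] = 2.0
sigma[0, 1] = sigma[1, 0] = 1.9
x = sparse_pc(sigma, k=2)
```

On this toy matrix the iteration recovers the loading vector supported on the two correlated variables, with exactly k nonzero entries.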

Speaker: Dr. Selin Ahipasaoglu [Bio sketch]

Bio Sketch. Selin Damla Ahipasaoglu is an Assistant Professor at the Singapore University of Technology and Design. She received her PhD in 2009 from Cornell University and specialises in developing algorithms for large scale optimization problems, in particular first-order methods for convex problems and applications in image processing. She held research positions at Princeton University and London School of Economics before joining SUTD. She is also a very keen teacher and an advocate of active and innovative classroom teaching for undergraduates.

Affiliation: Singapore University of Technology and Design
Location: CPSE seminar room (C615 Roderic Hill)
Time: 2:00pm (duration: 1 hour)

Wednesday 17 April 2013

On the Relation between Forecast Precision and Trading Profitability of Financial Analysts
Abstract. We analyze the relation between earnings forecast accuracy and the expected profitability of financial analysts. Modeling forecast errors with a multivariate Gaussian distribution, we provide a complete characterization of the payoff of each analyst. In particular, closed-form expressions for the probability density function, for the expectation and, more generally, for moments of all orders are obtained. Our analysis shows that the relationship between forecast precision and trading profitability need not be monotonic, and that, for any analyst, the impact on his expected payoff of the correlation between his forecasts and those of the other market participants depends on the accuracy of his signals. Furthermore, our model accommodates a unique full-communication equilibrium in the sense of Radner (1979).

Speaker: Prof. Alex Weissensteiner [Bio sketch]

Bio Sketch. Alex finished his PhD in 2003 and received his "Habilitation" from the Leopold-Franzens University Innsbruck (Austria) in 2010, where he worked as Assistant Professor. From 2010-2013 he worked at the Free University of Bozen-Bolzano (Italy), School of Economics and Management. His fields of research are at the intersection of finance, optimization and mathematical modeling. The main topics covered by his scientific publications are: asset allocation, asset-liability management, stochastic (linear) programming (SLP), and neural networks. Since February 2013 he has been working as Professor for Financial Engineering at the Technical University of Denmark (DTU).

Affiliation: DTU Management Engineering
Location: Room 217 Huxley Building
Time: 5:00pm (duration: 1 hour)

Thursday 11 April 2013

Practical Portfolio Optimization
Abstract. The Nobel laureate Harry Markowitz showed that an investor who cares only about the mean and variance of portfolio returns should hold a portfolio on the efficient frontier. To implement these portfolios in practice, one needs to estimate the means and covariances of asset returns. Traditionally, the sample means and covariances have been used for this purpose. But due to estimation error, the portfolios that rely on the sample estimates typically perform poorly out of sample. In this talk, we will first illustrate the difficulties inherent in estimating mean-variance portfolios, and then we will discuss several approaches that can be used to overcome these difficulties in practice.
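The estimation step is easy to reproduce: the global minimum-variance portfolio has the closed form w = inv(Sigma) 1 / (1' inv(Sigma) 1), and plugging in the sample covariance gives the plug-in weights whose out-of-sample fragility the talk discusses (the return data below are simulated for illustration only):

```python
import numpy as np

def min_variance_weights(returns):
    """Global minimum-variance portfolio from the *sample* covariance:
    w = Sigma^{-1} 1 / (1' Sigma^{-1} 1).
    With few observations, the sample Sigma is noisy and the resulting
    weights can be extreme -- the estimation-error problem."""
    sigma = np.cov(returns, rowvar=False)
    ones = np.ones(sigma.shape[1])
    w = np.linalg.solve(sigma, ones)
    return w / w.sum()

rng = np.random.default_rng(0)
returns = rng.standard_normal((250, 4)) * 0.01   # 250 days, 4 assets
w = min_variance_weights(returns)
```

Shrinking the covariance estimate, constraining the weights, or simply holding 1/N are among the practical remedies compared against this plug-in rule.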

Speaker: Prof. Victor DeMiguel [Bio sketch]

Bio Sketch. Victor DeMiguel is Professor of Management Science and Operations at London Business School. He holds a PhD in Management Science and Engineering from Stanford University, and an MS in Industrial Engineering from Universidad Politecnica de Madrid. Victor's research focuses on the design, analysis, and application of optimization models for managerial decision making. Applications include financial portfolio selection and competition modelling. His papers have been published in journals such as Management Science, Operations Research, and Mathematics of Operations Research. One of his most popular papers is "Optimal Versus Naive Diversification: How Inefficient is the 1/N Portfolio Strategy?", which received the Best Paper Award from the Institute for Quantitative Investment Research and was published in The Review of Financial Studies. Victor teaches the MBA courses Decision and Risk Analysis and Financial Modelling with Spreadsheets, and the Strategic Decision Making module for Executive Education. He is the recipient of the Junior Faculty Teaching Award and the Outstanding Core Course Teaching Award at London Business School.

Affiliation: London Business School
Location: CPSE seminar room (C615 Roderic Hill)
Time: 2:00pm (duration: 1 hour)

Wednesday 6 March 2013

OR and Optimization under Uncertainty using R
Abstract. The statistical computing package R is well known in the statistics community as well as in the data science and business analytics community; operations researchers, however, usually use MATLAB and Python to solve their problems, or even use C++/Java directly to interface with optimization solvers. In this talk, we will outline the convenience of using R for OR tasks by solving optimization-under-uncertainty problems, especially problems arising in the area of stochastic programming applied to finance.

Speaker: Dr. Ronald Hochreiter [Bio sketch]

Bio Sketch. Ronald Hochreiter is Assistant Professor at the WU Vienna University of Economics and Business and Visiting Professor at the University of Bergamo. His research interests include Operations Research (optimization under uncertainty) as well as Data Science and Business Analytics. Furthermore, he is a consultant for various companies in Austria.

Affiliation: WU Vienna University of Economics and Business
Location: Room 218 Huxley Building
Time: 3:00pm (duration: 1 hour)

Wednesday 20 February 2013

Applications and Beyond
Abstract. At the end of the second millennium, the advent of Model Predictive Control (MPC) technology revolutionized the way the process industry is controlled and operated. This technological breakthrough in industrial process automation addresses the much sought-after control objective: how can desired profit margins be attained while satisfying all conditions of feasibility, stability, operability, performance and safety in plants operating under uncertainty? Towards answering this question, the first generation of MPC development focused on various regulation-type issues. Next, attention turned to the development of an MPC tuning engine that drives the dynamic economics of process control, delivering the true economic value of the control strategy. Now, in the second generation of its evolution, the question is how MPC can be applied to systems with fast dynamics (e.g., fuel cells, automobile and biomedical applications) or to systems for which on-line implementation of MPC is expensive or impossible. This has led to the development of off-line (explicit) implementations of MPC via "you-solve-only-once" multi-parametric optimisation techniques. In this new role of MPC-on-a-chip, we aim to develop robust control algorithms and tools for nonlinear systems operating under uncertainty. Using these tools and the underlying computer-aided technologies of process automation, we can minimise energy use, minimise environmental impact and maximise the economic profit consistently demanded from modern industrial plants, while operating them at optimal performance and safety conditions.
In this seminar we will present historical developments and major milestones in modern control theory. Furthermore, a theoretical perspective on, and the computational challenges of, accounting for uncertainty in process operation for various classes of nonlinear systems will be outlined. From a practical standpoint, we will present a generic control automation framework for a fuel cell system and its integrated subsystems. To achieve this objective, a fully integrated state-of-the-art process automation laboratory for fuel cell systems has been developed. This testing and validation facility will be used for characterising various designs as well as for benchmarking control policies for the polymer electrolyte membrane fuel cell system. Finally, we will present how the proposed control platform will communicate with the integrated subsystems at different length and time scales while improving the operational performance of the fuel cell system by guaranteeing efficiency, material stability and longevity. As a concluding remark, we will highlight the vision and goal of this research programme: to contribute significantly to knowledge while fostering new technological discoveries in the area of process systems engineering through the symbiotic, synergetic and sustainable development of industrial process automation technology.

Speaker: Amit M. Manthanwar [Bio sketch]

Bio Sketch. Amit received his Master's degree in Chemical Engineering from the Illinois Institute of Technology, Chicago. He worked as a senior distributed control systems engineer and automation consultant with RasGas in Qatar before joining Invensys as a Software Technology Developer for the Advanced Process Performance Suite and the Model Predictive Control product 'Connoisseur™'. He was a Lecturer and Assistant Professor at the College of Engineering, Pune. Currently he is working with Professor Efstratios N. Pistikopoulos at the Centre for Process Systems Engineering at Imperial College London. His research focuses on the development of economically and environmentally conscious process design, global optimisation and robust control theory. His research work, carried out in collaboration with Professor Donald J. Chmielewski, has been incorporated in the book 'Smart Process Plants: Software and Hardware Solutions for Accurate Data and Profitable Operations'. He has published 7 scientific papers, with G-index 3 and H-index 2. He is a recipient of the Hamid Arastoopour Excellence in Teaching Award at the Illinois Institute of Technology, Chicago, and is listed in the 2006 Marquis Who's Who in Science and Engineering. He is also a recipient of the IChemE Journals' 2011 Best Reviewers Award.

Affiliation: On Industrial Process Automation and Control Theory
Location: CPSE seminar room (C615 Roderic Hill)
Time: 3:00pm (duration: 1 hour)

Thursday 14 February 2013

Time Consistency in Multistage Stochastic Optimization
Abstract. A fundamental principle of dynamic optimization is Bellman's principle, stating that any initially optimal decision sequence is also optimal at later stages. This time-consistency principle is still valid for stochastic programs if the criterion is to minimize expected costs or to maximize expected profits. However, as simple examples show, it is no longer valid if risk-sensitive criteria are chosen, such as the optimization of the average value-at-risk.
It turns out that most risk functionals are time-inconsistent: it may happen, for example, that today some loss distribution appears to be less risky than another, but looking at the conditional distribution at a later time, the opposite relation holds. We demonstrate that this time inconsistency disappears if the conditional functionals are defined in an extended manner, i.e., are evaluated under a specific change of measure. It follows from our results that, for consistency reasons, the revelation of partial information in time must dramatically change a decision maker's preferences among the remaining courses of action if time consistency is to be kept.
We extend conditional risk functionals to allow a temporal decomposition of the initial risk functional in a time-consistent way. With this extension, a Bellman principle may be proved. Counterexamples show that without a change of measure the only time-consistent risk functionals are the expectation and the essential supremum.

Speaker: Prof. Georg Pflug [Bio sketch]

Bio Sketch.

Born on June 10th, 1951 in Vienna. Study of Law, Mathematics and Statistics at the University of Vienna, Magister iuris (1974), Ph.D in Mathematics (1976). Assistant Professor University of Vienna (1976-81). Professor, University of Giessen, Germany (1981-1989). Full Professor, University of Vienna (1989 - ).

Visiting Professor at the University of Bayreuth (1979), Michigan State University (1985), University of California Davis (1993), Universite de Rennes (1994), Technion Haifa, Israel (1996), Princeton University (2001), University of California Davis (2006).

Dean, Department of Mathematics, University of Giessen, Germany (1987-88); Dean, Faculty of Business, Economics and Statistics, University of Vienna, Austria (2008-2010); Head, Department of Statistics and Decision Support Systems, University of Vienna (2000-2003); Member of Senate, University of Vienna (2000-2003); Research scholar, International Institute of Applied Systems Analysis, IIASA (1990 - ): Risk, Modelling and Society (RMS), Risk and Vulnerability Project (RAV).

Editor in chief: Statistics and Decisions, Central European Journal of Operations Research

Associate editor: Statistics and Probability Letters (1994-2007), Stochastic Programming Electronic Publication Series, Austrian Journal of Statistics, Mathematics of Operations Research (1994 - 1997), Mathematical Methods of OR, Computational Optimization and Applications, Computational Management Science, Energy Systems: Optimization, Modeling, Simulation and Economic Aspects; Vestnik of the Finance Academy, Moscow.

Member, Council of Scientists, INTAS, Brussels (1999 - 2002); Fellow, International Statistical Institute; Member, executive board of the international committee on stochastic programming; Member, central research council, University of Bolzano/Bozen.

Author of 4 books, editor of 5 books, and author of more than 70 publications in refereed journals, such as: Annals of Statistics, Annals of OR, Probability Theory, J Statist. Planning and Inference, J. ACM, Parallel Computing, The Computer Journal, Math. Programming, Mathematics of Optimization, SIAM J. on Optimization, Computational Optimization and Applications, J. Applied Probability, Stoch. Processes and Applications, Graphs and Combinatorics, J. Theoretical Computer Science, Quantitative Finance etc.

Organizer of COMETT II Workshop "Simulation and Optimization", Raach (1992); Workshop "Computer Intensive Methods in Simulation and Optimization", IIASA (1994); EURO Winter School "Stochastic Optimization", Semmering (1996); Fourth World Congress of the Bernoulli Society, Vienna (1996); Workshop Stochastic Dynamic Optimization, IIASA (2002); Workshop series "Mathematical Optimization for Financial Models", Semmering (2003), Cyprus (2003), Bergamo (2004); 11th International Conference on Stochastic Programming, Vienna (2007); APMOD, Vienna and Bratislava (2008).

Project leader of past and present projects: Statistical pattern recognition (Austrian National Bank); Pension fund management (BVP pension fund); Data dependency in financial optimization (FWF - Austrian Science Fund); Optimal network design and marketing strategies (Telekom Austria); AURORA - Advanced parallel and distributed algorithms for Computational Finance (FWF); Unit life insurance with guarantee (Austrian National Bank); WEBOPT (European Commission - subproject leader); Risk management in liberalized electricity markets (WWTF); Seeds in Finance (Austrian National Bank); RISKPLAN (Asia-Link); Long-term risk management (FWF); Pension fund management (Bridge Program).

Affiliation: Department of Statistics and Operations Research - University of Vienna
Location: CPSE seminar room (C615 Roderic Hill)
Time: 11:00am (duration: 1 hour)

Tuesday 11 December 2012

Optimal Control of Weakly Connected Markov Decision Processes
Abstract. Weakly connected Markov processes are used to model stochastic dynamics across different scales. They are widely used in electrical engineering, finance and molecular dynamics simulations. In many of these applications it is becoming increasingly important both to simulate these processes efficiently and to control them. Classical algorithms for the control of Markov processes cannot be directly applied to weakly connected Markov processes. The problem with existing approaches is that they exhibit an extremely slow convergence rate due to multiscale effects. We show why existing algorithms are slow, and propose a new class of algorithms with more favourable convergence properties and computational complexity. In our approach we use spectral graph theory to derive a hierarchy of models that are valid at different resolutions. We then propose a polynomial-time algorithm that uses the finest-resolution model only when required. The rate of convergence of the algorithm is discussed, as well as its complexity.
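As a toy illustration of the multiscale effect (our example, not the speaker's model): a chain with two weakly coupled clusters has a second eigenvalue of 1 - 2*eps, so iterative methods whose error contracts at the rate of the second eigenvalue slow down dramatically as the coupling weakens.

```python
import numpy as np

def two_cluster_chain(eps):
    """4-state chain: states {0,1} and {2,3} mix freely inside each
    cluster; with probability eps a step crosses to the other cluster."""
    P_in = np.array([[.5, .5, 0, 0],
                     [.5, .5, 0, 0],
                     [0, 0, .5, .5],
                     [0, 0, .5, .5]])
    P_out = np.array([[0, 0, .5, .5],
                      [0, 0, .5, .5],
                      [.5, .5, 0, 0],
                      [.5, .5, 0, 0]])
    return (1 - eps) * P_in + eps * P_out

def second_eigenvalue(P):
    """Second-largest eigenvalue modulus: the contraction rate of the
    slowest (cluster-membership) mode."""
    lam = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
    return lam[1]

# The slow mode decays at rate 1 - 2*eps, approaching 1 as eps -> 0:
for eps in (0.3, 0.01, 0.001):
    print(eps, second_eigenvalue(two_cluster_chain(eps)))
```

As eps shrinks, the spectral gap closes and a classical iterative scheme needs on the order of 1/eps iterations per digit of accuracy, which is the slowdown the abstract refers to.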

Speaker: Dr. Panos Parpas [Bio sketch]

Bio Sketch. Panos Parpas is a Lecturer in the Quantitative Analysis and Decision Science (QUADS) section of the Department of Computing at Imperial College London. Before joining Imperial College he was a postdoctoral fellow at the MIT Energy Initiative (2009-2011). Before that he was a quantitative associate at Credit-Suisse (2007-2009). He completed his PhD in computational optimization at Imperial College in 2006.
Panos Parpas is interested in the development and analysis of quantitative optimization models under uncertainty. Stochastic optimization models are used in many areas such as economics, finance, engineering, and energy systems. Realistic models have a large number of variables and multiple interactions across time and space, so advanced computational methods and analytical approximations that take advantage of problem structure are needed to analyze them. He is interested in both the development of computational methods and their applications.

Affiliation: Department of Computing - Imperial College London
Location: Room 140 Huxley
Time: 4:00pm (duration: 1 hour)

Thursday 29 November 2012

The Branch-and-Sandwich Algorithm for Mixed-Integer Nonlinear Bilevel Programming Problems
Abstract. We extend our recently introduced algorithm for general bilevel programming problems, Branch-and-Sandwich (Kleniati and Adjiman, J. Global Optim., 2012), to the class of mixed-integer nonlinear bilevel problems. As in the original algorithm, auxiliary inner lower and upper bounding problems are constructed in order to bound the inner value function and provide constant bound cuts for the outer upper and outer lower bounding problems. The KKT-based relaxations, originally proposed for the inner upper bounding and the outer lower bounding problems, are applicable with respect to the lower-level continuous variables under appropriate constraint qualifications, but are no longer required. In the extension presented here, a robust counterpart approach is employed to formulate the inner upper bounding problem, and the resulting bound cut may be the only constraint added to the proposed outer lower bounding problem. The branching framework with auxiliary lists of nodes, as developed for the original Branch-and-Sandwich, is also applied to the discrete case. The algorithm successfully solves ten problems from the literature.
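For orientation, the generic bilevel program has the form (schematic notation ours, not the paper's exact model):

```latex
\min_{x \in X,\; y} \; F(x,y)
\quad \text{s.t.} \quad G(x,y) \le 0, \qquad
y \in \arg\min_{y' \in Y(x)} \, f(x,y') .
```

The inner value function $w(x) = \min_{y' \in Y(x)} f(x,y')$ is the quantity that the auxiliary inner lower and upper bounding problems mentioned above bracket from both sides.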

Speaker: Dr. Polyxeni-Margarita Kleniati [Bio sketch]

Bio Sketch. Dr. Kleniati is undertaking her second postdoctoral research position, with Prof. Adjiman, in the Department of Chemical Engineering at Imperial College London. She received her PhD in computing and optimisation in 2010 under the supervision of Prof. Rustem in the Department of Computing at Imperial College London.
The research of Polyxeni Kleniati is currently focused on the global optimisation of bilevel programming problems with applications to chemical engineering.

Affiliation: Centre for Process Systems Engineering at Imperial College London
Location: Room 144 Huxley
Time: 3:00pm (duration: 1 hour)

Tuesday 27 November 2012

Matrix Learning Problems and First-Order Optimization
Abstract. In the past few years, there has been significant interest in nonsmooth convex optimization problems involving matrices, especially in the areas of machine learning, statistics and control. Instances of such problems include multi-task learning, matrix completion, robust PCA and sparse inverse covariance selection. I will present PRISMA, a new optimization algorithm for minimizing a convex objective which decomposes into three parts: a smooth part, a simple nonsmooth Lipschitz part, and a simple nonsmooth non-Lipschitz part. Our algorithm combines the methodologies of smoothing and accelerated proximal methods. Moreover, our convergence result removes the assumption of a bounded domain required by Nesterov's smoothing methods. I will show how PRISMA can be applied to the problems of max-norm regularized matrix completion and clustering, robust PCA and sparse inverse covariance selection, and compare it to state-of-the-art methods.
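PRISMA itself is beyond a few lines, but the proximal-gradient machinery it builds on is easy to sketch. The following minimal example (ours; plain ISTA for the lasso, not the speaker's algorithm) shows the split into a smooth part handled by gradient steps and a nonsmooth part handled by its proximal operator.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||.||_1 (the nonsmooth part)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(A, b, lam, iters=200):
    """Minimise 0.5*||Ax - b||^2 + lam*||x||_1 by proximal gradient steps."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)           # gradient of the smooth part
        x = soft_threshold(x - grad / L, lam / L)
    return x

# With A = I the lasso solution is the soft-thresholding of b itself:
x = ista(np.eye(3), np.array([3.0, 0.5, -2.0]), lam=1.0)
```

With A equal to the identity the iteration reaches the exact solution, the soft-thresholding of b, in a single step; PRISMA adds a third, non-Lipschitz term and acceleration on top of this basic split.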

Speaker: Dr. Andreas Argyriou [Bio sketch]

Bio Sketch. Andreas Argyriou received degrees in Computer Science from MIT and a PhD in Computer Science from UCL. His PhD work focused on machine learning methodologies that integrate different tasks and data sources. He has held postdoctoral and research faculty positions at UCL, TTI Chicago and KU Leuven, and is currently at Ecole Centrale Paris with an RBUCE-UP fellowship. His current interests are in the areas of machine learning with big and complex data, compressed sensing and convex optimization methods.

Affiliation: Toyota Technological Institute at Chicago
Location: Room 217 Huxley Building
Time: 2:00pm (duration: 1 hour)

Wednesday 21 November 2012

Algorithms and computer architectures for efficient real-time optimization and linear algebra solvers
Abstract. In many engineering applications where one would like to implement control and signal processing algorithms, one needs to use the latest measurements to update and solve a sequence of numerical optimization or linear algebra problems. Solving these problems in a computationally efficient and numerically reliable fashion on an embedded computing system is a challenging task.
One of the key choices that an engineer has to make in order to determine the speed, cost and power consumption of a microprocessor is the number representation that will be used in the arithmetic units. CPUs within modern desktop or laptop PCs provide hardware support for double precision floating-point arithmetic. However, most microprocessors in embedded systems do not support double precision floating-point arithmetic; they often only support single-precision floating-point and/or fixed-point arithmetic. It is therefore possible that, because of a significant decrease in precision or dynamic range, a numerical algorithm that gives reliable results on the office PC or laptop might give completely different results when implemented on an embedded computing system.
We will present novel mathematical formulations, computer architectures, optimization solvers and linear algebra solvers to show that computational resources can be reduced significantly using very low precision arithmetic, without sacrificing accuracy. We will also present new mathematical results that allow one to use fixed-point arithmetic to make impressive computational savings in iterative linear algebra solvers. Our theoretical results will be supported by implementations on a Field Programmable Gate Array (FPGA) and we will show that it is possible to exceed the peak theoretical performance of a 1TFLOP/s general-purpose GPU.
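The precision issue described above can be reproduced on any desktop in a few lines; the following toy example (ours, unrelated to the FPGA work itself) shows the classic catastrophic-cancellation failure of the textbook quadratic formula in single precision, and the algebraically equivalent rearrangement that avoids it.

```python
import numpy as np

def small_root_naive(b, dtype):
    """Smaller root of x^2 - b*x + 1 via the textbook formula
    (b - sqrt(b^2 - 4)) / 2. Suffers catastrophic cancellation
    when b is large, because b and sqrt(b^2 - 4) nearly coincide."""
    b = dtype(b)
    d = np.sqrt(b * b - dtype(4.0))
    return (b - d) / dtype(2.0)

def small_root_stable(b, dtype):
    """Same root via the rearrangement 2 / (b + sqrt(b^2 - 4)):
    a sum of positive terms, so no cancellation occurs."""
    b = dtype(b)
    d = np.sqrt(b * b - dtype(4.0))
    return dtype(2.0) / (b + d)
```

For b = 10^4 the naive formula collapses to exactly 0 in float32 (the true root is about 1e-4), while the stable form agrees with double precision: the same kind of discrepancy a single-precision embedded implementation can exhibit against an office PC.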

Speaker: Eric C. Kerrigan [Bio sketch]

Bio Sketch. Dr Kerrigan’s research includes the optimal and robust control of nonlinear, constrained and distributed parameter systems. His research is focused on the development of efficient numerical methods and computational hardware architectures for solving the resulting problems and is applied to a variety of problems in aerospace and fluid dynamics.

Affiliation: Department of Aeronautics and Department of Electrical and Electronic Engineering - Imperial College London
Location: Room 408 Electrical & Electronic Engineering Department
Time: 2:00pm (duration: 1 hour)

Thursday 8 November 2012

Reflections on robustness in stochastic programs with risk constraints
Abstract. This paper contributes to the robustness analysis of stochastic programs whose set of feasible solutions depends on the probability distribution P. For various reasons, P may not be precisely specified, and we study the robustness of results with respect to perturbations of P. The main tool is the contamination technique. For the optimal value, local contamination bounds are derived and applied to robustness analysis of the optimal value of a portfolio performance under risk-shaping constraints. To illustrate the theoretical results, numerical examples for several mean-risk models are presented. Finally, under suitable conditions on the structure of the problem and for discrete distributions, we suggest a new robust portfolio efficiency test with respect to the first- (second-) order stochastic dominance criterion and exploit the contamination technique to analyze resistance with respect to additional scenarios.
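The contamination technique referred to here has a compact classical form (our summary, in our notation): the nominal distribution P is mixed with an alternative Q, and concavity of the optimal value in the mixing weight yields computable bounds.

```latex
P_\lambda = (1-\lambda)\,P + \lambda\,Q, \quad \lambda \in [0,1], \qquad
\varphi(\lambda) = \min_{x \in X} \, \mathbb{E}_{P_\lambda}\!\left[ f(x,\xi) \right],
% \varphi is concave in \lambda (a minimum of affine functions), hence:
(1-\lambda)\,\varphi(0) + \lambda\,\varphi(1)
  \;\le\; \varphi(\lambda)
  \;\le\; \varphi(0) + \lambda\,\varphi'(0^{+}) .
```

These bounds hold when the feasible set X is fixed; handling feasible sets that themselves depend on P, as under risk constraints, is precisely the difficulty the abstract addresses.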

Speaker: Dr. Milos Kopa [Bio sketch]

Bio Sketch. Dr. Milos Kopa is an assistant professor at Charles University in Prague. He received his Ph.D. degree in Econometrics and Operations Research in 2006 (supervisor: Prof. Jitka Dupacova, Charles University in Prague).
He is a member of several scientific societies: the Stochastic Programming Community, the EURO working group on financial modelling, and EUROPT. Recently he became an elected member (and secretary) of the Managerial Board of the new EURO working group on stochastic programming. He is vice-head of the Center of Excellence "Dynamic models in economics", which comprises about 40 leading researchers in quantitative economics and finance from the Czech Republic.
The research of Milos Kopa is focused on stochastic programming theory and applications, especially financial applications. In recent years he has published several papers dealing with portfolio efficiency with respect to stochastic dominance criteria; data envelopment analysis and its relation to stochastic dominance; and robustness (contamination) in stochastic programs with risk constraints.

Affiliation: Department of Probability and Mathematical Statistics - Charles University in Prague
Location: Room 301 William Penney
Time: 3:00pm (duration: 2 hours)

Tuesday 30 October 2012

Multi-task Learning and Matrix Regularization
Abstract. We discuss the problem of estimating a structured matrix with a large number of elements. A key motivation for this problem arises in multi-task learning. In this case, the columns of the matrix correspond to the parameters of different regression or classification tasks, and there is structure due to relations between the tasks. We present a general method to learn the tasks' parameters as well as their structure. Our approach is based on solving a convex optimization problem involving a data term and a penalty term. We highlight different types of penalty terms which are of practical and theoretical importance. They implement structural relations between the tasks and achieve sparse representations of the parameters. We address computational issues as well as the predictive performance of the method. Finally, we describe some recent applications of these methods to computer vision and human-computer interaction.
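One widely used penalty of this kind is the trace (nuclear) norm, whose proximal operator is singular-value soft-thresholding; the sketch below (our illustration, not necessarily the method of the talk) shows how it induces shared low-rank structure across the task columns.

```python
import numpy as np

def prox_nuclear(W, tau):
    """Proximal operator of tau * ||W||_* (trace norm): soft-threshold
    the singular values. Small singular values are zeroed out, which
    lowers the rank and couples the columns (tasks) through a shared
    low-dimensional subspace."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Shrinking a rank-2 matrix with singular values (3, 1) by tau = 2
# leaves a rank-1 matrix with singular value 1.
W = np.diag([3.0, 1.0])
W_shrunk = prox_nuclear(W, 2.0)
```

Plugging this operator into a proximal-gradient loop over a data term gives the standard solver template for trace-norm regularized multi-task problems.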

Speaker: Prof. Massimiliano Pontil [Bio sketch]

Bio Sketch. Massimiliano Pontil is Professor of Computational Statistics and Machine Learning in the Department of Computer Science at University College London. His research interests are in the field of machine learning with a focus on regularization methods, convex optimization and statistical estimation. He has published about 100 research papers on these topics, is regularly in the programme committee of the leading conferences in the field, is an associate editor of the Machine Learning Journal and is a member of the scientific advisory board of the Max Planck Institute for Biological Cybernetics, Germany.

Affiliation: Department of Computer Science - University College London
Location: Room 301 William Penney
Time: 2:00pm (duration: 1 hour)

Thursday 13 September 2012

Performance bounds in robust & stochastic optimal control problems
Abstract. We present a new method to bound the performance of controllers for uncertain linear systems with mixed state and input constraints and bounded persistent disturbances. We take as a performance metric either an expected-value or minimax discounted cost over an infinite horizon, and provide a method for computing a lower bound on the achievable performance of any causal control policy in either case. Our lower bound is compared to an upper performance bound provided by restricting the choice of controller to one that is affine in the observed disturbances, and we show that the two bounds are closely related. In particular, the lower bounds have a natural interpretation in terms of affine control policies that are optimal for a problem with a restricted disturbance set. We show that our performance bounds can be computed via solution of a finite dimensional convex optimization problem, and provide numerical examples to illustrate the efficacy of our method.

Speaker: Bart Van Parys [Bio sketch]

Bio Sketch. Bart holds a BA degree in electrical engineering and an MA degree in applied/engineering mathematics, both from the University of Leuven. Since September 2011 he has been a PhD student at the Swiss Federal Institute of Technology (ETH Zürich) under the supervision of Prof. Manfred Morari and Dr. Paul Goulart.

Affiliation: Automatic Control Laboratory at Swiss Federal Institute of Technology
Location: Room 308 Computing Department
Time: 3:00pm (duration: 1 hour)

Thursday 30 August 2012

Optimisation with PDE constraints using automated consistent adjoints of finite element models
Abstract. Optimisation problems with partial differential equations (PDEs) as constraints arise in many research areas, from science and engineering to finance. Typical examples are data assimilation for weather forecasting or ocean modelling, and shape optimisation of wing designs, in which PDEs enforce the laws of physics. The resulting optimisation problems are constrained by nonlinear, time-dependent differential equations which can be computationally extremely demanding to solve. The use of gradient-based optimisation algorithms is therefore usually essential to reduce the number of optimisation iterations.
In this talk we present work towards a new framework for solving PDE-constrained optimisation problems that aims to automate many of the steps involved in solving these kinds of problems. Given a differentiable PDE model, the parameter set and a functional of interest, it applies gradient-based optimisation to solve the optimisation problem. The key feature of this framework is efficient gradient computation by automatically deriving and solving the associated adjoint equation. The framework is demonstrated on examples of steady and unsteady optimal control problems.
One of the major difficulties in practice is the derivation and implementation of the adjoint system needed to compute gradient information efficiently; Naumann (2011) described it as "[...] one of the great open challenges in the field of High-Performance Scientific Computing" for large-scale simulation codes. There are two current approaches for deriving the adjoint equation, each of which suffers from its own limitations. Algorithmic differentiation (AD) derives the adjoint model directly from the model source code. In practice this technique has strong restrictions, and requires a major initial and ongoing investment to prepare the code for automatic adjoint generation. An alternative is to formulate the adjoint PDE and to discretise it separately. This approach, known as the continuous adjoint, has the disadvantage that two different model code bases must be maintained and manually kept synchronised as the model develops.
In this talk, we present an alternative approach in which the PDE is formulated in a high-level language that resembles the mathematical notation. The model is then generated automatically using code generation (via the FEniCS project). In this approach it is the high-level code specification that is differentiated, a task very similar to the formulation of the continuous adjoint. However, since the forward and adjoint models are generated automatically, the difficulty of maintaining them vanishes and the software engineering process is therefore robust. The scheduling and execution of the adjoint model, including the application of an appropriate checkpointing strategy, is managed by a library called libadjoint. In contrast to the conventional algorithmic differentiation view of a model as a series of primitive mathematical operations, libadjoint employs a new abstraction that considers the model as a sequence of discrete equations which are assembled and solved. It is the coupling of the respective abstractions employed by libadjoint and the FEniCS project that produces the adjoint model automatically, without further intervention from the model developer.
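The adjoint construction that these frameworks automate has a compact standard form (textbook notation, ours): for a PDE F(u, m) = 0 with solution u(m) and objective functional J(u, m), a single linear adjoint solve yields the full gradient regardless of the number of parameters.

```latex
% Adjoint equation: one linear solve for \lambda
\left( \frac{\partial F}{\partial u} \right)^{\!*} \lambda
  \;=\; \left( \frac{\partial J}{\partial u} \right)^{\!*},
% Total derivative of the functional with respect to the parameters m
\frac{\mathrm{d}J}{\mathrm{d}m}
  \;=\; \frac{\partial J}{\partial m} \;-\; \lambda^{*}\,\frac{\partial F}{\partial m}.
```

It is this pair of equations, derived from the high-level problem specification, that the combination of libadjoint and code generation produces automatically.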

Speaker: Simon Funke [Bio sketch]

Bio Sketch. Simon is a PhD student in the Applied Modelling and Computation Group at Imperial College.

Affiliation: Applied Modelling and Computation Group at Imperial College
Location: CPSE seminar room (C615 Roderic Hill)
Time: 2:00pm (duration: 1 hour)

Thursday 5 July 2012

Robust Pricing of Monopolistic Cloud Computing Services with Service Level Agreements
Abstract. Cloud Computing is a new computing paradigm that gives end-users on-demand access to computing resources of companies that maintain large data centres. Here, we address the optimal pricing of cloud computing services from the perspective of a monopolistic service provider that needs to manage demand responsiveness and uncertainty. We formulate the pricing problem for on-demand services as a multi-stage stochastic program and model service level agreements via chance constraints. Under weak assumptions about the demand uncertainty we show that the resulting model can be reduced to an equivalent two-stage stochastic program. As cloud computing is only just emerging, it is impossible to reliably estimate demand distributions from historical data. Indeed, such data may even be difficult to collect. We address this type of model uncertainty by adopting a distributionally robust approach, assuming that only information about the location, size and support (but not the shape) of the demand distribution is available. We show that the arising robust model can be reformulated as a second-order cone program, and we analytically derive the worst-case distributions. Several extensions of the basic model are discussed. First, we study generalized models in which higher-order moments of the demand distribution are known. Next, we include multiple products and account for different product qualities. Finally, we investigate the possibility of selling unused capacity (if any) on a spot market.
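The ambiguity set alluded to (location, size and support of the demand distribution, but not its shape) can be written generically as follows; the notation is ours, a schematic of the distributionally robust chance constraint rather than the paper's exact model.

```latex
\inf_{P \in \mathcal{P}} \; P\bigl( \text{SLA satisfied under demand } \xi \bigr) \;\ge\; 1-\varepsilon,
\qquad
\mathcal{P} \;=\; \bigl\{\, P \;:\; \operatorname{supp}(P) \subseteq \Xi,\;
\mathbb{E}_P[\xi] = \mu,\; \operatorname{Cov}_P[\xi] \preceq \Sigma \,\bigr\}.
```

Requiring the service-level guarantee under the worst distribution in $\mathcal{P}$ removes the need to estimate a demand distribution from (unavailable) historical data, and for sets of this type the constraint often admits a second-order cone reformulation, as the abstract states.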

Speaker: Vladimir Roitch
Affiliation: Department of Computing - Imperial College London
Location: CPSE seminar room (C615 Roderic Hill)
Time: 4:30pm (duration: 30 min)
A decision rule approach to medium-term hydropower scheduling under uncertainty
Abstract. We present a multistage stochastic optimisation model for the medium-term scheduling of a cascaded hydropower system. Electricity spot prices change on a much shorter time scale than the hydrological dynamics of the reservoirs in the cascade. We exploit this property to reduce computational complexity: we partition the planning horizon into hydrological macroperiods, and we account for intra-stage price variability by using price duration curves. Moreover, we restrict the space of recourse decisions to those affine in the observable data, thereby obtaining a tractable approximate problem.
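The affine restriction of the recourse decisions mentioned at the end takes the generic linear-decision-rule form (our notation, not the paper's exact model):

```latex
x_t(\xi) \;=\; x_t^{0} \;+\; \sum_{s=1}^{t} X_{t,s}\,\xi_s ,
```

where $\xi_1,\dots,\xi_t$ are the observations (e.g. inflows and prices) available at stage $t$. Optimizing over the coefficients $x_t^{0}$ and $X_{t,s}$ instead of over arbitrary policies turns the multistage problem into a tractable, conservative approximation.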

Speaker: Paula Rocha
Affiliation: Department of Computing - Imperial College London
Location: CPSE seminar room (C615 Roderic Hill)
Time: 4:00pm (duration: 30 min)

Tuesday 3 July 2012

Protection at All Levels: Probabilistic Envelope Constraints
Abstract. Optimization under chance constraints is a standard approach to ensure that bad events, such as portfolio losses, are unlikely to happen. It does nothing, however, to provide additional protection against truly severe events (e.g., deep portfolio losses, or bankruptcy). In this talk, we will propose a new decision concept, termed "probabilistic envelope constraint", which extends the notion of chance constraints to a formulation that provides different probabilistic guarantees at each level of constraint violation. Thus, we develop a notion of guarantee across the whole spectrum of disasters, or rare events, ensuring that these levels of protection hold across the entire curve. We further show that the corresponding optimization problem can be reformulated as a semi-infinite optimization problem, and provide conditions that guarantee its tractability. Interestingly, the resulting formulation is what is known in the literature as comprehensive robust optimization. This work thus provides a new fundamental link between two main methodologies in optimization under uncertainty: stochastic optimization and robust optimization. This is a joint work with Constantine Caramanis (UT-Austin) and Shie Mannor (Technion).
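In our own schematic notation (not necessarily the speaker's), the extension can be stated in one line next to the classical constraint it generalizes:

```latex
% Classical chance constraint: a guarantee only at violation level 0
\mathbb{P}\bigl( f(x,\xi) \le 0 \bigr) \;\ge\; 1-\varepsilon ;
% Probabilistic envelope constraint: a guarantee at every level v >= 0
\mathbb{P}\bigl( f(x,\xi) \le v \bigr) \;\ge\; 1-\varepsilon(v)
\quad \text{for all } v \ge 0 ,
```

with $\varepsilon(\cdot)$ nonincreasing. Taking $v = 0$ recovers the ordinary chance constraint, while the decay of $\varepsilon(v)$ in $v$ controls how much protection is demanded against ever deeper losses; the "for all $v$" quantifier is what makes the problem semi-infinite.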

Speaker: Dr. Huan Xu [Bio sketch]

Bio Sketch. Huan Xu has been an assistant professor of Mechanical Engineering at the National University of Singapore since 2011. He obtained his Ph.D. degree in ECE from McGill University, Canada, in 2009, and was a postdoctoral research fellow at the University of Texas at Austin prior to joining NUS. His current research focuses on learning and decision-making in large-scale complex systems, including machine learning, high-dimensional statistics, robust and adaptable optimization, robust sequential decision making, and applications to large-scale systems. He has published in leading operations research and machine learning journals, including Operations Research, Math. Oper. Res., IEEE Info. Theory and JMLR, and conferences including ICML, NIPS, and COLT.

Affiliation: Department of Mechanical Engineering at National University of Singapore
Location: CPSE Seminar room (C616 Roderic Hill)
Time: 2:00pm (duration: 1 hour)

Monday 2 July 2012

Time-Critical Cooperative Path-Following Control of Multiple UAVs
Abstract. Worldwide, there has been growing interest in the use of autonomous vehicles to execute cooperative missions of increasing complexity without constant supervision of human operators. Despite significant progress in the field of cooperative control, several challenges need to be addressed to develop strategies capable of yielding robust performance of a fleet in the presence of complex vehicle dynamics, communications constraints, and partial vehicle failures. In this talk, we will present a theoretical framework for the development of decentralized strategies for cooperative motion control of multiple vehicles that must meet stringent spatial and temporal constraints. The approach adopted applies to teams of heterogeneous systems, and does not necessarily lead to swarming behavior. Flight test results of a coordinated road search mission involving multiple small tactical UAVs will be discussed to demonstrate the efficacy of the multi-vehicle cooperative control framework presented.

Speaker: Prof. Naira Hovakimyan [Bio sketch]

Bio Sketch. Naira Hovakimyan received her M.S. in Theoretical Mechanics and Applied Mathematics in 1988 from Yerevan State University in Armenia. She received her Ph.D. in Physics and Mathematics in 1992, in Moscow, from the Institute of Applied Mathematics of Russian Academy of Sciences, majoring in optimal control and differential games. In 1997 she was awarded a governmental postdoctoral scholarship to work at INRIA, France. In 1998 she was invited to the School of Aerospace Engineering of Georgia Tech, where she worked as a research faculty member until 2003. In 2003 she joined the Department of Aerospace and Ocean Engineering of Virginia Tech, and in 2008 she moved to the University of Illinois at Urbana-Champaign, where she is a professor and Schaller faculty scholar. She is the 2011 recipient of the AIAA Mechanics and Control of Flight Award. She has coauthored one book and more than 250 refereed publications. Her research interests are in the theory of robust adaptive control and estimation with an emphasis on aerospace applications, control in the presence of limited information, networks of autonomous systems and game theory. She is an associate fellow and life member of AIAA, a Senior Member of IEEE, and a member of AMS and ISDG.

Affiliation: Department of Mechanical Science and Engineering at University of Illinois at Urbana-Champaign
Location: Room 212 William Penney
Time: 2:00pm (duration: 1 hour)

Thursday 21 June 2012

Modeling and Solution Methods for Stochastic Programming Problems Under Endogenous Observation of Uncertainty
Abstract. We first present an overview of optimization problems under uncertainty and multi-stage stochastic programming methods. We then discuss applications where the decision maker alters the underlying stochastic process by affecting the timing of uncertainty observation, i.e. problems under endogenous observation of uncertainty. We discuss how this important but less studied class of problems can be formulated as a large-scale multi-stage stochastic programming model. To address this challenging problem, we develop a number of theoretical results, modeling methods and computational techniques. First, we show how the structure of the problem can be exploited to formulate substantially smaller yet tighter models. Second, we discuss a number of approximations (e.g. finite-horizon approximation for rolling-horizon approaches) that can be used to obtain solutions of high quality. Third, we present a novel branch-and-cut algorithm where we start from a reduced model and add essential constraints only if they are violated. The presented methods are applied to the planning of clinical trials in the pharmaceutical research and development pipeline.

Speaker: Dr. Christos Maravelias [Bio sketch]

Bio Sketch. Christos obtained his Diploma in Chemical Engineering from the National Technical University of Athens. He then moved to the London School of Economics (London, UK), where he received an MSc in Operational Research. After completing his military service in Greece, he went to Carnegie Mellon University, where he worked towards his PhD under the supervision of Professor Grossmann. In the fall of 2004 he joined the faculty of the Department of Chemical and Biological Engineering at the University of Wisconsin - Madison. He is a recipient of the Inaugural Olaf A. Hougen Fellowship, an NSF CAREER award, and the 2008 W. David Smith Jr. Award from the CAST division of AIChE. Christos' primary research interests are in the areas of a) chemical production planning and scheduling, b) process synthesis, with an emphasis on biofuels, c) optimization under uncertainty, and d) the development of computational tools for the design of chemical and biological catalysts.

Affiliation: Department of Chemical and Biological Engineering at the University of Wisconsin-Madison
Location: Room 219A William Penney
Time: 2:00pm (duration: 1 hour)

Friday 25 May 2012

On Control and Random Dynamical Systems in Reproducing Kernel Hilbert Spaces
Abstract. We introduce a data-based approach to estimating key quantities which arise in the study of nonlinear control systems and random nonlinear dynamical systems. Our approach hinges on the observation that much of the existing linear theory may be readily extended to nonlinear systems - with a reasonable expectation of success - once the nonlinear system has been mapped into a high or infinite dimensional Reproducing Kernel Hilbert Space. In particular, we develop computable, non-parametric estimators approximating controllability and observability energy functions for nonlinear systems, and study the ellipsoids they induce. It is then shown that the controllability energy estimator provides a key means for approximating the invariant measure of an ergodic, stochastically forced nonlinear system. We also apply this approach to the problem of model reduction of nonlinear control systems. In all cases the relevant quantities are estimated from simulated or observed data. This is joint work with J. Bouvrie (Duke University).

Speaker: Dr. Boumediene Hamzi [Bio sketch]

Bio Sketch. Boumediene Hamzi is a Marie Curie Fellow at the Department of Mathematics of Imperial College. He obtained his Ph.D. in Control Theory from the University of Paris-Sud and then held visiting academic positions at UC Davis, MSRI and Duke University. His current research interests are the analysis of control and random dynamical systems in Reproducing Kernel Hilbert Spaces, in view of developing data-based methods for the analysis and prediction of random dynamical systems, and control strategies for nonlinear systems on the basis of observed data (rather than a prescribed model).

Affiliation: Department of Mathematics of Imperial College
Location: CPSE seminar room (C615 Roderic Hill)
Time: 2:00pm (duration: 1 hour)

Tuesday 15 May 2012

Tutorial on Stochastic Calculus
Abstract. This tutorial session provides an introduction to stochastic calculus along the lines of the Merton model for asset pricing. Some properties of the components of this model will be explained and used to construct basic stochastic integrals. I will then proceed with the most important theorems in the general case, notably Itō's lemma, and evaluate some examples.
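
As a pointer to the material, the central result mentioned in the abstract is standard (the following is a generic statement of Itō's lemma, not specific to the speaker's presentation): for an Itō process and a sufficiently smooth function $f$,

```latex
dX_t = \mu_t\,dt + \sigma_t\,dW_t, \qquad
df(t, X_t) = \left( \frac{\partial f}{\partial t}
  + \mu_t \frac{\partial f}{\partial x}
  + \tfrac{1}{2}\sigma_t^2 \frac{\partial^2 f}{\partial x^2} \right) dt
  + \sigma_t \frac{\partial f}{\partial x}\,dW_t.
```

Applying it to $f(x) = \ln x$ with $dS_t = \mu S_t\,dt + \sigma S_t\,dW_t$ gives $d\ln S_t = (\mu - \sigma^2/2)\,dt + \sigma\,dW_t$, the geometric Brownian motion underlying the Merton and Black-Scholes asset-pricing setting.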

Speaker: Florian Locker [Bio sketch]

Bio Sketch. Florian studied economics and finance at the University of Innsbruck and Tulane University and joined the Institute for Statistics and Mathematics at WU Vienna as a PhD student in 2008. He is interested in methods concerning the hedging of derivative assets within incomplete markets and when assets exhibit jumps, and their numerical implementation. He is currently visiting the QUADS research group at Imperial College.

Affiliation: Institute for Statistics and Mathematics of WU Vienna
Location: CPSE seminar room (C615 Roderic Hill)
Time: 2:00pm (duration: 1 hour)

Monday 14 May 2012

Scheduling Modular Projects on a Bottleneck Resource
Abstract. In this paper, we model a research-and-development project as consisting of several modules, with each module containing one or more activities. We examine how to schedule the activities of such a project in order to maximize the expected profit when the activities have a probability of failure and when an activity's failure can cause its module and thereby the overall project to fail. A module succeeds when at least one of its constituent activities is successfully executed. All activities are scheduled on a scarce resource that is modeled as a single machine. We describe various policy classes, establish the relationship between the classes, develop exact algorithms to optimize over two different classes (one dynamic program and one branch-and-bound algorithm), and examine the computational performance of the algorithms on randomly generated instance sets.

Speaker: Dr. Roel Leus [Bio sketch]

Bio Sketch. Roel Leus holds a Master's degree (1998) in Business Engineering and a PhD (2003) in Applied Economics from KU Leuven (Belgium). He is currently Associate Professor of Operations Research at the Faculty of Business and Economics of KU Leuven. He has been a visiting researcher at London Business School (2004) and at LAAS-CNRS Toulouse (2008).

Affiliation: Faculty of Business and Economics of KU Leuven
Location: Room 572A Huxley Building
Time: 11:00am (duration: 1 hour)

Friday 27 April 2012

Robust Dynamic Risk Measures
Abstract. Recent progress in the theory of dynamic risk measures has found a strong echo in stochastic programming, where the time consistency of dynamic decision making under uncertainty is currently under scrutiny. In this talk we first review the concepts of coherence and time consistency of dynamic risk measures and then discuss their ramifications for stochastic programming. Next, we extend these concepts to stochastic programming models subject to distributional ambiguity, which motivates us to introduce robust dynamic risk measures. We discuss conditions under which these robust risk measures inherit coherence and time consistency from their nominal counterparts. We also propose an approximation scheme based on polynomial decision rules for solving linear multistage stochastic programs involving robust dynamic risk measures. The theoretical concepts are illustrated through numerical examples in the context of inventory management.

Speaker: Dimitra Bampou
Affiliation: Imperial College London
Location: Room 218 Huxley Building
Time: 3:00pm (duration: 1 hour)

Thursday 26 April 2012

Development and application of automatic force field parameterization software for molecular simulation
Abstract. Force fields are empirical potential energy functions that describe molecules and their interactions; they constitute the physical foundation for simulations of atomic and molecular motion. The accuracy of a force field is determined by empirical parameters, and the choice of optimal parameters has been a difficult challenge for decades. The molecular simulation community is in need of an automatic and systematic method for force field parameterization, which would revolutionize the field by providing greatly improved simulation accuracy and reproducibility of force field development. With this goal in mind, I have developed an open-source software package called ForceBalance to perform automatic force field parameterization. The software is built around a standardized procedure for force field development, interfaces easily with classical and quantum simulation codes, and has the ability to produce force fields that are optimized to reproduce any experimental or theoretical reference data. I will introduce the concepts and implementation of ForceBalance, provide a simple demonstration of the software, and discuss opportunities for collaboration.

Speaker: Dr. Lee-Ping Wang [Bio sketch]

Bio Sketch. Lee-Ping Wang graduated from UC Berkeley with a B.A. in Physics in 2006. He entered graduate school in the chemistry department at MIT, where he worked with Prof. Troy Van Voorhis on various topics in theoretical chemistry such as water-splitting catalysis, QM/MM methods, and force field development, graduating with a Ph.D. in 2011. Lee-Ping is now working as a postdoctoral fellow at Stanford University with Profs. Todd Martinez and Vijay Pande, where he is continuing to refine force field development methods and applying them to biomolecular simulation.

Affiliation: Stanford University
Location: Room 218
Time: 2:00pm (duration: 1 hour)

Thursday 22 March 2012

Lifting Methods for Generalized Semi-Infinite Programs
Abstract. In this talk we present numerical solution strategies for generalized semi-infinite optimization problems (GSIPs), a class of mathematical optimization problems which occur naturally in the context of design centering problems, robust optimization problems, and many fields of engineering science. GSIPs can be regarded as bilevel optimization problems, where a parametric lower-level maximization problem has to be solved in order to check feasibility of the upper-level minimization problem. We discuss three strategies to reformulate a class of lower-level convex GSIPs into equivalent standard minimization problems by exploiting the concept of lower-level Wolfe duality. Here, the main contribution is the discussion of the non-degeneracy of the corresponding formulations under various assumptions. Finally, these non-degenerate reformulations of the original GSIP allow us to apply standard nonlinear optimization algorithms.
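
For orientation, a GSIP in standard notation (a generic textbook formulation, not the speaker's specific model) reads

```latex
\min_{x \in X} f(x) \quad \text{s.t.} \quad g(x, y) \le 0 \;\; \forall\, y \in Y(x),
\qquad Y(x) = \{\, y : u(x, y) \le 0 \,\},
```

so that, unlike in standard semi-infinite programming, the index set $Y(x)$ of the infinitely many constraints depends on the decision $x$, and checking feasibility of a candidate $x$ amounts to solving the parametric lower-level problem $\max_{y \in Y(x)} g(x, y) \le 0$.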

Speaker: Dr. Boris Houska [Bio sketch]

Bio Sketch. Boris Houska studied mathematics and physics at the University of Heidelberg from 2003 to 2008. He obtained his Ph.D. in Electrical Engineering in 2011 at the Optimization in Engineering Center (OPTEC) at K.U. Leuven. Since 2012 he has been a postdoctoral researcher at the Centre for Process Systems Engineering at Imperial College. His research interests include numerical optimization and optimal control, robust optimization, as well as fast MPC algorithms.

Affiliation: Centre for Process Systems Engineering at Imperial College
Location: Room 217 Huxley Building
Time: 3:00pm (duration: 1 hour)

Wednesday 21 March 2012

Payment Rules through Discriminant-Based Classifiers
Abstract. In mechanism design it is typical to impose incentive compatibility and then derive an optimal mechanism subject to this constraint. By replacing the incentive compatibility requirement with the goal of minimizing expected ex-post regret, we are able to adapt techniques of statistical machine learning to the design of payment rules. This computational approach to mechanism design is applicable to domains with multi-dimensional types and situations where computational efficiency is a concern. Specifically, given an outcome rule and access to a type distribution, we train a support vector machine with a special discriminant function structure such that it implicitly establishes a payment rule with desirable incentive properties. We discuss applications to a multi-minded combinatorial auction with a greedy winner-determination algorithm and to an assignment problem with an egalitarian outcome rule. Experimental results demonstrate both that the construction produces payment rules with low ex-post regret, and that penalizing classification errors is effective in preventing failures of ex-post individual rationality.

Speaker: Prof. David Parkes [Bio sketch]

Bio Sketch. David C. Parkes is the Gordon McKay Professor of Computer Science in the School of Engineering and Applied Sciences at Harvard University. He was the recipient of an NSF CAREER Award, the Alfred P. Sloan Fellowship, the Thouron Scholarship, and the Harvard University Roslyn Abramson Award for Teaching. Parkes received his Ph.D. degree in Computer and Information Science from the University of Pennsylvania in 2001, and an M.Eng. (First class) in Engineering and Computing Science from Oxford University in 1995. At Harvard, Parkes founded the Economics and Computer Science research group. His research interests include mechanism design, electronic commerce, market design, and social computing. Parkes is editor of Games and Economic Behavior, and on the editorial boards of JAAMAS, ACM TEAC and INFORMS J. Computing. Parkes also serves as the Chair of the ACM SIG on Electronic Commerce and was the Program Chair of ACM EC'07 and AAMAS'08, and General Chair of ACM EC'10.

Affiliation: School of Engineering and Applied Sciences at Harvard University
Location: Room 217-218 Huxley Building
Time: 2:00pm (duration: 1 hour)

Thursday 8 March 2012

The Long-run Impact of Wind Power on Electricity Prices and Generating Capacity (joint work with Nicholas Vasilakos)
Abstract. This paper uses a market equilibrium model to calculate how the mix of generating capacity would change if large amounts of intermittent renewables are built in Great Britain, and what this means for operating patterns and the distribution of prices over time. If generators bid their marginal costs, we find that the changes to the capacity mix are much greater than the changes to the pattern of prices. Thermal capacity falls only slightly in response to the extra wind capacity, and there is a shift towards power stations with higher variable costs (but lower fixed costs). The changes to the pattern of prices, once capacity has adjusted, are relatively small. In an oligopolistic setting, strategic generators will choose lower levels of capacity. If wind output does not receive the market price, then mark-ups on thermal generation will be lower in a system with large amounts of wind power.

Speaker: Prof. Richard Green - Alan and Sabine Howard Professor of Sustainable Energy Business at Imperial College Business School [Bio sketch]

Bio Sketch. Richard Green is the Alan and Sabine Howard Professor of Sustainable Energy Business at Imperial College Business School. He was previously Professor of Energy Economics and Director of the Institute for Energy Research and Policy at the University of Birmingham, and Professor of Economics at the University of Hull. He started his career at the Department of Applied Economics and Fitzwilliam College, Cambridge. He has spent time on secondment to the Office of Electricity Regulation and has held visiting appointments at the World Bank, the University of California Energy Institute and the Massachusetts Institute of Technology. He has been studying the economics and regulation of the electricity industry for over 20 years. He has written extensively on market power in wholesale electricity markets and has also worked on transmission pricing. More recently, the main focus of his work has been on the impact of low-carbon generation (nuclear and renewables) on the electricity market, and the business and policy implications of this.

Affiliation: Imperial College London
Location: Room 344A Huxley Building
Time: 2:00pm (duration: circa 1 hour)

Thursday 23 February 2012

Enclosing with Simple Bodies
Abstract. The Minimum Volume Enclosing Ellipsoid and Minimum Enclosing Ball problems are related to covering a given set of points with an ellipsoid or a ball with the smallest volume possible. These problems have many applications, especially in statistics. We will survey recent theoretical results and take a look at efficient algorithms for solving these problems. We will show that these algorithms can be used for very large data sets.
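
For intuition about the Minimum Enclosing Ball problem, the planar case can be solved exactly by Welzl's classical randomized algorithm (a textbook method for small instances, not the large-scale first-order algorithms the talk surveys). A minimal pure-Python sketch:

```python
import math
import random

def _circle_from(points):
    """Exact circle through 0, 1, 2 or 3 prescribed boundary points."""
    if not points:
        return (0.0, 0.0, 0.0)
    if len(points) == 1:
        (x, y), = points
        return (x, y, 0.0)
    if len(points) == 2:
        (x1, y1), (x2, y2) = points
        mx, my = (x1 + x2) / 2.0, (y1 + y2) / 2.0
        return (mx, my, math.hypot(x1 - mx, y1 - my))
    # Circumcircle of a (non-degenerate) triangle.
    (ax, ay), (bx, by), (cx, cy) = points
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax * ax + ay * ay) * (by - cy) + (bx * bx + by * by) * (cy - ay)
          + (cx * cx + cy * cy) * (ay - by)) / d
    uy = ((ax * ax + ay * ay) * (cx - bx) + (bx * bx + by * by) * (ax - cx)
          + (cx * cx + cy * cy) * (bx - ax)) / d
    return (ux, uy, math.hypot(ax - ux, ay - uy))

def _welzl(P, R):
    """Smallest circle enclosing the points P with R forced onto the boundary."""
    if not P or len(R) == 3:
        return _circle_from(list(R))
    p, rest = P[0], P[1:]
    cx, cy, r = _welzl(rest, R)
    if math.hypot(p[0] - cx, p[1] - cy) <= r + 1e-9:
        return (cx, cy, r)              # p is already enclosed
    return _welzl(rest, R + (p,))       # p must lie on the boundary

def min_enclosing_circle(points):
    """Return (cx, cy, r) of the smallest circle enclosing 2-D points."""
    pts = list(points)
    random.shuffle(pts)                 # expected linear time on random order
    return _welzl(pts, ())
```

For the four corners of a square with side 2, this returns the circle through a diagonal pair: centre (1, 1), radius √2. The recursion is exponential in the worst case but runs in expected linear time after the shuffle, which is why it works well only for modest point counts, in contrast to the first-order methods for very large data sets discussed in the talk.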

Speaker: Dr. Selin Damla Ahipasaoglu - research officer [Bio sketch]

Bio Sketch. Selin Damla Ahipasaoglu received her PhD in 2009 from Cornell University under the supervision of Prof. Mike Todd. After her PhD, she worked as a postdoctoral researcher at Princeton University and London School of Economics. She specialises in developing algorithms for large scale optimization problems, in particular first-order methods for convex problems. She is also working on auctions and game theory. In April 2012, she will be joining the Singapore University of Technology and Design as an assistant professor.

Affiliation: Management Science Group at LSE
Location: Room 217 Huxley Building
Time: 2:00pm (duration: circa 1 hour)

Wednesday 15 February 2012

Complexity and heuristics in stochastic optimization
Abstract. Combining recent results on numerical integration and convex optimization, we derive a polynomial bound on the worst-case complexity of a class of static stochastic optimization problems. We then describe a technique for reducing dynamic problems to static ones. The reduction technique is only a heuristic, but it can effectively employ good guesses for good solutions. This is illustrated on an 82-period problem from the pension insurance industry.

Speaker: Prof. Teemu Pennanen - Professor of Mathematical Finance, Probability and Statistics [Bio sketch]

Bio Sketch. Teemu Pennanen is the Professor of Mathematical Finance, Probability and Statistics at King's College London. Before joining KCL, Professor Pennanen worked as Managing Director at QSA Quantitative Solvency Analysts Ltd, with a joint appointment as Professor of Stochastics at the University of Jyvaskyla, Finland. His earlier appointments include a research fellowship of the Academy of Finland and several visiting positions at universities abroad.

Affiliation: King's College London
Location: Room 218 Huxley Building
Time: 2:00pm (duration: circa 1 hour)

Wednesday 8 February 2012

Performance-Based Contracts for Outpatient Medical Services
Abstract. In recent years, the performance-based approach to contracting for medical services has been gaining popularity across different healthcare delivery systems, both in the US (under the name of "Pay-for-Performance", or P4P), and abroad ("Payment-by-Results", or PbR, in the UK). The goal of our research is to build a unified performance-based contracting (PBC) framework that incorporates patient access-to-care requirements and that explicitly accounts for the complex outpatient care dynamics facilitated by the use of an online appointment scheduling system. We address the optimal contracting problem in a principal-agent framework where a service purchaser (the principal) minimizes her cost of purchasing the services and achieves the performance target (a waiting time target) while taking into account the response of the provider (the agent) to the contract terms. Given the incentives offered by the contract, the provider maximizes his payoff by allocating his outpatient service capacity among three patient groups: urgent patients, dedicated advance patients and flexible advance patients. We model the appointment dynamics as that of an M/D/1 queue and analyze several contracting approaches under adverse selection (asymmetric information) and moral hazard (private actions) settings. We study the first-best and the second-best solutions, as well as their specific contracting implementation schemes. Our results show that simple and popular schemes used in practice cannot implement the first-best solution and that the linear PBC cannot implement the second-best solution. In order to overcome these limitations, we propose a threshold-penalty PBC approach and show that it coordinates the system for an arbitrary patient mix and that it achieves the second-best performance for the setting where all advance patients are dedicated.
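
The appointment queue in the abstract (an M/D/1 queue: Poisson arrivals, deterministic service) has a closed-form mean waiting time via the Pollaczek-Khinchine formula. A quick illustration in Python (a standard queueing identity, not the authors' contracting model):

```python
def md1_mean_wait(lam, mu):
    """Mean waiting time in queue W_q for an M/D/1 queue.

    lam: Poisson arrival rate; mu: service rate (each service takes exactly 1/mu).
    Pollaczek-Khinchine with deterministic service: W_q = rho / (2 mu (1 - rho)).
    """
    rho = lam / mu                      # server utilisation
    if rho >= 1:
        raise ValueError("queue is unstable: lam must be < mu")
    return rho / (2.0 * mu * (1.0 - rho))
```

At half utilisation (lam = 0.5, mu = 1) this gives W_q = 0.5, exactly half the corresponding M/M/1 value, reflecting the zero variance of deterministic service times.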

Speaker: Dr. Houyuan Jiang - Senior Lecturer in Management Science
Affiliation: Judge Business School - University of Cambridge
Location: Room 218 Huxley Building
Time: 2:00pm (duration: circa 1 hour)

Thursday 26 January 2012

A Stochastic Capacity Expansion Model for the UK Electricity System
Abstract. Energy markets are currently undergoing one of their most radical changes in history. On one hand, market liberalisation leads to a shift from state-owned utilities with a focus on failure resilience towards competitive markets where reliability is traded off with costs. This will result in a much higher utilisation of generation and transmission facilities, which in turn leads to a less predictable system behaviour and more frequent outages. Moreover, predictions about climate change dictate the gradual replacement of non-renewable energy sources with renewable alternatives such as solar or wind power. Contrary to non-renewable energy, the electricity delivered from renewable sources is highly uncertain due to the intermittency of solar radiation, wind etc. Both developments highlight the need to accommodate uncertainty in the design and management of future energy systems. This work aims to identify the most cost-efficient expansion of the UK energy grid, given a growing future demand for energy and the target to move towards a more sustainable energy system. To this end, we develop a multi-stage stochastic program where the investment decisions (generation units and transmission lines that should be built) are taken here-and-now, whereas the operating decisions are taken in hourly time stages over a horizon of 30 years. The resulting problem contains several thousand time stages and is therefore severely intractable. We develop a novel problem reformulation, based on the concept of time randomization, that allows us to equivalently reformulate the problem as a two-stage stochastic program. By taking advantage of the simple structure of the decision rule approximation scheme, we can model and solve a problem that optimises the entire UK energy grid with nearly 400 generators and 1000 transmission lines.

Speaker: Angelos Georghiou
Affiliation: Department of Computing - Imperial College London
Location: Room 217-218 Huxley Building
Time: 5:30pm (duration: 30 min)
Multistage Stochastic Portfolio Optimisation in Deregulated Electricity Markets Using Linear Decision Rules
Abstract. The deregulation of electricity markets often poses great financial risks to retailers who procure electric energy on the spot market to satisfy their customers' electricity demand. To hedge against this risk exposure, retailers often hold a portfolio of electricity derivative contracts. In this talk, we present a multistage stochastic mean-variance optimisation model for the management of such a portfolio. To reduce computational complexity, we perform two approximations: stage-aggregation and linear decision rules (LDR). The LDR approach consists of restricting the space of recourse decisions to those affine in the history of the random parameters. When applied to mean-variance optimisation models, it leads to convex quadratic programs. Since their size grows typically only polynomially with the number of decision stages, they are amenable to efficient numerical solution. Our numerical experiments highlight the value of adaptivity inherent in the LDR method and its potential for enabling scalability to portfolio optimisation problems with many decision stages.
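
The LDR restriction described in the abstract has a simple generic form (standard notation, not the speakers' exact model): with uncertain parameters $\xi_1, \dots, \xi_t$ observed up to stage $t$, the stage-$t$ recourse decision is constrained to

```latex
x_t(\xi) = x_t^0 + \sum_{s=1}^{t} X_{t,s}\,\xi_s,
```

so that optimisation is over the finitely many coefficients $x_t^0$ and $X_{t,s}$ rather than over arbitrary measurable functions, which is what turns the mean-variance model into a tractable convex quadratic program.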

Speaker: Paula Rocha
Affiliation: Department of Computing - Imperial College London
Location: Room 217-218 Huxley Building
Time: 5:00pm (duration: 30 min)

Wednesday 25 January 2012

Universal decision rule approximations for dynamic decision-making under uncertainty
Speaker: Phebe Vayanos
Affiliation: Department of Computing - Imperial College London
Location: Room 217-218 Huxley Building
Time: 5:00pm (duration: 1 hour)

Thursday 12 January 2012

Steam Engines, Jet Fighters and Credit Crises
Abstract. The talk will discuss some of the underlying causes of the financial crisis with particular focus on financial market instability, monetary policy and the influence of the Efficient Market Hypothesis.

Speaker: Dr George Cooper [Bio sketch]

Bio Sketch. George Cooper is a fund manager at BlueCrest Capital in London. Prior to joining BlueCrest George was the head of European interest rate research at JP Morgan and has also worked for both Deutsche Bank and Goldman Sachs. George’s book “The Origin of Financial Crises: Central Banks, Credit Bubbles and the Efficient Market Fallacy” was published in August 2008 and has now been translated into over a dozen languages. George holds a PhD in engineering from Durham University.

Affiliation: BlueCrest Capital
Location: Room 217-218 Huxley Building
Time: 2:00pm (duration: circa 1 hour)

Friday 6 January 2012

Robust Optimization of Dynamic Systems and Its Applications
Abstract. This talk gives an introduction to numerical methods for the robust optimization of uncertain dynamic systems. In contrast to standard differential equations, which describe the propagation of a single vector-valued trajectory in time, uncertain dynamic systems propagate a set-valued function: the uncertainty tube. When control inputs are available, this uncertainty tube can be influenced and optimized such that certain robustness criteria are met. The aim of the first part of the talk is to explain how to formulate and solve such tube-based robust optimal control problems. The second part of the talk is about the software environment and algorithm collection ACADO Toolkit, which implements tools for automatic control and dynamic optimization. It provides a general framework for using a great variety of algorithms for direct optimal control, including robust optimal control and model predictive control. The ACADO Toolkit is implemented as a self-contained C++ code, while the object-oriented design allows for convenient coupling of existing optimization packages and for extending it with user-written optimization routines. We present numerical examples from the fields of mechanics and biochemical engineering.

Speaker: Dr. Boris Houska
Affiliation: tbd
Location: CPSE Seminar room (C616 Roderic Hill)
Time: 3:00pm (duration: circa 1 hour)

Thursday 8 December 2011

Decision rules for information discovery in multi-stage stochastic programming
Abstract. Stochastic programming and robust optimization are disciplines concerned with optimal decision-making under uncertainty over time. Traditional models and solution algorithms have been tailored to problems where the order in which the uncertainties unfold is independent of the controller actions. Nevertheless, in numerous real-world decision problems, the time of information discovery can be influenced by the decision maker, and uncertainties only become observable following an (often costly) investment. Such problems can be formulated as mixed-binary multi-stage stochastic programs with decision-dependent non-anticipativity constraints. Unfortunately, these problems are severely computationally intractable. We propose an approximation scheme for multi-stage problems with decision-dependent information discovery which is based on techniques commonly used in modern robust optimization. In particular, we obtain a conservative approximation in the form of a mixed-binary linear program by restricting the spaces of measurable binary and real-valued decision rules to those that are representable as piecewise constant and linear functions of the uncertain parameters, respectively. We assess our approach on a problem of infrastructure and production planning in offshore oil fields from the literature.

Speaker: Phebe Vayanos
Affiliation: Department of Computing - Imperial College London
Location: CPSE Seminar room (C615 Roderic Hill)
Time: 5:00pm (duration: 30 min)
A Scenario Approach for Estimating the Suboptimality of Linear Decision Rules in Two-Stage Robust Optimization
Abstract. Robust dynamic optimization problems involving adaptive decisions are computationally intractable in general. Tractable upper bounding approximations can be obtained by requiring the adaptive decisions to be representable as linear decision rules (LDRs). In this presentation we investigate families of tractable lower bounding approximations, which serve to estimate the degree of suboptimality of the best LDR. These approximations are obtained either by solving a dual version of the robust optimization problem in LDRs or by utilizing an inclusion-wise discrete approximation of the problem's uncertainty set. The quality of the resulting lower bounds depends on the distribution assigned to the uncertain parameters or the choice of the discretization points within the uncertainty set, respectively. We prove that identifying the best possible lower bounds is generally intractable in both cases and propose an efficient procedure to construct suboptimal lower bounds. The resulting instance-wise bounds outperform known worst-case bounds in the majority of our test cases.

Speaker: Michael Hadjiyiannis
Affiliation: Department of Computing - Imperial College London
Location: CPSE Seminar room (C615 Roderic Hill)
Time: 5:00pm (duration: 30 min)
Scenario-Free Stochastic Programming with Polynomial Decision Rules
Abstract. Multi-stage stochastic programming provides a versatile framework for optimal decision making under uncertainty, but it gives rise to hard functional optimization problems since the adaptive recourse decisions must be modeled as functions of some or all uncertain parameters. We propose to approximate these recourse decisions by polynomial decision rules and show that the best polynomial decision rule of a fixed degree can be computed efficiently. We also show that the suboptimality of the best polynomial decision rule can be estimated efficiently by solving a dual version of the stochastic program in polynomial decision rules.

Speaker: Dimitra Bampou
Affiliation: Department of Computing - Imperial College London
Location: CPSE Seminar room (C615 Roderic Hill)
Time: 5:00pm (duration: 30 min)

Thursday 1 December 2011

How IBM enables industries to better leverage optimisation technology
Abstract. IBM ILOG CPLEX Optimisation Studio is used by leading academic & business institutions to rapidly create both mathematical programming and constraint programming models. It is used to build efficient optimisation models and state-of-the-art applications for the full range of planning and scheduling problems. The tool provides a powerful Integrated Development Environment (IDE) using the Optimisation Programming Language (OPL), programmatic APIs, or indeed other third-party modelling interfaces. This makes it easy to evaluate different modelling approaches and to integrate external data. With its built-in development tools, it supports the entire model development process. In addition, IBM ILOG ODM Enterprise provides an enterprise-scale platform for developing and deploying highly effective optimisation-based analytical planning and scheduling solutions for business decision makers across a variety of industries. In this presentation, you will find out how IBM provides a powerful platform to enable a) better scenario management, b) a faster decision-making process, c) distributed planning processes for large-scale applications, and d) a role-based planning and collaboration environment.

Speaker: Dr. Mozafar Hajian [Bio sketch]

Bio Sketch. Mozafar Hajian completed his Ph.D. at Brunel University of West London in Combinatorial Optimisation, before joining IC-Parc at Imperial College as a Senior Research Fellow. During this period, he collaborated with both Computing and Manufacturing departments to pioneer new methods in solving combinatorial optimisation problems and distributed Supply Chain Optimisation. Outside of academia, he has held senior roles at ILOG, SAP and IBM, including senior business development manager for Supply Chain Optimisation at SAP. He is presently a certified Client Technical Advisor for IBM ILOG Optimisation & Supply Chain solutions.

Affiliation: tbd
Location: 219A William Penney Lab (entrance from walkway)
Time: 2:00pm (duration: circa 1 hour)

Friday 25 November 2011

Coordinating Flexible Responsive Electricity Demand via Smart Grids
Abstract. As the electric grid evolves into a 'smart' grid, there is renewed interest in developing and implementing strategies for integrating flexible, responsive electric demand into the perpetual task of balancing electricity supply and demand on the grid. Pilot programs and research have demonstrated that automated energy management systems, particularly for residential customers, are instrumental in increasing the response of flexible demand from customers participating in these strategies. This talk will begin with a discussion of the potential benefits to a single residence when coordinating electricity consumption, distributed generation and storage with an automated energy management system under time-varying pricing and uncertain weather conditions. A discussion of the open questions involving what may happen when large numbers of automated energy management systems are installed on the smart grid will follow.

Speaker: Dr. Daniel Livengood [Bio sketch]

Bio Sketch. Daniel is a postdoctoral research fellow at the Massachusetts Institute of Technology, where he recently completed his Ph.D. in Engineering Systems. Daniel's research focuses on automated home energy management systems as part of future smart grids, with a particular interest in developing algorithms that coordinate energy consumption, distributed generation and storage in response to time-varying pricing and uncertain weather forecasts. Along with his ongoing academic research, Daniel is working on energy and thermal algorithms with the startup Coincident, Inc., as part of their energy management appliance. Daniel is also an advisor to the startup Grid Solutions, Inc., and has worked as an intern at the demand response company EnerNOC, analyzing baseline calculation methods for demand response programs.

Affiliation: tbd
Location: CPSE Seminar room (C615 Roderic Hill)
Time: 2:00pm (duration: circa 1 hour)

Thursday 24 November 2011

Large Scale Numerical Shape Optimisation
Abstract. Shape Optimisation problems are a special sub-class of optimisation problems with PDE constraints, where the domain in which the PDE is defined becomes the unknown to be found. The Hadamard Theorem states that given sufficient regularity, one can always find a representation of the shape gradient that exists on the surface of the unknown domain alone, thereby offering the potential to create very efficient numerical schemes. Unless the structure of a shape optimisation problem is exploited in some fashion, the resulting need to include the deformation of the surrounding domain can potentially become prohibitively expensive. The desire to create higher order methods like SQP also necessitates studies about second order shape derivatives. The talk will also focus on applications in aerodynamic design and the shaping of wave emitters.

Speaker: Dr. Stephan Schmidt
Affiliation: tbd
Location: CPSE Seminar room (C615 Roderic Hill)
Time: 4:30pm (duration: circa 1 hour)

Thursday 17 November 2011

LAROS: a convex optimization approach for feature extraction
Abstract. We propose a convex optimization formulation, using the nuclear norm and the $\ell_1$-norm, to find a large approximately rank-one submatrix (LAROS) of a given matrix. We characterize the low-rank and sparsity structure of the resulting solutions, and show that our model can recover low-rank submatrices from matrices corrupted by subgaussian random noise. We solve the proposed model using a proximal point algorithm and show how it can be applied to feature extraction. This is joint work with Stephen Vavasis and Kim-Chuan Toh.
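The two building blocks of any proximal method for a nuclear-norm-plus-$\ell_1$ model like LAROS are the proximal maps of the two norms. This sketch is not the authors' algorithm, only an illustration of those two maps on invented toy data:

```python
import numpy as np

def soft_threshold(X, tau):
    """Proximal map of tau * ||X||_1: entrywise soft-thresholding."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def sv_threshold(X, tau):
    """Proximal map of tau * ||X||_* : soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Toy data: an approximately rank-one 5x5 block hidden in a noisy 20x20 matrix.
rng = np.random.default_rng(0)
M = 0.1 * rng.standard_normal((20, 20))
M[:5, :5] += 1.0

# Singular-value shrinkage suppresses the noise directions and keeps the block.
L = sv_threshold(M, tau=2.0)
```

In a full proximal point or splitting scheme, these two maps would be applied alternately (or to a combined objective) until convergence.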

Speaker: Dr. Xuan Vinh Doan
Affiliation: Warwick Business School - University of Warwick
Location: Room 572A Huxley Building
Time: 3:00pm (duration: circa 1 hour)

Thursday 10 November 2011

Optimization under Uncertainty - Recent Advances in Multi-Parametric Mixed Integer Linear Programming and its Applications
Abstract. In optimization under uncertainty, multi-parametric programming is a powerful tool to account for the presence of uncertainty in mathematical models. Its objective is to derive the optimal solution as a function of the parameters without exhaustively enumerating the parameter space. We consider the general multi-parametric mixed integer linear programming (mp-MILP) problem. The presence of uncertainty in mixed integer linear programming models employed in widespread application fields, including planning/scheduling, process synthesis and hybrid control, significantly increases the complexity and computational effort in retrieving optimal solutions. A particular difficulty arises when uncertainty is simultaneously present in the coefficients of the objective function, the constraint matrix and the constraint vector, which transforms the original linear model into a nonlinear and non-convex one with respect to both the optimization variables and the parameters. Here, we discuss two novel methods for the solution of the general mp-MILP problem. The first approach deals with the global solution of the general mp-MILP model. Exploiting the special structure of the problem, we outline the steps of a parametric branch and bound procedure. The second approach is a two-stage method for the approximate solution of the general mp-MILP problem which combines state-of-the-art robust optimization and multi-parametric programming techniques. The two-stage method is applied to short-term batch process scheduling under uncertainty and we demonstrate its potential in the construction of a pro-active scheduling policy.
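The talk treats the general mp-MILP case; the core idea of parametric programming, that the optimal value and solution are piecewise functions of the parameter, can be seen already in a one-parameter LP. The following toy (all names and numbers invented here, solved numerically rather than parametrically) has a value function with a breakpoint at $\theta = 1$:

```python
from scipy.optimize import linprog

def value(theta):
    """Optimal value of: min 2*x1 + x2  s.t.  x1 + x2 >= theta,  0 <= x1, x2 <= 1."""
    res = linprog(c=[2.0, 1.0],
                  A_ub=[[-1.0, -1.0]],   # x1 + x2 >= theta, rewritten as <=
                  b_ub=[-theta],
                  bounds=[(0, 1), (0, 1)])
    return res.fun

# The value function is piecewise affine in the parameter theta:
#   v(theta) = theta          for 0 <= theta <= 1  (only the cheap variable x2 is used)
#   v(theta) = 2*theta - 1    for 1 <= theta <= 2  (x2 hits its bound; x1 tops up)
samples = [value(t / 4) for t in range(9)]
```

A multi-parametric method derives these affine pieces and their critical regions symbolically, instead of re-solving on a grid as done above.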

Speaker: Martina Wittmann-Hohlbein [Bio sketch]

Bio Sketch. Martina Wittmann-Hohlbein obtained a Diploma in Mathematics and Economics at Martin-Luther-University Halle-Wittenberg, Germany, in 2007. She joined CPSE at Imperial College in 2009.

Affiliation: CPSE - Department of Chemical Engineering - Imperial College London
Location: 219A William Penney Lab (entrance from walkway)
Time: 4:00pm (duration: 1 hour)