BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Computational Optimisation Group - ECPv6.15.11//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:Computational Optimisation Group
X-ORIGINAL-URL:http://optimisation.doc.ic.ac.uk
X-WR-CALDESC:Events for Computational Optimisation Group
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:UTC
BEGIN:STANDARD
TZOFFSETFROM:+0000
TZOFFSETTO:+0000
TZNAME:UTC
DTSTART:20120101T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=UTC:20130614T140000
DTEND;TZID=UTC:20130614T140000
DTSTAMP:20260429T034152Z
CREATED:20170124T102142Z
LAST-MODIFIED:20170124T102142Z
UID:581-1371218400-1371218400@optimisation.doc.ic.ac.uk
SUMMARY:Seminar: Distributionally robust control of constrained stochastic systems
DESCRIPTION:Title: Distributionally robust control of constrained stochastic systems\nSpeaker: Bart Van Parys\nAffiliation: Automatic Control Laboratory at Swiss Federal Institute of Technology\nLocation: Room 217-218 Huxley Building\nTime: 2:00pm \nAbstract. We investigate the control of constrained stochastic linear systems when faced with limited information regarding the disturbance process\, that is\, when only the first and second-order moments of the disturbance distribution are known. We employ two types of soft constraints to prevent the state from falling outside a prescribed target domain: distributionally robust chance constraints require the state to remain within the target domain with a given high probability\, while distributionally robust conditional value-at-risk constraints impose an upper bound on the state’s expected distance to the target domain conditional on that distance being positive. The attribute 'distributionally robust' reflects the requirement that the constraints must hold for all disturbance distributions sharing the known moments. We argue that the design of controllers for systems accommodating these types of constraints is both computationally tractable and practically meaningful for both finite and infinite horizon problems. The proposed methods are illustrated in the context of a wind turbine blade control design case study where flexibility issues play an important role and for which the distributionally robust constraints make sensible design objectives. \nAbout the speaker. Bart holds a BA degree in electrical engineering and an MA degree in applied/engineering mathematics\, both from the University of Leuven. Since September 2011 he has been a PhD student at the Swiss Federal Institute of Technology (ETH Zürich) under the supervision of Prof. Manfred Morari and Dr. Paul Goulart.
URL:http://optimisation.doc.ic.ac.uk/event/seminar-distributionally-robust-control-of-constrained-stochastic-systems/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20130619T140000
DTEND;TZID=UTC:20130619T140000
DTSTAMP:20260429T034152Z
CREATED:20170124T102142Z
LAST-MODIFIED:20170124T102142Z
UID:580-1371650400-1371650400@optimisation.doc.ic.ac.uk
SUMMARY:Seminar: Performance-based regularization in mean-CVaR portfolio optimization
DESCRIPTION:Title: Performance-based regularization in mean-CVaR portfolio optimization\nSpeaker: Prof. Gah-Yi Vahn\nAffiliation: Management Science and Operations – London Business School\nLocation: Room 145 Huxley\nTime: 2:00pm \nAbstract. Regularization is a technique widely used to improve the stability of solutions to statistical problems. We propose a new regularization concept\, performance-based regularization (PBR)\, for data-driven stochastic optimization. The goal is to improve upon Sample Average Approximation (SAA) in finite-sample performance while maintaining minimal assumptions about the data. We apply PBR to mean-CVaR portfolio optimization\, where we penalize portfolios with large variability in the constraint and objective estimations\, which effectively constrains the probabilities that the estimations deviate from the respective true values. This results in a combinatorial optimization problem\, but we prove its convex relaxation is tight. We show via simulations that PBR substantially improves upon SAA in finite-sample performance for three different population models of stock returns. We also prove that PBR is asymptotically optimal\, and further derive its first-order behavior by extending asymptotic analysis of M-estimators. This is joint work with Noureddine El Karoui (UC Berkeley Statistics) and Andrew EB Lim (NUS Business School). \nAbout the speaker. Gah-Yi Vahn is an Assistant Professor of Management Science and Operations at London Business School. She has a BSc (1st Class Hons. with Univ. Medal) from the University of Sydney (2007)\, an MA in Statistics (2011) and a PhD in Operations Research (2012) from the University of California\, Berkeley. Gah-Yi’s research interest is data-driven decision-making\, in particular optimization with complex\, high dimensional\, and/or highly uncertain data\, with applications to finance and operations management.
URL:http://optimisation.doc.ic.ac.uk/event/seminar-performance-based-regularization-in-mean-cvar-portfolio-optimization/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20130620T140000
DTEND;TZID=UTC:20130620T140000
DTSTAMP:20260429T034152Z
CREATED:20170124T102142Z
LAST-MODIFIED:20170124T102142Z
UID:579-1371736800-1371736800@optimisation.doc.ic.ac.uk
SUMMARY:Seminar: Parallel block coordinate descent methods for huge-scale partially separable problems
DESCRIPTION:Title: Parallel block coordinate descent methods for huge-scale partially separable problems\nSpeaker: Martin Takac\nAffiliation: School of Mathematics – University of Edinburgh\nLocation: CPSE Seminar room\nTime: 2:00pm \nAbstract. In this work we show that randomized block coordinate descent methods can be accelerated by parallelization when applied to the problem of minimizing the sum of a partially block separable smooth convex function and a simple block separable convex function. We give a generic algorithm and several variants thereof based on the way parallelization is performed. In all cases we prove iteration complexity results\, i.e.\, we give bounds on the number of iterations sufficient to approximately solve the problem with high probability. Our results generalize the intuitive observation that in the separable case the theoretical speedup caused by parallelization must be equal to the number of processors. We show that the speedup increases with the number of processors and with the degree of partial separability of the smooth component of the objective function. Our analysis also works in the mode when the number of blocks being updated at each iteration is random\, which allows for modelling situations with a variable (busy or unreliable) number of processors. We conclude with some encouraging computational results applied to huge-scale LASSO and sparse SVM instances. This is joint work with Dr. Peter Richtarik\, University of Edinburgh.
URL:http://optimisation.doc.ic.ac.uk/event/seminar-parallel-block-coordinate-descent-methods-for-huge-scale-partially-separable-problems/
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20130620T160000
DTEND;TZID=UTC:20130620T160000
DTSTAMP:20260429T034152Z
CREATED:20170124T102141Z
LAST-MODIFIED:20170124T102141Z
UID:578-1371744000-1371744000@optimisation.doc.ic.ac.uk
SUMMARY:Seminar: Robust Data-Driven Approach in Decision Making Under Uncertainty
DESCRIPTION:Title: Robust Data-Driven Approach in Decision Making Under Uncertainty\nSpeaker: Grani Adiwena Hanasusanto\nAffiliation: Department of Computing – Imperial College London\nLocation: Room 301 William Penney\nTime: 4:00pm \nAbstract. We investigate a robust data-driven approach to stochastic optimization problems where partial knowledge of the exogenous uncertainties is available to the decision maker. In contrast to the traditional model-based approach\, a data-driven approach requires no assumptions on the underlying distribution of exogenous uncertainties. Estimation of the conditional expectation is achieved using a kernel regression scheme which evaluates the cost function solely at historical observations. If only sparse historical observations are available\, however\, the estimation is inaccurate and the resulting decision performs poorly in out-of-sample tests. To alleviate this unfavourable outcome\, we ‘robustify’ the decision against estimation errors by utilizing techniques from robust optimization. We show that the arising min-max problem can be reformulated as a tractable conic program. We further extend the proposed approach to multi-period settings and introduce an approximate dynamic programming framework that retains the tractability of the formulation and that is amenable to efficient parallel implementation. The proposed approach is tested across several application domains and is shown to outperform various non-robust schemes in terms of standard statistical benchmarks. \nAbout the speaker. Grani Hanasusanto is a PhD student at the Department of Computing\, Imperial College London\, under the supervision of Dr. Daniel Kuhn. He obtained the BEng (Hons) degree in Electrical and Electronic Engineering from Nanyang Technological University\, Singapore\, and the MSc degree in Financial Engineering from the National University of Singapore. His research interests are in numerical and computational methods and their applications.
URL:http://optimisation.doc.ic.ac.uk/event/seminar-robust-data-driven-approach-in-decision-making-under-uncertainty/
END:VEVENT
END:VCALENDAR