Avinash Bhardwaj
  • I am a member of the faculty at the Indian Institute of Technology Bombay in the Department of Industrial Engineering and Operations Research.

    Prior to this, I held appointments as a Postdoctoral Fellow at the Center for Operations Research and Econometrics (CORE), working with Prof. Laurence Wolsey and Prof. Mathieu Van Vyve; as an Associate Specialist in the Department of Industrial Engineering and Operations Research at the University of California, Berkeley; and as a Postdoctoral Fellow in the H. Milton Stewart School of Industrial and Systems Engineering at the Georgia Institute of Technology, working with Prof. George Nemhauser and Prof. Shabbir Ahmed. I completed my doctoral studies in the Department of Industrial Engineering and Operations Research at the University of California, Berkeley, advised by Prof. Alper Atamtürk. As part of my doctoral thesis I studied the polyhedral structure of binary conic quadratic knapsacks and applications thereof.

    My research interests broadly span the theoretical, algorithmic, and computational aspects of convex, discrete and combinatorial optimization.

    When I am not thinking about math, I can usually be found running high up in the mountains.
    News:

    Hrishikesh Venkataraman's (IISER Pune; joint student with Vishnu Narayanan) work on the Lonely Runner Conjecture, identifying new families of lonely runner instances, is now available on arXiv.
    Abhishek Pathapati's (joint student with Vishnu Narayanan) work on exact augmented Lagrangian duality for mixed integer convex optimization is now available on arXiv.
    Hari Narayan's (joint student with Manjesh Hanawal) work on unsupervised early exit in DNNs with multiple exits was presented at the 2nd ACM International Conference on AI-ML Systems and will appear in its proceedings. The manuscript is available on arXiv.
  • A few more Lonely Runners
    The Lonely Runner Conjecture, proposed by Jörg M. Wills and so named by Luis Goddyn, has been an object of interest since it was first conceived in 1967: given positive integers k and n1, n2, ..., nk, there exists a positive real number t such that the distance of t⋅nj to the nearest integer is at least 1⁄(k+1) for all 1 ≤ j ≤ k. In a recent article, Beck, Hosten and Schymura described the Lonely Runner polyhedron and provided a polyhedral approach to identifying families of lonely runner instances. We revisit the Lonely Runner polyhedron and highlight some new families of instances satisfying the conjecture.
    A few more Lonely Runners. (with Vishnu Narayanan and Hrishikesh Venkataraman) (ArXiv Preprint).
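    For readers who prefer symbols, the conjecture stated in the abstract can be typeset as follows (using ‖x‖ for the distance from x to the nearest integer):

    ```latex
    % Lonely Runner Conjecture (Wills, 1967), as stated in the abstract.
    \[
      \forall\, k \in \mathbb{Z}_{>0},\;
      \forall\, n_1, \dots, n_k \in \mathbb{Z}_{>0}\;
      \exists\, t \in \mathbb{R}_{>0} :\quad
      \lVert t\, n_j \rVert \;\ge\; \frac{1}{k+1}
      \quad \text{for all } 1 \le j \le k,
    \]
    % where the distance to the nearest integer is
    \[
      \lVert x \rVert \;=\; \min_{z \in \mathbb{Z}} \lvert x - z \rvert .
    \]
    ```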
    Exact augmented Lagrangian duality for mixed integer convex optimization
    The augmented Lagrangian dual augments the classical Lagrangian dual with a non-negative, non-linear penalty function of the violation of the relaxed/dualized constraints in order to reduce the duality gap. We investigate the cases in which mixed integer convex optimization problems have an exact penalty representation using sharp augmenting functions (norms as augmenting penalty functions). We present a generalizable constructive proof technique for establishing the existence of exact penalty representations for mixed integer convex programs under specific conditions using the associated value functions. This generalizes recent results for MILPs (Feizollahi, Ahmed and Sun, 2017) and MIQPs (Gu, Ahmed and Dey, 2020), while also providing an alternative proof for those results and a quantification of the finite penalty parameter in these cases.
    Exact augmented Lagrangian duality for mixed integer convex optimization. (with Vishnu Narayanan and Abhishek Pathapati) (SIAM Journal on Optimization).
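    As a rough sketch of the object in question (the notation below is illustrative, not taken from the paper): for a mixed integer convex program min { f(x) : Ax = b, x ∈ X }, the augmented Lagrangian dual with augmenting function σ can be written as

    ```latex
    % Generic augmented Lagrangian dual (illustrative notation).
    % Primal: z* = min { f(x) : Ax = b, x in X }, X mixed integer convex.
    \[
      z_{AL}(\rho) \;=\; \sup_{\lambda}\; \min_{x \in X}\;
      f(x) \;+\; \lambda^{\top}(b - Ax) \;+\; \rho\,\sigma(b - Ax),
    \]
    % where \sigma is a nonnegative augmenting function (a norm, in the
    % "sharp" case) and \rho \ge 0 is the penalty parameter. An exact
    % penalty representation holds when z_{AL}(\rho) = z^* for some
    % finite \rho.
    ```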
    Almost exact risk budgeting with return forecasts for portfolio allocation
    We revisit the portfolio allocation problem with a designated risk budget. We generalize the problem of arbitrary risk budgets with unequal correlations to one that includes return forecasts and transaction costs while keeping the no-shorting constraint. We offer a convex second-order cone formulation that scales well with the number of assets and explore solutions to problem variants, both on equity-bond asset allocation problems and on portfolios formed from constituents of the NASDAQ-100 index, illustrating the benefits of this approach.
    Almost exact risk budgeting with return forecasts for portfolio allocation. (with Manjesh Hanawal and Purushottam Parthasarathy) (Operations Research Letters).
    Unsupervised Early Exit in DNNs with Multiple Exits
    Deep Neural Networks (DNNs) are generally designed as sequentially cascaded differentiable blocks/layers with a prediction module connected only to the last layer. DNNs can instead be equipped with prediction modules at multiple points along the backbone, where inference can stop at an intermediate stage without passing through all the modules. The last exit point may offer a better prediction error but also involves more computational resources and latency. An exit point that is `optimal' in terms of both prediction error and cost is desirable. The optimal exit point may depend on the latent distribution of the tasks and may change from one task type to another. During inference, the ground truth of instances may not be available, so error rates at each exit point cannot be estimated. Hence one is faced with the problem of selecting the optimal exit in an unsupervised setting. Prior works tackled this problem in an offline, supervised setting, assuming that enough labeled data is available to estimate the error rate at each exit point and to tune the parameters for better accuracy. However, pre-trained DNNs are often deployed in new domains for which a large amount of ground truth may not be available. We model exit selection as an unsupervised online learning problem and use bandit theory to identify the optimal exit point. Specifically, we focus on ElasticBERT, a pre-trained multi-exit DNN, and demonstrate that it `nearly' satisfies the Strong Dominance (SD) property, making it possible to learn the optimal exit in an online setup without knowing the ground truth labels. We develop an upper confidence bound (UCB) based algorithm, UEE-UCB, that provably achieves sub-linear regret under the SD property. Our method thus provides a means to adaptively learn domain-specific optimal exit points in multi-exit DNNs. We empirically validate our algorithm on the IMDb and Yelp datasets.
    Unsupervised Early Exit in DNNs with Multiple Exits. (with Manjesh Hanawal and Hari Narayan) (AIMLSystems'22).
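    The UCB machinery this work builds on can be illustrated with a minimal, generic UCB1 bandit on Bernoulli arms. This is a textbook sketch, not the paper's UEE-UCB algorithm; the arm means, horizon, and seed below are made up for illustration.

    ```python
    import math
    import random

    def ucb1(arm_means, horizon, seed=0):
        """Run standard UCB1 on simulated Bernoulli arms and return the
        number of times each arm was pulled."""
        rng = random.Random(seed)
        k = len(arm_means)
        counts = [0] * k      # pulls per arm
        sums = [0.0] * k      # cumulative reward per arm
        for t in range(1, horizon + 1):
            if t <= k:
                # Initialization: play each arm once.
                arm = t - 1
            else:
                # Pick the arm maximizing empirical mean + exploration bonus.
                arm = max(
                    range(k),
                    key=lambda a: sums[a] / counts[a]
                    + math.sqrt(2 * math.log(t) / counts[a]),
                )
            reward = 1.0 if rng.random() < arm_means[arm] else 0.0
            counts[arm] += 1
            sums[arm] += reward
        return counts

    # Over a long horizon, the best arm (index 2) accumulates most pulls.
    counts = ucb1([0.3, 0.5, 0.8], horizon=5000)
    ```

    The exploration bonus shrinks as an arm is pulled more often, which is what drives the sub-linear regret guarantees invoked in the abstract.
    
    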
    Supermodular Covering Knapsack Polytope
    The supermodular covering knapsack set is the discrete upper level set of a non-decreasing supermodular function. Submodular and supermodular knapsack sets arise naturally when modeling utilities, risk, and probabilistic constraints on discrete variables. In a recent paper, Atamtürk and Narayanan study the lower level set of a non-decreasing submodular function. In this complementary paper, we describe pack inequalities for the supermodular covering knapsack set and investigate their separation, extensions, and lifting. We give sequence-independent upper and lower bounds on the lifting coefficients. Furthermore, we present a computational study on using the derived polyhedral results to solve 0-1 optimization problems over conic quadratic constraints with a branch-and-cut algorithm.
    Supermodular Covering Knapsack Polytope. (with Alper Atamtürk) in Discrete Optimization.
    Deciding Polyhedrality of Spectrahedra
    Spectrahedra are affine sections of the cone of positive semidefinite matrices which, as convex bodies, generalize the class of polyhedra. In this paper we investigate the problem of recognizing when a spectrahedron is polyhedral. We generalize and strengthen results of Ramana (1998) regarding the structure of spectrahedra, and we devise a normal form for representations of spectrahedra. This normal form is effectively computable and leads to an algorithm for deciding polyhedrality. Our investigation was originally motivated by the observation that if a spectrahedron can be recognized to be a polyhedron, then the corresponding semidefinite program can be modeled as a linear program and solved rather efficiently. As Ramana (1998) describes, another motivation lies in connections between spectrahedra and perfect graph theory.
    Deciding Polyhedrality of Spectrahedra. (with Raman Sanyal and Philipp Rostalski) in SIAM Journal on Optimization.
    Network Design with Probabilistic Capacities
    We consider a network design problem with random arc capacities and give a formulation with a probabilistic capacity constraint on each cut of the network. To handle the exponentially many probabilistic constraints, a separation procedure that solves a nonlinear minimum cut problem is introduced. For the case with independent arc capacities, we exploit the supermodularity of the set function defining the constraints and generate cutting planes based on the supermodular covering knapsack polytope. For the general correlated case, we give a reformulation of the constraints that allows us to uncover and utilize the submodularity of a related function. The computational results indicate that exploiting the underlying submodularity and supermodularity arising with the probabilistic constraints provides significant advantages over the classical approaches.
    Network Design with Probabilistic Capacities. (with Alper Atamtürk) in Networks.
  • Current Semester (Fall 2023)
    IE 501: Optimization Models.
    The course webpage is maintained on IITB Moodle.
    Teaching at IIT Bombay
    ME 477: Introduction to Optimization. (Fall 2019, Fall 2020, Fall 2021, Fall 2022)
    ME 308: Industrial Engineering and Operations Research - I. (Spring 2019, Spring 2020, Spring 2021, Spring 2022, Spring 2023)
    IE 716: Integer Programming: Theory and Computations. (Spring 2020)
    IE 804: Convex Analysis / Topics in Modern Convex Optimization. (Fall 2021)
    Teaching at UC Berkeley
    IEOR 162: Linear Programming. (Fall 2014)
  • Current Doctoral Students

    Abhishek Pathapati (Ph.D. Candidate, IEOR, IIT Bombay)
    Purushottam Parthasarathy (Ph.D. Candidate, IEOR, IIT Bombay)
    Hritiz Gogoi (Ph.D. Candidate, IEOR, IIT Bombay)
    Hrishikesh Venkataraman (Ph.D., IEOR, IIT Bombay)
      Prospective Students

    Oh me! Oh life! of the questions of these recurring,
    Of the endless trains of the faithless, of cities fill’d with the foolish,
    Of myself forever reproaching myself, (for who more foolish than I, and who more faithless?)
    Of eyes that vainly crave the light, of the objects mean, of the struggle ever renew’d,
    Of the poor results of all, of the plodding and sordid crowds I see around me,
    Of the empty and useless years of the rest, with the rest me intertwined,
    The question, O me! so sad, recurring—What good amid these, O me, O life?

    Answer.
    That you are here—that life exists and identity,
    That the powerful play goes on, and you may contribute a verse.

                                                                                   ---------- Walt Whitman, Leaves of Grass (1892) ----------



    My philosophy on advising and mentoring

    My principal mentorship goal is to help you become an independent researcher and a critical thinker. To this end, I will suggest what I think are fruitful directions, themes, or families of questions, recommend reading, and offer guidance on coming up with your own problems. I prefer to motivate students to explore the unknown rather than hand them an assigned question to which I already know the answer. This is, in my humble opinion, the essence of learning and research! Learning how to find and pose good problems will prepare you well for the future (even if you don't spend your life in academia!). Everyone is indeed different, and you should seek an advisor whose advising style matches your learning style. What seems inspiring and helpful to one student may seem boring or too structured (or too unstructured!) to another. If you are motivated to learn and contribute (not necessarily the same as work and publish), I'll be happy to take that journey with you.
  • AVINASH BHARDWAJ
    Office: 204, Second Floor
    Industrial Engineering and Operations Research
    Indian Institute of Technology Bombay
    Powai, Mumbai 400076
    abhardwaj[AT]iitb.ac.in

    (+91) 22 2576 7653




    Background images were taken by Avinash Bhardwaj.