### up to complementary slackness, MnSymbol(?)

parent b44dd2e7
@@ -9,6 +9,7 @@
\usepackage{mathtools}
\usepackage[osf]{mathpazo}
\usepackage{eulervm}
\usepackage{MnSymbol}
\usepackage{array}
\usepackage{enumitem}
\usepackage{booktabs}

@@ -783,5 +784,136 @@
Next, we look at another example for the case when there is no feasible solution.

Now observe that there is no positive reduced cost w.r.t.\ $M$, but the artificial variable $x_5$ is still in the optimal basis. Hence, the original problem is infeasible: if there were a feasible solution, then there would exist a basic solution with basic variables drawn from $\{x_1, x_2, x_3, x_4\}$ and objective value $0$ w.r.t.\ $M$, i.e.\ basic solutions containing artificial variables are never optimal.
\end{example*}

Now we make a sharp break and move on to a very enlightening concept of mathematical optimization: \emph{duality}. The motivation is to estimate how large the optimum of an LP (that is to be maximized) can be, i.e.\ to find a (strong) \emph{upper bound}. Note that for \emph{lower bounds}, we can simply evaluate any feasible solution. We use the constraints to find such upper bounds: every \emph{cone combination} (i.e.\ a linear combination with only non-negative coefficients) of the constraints yields an upper bound, provided that for each variable the combined coefficient is at least as large as the corresponding objective coefficient. The coefficients of the combination are also called \emph{Lagrange multipliers}.

\begin{example*}
Consider the following LP.
\begin{align*}
\text{maximize } & 2x_1 + 4x_2 - 3x_3\\
\text{subject to } & x_1 + 3x_2 - x_3 \leq 4\\
& x_1 + 2x_2 - 2x_3 \leq 3\\
& x_1, x_2, x_3 \geq 0.
\end{align*}
From the first constraint, we can derive an upper bound on the objective function:
$$z(x) = 2x_1 + 4x_2 - 3x_3 \leq 2(x_1 + 3x_2 - x_3) \leq 2 \cdot 4 = 8.$$
If we use both constraints, we can obtain an even tighter bound:
$$z(x) = 2x_1 + 4x_2 - 3x_3 \leq (x_1 + 3x_2 - x_3) + (x_1 + 2x_2 - 2x_3) \leq 4 + 3 = 7.$$
To find the best possible upper bound of this form, consider the following:
\begin{align*}
z(x) = 2x_1 + 4x_2 - 3x_3 &\leq \lambda_1(x_1 + 3x_2 - x_3) + \lambda_2(x_1 + 2x_2 - 2x_3)\\
&= (\lambda_1 + \lambda_2)x_1 + (3\lambda_1 + 2\lambda_2)x_2 + (-\lambda_1 - 2\lambda_2)x_3\\
&\leq 4\lambda_1 + 3\lambda_2.
\end{align*}
This chain of inequalities holds if $\lambda_1 + \lambda_2 \geq 2\ (= c_1)$, $3\lambda_1 + 2\lambda_2 \geq 4\ (= c_2)$, $-\lambda_1 - 2\lambda_2 \geq -3\ (= c_3)$, and of course $\lambda_1, \lambda_2 \geq 0$. We obtain the best (i.e.\ smallest) upper bound by minimizing $4\lambda_1 + 3\lambda_2$ subject to these four constraints.
\end{example*}

Note that the minimization problem from the example is again an LP, which we call the \emph{dual LP} of the LP we started with (the \emph{primal LP}). More formally, we have
\begin{description}
\item[primal LP] $\max \{c^Tx \mid \underbrace{Ax \leq b, x \geq 0}_{\eqqcolon P}\}$
\item[dual LP] $\min \{\lambda^Tb \mid \underbrace{\lambda^TA \geq c^T, \lambda \geq 0}_{\eqqcolon D}\}$
\end{description}
By construction of the dual LP, we have the following property.
\begin{theorem}[weak duality]
Let $P, D$ denote the polyhedra of a primal and a corresponding dual LP.
\begin{enumerate}
\item If $x \in P$ and $\lambda \in D$, then $c^Tx \leq \lambda^Tb$.
\item If $P \neq \emptyset$, then $c^Tx$ is unbounded from above if and only if $D = \emptyset$.
\item If $D \neq \emptyset$, then $\lambda^Tb$ is unbounded from below if and only if $P = \emptyset$.
\end{enumerate}
\begin{proof}
\begin{enumerate}[label=\itshape{(\roman*)}]
\item $c^Tx \leq \lambda^TAx \leq \lambda^Tb$, where the first inequality uses $\lambda^TA \geq c^T$ and $x \geq 0$, and the second uses $Ax \leq b$ and $\lambda \geq 0$.\qedhere
\end{enumerate}
\end{proof}
\end{theorem}

For each primal constraint, there is a dual variable; for each primal variable, there is a dual constraint. The primal LP does not need to be in canonical form: by the following rules, we can dualize every LP.
\begin{table}[H]
\centering
\begin{tabular}{l|r}
\toprule
primal & dual\\
\midrule
maximize & minimize\\
objective coefficients & right hand side\\
right hand side & objective coefficients\\
$A$ & $A^T$\\
\midrule
$a_i^T x \leq b_i$ & $\lambda_i \geq 0$\\
$a_i^T x = b_i$ & $\lambda_i$ free\\
$a_i^T x \geq b_i$ & $\lambda_i \leq 0$\\
\midrule
$x_j \geq 0$ & $\lambda^TA_j \geq c_j$\\
$x_j$ free & $\lambda^TA_j = c_j$\\
$x_j \leq 0$ & $\lambda^TA_j \leq c_j$\\
\bottomrule
\end{tabular}
\end{table}
In fact, we can also read the table above from right to left to dualize minimization problems. This yields the following theorem.
\begin{theorem}
The dual of the dual is the primal.
\end{theorem}
What makes duality interesting is that every dual feasible solution is a certificate for optimality (or rather for a maximal gap to optimality). In particular, if we have $x^\ast \in P$, $\lambda^\ast \in D$ with $c^Tx^\ast = \lambda^{\ast T}b$, then $x^\ast$ is optimal for the primal LP and $\lambda^\ast$ is optimal for the dual LP. Note that the concept of dual problems is not restricted to linear optimization; it can be applied to arbitrary non-linear problems, too. However, the following theorem is special: it does not hold in arbitrary non-linear settings, but it always holds for linear programs.
\begin{theorem}[strong duality]
If the primal LP has a finite optimal solution $x^\ast$, then the dual LP has a finite optimal solution $\lambda^\ast$. Moreover,
$$c^Tx^\ast = \lambda^{\ast T}b.$$
\begin{proof}
Let $B$ be an optimal basis for the primal LP, i.e.\ $x_B$ solves $A_B x_B = b \iff x_B = A_B^{-1}b$.
Since $B$ is optimal, the reduced costs are non-positive:
$$\bar{c}_N^T = c_N^T - \underbrace{c_B^T A_B^{-1}}_{\eqqcolon \lambda^T}A_N \leq 0.$$
We claim that $\lambda^T$ is an optimal dual solution. For feasibility, note that $\lambda^T A_N \geq c_N^T$ by the reduced costs, and $\lambda^T A_B = c_B^T A_B^{-1} A_B = c_B^T$, hence $\lambda^T A \geq c^T$. Moreover,
$$c_B^T x_B = c_B^T A_B^{-1}b = \lambda^T b,$$
i.e.\ the objective values match.
\end{proof}
\end{theorem}
From the proof of strong duality we derive, for a basis $B$ with associated $\lambda^T = c_B^T A_B^{-1}$:
\begin{itemize}
\item primal optimality (non-positive reduced costs) implies dual feasibility of $\lambda$,
\item primal feasibility of such a dual-feasible basis implies dual optimality.
\end{itemize}
Note that the dual variables $\lambda^T = c_B^T A_B^{-1}$ are implicitly computed in the Simplex algorithm: the reduced costs are $\bar{c}_N^T = c_N^T - \lambda^TA_N$. For a slack variable $x_{n+i}$ we have $c_{n+i} = 0$ and $A_{n+i}$ is the $i$-th unit vector, so $\bar{c}_{n+i} = -\lambda_i$ can be read off directly.

There is also a very common economic interpretation of the dual variables; often they are called \emph{shadow prices} or \emph{opportunity costs}. A constraint often models the availability of some resource $i$. That is, given a primal LP
\begin{align*}
\text{maximize } & c^T x\\
\text{subject to } & Ax \leq b + \epsilon\\
& x \geq 0,
\end{align*}
where we consider a right hand side that is \emph{relaxed} by some $\epsilon \geq 0$, the interpretation of the dual variable $\lambda_i$ for the $i$-th constraint is the possible increment of the objective value if one more unit of the $i$-th resource becomes available:
$$\lambda^T(b+\epsilon) = \lambda^Tb + \lambda^T\epsilon.$$
\begin{theorem}[complementary slackness]
A primal feasible solution $x$ and a dual feasible solution $\lambda$ are both optimal iff
\begin{enumerate}[label=\itshape(\roman*)]
\item $a_i^T x = b_i$ or $\lambda_i = 0$ for all $i \in \{1, \ldots, m\}$, \label{thm:cs:1}
\item $\lambda^TA_j = c_j$ or $x_j = 0$ for all $j \in \{1, \ldots, n\}$. \label{thm:cs:2}
\end{enumerate}
\begin{proof}
By strong duality, $x$ and $\lambda$ are both optimal if and only if $c^T x = \lambda^TAx = \lambda^T b$.
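Note that this chain rests on weak duality: for any feasible pair,
$$c^Tx \;\leq\; \lambda^TAx \;\leq\; \lambda^Tb,$$
and if the outer terms coincide, both inequalities must be tight, yielding the two equations.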
Then the first equation gives
$$(\underbrace{\lambda^TA - c^T}_{\geq 0})\, \underbrace{x}_{\geq 0} = 0,$$
proving \ref{thm:cs:2}; for \ref{thm:cs:1}, we use the second equation for the analogous argument:
$$\underbrace{\lambda^T}_{\geq 0} (\underbrace{b - Ax}_{\geq 0}) = 0.\qedhere$$
\end{proof}
\end{theorem}

We now want to analyze the soundness and runtime of the Simplex algorithm. In the worst case, the algorithm may need an exponential number (in $m$) of iterations. Note that there are also algorithms that are guaranteed to solve LPs in polynomial time, but it turns out that the Simplex algorithm is better in practice, where it usually visits only $\mathcal{O}(m)$ vertices.
\end{document}