### fixed some typos

parent d889bd1e
```diff
@@ -98,7 +98,7 @@ Every constraint (from an LP in canonical form) defines a \emph{halfspace} in $\
 \end{tikzpicture}
 \end{figure}
 
-We call the set of points that satisfy all constraints \emph{feasible region} $P \coloneqq \{x \in \mathbb{R}^n \mid Ax \leq b, x \geq 0\}$. $P$ is a \emph{polyhedron}, i.e. the intersection of finitely many halfspaces. If $P$ is bounded, it is called a \emph{polytope}.
+We call the set of points that satisfy all constraints \emph{feasible region} $P \coloneq \{x \in \mathbb{R}^n \mid Ax \leq b, x \geq 0\}$. $P$ is a \emph{polyhedron}, i.e. the intersection of finitely many halfspaces. If $P$ is bounded, it is called a \emph{polytope}.
 
 \begin{figure}[H]
 \centering
 \begin{tikzpicture}
@@ -181,7 +181,7 @@ The \emph{dimension} $\dim P$ is the dimension of the smallest affine subspace c
 
 For a given $x \in P$, a constraint $a_i^T x \leq b_i$ is called \emph{active} (or \emph{binding}) if $a_i^T x = b_i$. A \emph{face} with respect to $H \subseteq \{1, \ldots, m\}$ is
 $$
-F \coloneqq \{x \in P \mid a_i^T x \leq b_i \text{ active in } x, i \in H\}.
+F \coloneq \{x \in P \mid a_i^T x \leq b_i \text{ active in } x, i \in H\}.
 $$
 \begin{figure}[H]
 \centering
@@ -339,7 +339,7 @@ Idea for the Simplex algorithm: Move from vertex to vertex such that the objecti
 We introduce \emph{slack variables} to fill gaps between constraints and corresponding right hand side.
 $$
-s \coloneqq b - Ax, \quad s \geq 0.
+s \coloneq b - Ax, \quad s \geq 0.
 $$
 This yields to \emph{standard form} for LPs:
 \begin{align*}
```
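The definitions quoted in the hunks above, $P = \{x \in \mathbb{R}^n \mid Ax \leq b,\ x \geq 0\}$ and the slack $s = b - Ax$, can be sketched in a few lines of plain Python. This is an illustrative aside, not part of the commit; the two-constraint LP data below is a made-up example:

```python
# Sketch (not from the notes): membership in the feasible region
# P = {x | Ax <= b, x >= 0}, and the slack variables s = b - Ax.

def slack(A, b, x):
    """Return s = b - Ax for a constraint matrix A given as a list of rows."""
    return [bi - sum(aij * xj for aij, xj in zip(row, x))
            for row, bi in zip(A, b)]

def is_feasible(A, b, x, eps=1e-9):
    """x lies in P iff every slack is non-negative and x >= 0."""
    return (all(si >= -eps for si in slack(A, b, x))
            and all(xj >= -eps for xj in x))

# Hypothetical constraints: x1 + 3x2 <= 9 and 2x1 + x2 <= 8.
A = [[1, 3], [2, 1]]
b = [9, 8]

print(slack(A, b, [1, 2]))        # [2, 4]
print(is_feasible(A, b, [1, 2]))  # True
print(is_feasible(A, b, [5, 2]))  # False: 2*5 + 2 = 12 > 8
```

Adding the slacks as extra non-negative variables is exactly the passage to standard form described in the third hunk.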
```diff
@@ -495,15 +495,15 @@ We first only consider the second stage, which is sufficient if setting $x_1 = \
 \tcc{We are given a feasible basic solution}
 \While{not $\bar{c}_N^T \leq 0$}{
 \tcp{Pricing: choose best pivot column (Dantzig rule)}
-$s \coloneqq \arg\max_{j \in N} \{\bar{c}_j > 0\}$.\\
+$s \coloneq \arg\max_{j \in N} \{\bar{c}_j > 0\}$.\\
 \If{$\bar{A}_s \leq 0$}{
 \Return{unbounded}
 }
 \tcp{Ratio test: choose best pivot row}
-$r \coloneqq \arg\min_{i \in B} \left\{\frac{\bar{b}_i}{\bar{a}_{is}} \mid \bar{a}_{is} > 0\right\}$.\\
+$r \coloneq \arg\min_{i \in B} \left\{\frac{\bar{b}_i}{\bar{a}_{is}} \mid \bar{a}_{is} > 0\right\}$.\\
 \tcp{Pivoting}
-$B \coloneqq B \setminus \{r\} \cup \{s\}$.\\
-$N \coloneqq N \setminus \{s\} \cup \{r\}$.
+$B \coloneq B \setminus \{r\} \cup \{s\}$.\\
+$N \coloneq N \setminus \{s\} \cup \{r\}$.
 %TODO new A, b, c, z
 }
 \Return{optimal solution}
@@ -784,7 +784,7 @@ Next, we look at another example for the case when there is no feasible solution
 Now observe that there are no positive reduced cost w.r.t. $M$ but the artificial variable $x_5$ is still in the optimal basis. Hence, the original problem is infeasible. If there would be a feasible solution then there exists basic solution with basic variables drawn from $\{x_1, x_2, x_3, x_4\}$ with objective value $0$ w.r.t. $M$, i.e. basic solutions with artificial variables are never optimal.
 \end{example*}
 
-Now we make a sharp break and move on to a very enlightning concept of mathematical optimization: \emph{duality}. The motivation is to find an estimate on how large the optimum of an LP (that is to be maximized) can be, i.e. find a (strong) \emph{upper bound}. Note that fo \emph{lower bounds}, we can evaluate any feasible solution. We use the constraints to find such upper bounds. Every \emph{cone combination} (i.e. a linear combination with only non-negative coefficients) of constraints yields to an upper bound if the sum of these coefficients (also called \emph{Lagrange multipliers}) are at least as large as the objective coefficients for each variable.
+Now we make a sharp break and move on to a very enlightning concept of mathematical optimization: \emph{duality}. The motivation is to find an estimate on how large the optimum of an LP (that is to be maximized) can be, i.e. find a (strong) \emph{upper bound}. Note that for \emph{lower bounds}, we can evaluate any feasible solution. We use the constraints to find such upper bounds. Every \emph{cone combination} (i.e. a linear combination with only non-negative coefficients) of constraints yields to an upper bound if the sum of these coefficients (also called \emph{Lagrange multipliers}) are at least as large as the objective coefficients for each variable.
 
 \begin{example*}
 Consider the following LP.
 \begin{align*}
@@ -793,7 +793,7 @@ Now we make a sharp break and move on to a very enlightning concept of mathemati
 & x_1 + 2x_2 - 2x_3 \leq 3\\
 & x_1, x_2, x_3 \geq 0.
 \end{align*}
-From the first constraint, we can derive an upper bound on theobjective function:
+From the first constraint, we can derive an upper bound on the objective function:
 $$
 z(x) = 2x_1 + 4x_2 - 3x_3 \leq 2(x_1 + 3x_2 - x_3) \leq 2 \cdot 4 = 8.
 $$
@@ -853,7 +853,7 @@ For each primal constraint, there is a dual variable; for each primal variable, 
 \end{tabular}
 \end{table}
 
-In fact, we can also read the table above from right to left to dualize minimization problem. This yields to the following theorem.
+In fact, we can also read the table above from right to left to dualize minimization problems. This yields to the following theorem.
 \begin{theorem}
 The dual of a dual is the primal.
 \end{theorem}
```
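The cone-combination argument quoted above can be checked mechanically: any multipliers $y \geq 0$ with $y^T A \geq c$ componentwise certify $c^T x \leq y^T A x \leq y^T b$ for every feasible $x$. A minimal Python sketch of that check, using the example LP from the notes (the helper name `dual_bound` is my own, not from the source):

```python
# Sketch: verify that non-negative multipliers y give an upper bound y^T b
# on max c^T x subject to Ax <= b, x >= 0, provided y^T A >= c componentwise.

def dual_bound(A, b, c, y):
    """Return y^T b if y certifies an upper bound, else None."""
    if any(yi < 0 for yi in y):
        return None
    # Compute y^T A column by column.
    yTA = [sum(yi * row[j] for yi, row in zip(y, A)) for j in range(len(c))]
    if all(v >= cj for v, cj in zip(yTA, c)):
        return sum(yi * bi for yi, bi in zip(y, b))
    return None

# The example LP from the diff: max 2x1 + 4x2 - 3x3
# s.t. x1 + 3x2 - x3 <= 4, x1 + 2x2 - 2x3 <= 3, x >= 0.
A = [[1, 3, -1], [1, 2, -2]]
b = [4, 3]
c = [2, 4, -3]

print(dual_bound(A, b, c, [2, 0]))  # 8, the bound derived in the notes
print(dual_bound(A, b, c, [0, 1]))  # None: y^T A = (1, 2, -2) fails y^T A >= c
```

Minimizing $y^T b$ over all certifying $y \geq 0$ is precisely the dual LP the notes go on to construct.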