Interior-point method for LP
<hr />
<div><br />
Authors: John Plaxco, Alex Valdes, Wojciech Stojko. (ChE 345 Spring 2014) <br><br />
Steward: Dajun Yue, Fengqi You<br><br />
Date Presented: May 25, 2014 <br><br />
=Introduction and Uses=<br />
[[File:InteriorPointMethod.png|thumb|300px|right|Examples of logarithmic barrier functions.]]<br />
Interior point methods are a class of algorithms used to solve both linear and nonlinear convex optimization problems that contain inequality constraints. The LP interior-point method relies on having a linear programming model whose objective function and constraints are continuous and twice continuously differentiable. In general, the problem is assumed to be strictly feasible and to have a dual optimum that satisfies the Karush-Kuhn-Tucker (KKT) conditions described below. Assuming a solution exists, the problem is solved either by iteratively solving the KKT conditions or by solving the original problem with the inequality constraints replaced by equality constraints, and then applying Newton's method to the resulting conditions.<br />
<br />
Interior point methods came about from a desire for algorithms with better theoretical properties than the simplex method. While the two strategies are similar in some respects, interior point methods involve relatively expensive (in computational terms) iterations that quickly close in on a solution, while the simplex method usually requires many more, but far cheaper, iterations. From a geometric standpoint, interior point methods approach a solution from the interior or exterior of the feasible region, but are never on the boundary.[1]<br />
<br />
There are two important interior point algorithms: the barrier method and the primal-dual interior-point method. The primal-dual method is usually preferred for its efficiency and accuracy. The major differences between the two are as follows: the primal-dual method has only a single loop, since there is no distinction between the outer and inner iterations of the barrier method, and its primal and dual iterates do not have to be feasible.[3]<br />
<br />
=Barrier Method=<br />
For the barrier method algorithm, there are a few approximations that must be made. Given a problem of the form<br><br />
Minimize <math>f_0(x)</math><br><br />
subject to <math> f_i(x) \leq 0</math><br><br />
<math>Ax=b</math><br><br />
we must reformulate it to implicitly include the inequalities in the objective function. We can do this by creating a function that greatly increases the objective if a constraint is violated. Our problem then becomes<br><br />
minimize <math> f_0(x) + \sum_i^m I_-(f_i(x))</math><br><br />
st <math> Ax=b</math><br><br />
where <math> I_-(u)=\begin{cases}0 & u \leq 0\\ \infty & u > 0\end{cases}</math><br><br><br />
This objective, however, is neither continuous nor differentiable at the boundary. A modification can be made by approximating <math>I_-(u)</math> with the logarithmic function <math>-(1/t)\log(-u)</math>, which approaches infinity as <math>u</math> approaches 0 from below, as desired, and makes all functions twice continuously differentiable. The parameter <math>t > 0</math> sets the accuracy of the approximation: the larger <math>t</math>, the closer the approximation is to the indicator function. We define<br><br />
<math>\phi(x)=-\sum_{i=1}^m \log(-f_i(x))</math><br><br />
which blows up if any of our constraints are violated. Our LP problem now becomes<br><br />
minimize <math> f_0(x) + \frac{1}{t}\phi(x)</math><br><br />
st <math> Ax=b</math><br><br />
<br><br />
This allows us to use Newton's method to follow what is called the central path: a family of points, parameterized by <math>t</math>, that all satisfy the equality constraints <math>Ax=b</math> of the original problem and strictly satisfy the inequality constraints (<math>f_i(x) < 0</math>), while giving increasingly optimal values of the objective as <math>t</math> grows.<br />
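To see how the log-barrier approximation behaves, a short numeric check (a standalone sketch, not from the article) can tabulate <math>-(1/t)\log(-u)</math> for a few values of <math>t</math> and <math>u</math>:

```python
import math

# Behavior of the scaled log-barrier term -(1/t)*log(-u), which
# stands in for the indicator I_-(u):
#   * for a fixed u < 0, the term tends to 0 (the indicator's value)
#     as t grows, so the approximation sharpens;
#   * for a fixed t, the term blows up as u -> 0 from below,
#     reproducing the indicator's infinite "wall" at the boundary.
for t in [1, 10, 100]:
    row = [-(1 / t) * math.log(-u) for u in (-2.0, -0.5, -0.01)]
    print(t, [round(v, 4) for v in row])
```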
==Algorithm==<br />
Given strictly feasible <math>x</math>, <math>t:=t^0>0</math>, <math>\mu >1</math>, tolerance <math>\epsilon>0</math><br><br />
'''repeat'''<br><br />
1. Compute <math>x^*(t)</math> by minimizing <math>tf_0 + \phi </math> subject to <math>Ax = b</math>, starting at x.<br><br />
2. Update <math>x:=x^*(t)</math>.<br><br />
3. Quit if <math>\frac{m}{t} \leq \epsilon</math>, else<br><br />
4. Increase <math>t:=\mu t</math>[3]<br />
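The four steps above can be sketched in code. The following is a minimal illustration, not taken from the article: the LP data, starting point, and parameter values are assumptions, and the problem has only inequality constraints so that each centering step is an unconstrained damped-Newton minimization of <math>tc^Tx+\phi(x)</math>.

```python
import numpy as np

# Barrier method sketch for:  min c^T x  s.t.  G x <= h.
# Illustrative data: max x1 + 2*x2 s.t. x1 + x2 <= 4, x1 <= 3,
# x2 <= 3, x >= 0; the optimum is at the vertex (1, 3).
c = np.array([-1.0, -2.0])
G = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0],
              [-1.0, 0.0], [0.0, -1.0]])
h = np.array([4.0, 3.0, 3.0, 0.0, 0.0])
m = len(h)

def centering_step(x, t, iters=50):
    """Step 1: minimize t*c^T x + phi(x) by damped Newton from x."""
    f = lambda z: t * c @ z - np.sum(np.log(h - G @ z))
    for _ in range(iters):
        r = h - G @ x                        # slacks, must stay > 0
        grad = t * c + G.T @ (1.0 / r)
        if np.linalg.norm(grad) < 1e-9:
            break
        hess = G.T @ (G / r[:, None] ** 2)   # sum_i g_i g_i^T / r_i^2
        dx = np.linalg.solve(hess, -grad)
        step = 1.0                           # backtrack: stay strictly
        while (np.any(h - G @ (x + step * dx) <= 0)      # feasible and
               or f(x + step * dx) > f(x) + 0.3 * step * grad @ dx):
            step *= 0.5                      # achieve sufficient decrease
        x = x + step * dx
    return x

x = np.array([1.0, 1.0])                     # strictly feasible start
t, mu, eps = 1.0, 10.0, 1e-6
while m / t > eps:                           # step 3: gap bound m/t
    x = centering_step(x, t)                 # steps 1-2
    t *= mu                                  # step 4: increase t
print(np.round(x, 3))                        # -> approximately [1. 3.]
```

The stopping rule uses the fact that a point on the central path for parameter <math>t</math> is no more than <math>m/t</math> suboptimal.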
<br />
==Example==<br />
min <math> f(x) = x_1 - 2x_2 </math><br><br />
s.t. <math> 1 + x_1 - x_2^2 \geq 0 </math><br><br />
<br />
<math> x_2 \geq 0 </math><br><br />
<br />
Using the Frisch (logarithmic) barrier function with barrier parameter <math>c > 0</math> yields the unconstrained problem:<br />
<br />
min <math> r(c,x) = x_1 - 2x_2 - c\log(1+x_1-x_2^2) - c\log(x_2) </math><br />
<br />
For a given parameter <math>c</math>, the first-order necessary conditions for optimality are:<br />
<br />
<math> 1 - \frac{c}{1+x_1-x_2^2} = 0 </math><br />
<br />
<math> -2 + \frac{2cx_2}{1+x_1-x_2^2} - \frac{c}{x_2} = 0 </math><br />
<br />
Hence<br />
<br />
<math> c = 1 + x_1 - x_2^2 </math><br />
<br />
Substituting <math>c</math> into<br />
<br />
<math> -2 + \frac{2cx_2}{1+x_1-x_2^2} - \frac{c}{x_2} = 0 </math><br />
<br />
we obtain<br />
<br />
<math> x_2^2 - x_2 - \tfrac{1}{2}c = 0 </math><br />
<br />
Hence<br />
<br />
<math> x_2 = \frac{1 \pm \sqrt{1+2c}}{2} </math>, of which the positive root is the only feasible solution.<br />
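The limiting behavior of this central path can be checked numerically. The short sketch below simply evaluates the closed-form expressions derived above for a shrinking sequence of barrier parameters:

```python
import math

# Central path of the example: min x1 - 2*x2
# s.t. 1 + x1 - x2^2 >= 0 and x2 >= 0.
# From the optimality conditions: x2 = (1 + sqrt(1 + 2c))/2
# (positive root) and c = 1 + x1 - x2^2, i.e. x1 = x2**2 - 1 + c.
for c in (1.0, 0.1, 1e-2, 1e-4, 1e-8):
    x2 = (1 + math.sqrt(1 + 2 * c)) / 2
    x1 = x2 ** 2 - 1 + c
    print(f"c={c:g}  x1={x1:.6f}  x2={x2:.6f}")
# As c -> 0 the path approaches (x1, x2) = (0, 1), where the
# objective value is -2 and the first constraint is active.
```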
<br />
=Primal-Dual IP Algorithm=<br />
<br />
The primal-dual interior-point method is most easily understood through the simplest NLP problem: one with only inequality constraints. <br />
Consider the following:<br />
<br />
minimize <math>f(x)</math> <br />
<br />
s.t. <math>g_i(x)\ge 0</math> with <math>i = 1,...,m</math>.<br />
<br />
We now introduce slack variables to turn all inequalities into non-negativities:<br />
<br />
minimize <math>f(x)</math> <br />
<br />
s.t. <math>g(x)- s = 0</math> with <math>s \ge 0</math>.<br />
<br />
The logarithmic barrier function is now introduced:<br />
<br />
minimize <math> f(x) - \mu~ \sum_{i=1}^m\log(s_i)</math> <br />
<br />
s.t. <math>g(x)-s=0</math><br />
<br />
Now incorporate the equality constraint(s) into the objective function using Lagrange multipliers:<br />
<br />
minimize <math>f(x) - \mu~ \sum_{i=1}^m\log(s_i) - y^T(g(x) - s)</math><br />
<br />
Next, set all of the derivatives with respect to <math>x</math>, <math>s</math>, and <math>y</math> equal to 0, where <math>W=\operatorname{diag}(s)</math>, <math>Y=\operatorname{diag}(y)</math>, and <math>e</math> is the vector of all ones:<br />
<br />
<math>\nabla f(x) - \nabla g(x)^Ty = 0</math><br />
<br />
<math>- \mu W^{-1} e + y = 0</math><br />
<br />
<math>g(x) - s = 0</math><br />
<br />
Rearranging:<br />
<br />
<math> \nabla f(x) - \nabla g(x)^Ty = 0 </math><br />
<br />
<math>W Y e = \mu e</math><br />
<br />
<math>g(x) - s = 0</math><br />
<br />
Utilize Newton's Method to determine the search directions, <math> \Delta x, \Delta s, \Delta y </math>:<br />
<br />
::<math>\begin{bmatrix} G(x,y) & 0 & -A(x)^T \\ 0 & Y & W \\ A(x) & -I & 0 \end{bmatrix} \begin{bmatrix} \Delta x \\ \Delta s \\ \Delta y \end{bmatrix} = \begin{bmatrix} -\nabla f(x) + A(x)^T y \\ \mu e - W Y e \\ -g(x) + s \end{bmatrix}</math><br />
<br />
where <math>G(x,y) = \nabla^2 f(x) - \sum_{i=1}^m\ y_i \nabla^2 g_i(x)</math><br />
<br />
and <math> A(x) = \nabla g(x)</math><br />
<br />
Using the second equation to eliminate <math>\Delta s</math> yields the reduced KKT system:<br />
<br />
<math>\begin{bmatrix} -G(x, y) & A^T(x) \\ A(x) & WY^{-1} \end{bmatrix} \begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix} = \begin{bmatrix} \nabla f(x) - A^T(x) y \\ -g(x) + \mu Y^{-1} e \end{bmatrix} </math><br />
<br />
From here, perform iterations, with the step lengths <math> \alpha^k </math> chosen (for example, by a fraction-to-the-boundary rule) so that the slack and dual variables remain positive: <br />
<br />
<math> x^{k+1} = x^{k} + \alpha^k \Delta x^k</math><br />
<br />
<math> s^{k+1} = s^{k} + \alpha^k \Delta s^k</math><br />
<br />
<math> y^{k+1} = y^{k} + \alpha^k \Delta y^k</math><br />
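The iterations above can be sketched numerically, here applied to the example problem from the barrier-method section. This is an illustrative sketch, not the article's code: the starting point, the fraction-to-the-boundary step rule, and the <math>\mu</math> schedule are all assumptions.

```python
import numpy as np

# Primal-dual interior-point sketch for the earlier example:
#   min x1 - 2*x2   s.t.  g1 = 1 + x1 - x2^2 >= 0,  g2 = x2 >= 0,
# with W = diag(s), Y = diag(y), e = ones.

def g(x):
    return np.array([1 + x[0] - x[1] ** 2, x[1]])

def A(x):                              # Jacobian of g
    return np.array([[1.0, -2 * x[1]], [0.0, 1.0]])

def G_mat(x, y):                       # Hess f - sum_i y_i * Hess g_i
    return np.array([[0.0, 0.0], [0.0, 2 * y[0]]])

grad_f = np.array([1.0, -2.0])         # gradient of the linear objective
x = np.array([0.5, 0.5])
s = g(x).copy()                        # strictly positive slacks
y = np.array([1.0, 1.0])               # strictly positive duals
e = np.ones(2)
mu = 1.0

for _ in range(200):
    W, Y = np.diag(s), np.diag(y)
    KKT = np.block([                   # the 3x3 block Newton system
        [G_mat(x, y),      np.zeros((2, 2)), -A(x).T         ],
        [np.zeros((2, 2)), Y,                W               ],
        [A(x),             -np.eye(2),       np.zeros((2, 2))],
    ])
    rhs = np.concatenate([-grad_f + A(x).T @ y,
                          mu * e - W @ Y @ e,
                          -g(x) + s])
    dx, ds, dy = np.split(np.linalg.solve(KKT, rhs), 3)
    alpha = 1.0                        # fraction-to-the-boundary rule
    for v, dv in ((s, ds), (y, dy)):
        neg = dv < 0
        if neg.any():
            alpha = min(alpha, 0.95 * float(np.min(-v[neg] / dv[neg])))
    x, s, y = x + alpha * dx, s + alpha * ds, y + alpha * dy
    mu = max(0.8 * mu, 1e-12)          # shrink the barrier parameter
print(np.round(x, 3))                  # approaches the optimum (0, 1)
```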
<br />
=Conclusion=<br />
The interior point method replaces the inequality constraints of a linear programming model with a smooth barrier that penalizes points approaching the boundary of the feasible region. These approximations are used when the problem has constraints that are discontinuous or otherwise troublesome, but can be modified so that a solver can handle them. Once the problem is formulated in this way, Newton's method is used to iteratively approach increasingly optimal solutions within the feasible region.<br />
<br />
= Sources =<br />
1. R.J. Vanderbei, Linear Programming: Foundations and Extensions (Chp 17-22). Springer, 2008.<br><br />
2. J. Nocedal, S. J. Wright, Numerical Optimization (Chp 14). Springer, 1999. <br><br />
3. S. Boyd, L. Vandenberghe, Convex Optimization (Chp 11). Cambridge University Press, 2009.</div>
<hr />
<div><br />
Authors: John Plaxco, Alex Valdes, Wojciech Stojko. (ChE 345 Spring 2014) <br><br />
Steward: Dajun Yue, Fengqi You<br><br />
Date Presented: May 25, 2014 <br><br />
=Introduction and Uses=<br />
[[File:InteriorPointMethod.png|thumb|200px|right|Examples of logarithmic barrier functions.]]<br />
Interior point methods are a type of algorithm that are used in solving both linear and nonlinear convex optimization problems that contain inequalities as constraints. The LP Interior-Point method relies on having a linear programming model with the objective function and all constraints being continuous and twice continuously differentiable. In general, a problem is assumed to be strictly feasible, and will have a dual optimal that will satisfy Karush-Kuhn-Tucker (KKT) constraints described below. The problem is solved (assuming there IS a solution) either by iteratively solving for KKT conditions or to the original problem with equality instead of inequality constraints, and then applying Newton's method to these conditions.<br />
<br />
Interior point methods came about from a desire for algorithms with better theoretical bases than the simplex method. While the two strategies are similar in a few ways, the interior point methods involve relatively expensive (in terms of computing) iterations that quickly close in on a solution, while the simplex method involves usually requires many more inexpensive iterations. From a geometric standpoint, interior point methods approach a solution from the interior or exterior of the feasible region, but are never on the boundary.[1]<br />
<br />
There are two important interior point algorithms: the barrier method and primal-dual IP method. The primal-dual method is usually preferred due to its efficiency and accuracy. Major differences between the two methods are as follows. There is only one loop/iteration in primal-dual because there is no distinction between outer and inner iterations as with the barrier method. In primal-dual, the primal and dual iterates do not have to be feasible.[3]<br />
<br />
=Barrier Method=<br />
For the barrier method algorithm, there a few approximations that must be made. Given a problem in the form of<br><br />
Minimize <math>f_0(x)</math><br><br />
subject to <math> f_i(x) \leq 0</math><br><br />
<math>Ax=b</math><br><br />
we must reformulate it to implicitly include the inequalities in the objective function. We can do this by creating a function that greatly increases the objective if a constraint is not met. Our conditions then changes to<br><br />
minimize <math> f_0(x) + \sum_i^m I_-(f_i(x))</math><br><br />
st <math> Ax=b</math><br><br />
where <math> I_-(x)=\left|\begin{array}{cc}0& x \leq 0\\ \infty & x > 0\end{array}\right|</math><br><br><br />
This problem, however, is not continuous. A modification can be made by approximating <math>I_-(x)</math> as a logarithm log(-x), which approaches infinity when x approaches 0 as we want, and makes all functions twice differentiable. We then put the logarithm over a variable that sets a level of accuracy for the approximation we make. Here we will call that variable t. We define<br><br />
<math>\phi(x)=-\sum_i^mlog(-f_i(x))</math><br><br />
which blows up if any of our constraints are violated. Our LP problem now becomes<br><br />
minimize <math> f_0(x) + \frac{1}{t}\phi(x)</math><br><br />
st <math> Ax=b</math><br><br />
<br><br />
This allows us to use Newton's method to follow what is called a Central Path, which is a series of points we iterate through that all satisfy the equality constraints <math>Ax=b</math> from the original problem, but give increasingly more optimized values for the objective function, with the inequality constraints <math>f_i(x)</math> not necessarily equal to 0.<br />
==Algorithm==<br />
Given strictly feasible <math>x, t:=t^0>0,\mu >1, \epsilon<0</math><br><br />
'''repeat'''<br><br />
1. Compute <math>x^*(t)</math> by minimizing <math>tf_0 + \phi </math> subject to <math>Ax = b</math>, starting at x.<br><br />
2. Update <math>x:=x^*(t)</math>.<br><br />
3. Quit if <math>\frac{m}{t} \leq \epsilon</math>, else<br><br />
4. Increase <math>t:=\mu t</math>[3]<br />
<br />
==Example==<br />
min <math> f(x) = x_1 - 2x_2 </math><br><br />
s.t. <math> 1 + x_1 - (x_2)^2 >= 0 </math><br> <br />
<br />
<math> x_2 >= 0 </math><br><br />
<br />
Use the Frisch barrier function to yield the unconstrained problem:<br />
<br />
min <math> r(c,x) = x_1 - 2x_2 - clog(1+x_1-(x_2)^2) - clog(x_2) </math><br />
<br />
For a specific parameter c, the first order necessary conditions for optimality are:<br />
<br />
<math> 1 - c/(1+x_1-(x_2)^2) = 0 </math><br />
<br />
<math> -2 + 2cx_2/(1+x_1-(x_2)^2) - c/x_2 = 0 </math><br />
<br />
Hence<br />
<br />
<math> c = 1 + x_1 - (x_2)^2 </math><br />
<br />
substituting c in<br />
<br />
<math> -2 + 2cx_2/(1+x_1-(x_2)^2) - c/x_2 = 0 </math><br />
<br />
we obtain<br />
<br />
<math> (x_2)^2 - x_2 - (1/2)c = 0 </math><br />
<br />
Hence<br />
<br />
<math> x_2 = (1 +/- sqrt(1+2c))/2 </math>, of which the positive is the only feasible solution<br />
<br />
=Primal-Dual IP Algorithm=<br />
<br />
The primal-dual interior-point method can easily be understood by using the simplest NLP problem; one with only inequality constraints. <br />
Consider the following:<br />
<br />
minimize <math>f(x)</math> <br />
<br />
s.t. <math>g_i(x)\ge 0</math> with <math>i = 1,...,m</math>.<br />
<br />
We now introduce slack variables to turn all inequalities into non-negativities:<br />
<br />
minimize <math>f(x)</math> <br />
<br />
s.t. <math>g(x)- s = 0</math> with <math>s \ge 0</math>.<br />
<br />
The logrithmic barrier function is now introduced:<br />
<br />
minimize <math> f(x) - \mu~ \sum_{i=1}^m\log(s_i)</math> <br />
<br />
s.t. <math>h(x)-s=0</math><br />
<br />
Now incorporate the equality constraint(s) into the objective function using Lagrange multipliers:<br />
<br />
minimize <math>f(x) - \mu~ \sum_{i=1}^m\log(s_i) - y^T(g(x) - s)</math><br />
<br />
Next, set all of the derivatives equal to 0:<br />
<br />
<math>\nabla f(x) - \nabla g(x)^Ty = 0</math><br />
<br />
<math>- \mu W^{-1} e + y = 0</math><br />
<br />
<math>g(x) - s = 0</math><br />
<br />
Rearranging:<br />
<br />
<math> \nabla f(x) - \nabla g(x)^Ty = 0 </math><br />
<br />
<math>W Y e = \mu e</math><br />
<br />
<math>g(x) - s = 0</math><br />
<br />
Utilize Newton's Method to determine the search directions, <math> \Delta x, \Delta s, \Delta y </math>:<br />
<br />
::<math>\begin{bmatrix} G(x,y) & 0 & -A(x)^T \\ 0 & Y & W \\ A(x) & -I & 0 \end{bmatrix} \begin{bmatrix} \Delta x \\ \Delta s \\ \Delta y \end{bmatrix} = \begin{bmatrix} -\nabla f(x) + A(x)^T y \\ \mu e - W Y e \\ -g(x) + s \end{bmatrix}</math><br />
<br />
where <math>G(x,y) = \nabla^2 f(x) - \sum_{i=1}^m\ y_i \nabla^2 g_i(x)</math><br />
<br />
and <math> A(x) = \nabla g(x)</math><br />
<br />
Using the 2nd equation, we solve for <math>\Delta s</math>, the result of which is the reduced KKT system:<br />
<br />
<math>\begin{bmatrix} -G(x, y) & A^T(x) \\ A(x) & WY^{-1} \end{bmatrix} \begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix} = \begin{bmatrix} \nabla f(x) - A^T(x) y \\ -g(x) + \mu Y^{-1} e \end{bmatrix} </math><br />
<br />
From here, perform iterations: <br />
<br />
<math> x^{k+1} = x^{k} + \alpha^k \Delta x^k</math><br />
<br />
<math> s^{k+1} = s^{k} + \alpha^k \Delta s^k</math><br />
<br />
<math> y^{k+1} = y^{k} + \alpha^k \Delta y^k</math><br />
<br />
=Conclusion=<br />
The Interior Point method approximates the constraints of a linear programming model as a set of boundaries surrounding a region. These approximations are used when the problem has constraints that are discontinuous or otherwise troublesome, but can me modified so that a linear solver can handle them. Once the problem is formulated in the correct way, Newton's method is used to iteratively approach more and more optimal solutions within the feasible region.<br />
<br />
= Sources =<br />
1. R.J. Vanderbei, Linear Programming: Foundations and Extensions (Chp 17-22). Springer, 2008.<br><br />
2. J. Nocedal, S. J. Wright, Numerical optimization (Chp 14). Springer, 1999. <br><br />
3. S. Boyd, L. Vandenberghe, Convex Optimization (Chp 11). Cambridge University Press, 2009</div>JohnPlaxcohttps://optimization.mccormick.northwestern.edu/index.php/File:InteriorPointMethod.pngFile:InteriorPointMethod.png2014-06-08T00:06:29Z<p>JohnPlaxco: </p>
<hr />
<div></div>JohnPlaxcohttps://optimization.mccormick.northwestern.edu/index.php/Interior-point_method_for_LPInterior-point method for LP2014-05-26T01:31:13Z<p>JohnPlaxco: </p>
<hr />
<div><br />
Authors: John Plaxco, Alex Valdes, Wojciech Stojko. (ChE 345 Spring 2014) <br><br />
Steward: Dajun Yue, Fengqi You<br><br />
Date Presented: May 25, 2014 <br><br />
=Introduction and Uses=<br />
Interior point methods are a type of algorithm that are used in solving both linear and nonlinear convex optimization problems that contain inequalities as constraints. The LP Interior-Point method relies on having a linear programming model with the objective function and all constraints being continuous and twice continuously differentiable. In general, a problem is assumed to be strictly feasible, and will have a dual optimal that will satisfy Karush-Kuhn-Tucker (KKT) constraints described below. The problem is solved (assuming there IS a solution) either by iteratively solving for KKT conditions or to the original problem with equality instead of inequality constraints, and then applying Newton's method to these conditions.<br />
<br />
Interior point methods came about from a desire for algorithms with better theoretical bases than the simplex method. While the two strategies are similar in a few ways, the interior point methods involve relatively expensive (in terms of computing) iterations that quickly close in on a solution, while the simplex method involves usually requires many more inexpensive iterations. From a geometric standpoint, interior point methods approach a solution from the interior or exterior of the feasible region, but are never on the boundary.[1]<br />
<br />
There are two important interior point algorithms: the barrier method and primal-dual IP method. The primal-dual method is usually preferred due to its efficiency and accuracy. Major differences between the two methods are as follows. There is only one loop/iteration in primal-dual because there is no distinction between outer and inner iterations as with the barrier method. In primal-dual, the primal and dual iterates do not have to be feasible.[3]<br />
<br />
=Barrier Method=<br />
For the barrier method algorithm, there a few approximations that must be made. Given a problem in the form of<br><br />
Minimize <math>f_0(x)</math><br><br />
subject to <math> f_i(x) \leq 0</math><br><br />
<math>Ax=b</math><br><br />
we must reformulate it to implicitly include the inequalities in the objective function. We can do this by creating a function that greatly increases the objective if a constraint is not met. Our conditions then changes to<br><br />
minimize <math> f_0(x) + \sum_i^m I_-(f_i(x))</math><br><br />
st <math> Ax=b</math><br><br />
where <math> I_-(x)=\left|\begin{array}{cc}0& x \leq 0\\ \infty & x > 0\end{array}\right|</math><br><br><br />
This problem, however, is not continuous. A modification can be made by approximating <math>I_-(x)</math> as a logarithm log(-x), which approaches infinity when x approaches 0 as we want, and makes all functions twice differentiable. We then put the logarithm over a variable that sets a level of accuracy for the approximation we make. Here we will call that variable t. We define<br><br />
<math>\phi(x)=-\sum_i^mlog(-f_i(x))</math><br><br />
which blows up if any of our constraints are violated. Our LP problem now becomes<br><br />
minimize <math> f_0(x) + \frac{1}{t}\phi(x)</math><br><br />
st <math> Ax=b</math><br><br />
<br><br />
This allows us to use Newton's method to follow what is called a Central Path, which is a series of points we iterate through that all satisfy the equality constraints <math>Ax=b</math> from the original problem, but give increasingly more optimized values for the objective function, with the inequality constraints <math>f_i(x)</math> not necessarily equal to 0.<br />
==Algorithm==<br />
Given strictly feasible <math>x, t:=t^0>0,\mu >1, \epsilon<0</math><br><br />
'''repeat'''<br><br />
1. Compute <math>x^*(t)</math> by minimizing <math>tf_0 + \phi </math> subject to <math>Ax = b</math>, starting at x.<br><br />
2. Update <math>x:=x^*(t)</math>.<br><br />
3. Quit if <math>\frac{m}{t} \leq \epsilon</math>, else<br><br />
4. Increase <math>t:=\mu t</math>[3]<br />
<br />
==Example==<br />
min <math> f(x) = x_1 - 2x_2 </math><br><br />
s.t. 1 + x_1 - (x_2)^2<br />
<br />
=Primal-Dual IP Algorithm=<br />
<br />
The primal-dual interior-point method can easily be understood by using the simplest NLP problem; one with only inequality constraints. <br />
Consider the following:<br />
<br />
minimize <math>f(x)</math> <br />
<br />
s.t. <math>g_i(x)\ge 0</math> with <math>i = 1,...,m</math>.<br />
<br />
We now introduce slack variables to turn all inequalities into non-negativities:<br />
<br />
minimize <math>f(x)</math> <br />
<br />
s.t. <math>g(x)- s = 0</math> with <math>s \ge 0</math>.<br />
<br />
The logrithmic barrier function is now introduced:<br />
<br />
minimize <math> f(x) - \mu~ \sum_{i=1}^m\log(s_i)</math> <br />
<br />
s.t. <math>h(x)-s=0</math><br />
<br />
Now incorporate the equality constraint(s) into the objective function using Lagrange multipliers:<br />
<br />
minimize <math>f(x) - \mu~ \sum_{i=1}^m\log(s_i) - y^T(g(x) - s)</math><br />
<br />
Next, set all of the derivatives equal to 0:<br />
<br />
<math>\nabla f(x) - \nabla g(x)^Ty = 0</math><br />
<br />
<math>- \mu W^{-1} e + y = 0</math><br />
<br />
<math>g(x) - s = 0</math><br />
<br />
Rearranging:<br />
<br />
<math> \nabla f(x) - \nabla g(x)^Ty = 0 </math><br />
<br />
<math>W Y e = \mu e</math><br />
<br />
<math>g(x) - s = 0</math><br />
<br />
Utilize Newton's Method to determine the search directions, <math> \Delta x, \Delta s, \Delta y </math>:<br />
<br />
::<math>\begin{bmatrix} G(x,y) & 0 & -A(x)^T \\ 0 & Y & W \\ A(x) & -I & 0 \end{bmatrix} \begin{bmatrix} \Delta x \\ \Delta s \\ \Delta y \end{bmatrix} = \begin{bmatrix} -\nabla f(x) + A(x)^T y \\ \mu e - W Y e \\ -g(x) + s \end{bmatrix}</math><br />
<br />
where <math>G(x,y) = \nabla^2 f(x) - \sum_{i=1}^m\ y_i \nabla^2 g_i(x)</math><br />
<br />
and <math> A(x) = \nabla g(x)</math><br />
<br />
Using the 2nd equation, we solve for <math>\Delta s</math>, the result of which is the reduced KKT system:<br />
<br />
<math>\begin{bmatrix} -G(x, y) & A^T(x) \\ A(x) & WY^{-1} \end{bmatrix} \begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix} = \begin{bmatrix} \nabla f(x) - A^T(x) y \\ -g(x) + \mu Y^{-1} e \end{bmatrix} </math><br />
<br />
From here, perform iterations: <br />
<br />
<math> x^{k+1} = x^{k} + \alpha^k \Delta x^k</math><br />
<br />
<math> s^{k+1} = s^{k} + \alpha^k \Delta s^k</math><br />
<br />
<math> y^{k+1} = y^{k} + \alpha^k \Delta y^k</math><br />
<br />
<br />
=Conclusion=<br />
The Interior Point method is a method that approximates the constraints of a linear programming model as a set of boundaries surrounding a region. These approximations are used when the problem has constraints that are discontinuous or otherwise troublesome, but can me modified so that a linear solver can handle them. Once the problem is formulated in the correct way, Newton's method is used to iteratively approach more and more optimal solutions within the feasible region.<br />
<br />
= Sources =<br />
1. R.J. Vanderbei, Linear Programming: Foundations and Extensions (Chp 17-22). Springer, 2008.<br><br />
2. J. Nocedal, S. J. Wright, Numerical optimization (Chp 14). Springer, 1999. <br><br />
3. S. Boyd, L. Vandenberghe, Convex Optimization (Chp 11). Cambridge University Press, 2009</div>JohnPlaxcohttps://optimization.mccormick.northwestern.edu/index.php/Interior-point_method_for_LPInterior-point method for LP2014-05-26T01:30:45Z<p>JohnPlaxco: </p>
<hr />
<div><br />
Authors: John Plaxco, Alex Valdes, Wojciech Stojko. (ChE 345 Spring 2014) <br><br />
Steward: Dajun Yue, Fengqi You<br><br />
Date Presented: May 25, 2014 <br><br />
=Introduction and Uses=<br />
Interior point methods are a type of algorithm that are used in solving both linear and nonlinear convex optimization problems that contain inequalities as constraints. The LP Interior-Point method relies on having a linear programming model with the objective function and all constraints being continuous and twice continuously differentiable. In general, a problem is assumed to be strictly feasible, and will have a dual optimal that will satisfy Karush-Kuhn-Tucker (KKT) constraints described below. The problem is solved (assuming there IS a solution) either by iteratively solving for KKT conditions or to the original problem with equality instead of inequality constraints, and then applying Newton's method to these conditions.<br />
<br />
Interior point methods came about from a desire for algorithms with better theoretical bases than the simplex method. While the two strategies are similar in a few ways, the interior point methods involve relatively expensive (in terms of computing) iterations that quickly close in on a solution, while the simplex method involves usually requires many more inexpensive iterations. From a geometric standpoint, interior point methods approach a solution from the interior or exterior of the feasible region, but are never on the boundary.[1]<br />
<br />
There are two important interior point algorithms: the barrier method and primal-dual IP method. The primal-dual method is usually preferred due to its efficiency and accuracy. Major differences between the two methods are as follows. There is only one loop/iteration in primal-dual because there is no distinction between outer and inner iterations as with the barrier method. In primal-dual, the primal and dual iterates do not have to be feasible.[3]<br />
<br />
=Barrier Method=<br />
For the barrier method algorithm, there a few approximations that must be made. Given a problem in the form of<br><br />
Minimize <math>f_0(x)</math><br><br />
subject to <math> f_i(x) \leq 0</math><br><br />
<math>Ax=b</math><br><br />
we must reformulate it to implicitly include the inequalities in the objective function. We can do this by creating a function that greatly increases the objective if a constraint is not met. Our conditions then changes to<br><br />
minimize <math> f_0(x) + \sum_i^m I_-(f_i(x))</math><br><br />
st <math> Ax=b</math><br><br />
where <math> I_-(x)=\left|\begin{array}{cc}0& x \leq 0\\ \infty & x > 0\end{array}\right|</math><br><br><br />
This problem, however, is not continuous. A modification can be made by approximating <math>I_-(x)</math> as a logarithm log(-x), which approaches infinity when x approaches 0 as we want, and makes all functions twice differentiable. We then put the logarithm over a variable that sets a level of accuracy for the approximation we make. Here we will call that variable t. We define<br><br />
<math>\phi(x)=-\sum_i^mlog(-f_i(x))</math><br><br />
which blows up if any of our constraints are violated. Our LP problem now becomes<br><br />
minimize <math> f_0(x) + \frac{1}{t}\phi(x)</math><br><br />
st <math> Ax=b</math><br><br />
<br><br />
This allows us to use Newton's method to follow what is called a Central Path, which is a series of points we iterate through that all satisfy the equality constraints <math>Ax=b</math> from the original problem, but give increasingly more optimized values for the objective function, with the inequality constraints <math>f_i(x)</math> not necessarily equal to 0.<br />
==Algorithm==<br />
Given strictly feasible <math>x, t:=t^0>0,\mu >1, \epsilon<0</math><br><br />
'''repeat'''<br><br />
1. Compute <math>x^*(t)</math> by minimizing <math>tf_0 + \phi </math> subject to <math>Ax = b</math>, starting at x.<br><br />
2. Update <math>x:=x^*(t)</math>.<br><br />
3. Quit if <math>\frac{m}{t} \leq \epsilon</math>, else<br><br />
4. Increase <math>t:=\mu t</math>[3]<br />
<br />
==Example==<br />
min <math> f(x) = x_1 - 2x_2 </math><br><br />
s.t. 1 + x_1 - (x_2)^2<br />
<br />
=Primal-Dual IP Algorithm=<br />
<br />
The primal-dual interior-point method can easily be understood by using the simplest NLP problem; one with only inequality constraints. <br />
Consider the following:<br />
<br />
minimize <math>f(x)</math> <br />
<br />
s.t. <math>g_i(x)\ge 0</math> with <math>i = 1,...,m</math>.<br />
<br />
We now introduce slack variables to turn all inequalities into non-negativities:<br />
<br />
minimize <math>f(x)</math> <br />
<br />
s.t. <math>g(x)- s = 0</math> with <math>s \ge 0</math>.<br />
<br />
The logrithmic barrier function is now introduced:<br />
<br />
minimize <math> f(x) - \mu~ \sum_{i=1}^m\log(s_i)</math> <br />
<br />
s.t. <math>h(x)-s=0</math><br />
<br />
Now incorporate the equality constraint(s) into the objective function using Lagrange multipliers:<br />
<br />
minimize <math>f(x) - \mu~ \sum_{i=1}^m\log(s_i) - y^T(g(x) - s)</math><br />
<br />
Next, set all of the derivatives equal to 0:<br />
<br />
<math>\nabla f(x) - \nabla g(x)^Ty = 0</math><br />
<br />
<math>- \mu W^{-1} e + y = 0</math><br />
<br />
<math>g(x) - s = 0</math><br />
<br />
Rearranging:<br />
<br />
<math> \nabla f(x) - \nabla g(x)^Ty = 0 </math><br />
<br />
<math>W Y e = \mu e</math><br />
<br />
<math>g(x) - s = 0</math><br />
<br />
Utilize Newton's Method to determine the search directions, <math> \Delta x, \Delta s, \Delta y </math>:<br />
<br />
::<math>\begin{bmatrix} G(x,y) & 0 & -A(x)^T \\ 0 & Y & W \\ A(x) & -I & 0 \end{bmatrix} \begin{bmatrix} \Delta x \\ \Delta s \\ \Delta y \end{bmatrix} = \begin{bmatrix} -\nabla f(x) + A(x)^T y \\ \mu e - W Y e \\ -g(x) + s \end{bmatrix}</math><br />
<br />
where <math>G(x,y) = \nabla^2 f(x) - \sum_{i=1}^m\ y_i \nabla^2 g_i(x)</math><br />
<br />
and <math> A(x) = \nabla g(x)</math><br />
<br />
Using the 2nd equation, we solve for <math>\Delta s</math>, the result of which is the reduced KKT system:<br />
<br />
<math>\begin{bmatrix} -G(x, y) & A^T(x) \\ A(x) & WY^{-1} \end{bmatrix} \begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix} = \begin{bmatrix} \nabla f(x) - A^T(x) y \\ -g(x) + \mu Y^{-1} e \end{bmatrix} </math><br />
<br />
From here, perform iterations: <br />
<br />
<math> x^{k+1} = x^{k} + \alpha^k \Delta x^k</math><br />
<br />
<math> s^{k+1} = s^{k} + \alpha^k \Delta s^k</math><br />
<br />
<math> y^{k+1} = y^{k} + \alpha^k \Delta y^k</math><br />
=Example=<br />
<br />
=Conclusion=<br />
The Interior Point method is a method that approximates the constraints of a linear programming model as a set of boundaries surrounding a region. These approximations are used when the problem has constraints that are discontinuous or otherwise troublesome, but can me modified so that a linear solver can handle them. Once the problem is formulated in the correct way, Newton's method is used to iteratively approach more and more optimal solutions within the feasible region.<br />
<br />
= Sources =<br />
1. R.J. Vanderbei, Linear Programming: Foundations and Extensions (Chp 17-22). Springer, 2008.<br><br />
2. J. Nocedal, S. J. Wright, Numerical optimization (Chp 14). Springer, 1999. <br><br />
3. S. Boyd, L. Vandenberghe, Convex Optimization (Chp 11). Cambridge University Press, 2009</div>