# Talk:Mathematical optimization


## Argmin

Sorry to serious math people for any errors in my explanation of min, arg min, etc.

Perhaps one of you could also clarify for me what the subscript x in min_x f(x) means. (I take this to mean, hopefully correctly, "What is the minimum value of f(x)?") Is it an indication that x is a variable, so you don't accidentally treat it as a constant? If so, when do you have to specify the subscript x and when do you not?

--Ryguasu 04:22 Nov 6, 2002 (UTC)

In the context of argmin and argmax, it's the value of x which minimizes f(x).
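The distinction the question is asking about can be illustrated with a short Python sketch (my own toy example, not from the discussion above): min_x f(x) is the smallest *value* of f, while argmin_x f(x) is the *point* x at which that value is attained.

```python
# Toy illustration of min vs. argmin over a finite set of candidates.
f = lambda x: (x - 2) ** 2 + 1

xs = [0, 1, 2, 3, 4]
min_value = min(f(x) for x in xs)  # min_x f(x): the smallest value of f
arg_min = min(xs, key=f)           # argmin_x f(x): the x achieving that value

print(min_value)  # 1
print(arg_min)    # 2
```

Here the minimum value is 1, but the argmin is 2, because f(2) = 1 is the smallest value f takes on the set.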

The last example of section 1 is not correct. Patrick 02:07 Dec 20, 2002 (UTC)

## Combinatorial optimization

In mathematics, the term optimization typically refers to the study of ...

I would say not typically, but all the time. So the word typically is redundant.

(When the set A contains a finite number of elements, the problem falls in the domain of combinatorial optimization.)

OK, you do have a point. But you see, to include all the special cases would make the definition way too long. For example, I specialize in infinite-dimensional optimization, where the domain of the objective function is a space of functions, or a space of shapes. But you see, if we add your special case, and my special case, what we get is a mess. So, for the sake of clarity, and at the expense of being the most general, I deleted your sentence. There is still that link at the bottom about combinatorial optimization that you put in. You can also add it to the Major subfields section, several paragraphs below the definition of optimization. But I would suggest that we keep the definition of optimization simple. I would be very interested in hearing what you think about this. Thanks! --Oleg Alexandrov 23:01, 18 Dec 2004 (UTC)

## General formulation and notations

Optimization problems in finite dimension can assume the following general form

${\displaystyle \min \limits _{x,y}\;\;Z=f(x,y)}$
${\displaystyle {\mbox{subject to: }}\ }$
${\displaystyle h(x,y)=a\ }$
${\displaystyle g(x,y)\leq b}$
${\displaystyle x^{L}\leq x\leq x^{U},x\in \Re ^{n}}$
${\displaystyle y\in \{0,1\}^{p}}$

where f is a scalar function of the continuous variables x and the discrete variables y, h is a q-dimensional vector function representing the equality constraints, g is an m-dimensional vector function representing the inequality constraints, and x^L and x^U are the lower and upper bounds on the variable x, respectively. The inequality constraints do not necessarily need to be formulated with an upper limit on their value (≤), but this formulation constitutes a general formalism, since any constraint in the form of a lower bound (≥) can easily be converted to an upper bound. Rigorously, a problem containing only the inequality constraints would, by itself, be a general representation of a mathematical programming problem, since the bounds on the variables can be immediately represented in this form, and an equality constraint can be represented by the association of two simultaneous inequality constraints having the same upper and lower bounds.

Optimization problems are divided into distinct classes according to the characteristics of the functions and variables that compose them. In the case in which all the functions that belong to the model are linear and the set of discrete variables is empty (p = 0), the problem belongs to the class of Linear Programming (LP) problems. If, conversely, any of the functions in the problem presents a non-linearity while its variables all remain in a continuous space, the problem belongs to the class of Non-Linear Programming (NLP) problems. The special case in which the objective function of a continuous problem contains quadratic and bilinear terms and the entire set of constraints is composed of linear functions is regarded as Quadratic Programming (QP). Finally, if the problem contains discrete variables (p > 0), it belongs to the class of Mixed Integer Linear Programming (MILP) problems if it is composed of linear functions, and Mixed Integer Non-Linear Programming (MINLP) problems otherwise.
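A tiny hand-made instance of the mixed-integer form above may help fix ideas (my own toy example, not from the section): minimize f(x, y) = (x − y)² + y with bounds 0 ≤ x ≤ 2 and y ∈ {0, 1}, so p = 1. Because the objective is convex in x for each fixed y, the inner continuous problem can be solved exactly and the single discrete variable simply enumerated:

```python
# Toy MINLP in the general form above: min (x - y)^2 + y,
# 0 <= x <= 2, y in {0, 1}.  For each fixed y, the convex inner
# problem in x is solved by clipping the unconstrained optimum
# (x = y) to the bounds; the discrete variable y is enumerated.

def solve_inner(y, x_lo=0.0, x_hi=2.0):
    """Minimize (x - y)^2 + y over x in [x_lo, x_hi]."""
    x = min(max(float(y), x_lo), x_hi)
    return x, (x - y) ** 2 + y

best = min((solve_inner(y) + (y,) for y in (0, 1)), key=lambda t: t[1])
x_opt, z_opt, y_opt = best
print(x_opt, z_opt, y_opt)  # 0.0 0.0 0
```

Enumeration is only viable here because p = 1; real MINLP solvers use branch-and-bound or similar schemes instead of brute force.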

## Calculus of Variations

Calculus of variations is about sending an action integral over some space to an extremum by varying a function of the coordinates. It may be used in the context of temporal trajectories but that is not the definition of it and is not even the definition given by the wikipedia page. The suggestion that Calculus of Variations is solely used to compute trajectories through time is misleading.

Resolved

## Maths literacy

Plan 102.132.225.41 (talk) 15:13, 30 May 2022 (UTC)

## equal

There are a fair number of optimization problems where the optimum is when two quantities are equal. For example, maximum power transfer occurs when the load impedance equals the source impedance. The actual reason I am writing this is that there is no mention of Kelvin's law for conductor size, where the optimum is when the annual cost of power losses equals the annual cost of the power line. But there are so many problems that have a similar form, or at least close enough, that there should be a page for that. Gah4 (talk) 07:40, 2 June 2022 (UTC)
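The maximum-power-transfer example mentioned above can be checked numerically with a short sketch (values V and R_s are my own arbitrary choices, and I use resistances rather than general impedances): the load power P(R_L) = V²·R_L / (R_s + R_L)² peaks exactly where R_L = R_s.

```python
# Numeric check: load power is maximized when load resistance
# equals source resistance (maximum power transfer).
V, R_s = 10.0, 50.0
power = lambda R_L: V**2 * R_L / (R_s + R_L) ** 2

loads = [R_s * k / 10 for k in range(1, 31)]  # sweep 5 .. 150 ohms
best = max(loads, key=power)
print(best)  # 50.0
```

The sweep picks out R_L = 50 Ω, matching the source resistance, which is the pattern the comment describes: the optimum sits where two quantities are equal.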

## One thing I hoped to find in the article (but did not):

One thing I hoped to find in the article is the answer to this question:

Who first recognized that by solving an equation of the form f(x) = 0 for x, one finds the critical points of the function f, which can then be used to determine the maximum and minimum values taken by f(x) for x in an interval [a, b] of the real line?

Was it clearly either Newton or Leibniz before the other one knew this?

Or did they both claim priority on that point in their reported quarrel about who did what first?

Or, perhaps history just doesn't know the answer.

But there must be an earliest known publication of that method of finding maxima and minima!

(This optimization technique, which every first-semester calculus student learns, has undoubtedly saved the world many billions of dollars, and I suspect I am underestimating.)

If anyone knows the answer, it would be good to include that information in the article. 2601:200:C000:1A0:D0B:17BA:E5BF:AD9F (talk) 23:40, 20 June 2022 (UTC)
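The first-semester technique described in the question can be shown on a concrete example (my own illustration): to find the extrema of f(x) = x³ − 3x on [−2, 2], solve f′(x) = 3x² − 3 = 0, giving critical points x = ±1, and then compare f at the critical points and the interval endpoints.

```python
# Extrema of f(x) = x^3 - 3x on [-2, 2] via critical points:
# f'(x) = 3x^2 - 3 = 0 gives x = -1 and x = 1; compare these
# with the endpoints of the interval.
f = lambda x: x**3 - 3 * x

critical = [-1.0, 1.0]                  # roots of f'(x) = 0
candidates = [-2.0] + critical + [2.0]  # endpoints plus critical points

maximum = max(candidates, key=f)
minimum = min(candidates, key=f)
print(f(maximum), f(minimum))  # 2.0 -2.0
```

The maximum value 2 is attained at x = −1 (and also at the endpoint x = 2), and the minimum −2 at x = 1 (and at x = −2), which is exactly the candidate-comparison method the comment refers to.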

## Mathematical programming as a type of declarative programming

The page on Programming paradigm says that "mathematical" programming is a type of "declarative programming" "in which the desired result is declared as the solution of an optimization problem", and the link on the word "mathematical" links here because it is a link to "Mathematical programming", which redirects to here. As far as I can tell, this page doesn't give any indications or examples as to how a fully fledged programming language could be made up of solving optimization problems, though. (Here, by "fully fledged" I'm thinking "Turing complete/equivalent", but I have to admit that it doesn't say that on the "programming paradigm" page, and the term "programming language" can refer to languages that are not Turing complete, even if it usually doesn't.) The closest thing I could find was the "Machine learning" section under "Applications", which consists of a link to Machine learning#Optimization. DubleH (talk) 18:22, 15 July 2022 (UTC)