# Talk:Knapsack problem


## There is a major problem

There is a major misleading and incorrect sentence in the FPTAS section: the FPTAS algorithm refers to the dynamic programming mentioned above. The dynamic programming is correct but assumes that the weights are integers; however, the FPTAS rounds the VALUES, ergo one cannot use this dynamic programming after the rounding. Instead, one should use a dynamic programming on the values; see http://math.mit.edu/~goemans/18434S06/knapsack-katherine.pdf for more details. (Note that in the paper the base condition should be A(0,0) = 0 instead of their base condition.) I can rewrite it by myself but unfortunately my English level is not strong enough. — Preceding unsigned comment added by Ilanrcohen (talkcontribs) 08:17, 24 August 2015 (UTC)
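A minimal Python sketch of the value-indexed dynamic program the comment describes (function and variable names are illustrative, not from the article): the table stores, for each achievable total value p, the minimum weight needed to reach it, with base condition A(0) = 0 as the comment notes.

```python
def knapsack_by_value(values, weights, capacity):
    """0-1 knapsack DP indexed by total value rather than by weight.

    Returns the maximum total value achievable within `capacity`.
    Assumes integer values (weights may be arbitrary).
    """
    n = len(values)
    total = sum(values)
    INF = float("inf")
    # A[p] = minimum weight needed to achieve total value exactly p
    A = [0] + [INF] * total  # base condition A(0) = 0
    for i in range(n):
        # scan values downward so each item is used at most once (0-1 case)
        for p in range(total, values[i] - 1, -1):
            if A[p - values[i]] + weights[i] < A[p]:
                A[p] = A[p - values[i]] + weights[i]
    # best value whose minimum weight fits in the knapsack
    return max(p for p in range(total + 1) if A[p] <= capacity)
```

This is the formulation the FPTAS needs, since rounding perturbs the values (which index the table) rather than the weights.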

## How about four color problem

Authors of this page: how about the four-color problem, or the job-shop problem? I'm sure there are others, but those are the ones I've heard of. Job shop is the problem of scheduling random jobs of random size for maximum use of the shop. Four color is how many colors are required for a map so that no country borders another with the same color. Ortolan88

see Linear programming ... -- Tarquin 20:30 Jan 14, 2003 (UTC)

Seems that bin packing is also related, since it is also about efficiency.

P.S. I don't see the four-color problem as being related.

--MarkH

## Reference to Dantzig paper on the greedy approximation algorithm

In the section on the greedy 2-approximation for the unbounded knapsack problem, there is a reference to 'Discrete-Variable Extremum Problems' by George B. Dantzig, cited as providing the algorithm. As far as I can see, Dantzig is proposing the algorithm for the 0-1 problem (which he defines on page 273). His Figure 3, which illustrates the procedure, also includes only one instance of each object (or object type), and he constrains the membership count to 0 or 1 (Equation 10). At no point does he give any formal guarantee about the accuracy (that is, the approximation bound) for this algorithm. The section should therefore probably be reworded, so as not to indicate that (1) the application to unbounded knapsack comes from this paper, and (2) the paper gives the 2-approximation bound, as it does neither (as far as I can see). Also, either a reference or a brief proof (or perhaps both) for the 2-bound should be included, as this would now be a baseless claim, as it stands, without the Dantzig reference for support. Mlhetland (talk) 13:59, 5 September 2010 (UTC)

The proof that it's a 2-approximation is really simple, by the way: Because you can have as many items as you wish, you can always cover at least half the knapsack with the best (feasible) object type. If it's bigger than half the size, just use one – if it's smaller, use more than one. The optimum cannot possibly be better than filling the knapsack completely with the best (feasible) object type, and we're no worse than half of that. Mlhetland (talk) 14:24, 5 September 2010 (UTC)
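The argument above can be checked against a small greedy sketch (names are illustrative): the density-ordered greedy for the unbounded case, which is at least as good as repeatedly using only the single best feasible object type, and is therefore within a factor 2 of the optimum by the reasoning in the comment.

```python
def greedy_unbounded(values, weights, capacity):
    """Greedy 2-approximation for the unbounded knapsack problem.

    Takes object types in order of decreasing value density and packs
    as many copies of each as still fit. Assumes integer weights.
    """
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    total, remaining = 0, capacity
    for i in order:
        if weights[i] <= remaining:
            count = remaining // weights[i]  # as many copies as fit
            total += count * values[i]
            remaining -= count * weights[i]
    return total
```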

The section should mention the greedy 2-approximation algorithm for the 0-1 case also. At the end of the normal greedy, compare the solution to the most valuable item, and choose this item instead if it is a better solution. The 2-bound is proved in "On the approximation ratio of Greedy Knapsack" by Pekka Kilpeläinen. The current claim that the greedy algorithm performs arbitrarily badly in the bounded case is heavily misleading. (204.102.224.235 (talk) 06:24, 25 March 2014 (UTC))
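A sketch of the modified greedy described above (illustrative names, not from the cited paper): run the 0-1 density greedy, then compare against the single most valuable item that fits, and return the better of the two.

```python
def modified_greedy_01(values, weights, capacity):
    """Modified greedy for 0-1 knapsack: density greedy vs. best single item.

    The better of the two solutions is a 2-approximation, unlike the
    plain density greedy, which can be arbitrarily bad in the 0-1 case.
    """
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    greedy_total, remaining = 0, capacity
    for i in order:
        if weights[i] <= remaining:  # each item used at most once
            greedy_total += values[i]
            remaining -= weights[i]
    best_single = max((values[i] for i in range(len(values))
                       if weights[i] <= capacity), default=0)
    return max(greedy_total, best_single)
```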

Some reference on the fact that the unbounded knapsack problem is NP-complete might also be in order. It seems, in fact, that if the number of object types is constant, you can solve the problem in polynomial time: http://books.google.com/books?id=u5DB7gck08YC&pg=PA211 Mlhetland (talk) 14:45, 5 September 2010 (UTC)

## Dynamic algorithm is wrong

As stated, you could reuse items: when you choose a new item on each iteration, you then use your previous solution for the highest value you can get at a cost equal to the iteration cost you are on minus the cost of your item. The problem is, what if that solution already uses the item you chose? Lonjers (talk) 01:43, 20 April 2008 (UTC)

Agreed. The current version looks like a solution for the 0-1 problem, which does not allow repetitions. For the unbounded problem the recursion should probably be
A(i, j) = max(A(i − 1, j), v_i + A(i, j − c_i)) if c_i ≤ j.
Can anyone check this? 85.2.63.93 (talk) 08:18, 20 April 2008 (UTC)
appears to be fixed now Lonjers (talk) 07:32, 17 June 2009 (UTC)
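For reference, a small Python implementation of the recursion proposed above, A(i, j) = max(A(i − 1, j), v_i + A(i, j − c_i)) when c_i ≤ j, which does allow an item to be taken repeatedly (names are illustrative):

```python
def unbounded_knapsack(values, costs, capacity):
    """Unbounded knapsack via the two-index recursion A(i, j).

    The second case recurses on A(i, ...) rather than A(i-1, ...),
    so item i may be reused any number of times.
    """
    n = len(values)
    A = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(capacity + 1):
            A[i][j] = A[i - 1][j]  # skip item i entirely
            if costs[i - 1] <= j:  # take another copy of item i
                A[i][j] = max(A[i][j],
                              values[i - 1] + A[i][j - costs[i - 1]])
    return A[n][capacity]
```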

## Question regarding the given solution

What happens if C, the total sum, isn't an integer? Then there is no way one can reach A(C). We can approximate C either by floor or ceil, but we can't be sure that the solution will be the best one, since one of the values can be smaller than the difference between C and its approximated value. Bravemuta 16:57, 14 July 2007 (UTC)

By the way, the 1/2 approximation solution given in this article is also wrong. Consider if you've got one item that almost fills the knapsack, with value 1. The other items have value slightly less than one, and small size. Then you would get only value 1, when you could pick a lot of the smaller items and get a better value. —Preceding unsigned comment added by 66.67.122.13 (talk) 23:01, 16 December 2007 (UTC)

You are right, I think the 1/2-approximation algorithm given is wrong. The correct one is to order the items by the ratio of their value to cost and take them from greatest to least. There is also a (1 − ε)·OPT polynomial-time algorithm. I will fix this soon. Lonjers (talk) 01:39, 20 April 2008 (UTC)

## DP for 0-1 only?

The last paragraph uses the word "essentials", which seems totally wrong – it doesn't make any sense to me (English first language, still amateur [but reasonably knowledgeable] computer scientist). Also, the main thing: is it just me, or is the dynamic programming algorithm only for the 0-1 problem (neither this article nor Levitin03 says so explicitly, but as far as I can see...)? --대조 | Talk 20:24, 15 December 2005 (UTC)

The DP algorithm will work for the unbounded variant of the knapsack problem just fine. In fact, the algorithm as stated on the page would need to be modified to determine a solution for the 0/1 problem. "Essentials" is defined in the opening lines of this article. It's an unusual word, but I assume there's some tradition of using it here. --Dantheox 06:29, 16 December 2005 (UTC)

## Need graphic change

It would be great to get the main graphic changed so that the dollar signs are in the right places. (They should precede the numbers: "$1" is correct, "1$" is incorrect.) —The preceding unsigned comment was added by Pkirlin (talkcontribs).

Done. Isn't SVG great? —Keenan Pepper 07:43, 9 July 2007 (UTC)
Additionally, $1/1kg, $2/2kg and $4/12kg should be changed to something else, as they are obviously worse than $2/1kg. --Ancient Anomaly (talk) 13:08, 19 August 2010 (UTC)

## Continuous knapsack problem

Continuous knapsack problem redirects here but is not mentioned (presumably, it was removed). If I recall correctly this is the variant that allows fractional parts of items to be placed in the knapsack, and has a trivial polynomial-time solution. This is an important variant if only to demonstrate how changes in the problem description can alter the solution's complexity. Dcoetzee 03:40, 5 February 2008 (UTC)

## Isn't the given problem in coNP????

The decision version of the 0-1 knapsack problem, as given, seems to be a member of coNP, NOT NP as the article states.

To see this: consider that to decide if a given solution to the 0-1 knapsack problem optimizes $\sum_{j=1}^{n} p_j x_j$ subject to $\sum_{j=1}^{n} w_j x_j \leq c$, we only need a poly-size witness proving it *doesn't* (some other solution with a higher value subject to the constraints). If there were a poly-size witness proving it *was* a member, then the decision problem would be in NP. —Preceding unsigned comment added by Austinjp (talkcontribs) 13:34, 31 March 2008 (UTC)

According to the article, the decision problem form of the knapsack problem is the question "can a value of at least V be achieved without exceeding the cost C?" This is not the same as "is a given solution the optimal solution?". -- Jitse Niesen (talk) 14:58, 31 March 2008 (UTC)
I see. There is some ambiguity: I missed the one-sentence blurb about the decision problem before the contents, and assumed the statement of NP-completeness at the end of the definitions section was about a decision form of the optimization problems in that same section (which we seem to agree is not NP-complete). I will do what I can to fix the ambiguity; feel free to correct me where I fail. --Austin Parker (talk) 01:02, 1 April 2008 (UTC)
Much better, thanks. I assume you are sure that the decision form of the optimization problem is coNP-complete. This is probably something obvious, but I know almost no complexity theory.
By the way, welcome here. I hope that you're enjoying the place. -- Jitse Niesen (talk) 10:23, 1 April 2008 (UTC)
Please ignore the bit I struck through. It is completely obvious. -- Jitse Niesen (talk) 10:26, 1 April 2008 (UTC)
You're choosing the wrong decision problem version. As with most optimization problems, the standard decision version of the problem is to ask, given a problem and a bound v, is there a solution with value at least v? Given an algorithm for solving this problem, finding the optimal solution can then be done by binary search. This is in fact in NP and NP-complete. The complexity of other versions of this problem is not discussed in the literature and generally isn't very important, so I've removed them. Besides that, it's odd to discuss the complexity of other decision versions of one specific optimization problem and none of the hundreds of others. Dcoetzee 08:37, 2 April 2008 (UTC)
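The binary-search reduction from the optimization version to the decision version can be sketched as follows (hypothetical oracle interface, assuming integer values; `decide(v)` answers "is there a feasible solution with value at least v?"):

```python
def maximize_via_decision(decide, v_lo, v_hi):
    """Find the optimal value using only a yes/no decision oracle.

    Assumes decide is monotone (True up to the optimum, False after),
    decide(v_lo) is True, and values are integers in [v_lo, v_hi].
    Makes O(log(v_hi - v_lo)) oracle calls.
    """
    while v_lo < v_hi:
        mid = (v_lo + v_hi + 1) // 2  # round up to guarantee progress
        if decide(mid):
            v_lo = mid       # a solution with value >= mid exists
        else:
            v_hi = mid - 1   # optimum is strictly below mid
    return v_lo
```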

## Translate French page

Anyone interested in using Yahoo's babelfish or other translater to copy the featured French page version? It seems much more in depth with well-written explanations. It's pretty well translated at babelfish (though some copy editing needs to be done). —Preceding unsigned comment added by 128.252.78.87 (talk) 04:21, 16 September 2008 (UTC)

Good idea. I would like to do this (as soon as I get some spare time).
Fairfield (talk) 09:59, 16 October 2008 (UTC)

## I made a number of changes in notation

Hi everybody.

Over the past few days, I made a number of minor changes to the page. These changes cleaned up some inconsistency in terminology and notation. They also made the "integers" restriction explicit.

I hope I did not annoy the other users of this page by being so rash. As you can guess, I am a newcomer to the Wikipedia.

Fairfield (talk) 20:59, 15 October 2008 (UTC)

## Dijkstra???

Couldn't you simply use Dijkstra's path-finding algorithm for the 0-1 knapsack problem? You would have to calculate cost as a ratio of value to weight and then add the items until you reach a full knapsack. —Preceding unsigned comment added by ThomHehl (talkcontribs) 11:41, 8 May 2009 (UTC)

## Sandbox version

I decided to try to improve this article a while back, though I'm no longer interested. Here is what I came up with (permalink). I've merged some new sections into the article. If anyone else wants to try their hand at improving the article, feel free to steal bits from my sandbox version. --Cryptic C62 · Talk 01:14, 1 November 2010 (UTC)

## Photography?

Can someone clarify the following sentence from the text: "In the field of Photography, the term knapsack problem is often used to refer specifically to the subset sum problem and is commonly known as one of Karp's 21 NP-complete problems.".

Should it really say "Photography"? What does Photography have to do with this? Avl (talk) 17:55, 8 August 2012 (UTC)

## Dynamic Programming Solution to Unbounded Knapsack Problem does not make sense to me

After the pseudo-code for the Dynamic Programming Solution to Unbounded Knapsack Problem, the author says the answer will be found by tabulating the values in m[]. However, the pseudo-code never assigns any values to m[]. A reference to where the author found the algorithm would also help. — Preceding unsigned comment added by 173.59.73.254 (talk) 19:23, 26 September 2012 (UTC)

Yeah, it seems wrong to me for a number of reasons. It never counts an item twice, so it seems to solve the 0-1 problem. The maximum value is in T[n][W]. The composition can be found by iterating over T[n][0–W] and "packing" an item if the value increases. — Preceding unsigned comment added by 78.131.137.62 (talk) 07:10, 14 November 2012 (UTC)
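For comparison, here is a sketch that actually assigns into m[], as the article's pseudo-code is presumably meant to (illustrative names; this is the standard one-dimensional unbounded recurrence m[w] = max over items of m[w − w_i] + v_i):

```python
def unbounded_knapsack_1d(values, weights, W):
    """Unbounded knapsack with a one-dimensional table m.

    m[w] holds the best value achievable with capacity w. Capacities
    are scanned upward, so each item may be packed repeatedly.
    """
    m = [0] * (W + 1)
    for w in range(1, W + 1):
        for v, wt in zip(values, weights):
            if wt <= w:
                m[w] = max(m[w], m[w - wt] + v)
    return m[W]
```

Tabulating m[0..W] this way is what makes the final answer readable off m[W].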

## Examples are sorely lacking

Although there are about π / 3 ≈ 1.047 examples presented (in the Applications section) — and none anywhere else — these are extremely thin and totally inadequate to really illustrate what is going on.

How about at least one or two fully explained examples, of at least the simplest type of knapsack problem — not too trivial and not too complicated. Maybe one example that is fully solved, along with its solution, and one more that is perhaps just a little too difficult to be solved in any plausible amount of time on today's computers.

It is totally ridiculous that people forget entirely that this is not a journal article; it is an encyclopedia, for beginners to learn from. Imagine you were teaching a course on knapsack problems to beginners. If you don't include a few fully worked-through examples, you probably have some important things to learn about good teaching. Daqu (talk) 23:29, 14 April 2016 (UTC)

I would agree with this. Because many of the Wikipedia articles on technical subjects are written in a fashion which is difficult for the curious (i.e. not already familiar with the subject), the next best thing until a rewrite with better communication is available would be more examples. Readers can then at least deduce what is happening, even if the descriptions are given in archaic shorthand. I must repeat again, as I have in other articles: the usefulness of an encyclopedia is in its ability to convey knowledge, and whatever the arguments are for "Well, learn the language then!", it fails in this for a great many people.

78.149.206.254 (talk) 20:42, 7 July 2016 (UTC) --Sam, UK

## Multi-objective version

As stated in the article, this version is ill-posed, because it does not rigorously define an optimal solution. Optimality is defined with respect to a single objective function; if there are multiple objective functions then there is no single way to determine optimality. A solution that is maximal for all the objectives is good, but what if we have to choose between solutions that maximize different objectives?--Jasper Deng (talk) 17:40, 15 August 2016 (UTC)

Multi-objective optimization problems may be ill-posed or well-posed according to the particular statement of the problem and how the multiple criteria are handled. See Multi-objective optimization for a general discussion. --Mark viking (talk) 00:50, 16 August 2016 (UTC)

## History of Knapsack Problem

Problem 10.1 of the Scottish Book appears to be an example of the Knapsack problem. http://kielich.amu.edu.pl/Stefan_Banach/pdf/ks-szkocka/ks-szkocka3ang.pdf Itullis (talk) 09:11, 5 April 2018 (UTC)

## The example is wrong

The ratio of the solution (2033/67 = 30.34) is higher than the ratio of any of the items. — Preceding unsigned comment added by 72.162.48.250 (talk) 11:47, 18 October 2018 (UTC)

## Daily fantasy sports?

"The problem often arises in resource allocation where there are financial constraints and is studied in fields such as combinatorics, computer science, complexity theory, cryptography, applied mathematics, and daily fantasy sports."

How is daily fantasy sports a field comparable to combinatorics, computer science, etc.? Jwood (leave me a message) See what I'm up to 17:51, 20 May 2019 (UTC)

## An editorial observation

It's really hard to fit the most relevant information into this article without making it too long and unreadable. <wink>

Thanks for your hard work, everyone. --ESP (talk) 02:18, 1 August 2020 (UTC)