KNAPSACK PROBLEMS WITH SETUP Yanchun Yang A Dissertation Submitted to the Graduate Faculty of Auburn University in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy Auburn, Alabama August 7, 2006 KNAPSACK PROBLEMS WITH SETUPS Except where reference in made to the work of others, the work described in this dissertation is my own or was done in collaboration with my advisory committee. This dissertation does not include proprietary or classified information. _______________________________ Yanchun Yang Certificate of Approval: _________________________ _________________________ Saeed Maghsoodloo Robert L. Bulfin, Chairman Professor Technology Management Professor Industrial and Systems Engineering Industrial and Systems Engineering _________________________ _________________________ Jorge Valenzuela Stephen L. McFarland Associate Professor Dean Industrial and Systems Engineering Graduate School iii KNAPSACK PROBLEMS WITH SETUP Yanchun Yang Permission is granted to Auburn University to make copies of this dissertation at its discretion, upon request of individuals or institutions and at their expense. The author reserves all publication rights. _____________________________ Signature of Author _____________________________ Date of Graduation iv VITA Yanchun Yang, daughter of Jie Yang and Jianmei Zhao, was born on April 14, 1977, in Jiagedaqi Daxinganling, Heilongjiang Province, P.R.China. She graduated with the degrees of Bachelor of Science (Industrial Engineering) in 1998 and Master of Management Engineering in 2001, both from Northeastern University, Shenyang, P.R.China. She entered Graduate School, Auburn University, in January, 2003. v DISSERTATION ABSTRACT KNAPSACK PROBLEMS WITH SETUP Yanchun Yang Doctor of Philosophy, August 7, 2006 (MISE, Northeastern University, China, 2001) (B.S., Northeastern University, China, 1998) 107 Typed Pages Directed by Robert L. 
Bulfin This research studies three integer programming models which can be applied to order acceptance in make-to-order manufacturing or regional project selection in multiple periods. All three models are the variations of the binary knapsack problems and they are called the knapsack problem with setup (KPS), the multiple knapsacks problem with setup (MKPS) and the multiple-choice knapsack problem with setup (MCKS), respectively. In all three models, jobs belong to different families and some variables represent setup for a family of jobs: if a setup is not done, no jobs in this family can be processed; if two jobs are processed sequentially, no setup is required. vi Branch-and-bound algorithms are used to obtain the optimal solutions to all three models. Setup variables are branched on. After all setup variables are fixed, the models are transformed to a (several) knapsack problem(s). For each model, an independent linear knapsack problem is developed to give an upper bound. When a setup variable is fixed during branching, we update certain variables in the linear knapsack problem. The optimal objective of the updated linear knapsack problem is an upper bound on the generated sub-problem. The rounded LP solution of the linear knapsack problem for KPS or MCKS corresponds to an incumbent of KPS or MCKS. A greedy algorithm is developed to obtain a lower bound on MKPS. Computational experiments show the effectiveness of these algorithms. vii Style manual or journal used Bibliography conforms to those of European Journal of Operational Research Computer software used ANSI C, AMPL, Microsoft Office Excel and Microsoft Office Word viii TABLE OF CONTENTS LIST OF TABLES............................................................................................................. xi LIST OF FIGURES .......................................................................................................... xii I. 
INTRODUCTION........................................................................................................... 1 1.1. Objectives and significance .................................................................................. 1 1.2. Mathematical Model............................................................................................. 3 1.2.1. Order acceptance ........................................................................................ 3 1.2.2. Regional project selection with a fixed budget .......................................... 5 1.3. Basic research method .......................................................................................... 7 1.3.1. Cutting Plane .............................................................................................. 7 1.3.2. Dynamic Programming .............................................................................. 7 1.3.3. Branch and Bound ...................................................................................... 8 1.4 Relaxation Method................................................................................................. 8 1.4.1. Linear Relaxation ....................................................................................... 8 1.4.2. Surrogate Relaxation .................................................................................. 9 1.4.3. Lagrangean Relaxation............................................................................. 10 References.................................................................................................................. 11 ?. KNAPSACK PROBLEM WITH SETUP .................................................................. 12 Abstract...................................................................................................................... 12 2.1. Introduction......................................................................................................... 12 2.2. 
Literature survey................................................................................................. 14 2.3. Background......................................................................................................... 16 2.4. Solution algorithm .............................................................................................. 19 2.4.1. Fixing variables ........................................................................................ 19 2.4.2. Bounding .................................................................................................. 20 2.4.3. Choosing a new sub-problem ................................................................... 20 ix 2.4.4. Heuristic ................................................................................................... 21 2.5. Computational experiments................................................................................ 21 2.6. Conclusions......................................................................................................... 26 Appendix A. is greater than ............................................................................ 27 0i r 1, +ti r References.................................................................................................................. 27 ?. MULTIPLE KNAPSACK PROBLEM WITH SETUP.............................................. 29 Abstract...................................................................................................................... 29 3.1. Introduction......................................................................................................... 29 3.2. Linear knapsack problems and knapsack problem with setup............................ 33 3.2.1. Linear knapsack problem ......................................................................... 33 3.2.2. Algorithm for LKPS................................................................................. 34 3.2.3. 
An upper bound on MKPS ....................................................................... 36 3.3. Feasible solution (lower bound) ......................................................................... 39 3.4. Branch-and-bound algorithm.............................................................................. 40 3.4.1. Variable order........................................................................................... 40 3.4.2. Fixing .................................................................................................. 41 rk y 3.4.3. Choosing a new sub-problem ................................................................... 45 3.5. Computational experiments................................................................................ 46 3.6. Conclusions......................................................................................................... 54 Appendix A. The optimal objective of K1 is the upper bound on MKPS ................. 55 References.................................................................................................................. 58 ?.MULTIPLE-CHOICE KNAPSACK PROBLEM WITH SETUP .............................. 59 Abstract...................................................................................................................... 59 4.1. Introduction and literature review ...................................................................... 59 4.2. An upper bound and feasible solution ................................................................ 63 4.2.1. Linear knapsack problem ......................................................................... 63 4.2.2. Transform a linear knapsack problem with setup to a linear knapsack problem ..................................................................................................... 64 4.2.3. The algorithm for the upper bound and feasible solution......................... 66 4.3. 
Fixing ............................................................................................................. 70 it y x 4.3.1. Fixing to one......................................................................................... 70 it y 4.3.2. Fixing to zero........................................................................................ 71 it y 4.3.4. Choosing a New Sub-problem.................................................................. 72 4.4. Computational experiments................................................................................ 73 4.5. Conclusion .......................................................................................................... 79 Appendix A. The optimal objective of is an upper bound on MCKS. ............ 79 u LKP Appendix B. The rounded solution of corresponds to a feasible solution of MCKS........................................................................................................................ 81 u LKP Appendix C. Three Dominance rules ........................................................................ 83 References.................................................................................................................. 90 ?. CONCLUSIONS......................................................................................................... 91 BIBLIOGRAPHY............................................................................................................. 94 LIST OF TABLES Table 2.1. Solution time (seconds) for AKPS................................................................... 23 Table 2.2. Comparing solution time (seconds) of CPLEX and AKPS ............................. 26 Table 3.1. Solution time (minute) for AMKPS for 5 periods ........................................... 47 Table 3.2. Solution time (minute) for AMKPS for 7 periods ........................................... 48 Table 3.3. 
The lower bound, upper bound and optimal solution ...................................... 51 Table 3.4. The comparison of solution time (Minute) between AMKPS and CPLEX .... 53 Table 4.1. Solution time (minutes) with 10N = and ...................................... 74 ~ [10,30] i n Table 4.2. Solution time (minutes) with 30N = and ...................................... 75 ~ [30,50] i n Table 4.3. Solution time (minutes) with 50N = and ..................................... 75 ~ [50,70] i n Table 4.4. The solution time (minute) comparison between AMCKS and CPLEX......... 78 xi LIST OF FIGURES Fig. 2.1. Comparison of uncorrelated instances with similar total variables number....... 24 Fig. 2.2. Comparison of correlated instances with similar total variables number........... 24 Fig. 3.1. Solution time for average 45 jobs per family and 5 periods............................... 49 Fig. 3.2. Solution time for average 65 jobs per family and 5 periods............................... 49 Fig. 3.3. Solution time for average 85 jobs per family and 5 periods............................... 50 Fig. 3.4. Solution time for average 45 jobs per family and 7 periods............................... 50 Fig. 3.5. Solution time for average 65 jobs per family and 7 periods............................... 50 Fig. 3.6. Solution time for average 85 jobs per family and 7 periods............................... 51 Fig. 4.1. Solution Time for 50, 15NT= = and .............................................. 76 ~ [50,70] i n Fig. 4.2. Solution Time for 50, 20NT= = and ............................................. 76 ~ [50,70] i n xii I. INTRODUCTION 1.1. Objectives and significance Make-to-order production is playing an increasingly important role in our economy, partly due to the Internet and manufacturing technology advances. In make-to-order production, price is dictated not only by cost, but also by the customer?s expectation as well. 
Some customers are willing to pay a higher price for a short lead-time while others are willing to wait for their products in exchange for lower prices. Thus prices can be related to a product?s delivery date. Price, schedule and the total profit have very complex connections. These connections are of extreme interest to businesses today. 1 N Assume there is a manufacturing company. At time T, they receive some orders (jobs), which belong to families. Familyi ,N 1, ..i= , has jobs. Also assume that these jobs should be produced in the next planning period. The company?s manufacturing capacity is fixed and can?t be changed in the short term. Setup cost and setup time occurs when manufacturing changes from a job in one family to another job from a different family. There is no setup between jobs of the same family. The company operates with a batch delivery policy; products that are manufactured in the same period have the same shipping date. This is a common scenario in many manufacturing companies. Then the company needs to decide how to choose orders to maximize the total profit. In this case, a single knapsack model with setup is used to solve this problem. i n To extend this problem, jobs can be manufactured inT different periods, but a family can only be produced in a single period. Here the price charged for the product many relate to the customer?s desired due date; the price depends on the job?s completion time. The price could be determined by this way: there would be a base price for a job delivered at the customer?s desired due date; there will be ?earliness? and ?tardiness? penalties for other delivery dates. These prices would depend on the deviation from the desired due date and each customer?s tolerance for this deviation. Sometimes, the price could be increased for urgent jobs; or the price could be decreased if the customer agrees to allow more time for delivery. So in this system, prices are changed based on the product?s actual delivery time. 
The company might negotiate the price based on customer desires and company capabilities. Before making a production schedule, we should know the prices of jobs as a function of different completion dates. With the added price variability, this model is more complex than typical scheduling models in make-to-order manufacturing. The company has to consider the marginal profit for different jobs, the current production capacity, and each family?s setup cost and time before choosing orders and deciding the job assignment to maximize its total profit. A multiple knapsack problem with setup (MKPS) model can solve this problem. In above scenario, if production inT periods need the same non-renewable material and jobs from the same family can be manufactured in multiple periods, then a multiple- choice knapsack problem with setup (MCKS) can model this problem. MCKS is more helpful in an organization?s decision making on a fixed budget to invest a number of projects in multiple areas in multiple periods. In order to do a project in an area, a project 2 office must be set up. The organization would like to decide where to set up offices and which projects to do to maximize net profit subject to this budget restriction. 1.2. Mathematical Model 1.2.1. Order acceptance In make-to-order, if all orders have to be finished in one time period, a knapsack problem with setup (KPS) can be used to solve the orders acceptance problem. In this situation, a company will decide which jobs will be produced in this period. Given this model: 11 1 .. i nNN ij ij i i ij i Max c x f y st == = + ?? ? 11 1 i nNN ij ij i i ij i ax dy b == = + ?? ? ? 
(1) 1, ; 1, ij i i x yj ni N?== (2) ,{0,1} 1,;1, ij i i .x yjn?==N ) (3) i -is index families, j -is index jobs, N -is the number of families, i n -is the number of jobs in familyi , ij c -is the profit of job j in family , i ij a -is the time to process job in familyi , j i f -is the setup cost for familyi (0 i f < , 3 i d -is the setup time for familyi , b -is the time available for processing, ij x -is one if job in family is produced, zero otherwise, j i i y -is one if any job in family is produced, zero otherwise. i Constraint (1) requires that the total time used by jobs and setups cannot exceed the time available for production (resource other than time could also be considered). Constraints (2) prohibit a job from being processed if it belongs to a family that has not been setup. If jobs can be manufactured in multiple periods, and all items in same family should be manufactured together in one period, then this model could be described as a multiple knapsack problem with setup (MKPS): 111 11 .. i n TN TN ijt ijt it it tij ti Max c x f y st === == + ??? ?? , (1) 11 1 1, .. i nNN ij ijt i it t ij i ax dy b t T == = +?= ?? ? 1, ; 1, ; 1, .. ijt i it x yj ni Nt T?===, (2) 1 11,. T it t yi = ?= ? N, (3) , {0,1} 1, .. ; 1, .. ; 1, .. i x yjnNt?===T. (4) ijt x -is 1 if the job of family i is arranged into periodt , otherwise 0, j it y -is 1 if some job of familyi is arranged into periodt , otherwise 0, ijt c -is the profit of job of family in period t ( ), j i 0 ijt c ? it f -is the setup cost for family in periodt (i 0 it f )< , 4 ij a -is the processing time for job of family ( ), j i 0 ij a > 5 0 0 i d -is the setup time for family i ( ), i d > t b -is the available resource for processing in period t ( ). t b > Constraint (1) requires that the total time used by jobs and setups cannot exceed the time available in each period for production (resource other than time could also be considered). 
Constraints (2) prohibit a job from being processed if it belongs to a family that has not been setup. Constraints (3) guarantee setup of each family occurs once. In this model, all jobs belong to different families. If a job is chosen, then setup time and setup costs must occur. A job may be put intoT different periods, but the profit is different in different periods. The objective is to maximize the sum of the profits of accepted jobs. N 1.2.2. Regional project selection with a fixed budget Select projects which can be invested in multiple periods and in different regions to maximize net profit. This model can be described as a multiple-choice knapsack problem with setup. 111 11 .. i n TN TN ijt ijt it it tij ti Max c x f y st === == + ??? ?? 11 1 11 i nTN TN ij ijt i it tij ti ax dy b === == + ??? ?? ?, (1) 1, .. , 1, .. ; 1, .. ijt it i x yj ni Nt T?= = =, (2) 1 1 1,... , 1,.. T ijt i t x iNj = ?= = ? n, (3) , {0,1} 1,.. ; 1,.. ; 1,.. . ijt it i x yiNjnt?===T (4) ijt c -is the profit of project in area in period t ( ), j i 0 ijt c ? it f -is the setup cost for opening an office in area in period t (i 0 it f ? ), ij a -is the investment needed for project in area ( ), j i 0 ij a > i d -is the investment cost to open an office in area ( ), i 0 i d > b -is the budge available to invest ( ), 0b > it y - is one if office is set up in areai in periodt , otherwise zero, ijt x -is one if project in area is done in period , otherwise zero, j i t N -is the number of areas, T -is the number of periods. Constraint (1) requires the total budget used by all projects and setup office can?t exceed the budget available. Constraints (2) prohibit a project done before the office in this area is set up. Constraints (3) guarantee a project in an area only can be invested once. Constraints (4) require the variables to be binary. 6 7 1.3. Basic research method These three models are integer programs (IPs). 
For integer programming, branch and bound, cutting planes and dynamic programming could be used to optimally solve this class problem. 1.3.1. Cutting Plane Cutting plane algorithm is an important and well-known approach to solve IPs. It is one of the purest methods in polyhedral description algorithms and an alternative to enumeration. Cutting planes redefines the problem again and again by adding constraints until the problem is solved. In practice, a successful cutting plane algorithm depends on the relaxation method of the original problem, and the choice of cutting inequalities. There must be a family of valid inequalities, which define any optimal point, and a relaxation that is tractable. In fact when we add valid inequalities to the relaxation, we solve a series of relaxed problems. If this series of problems are easy to solve, that is better. But for these three models, we did not find such an algorithm for the relaxations. Therefore, cutting plane does not appear to be our best choice. For further study of cutting planes, refer to Parker and Rardin (1988). 1.3.2. Dynamic Programming Dynamic Programming is not a specific algorithm, but we can use dynamic programming theory to design an algorithm for these three models. As the number of jobs increase, that algorithm becomes worse, and storage space will increase exponentially. We do not choose to use dynamic programming. 1.3.3. Branch and Bound Branch and Bound belongs to the strategy of ?partial enumeration?, just like cutting planes belongs to? polyhedral description?. These two strategies are often used to solve IPs. Though they are non-polynomial in the worst case, they can be effective solution procedures for IPs in practice. In a branch-and-bound algorithm, if a variable is restricted to be binary, we can separate the problem into two sub-problems: one with x 0x = and the other with 1x = . Successful applications for B&B need a good algorithm to calculate upper and lower bounds for those sub-problems. 
The tighter the upper and lower bounds are, the more effective the algorithm is. Only with strong bounds we can expect to fathom candidate problems rapidly enough to avoid being overcome by the exponential growth in the number of potential sub-problems. Since we design a linear knapsack problem to supply the upper bound for each model and the linear knapsack problem is easy to be solved by Danzig?s algorithm, B&B becomes an attractive method to solve these problems. 1.4 Relaxation Method 1.4.1. Linear Relaxation Linear programming is, without doubt, the most successful branch of optimization (Parker and Rardin, 1988). Integer programming is usually changed to linear programming by relaxing the integer constraints. Linear programs can be solved easily, and may provide a good upper bound. Therefore, many integer program algorithms use a linear relaxation to get the bound. 8 In this paper, we relax the integer constraints of job variables for all three models. Linear knapsack problems are designed to give the upper bounds on these relaxations. 1.4.2. Surrogate Relaxation A surrogate constraint is a linear combination of other constraints. The following is an example of surrogate relaxation: 11 1 .. ( 1,..., ), {0,1} mn jij ij n jij i j ij Max c x st ax b i m x == = ?= ? ?? ? Then its surrogate relaxation is: 9 i 11 11 1 .. {0,1} mn jij ij mn m ijij ij i ij Max c x st vax vb x == == = ? ? ?? ?? ? The original problem?s solution is also a feasible solution to the surrogate relaxation, but the solution of surrogate relaxation is not necessarily feasible to the original problem. The surrogate relaxation has a larger feasible space. The optimal solution to the surrogate is an upper bound of the original problem. In this paper, surrogate relaxation along with linear relaxation will be used in MKPS to obtain a good upper bound. 1.4.3. Lagrangean Relaxation Lagrangean relaxation is also a common relaxation model. 
This is an example for Lagrangean relaxation: Give the model L1 11 1 .. 1,..., {0,1} i i nN ij ij ij n ij ij i j ij Max c x st ax b i N x == = ?= ? ?? ? Its Lagrangean relaxation, L2, is: 11 1 1 () .. {0,1} ii nnNN ij ij i i ij ij ij i j ij Max c x u b a x st x == = = +? ? ?? ? ? For each feasible solution of L1, we have 11 1 1 11 () ii nnNN N ij ij i i ij ij ij ij ij i j ij cx u b ax cx == = = == +? ? ?? ? ? ?? i n and all feasible solutions of L1 must be feasible solutions of L2, but not vice versa. If we use Lagrangean relaxation, the knapsack problem?s good structure is destroyed. Also experimentation shows the bound is not tight enough. Therefore, Lagrangean relaxation is not used in this paper. 10 11 References Parker, R.G., Rardin, R. L. 1988. Discrete Optimization. Academic Press, Inc. San Diego, CA. 12 ?. KNAPSACK PROBLEM WITH SETUP Abstract This paper studies a 0-1 knapsack problem with setup (KPS) where one set of variables serves as the upper bound of another set of variables. An efficient algorithm presented by Bulfin (1988) for the linear relaxation of this problem is applied to obtain an upper bound. Branch and bound is used to obtain the optimal solution, and the upper bound variables are branched before the remaining variables so KPS becomes a single knapsack problem. Computational experiments show that this algorithm is effective when objective and constraint coefficients are uncorrelated. This model can be used in order acceptance of single period in make-to-order manufacturing. 2.1. Introduction A company makes metal door frames based on customer orders. Door frames have different heights, widths, jamb sizes and a number of hinges and lock configurations. An order can be for a single frame or for 1,000 identical frames. To make a particular frame, the production machinery must be set up for the parameters of that door. Some setups, like the height of the door are easily made, while others, like jamb size require much time and labor. 
The actual cost to produce a frame depends on what other frames are being produced; if many identical frames are made, economies of scale result in a low cost. On the other hand, if a single frame is made, the setup cost dominates and the cost is high. Thus which orders are accepted, when they are produced and the price charged are critical to profitability. This scenario describes the basic order acceptance problem faced by all make-to-order manufacturers. Orders consist of jobs, and similar jobs make up a family. Families share a setup; if two jobs from the same family are processed sequentially, no setup is required. The manufacturer plans production for the next period based on orders received. An order can be accepted or rejected for production in this period. This problem can be formulated as a knapsack with setup. Let i index families j index jobs N be the number of families, i n be the number of jobs in familyi , ij c be the profit of job j in family , i ij a be the time to process job j in familyi , i f be the setup cost for familyi (0 i f )< , i d be the setup time for family and i b be the time available for processing. The decision variables are: ij x is one if job j in family is produced, zero otherwise and i i y is one if any job in family is produced, zero otherwise. i The model, which we call KPS, is: 13 11 1 .. i nNN ij ij i i ij i Max c x f y st == = + ?? ? 11 1 i nNN ij ij i i ij i ax dy b == = + ?? ? ? (1) 1, ; 1, ij i i x yj ni N?== (2) ,{0,1} 1,;1, ij i i .x yjn?==N (3) Constraint (1) requires that the total time to produce jobs cannot exceed the time available. Constraints (2) ensure a job is processed only if it belongs to a family that has been setup. Constraints (3) require the variables to be binary. In the following section we give a brief literature review and discuss background used in the solution methodology. In Section 2.3, we present an algorithm to solve KPS. Computational results are given in Section 2.4. 
Finally, we give concluding remarks. 2.2. Literature survey This linear relaxation of KPS was first introduced by Ham et al. (1985) as a cell loading problem for a Group Technology production system. Bulfin (1988) developed a polynomial algorithm for the linear relaxation of KPS. It is based on the ratio rule of Dantzig (1957) for the linear knapsack problem. Akinc (2004) derives an algorithm for a special case of KPS with no setup time, which he called fixed-charge knapsack problem. His algorithm to solve the linear relaxation is the same as the one in Bulfin (1988). He outlined a branch-and-bound algorithm to solve the integer version and used this solution to compare heuristics. No solution times are 14 given for the branch-and-bound algorithm. He states ?This problem is solved as an LP. If all are integer, then the optimal solution of P (the fixed-charge knapsack problem) is obtained from the solution of the ordinary 0/1 knapsack problem that optimally allocates the available capacity to all i y ij x for which 1 i y = .? This statement is not true, as seen by the following counter-example: 11 12 1 21 22 2 11 12 21 22 11 1 12 1 21 2 22 2 11 12 21 22 65 58 .. 344 , , ,,, {0,1} Max x x y x x y st xxx x xyxy xyxy xxxx +?++? +++? ?? ?? ? The LP?s optimal solution is 12 1, 1yy= = , and the objective is 13. Based on Akinc?s claim, solving the integer knapsack with both setups included gives a solution value of 9, with , and 12 1, 1yy== 11 1x = 12 1x = . But the solution 12 1, 0yy= = , and 11 1x = 12 1x = has objective 10. Hence, the optimal objective of knapsack problem when all are integer in LP solution is not necessarily optimal for the integer model. This brings the results of his paper into question. y Chajakis and Guignard (1994) consider the setup knapsack problem which is similar to ours except the setup cost i f and profit of job can be positive or negative. 
An extra constraint is added to make sure a setup does not occur if no job in this family is put into knapsack. This is unnecessary in KPS since is positive and ij c ij c i f is negative. Chajakis and Guignard transform the original problem to an equivalent formulation without setup variables by two methods. Variables y are described by a Boolean union of x variables 15 so that the constraints coupling and can be deleted and the problem becomes a ?knapsack problem? with a Boolean union of all variables. The second method is to enumerate all non-dominated feasible solution for each family and define a pseudo- variable corresponding to this solution. This transforms the setup knapsack to a multiple- choice knapsack problem and only one pseudo-variable can be one in an optimal solution. Dynamic programming is used to solve the first transformation; branch-and-bound and dynamic programming are both used to solve the multiple-choice knapsack problem in the second transformation. Instances with 5, 10, 20, 50, and 200 families are tested. A maximum of 4000 total variables can be solved. x y 2.3. Background The knapsack problem and its many variants are well-studied. For a discussion, see Martello and Toth (1990) and Dudzinski and Walukiewicz (1987). We discuss some basic results that will be used later in this paper. Dantzig (1957) defined the linear knapsack problem as: 16 . 1 1 .. 01,1, n jj j n jj j j Max c x st ax b x jn = = ? ?? = ? ? If the variables are ordered by 12 12 ... n n ccc aa ??? a , he showed the optimal solution is given by 1, j x jt=< 1 1 () t j j t t ba x a ? = ? = ? 0, j x jt=> where . 1 min{ | } i j j tia = => ? b Similarly, we define the linear relaxation of KPS (LKPS), which is given by 11 1 11 1 .. , 1, ; 1, , 01,;1,, 01,1,.. i i nNN ij ij i i ij i nNN ij ij i i ij i ij i i ij i i Max c x f y st ax dy b xyj ni N xjnN yiN == = == = + +? ?== ?== ?? = ?? ? ?? ? Define ,1,. 1,. ij ij i ij c riNj a ===n. Order jobs so that . i niiii rrrr ,321 ...... 
Let

    r_{i0} = max_{k = 1,...,n_i} ( \sum_{j=1}^{k} c_{ij} + f_i ) / ( \sum_{j=1}^{k} a_{ij} + d_i )
           = ( \sum_{j=1}^{t_i} c_{ij} + f_i ) / ( \sum_{j=1}^{t_i} a_{ij} + d_i )

for i = 1,...,N, where t_i is the maximizing value of k. Separate the jobs of family i into two sets, XM_i = {1,...,t_i} and XT_i = {t_i+1,...,n_i}. Then r_{i0} >= r_{i,t_i+1} >= r_{i,t_i+2} >= ... >= r_{i,n_i}; a proof is given in Appendix A. For family i, define:

    c'_{i1} = \sum_{j=1}^{t_i} c_{ij} + f_i
    a'_{i1} = \sum_{j=1}^{t_i} a_{ij} + d_i
    c'_{ij} = c_{i,j+t_i-1},  j = 2,...,n'_i
    a'_{ij} = a_{i,j+t_i-1},  j = 2,...,n'_i
    n'_i = n_i - t_i + 1.

Then LKPS can be reformulated as a classical linear knapsack problem, which we call LBKP:

    Max  \sum_{i=1}^{N}\sum_{j=1}^{n'_i} c'_{ij} z_{ij}
    s.t. \sum_{i=1}^{N}\sum_{j=1}^{n'_i} a'_{ij} z_{ij} <= b
         0 <= z_{ij} <= 1,  i = 1,...,N;  j = 1,...,n'_i,

and solved by Dantzig's ratio rule. If there is no fractional variable, KPS is also solved. We know that at most one variable will have a fractional value. Suppose z_{ij} = f, 0 < f < 1. If j > 1, then job j + t_i - 1 of family i will be the only fractional variable of KPS, and all setup times and costs are fully considered. On the other hand, if j = 1, z_{i1} represents a virtual job composed of the setup and jobs 1 through t_i of family i. Here y_i = f and x_{ij} = f, j = 1,...,t_i, so all are fractional in KPS, and the setup time and cost of family i and the processing times and profits of its first t_i jobs are only partially considered. If we round the fractional variable(s) to zero, the resulting solution is feasible to KPS and can be used as a lower bound in the branch-and-bound algorithm.

2.4. Solution algorithm

To develop a branch-and-bound algorithm, we need to make several decisions: how to fix variables, calculate bounds, choose the next sub-problem to explore, and obtain an initial incumbent solution. We discuss these now.

2.4.1. Fixing variables

We only fix setup variables to zero or one in our main branch-and-bound scheme. When a sub-problem is created with y_i fixed to one, the right-hand side is reduced by d_i and f_i is added directly to the objective of the sub-problem. Then all z_{ij}, j = 1,...,
n'_i, are replaced by the real variables x_{i1},...,x_{i,n_i} of family i. When a sub-problem is created with y_i fixed to zero, z_{ij}, j = 1,...,n'_i, are removed from that sub-problem. Note that if all y_i are binary in the linear relaxation but some x_{ij} is fractional, solving a knapsack problem over the x_{ij} with y_i = 1 will not necessarily give the optimal solution, as we showed in Section 2.2. When all y_i are fixed, we solve a knapsack over the remaining variables to obtain the best solution with those y_i fixed. If this produces a better solution than the incumbent, it replaces the incumbent.

We order the variables z_{i1} by r_{10} >= r_{20} >= ... >= r_{N0}. If a variable has a large r_{i0}, it is more likely to be one in an optimal solution, while those with smaller ratios are more likely to be zero. We choose either the first or the last variable to fix first and work toward the middle. This tends to keep the number of active branches small.

2.4.2. Bounding

We use LBKP as an upper bound on KPS. It is a linear relaxation which allocates the setup time and cost proportionally. It is initially solved by the ratio rule. When some y_i is fixed, it is easy to re-solve the sub-problem. If we fix y_i to one, we delete the pseudo variables z_{i1},...,z_{i,n'_i} and insert the new variables x_{i1},...,x_{i,n_i}. This may require taking resource from some free variables, which are chosen by the ratio rule to maintain optimality. Similarly, fixing y_i = 0 may free up resource, which is then allocated to free variables using the ratio rule.

2.4.3. Choosing a new sub-problem

When variables are fixed, two sub-problems are created. If a sub-problem's upper bound is no better than the incumbent solution, it is discarded. When its bound indicates it could contain a better solution to KPS, we store it in a bucket. Each bucket contains sub-problems with bounds that are about the same. Let UB be the best upper bound and INC be the value of the current incumbent solution.
If we want K buckets, calculate Delta = (UB - INC)/K. Then bucket one will contain all sub-problems with upper bounds in the interval [UB - Delta, UB], bucket two [UB - 2 Delta, UB - Delta], and bucket K [INC, INC + Delta]. Buckets can be updated as upper bounds or the incumbent change. When we choose a new sub-problem to explore, we take one based on LIFO from the lowest-numbered, non-empty bucket. This gives almost a "best-bound" strategy, but without the bookkeeping overhead.

2.4.4. Heuristic

If the fractional-valued variable of LBKP is z_{ij}, rounding z_{ij} down to 0 frees a'_{ij} z_{ij} units of resource. Allocate this resource to variables with processing time less than a'_{ij} z_{ij} whose family is already set up. Variables are chosen by the ratio rule until no more variables can use the remaining resource.

2.5. Computational experiments

Our experiments are similar to previous experiments on knapsack problems. However, KPS has a setup requirement, so setup time and setup cost are included in this study. We wish to test our algorithm (AKPS) on a variety of problem instances to see what problems can be solved. Instances are generated by setting four parameters at several levels: the number of families, the average number of jobs in a family, the proportion of setup time/cost relative to the totals, and the correlation between objective-function and constraint coefficients. All data are integer valued. The number of families is fixed at 50 or 100. The number of jobs in family i is a uniformly distributed integer in either [40, 60] or [90, 110]. Setup cost and time are given by

    f_i = -e_1 \sum_{j=1}^{n_i} c_{ij},    d_i = e_2 \sum_{j=1}^{n_i} a_{ij},

where e_1 and e_2 are uniform on [0.05, 0.15], [0.15, 0.25], [0.25, 0.35], or [0.35, 0.45]. We choose a_{ij} and c_{ij} two ways. First, a_{ij} and c_{ij} are both chosen uniformly from [10, 10000]; thus they are independent.
Next, a_{ij} is chosen uniformly from [10, 10000], while c_{ij} is chosen uniformly from [a_{ij} - 1000, a_{ij} + 1000]; if c_{ij} is less than 10, it is randomly chosen from [10, 100]. This introduces some correlation between the two coefficients.

In previous knapsack studies, instances tend to be hardest when the available resource is roughly one half the sum of the constraint coefficients. Therefore, we choose b uniformly from [0.4 \sum_{i=1}^{N}\sum_{j=1}^{n_i} a_{ij}, 0.6 \sum_{i=1}^{N}\sum_{j=1}^{n_i} a_{ij}]. For each level of the four factors we generate ten instances. AKPS was coded in C and all instances were run on a Dell PC with a 1.7 GHz Intel processor and 512 MB of memory. In the following tables, we report the minimum (MIN), average (AVG) and maximum (MAX) solution times in seconds. We also give the average ratio of the initial solution (INC) to the initial upper bound (UB) and the average ratio of the initial incumbent to the optimal solution (OPT).

Table 2.1. Solution time (seconds) for AKPS

                            Uncorrelated                          Correlated
N    n_i      Setup         LB/UB LB/OPT MIN   AVG   MAX      LB/UB LB/OPT MIN     AVG      MAX
50   [40,60]  [0.05-0.15]   1.00  1.00   0.03  0.06  0.27     0.98  0.98   8.05    17.46    29.28
              [0.15-0.25]   0.99  0.99   0.06  0.53  1.72     0.97  0.97   2.25    16.63    30.73
              [0.25-0.35]   0.99  0.99   0.03  0.49  1.17     0.97  0.97   1.09    25.69    65.56
              [0.35-0.45]   0.97  0.97   1.25  2.62  4.89     0.98  0.98   12.83   22.97    56.50
50   [90,110] [0.05-0.15]   1.00  1.00   0.08  0.09  0.12     0.98  0.98   5.69    26.47    63.72
              [0.15-0.25]   0.99  0.99   0.05  0.87  2.94     0.97  0.97   11.30   28.46    55.75
              [0.25-0.35]   0.98  0.98   0.09  2.67  5.28     0.98  0.98   2.77    34.52    82.31
              [0.35-0.45]   0.98  0.98   0.25  4.25  9.30     0.99  0.99   0.91    49.36    101.40
100  [40,60]  [0.05-0.15]   1.00  1.00   0.06  0.16  0.36     0.99  0.99   17.39   153.07   503.38
              [0.15-0.25]   1.00  1.00   0.08  1.43  4.36     0.99  0.99   70.61   124.69   220.53
              [0.25-0.35]   0.99  0.99   0.05  4.96  18.97    0.99  0.99   24.62   175.51   315.67
              [0.35-0.45]   0.99  0.99   2.41  14.34 29.62    0.99  0.99   22.11   131.22   305.85
100  [90,110] [0.05-0.15]   1.00  1.00   0.14  0.24  0.39     0.99  0.99   121.86  385.44   877.19
              [0.15-0.25]   1.00  1.00   0.28  4.02  7.50     0.99  0.99   58.69   *477.78  877.73
              [0.25-0.35]   0.99  0.99   1.33  11.86 30.48    0.99  0.99   17.55   *468.23  953.29
              [0.35-0.45]   0.99  0.99   1.08  31.26 107.09   0.99  0.99   11.48   *484.35  784.72

Note: "*" indicates that 3 of these instances ran out of memory; AVG, MAX, and MIN are calculated from the remaining seven instances.

Our heuristic solution is outstanding. On average, it was less than 2% from optimal over the entire range of instances tested. Based on the data in Table 2.1, correlated instances are more difficult to solve than uncorrelated instances. The setup proportion has a stronger effect on uncorrelated instances than on correlated instances. With the same number of variables, AKPS works better when there are fewer families and the number of jobs per family is large. This makes sense, since fewer family variables simplify the first stage of the branching. Instances with 50 families and an average of 100 jobs per family are much easier than instances with 100 families and an average of 50 jobs per family.

Fig. 2.1 shows the solution time of instances with N = 50 and an average of 100 jobs per family and of instances with N = 100 and an average of 50 jobs per family, with uncorrelated coefficients. With roughly the same number of variables, instances with larger N are more difficult. Also, solution time increases as the setup proportion increases, and the incumbent solution gets worse as the setup proportion increases. Fig. 2.2 gives the solution times with correlated coefficients. Instances with fewer families still do better than the others, but solution time does not change much as the setup proportion increases. In correlated instances, the setup proportion does not have as much effect on the incumbent.

(Figure: solution time in seconds versus setup proportion for N = 50 and N = 100, uncorrelated coefficients.) Fig. 2.1.
Comparison of uncorrelated instances with a similar total number of variables.

(Figure: solution time in seconds versus setup proportion for N = 50 and N = 100, correlated coefficients.) Fig. 2.2. Comparison of correlated instances with a similar total number of variables.

Chajakis and Guignard only test uncorrelated instances with coefficients from a small range (e.g., one set of instances draws setup cost and profit from [-100, 100] and setup time and processing time from [1, 10]). Since the dynamic programming used in their paper has pseudo-polynomial worst-case complexity, large coefficients undoubtedly increase the difficulty of instances and require more storage. Their second approach fails on instances with 4000 total variables because it runs out of storage; the first approach can solve the same instances but needs over 1000 seconds. They permit negative job profits and positive setup costs in their model, which to some extent makes instances easier, since part of the variables can be fixed to 0 by preprocessing, reducing the size of the problem remarkably. The total number of variables after preprocessing is only about 40%-60% of the original; for instances with 4000 variables, only about 2500 variables are left after this preprocessing.

We also compare AKPS with CPLEX 9.1 (called by AMPL). We test instances with 50 families and an average of 100 jobs per family. For each setup level, we test five instances. AKPS takes much less time for uncorrelated problems: CPLEX takes from 12 to 96 times longer, and as the setup proportion increases the difference becomes smaller. When the coefficients are not independent, the difference is much smaller: AKPS is only 3 to 6 times faster on average, and a few instances take less time with CPLEX. We also compared some instances with 100 families and 50 jobs per family, but do not present the data. CPLEX is better than AKPS when a and c are correlated.
But AKPS is better than CPLEX if a and c are uncorrelated for instances with N = 100, n_i ~ [40, 60]. Therefore, we suggest using AKPS when a and c are uncorrelated; if they are correlated and there are over 50 families, CPLEX might be better.

Table 2.2. Comparing solution time (seconds) of CPLEX and AKPS

                     Uncorrelated                    Correlated
Setup          AKPS   CPLEX   CPLEX/AKPS      AKPS    CPLEX   CPLEX/AKPS
[0.05-0.15]    0.05   1.17    23.40           21.87   13.08   0.60
               0.09   1.92    21.33           13.64   491.73  36.05
               0.05   1.06    21.20           44.06   253.78  5.76
               0.05   1.08    21.60           34.00   3.81    0.11
               0.06   0.86    14.33           37.42   226.00  6.04
AVG                           20.37                           9.71
[0.15-0.25]    0.05   4.67    93.40           24.58   411.08  16.72
               0.41   26.28   64.10           56.78   929.03  16.36
               0.05   2.31    46.20           40.14   376.75  9.39
               0.11   15.44   140.36          40.22   269.69  6.71
               0.05   6.97    139.40          81.39   215.44  2.65
AVG                           96.69                           10.36
[0.25-0.35]    4.75   15.52   3.27            23.64   6.67    0.28
               1.95   12.09   6.20            46.01   514.64  11.19
               0.61   16.97   27.82           88.14   5.75    0.07
               2.75   17.39   6.32            7.67    14.86   1.94
               1.55   26.58   17.15           72.03   102.00  1.42
AVG                           12.15                           2.98
[0.35-0.45]    3.97   11.20   2.82            179.06  7.42    0.04
               0.91   16.36   17.98           6.56    35.95   5.48
               1.41   65.77   46.65           36.91   283.38  7.68
               7.42   2.91    0.39            107.62  265.69  2.47
               4.58   12.78   2.79            22.16   96.05   4.33
AVG                           14.13                           4.00

2.6. Conclusions

We investigate the knapsack problem with setup. This is an important problem, modeling order acceptance, cell loading, project selection and others. We have developed an exact algorithm for the problem. The first computational tests on exact solutions indicate our algorithm is vastly superior to CPLEX for many instances, superior on others and about the same for the rest. Further, we have determined what parameter values make instances hard for our algorithm. Finally, the proposed heuristic is excellent, being within 2% of optimal for all the problems tested.

Appendix A. r_{i0} is greater than r_{i,t_i+1}.

Proof. Assume a, b, c, d > 0 and a/b >= c/d; then (a + c)/(b + d) <= a/b. Since a/b >= c/d implies ad >= bc, we have ab + ad >= ab + bc, or a(b + d) >= b(a + c), so (a + c)/(b + d) <= a/b.
Thus if r_{i,t_i+1} >= r_{i0}, then job t_i + 1 should be included in XM_i. Since it is not, r_{i0} > r_{i,t_i+1}. Therefore r_{i0} >= r_{i,t_i+1} >= r_{i,t_i+2} >= ... >= r_{i,n_i}. The ratio r_{i0} represents family i's maximum ability to obtain profit for each unit of resource it consumes.

References

Akinc, U., 2004. Approximate and exact algorithms for the fixed-charge knapsack problem. European Journal of Operational Research 170, 363-375.
Bulfin, R.L., 1988. An algorithm for the continuous variable upper bound knapsack problem. OPSEARCH 25 (2), 119-125.
Chajakis, E.D., Guignard, M., 1994. Exact algorithms for the setup knapsack problem. INFOR 32 (3), 124-142.
Dantzig, G.B., 1957. Discrete variable extremum problems. Operations Research 5, 266-277.
Dudzinski, K., Walukiewicz, S., 1987. Exact methods for the knapsack problem and its generalizations. European Journal of Operational Research 28, 3-21.
Ham, I., Hitomi, K., Yoshida, T., 1985. Group Technology. Kluwer Nijhoff Publishing, Boston, Massachusetts.
Martello, S., Toth, P., 1990. Knapsack Problems: Algorithms and Computer Implementations. John Wiley and Sons, New York.

III. MULTIPLE KNAPSACK PROBLEM WITH SETUP

Abstract

We present the multiple knapsack problem with setup (MKPS). This problem can be used to model order acceptance and production scheduling over multiple periods in make-to-order manufacturing. Some variables represent setting up production for a family of jobs; if a setup is not done, no jobs in the family can be processed. Further, a family can only be set up in one period of the planning horizon. A linear knapsack problem is designed to give an upper bound on MKPS, and a greedy algorithm is developed to obtain a lower bound. Setup variables are branched on; when all setup variables are fixed, MKPS becomes several independent knapsack problems. Computational experiments show this algorithm is effective, especially when resources are tight.

3.1.
Introduction

The knapsack problem and its variants are well-known problems in integer programming. In this paper, we present a model that we call the multiple knapsack problem with setup (MKPS). In this model, jobs belong to N different families. If a job is processed, a setup time and a setup cost are incurred. A job can be assigned to one of T different periods, but only one setup for each family is allowed during the planning horizon, so jobs in the same family must be processed in the same period. The profit of job j of family i processed in period t is c_{ijt} and varies across periods, but the processing time a_{ij} stays the same. The objective is to maximize the sum of the profits of the processed jobs. Formally, we have:

    Max  \sum_{t=1}^{T}\sum_{i=1}^{N}\sum_{j=1}^{n_i} c_{ijt} x_{ijt} + \sum_{t=1}^{T}\sum_{i=1}^{N} f_{it} y_{it}
    s.t. \sum_{i=1}^{N}\sum_{j=1}^{n_i} a_{ij} x_{ijt} + \sum_{i=1}^{N} d_i y_{it} <= b_t,  t = 1,...,T          (1)
         x_{ijt} <= y_{it},  j = 1,...,n_i;  i = 1,...,N;  t = 1,...,T                                          (2)
         \sum_{t=1}^{T} y_{it} <= 1,  i = 1,...,N                                                               (3)
         x_{ijt}, y_{it} in {0,1},  j = 1,...,n_i;  i = 1,...,N;  t = 1,...,T                                   (4)

where
x_{ijt} is one if the j-th job of family i is assigned to period t, and zero otherwise;
y_{it}  is one if some job of family i is assigned to period t, and zero otherwise;
c_{ijt} is the profit of job j of family i in period t (c_{ijt} >= 0);
f_{it}  is the setup cost for family i in period t (f_{it} < 0);
a_{ij}  is the processing time of job j of family i (a_{ij} > 0);
d_i     is the setup time for family i (d_i > 0);
b_t     is the available resource for processing in period t (b_t > 0).

Constraints (1) require that the total resource used by jobs in each period cannot exceed the resource available. Constraints (2) prohibit a job from being processed if it belongs to a family that has not been set up. Constraints (3) guarantee that jobs of the same family are processed in a single period. Constraints (4) require all variables to be binary.

This formulation models order acceptance for make-to-order manufacturers. Assume a manufacturer receives orders for jobs which belong to N different product families.
Orders can be manufactured in T periods. Setup time and setup cost occur between jobs of different families. If jobs are accepted, jobs of the same family are done in the same period.

In make-to-order production, price is dictated not only by cost but also by the customer's expectations. Some customers are willing to pay a higher price for a short lead time, while others are not. Thus prices are related to a product's completion date, and different production schedules can produce different profits. The optimal solution to MKPS gives the maximum profit, which orders to accept, and how to assign jobs to periods.

The multiple knapsack problem assigns a set of items to multiple knapsacks with fixed capacities so that the total profit of the selected items is maximal. It is a special case of the multiple knapsack problem with setup, obtained by ignoring the setup variables and setting c_{ijt} = c_{ij}. The multiple knapsack problem has been widely investigated. Martello and Toth (1980, 1981) discussed an upper bound algorithm using Lagrangean relaxation. Pisinger (1999) presented an exact algorithm using a surrogate relaxation to get an upper bound and dynamic programming to get the optimal solution. The surrogate relaxation of the multiple knapsack problem with identical multipliers is a knapsack problem. MKPS cannot be treated this way, not only because each job has different profit coefficients in different periods, but also because of the additional setup variables in the model.

MKPS has multiple-choice constraints like the multiple-choice knapsack problem. An efficient algorithm and two dominance properties exist for the linear multiple-choice knapsack problem; more detail can be found in Pisinger (1995). The knapsack problem with setup (KPS) is a special case of MKPS with T = 1. Bulfin (1988) gave an efficient algorithm for its linear relaxation (LKPS), which is similar to Dantzig's algorithm for the linear knapsack problem.
This transforms the LKPS into a knapsack problem by using a modified ratio related to a job set. We state this algorithm in the following section. Akinc (2004) describes algorithms for a fixed-charge knapsack problem, which is a special case of MKPS with a single period and zero setup time.

Though the LP solution is often a good upper bound on integer programs such as the knapsack problem and the multiple-choice knapsack problem, we do not solve the linear relaxation of MKPS to obtain an upper bound; instead, we design a linear knapsack problem whose optimal objective is an upper bound on MKPS. Since MKPS becomes independent knapsack problems once all y_{it} variables are fixed, branching is done in two stages. The first stage branches on the y_{it} variables. When all y variables are fixed, the second stage solves independent knapsack problems. Many algorithms are available for the knapsack problem; we use a simple branch-and-bound algorithm. Our approach (AMKPS) is outlined below:

Step 1. Do surrogate relaxation and linear relaxation for MKPS.
Step 2. Find an initial upper bound for MKPS.
Step 3. Find a feasible solution (incumbent) for MKPS.
Step 4. Determine a branching order for the y variables.
Step 5. Decide which y variable to fix at the current node.
Step 6. Generate a new node by solving a sub-problem with y fixed to one; save this node if its bound is better than the incumbent solution. If all y are fixed, solve a set of knapsack problems and update the incumbent solution if possible.
Step 7. Generate a new node by solving a sub-problem with y fixed to zero; save this node if its bound is better than the incumbent solution. If all y are fixed, solve a set of knapsack problems and update the incumbent solution if possible.
Step 8. Choose a candidate node. If none exists, stop; the incumbent solution is optimal. Else go to Step 5.
The rest of the paper is organized as follows: we discuss Steps 1 and 2 in Section 3.2; Section 3.3 explains the approach used in Step 3, and Section 3.4 presents the remaining steps. Computational experiments are discussed in Section 3.5 and a summary is given in Section 3.6.

3.2. Linear knapsack problems and the knapsack problem with setup

We use the linear knapsack problem and the linear knapsack problem with setup to obtain an upper bound on MKPS. We first review these two models.

3.2.1. Linear knapsack problem

The linear knapsack problem is a well-known linear program:

    Max  \sum_{j=1}^{n} c_j x_j
    s.t. \sum_{j=1}^{n} a_j x_j <= b
         0 <= x_j <= 1,  j = 1,...,n.

All variables are ordered by non-increasing profit-to-processing-time ratio c_k/a_k. By Dantzig's algorithm, if variable t is the first one with \sum_{k=1}^{t} a_k > b, then

    x_j = 1, j = 1,...,t-1;   x_t = (b - \sum_{k=1}^{t-1} a_k)/a_t;   x_j = 0, j = t+1,...,n.

3.2.2. Algorithm for LKPS

Bulfin (1988) shows LKPS can be transformed to a linear knapsack problem. Consider the LKPS:

    Max  \sum_{i=1}^{N}\sum_{j=1}^{n_i} c_{ij} x_{ij} + \sum_{i=1}^{N} f_i y_i
    s.t. \sum_{i=1}^{N}\sum_{j=1}^{n_i} a_{ij} x_{ij} + \sum_{i=1}^{N} d_i y_i <= b
         x_{ij} <= y_i,  j = 1,...,n_i;  i = 1,...,N
         0 <= x_{ij} <= 1;  0 <= y_i <= 1.

Bulfin's algorithm uses the ratio (f_i + \sum_j c_{ij}) / (d_i + \sum_j a_{ij}) as the criterion to assign the resource. Define r_{ij} = c_{ij}/a_{ij}, i = 1,...,N; j = 1,...,n_i. Reorder the jobs 1,...,n_i of each family so that r_{i1} >= r_{i2} >= r_{i3} >= ... >= r_{i,n_i}. Let

    t_i = arg max_{k = 1,...,n_i} ( \sum_{j=1}^{k} c_{ij} + f_i ) / ( \sum_{j=1}^{k} a_{ij} + d_i )

for i = 1,...,N. Then in family i, the jobs are separated into two sets: XM_i = {1,...,t_i} and XT_i = {t_i+1,...,n_i}. The jobs in XM_i can be considered as a single job. Now for family i, define:

    c'_{i1} = \sum_{j=1}^{t_i} c_{ij} + f_i
    a'_{i1} = \sum_{j=1}^{t_i} a_{ij} + d_i
    c'_{ij} = c_{i,j+t_i-1},  j = 2,...,n'_i
    a'_{ij} = a_{i,j+t_i-1},  j = 2,...,n'_i
    n'_i = n_i - t_i + 1.

Then LKPS can be reformulated as:

    Max  \sum_{i=1}^{N}\sum_{j=1}^{n'_i} c'_{ij} z_{ij}
    s.t. \sum_{i=1}^{N}\sum_{j=1}^{n'_i} a'_{ij} z_{ij} <= b
         0 <= z_{ij} <= 1,  i = 1,...,N;  j = 1,...,n'_i.
Pseudo job z_{i1} is composed of jobs x_{i1},...,x_{i,t_i} along with the setup cost and time, and z_{ij} = x_{i,j+t_i-1} for j = 2,...,n'_i. Solve this linear knapsack problem. At most one variable can have a fractional value, say z_{kl} = f. If l = 1, then x_{kj} = f, j = 1,...,t_k. If z_{kl} = f with l != 1, then x_{k,l+t_k-1} = f.

3.2.3. An upper bound on MKPS

3.2.3.1. Relaxation

Surrogate relaxation (Pisinger, 1999) and Lagrangian relaxation (Martello and Toth, 1981) have been applied to obtain upper bounds on the multiple knapsack problem. In this paper, surrogate relaxation with identical multipliers on constraints (1) is used. Selecting identical multipliers keeps a_{ij} unrelated to the periods after the surrogate relaxation. Relaxing integrality of the x variables gives a mixed-integer formulation, SMKPS:

    Max  \sum_{t=1}^{T}\sum_{i=1}^{N}\sum_{j=1}^{n_i} c_{ijt} x_{ijt} + \sum_{t=1}^{T}\sum_{i=1}^{N} f_{it} y_{it}
    s.t. \sum_{t=1}^{T}\sum_{i=1}^{N}\sum_{j=1}^{n_i} a_{ij} x_{ijt} + \sum_{t=1}^{T}\sum_{i=1}^{N} d_i y_{it} <= \sum_{t=1}^{T} b_t
         \sum_{t=1}^{T} y_{it} <= 1,  i = 1,...,N
         x_{ijt} <= y_{it},  i = 1,...,N;  j = 1,...,n_i;  t = 1,...,T
         0 <= x_{ijt} <= 1,  i = 1,...,N;  j = 1,...,n_i;  t = 1,...,T
         y_{it} in {0,1},  i = 1,...,N;  t = 1,...,T.

SMKPS gives an upper bound on MKPS, since every solution of MKPS is feasible for SMKPS, but not vice versa. Unlike the usual approach, we do not solve SMKPS to obtain an upper bound on MKPS; we design a new knapsack problem based on SMKPS whose optimal solution is an upper bound on MKPS.

3.2.3.2. The knapsack problem giving the upper bound on MKPS

Using only the variables of family i in period t of SMKPS, we construct the linear knapsack problem with setup:

    Max  \sum_{j=1}^{n_i} c_{ijt} x_{ij} + f_{it} y_i
    s.t. \sum_{j=1}^{n_i} a_{ij} x_{ij} + d_i y_i <= b
         x_{ij} <= y_i,  j = 1,...,n_i
         0 <= x_{ij} <= 1,  j = 1,...,n_i
         0 <= y_i <= 1.

Based on Bulfin's algorithm, this formulation can be transformed to a linear knapsack problem with pseudo variables z_1,...,z_{n_it} and their corresponding profit and processing coefficients c_1,...,c_{n_it} and a_1,...,a_{n_it}. The pseudo variables are ordered by non-increasing ratio c_j/a_j. From these pseudo coefficients we define the point set
P_{it} = {(0,0)} U {( \sum_{j=1}^{k} a_j, \sum_{j=1}^{k} c_j ) | k = 1,...,n_it}. For brevity, denote these points p_0,...,p_{n_it}, with p_k.x = \sum_{j=1}^{k} a_j and p_k.y = \sum_{j=1}^{k} c_j. We construct T point sets P_{it}, t = 1,...,T. Let P'_i = U_{t=1}^{T} P_{it} and delete any repeated points. Order all points by non-decreasing p.x. Apply the following rules to delete points from P'_i:

1. If p_r and p_s have p_r.x <= p_s.x and p_r.y >= p_s.y, then delete p_s.
2. If p_r, p_k, p_s have p_r.x <= p_k.x <= p_s.x and p_r.y <= p_k.y <= p_s.y, and

       (p_k.y - p_r.y)/(p_k.x - p_r.x) <= (p_s.y - p_k.y)/(p_s.x - p_k.x),

   then delete p_k.

These two rules are called the multiple-choice dominance rules in this paper; they stem from the two dominance rules for the multiple-choice knapsack problem (Sinha and Zoltners, 1979). Assume n'_i + 1 points p_0,...,p_{n'_i} remain (p_0 = (0,0)), ordered by increasing p.x and p.y. We define pseudo variables z_{i1},...,z_{i,n'_i} by setting a'_{ij} = p_j.x - p_{j-1}.x and c'_{ij} = p_j.y - p_{j-1}.y.

Repeating this process for all families, we obtain \sum_{i=1}^{N} n'_i pseudo variables and a linear knapsack problem K1 with resource b (b = \sum_{k=1}^{T} b_k):

    Max  \sum_{i=1}^{N}\sum_{j=1}^{n'_i} c'_{ij} z_{ij}
    s.t. \sum_{i=1}^{N}\sum_{j=1}^{n'_i} a'_{ij} z_{ij} <= b
         0 <= z_{ij} <= 1,  i = 1,...,N;  j = 1,...,n'_i.

We prove that the optimal objective of K1 is an upper bound on MKPS in Appendix B.

3.3. Feasible solution (lower bound)

A good initial feasible solution can fathom many candidate nodes and reduce the search time. We use a greedy algorithm to compute one. Algorithm assign(i, t) determines a feasible assignment of family i's jobs to period t when there is resource b_t available. The algorithm returns obj_it, the total profit of this assignment, and res_it, the amount of resource actually used.

Algorithm assign(i, t):
Step 1. Set b = b_t, j = 0, obj_it = f_it, b <- b - d_i, res_it = d_i.
Step 2. j <- j + 1; if j > n_i, stop.
Step 3. If a_ij <= b, then
obj_it <- obj_it + c_ijt, b <- b - a_ij, res_it <- res_it + a_ij; if b > 0, go to Step 2, else stop. Otherwise, go to Step 2.

Algorithm feas is used to get the feasible solution. It uses obj_it to assign a family to a period and updates the available resource. It continues until there is not enough resource left for any job.

Algorithm feas:
Step 1. Set NN = {1,...,N}. Solve assign(i, t) for i = 1,...,N and t = 1,...,T.
Step 2. Choose {r, s} with obj_rs = max{ obj_it | i in NN, t = 1,...,T }; if obj_rs = 0, stop.
Step 3. lb <- lb + obj_rs, b_s <- b_s - res_rs. Delete r from NN. If NN is empty, stop.
Step 4. Solve assign(i, s) for i in NN. Go to Step 2.

3.4. Branch-and-bound algorithm

To develop a branch-and-bound algorithm, we need to make several decisions: how to fix variables, calculate bounds, choose the next sub-problem to explore, and obtain an initial incumbent solution. Also, the order in which to fix variables has to be decided.

3.4.1. Variable order

Order all y_it variables by non-increasing

    pro_it = \sum_{j=1}^{n_i} c_ijt + f_it,  i = 1,...,N;  t = 1,...,T.

If y_it is near the front of this list, it is more likely to be one in an optimal solution; similarly, if y_it is near the end, it is more likely to be zero. Fixing variables at the front or rear first helps keep the number of branches small. We fix variables by looking at the beginning and the end of this ordered list and working toward the middle. So family i's variables y_it, t = 1,...,T, have a search order: if y_it is the k-th of the T variables related to family i, set o(y_it) = k. At the current node, we decide which variable to fix based on the variables already fixed. Assume we fix y_rk. Since each family is assigned to at most one period, y_rt = 0 for o(y_rt) < o(y_rk), t = 1,...,T.

3.4.2. Fixing y_rk

As we proceed through the branch-and-bound algorithm, we fix setup variables to zero or one. If y_rk is free, family r is represented by pseudo variables z_rj, j = 1,...,n'_r; these variables are never fixed. If y_rk is fixed at one, all pseudo variables z_rj, j = 1,...,
n'_r, are removed and real variables x_rjk, j = 1,...,n_r, are included in K1; the x_rjk are always free in the branch-and-bound algorithm. If y_rk is fixed at zero, all pseudo variables z_rj, j = 1,...,n'_r, as well as their coefficients are recalculated, excluding the possibility of family r being set up in period k, and included in K1; again, the new z_rj are always free. When all y_it, i = 1,...,N; t = 1,...,T, are fixed to either zero or one, a knapsack problem over the appropriate x_ijt is solved to determine the optimal solution.

When y_rk is fixed to one, the bounding problem K1 changes as follows:
- the actual setup cost for family r in period k is added to the objective;
- the actual setup time for family r is subtracted from the surrogate constraint;
- the pseudo variables for family r are removed; and
- the real variables x_rjk, j = 1,...,n_r, are added.

When y_rk is fixed to zero, the changes are removing the pseudo variables for family r and adding the new pseudo variables.

We also tighten the relaxation by adding the resource constraint for period k to the bounding problem. Pseudo variables z_it use only the surrogate resource, but x_ijk variables use both the surrogate resource and the resource of period k. Subtracting the setup time reduces the available surrogate resource and reduces the resource of period k. Removing pseudo variables may increase the surrogate resource, but does not affect the resource of period k. Thus, the previous optimal solution of the bounding problem may no longer be feasible or optimal. We could re-solve it from scratch, but we show how to adjust the old solution to obtain the optimal solution of the new bounding problem. First, we introduce some notation. Let

    obj : the current node's upper bound,
    G_t = {i | y_it = 1} : families fixed to period t,
    U = {i | y_it is free} : free families,
    b' = b - \sum_{t=1}^{T} \sum_{i in G_t} d_i : available resource for all variables,
    b'_t = b_t - \sum_{i in G_t} d_i, t = 1,...,T : available resource for families in period t.
The current node's upper bound is the optimal objective of the following formulation, K2; this can be proved by an approach similar to the one used in Appendix A.

    Max  \sum_{i in U}\sum_{j=1}^{n'_i} c'_{ij} z_{ij} + \sum_{t=1}^{T}\sum_{i in G_t}\sum_{j=1}^{n_i} c_{ijt} x_{ijt} + \sum_{t=1}^{T}\sum_{i in G_t} f_{it}
    s.t. \sum_{i in U}\sum_{j=1}^{n'_i} a'_{ij} z_{ij} + \sum_{t=1}^{T}\sum_{i in G_t}\sum_{j=1}^{n_i} a_{ij} x_{ijt} <= b'
         \sum_{i in G_t}\sum_{j=1}^{n_i} a_{ij} x_{ijt} <= b'_t,  t = 1,...,T.

When we fix y_rk to one, set

    b'_k = b'_k - d_r,  b' = b' - d_r,  obj = obj + f_rk,  G_k = G_k U {r},  U = U \ {r},

and y_rt = 0 for t = 1,...,T, t != k. The algorithm to fix y_rk to one can be separated into three steps:

Step 1. Delete pseudo variables z_{r1},...,z_{r,n'_r} from the outer knapsack.

Step 2. Restore feasibility (if necessary).
Step 2.1. Set b'_k <- b'_k - d_r. If \sum_{i in G_k}\sum_j a_ij x_ijk > b'_k, find the variable x_ijk greater than zero with the smallest ratio. Decrease it until either it is zero or \sum_{i in G_k}\sum_j a_ij x_ijk = b'_k. Repeat until \sum_{i in G_k}\sum_j a_ij x_ijk = b'_k is achieved.
Step 2.2. Set b' <- b' - d_r. If \sum_{i in U}\sum_j a'_ij z_ij + \sum_t \sum_{i in G_t}\sum_j a_ij x_ijt > b', repeat the procedure of Step 2.1, except choose either x_ijt or z_ij variables greater than zero with the smallest ratio.

Step 3. Restore optimality.
Step 3.1. Let V be the set of all zero-valued variables, {x_ijt | x_ijt = 0} U {z_ij | z_ij = 0, i in U}. Find the maximum ratio r_max of all variables in V.
Step 3.2. Set all fractional-valued variables, and all variables with value one and ratio less than r_max, to zero. Put these variables into V in the proper ratio order. This releases resource for new variables to use. The variables remaining in K2 now have value one only, and their ratio is no worse than r_max.
Step 3.3. Do the following sub-algorithm to obtain the optimal solution of K2.
Step a. Set k = 1.
Step b. If the k-th variable in V is an x_ijt, go to Step c; else the k-th variable in V is a z_ij, and go to Step d.
Step c. If b'_t = 0, then k <- k + 1; else, if a_ij <= b'_t, then b'_t <- b'_t - a_ij, b' <- b' - a_ij,
ijt obj c?+, obj , and 1 ijt x = ; else ' t ijt ij b x a = , ' bb '' t b? ? ,and ijt ijt obj obj c x? + . Go to Step e. 45 ' ij aStep d. If , thenbb ' ij ba? ''' ? ? ' ij obj c, obj ? + , and . 1 ij z = Set 1kk? + . else ' 'ij ij b z a = , ' 0b = ,and . ' ij ij obj obj c z?+ Step e. If and ' 0b > kV? go to Step b; else stop. When we fix to zero, pseudo variables are deleted from variable set. Since , update . Apply the multiple-choice dominance rules to delete dominated points. Use the remaining points to obtain the updated pseudo variables . We can resolve the problem with new variable set to obtain the upper bound of the sub-problem with rk y ' 1 ,.. i r rn zz 0, ( ) ( ), 1,.. rt rt rk yoyoyt=<=T it P ' ()( ) rt rk i oy oy P > = ? ' 1 ,.. i r rn zz 0 rk y = . In this case, only steps 1 and 3 are needed to resolve the problem. 3.4.3. Choosing a new sub-problem When variables are fixed, two sub-problems are created. If a sub-problem?s upper bound is no better than an incumbent solution it is discarded. When its bound indicates it could contain a better solution to MKPS we store it in a bucket. Each bucket contains sub-problems with bounds that are about the same. LetUB be the best upper bound and be the value of the current incumbent solution. If we wantINC K buckets, calculate ()UB INC K ? ?= . Then bucket one will contain all sub-problems with upper bounds in the interval[ , bucket two[2,UB UB?? ] ],UB UB? ???, and bucket K [,INC INC +?]. Buckets can be updated as upper bounds or the incumbent change. When we choose a new sub-problem to explore, we take one based on LIFO from the lowest numbered, non- empty bucket. This gives almost a ?best-bound? strategy, but without the bookkeeping overhead. 3.5. Computational experiments We test AMKPS on a variety of problem instances to see what problems can be solved in reasonable time. Instances are generated by setting four parameters at several levels. 
The parameters are the average number of jobs in a family, the number of periods, the proportion of setup time/cost relative to the totals, and resource tightness. The number of families is fixed at ten (N = 10). The number of jobs in a family is integer, uniformly distributed over one of the three intervals [40, 50], [60, 70] and [80, 90]. The number of periods is either five or seven, corresponding to a work week. Setup cost and time are determined by

f_it = −e_1 (Σ_{j=1}^{n_i} c_ijt)  and  d_i = e_2 (Σ_{j=1}^{n_i} a_ij).

We choose e_1 and e_2 uniformly from [0.15, 0.25], [0.25, 0.35], [0.35, 0.45], and [0.45, 0.55]. Resource availability is determined by

b_t = (Σ_{i=1}^{N} Σ_{j=1}^{n_i} a_ij) / K,

where K is 10, 7.5 or 5. Finally, c_ijt and a_ij are random integers chosen from [10, 10000].

For each level of the four factors we generate ten instances. AMKPS was coded in C and all instances were run on a Dell PC with a 1.7 GHz Intel processor and 512 MB of memory. In the following tables, we report the minimum (MIN), average (AVG) and maximum (MAX) solution time in minutes. A zero indicates less than one minute of computational time. We also give the average ratio of the initial solution (INC) to the initial upper bound (UB) and the average ratio of the initial solution to the optimal solution (OPT). Table 3.1 gives results for five-period problems and Table 3.2 for seven periods.
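The generation scheme just described can be sketched as follows (an illustrative sketch, not the dissertation's code; it assumes the sign convention that the setup cost f_it enters the objective as a negative quantity):

```python
import random

def generate_instance(N=10, jobs_range=(40, 50), T=5,
                      e1_range=(0.15, 0.25), e2_range=(0.15, 0.25), K=10):
    """Generate one MKPS test instance as described above (names illustrative)."""
    n = [random.randint(*jobs_range) for _ in range(N)]        # jobs per family
    c = [[[random.randint(10, 10000) for _ in range(T)]        # c[i][j][t]
          for _ in range(n[i])] for i in range(N)]
    a = [[random.randint(10, 10000) for _ in range(n[i])] for i in range(N)]
    e1 = random.uniform(*e1_range)
    e2 = random.uniform(*e2_range)
    # setup cost f_it = -e1 * sum_j c_ijt  (negative, by assumption here)
    f = [[-e1 * sum(c[i][j][t] for j in range(n[i])) for t in range(T)]
         for i in range(N)]
    # setup time d_i = e2 * sum_j a_ij
    d = [e2 * sum(a[i]) for i in range(N)]
    # per-period resource b_t = (sum of all a_ij) / K
    b = [sum(map(sum, a)) / K] * T
    return n, c, a, f, d, b
```
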
Table 3.1 Solution time (minutes) for AMKPS for 5 periods
(Each column group gives INC/UB, INC/OPT, MAX, AVG, MIN for the job intervals [40, 50], [60, 70] and [80, 90].)

Resource  Setup        | [40, 50]           | [60, 70]           | [80, 90]
K=10      [0.15 0.25]  | 0.84 0.87 0 0 0    | 0.85 0.88 0 0 0    | 0.85 0.88 0 0 0
          [0.25 0.35]  | 0.93 0.98 0 0 0    | 0.94 0.99 0 0 0    | 0.95 1.00 0 0 0
          [0.35 0.45]  | 0.96 0.99 0 0 0    | 0.97 0.99 0 0 0    | 0.98 0.99 0 0 0
          [0.45 0.55]  | 0.92 0.99 0 0 0    | 0.92 0.99 0 0 0    | 0.91 0.99 0 0 0
          Average      | 0.91 0.96 0 0 0    | 0.92 0.96 0 0 0    | 0.92 0.97 0 0 0
K=7.5     [0.15 0.25]  | 0.73 0.74 0 0 0    | 0.72 0.73 0 0 0    | 0.71 0.72 0 0 0
          [0.25 0.35]  | 0.82 0.89 0 0 0    | 0.81 0.87 1 0 0    | 0.80 0.85 1 0 0
          [0.35 0.45]  | 0.89 0.98 0 0 0    | 0.89 0.99 1 0 0    | 0.89 0.99 1 0 0
          [0.45 0.55]  | 0.94 0.99 0 0 0    | 0.95 0.99 0 0 0    | 0.96 1.00 0 0 0
          Average      | 0.85 0.90 0 0 0    | 0.84 0.90 0 0 0    | 0.84 0.89 0 0 0
K=5       [0.15 0.25]  | 0.94 0.95 0 0 0    | 0.94 0.95 0 0 0    | 0.94 0.95 0 0 0
          [0.25 0.35]  | 0.86 0.88 0 0 0    | 0.89 0.90 0 0 0    | 0.89 0.90 0 0 0
          [0.35 0.45]  | 0.76 0.79 0 0 0    | 0.78 0.80 0 0 0    | 0.76 0.78 0 0 0
          [0.45 0.55]  | 0.72 0.82 1 1 0    | 0.71 0.80 3 1 1    | 0.70 0.80 8 3 2
          Average      | 0.82 0.86 0 0 0    | 0.83 0.86 1 0 0    | 0.82 0.86 2 1 0

Table 3.2 Solution time (minutes) for AMKPS for 7 periods
(Same column layout as Table 3.1.)

Resource  Setup        | [40, 50]           | [60, 70]           | [80, 90]
K=10      [0.15 0.25]  | 0.87 0.92 2 1 0    | 0.86 0.91 5 3 1    | 0.86 0.90 9 5 2
          [0.25 0.35]  | 0.94 0.99 0 0 0    | 0.94 0.99 0 0 0    | 0.95 0.99 1 0 0
          [0.35 0.45]  | 0.96 0.99 0 0 0    | 0.96 0.99 0 0 0    | 0.97 0.99 0 0 0
          [0.45 0.55]  | 0.88 0.98 0 0 0    | 0.88 0.99 0 0 0    | 0.87 0.99 0 0 0
          Average      | 0.91 0.97 1 0 0    | 0.91 0.97 1 1 0    | 0.91 0.97 3 1 0
K=7.5     [0.15 0.25]  | 0.80 0.86 7 3 1    | 0.79 0.85 12 6 2   | 0.78 0.85 26 17 10
          [0.25 0.35]  | 0.82 0.91 11 4 1   | 0.83 0.91 22 10 2  | 0.81 0.90 29 19* 13
          [0.35 0.45]  | 0.90 0.98 2 1 0    | 0.91 1.00 12 3 0   | 0.91 0.99 10 4* 1
          [0.45 0.55]  | 0.94 0.99 0 0 0    | 0.95 1.00 0 0 0    | 0.96 1.00 0 0 0
          Average      | 0.87 0.94 5 2 0    | 0.87 0.94 12 5 1   | 0.87 0.94 16 10 6
K=5       [0.15 0.25]  | 0.95 0.98 0 0 0    | 0.95 0.98 0 0 0    | 0.96 0.98 0 0 0
          [0.25 0.35]  | 0.90 0.95 1 0 0    | 0.91 0.96 1 0 0    | 0.90 0.95 3 1 0
          [0.35 0.45]  | 0.82 0.91 3 1 0    | 0.83 0.92 8 3 1    | 0.82 0.91 11 4 2
          [0.45 0.55]  | 0.75 0.89 9 4 1    | 0.74 0.87 16 9 3   | 0.74 0.87 19 10 5
          Average      | 0.86 0.93 3 2 0    | 0.86 0.93 6 3 1    | 0.86 0.93 8 4 2

Note: "*" means some instances ran out of memory; the AVG value in the table is the average of the remaining instances, as are MAX and MIN.

AMKPS performs very well on five-period problems, with the hardest taking less than 8 minutes. Seven-period instances are harder, but most are solved in less than 30 minutes. Seven of the 720 instances were not solved by AMKPS; these instances had seven periods, an average of 85 jobs per family, resource tightness K = 7.5, and setup percentage [0.25, 0.35] or [0.35, 0.45]. These instances used up memory. The MIN, MAX, and AVG are of the problems actually solved. Solution time increases as the number of periods and the number of jobs increase. We ran some instances with other combinations of numbers of periods and jobs and found that the solution time changes in almost the same way as for the test problems.

The relationships between setup and resource tightness are more complex. Figs. 3.1 to 3.6 demonstrate this. (Each figure plots solution time in minutes against the setup interval, with one curve for each of K = 10, 7.5 and 5.)

[Fig. 3.1. Solution time for average 45 jobs per family and 5 periods]
[Fig. 3.2. Solution time for average 65 jobs per family and 5 periods]
[Fig. 3.3. Solution time for average 85 jobs per family and 5 periods]
[Fig. 3.4. Solution time for average 45 jobs per family and 7 periods]
[Fig. 3.5. Solution time for average 65 jobs per family and 7 periods]
[Fig. 3.6. Solution time for average 85 jobs per family and 7 periods]

When K = 10, the maximum time occurs on instances with setup from [0.15, 0.25]; when K = 7.5, on instances with setup from [0.25, 0.35]; when K = 5, instances with setup from [0.45, 0.55] use the most time. From these plots, we conclude that problems become difficult when e·K is roughly in (2, 3).

The heuristic algorithm given in this paper is very effective, especially when the resources are tight. We give its quality (the average ratio of the lower bound to the initial upper bound and to the optimal solution) in Table 3.3. The quality decreases as the resource in each period increases. For both five- and seven-period problems, the heuristic is good, typically in the 85%-95% range.

Table 3.3 The lower bound, upper bound and optimal solution

                            | [40, 50]      | [60, 70]      | [80, 90]
Period    Resource         | INC/UB INC/OPT| INC/UB INC/OPT| INC/UB INC/OPT
Period 5  K=10  Average    | 0.91   0.96   | 0.92   0.96   | 0.92   0.97
          K=7.5 Average    | 0.85   0.90   | 0.84   0.90   | 0.84   0.89
          K=5   Average    | 0.82   0.86   | 0.83   0.86   | 0.82   0.86
Period 7  K=10  Average    | 0.91   0.97   | 0.91   0.97   | 0.91   0.97
          K=7.5 Average    | 0.87   0.94   | 0.87   0.94   | 0.87   0.94
          K=5   Average    | 0.86   0.93   | 0.86   0.93   | 0.86   0.93

We also compare AMKPS to CPLEX 9.1 (called from AMPL). We choose the hardest instances for AMKPS (7 periods and n_i ~ [80, 90]) for the comparison. Trial runs on other instances showed these results are typical. Because of the difficulty of solving with CPLEX, only five instances per level were solved. Table 3.4 shows the clear superiority of AMKPS.
CPLEX solved very few problems in less than two hours; we let one run until the optimal solution was obtained, and it took over 29 hours.

Table 3.4 The comparison of solution time (minutes) between AMKPS and CPLEX

Setup        | K=10: CPLEX AMKPS | K=7.5: CPLEX AMKPS | K=5: CPLEX AMKPS
[0.15,0.25]  |   *     2         |   *     13         |  26     0
             |   *     11        |   *     16         |   7     0
             |   *     5         |   *     13         |  10     0
             |   *     4         |   *     30         |   9     0
             |   *     12        |   *     12         |   8     0
       AVG   |   *     7         |   *     17         |  12     0
[0.25,0.35]  |   *     1         |  116    21         |  36     0
             |   *     0         |   *     10         |   *     1
             |   *     0         |   *     *          |  77     1
             |   *     0         |   *     28         |  26     0
             |   *     0         |   *     *          |   *     2
       AVG   |   *     0         |   *     *          |   *     1
[0.35,0.45]  |   *     0         |   *     6          |   *     2
             |   *     0         |   *     2          |   *     3
             |   *     0         |   *     *          |   *     4
             |   *     0         |   *     6          |   *     3
             |   *     0         |   *     2          |   *     9
       AVG   |   *     0         |   *     *          |   *     4
[0.45,0.55]  |   *     0         |  19     0          |   *     14
             |   *     0         |  14     0          |   *     22
             |   *     0         |   4     0          |   *     27
             |   *     0         |  22     0          |   *     16
             |   *     0         |   7     0          |   *     11
       AVG   |   *     0         |  13     0          |   *     18

Note: "*" means the instance used more than 2 hours or ran out of memory.

AMKPS used less time than CPLEX for all but three instances when e·K is in (2, 3) and n_i is from [80, 90]. We also ran experiments on instances with fewer variables, and AMKPS again used considerably less time than CPLEX.

3.6. Conclusions

The MKPS model can be used for order selection in make-to-order manufacturing. In this paper, we use a branch-and-bound algorithm to solve MKPS and design a new method to obtain an upper bound on MKPS. Rather than relaxing constraints of the original model, we propose a new linear knapsack model to obtain an upper bound; we prove its optimal objective is an upper bound on MKPS. In branching, when a family is fixed to a period, we add that period's resource constraint to tighten the relaxation. This prohibits jobs from using more than the period's capacity, and the resulting knapsack problem can still be solved efficiently. We also give an effective greedy heuristic which supplies a good feasible solution as a lower bound. After all y variables are branched on, MKPS is transformed into knapsack problems. The computational experiments show that AMKPS works well even with a tight resource limit.
Sixty seven-period instances were tested to compare AMKPS with CPLEX: AMKPS solves 57 of them in less than 30 minutes, while CPLEX fails on 46 instances and needs more time than AMKPS on the remaining 14. In this paper, we only use a simple branch-and-bound algorithm for the knapsack problem that remains when all setup variables are fixed. If a better algorithm, e.g. the one developed by Martello et al. (1999), were used, the solution time could be reduced further.

Appendix A. The optimal objective of K1 is an upper bound on MKPS

Before proving the proposition, we need the following lemma.

Lemma 1. A linear knapsack problem can be transformed into a concave piecewise linear function.

Proof. Consider the knapsack problem

$$\max \sum_{j=1}^{n} c_j x_j \quad \text{s.t.} \quad \sum_{j=1}^{n} a_j x_j \le b, \quad 0 \le x_j \le 1, \; j = 1,\ldots,n.$$

Order all variables by non-increasing ratio c_j / a_j. Define the point set

$$P = \{(0,0)\} \cup \Big\{\Big(\sum_{j=1}^{t} a_j, \; \sum_{j=1}^{t} c_j\Big) \;\Big|\; t = 1,\ldots,n\Big\}.$$

Putting these points on coordinates and connecting the adjacent points, we obtain a concave piecewise linear function F, and these points are the break points of F. F(b) is the optimal objective of the linear knapsack problem: if b ≥ Σ_{j=1}^{n} a_j, set F(b) = Σ_{j=1}^{n} c_j; otherwise

x_t = 1 if Σ_{j=1}^{t} a_j ≤ b,
x_t = (b − Σ_{j=1}^{t−1} a_j) / a_t if Σ_{j=1}^{t−1} a_j ≤ b < Σ_{j=1}^{t} a_j,
x_t = 0 if b ≤ Σ_{j=1}^{t−1} a_j.

Conversely, if we know F(b), we can construct an equivalent knapsack problem from the piecewise linear function.

Proposition 1. The optimal objective of K1 is an upper bound on MKPS.

Proof. The coefficients of the variables from family i in period t, namely f_it, c_i1t,...,c_{in_i t} and d_i, a_i1,...,a_{in_i}, construct a linear knapsack problem with setup, say LKPS_it:

$$\max \sum_{j=1}^{n_i} c_{ijt} x_{ij} + f_{it} y_i$$
$$\text{s.t.} \quad \sum_{j=1}^{n_i} a_{ij} x_{ij} + d_i y_i \le b_0,$$
$$x_{ij} \le y_i, \; j = 1,\ldots,n_i, \qquad 0 \le x_{ij} \le 1, \qquad 0 \le y_i \le 1.$$

Based on Bulfin's algorithm, this formulation can be transformed into a linear knapsack problem with pseudo variables z_i1,...,z_{in'_i} and their corresponding profit and processing coefficients c'_ij, a'_ij.
The pseudo variables are ordered by non-increasing ratio c'_ij / a'_ij, j = 1,...,n'_i. Then we obtain a piecewise linear function F_it with break point set P_it, so that for any available resource b_0, F_it(b_0) is the optimal objective of LKPS_it.

We define P' = ∪_{t=1}^{T} P_it and delete all dominated points by the two multiple-choice dominance rules for the linear multiple-choice knapsack problem (Sinha and Zoltners, 1979). Connecting the remaining points, we obtain another piecewise linear function F_i, which has F_i(b_0) ≥ F_it(b_0) for all b_0 > 0 and all t.

If the optimal solution of MKPS is known, assume y_it = 1 and let w_i and profit_i, i = 1,...,N, be the resource used and the profit obtained from family i, with solution set S_i = { x_ijt | x_ijt = 1 }. Then S_i is a feasible solution of the following linear knapsack problem with setup (LKPS1), and profit_i is the objective of this feasible solution:

$$\max \sum_{j=1}^{n_i} c_{ijt} x_{ij} + f_{it} y_i$$
$$\text{s.t.} \quad \sum_{j=1}^{n_i} a_{ij} x_{ij} + d_i y_i \le w_i,$$
$$x_{ij} \le y_i, \; j = 1,\ldots,n_i, \qquad 0 \le x_{ij} \le 1, \qquad 0 \le y_i \le 1.$$

Since F_it(w_i) is the optimal objective of LKPS1, F_it(w_i) ≥ profit_i, i = 1,...,N. Since F_i(w_i) ≥ F_it(w_i), we have F_i(w_i) ≥ profit_i, i = 1,...,N, and therefore

$$\sum_{i=1}^{N} F_i(w_i) \ge \sum_{i=1}^{N} profit_i.$$

Set z'_ij = 1 for j = 1,...,k, where k satisfies Σ_{j=1}^{k} a'_ij ≤ w_i < Σ_{j=1}^{k+1} a'_ij; set z'_{i,k+1} = (w_i − Σ_{j=1}^{k} a'_ij) / a'_{i,k+1}; and set z'_ij = 0 for j > k + 1. Then the solution set z'_ij, j = 1,...,n'_i, i = 1,...,N, is a feasible solution of K1, since Σ_{i=1}^{N} w_i ≤ b. Hence the optimal objective of K1 is greater than or equal to Σ_{i=1}^{N} F_i(w_i), and is an upper bound on MKPS.

References

Akinc, U. 2004. Approximate and exact algorithms for the fixed-charge knapsack problem. European Journal of Operational Research 170, 363-375.
Bulfin, R.L. 1988. An algorithm for the continuous, variable upper bound knapsack problem. OPSEARCH 25 (2), 119-125.
Dantzig, G.B. 1957. Discrete variable extremum problems. Operations Research 5, 266-277.
Martello, S., Pisinger, D., Toth, P. 1999.
Dynamic programming and strong bounds for the 0-1 knapsack problem. Management Science 45 (3), 414-424.
Martello, S., Toth, P. 1980. Solution of the zero-one multiple knapsack problem. European Journal of Operational Research 4, 276-283.
Martello, S., Toth, P. 1981. A bound and bound algorithm for the zero-one multiple knapsack problem. Discrete Applied Mathematics 3, 275-288.
Pisinger, D. 1995. A minimal algorithm for the multiple-choice knapsack problem. European Journal of Operational Research 83, 394-410.
Pisinger, D. 1999. An exact algorithm for large multiple knapsack problems. European Journal of Operational Research 114, 528-541.
Sinha, A., Zoltners, A.A. 1979. The multiple-choice knapsack problem. Operations Research 27, 503-515.

IV. MULTIPLE-CHOICE KNAPSACK PROBLEM WITH SETUP

Abstract

We present a multiple-choice knapsack problem with setup (MCKS). This model can be applied to regional project selection over multiple periods. In the model, some variables model setups and serve as upper bounds on the remaining ones. A linear knapsack problem is designed to give an upper bound on MCKS, and a branch-and-bound algorithm is used to solve MCKS optimally. Setup variables are branched on; when all are fixed, MCKS becomes a knapsack problem. Computational experiments show this algorithm is effective, even on instances CPLEX cannot solve in two hours.

4.1. Introduction and literature review

The multiple-choice knapsack problem (MCK) is well known in combinatorial optimization. In this paper, we present a model we call the multiple-choice knapsack problem with setup (MCKS). This model can be used for regional project selection in multiple periods by an organization (country or company) which has a fixed budget to invest in a number of projects in multiple areas, where projects can be done in multiple periods. To do a project in an area, a project office must first be set up.
The organization would like to decide where to set up offices and which projects to do so as to maximize net present value subject to a budget restriction. The formulation of MCKS is:

$$\max \sum_{t=1}^{T}\sum_{i=1}^{N}\sum_{j=1}^{n_i} c_{ijt} x_{ijt} + \sum_{t=1}^{T}\sum_{i=1}^{N} f_{it} y_{it}$$

subject to

$$\sum_{t=1}^{T}\sum_{i=1}^{N}\sum_{j=1}^{n_i} a_{ij} x_{ijt} + \sum_{t=1}^{T}\sum_{i=1}^{N} d_i y_{it} \le b, \quad (1)$$
$$x_{ijt} \le y_{it}, \quad j = 1,\ldots,n_i; \; i = 1,\ldots,N; \; t = 1,\ldots,T, \quad (2)$$
$$\sum_{t=1}^{T} x_{ijt} \le 1, \quad i = 1,\ldots,N; \; j = 1,\ldots,n_i, \quad (3)$$
$$x_{ijt}, \, y_{it} \in \{0,1\}, \quad i = 1,\ldots,N; \; j = 1,\ldots,n_i; \; t = 1,\ldots,T, \quad (4)$$

where

c_ijt is the profit of project j in area i in period t (c_ijt ≥ 0),
f_it is the setup cost for opening an office in area i in period t (f_it ≤ 0),
a_ij is the investment needed for project j in area i (a_ij > 0),
d_i is the investment cost to open an office in area i (d_i > 0),
b is the budget available to invest (b > 0),
y_it is one if an office is set up in area i in period t, otherwise zero,
x_ijt is one if project j in area i is done in period t, otherwise zero,
N is the number of areas,
T is the number of periods.

Constraint (1) requires that the total budget used by all projects and office setups cannot exceed the budget available. Constraints (2) prohibit a project from being done unless the office in its area is set up. Constraints (3) guarantee that a project can be done at most once. Constraints (4) require all variables to be binary.

Besides the application to regional development, this model can also be used for order acceptance in multiple periods with a non-renewable resource. We develop an upper bound and an effective heuristic for MCKS based on the linear knapsack problem with setup and the linear multiple-choice knapsack problem. Following traditional terminology, we call area i family i, and project j in area i job j of family i. We also call d_i the setup time of family i and f_it the setup cost of family i in period t.
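To make constraints (1)-(4) concrete, here is a minimal exhaustive solver for a toy MCKS instance. This is purely illustrative (the names and structure are mine, not the dissertation's AMCKS code) and is only practical for very small N, T and n_i:

```python
from itertools import product

def solve_mcks_brute(c, f, a, d, b):
    """Enumerate all binary (x, y) for a tiny MCKS instance.
    c[i][j][t]: profit, f[i][t]: setup cost (<= 0), a[i][j]: investment,
    d[i]: office cost, b: budget.  Returns the best objective."""
    N, T = len(c), len(f[0])
    n = [len(ci) for ci in c]
    cells = [(i, j) for i in range(N) for j in range(n[i])]
    best = 0.0
    for y in product((0, 1), repeat=N * T):
        yv = [y[i * T:(i + 1) * T] for i in range(N)]
        base_obj = sum(f[i][t] for i in range(N) for t in range(T) if yv[i][t])
        base_use = sum(d[i] * sum(yv[i]) for i in range(N))
        # each job goes to one period or is left undone -- constraint (3)
        for assign in product([None] + list(range(T)), repeat=len(cells)):
            obj, used, ok = base_obj, base_use, True
            for (i, j), t in zip(cells, assign):
                if t is None:
                    continue
                if not yv[i][t]:          # constraint (2): office must be open
                    ok = False
                    break
                obj += c[i][j][t]
                used += a[i][j]
            if ok and used <= b:          # constraint (1): budget
                best = max(best, obj)
    return best
```

For instance, with one area, two periods, two projects, profits c = [[[5, 4], [3, 6]]], setup costs f = [[-2, -3]], investments a = [[2, 3]], office cost d = [1] and budget 7, opening only the period-2 office and doing both projects there gives the optimum.
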
i t For the linear knapsack problem, Dantzig (1957) gave an algorithm which allocates the limited resource to jobs based on the non-increasing profit-to-processing ratio. Without y variables, MCKS becomes a multiple-choice knapsack problem, another well- studied problem. (See Pisinger,1995; Sarin and Karwan, 1989; Armstrong et al, 1983 ) Two dominance rules for the linear multiple-choice knapsack problem (Sinha and Zoltners, 1979) are used to develop a linear knapsack problem as an upper bound on MCKS. Without constraint (2), MCKS becomes a knapsack problem with setup. Bulfin (1988) gave an efficient algorithm for its linear relaxation. We explain this algorithm in the following section. Akinc (2004) describes algorithms for a fixed-charge knapsack problem, which is a special case of MCKS; it has a single period and no setup time. We use a branch-and-bound algorithm to obtain the optimal solution to MCKS. It can be briefly described by two steps. We branch on variables; when all variables are fixed, the problem is a knapsack problem in the variables. We use a simple branch-and- bound algorithm to solve this knapsack problem. To reduce the branches of the tree, y y x 61 y variables are reordered before branching. The ordering process is as follows: Order variables by non-increasing it y 1 ,1,.,1,. i n it ijt it j pro c f i N t T = =+= = ? . If is near the front, then this variable is more likely to be 1. Similarly if is near the end, it is more likely to be 0. We fix variables by looking at the beginning and end of this ordered list and working toward the middle. Fixing variables first at the front or rear aids in keeping the number of branches small. Renumber variables by this order so that will be branched on before . it y it y it y it y 1it y + The algorithm (AMCKS) for solving MCKS is outlined below: Step 1. Obtain an upper bound formulation for MCKS and a feasible solution for MCKS. Step 2. Decide which variable to be fixed in the current node. Step 2.1. 
Generate a new node by fixing some y to one; save this node if its bound is better than the incumbent solution. If all y are fixed, solve a knapsack problem and update the incumbent solution if possible.

Step 2.2. Generate a new node by fixing the y to zero; save this node if its bound is better than the incumbent solution. If all y are fixed, solve a knapsack problem and update the incumbent solution if possible. Delete the current candidate node.

Step 3. Choose a new candidate node. If none exists, stop; the incumbent solution is optimal. Else go to Step 2.

In the remainder of the paper, we discuss Step 1 in Section 4.2. Section 4.3 explains the algorithms used in Steps 2 and 3. Section 4.4 discusses computational experiments.

4.2. An upper bound and feasible solution

Unlike the usual approach of relaxing some constraints of a formulation to obtain an upper bound, we design a linear knapsack problem whose optimal objective is an upper bound on MCKS. This approach uses the algorithm presented by Bulfin (1988) for the linear knapsack problem with setup, which transforms a linear knapsack problem with setup into a linear knapsack problem.

4.2.1. Linear knapsack problem

Consider a linear knapsack problem:

$$\max \sum_{j=1}^{n} c_j x_j \quad \text{s.t.} \quad \sum_{j=1}^{n} a_j x_j \le b, \quad 0 \le x_j \le 1.$$

All variables are ordered by non-increasing c_j / a_j. By Dantzig's algorithm (1957), if Σ_{j=1}^{k} a_j ≤ b < Σ_{j=1}^{k+1} a_j, then

x_j = 1, j = 1,...,k,
x_{k+1} = (b − Σ_{j=1}^{k} a_j) / a_{k+1},
x_j = 0, j = k+2,...,n.
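Dantzig's greedy rule above can be sketched as follows (an illustrative implementation; the names are mine):

```python
def lp_knapsack(c, a, b):
    """Dantzig's solution to the linear knapsack relaxation: fill items in
    non-increasing c_j/a_j order; at most one item ends up fractional.
    Returns (optimal objective, x) with x in the original item order."""
    order = sorted(range(len(c)), key=lambda j: c[j] / a[j], reverse=True)
    x = [0.0] * len(c)
    obj, cap = 0.0, b
    for j in order:
        if a[j] <= cap:               # whole item fits
            x[j] = 1.0
            obj += c[j]
            cap -= a[j]
        elif cap > 0:                 # fractional critical item
            x[j] = cap / a[j]
            obj += c[j] * x[j]
            cap = 0.0
        else:
            break
    return obj, x
```

With c = [60, 100, 120], a = [10, 20, 30] and b = 50, the ratios are 6, 5, 4, so the first two items are taken whole and the third fractionally.
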
k kj j p xa = = ? and 1 . k kj j p yc = = ? 1, ..kn= . 1,.. k p kn= are the break points of the piecewise function . Conversely the break points of a concave piecewise function define a linear knapsack problem with some resource by defining F n F b j x corresponding to 1 .. jj j apxp ? x= ? and 1 .. jj j cpypy ? = ? 1, ..jn= , , . 4.2.2. Transform a linear knapsack problem with setup to a linear knapsack problem Consider the linear knapsack problem with setup: 11 1 11 1 .. , 1, ; 1, , 01,;1, 01,1,.. i i nNN ij ij i i ij i nNN ij ij i i ij i ij i i ij i i Max c x f y st ax dy b x yj ni N x jn N yiN == = == = + +? ?== ?== ?? = ?? ? ?? ? Bulfin (1988) proposed an efficient algorithm, similar to Dantzig?s algorithm for the linear knapsack problem (1957). Reorder all jobs of family so that i 1 1 ,1,. 1 ij ij i ij ij jn aa + + ?=?1, ..iN cc , = . Let 11 1 1 max{ | 1, 2,.. } i i t k ij i ij i jj itk ij iij i jj cf cf kn adad == = = ++ == ++ ?? ?? for iN? . Then for family , jobs can be separated into two sets: i i XM = {1? }and i t i XT = { +1.... }. The jobs in i t i n i XM can be considered as a single job. Now for family , define: i ' 1 1 ' 1 1 ' ,1 ' ,1 ' 1, .. 1, .. 1 i i i i t iiji j t iiji j ijt ij i i ijt ij i i iii ccf aad ccjt aajt nnt = = ?+ ?+ =+ =+ ==+ ==+ =?+ ? ? n n n Then linear knapsack problem with setup can be reformulated as: ' ' ' 11 ' 11 .. 0 1, 1,.. 1,... i i nN ij ij ij nN ij ij ij ij i Max c z st az b ziNj == == ? ???= = ?? ?? 65 Pseudo job is composed of jobs 1i z 12 ,,. i ii t x xxand i ij ij t zx + = , for 2,.. ii j nt=?. After solving this linear knapsack problem, at most one variable can have a fractional value, say . If , then andf 1k z= f k yf= ,1,. kj k x fj t= = . If , 1 kl zfl= ? , then only k kl t x f + = . 4.2.3. The algorithm for the upper bound and feasible solution Before we explain the approach to obtain an upper bound and a feasible solution for MCKS, let us introduce the period subset?s piecewise function of family . i 4.2.3.1. 
A subset's piecewise function

If the optimal solution of MCKS were known, then S_i = { t | y_it = 1 } would be the set of periods in which family i is processed. But before solving MCKS, S_i is unknown; we only know that S_i must be a subset of {1,...,T}. The set {1,...,T} has 2^T − 1 non-empty subsets, which we denote S^1,...,S^K, with K = 2^T − 1; |S^k| is the cardinality of subset S^k.

For any S^k ⊆ {1,...,T}, k = 1,...,K, define

c_{ijS^k} = max { c_ijt | t ∈ S^k },
f_{iS^k} = Σ_{t∈S^k} f_it,
d_{iS^k} = |S^k| d_i.

Using pseudo variables x'_ij and y'_i, we can formulate a linear knapsack problem with setup for each S^k:

$$\max \sum_{j=1}^{n_i} c_{ijS^k} x'_{ij} + f_{iS^k} y'_i$$
$$\text{s.t.} \quad \sum_{j=1}^{n_i} a_{ij} x'_{ij} + d_{iS^k} y'_i \le b_0,$$
$$x'_{ij} \le y'_i, \; j = 1,\ldots,n_i, \qquad 0 \le x'_{ij} \le 1, \qquad 0 \le y'_i \le 1.$$

This problem can be transformed into a linear knapsack problem as in Section 4.2.2; that linear knapsack problem defines a concave piecewise linear function F_{iS^k} with break point set P_{iS^k}. Thus F_{iS^k} is the piecewise function of S^k for family i.

4.2.3.2. Upper bound formulation of MCKS

After obtaining F_{iS^k} and its break point set P_{iS^k} for each subset S^k of family i, we define P'_i = ∪_{k=1}^{K} P_{iS^k} and delete any repeated points in the set. Apply the following two multiple-choice dominance rules (Sinha and Zoltners, 1979) to P'_i:
Repeating this process for all families, we obtainN ' 1 N i i n = ? pseudo variables and formulate a linear knapsack problem with resource . u LKP b ' ' ' 11 ' 11 ' .. 01,1,.1,. i i nN ij ij ij nN ij ij ij ij i Max c z st az b ziNj == == ? ??= = ?? ?? n We prove that the optimal objective of is an upper bound on MCKS. This problem has at most one fractional variable. If we round this fractional variable to zero, then we obtain a feasible integer solution for . The integer solution of corresponds to a feasible solution of MCKS and its objective is a lower bound on MCKS. (Refer to the Appendix C for their proofs). u LKP u LKP u LKP This approach is impractical if T is large. We present three dominance rules to reduce the number of subsets considered. (Refer to the Appendix D for their proofs). Consider two subsets of family : if , {1,.. } rk SS T? i 00 () () rk iS iS F bFb? for all , then dominates . All break points of are below , so 0 0b > r S k S k iS F r iS F k iS i PP? which means need not be included in . Consider k iS P i P ? 1 ,.. K SSof familyi : Dominance rule 3. If and , then dominates . r SS? k k 11 rr k nn ijS iS ijS iS jj cf cf == +> + ?? r S k S Dominance rule 4. Assume and dominates . If there is another subset with rk SS? r S k S l S lk SS??=, then dominates . l SS? r lk SS? Dominance rule 5. Assume , rk SS? 0 rk iS iS ff= = and 0 rk iS iS dd= = . If there is a subset with l S lk SS??=, then dominates . l SS? k lr SS? With the help of dominance rules, the break points of some period subsets? piecewise functions need not be included into i P ? and thus reduce the effect to determine . After finding all non-dominated subsets, we calculate the break points of their piecewise functions. We do not put all break points together and apply dominance rules 1 and 2 once; rather we add these points into u LKP i P ? in a specific order and apply dominance rules 1 and 2 totalT times. 
Define S_ik to be the collection of all non-empty subsets of {k,...,T} for family i, and let P'_ik be the set of all non-dominated points from the piecewise functions of all elements of S_ik. With the help of dominance rule 4, the algorithm to generate P_i is:

Step 1. Set k = T − 1.
Step 2. While k > 0:
  Set S = { S^j | k ∈ S^j, S^j ∈ S_ik }.
  Set S_ik = S ∪ S_{i,k+1}.
  Apply dominance rule 3 to delete dominated sets in S_ik.
  Set k = k − 1.
Step 3. Calculate F_{iS^k} and P_{iS^k} for all S^k ∈ S_i1.
Step 4. Set k = T and P'_{i,T+1} = ∅.
Step 5. While k > 0:
  Set P'_ik = P'_{i,k+1} ∪ { P_{iS^k} | S^k ∈ S_ik \ S_{i,k+1} }.
  Apply rules 1 and 2 to delete dominated points in P'_ik.
  Set k = k − 1.

When the algorithm ends, P'_i1 is the P_i we need. Once P_i, i = 1,...,N, are known, LKP^u is obtained.

4.3. Fixing y_it

When we fix y_it to one or zero, the variables y_ik, k = 1,...,t − 1, have already been fixed by the variable order. Define S_i^1 = { k | y_ik = 1, k ≤ t }, the set of periods of family i already fixed to one.

4.3.1. Fixing y_it to one

When we fix y_it to one, d_i is subtracted from b and f_it is added to the objective; f_it and d_i can then be viewed as zero. The coefficients f_{iS^k} and d_{iS^k} of any subset S^k containing t or other periods of S_i^1 change. Therefore all F_{iS^k} and P_{iS^k} related to these subsets change, which can cause a change of P_i.

We could calculate the new piecewise functions for all affected S^k and apply the above algorithm to obtain an updated P_i, and then obtain the new pseudo variables and their processing-time and profit coefficients of family i from the updated P_i. This updating process can be simplified by applying dominance rule 5: subset S_i^1 dominates any subset S^k ⊆ S_i^1. Therefore, we only need to consider the subsets { S_i^1 ∪ S^j | S^j ∈ S_{i,t+1} } and S_{i,t+1}. The process to update P_i is:

Step 1. Calculate the piecewise function of subset S_i^1.
Step 2. Calculate the piecewise functions of the subsets { S_i^1 ∪ S^j | S^j ∈ S_{i,t+1} }.
Step 3. P'_{i,t+1}, corresponding to S_{i,t+1}, is already known from the calculation of the upper bound on MCKS. Set

P'_i = P'_{i,t+1} ∪ { P_{iS_i^1} } ∪ { P_{iS^k} | S^k ∈ { S_i^1 ∪ S^j | S^j ∈ S_{i,t+1} } }

and apply rules 1 and 2 to delete dominated points in P'_i, obtaining the updated P_i.

After we obtain the updated P_i, we can obtain the new pseudo variables of family i.

4.3.2. Fixing y_it to zero

If we fix y_it to zero, then all subsets including t must delete this period, so their piecewise functions change, resulting in a different P_i.

Let l be the last period in S_i^1; S_i^1 stays the same, since we fixed y_il to one. The subsets used to update P_i when we fixed y_il to one were { S_i^1 ∪ S^j | S^j ∈ S_{i,l+1} } and S_{i,l+1}; the subsets used to update P_i when we fix y_it to zero are { S_i^1 ∪ S^j | S^j ∈ S_{i,t+1} } and S_{i,t+1}. Since S_{i,l+1} ⊇ S_{i,t+1}, all subsets used when fixing y_it to zero are among the subsets used when fixing y_il to one. We save the point sets from fixing y_il to one, so we need not recalculate them for the updated P_i.

4.3.3. Bounding

After we obtain the new pseudo jobs z_i1,...,z_{in'_i} from the updated P_i, the node's new upper bound can be obtained by replacing the old pseudo variables z_i1,...,z_{in'_i} of family i with the new ones in the current node's upper bound formulation:

$$\max \sum_{i=1}^{N}\sum_{j=1}^{n'_i} c'_{ij} z_{ij} \quad \text{s.t.} \quad \sum_{i=1}^{N}\sum_{j=1}^{n'_i} a'_{ij} z_{ij} \le b, \quad 0 \le z_{ij} \le 1, \; i = 1,\ldots,N; \; j = 1,\ldots,n'_i.$$

If we fix y_it to one, we reduce the available resource b by d_i, delete all old pseudo jobs z_i1,...,z_{in'_i}, replace them by the new ones, and find the new optimum. If we fix y_it to zero, we replace all old pseudo jobs by the new ones and find the new optimum. We can prove the new optimum is the upper bound of the current node by an approach similar to the one used in Appendix A.

4.3.4. Choosing a new sub-problem

When variables are fixed, two sub-problems are created. If a sub-problem's upper bound is no better than the incumbent solution, it is discarded. When its bound indicates it could contain a better solution to MCKS, we store it in a bucket.
Each bucket contains sub-problems with bounds that are about the same. Let B be the best upper bound and be the value of the current incumbent solution. If we want U INC K buckets, calculate ()UB INC K ? ?= . 73 ]UB?? , bucket ]UB Then bucket one will contain all sub-problems with upper bounds in the interval[UB two[2UB, ,? ? cket??, and bu K [, ]INC INC +? . Buckets can be updated as upper bounds or the incumbent change. When we choose a new sub-problem to explore, we take one based on LIFO from the lowest numbered, non- empty bucket. This gives an almost ?best-bound? strategy, but without the bookkeeping overhead. 4.4 d. t buted from 10, 30], [30, 50] and [50, 70]. Setup cost and time will be determined by . Computational experiments We test AMCKS on a variety of problem instances to see what problems can be solve Instances will be generated by setting five parameters at several levels. The parameters are number of families, average number of jobs in a family, proportion of setup time/cos relative to total time and cost, number of periods, and relationship between a and c . The number of families will be fixed at 10, 30 and 50. The number of periods will be fixed at 5, 10, 15 and 20. The number of jobs in a family will be integer uniformly distri [ 1 1 () i n it ijt j f ec = =? ? 2 1 () i n de a= ? iij j= We will choose and uniformly from [0.05, 0.1], [0.1, 0.15], [0.15, 0.2], and [0.2, 0.25]. and have three relationships: is uniformly chosen from [10, 10000], and is chosen from[10, 10000] and is randomly chosen from[10, 10000], and =+, is 1 2 ij also uniformly chosen from [10, 10000] (uncorrelated relationship-U); ij a is uniformly e e a c a ijt c ij t ijt ij ctee 74 n it sen from [10,100] (stron ti uni ?? randomly chosen from [0, 2000] (weak relationship-W); ij a is uniformly chosen from[10, 10000], and ijt c is randomly chosen from[ ij a -1000, ij a +1000], if ijt c is less than 10, the is randomly ch lao g re onship-S). 
Resource availability will be form from 11 11 0.4* ,0.6* , ij ij ij ij aa == == ?? ?? ii nnNN ?? ?? pper bound (UB) and the average ratio of initial solution to the Table 4.1 o e . For each level of the five factors we generate ten instances. AMCKS was coded in C and all instances were run on a Dell P.C. with 1.7G Intel processor and 512M bytes of memory. In the following tables, we report the minimum (MIN), average (AVG) and maximum (MAX) solution time in minutes. We also give the average ratio of initial solution (INC) to initial u optimal solution (OPT). Soluti n tim (minutes) with N 10= and ~ i n [10,30] U S W pe d L L L rio Setup LB/UB B/OPT MIN AVG MAX LB/UB B/OPT MIN AVG MAX LB/UB B/OPT MIN AVG MAX [0.05-0.1] 0.977 0.978 0.00 0.00 0.00 0.991 0.991 0.00 0.00 0.00 0.902 0.903 0.00 0.00 0.01 [0.1-0.15] 0.976 0.977 0.00 0.00 0.00 0.988 0.989 0.00 0.00 0.00 0.843 0.845 0.00 0.00 0.01 [0.15-0.2] 0.953 0.953 0.00 0.00 0.00 0.991 0.991 0.00 0.00 0.00 0.878 0.883 0.00 0.00 0.01 5 [0.2-0.25] 0.912 0.913 0.00 0.00 0.00 0.929 0.930 0.00 0.00 0.00 0.893 0.900 0.00 0.00 0.01 [0.05-0.1] 0.992 0.992 0.00 0.00 0.00 0.997 0.997 0.00 0.00 0.00 0.903 0.904 0.00 0.01 0.02 [0.1-0.15] 0.978 0.978 0.00 0.00 0.01 0.990 0.990 0.00 0.00 0.01 0.894 0.896 0.00 0.01 0.02 [0.15-0.2] 0.973 0.974 0.00 0.00 0.00 0.997 0.997 0.00 0.00 0.01 0.833 0.838 0.00 0.01 0.02 10 [0.2-0.25] 0.947 0.949 0.00 0.00 0.00 0.919 0.921 0.00 0.01 0.01 0.905 0.910 0.01 0.01 0.03 [0.05-0.1] 0.993 0.993 0.02 0.02 0.03 0.996 0.996 0.00 0.00 0.01 0.862 0.862 0.00 0.02 0.03 [0.1-0.15] 0.992 0.993 0.00 0.01 0.01 0.989 0.990 0.00 0.00 0.01 0.893 0.895 0.00 0.02 0.05 [0.15-0.2] 0.967 0.968 0.00 0.00 0.01 0.996 0.996 0.00 0.01 0.02 0.895 0.899 0.01 0.04 0.07 15 [0.2-0.25] 0.935 0.936 0.00 0.00 0.01 0.945 0.946 0.00 0.01 0.03 0.837 0.842 0.03 0.05 0.07 [0.05-0.1] 0.994 0.994 0.09 0.10 0.11 0.997 0.998 0.01 0.01 0.01 0.904 0.905 0.02 0.04 0.05 [0.1-0.15] 0.978 0.979 0.02 0.02 0.03 0.985 0.985 0.00 0.01 0.03 0.907 0.908 
0.00 0.04 0.08 [0.15-0.2] 0.978 0.979 0.01 0.01 0.02 0.997 0.998 0.04 0.919 0.923 0.01 0.08 0.19 20 [0.2-0.25] 0.958 0.959 0.01 0.01 0.01 0.950 0.952 0.00 0.12 0.845 0.851 0.05 0.12 0.41 0.01 0.02 0.03 75 Ta le 4.2 o e b Soluti n tim (minutes) with N 30= and ~n [30,50] i S U W Pe d L L L rio Setup LB/UB B/OPT MIN AVG MAX LB/UB B/OPT MIN AVG MAX LB/UB B/OPT MIN AVG MAX [0.05-0.1] 0.996 0.996 0.00 0.02 0.07 0.999 0.999 0.00 0.02 0.08 0.949 0.950 0.04 0.41 0.76 [0.1-0.15] 0.991 0.991 0.00 0.02 0.04 0.989 0.989 0.00 0.08 0.25 0.954 0.954 0.04 0.19 0.30 [0.15-0.2] 0.987 0.987 0.00 0.01 0.04 0.984 0.984 0.00 0.12 0.35 0.958 0.958 0.00 0.22 0.49 5 [0.2-0.25] 0.974 0.974 0.00 0.02 0.04 0.977 0.977 0.00 0.11 0.34 0.964 0.965 0.00 0.27 0.63 [0.05-0.1] 0.997 0.997 0.01 0.05 0.17 0.999 0.999 0.01 0.03 0.06 0.966 0.966 0.07 0.79 1.57 [0.1-0.15] 0.999 0.999 0.01 0.01 0.01 0.997 0.997 0.00 0.09 0.46 0.952 0.952 0.08 0.69 1.04 [0.15-0.2] 0.990 0.990 0.00 0.03 0.05 0.983 0.983 0.00 0.20 0.64 0.961 0.962 0.11 1.08 2.01 10 [0.2-0.25] 0.980 0.980 0.00 0.05 0.13 0.983 0.983 0.01 0.29 0.57 0.964 0.965 0.33 1.51 3.67 [0.05-0.1] 0.997 0.997 0.06 0.09 0.18 0.999 0.999 0.00 0.05 0.21 0.972 0.972 0.24 0.97 1.92 [0.1-0.15] 0.996 0.996 0.02 0.04 0.09 0.996 0.996 0.01 0.15 0.44 0.970 0.971 0.12 0.89 1.46 [0.15-0.2] 0.982 0.982 0.01 0.08 0.18 0.990 0.990 0.01 0.28 1.25 0.962 0.962 0.06 1.69 2.74 15 [0.2-0.25] 0.974 0.974 0.05 0.09 0.15 0.983 0.983 0.00 0.88 5.57 0.959 0.959 0.01 2.86 6.17 [0.05-0.1] 0.997 0.997 0.87 1.74 3.68 0.999 0.999 0.03 0.10 0.34 0.982 0.982 0.05 0.97 2.10 [0.1-0.15] 0.999 0.999 0.07 0.08 0.09 0.997 0.997 0.02 0.09 0.16 0.971 0.971 0.12 1.75 5.91 .2] 0.987 0.987 0.03 0.08 0.17 0.994 0.994 0.02 0.16 0.49 0.975 0.976 0.22 2.18 8.73 0.26 0.977 0.977 0.11 1.28 7.92 0.942 0.943 1.18 12.42 51.60 [0.15-0 20 [0.2-0.25] 0.977 0.977 0.03 0.14 Table 4.3 oSoluti n time (minutes) with N 50= and ~n [50,70] i U W S Period L L L Setup LB/UB B/OPT MIN AVG MAX LB/UB B/OPT MIN AVG MAX LB/UB 
B/OPT MIN AVG MAX [0.05-0.1] 0.997 0.997 0.01 0.15 0.82 1.000 1.000 0.00 0.09 0.34 0.979 0.979 0.05 2.12 4.25 [0.1-0.15] 0.997 0.997 0.00 0.07 0.51 0.995 0.995 0.01 0.39 0.93 0.980 0.980 0.21 1.64 3.70 [0.15-0.2] 0.991 0.991 0.00 0.14 0.36 1.000 1.000 0.10 1.74 4.49 0.984 0.984 0.03 1.03 3.43 5 [0.2-0.25] 0.987 0.987 0.00 0.13 0.36 0.983 0.983 0.08 0.92 1.76 0.980 0.980 0.30 3.09 5.78 [0.05-0.1] 0.998 0.998 0.03 0.39 1.88 1.000 1.000 0.02 0.17 0.52 0.988 0.988 0.02 3.10 6.99 [0.1-0.15] 1.000 1.000 0.01 0.04 0.09 0.999 0.999 0.02 0.29 1.26 0.976 0.976 0.12 4.46 8.44 [0.15-0.2] 0.991 0.991 0.02 0.16 0.48 1.000 1.000 0.02 2.00 4.56 0.970 0.970 0.86 9.72 19.66 10 [0.2-0.25] 0.985 0.985 0.01 0.42 1.41 0.986 0.986 0.21 2.65 6.05 0.971 0.971 4.23 16.20 28.67 [0.05-0.1] 0.997 0.997 0.20 0.80 2.10 1.000 1.000 0.02 0.14 0.31 0.984 0.984 0.36 7.29 22.69 [0.1-0.15] 0.998 0.998 0.05 0.13 0.33 0.997 0.997 0.05 0.95 3.95 0.979 0.979 0.06 8.33 17.03 [0.15-0.2] 0.993 0.993 0.03 0.35 0.96 1.000 1.000 0.01 2.24 5.83 0.969 0.970 0.06 17.70 35.68 15 [0.2-0.25] 0.987 0.987 0.02 0.43 1.01 0.990 0.990 0.04 1.89 3.75 0.976 0.976 0.07 30.40 82.01 [0.05-0.1] 0.997 0.997 19.50 29.00 44.47 1.000 1.000 0.08 0.26 0.53 0.983 0.983 0.11 8.88 27.13 [0.1-0.15] 0.998 0.998 0.17 0.45 1.35 0.998 0.998 0.03 0.41 2.71 0.981 0.981 0.44 13.60 29.35 [0.15-0.2] 0.991 0.991 0.13 0.52 1.08 1.000 1.000 0.12 2.08 7.84 0.973 0.973 3.15 34.10 125.01 2 .54 0 [0.2-0.25] 0.990 0.990 0.09 0.64 1.53 0.991 0.991 0.03 5.41 21.96 0.978 0.979 1.34 41.10 95 76 tion are more difficult than large setup proportion. The difference between 5% setup and 10% setup is apparent, but there is little difference between 10% setup and 15% setup. Fig. 4.1 and Fig. 4.2 show when coefficients are uncorrelated, instances with small setup propor N=50;T=15 0 5 10 15 20 25 30 35 [0.05-0.1] [0.1-0.15] [0.15-0.2] [0.2-0.25] Setup proportion U W S Time (Minute) ~ [50,70] i n Fig. 4.1. 
Solution Time for 50NTand, 15= = 50 families, 20 periods 0 5 10 20 25 35 40 45 [0.05-0.1] [0.1-0.15] 15 30 [0.15-0.2] [0.2-0.25] setup proportion U W S Time (Minute) Fig. 4.2. Solution Time for 50, 20NT= = and ~ [50,70] i n The dominance rules are more effective when setup proportion is large and when a and c are uncorrelated. If the setup proportion is small, jobs are more often assigned to multiple periods; the dominance rules are not as effective as for instances with large setup proportion. But when s correlated over different periods, there is not c i 77 eases, instances wit having weak relationship and str m t and nce the lower bo n rd knapsack pro d on MCKS r use up before the first break point of the piecewise function. If e fractional value is the first pseudo variable of some family, all variables corresponded as good. lso c mp C X ll M . W s rd instances fo M 0 ds m 90] i n om Tr ns much difference in assigning a job to a particular period. Thus AMCKS can easily solve an instance with small setup proportion. When setup proportion incr h ,ac ong relationship beco e harder. By the central limit theorem, the total setup cos time follow a normal distribution. Under the weak and strong relationships, the correlation of setups in different periods increases. With setup proportion increasing, setup has more effect on the optimal solution and differences in periods decrease. He dominance rules are not as effective in this case. The other possible reason is und. With setup proportion increasing, the lower bound becomes worse so we can not fathom nodes as effectively. Instances with a and c correlated are more difficult. The piecewise function for different periods become flat, and the computation for the composite piecewise functio becomes complex. The knapsack problem when all setup variables fixed is also a ha problem; we did not use a special algorithm to deal with correlation in the blem. Some improvement can be expected if a special algorithm is used. 
We use the rounded solution as a lower boun , which is very effective. Fo instances with 30N = and 50N = , the lower bound is at least 95% of the optimal. When 10N = , it is worse since there are fewer points on every piecewise function, so the resource is much easier to th to this pseudo variable are rounded to zero, thus the lower bound is not We a o are AMCKS to PLE 9.1 (ca ed by A PL) e choo e the ha est r A CKS (2 perio , 50 fa ilies, ~ [80, ) to c pare. ial ru on 78 other instan lts are ty on CPLEX to iff of solving with P l ins s p el so ab sho e c superiority of AMCKS. Table 4.4 The solution m e pa et A S a L S ces showed these resu pical . Due the d iculty C LEX, on y five tance er lev were lved. T le 4 ws th lear ti e (minut ) com rison b ween MCK nd CP EX U W Setup AMCKS C/A C AMCKS C/A CPLEX PLEX CPLEX AMCKS C/A 1 > 120.00 39.82 3.01 4.51 1.31 3.45 5.12 0.77 6.65 2 > > > [0.05-0.1] >12 .00 14 3 4. 2 0. 5. 1. 7 AVG > 5 1 120.00 15.39 7.80 5.53 0.15 37.58 5.90 8.46 0.70 3 120.00 31.31 3.83 5.05 0.16 31.06 13.68 15.05 0.91 4 120.00 19.23 6.24 4.82 2.58 1.87 4.19 1.60 2.62 5 0 .3 8.37 4 08 58.28 07 4 3.45 5.85 26.45 2.87 1 120.00 0.21 70.34 7.79 0.04 90.61 5.03 1.79 2.82 2 > 5 > 1 [0.1-0.15] 120.00 0.17 8. 3 0. 13 3 1. 9 VG 587.84 44.91 7.00 4 0. 
120.00 0.24 06.97 7.82 1.03 7.57 82.07 8.39 9.78 3 120.00 0.22 545.99 8.79 0.86 10.19 5.24 1.00 5.22 4 17.11 0.20 590.17 8.59 5.23 1.64 >120.00 13.56 8.85 5 > 725.73 5 59 14.56 .2 5 8.33 A 1 50.85 0.09 566.01 7.47 0.13 56.14 12.68 17.1 74 2 7.41 0.08 91.82 7.10 0.19 38.35 >120.00 31.1 .86 3 15.84 0.12 129.98 9.22 0.32 28.91 >120.00 33.79 3.55 4 15.52 0.16 97.24 8.92 4.89 1.82 29.80 8.90 3.35 [0.15-0.2] 5 average 180.83 30.60 4.83 1 8.70 0.32 27.41 6.87 0.15 47.39 >120.00 28.72 0 3 17.36 0.91 19.08 9.18 0.33 27.77 32.76 2.59 12.63 4.18 2 7.59 0.84 8.99 7.11 12.00 0.59 >120.00 17.92 3 4 7.08 0.20 34.63 6.29 4.88 1.29 >120.00 48.74 2.46 5 average 35.60 27.20 4.07 6.70 8.53 2.23 3.82 6.89 0.18 39.29 >120.00 47.09 2.55 [0.2-0.25] 7.44 0.07 103.13 6.87 0.14 47.42 >120.00 26.83 4.47 Whe and are uncorrelated, AMCKS is much better than CPLEX. When and are correlated, AMCKS is still better than CPLEX except for instances with 5% setup and both solvers take longer for instances with larger setup than those with smaller setup. AMCKS solves problems with 5%-10% setup in about one-third hour; when setup proportion is over 10%, AMCKS takes less than one hour. For problems with 10%, 15% n a c a c 79 CPLEX failed to solve many instances in two hours. Though CPLEX can e od f for roportion has more effect on instances with uncorrelated relationship instances than other instances. In this paper, we only use a simple branch- fixed. If a gorithm, e.g. the one developed by Martello et al. (1999) is used, the solution tim Appendix A. The optimal objective of is an upper bound on MCKS. If the optimal solution of MCKS is known, then we obtain the sets in the optimal solution of MCKS and the resource taken by and 20% setups, obtain a near optimal solution, it can?t prove it is optimal in two hours, which often happens in many algorithms for integer programming. 4.5. Conclusion MCKS can be used for project selection for a country or company. 
In this paper, w use a branch-and-bound algorithm to solve the multiple-choice knapsack problem with setup. A linear knapsack problem is designed to give an upper bound on MCKS. We develop three dominance rules to simplify the process and save time to obtain an upper bound model. The rounded solution of the linear knapsack problem provides a go incumbent for MCKS. For instances with N greater than 30, the heuristic is over 95% o the optimal solution. Computational experiments show the algorithm?s effectiveness. Compared to CPLEX, the proposed algorithm obtains the optimal solution in less time most instances. Setup p and-bound algorithm for the knapsack problem when all setup variables are better al e can be reduced. Proof. u LKP { | 1} iit Sty==, 1, ..iN= 80 fam s e profit 1ilyi is i w as well a contribut d , ,.. i prof it i N= with i wb? ? and th optimal objective N i profit ? . 1 N i= e = For the period set , there is cct 1i i i ijS ijt i S max{ , }S, i i iS it tS f f ? = ? , and i iS i i dS= d. Using = ? pseudo variables ij x? and , formulate a linear knapsack problem with setup i y? i S LKP by these coefficients: , 1, .. , 01. i kk i k n ijS ij iS i j n ij ij iS i i j ij i i i Max c x f y ax d y w xyj n y = = ??+ ??+? ???= ? ? ? Since we know the optimal solution of MCKS, se t 1 1 , ..st 01,., ij i xjn?= ??? t 1 T ii t yy = ?= ? and 1 T ij ijt t x x = ? = ? 1, .. i j n= , so that i y?and ij x? are a feasible solution of i S LK know 1 1 ii ijS ij ijS i jjt cx c x === ? ??? 1 it t P . We 1 i nn nTT jt ijt ijt tj cx = ? ? and 1 ii i T iS i it f yf? n y = = ? , thus 1 kk ijS ij iS i i j c x f y profit = ?? i +? Because ? . () i iS i F w is the optimal objective of i S LKP , then () i iS i i F wprof? t. The linear knapsack problem obtained from is ( ) ii Fw 81 1 ' 1 ' i ij ij j n ij j Max c z a = = ? ? D ne its solution for this problem ' ' .. i n st ' 01,1. ij i ij i z w zjn ? ?? = efi is i Z . Repeating this process f i or all families, i 1 N Z = fe .. 01,1,.1,. 
i ij ij j nN ij ij ij i Max c z st az b ziNjn = ? ??= = ?? ?? easible solution is ? is a asible solution for u LKP ' i nN i= ' ' 11 ' 11 ' ij== and its objective of this f () ii i N F w ich is les than the optimal ective of LKP e i iS i i F w profit??, the i= Appendix B. The rounded solution of corresponds to a feasible solution of MCKS in the brea and the corresponding profit obtained by the family is . For all pseudo variables n ? wh s obj u . Sinc ( i F w n i profit ? is less than the optimal objective of u LKP . ) () i 1 N u LKP Proof. k solution isAssume resource taken by familyi i w i obj 1 ,.. i ii zz ? , there is 1 1 ij ij ij ij cc aa + + ?? ? . 82 If for al then as g rounded solution, and k, then the coefficients of z comes from si n all variables of familyi to zero. If 1 ik z = in the k 0 ij z = l ' 1, .. i jn= , 0 ij zj=> i k p and 1k p ? in point set by xand y i P 1 .. ik k k apxp ? ? =? 1 .. ik k k cpyp ? ? = ? . On the piecewise function i F ,. ki p xw= and . ki p y obj= . Assume this point k p comes from the break points of c wise functiopie e n k iS F . k iS F corre ponds to a lines ar knapsack problem , but k S LKP this k S LKP is the transformation of a linear knapsack problem with setup LKP k S S iS i n ij ij iS i j ij i i ij i Max c x f y st ax d y b xyj n xjn y = ??+ ??+? ???= ? ?= ??? ? ? Based on Bulfin?s algorithm for linear knapsack problem with setup, the variables 1 ijS ij j= 0 1 , .. , 1.. , 01., 01. i kk i k n i k S LKPS can be separated into two set 1 {,. } iit XMxx? ?= and } 1 {,. i it in XTx x + ??= . If is the reak point of ij zin first b k iS F , then set 1 1,.. ik kt? = = andx 1 1,.. ik i x kt n? = =+ ; else set 1 1,... 1 ik x kjt? == +?. If 1 ik x? = , let 1 ijr x = and 0 ir y = if Smax{ | } ijr ijt k cct= ? ; else set 0, ijr k x rS=?. Then and 1 i kk n ijt ijt it it i tS j tS c x f y obj ?= ? += ?? ? 1 i kk n ij ijt i it i tS j tS ax dy w ?= ? + = ?? ? asible integer solution of MCK . 
Repeat this process to all families, and we can obtain a fe S with e rounded solution?s objective ofthe objective the same as th u . LKP 83 Appendix C. Three Dominance rules Let us introduce two notations: ' 1 0 1 .. () .. jj jj pypy Fb p xpx + + ? = ? , : The derivative function of piecewise eak points o and 0 0b ? function F . , 1,... j pj n= are n br f F 0 (0,0)p = . 1 .. jj px b p x + ?< . Ifbp> 0 . n x, then 0 () 0Fb? = . {| 1,. , (0)} k k ijS ki ij c Jjjn F a ?== ? iS : The job set to form the first line between 0 p and 1 p of k iS F ., 1,. k Sk K= are all subsets of for family Before proving Dominance rule 1, 2 and 3, we need the following Lemma: Lemma 1. Le be three different subsets of for famil , and l {1, . . }T i . t , , rkl SSS {1, . . }T yi rk SSS?=, rk SS??=. If , then(0) (0) rk iS iS FF??> (0) (0) rl iS iS FF? ?? . Proof. Case 1: If there is () lrs j JJJ?? ? (1) If , then lr ijS ijS cc= (0) l l ijS iS ij a c F?? , but (0) l r ijS iS ij c F a ?< , thus l i iS FF(0) (0) r S ? ?> . (2) If , then lk ijS ijS cc= (0) l l ijS iS ij c F a ?? , but (0) k k ijS iS ij c F a ?< , thus (0) (0) (0) lkr iS iS iS FFF? ??< < . lr JJJ?? s , then define ijS , l l ijS Jcc= {| , } lr r llijS JjjJcc=? = Case 2: If an {| } k k l ijS Jjj=? thend [( )] r k iS ijS fc++ ? ) ( (0) ()) rk rk ll rk rk ll iS ijS jJ jJ iS iS iS ij ij jJ jJ fc F dd a a ?? ?? + ? = ++ + ? ?? . Since l 84 () rr iS fc+ (0) () r l r r r l ijS jJ iS iS ij jJ F da ? ? ?? + ? in the same way, ? () () k k k l iS ijS iS jJ fc da ? + + ? ? (0) k k l k jJ iS ij F ? ?? . Therefore, r (0) max{ (0), (0)} (0) lrk iS iS iS iS FFFF? ?? ?? = Le ma 2. 1,m If .. iit dftT===,and there are two sets and with ) ( rk iS iS Fb? and 0, 0, r S k S rk SS? , then ( )Fb 00 '' 000 1 () (),0 i rk n iS iS ij j Fb Fb b a = ??? ? . Proof. A: 01,1,. i rr r n ijS ijS j n ijS i Max c x Consider knapsack problem 1 ..st 0 1 , r ij ijS j ax b x jn = ? ??= ? 
is the optimal objective of the linear knapsack problem A with right-hand side Consider the knapsack problem B: = ? 0 () r iS Fb 0 b . 85 01,1,. kk i k k ijS ijS n ij ijS j ijS i ax b i n Max c x 1 .. j st = ? 0 1 x jn = ? ??= ? 0 () k iS F b is the op imal linear knapsack problem B with right- sid 0 b .Since rk SS? , then rk S ij c t objective of the hand S ce ij ? a s le space is same with B?s, iS nd A?s fea ib thus iS F 00 () () rk bFb? . set Let 0 {| 1,. , ( )} r r ijS c Jj Fb?? riiS ij j n a == min{ , } rr r r ij S ijS cc jJ?=?then? and r ij ij aa 0 ( rr r r ij S iS ij Fb a ? ; set ) c = 0 {| 1,. , ( )} k ijS c Jjjn Fb??== ? k kiiS a and ij min{ , } kk k k ij S ijS k ij ij cc jJ aa ?=?then 0 () k ij iS Fb? kk k S ij c a = . Then '' rk J J? . Case 1: If JJ??= , kr if rk j j= , then since , thus ; if rr kk ij S ij S cc? '' 00 () () rk iS iS Fb Fb? rk j j? , then kk kr rr kk ij S ij S ij S ij ij ij ccc aa ?? r a , thus Case 2: If , '' 00 () () rk iS iS Fb Fb? . kr JJ??? 86 jo 00 () () kr rk ijSijS iS iS ij ij cc Fb Fb aa ?????b r j J?? but k j J?there is a , thus? . , letf rk SS? r p k pLemma 3. I and be the first points except (0, 0) on piecewise functions r iS F , and k iS F respectivel max{ . , }pxp= . Then forbb> , '' y. Define . rs b x Proof. If 1 01 00 () () rk iS iS Fb Fb? . 0 1 * i n ki i j bSd a = ?+ ? j , then '' 00 () () 0 rk iS iS Fb Fb= = . If 0 11 r i ij k i ij jj== 0 r iS ** ii nn Sd ab Sd a+?< + ?? , then ()Fb ' 0= , ' 0 () 0Fb> . If k iS 0 1 * i n ri j bSd a = <+ ? ij , then assume ' 0 () rr r r ij S iS ij c Fb a = , ' ij S c 0 () kk k k iS ij Fb a = . Since ** ri k Sd Sd< i , then 00 ** ri k bSdbSd?>? i . Then for the linear knapsack problem 1 0 1 .. *, i? 01,1,. i rr r r n ijS ijS j n ij ijS r j ijS i c x st ax b S d xjn = = ?? ??= ? and its piecewise function Max r iS F , there is 0 (*) rr r r ij S iS r i ij c Fb S d a ? ?=. For the linear knapsack problem 1 0 1 .. * 01,1,. k ijS i b S d xjn ? 
?= a i kk i k n ijS ijS j n ij ijS k i j Max c x st ax = = ? ? ? ? nd its piecewise function k iS F , there is 0 (*) kr k k ij S iS k i ij c Fb S d a ? ?=. Based on lemma 2, there is 00 (*)(* rk iS r i iS r i Fb S d Fb S d? ???). Since piecewise function is concave function, then 00 (*)(* kk iS r i iS k i Fb S d Fb S d???. ) Therefore kkrr rk ij Sij S ij ij cc aa ? , so ( k iS F F Lemma 4. Assume . If '' 00 () ) r iS b b? . rk SS? (0) (0) rk iS iS FF? ?> , then there is at most one intersection of and except (0,0); if r k iS F iS F (0) (0) rk iS iS FF? ?? , then dominates . Pro k S r S of. Case 1: (0) (0) rk iS iS FF??> Assume 87 r p k p are the first points respectively on piecewise function r iS F and k iS Fand . If there are two intersections 12 ,p p of r iS F and , then there k iS F is 11 (.) (.) rk iS iS F px F px??< and 22 (.) (.) rk iS iS F px F px? ?> . Since r iS F and k iS F are concave piecewise functions, there is 2 .max{.,.} rk p xpxpx> . But based on lemma 3, 22 (.) (.) rk iS iS Fpx Fpx??> can not be true. Therefore, there is no second intersection 2 p . Case 2: (0) (0) rk iS iS FF??< . 88 If . . rk p xpx? , there is no intersectio between (0n p , . k p x ). If there is an intersection outside of (0, . k p x ), then there is 11 (.) (.) rk iS iS F px F px? ?> that is impossible based on lemma 3. If . . rk p xpx> : Define 1 rk JJJ=?. Since n,1,. rk ijS ijS i ccj? = , then kr ijSijS ij ij cc aa ? . We have 1 1 1 (. ) (.) kk k k r k k iS ijS jJ J iS k ij iS k jJ iS ij jJ J fc Fpx a Fpx da ?? ? ?? + ??+? ? + ? ? ? 1 . kijr jJ and p xap ? +? ? x, thus 00 () () kr iS iS F bFb> for x 0 (0, . ) r bp? . If there is an intersection p outside of (0, . ) r p x , then that is impossible based on lemma 3. Case 3: If (.) (.) rk iS iS Fpx Fpx??> (0) (0) rk iS iS FF??= . . rk p xpx? , 000 () (), (0, .) rk iS iS r Fb Fb b px=?, and 000 () (), ( ., .) rk iS iS r k F bFbbpxpx .. rk p xpx> can not happen since if there is 1 rk J JJ= ? 
, then 1 (0), k r k ijS ijS iS ij ij c c Fj aa ??? ?J, thus should be included into . Then 1 J k J . r p x can not be larger than . k p x . 89 Dominance rule 3. If , and k , then dominates Proof. For , there is , thus rk SS? 11 ii rr k nn ijS iS ijS iS jj cf cf == +> + ?? r S k S . 0 1 i r n ij iS j bad = =+ ? 00 () () rk iS iS Fb Fb> (0) (0) rk iS iS FF? ?> based on case 2 of lemma 4. Based on case 1 of lemma 4, there is at most one intersection p with (.) (.)Fpx Fpx??< , 0 . rk iS iS p xb< , thus 00 () ()Fb Fb> can not be true. Thus there is no intersection of r rk iS iS iS F and k iS F , and dominates rk r k set l S with , rl kl SS SS ir S ik S . Dominance rule 4. If and dominates , then for another SS? S S ? ??= ?=, rl SS? dominates kl SS? . Proof. Define variabl ijS ijS ijS ccc?=? k e rlr , kl ijS ijS ijS ccc? =?. Since rk ijS ijS cc? , then k ijS ijS ijS ijS rl r r r jjj=== rk ijS ijS cc???,and 11 ( | 0) ( | 0) nn jj cc cc == ??>???> ?? . We know: (| 0) nnn ijS S ijS ijS ijS cccc ? =+??> ??? 111 (| 0) kl k k k ijS S ijS ijS ijS jjj cccc ? === =+??> ??? Since rl rl iS S it tS S rk 111 nnn f f ? ?? = ? > kl kl iS S it tS S f f ? ?? = ? , we can nn obtain ijS S iS S ijS S iS S jj cf cf ?? ?? == +? + ?? . Because l , 11 rl rl kl kl rl k SSSS??? then r S dominates k S by dominance rule 3. Dominance rule 5. If rk SS? , 0ff rk iS iS = = , 0dd rk iS iS = = then for another period set ith ,SS SS l S w 90 lr lk ? ??= ?=, dominates lk SS? rl SS? . Proof. Since ff==, dd rk rk iS iS rk iS iS SS? , 0 0= = , then , rl kl rl kl iSS iSS iSS iSS ffdd ???? ==, ij a keeps same, and rl kl ijS S ijS S cc ?? ? , then for any resource 0 b there is 00 () () rl kl iS S iS S FbFb ?? ? , thus lk SS? dominates rl SS? . Re European Journal of Operational Research 170, 363-375 Armstrong R.D, Kung D.S., Sinha P., Zoltners A.A. 1983. A computation study of a , 9, 184-198. Bulfin, R. L. 1988. 
An algorithm for the continous, variable upper bound knapsack 277 Martello, S., Pisinger, D., Toth, P. 1999. Dynamic programming and strong bounds for . European Journal of Operational Research 83, 394-410. Sarin S., Karwan MH. 1989. The linear multiple choice knapsack problem?, Operations ferences Akinc, U. 2004. Approximate and exact algorithm for the fixed-charge knapsack problem, multiple-choice knapsack problem, ACM Transactions on Mathematical Software problem, OPSEARCH 25 (2), 119-125. Dantzig, G.B. 1957. Discrete variable extremum problems, Operation Research 5, 266- the 0-1 Knapsack Problem. Management Science 45 (3), 414-424. Pisinger, D. 1995. A minimal algorithm for the multiple-choice knapsack problem Research Letters, 8, 95-100. 91 This research investigated three integer programming models which can be applied to order acceptance in make-to-order production and regional project selection in multiple periods: the knapsack problem with setup (KPS), the multiple knapsack problem with setup (MKPS) and the multiple-choice knapsack problem with setup (MCKS). The common characteristics of all three models are: jobs belong to different families; setup time and setup costs are incurred if a job is processed; if two jobs from the same family are processed sequentially, no setup is required; resource is limited and some jobs can be selected to be manufactured. The objective is to maximize the sum of profits of processed jobs. KPS can be used in order acceptance of single period. The model selects the jobs to be processed for maximizing the total profit. MKPS, as an extension of KPS, is used in order acceptance of multiple periods. Besides selecting the jobs to be processed, it also decides the periods which the selected jobs are arranged in. Jobs? coefficients vary in different periods, but the processing time stays the same. In MKPS, jobs? profits affect job?s production schedule and the chosen schedules decide the job?s profit. 
The two factors are balanced by maximizing the total profit under a resource limit. MCKS is applied to regional projects selection in multiple periods, and it can also be used in order acceptance of multiple periods with a non-renewable resource. ?. CONCLUSIONS Branch-and-bound algorithm is used to obtain the optimal solution for all three models. The success of the algorithm relies on the effectiveness of the upper bound and lower bound in branching and the effort to obtain them. Unlike the usual approaches of relaxing 92 some constraints of a formulation to obtain an upper bound, we design a linear knapsack As the simplest among the three models, KPS can be viewed as a special case of the oth near relaxation to a linear knapsack problem. We show a linear knapsack problem co , and the concave piecewise function defines Multiple-choice constraints are on the setup variables in MKPS to guarantee the jobs mily be processed in a single period. In MCKS, multiple-choice constraints pproaches to obtain the linear knapsack problems which give the upper bounds on MK e obtain a concave piecewise function for each family seudo variables as well as their profit and processing coefficients are defined from these pie truct the linear knapsack omplex than those in MKPS. We develop three dominance rules to simplify it. KPS or MCKS is rounded to CKS. A greedy algorithm is developed to obtain a lower bound on MKPS. problem for each model, and its LP solution is the upper bound on the model. er two. Bulfin (1988) gave an algorithm for its linear relaxation, which transforms the li rresponds to a concave piecewise function the variables as well as their coefficients in the linear knapsack problem. of the same fa are on the job variables so that a family?s jobs can be processed in multiple periods. A PS and MCKS are similar. W with the help of two dominance rules for linear multiple-choice knapsack problem. P cewise functions. We use these pseudo variables to cons problem. 
The process to obtain the concave piecewise functions in MCKS is more c If the LP solution of the linear knapsack problem for integers, we obtain an integer solution that corresponds to an incumbent of KPS or M 93 fter ll setup variables are fixed, the problem change to a (several) knapsack problem(s). A sim e these knapsack problems. The e algorithms for all three models arrive at the optimal solution in less time for most ins Branching is done in two stages. The first stage is to branch on setup variables. A a ple branch-and-bound algorithm is used to solv computational experiments show these algorithms? effectiveness. Compared to CPLEX, th tances. 94 BIBLIOGRAPHY Akinc, U. 2004. Approximate and exact algorithm for the fixed-charge knapsack problem, European Journal of Operational Research 170, 363-375. Armstrong R.D., Kung D.S., Sinha P., Zoltners A.A. 1983. A computation study of a multiple-choice knapsack problem, ACM Transactions on Mathematical Software, 9, 184-198. Bulfin, R. L. 1988. An algorithm for the continuous variable upper bound knapsack problem, OPSEARCH 25 (2), 119-125. Chajakis, E.D., Guignard, M. 1994. Exact algorithms for the setup knapsack problem, INFOR 32 (3), 124-142. Dantzig, G.B. 1957. Discrete variable extremum problems, Operation Research 5, 266-277. Dudzinski, K., Walukiewicz, S. 1987. Exact methods for the knapsack problem and its generalizations. European Journal of Operational Research 28, 3-21. Ham, I., Hitomi, K., Yoshida, T. 1985 Group Techonology, Kluwer Nijhoff Publishing, Boston, Massachusetts. Martello, S., Pisinger, D., Toth, P. 1999. Dynamic programming and strong bounds for the 0-1 Knapsack Problem. Management Science 45 (3), 414-424. Martello, S., Toth, P. 1980. Solution of the zero-one multiple knapsack problem, European Journal of Operational Research 4, 276-283. Martello, S., Toth, P. 1981. A bound and bound algorithm for the zero-one multiple knapsack problem. Discrete Applied Mathematics 3, 275-288. 
Martello S., Toth, P. 1990. Knapsack Problems: Algorithms and Computer Implementations, John Wiley and Sons, New York. Parker, R. G., Rardin, R. L. 1988. Discrete Optimization. Academic Press, Inc. San Diego, CA. 95 Pisinger, D. 1995. A minimal algorithm for the multiple-choice knapsack problem. European Journal of Operational Research 83, 394-410. Pisinger, D. 1999. An exact algorithm for large multiple knapsack problems. European Journal of Operational Research 114, 528-541. Sarin S., Karwan MH. 1989. The linear multiple choice knapsack problem?, Operations Research Letters, 8, 95-100 Sinha, A., Zoltners, A.A. 1979. The multiple-choice knapsack problem, Operations Research 27, 503-515.