You haven't specified what it prints when it fails (i.e., which error message).

I had tried it much more than 10 times and all worked OK for me. In fact, I've just checked it for the whole 1000.

Ewing wrote:

However, in this case I believe the problem is actually convex by construction: the N-by-N matrix in my objective function (currentSigma) is a covariance matrix of random normal variables and should always be positive definite

You have aversion = 1 > 0, hence the objective function

float(np.dot(x_.T,mu) - aversion/2 * np.dot(x_.T, np.dot(sigma, x_)))

is definitely concave for positive-definite sigma.
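A quick numerical illustration of this point (a standalone sketch, not code from the thread): the Hessian of that objective is -aversion * sigma, so every eigenvalue is negative whenever sigma is positive definite:

```python
import numpy as np

# Build a positive-definite sigma the same way the script does:
# the sample covariance of random normal returns.
rng = np.random.RandomState(0)
returns = rng.randn(1000, 5)
sigma = np.cov(returns.T)
aversion = 1.0

# Hessian of  mu'x - aversion/2 * x' sigma x  is  -aversion * sigma.
hessian = -aversion * sigma
eigenvalues = np.linalg.eigvalsh(hessian)
print(eigenvalues.max() < 0)  # True: all eigenvalues negative, hence concave
```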

Ewing,

If you are trying to trace the Efficient Frontier, the fastest approach (by far) is the Critical Line Algorithm, which starts from the portfolio of maximum return (compatible with the constraints) and then finds the sequence of "corner portfolios" where individual constraints become active or inactive. The first step in general requires solving an LP problem, but for "box-type" constraints there are simpler alternatives. You can read Bill Sharpe's description of the algorithm at http://www.stanford.edu/~wfsharpe/mia/opt/mia_opt3.htm , and gauge its speed from my Interactive Asset Allocator at http://www.personalquant.com/Portfolio/ , where all the calculations are performed in JavaScript on the client side (Firefox 3.6, Google Chrome, or MSIE9 are recommended for best performance of the charting routines).

Some very Python-like pseudocode can be found at the end of the paper by A. & D. Niedermayer at http://papers.ssrn.com/sol3/papers.cfm? … _id=894842 .
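As a rough illustration of that box-constraint shortcut (a sketch with invented names, not code from Sharpe's page or the Niedermayer paper): under bounds lb <= x <= ub with sum(x) = 1, the maximum-return portfolio can be found by a greedy fill, no LP solver needed:

```python
import numpy as np

def max_return_portfolio(mu, lb, ub):
    """Greedy solution of max mu'x subject to lb <= x <= ub, sum(x) = 1.

    Start every asset at its lower bound, then push assets to their upper
    bound in order of decreasing expected return until the budget is spent.
    """
    mu, lb, ub = (np.asarray(a, dtype=float) for a in (mu, lb, ub))
    x = lb.copy()
    budget = 1.0 - lb.sum()           # weight still to be allocated
    for i in np.argsort(-mu):         # highest expected return first
        take = min(ub[i] - lb[i], budget)
        x[i] += take
        budget -= take
        if budget <= 0:
            break
    return x

mu = np.array([0.02, 0.08, 0.05, 0.01])
w = max_return_portfolio(mu, lb=np.zeros(4), ub=np.full(4, 0.4))
print(w)  # approximately [0.2, 0.4, 0.4, 0.0]
```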


Thank you for your response. I will definitely check out FuncDesigner.

Were you able to run all 1000 optimizations? It fails for me after a random number of opts: sometimes after more than 10, sometimes on the very first one.

To give you more context, please allow me to describe the problem a little bit more qualitatively:

In the work that I do, it's quite common to run a series of path-dependent optimizations to simulate the effects of various constraints and changes to the problem parameters. Common variations include components of the objective function that depend on the previous solution, hence I included a step where the initial guess is the previous weights. In a more elaborate problem, the optimal solution is fed back into some part of the objective function and used in the next optimization.
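A minimal sketch of that feedback loop (my own toy illustration, not the actual production setup): with an unconstrained mean-variance objective plus a quadratic turnover penalty lam/2 * ||x - x_prev||^2, each period has the closed-form solution x = (aversion*Sigma + lam*I)^(-1) (mu + lam*x_prev), so the previous weights flow directly into the next objective:

```python
import numpy as np

rng = np.random.RandomState(42)
n_assets, aversion, lam = 5, 1.0, 2.0
x_prev = np.zeros(n_assets)            # start from the zero portfolio

for period in range(3):
    returns = rng.randn(500, n_assets) * 0.01
    mu = returns.mean(axis=0)
    sigma = np.cov(returns.T)
    # Maximize  mu'x - aversion/2 x'Sigma x - lam/2 ||x - x_prev||^2;
    # setting the gradient to zero gives a linear system in x.
    x = np.linalg.solve(aversion * sigma + lam * np.eye(n_assets),
                        mu + lam * x_prev)
    x_prev = x                         # feed the solution into the next period
```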

In this case, the problem is very simple and, as you correctly pointed out, a prime candidate for more straightforward QP solvers. However, because the simulations I run typically add more components to the objective function (some of which are not necessarily linear or quadratic), along with many different types of constraints, I typically default to the more general numerical routines such as NLP/NSP, CP in CVXOPT, and fmincon in MATLAB.

You are certainly right that if the problem is not guaranteed convex, I should not expect guaranteed convergence. However, in this case I believe the problem is actually convex by construction: the N-by-N matrix in my objective function (currentSigma) is a covariance matrix of random normal variables and should always be positive definite. To lend more evidence, this simulation runs completely with CVXOPT's QP and CP solvers as well as with NLP v0.28. Strangely, I can't get it to run fully in v0.29 or later.
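That positive-definiteness claim is easy to sanity-check at run time (a standalone sketch mirroring the script's construction of currentSigma; with 1000 samples of 25 assets the sample covariance is positive definite almost surely, and np.linalg.cholesky raises LinAlgError otherwise):

```python
import numpy as np

rng = np.random.RandomState(1)
assetReturns = rng.randn(1000, 25) * 0.5 * 0.05 / 252 ** 0.5
currentSigma = np.cov(assetReturns.T) * 21

# Cholesky factorization succeeds only for positive-definite matrices,
# so it doubles as a cheap PD test.
np.linalg.cholesky(currentSigma)
print(np.linalg.eigvalsh(currentSigma).min() > 0)  # True
```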

If you have no problems running it from beginning to end, then it certainly sounds like a problem on my end.

Thank you again for your help/advice in this matter.

Dmitrey wrote:

Hi Ewing,

You are using random numbers, so I can't state anything about general ralg convergence on your probs (by the way, they are non-convex, so convergence is not guaranteed).

I can't reproduce the whole output for numOpt = 1000, but everything works OK for me; see the output for 10 below (however, I don't understand why you decided to use x_init = problem.xf.reshape(-1, 1) in the code involved).

Also, you could just use another NLP/NSP/GLP solver for your probs. BTW, your code could easily be rewritten in FuncDesigner and you would not have to supply derivatives yourself.

HTH, D.



```
------------------------
Running opt 1
--------------------------------------------------
solver: ralg problem: unnamed type: NLP goal: max
iter objFunVal log10(maxResidual)
0 0.000e+00 -100.00
164 2.259e-03 -16.90
istop: 4 (|| F[k] - F[k-1] || < ftol)
Solver: Time Elapsed = 1.08 CPU Time Elapsed = 1.09
objFunValue: 0.0022592224 (feasible, MaxResidual = 1.25767e-17)
Running opt 2
--------------------------------------------------
solver: ralg problem: unnamed type: NLP goal: max
iter objFunVal log10(maxResidual)
0 -4.573e-04 -16.90
135 2.834e-03 -6.11
istop: 4 (|| F[k] - F[k-1] || < ftol)
Solver: Time Elapsed = 0.93 CPU Time Elapsed = 0.93
objFunValue: 0.0028336503 (feasible, MaxResidual = 7.74669e-07)
Running opt 3
--------------------------------------------------
solver: ralg problem: unnamed type: NLP goal: max
iter objFunVal log10(maxResidual)
0 5.025e-04 -5.60
56 1.987e-03 -6.10
istop: 4 (|| F[k] - F[k-1] || < ftol)
Solver: Time Elapsed = 0.39 CPU Time Elapsed = 0.39
objFunValue: 0.0019868082 (feasible, MaxResidual = 7.89413e-07)
Running opt 4
--------------------------------------------------
solver: ralg problem: unnamed type: NLP goal: max
iter objFunVal log10(maxResidual)
0 -1.090e-03 -5.62
155 2.535e-03 -6.13
istop: 4 (|| F[k] - F[k-1] || < ftol)
Solver: Time Elapsed = 1.09 CPU Time Elapsed = 1.07
objFunValue: 0.0025350419 (feasible, MaxResidual = 7.34746e-07)
Running opt 5
--------------------------------------------------
solver: ralg problem: unnamed type: NLP goal: max
iter objFunVal log10(maxResidual)
0 -5.104e-05 -12.88
70 2.104e-03 -6.17
istop: 4 (|| F[k] - F[k-1] || < ftol)
Solver: Time Elapsed = 0.47 CPU Time Elapsed = 0.47
objFunValue: 0.0021038503 (feasible, MaxResidual = 6.81124e-07)
Running opt 6
--------------------------------------------------
solver: ralg problem: unnamed type: NLP goal: max
iter objFunVal log10(maxResidual)
0 7.900e-04 -12.86
74 1.657e-03 -6.38
istop: 4 (|| F[k] - F[k-1] || < ftol)
Solver: Time Elapsed = 0.45 CPU Time Elapsed = 0.45
objFunValue: 0.0016573733 (feasible, MaxResidual = 4.14251e-07)
Running opt 7
--------------------------------------------------
solver: ralg problem: unnamed type: NLP goal: max
iter objFunVal log10(maxResidual)
0 3.775e-04 -6.17
36 8.416e-04 -12.87
istop: 4 (|| F[k] - F[k-1] || < ftol)
Solver: Time Elapsed = 0.2 CPU Time Elapsed = 0.21
objFunValue: 0.00084155964 (feasible, MaxResidual = 1.35589e-13)
Running opt 8
--------------------------------------------------
solver: ralg problem: unnamed type: NLP goal: max
iter objFunVal log10(maxResidual)
0 -8.904e-04 -12.87
113 2.350e-03 -13.05
istop: 4 (|| F[k] - F[k-1] || < ftol)
Solver: Time Elapsed = 0.66 CPU Time Elapsed = 0.67
objFunValue: 0.0023500047 (feasible, MaxResidual = 8.82523e-14)
Running opt 9
--------------------------------------------------
solver: ralg problem: unnamed type: NLP goal: max
iter objFunVal log10(maxResidual)
0 3.105e-04 -6.82
32 1.197e-03 -6.10
istop: 4 (|| F[k] - F[k-1] || < ftol)
Solver: Time Elapsed = 0.19 CPU Time Elapsed = 0.19
objFunValue: 0.001196871 (feasible, MaxResidual = 7.90877e-07)
```

My name is Ewing and I'm a financial analyst working in a quantitative research group. I use a variety of optimizers to solve all types of problems, the most prevalent of which are constrained versions of the Markowitz portfolio optimization paradigm.

I have a fair bit of experience with MATLAB but recently migrated to Python, where I have found OpenOpt to be a great tool.

Recently, I've encountered strange problems with the NLP/NSP objects: simple problems that were completely and quickly solvable in OpenOpt 0.28 have started crashing in newer versions, including 0.31, for no apparent reason.

Here is the code. It is quite simple, and the problem could probably be transformed to use another solver, but I typically add many more non-linear constraints, which makes NLP/NSP better suited to my purposes. Please note that the same code runs flawlessly and quickly in version 0.28, but there is a very good chance I'm doing something very wrong here.

Thank you so much for your help.

```
from openopt import NLP
import numpy as np

numAssets = 25
assets = ['Asset_' + str(currentAsset) for currentAsset in range(0, numAssets)]

def objectiveFunction(x, mu, sigma, aversion = 1):
    x_ = x.reshape(-1, 1)
    # 2.0 avoids Python 2 integer division (aversion/2 would be 0 for aversion = 1)
    return float(np.dot(x_.T, mu) - aversion / 2.0 * np.dot(x_.T, np.dot(sigma, x_)))

def objectiveGradient(x, mu, sigma, aversion = 1):
    x_ = x.reshape(-1, 1)
    return mu - aversion * np.dot(sigma, x_)

def nlConstraint(x, sigma, conVal = 0.02):
    x_ = x.reshape(-1, 1)
    return np.dot(x_.T, np.dot(sigma, x_)) - pow(conVal, 2) / 21

def nlConstraintGradient(x, sigma, conVal = 0.02):
    x_ = x.reshape(-1, 1)
    return 2 * np.dot(sigma, x_)

A = np.identity(numAssets)            # A x <= b, i.e. each x_i <= 0.2
b = np.tile(0.2, (numAssets, 1))
Aeq = np.tile(1.0, (1, len(assets)))  # Aeq x = beq: weights sum to 0
beq = np.array([0.]).reshape(-1, 1)

numOpt = 1000
x_init = np.tile(0., (len(assets), 1))

for currentOpt in range(1, numOpt):
    print 'Running opt ' + str(currentOpt)

    assetReturns = np.random.randn(1000, numAssets) * 0.5 * 0.05 / pow(252, 0.5)
    currentMu = assetReturns.mean(axis = 0).reshape(-1, 1) * 21
    currentSigma = np.cov(assetReturns.T) * 21

    problem = NLP(f = objectiveFunction, x0 = x_init, A = A, b = b, Aeq = Aeq, beq = beq,
                  goal = 'max', df = objectiveGradient, ftol = 1e-5, scale = 1,
                  maxIter = 1e5, iprint = 0,
                  c = nlConstraint, dc = nlConstraintGradient, contol = 1e-6,
                  maxFunEvals = 1e6)
    problem.args.f = (currentMu, currentSigma)
    problem.args.c = (currentSigma,)
    problem.solve('ralg')

    if problem.stopcase == 1:
        x_init = problem.xf.reshape(-1, 1)  # warm-start the next opt
    else:
        print "Not feasible"
        break
```
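For cross-checking a single one of these optimizations outside OpenOpt, the same problem can be posed to scipy.optimize.minimize with SLSQP (a sketch, not part of the original script; SciPy minimizes, so the objective is negated, and its 'ineq' constraints use the g(x) >= 0 convention, the opposite sign of OpenOpt's c(x) <= 0):

```python
import numpy as np
from scipy.optimize import minimize

numAssets = 25
rng = np.random.RandomState(0)
assetReturns = rng.randn(1000, numAssets) * 0.5 * 0.05 / 252 ** 0.5
mu = assetReturns.mean(axis=0) * 21
sigma = np.cov(assetReturns.T) * 21
aversion, conVal = 1.0, 0.02

result = minimize(
    # negate the mean-variance utility, since SciPy minimizes
    lambda x: -(x @ mu - aversion / 2.0 * x @ sigma @ x),
    x0=np.zeros(numAssets),
    method='SLSQP',
    bounds=[(None, 0.2)] * numAssets,   # A x <= b, i.e. each x_i <= 0.2
    constraints=[
        # Aeq x = beq: weights sum to 0 (dollar-neutral)
        {'type': 'eq', 'fun': lambda x: x.sum()},
        # risk limit x'Sigma x <= conVal^2 / 21, rewritten as g(x) >= 0
        {'type': 'ineq', 'fun': lambda x: conVal ** 2 / 21 - x @ sigma @ x},
    ],
)
print(result.x.round(4))
```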