Properties of Non-linear models
objective and/or constraints are nonlinear functions of the decision variables
non-constant returns to scale: the effect of an input on the output is nonlinear
in pricing models, price determines quantity sold (e.g. a lower price means more demand) -> nonlinear function
Model fitting
to measure goodness of fit, sum the squared differences between the observed values and the model's predictions -> then minimize this sum of squared differences -> squaring leads to nonlinearity
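A minimal sketch of this idea, using made-up data and an assumed exponential model a*e^(b*x) (the data, model form, and grid search are illustrative; Solver would minimize the same sum of squared errors with its nonlinear algorithm instead):

```python
import numpy as np

# Hypothetical observations (illustrative data, not from the course example)
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 8.2, 16.5, 33.0])

def sse(params):
    """Sum of squared errors -- nonlinear in (a, b) because of the squaring."""
    a, b = params
    pred = a * np.exp(b * x)
    return np.sum((y - pred) ** 2)

# Crude grid search over (a, b) to minimize the SSE
best = min(((a, b) for a in np.linspace(1, 3, 41)
                   for b in np.linspace(0.1, 1.0, 46)),
           key=sse)
```

The point is only that the objective being minimized is the SSE; the optimizer (grid search here, GRG in Solver) is interchangeable.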
financial models aim to achieve high return and low risk. Risk is usually measured by the variance or standard deviation of the portfolio -> a nonlinear function of the decision variables (investment amounts)
Non-linear models in the real world
nonlinear models are more common; it is probably very difficult to find problems that are truly linear
either approximate the NLP with an LP,
or allow nonlinearities in your models: you get more realistic models -> but they are more difficult to solve
with Solver this means the solution it finds may be suboptimal
Non-linear models x Solver
Solver is guaranteed to solve certain types of NLPs correctly
Convex: the slope is always nondecreasing (the curve bends upward)
Concave: the slope is always nonincreasing (the curve bends downward)
Properties of convex and concave functions
sum: convex + convex = convex; concave + concave = concave
multiply by a positive constant: convexity/concavity is preserved
multiply by a negative constant: convex becomes concave, concave becomes convex
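These closure properties can be checked numerically with the midpoint inequality f((u+v)/2) <= (f(u)+f(v))/2, which holds for convex functions (a quick sanity check over sample points, not a proof):

```python
def midpoint_convex(f, pts):
    """Check f((u+v)/2) <= (f(u)+f(v))/2 for all sampled point pairs."""
    return all(f((u + v) / 2) <= (f(u) + f(v)) / 2 + 1e-12
               for u in pts for v in pts)

def midpoint_concave(f, pts):
    """Concave means -f is convex."""
    return midpoint_convex(lambda x: -f(x), pts)

pts = [-3.0, -1.0, 0.0, 0.5, 2.0, 4.0]
f = lambda x: x * x      # convex
g = lambda x: abs(x)     # convex

assert midpoint_convex(lambda x: f(x) + g(x), pts)   # sum of convex -> convex
assert midpoint_convex(lambda x: 5 * f(x), pts)      # positive multiple -> convex
assert midpoint_concave(lambda x: -3 * f(x), pts)    # negative multiple -> concave
```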
Problems solver solves correctly
Conditions for maximum problems
the objective function (or its logarithm) is concave
constraints are linear
Conditions for minimization problem:
the objective function is convex
What problems does Solver have with this function?
For the figure graphed below, points A and C are called local maxima because the function is larger at A and C than at nearby points.
However, only point A maximizes the function; it is called the global maximum.
The problem is that Solver can get stuck near point C, concluding that C maximizes the function.
GRG (Generalized Reduced Gradient) Non-Linear Algorithm
Start at an initial point
Solver chooses a starting solution (your initial guess).
Compute the gradient
It estimates how the objective function changes in each direction and identifies the steepest descent direction (for minimization).
Take a small step (line search)
Solver moves a little in the downhill direction and checks if the objective improves.
If improvement → continue in that direction
If not → adjust step size or direction
Adjust for constraints (reduced gradient)
If constraints exist, Solver modifies the movement so it stays inside the feasible region, like sliding along walls instead of going through them.
Repeat
This loop (compute → step → check → adjust) repeats many times until:
improvement becomes tiny
or the gradient is near zero
→ Solver concludes it has reached a local optimum.
How to overcome this?
Try several possible starting values for the changing cells,
Run Solver from each of these, and
Take the best solution Solver finds.
-> still does not guarantee the global optimum
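This multistart strategy can be sketched with a generic local optimizer standing in for Solver (the test function and starting values are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

# f(x) = x^4 - 4x^2 + x has two local minima; which one a local solver
# finds depends on the starting point -- exactly Solver's problem
f = lambda x: x[0] ** 4 - 4 * x[0] ** 2 + x[0]

starts = [-2.0, -0.5, 0.5, 2.0]            # several trial starting values
results = [minimize(f, [s]) for s in starts]
best = min(results, key=lambda r: r.fun)   # keep the best local solution found
```

If every start happens to lie in the basin of the same local optimum, multistart still misses the global one; that is the remaining caveat.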
Application of NLP: Portfolio Selection Model
We turn the two-objective “maximize return & minimize risk” portfolio problem into a single-objective nonlinear problem by fixing a minimum return and then minimizing variance.
Step 1 — Choose a minimum required expected return
Example:
“Portfolio must earn at least 10%.”
This becomes a constraint
Step 2 — Minimize the variance (risk)
Now the objective is only one:
Minimize Risk
Interpret the portfolio standard deviation of 0.1217 in a probabilistic sense.
Specifically, if we believe that stock returns are approximately normally distributed, then the probability is about 0.68 that the actual portfolio return will be within one standard deviation of the expected return,
and the probability is about 0.95 that the actual portfolio return will be within two standard deviations of the expected return.
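The two steps above (fix a minimum return, minimize variance) can be sketched with hypothetical return and covariance data (the numbers below are illustrative, not the course example, so the resulting standard deviation differs from 0.1217):

```python
import numpy as np
from scipy.optimize import minimize

mu = np.array([0.08, 0.12, 0.15])        # hypothetical expected returns
cov = np.array([[0.04, 0.01, 0.00],      # hypothetical covariance matrix
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])

target = 0.10                             # Step 1: portfolio must earn >= 10%

var = lambda w: w @ cov @ w               # Step 2: minimize variance (nonlinear)
cons = [{"type": "eq",   "fun": lambda w: w.sum() - 1.0},      # fully invested
        {"type": "ineq", "fun": lambda w: mu @ w - target}]    # return constraint

res = minimize(var, np.ones(3) / 3, bounds=[(0, 1)] * 3, constraints=cons)
sigma = np.sqrt(res.fun)                  # portfolio standard deviation
```

The objective w'Σw is quadratic (hence convex for a valid covariance matrix) and the constraints are linear, so this is one of the NLP types a local solver handles reliably.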