MATH Final Project
Consider the following mathematical model for a spring-mass-dashpot system, found on p.11 of the Optimization text:
u″(t) + c u′(t) + k u(t) = 0
on the interval [0, T] with initial conditions u(0) = u0 and u′(0) = v0. There are N = 2 parameters x→ = (c, k)^T. There are M observations of displacement {u_d,i}, i = 1, ..., M, sampled at times {t_i}, i = 1, ..., M, where t_i = (i − 1)T/(M − 1).
1. Solve the system analytically, letting c = 0.2 and k = 10, with initial conditions u0 = 3 and v0 = −10. You may use a computational tool to assist.
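For reference, a minimal sketch of this step using MATLAB's Symbolic Math Toolbox (dsolve); the variable names are illustrative and any equivalent tool is acceptable:

    syms u(t)
    c = 0.2; k = 10; u0 = 3; v0 = -10;
    Du = diff(u, t);
    ode   = diff(u, t, 2) + c*Du + k*u == 0;    % u'' + c u' + k u = 0
    conds = [u(0) == u0, Du(0) == v0];          % initial conditions
    uSol(t) = simplify(dsolve(ode, conds))      % underdamped oscillation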
2. Compute the numerical solution to the above second-order differential equation for t ∈ [0, 20] using a MATLAB routine such as ode23 or ode15s. To use either you will need to rewrite the above equation as a system of two first-order equations. Let M = 301. At this point you will likely have two user-defined functions, one that calls the ODE solver and one that contains your system of first-order ODEs. Store your solution as a "data" vector u_d,i to be used in the next steps.
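A minimal sketch of this forward solve, assuming the illustrative function names forward_solve and rhs and the state vector z = [u; u′]:

    function [t, u] = forward_solve(x, T, M, u0, v0)
    % Solve u'' + c u' + k u = 0 as a first-order system, x = [c; k].
    % Returns displacement sampled at the M evenly spaced times t_i = (i-1)T/(M-1).
    tspan = linspace(0, T, M);
    z0 = [u0; v0];                                % z = [u; u']
    [t, z] = ode15s(@(t, z) rhs(t, z, x), tspan, z0);
    u = z(:, 1);                                  % keep displacement only
    end

    function dz = rhs(~, z, x)
    % First-order system: u' = z2, u'' = -c z2 - k z1.
    c = x(1); k = x(2);
    dz = [z(2); -c*z(2) - k*z(1)];
    end

For step 2 this would be called as, e.g., [t, ud] = forward_solve([0.2; 10], 20, 301, 3, -10); the same routine can be reused by the inverse solver in step 5.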
3. Create two new data sets that include the original data u_d found in step 2 plus noise. Use the form û_d(t_i) = u_d(t_i) + nl · r_i, where the r_i are normally distributed random numbers with zero mean and variance 1.0. You may use randn in MATLAB to generate an n-vector with random entries. Choose nl = 0.001 and nl = 0.01. Store these two noisy data sets as well.
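A sketch of this step, assuming ud is the M×1 data vector from step 2 (the cell-array name is illustrative):

    nl = [0.001, 0.01];                               % two noise levels
    ud_noisy = cell(1, numel(nl));
    for j = 1:numel(nl)
        ud_noisy{j} = ud + nl(j) * randn(size(ud));   % additive zero-mean, unit-variance noise scaled by nl
    end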
4. Access/load a separate dataset beam_new from the beam_new.mat file in MATLAB, obtained from a vibrating beam experiment. At this point you should have 4 datasets: 1 "original" generated from the known parameters for the spring-mass-dashpot system, 2 modified from the original with different levels of noise, and 1 provided from the vibrating beam.
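Loading the file is a one-liner; the variable names stored inside beam_new.mat are not specified here, so inspect them first:

    S = load('beam_new.mat');   % load the file contents into a struct
    fieldnames(S)               % list the stored variables (e.g., the time and displacement vectors)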
5. Create a routine to solve the inverse problem, i.e. one that will find optimal parameters to describe your system given a set of data and a set of guess parameters. You will likely create two new files, one main function to call an optimizer and another function to evaluate the objective function. Recall that you are minimizing the objective function
J(x→) = R(x→)^T R(x→)/2 ≡ ||R(x→)||₂²/2 = (1/2) ∑_{i=1}^{M} |u_m(t_i; x→) − u_d,i|²
where u_m(t_i; x→) is the model solution at time t_i for parameter vector x→, so the residual vector R(x→) has components R_i = u_m(t_i; x→) − u_d,i.
You may end up with 4 files: the main function calls the objective function, which calls the ODE solver, which calls the ODE right-hand side. You may want to use fminsearch as the optimizer to begin.
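A minimal sketch of the Nelder-Mead version, reusing the forward_solve routine sketched in step 2; the function names fit_parameters and objective are illustrative:

    function [x_opt, Jmin] = fit_parameters(x0, T, M, u0, v0, ud)
    % Estimate x = [c; k] from the data vector ud (M x 1) on [0, T], starting from guess x0.
    obj = @(x) objective(x, T, M, u0, v0, ud);
    [x_opt, Jmin] = fminsearch(obj, x0);
    end

    function J = objective(x, T, M, u0, v0, ud)
    [~, um] = forward_solve(x, T, M, u0, v0);   % model solution at the data times
    R = um - ud;                                % residual vector
    J = 0.5 * (R' * R);                         % J(x) = ||R||^2 / 2
    end

Wrapping the fminsearch call in tic/toc and requesting its fourth output, [x_opt, Jmin, ~, out] = fminsearch(obj, x0), gives the CPU time and iteration count (out.iterations) requested in step 6.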
6. For all 4 data sets, estimate the parameters c and k by the inverse least-squares method using the initial guess x→ = (0.3, 10.2)^T. Do this first using fminsearch or another Nelder-Mead-based optimizer, which does not require the gradient. Then repeat with levmar.m or another gradient-based optimizer, which will require modifications to your code. To use levmar.m you will need to evaluate and return the objective cost (J(x→), what you are minimizing), the Jacobian, and the gradient within your objective function / the solver that calls the ODE (these two could be combined), as described in its help file and in the class notes; a finite-difference sketch of these quantities appears after the list of outputs below. Report the following outputs for each of the 4 datasets and 2 optimization routines:
- Table listing the estimated optimal values of the parameters, values of the objective function, number of iterations, and total CPU time.
- Plot of data vs. model solution with each set of optimal parameters (separately or combined).
- Plot of residual vs time (do these separately).
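Since levmar.m's exact calling convention is given in its help file rather than here, the sketch below only shows the quantities it needs: the cost J, the M×2 Jacobian of the residual (approximated by forward finite differences; exact sensitivity equations would also work), and the gradient ∇J = Jac^T R. The function name and step size h are illustrative, and forward_solve is the routine sketched in step 2.

    function [J, grad, Jac] = objective_with_gradient(x, T, M, u0, v0, ud)
    % Cost, gradient, and residual Jacobian for a gradient-based optimizer.
    [~, um] = forward_solve(x, T, M, u0, v0);
    R = um - ud;                    % residual R(x)
    J = 0.5 * (R' * R);             % cost J(x)
    h = 1e-6;                       % finite-difference step (illustrative)
    Jac = zeros(M, numel(x));       % dR/dx, one column per parameter
    for j = 1:numel(x)
        xp = x;  xp(j) = xp(j) + h;
        [~, up] = forward_solve(xp, T, M, u0, v0);
        Jac(:, j) = (up - um) / h;  % forward difference of the model solution
    end
    grad = Jac' * R;                % gradient of J
    end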
7. Comment on your results, including a discussion of how well the given ODE model fits the datasets and any pattern you may or may not see in the residual plots.