**Course introduction and syllabus.**

**Introduction to System Identification.**

**Unit-Pulse-Response Identification.**

- 2.1: Continuous-time linear time-invariant systems.
- 2.2: The importance of the unit-pulse response.
- 2.3: Direct approach to finding unit-pulse response.
- 2.4: Scalar random variables.
- 2.5: Vector random variables.
- 2.6: Properties of jointly-distributed random variables.
- 2.7: Vector random (stochastic) processes.
- 2.8: Back to system identification.
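
The core idea of this unit — recovering a unit-pulse response from input-output data by exploiting the statistics of a white-noise input — can be sketched as follows. This is an illustrative example, not course code: the first-order response `h_true` and all signal lengths are hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical true unit-pulse response: a decaying first-order system
h_true = 0.5 ** np.arange(10)

# White-noise input; noiseless output by convolution, y[t] = sum_m h[m] u[t-m]
N = 200_000
u = rng.standard_normal(N)
y = np.convolve(u, h_true)[:N]

# For white input, E[y[t] u[t-k]] = h[k] * E[u^2], so a sample
# cross-correlation divided by the input energy estimates h[k]
h_hat = np.array([np.dot(y[k:], u[:N - k]) for k in range(10)]) / np.dot(u, u)
```

With enough data, `h_hat` converges to `h_true`; the estimate's variance shrinks roughly as 1/N, which is why long records help.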

**Frequency-Response Identification.**

- 3.1: Transfer operator and frequency function.
- 3.2: Direct identification of frequency function.
- 3.3: Identification of frequency function via Fourier analysis.
- 3.4: Empirical transfer function estimates.
- 3.5: Spectra of quasi-stationary signals.
- 3.6: Smoothed periodograms.
- 3.7: Frequency filtering.
- 3.8: Blackman-Tukey estimate.
- 3.9: ETFE and SPA toolbox commands.
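
The empirical transfer-function estimate (ETFE) of topic 3.4 amounts to a ratio of DFTs of output and input. A minimal sketch, using a hypothetical first-order plant (the pole at 0.5 and the record length are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4096
u = rng.standard_normal(N)

# Hypothetical plant: y[t] = 0.5*y[t-1] + u[t-1], i.e., G(z) = 1/(z - 0.5)
y = np.zeros(N)
for t in range(1, N):
    y[t] = 0.5 * y[t - 1] + u[t - 1]

# ETFE: ratio of the DFTs of output and input
G_hat = np.fft.rfft(y) / np.fft.rfft(u)

# True frequency function for comparison: G(e^{jw}) = e^{-jw} / (1 - 0.5 e^{-jw})
w = np.linspace(0, np.pi, len(G_hat))
G_true = np.exp(-1j * w) / (1 - 0.5 * np.exp(-1j * w))
```

Even on noiseless data the raw ETFE is rough at frequencies where the input DFT is small, which motivates the smoothed-periodogram and Blackman-Tukey estimates covered in topics 3.6 and 3.8.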

**Transfer-Function Identification.**

- 4.1: Introduction to transfer functions.
- 4.2: Some examples of time responses versus pole locations.
- 4.3: Bode plots from discrete-time transfer functions.
- 4.4: System ID with transfer-function models.
- 4.5: Initial model structure selection.
- 4.6: Fitting parameterized model: Simulation or prediction?.
- 4.7: Fitting parameterized model: Cost function.
- 4.8: Solving the linear least-squares ARX problem.
- 4.9: Nonlinear optimization.
- 4.10: Generic nonlinear optimization example.
- 4.11: Toolbox methods (1): Frequency response.
- 4.12: Toolbox methods (2): Impulse response, residuals.
- 4.13: Toolbox methods (3): Model validation using correlations.
- 4.14: Toolbox methods (4): ARX model size.
- 4.15: An example with nonwhite noise.
- 4.16: Model quality: Bias error.
- 4.17: Bias problems in ARX WLS solution.
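
The linear least-squares ARX fit of topic 4.8 can be sketched in a few lines. The system, noise level, and record length below are hypothetical choices for illustration; with white equation-error noise the LS estimate is consistent.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 5000
u = rng.standard_normal(N)
e = 0.1 * rng.standard_normal(N)  # white equation-error noise (hypothetical level)

# Hypothetical ARX(1,1) system: y[t] + a*y[t-1] = b*u[t-1] + e[t]
a_true, b_true = -0.7, 2.0
y = np.zeros(N)
for t in range(1, N):
    y[t] = -a_true * y[t - 1] + b_true * u[t - 1] + e[t]

# Stack the one-step-ahead predictor rows: y[t] = [-y[t-1], u[t-1]] @ [a, b] + e[t]
Phi = np.column_stack([-y[:-1], u[:-1]])
theta = np.linalg.lstsq(Phi, y[1:], rcond=None)[0]
```

If the noise is not white (the situation of topic 4.15), this same solver returns biased estimates — the bias issue examined in topics 4.16 and 4.17.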

**State-Space Identification, Noiseless Data.**

- 5.1: Introduction to state-space models.
- 5.2: Working with state-space systems.
- 5.3: Discrete-time Markov parameters.
- 5.4: Discrete-time controllability and observability.
- 5.5: State-space realization problem.
- 5.6: Quadratic forms.
- 5.7: Matrix gain.
- 5.8: Singular value decomposition (SVD) of a matrix.
- 5.9: Moore-Penrose pseudo-inverse.
- 5.10: Back to Ho-Kalman.
- 5.11: Fibonacci example.
- 5.12: Eigensystem Realization Algorithm (ERA).
- 5.13: The subspace methods, data matrices.
- 5.14: Geometric projections that we will need.
- 5.15: Deterministic subspace system identification.
- 5.16: Data matrices for continuous-time realization.
- 5.17: Simple continuous-time realization.
- 5.18: Numeric problems with simple algorithm.
- 5.19: Proof of better conditioning.
- 5.20: Improved continuous-time system ID.
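
The Ho-Kalman/SVD realization idea of topics 5.8-5.11 can be demonstrated on the Fibonacci example: treating the Fibonacci numbers as Markov parameters, a rank-2 factorization of their Hankel matrix recovers a state-space model whose eigenvalues are the golden-ratio growth rates. The Hankel size `m = 5` is an arbitrary illustrative choice.

```python
import numpy as np

# Markov parameters from the Fibonacci recurrence g[k+2] = g[k+1] + g[k]
g = [1.0, 1.0]
for _ in range(18):
    g.append(g[-1] + g[-2])
g = np.array(g)

# Hankel matrices built from the Markov parameters, H1 shifted by one sample
m = 5
H0 = np.array([[g[i + j] for j in range(m)] for i in range(m)])
H1 = np.array([[g[i + j + 1] for j in range(m)] for i in range(m)])

# Rank-n SVD factorization H0 = O @ Ctrb (observability times controllability),
# then A follows from the shifted Hankel matrix: A = O^+ @ H1 @ Ctrb^+
n = 2
U, s, Vt = np.linalg.svd(H0)
O = U[:, :n] * np.sqrt(s[:n])
Ctrb = np.sqrt(s[:n, None]) * Vt[:n]
A = np.linalg.pinv(O) @ H1 @ np.linalg.pinv(Ctrb)
eigs = np.sort(np.linalg.eigvals(A).real)  # eigenvalues (1 ± sqrt(5))/2
```

The singular values of `H0` drop to (numerical) zero after the second one, which is how the SVD reveals the true model order.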

**State-Space Identification, Noisy Data.**

- 6.1: Stochastic identification via subspace methods.
- 6.2: Step 1: Some statistical relationships.
- 6.3: Step 2a: Kalman-filter covariance.
- 6.4: Step 2b: Kalman-filter state.
- 6.5: Step 3: Geometric properties of stochastic systems.
- 6.6: Step 4: Computing the system matrices.
- 6.7: Combined deterministic-stochastic identification.
- 6.8: Step 1a: Kalman-filter covariance.
- 6.9: Step 1b: Kalman-filter state.
- 6.10: Step 2a: Orthogonal projection for combined systems....
- 6.11: Step 2b: Orthogonal projection for combined systems....
- 6.12: Step 2c: Orthogonal projection for combined systems....
- 6.13: Step 3: Oblique projections for combined systems.
- 6.14: Step 4a: Computing the system matrices A, C.
- 6.15: Step 4b: Computing the system matrices B, D.
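
The orthogonal row-space projections used throughout the subspace steps above can be sketched with a small helper. The data-matrix shapes are arbitrary illustrative choices; the projection formula itself is the standard least-squares one.

```python
import numpy as np

rng = np.random.default_rng(3)

def proj(A, B):
    """Orthogonal projection of the rows of A onto the row space of B:
    A / B := A @ B^T @ (B @ B^T)^+ @ B."""
    return A @ B.T @ np.linalg.pinv(B @ B.T) @ B

# Hypothetical data matrices (rows = signals, columns = samples)
A = rng.standard_normal((3, 50))
B = rng.standard_normal((4, 50))
P = proj(A, B)
# The residual A - P is orthogonal to every row of B, and
# projecting a second time changes nothing (idempotence)
```

Oblique projections (topic 6.13) generalize this by projecting along the row space of one matrix onto the row space of another, but they reduce to the same pseudo-inverse machinery.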

**Advanced Topics in System Identification.**