**Course introduction and syllabus.** [PDF]

**Course background.** [PDF]

- 1.1: From time to frequency domain.
- 1.2: From frequency to time domain.
- 1.3: Dynamic properties of LTI systems.
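The move between time and frequency domains can be sketched numerically; a minimal illustration, assuming SciPy is available (the first-order system here is a made-up example, not one from the course notes):

```python
import numpy as np
from scipy import signal

# Hypothetical first-order lag: the time-domain ODE y' + 2y = u
# corresponds to the frequency-domain transfer function G(s) = 1/(s + 2).
sys = signal.TransferFunction([1.0], [1.0, 2.0])

# Frequency response G(jw) on a log-spaced grid of frequencies (rad/s).
w, H = signal.freqresp(sys, w=np.logspace(-2, 2, 200))

# DC gain is G(0) = 1/2; the magnitude rolls off at high frequency.
print(abs(H[0]))
```

The same LTI dynamic properties (gain, bandwidth, phase lag) can then be read off `H` directly.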
**State-space dynamic systems (continuous-time).** [PDF]

- 2.1: Introduction to LTI state-space models.
- 2.2: Four canonical forms for LTI state-space models.
- 2.3: One more canonical form, transformations.
- 2.4: Time (dynamic) response.
- 2.5: Diagonalizing the *A* matrix.
- 2.6: The Jordan canonical form; MIMO canonical forms.
- 2.7: Zeros of a state-space system.
- 2.8: Linear time-varying systems.
- 2.9: What about nonlinear systems?
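The time response and diagonalization topics above can be sketched together; a minimal example, assuming NumPy/SciPy (the system matrix is a made-up two-state example with distinct eigenvalues):

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical LTI system x' = A x with distinct real eigenvalues -1, -2,
# so A is diagonalizable: A = V diag(lambda) V^{-1}.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
lam, V = np.linalg.eig(A)

# Zero-input response x(t) = e^{At} x0, computed two equivalent ways.
x0 = np.array([1.0, 0.0])
t = 0.7
x_expm = expm(A * t) @ x0                                        # matrix exponential
x_diag = (V @ np.diag(np.exp(lam * t)) @ np.linalg.inv(V)) @ x0  # via diagonalization

print(np.allclose(x_expm, x_diag))
```

Diagonalizing makes the modal structure explicit: each eigenvalue contributes a scalar exponential `exp(lam*t)` to the response.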
**State-space dynamic systems (discrete-time).** [PDF]

- 3.1: The *z*-transform.
- 3.2: Working with the *z*-transform.
- 3.3: Discrete-time state-space form.
- 3.4: More on discrete-time state-space models.
- 3.5: Linear time-varying and nonlinear discrete-time systems.
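A minimal sketch of the discrete-time state-space form, assuming SciPy (the double integrator and sample period here are illustrative choices, not from the notes):

```python
import numpy as np
from scipy import signal

# Hypothetical continuous-time double integrator, discretized with a
# zero-order hold at sample period T.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
T = 0.1

Ad, Bd, Cd, Dd, _ = signal.cont2discrete((A, B, C, D), dt=T, method='zoh')

# Step the recursion x[k+1] = Ad x[k] + Bd u[k] with a unit input.
x = np.zeros((2, 1))
for k in range(10):
    x = Ad @ x + Bd * 1.0
print(x.ravel())
```

The recursion replaces the continuous-time matrix exponential with repeated multiplication by `Ad`, which is what makes discrete-time simulation so cheap.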
**Stability.** [PDF]

- 4.1: Vector norms and quadratic forms.
- 4.2: Matrix gain.
- 4.3: Lyapunov stability.
- 4.4: Proof of the Lyapunov stability theorem.
- 4.5: Discrete-time Lyapunov stability.
- 4.6: Stability of locally linearized systems.
- 4.7: Input-output stability, LTV case.
- 4.8: Input-output stability, LTI case.
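Lyapunov's stability condition can be checked numerically; a minimal sketch, assuming SciPy (the stable `A` and the choice `Q = I` are illustrative):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical Hurwitz A (eigenvalues -1, -2): solve the Lyapunov
# equation A^T P + P A = -Q for Q > 0 and verify that P > 0.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
Q = np.eye(2)

# SciPy's convention solves A X + X A^T = Q, so pass A^T and -Q
# to recover our A^T P + P A = -Q.
P = solve_continuous_lyapunov(A.T, -Q)

eigP = np.linalg.eigvalsh(P)
print(eigP)   # all positive iff A is Hurwitz
```

Finding such a positive-definite `P` certifies that V(x) = xᵀPx is a Lyapunov function for the system.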
**Observability and controllability.** [PDF]

- 5.1: Continuous-time observability: Where am I?
- 5.2: Continuous-time controllability: Can I get there from here?
- 5.3: Discrete-time controllability and observability.
- 5.4: Cayley-Hamilton theorem.
- 5.5: Continuous-time Gramians.
- 5.6: Discrete-time Gramians.
- 5.7: Computing transformation matrices.
- 5.8: Canonical (Kalman) decompositions.
- 5.9: PBH controllability/observability tests.
- 5.10: Minimal realizations: Why not controllable/observable.
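The rank tests behind these topics are short to state in code; a minimal sketch, assuming NumPy (the SISO system is a made-up example):

```python
import numpy as np

# Hypothetical SISO system: build the controllability matrix
# [B, AB, ..., A^{n-1}B] and observability matrix [C; CA; ...; CA^{n-1}],
# then check that both have full rank n.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
n = A.shape[0]

ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
obsv = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

print(np.linalg.matrix_rank(ctrb) == n, np.linalg.matrix_rank(obsv) == n)
```

The Cayley-Hamilton theorem is what justifies stopping the powers of `A` at n-1: higher powers add no new directions.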
**State-feedback control.** [PDF]

- 6.1: State-feedback control.
- 6.2: Bass-Gura pole placement.
- 6.3: Ackermann's formula.
- 6.4: Reference input.
- 6.5: Pole placement.
- 6.6: Integral control for continuous-time systems.
- 6.7: State feedback for discrete-time systems.
- 6.8: MIMO control design.
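Pole placement in the style of Bass-Gura/Ackermann can be sketched with SciPy's built-in placement routine (the double integrator and target poles are illustrative choices):

```python
import numpy as np
from scipy.signal import place_poles

# Hypothetical controllable pair (A, B): place the closed-loop poles of
# A - B K at -2 and -3.
A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator
B = np.array([[0.0], [1.0]])

K = place_poles(A, B, [-2.0, -3.0]).gain_matrix

cl_poles = np.linalg.eigvals(A - B @ K)
print(np.sort(cl_poles.real))
```

Full state feedback u = -Kx moves every pole at will precisely because (A, B) is controllable.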
**Output-feedback control.** [PDF]

- 7.1: Open-loop and closed-loop estimators.
- 7.2: The observer gain design problem.
- 7.3: Discrete-time prediction estimator.
- 7.4: Compensation design: Separation principle.
- 7.5: The compensator, continuous- and discrete-time.
- 7.6: Current estimator/compensator.
- 7.7: Compensator design using current estimator.
- 7.8: Discrete-time reduced-order estimator.
- 7.9: Discrete-time reduced-order prediction compensator.
- 7.10: Continuous-time reduced-order estimator.
- 7.11: Estimator pole placement.
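Estimator pole placement is the dual of state-feedback design; a minimal sketch, assuming SciPy (the system and estimator poles are illustrative, with the estimator made faster than a typical controller):

```python
import numpy as np
from scipy.signal import place_poles

# Hypothetical observable pair (A, C): by duality, the observer gain L
# for the error dynamics A - L C comes from pole placement on (A^T, C^T).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
C = np.array([[1.0, 0.0]])

L = place_poles(A.T, C.T, [-8.0, -9.0]).gain_matrix.T

est_poles = np.linalg.eigvals(A - L @ C)
print(np.sort(est_poles.real))
```

The separation principle then lets the controller gain K and observer gain L be designed independently and combined into one compensator.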
**Linear quadratic regulator.** [PDF]

- 8.1: Introduction to optimal control.
- 8.2: Dynamic programming: Bellman's principle of optimality.
- 8.3: The discrete-time LQR problem.
- 8.4: Infinite-horizon discrete-time LQR.
- 8.5: The continuous-time LQR problem (a-c).
- 8.6: The continuous-time LQR problem (d).
- 8.7: Solving the differential Riccati equation via simulation.
- 8.8: Continuous-time systems and Chang-Letov.
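The infinite-horizon discrete-time LQR solution can be sketched via the algebraic Riccati equation, assuming SciPy (the system and weights Q, R are made-up illustrative choices):

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Hypothetical discrete-time system: solve the discrete algebraic Riccati
# equation for P, then form the steady-state LQR gain K.
Ad = np.array([[1.0, 0.1], [0.0, 1.0]])
Bd = np.array([[0.005], [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])

P = solve_discrete_are(Ad, Bd, Q, R)
K = np.linalg.solve(R + Bd.T @ P @ Bd, Bd.T @ P @ Ad)   # u[k] = -K x[k]

cl = np.linalg.eigvals(Ad - Bd @ K)
print(np.abs(cl))   # all inside the unit circle: closed loop is stable
```

Iterating Bellman's recursion backward over a long horizon converges to this same steady-state P, which is the dynamic-programming view of the result.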