Explain the differences using the training data in trainingsamplesdct.mat.
Homework Set Two
ECE 271A
Department of Electrical and Computer Engineering
University of California, San Diego
Nuno Vasconcelos
1. Problem 2.6.26 in Duda, Hart, and Stork (DHS).
Consider n independent draws from a distribution over N possible outcomes, where outcome j occurs with probability π_j. If c_j denotes the number of times outcome j is observed, the counts follow the multinomial distribution

P(c_1, \ldots, c_N) = \frac{n!}{\prod_{j=1}^{N} c_j!} \prod_{j=1}^{N} \pi_j^{c_j}.
a) Derive the ML estimator for the parameters πi, i = 1, . . . , N. (Hint: notice that these parameters are probabilities, which makes this an optimization problem with a constraint. If you know about Lagrange multipliers, feel free to use them. Otherwise, note that minimizing a function f(a, b) under the constraint a + b = 1 is the same as minimizing the function f(a, 1 − a).)
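One way part a) can be sketched with a Lagrange multiplier, writing c_i for the number of times outcome i is observed among the n samples (a notational assumption, since the problem statement uses DHS's notation):

```latex
% Constrained log-likelihood with multiplier \lambda for \sum_i \pi_i = 1
J(\pi, \lambda) = \sum_{i=1}^{N} c_i \log \pi_i
                + \lambda \Big( 1 - \sum_{i=1}^{N} \pi_i \Big)

% Stationarity in \pi_i
\frac{\partial J}{\partial \pi_i} = \frac{c_i}{\pi_i} - \lambda = 0
\quad \Rightarrow \quad \pi_i = \frac{c_i}{\lambda}

% Enforcing the constraint fixes \lambda = \sum_j c_j = n, hence
\hat{\pi}_i = \frac{c_i}{n}
```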
b) Derive the same result by computing derivatives in the usual way. (Hint: you may want to use a manual of matrix calculus such as that at http://www.ee.ic.ac.uk/hp/staff/dmb/matrix/calculus.html. Also, it may be easier to work with the precision matrix P = Σ^{-1}.)
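Assuming the hint refers to ML estimation of a Gaussian covariance (the setting where the precision matrix usually appears), a sketch of why P = Σ^{-1} is convenient, for i.i.d. samples x_1, …, x_n with known mean μ:

```latex
% Log-likelihood written in terms of the precision matrix P = \Sigma^{-1}
l(P) = \frac{n}{2} \log |P|
     - \frac{1}{2} \sum_{i=1}^{n} (x_i - \mu)^T P (x_i - \mu) + \text{const}

% Using \partial \log|P| / \partial P = P^{-1} (P symmetric) and
% \partial (x^T P x) / \partial P = x x^T:
\frac{\partial l}{\partial P} = \frac{n}{2} P^{-1}
     - \frac{1}{2} \sum_{i=1}^{n} (x_i - \mu)(x_i - \mu)^T = 0
\quad \Rightarrow \quad
\hat{\Sigma} = \hat{P}^{-1} = \frac{1}{n} \sum_{i=1}^{n} (x_i - \mu)(x_i - \mu)^T
```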
6. (computer) This week we will continue trying to classify our cheetah example. Once again we use the decomposition into 8×8 image blocks, compute the DCT of each block, and zig-zag scan the coefficients. However, we are now going to assume that the class-conditional densities are multivariate Gaussians of 64 dimensions.
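The feature-extraction pipeline described above can be sketched in Python (a minimal sketch: the function names, the random test image, and the use of non-overlapping blocks are illustrative assumptions, not the graded setup, and the actual assignment provides its own training data):

```python
import numpy as np
from scipy.fftpack import dct


def dct2(block):
    """2-D type-II DCT with orthonormal scaling of a square block."""
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')


def zigzag_order(n=8):
    """(row, col) visiting order of the standard JPEG zig-zag scan."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))


def block_features(image, order):
    """One 64-dim zig-zag-scanned DCT vector per non-overlapping 8x8 block."""
    h, w = image.shape
    feats = []
    for r in range(0, h - 7, 8):
        for c in range(0, w - 7, 8):
            coefs = dct2(image[r:r + 8, c:c + 8])
            feats.append([coefs[i, j] for i, j in order])
    return np.asarray(feats)


def fit_gaussian(X):
    """ML estimates of a multivariate Gaussian: mean and biased covariance."""
    mu = X.mean(axis=0)
    Z = X - mu
    return mu, Z.T @ Z / X.shape[0]


# Toy usage on a random "image"; the real homework would instead use the
# cheetah image and the provided training samples for each class.
rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))
X = block_features(img, zigzag_order())   # 64 blocks x 64 features
mu, Sigma = fit_gaussian(X)
```

With one Gaussian fitted per class, classification of a block then reduces to comparing the two class log-likelihoods (plus log priors), exactly as in the Bayesian decision rule used in the earlier homework.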