
Homework Set Two
ECE 271A
Department of Electrical and Computer Engineering
University of California, San Diego
Nuno Vasconcelos

1. Problem 2.6.26 in Duda, Hart, and Stork (DHS).

The counts c_1, . . . , c_N follow a multinomial distribution

\[ P(c_1, \dots, c_N) = \frac{n!}{\prod_{k=1}^{N} c_k!} \prod_{j=1}^{N} \pi_j^{c_j}, \]

where n = \sum_k c_k.

a) Derive the ML estimator for the parameters πi, i = 1, . . . , N. (Hint: notice that these parameters are probabilities, which makes this an optimization problem with a constraint. If you know about Lagrange multipliers, feel free to use them. Otherwise, note that minimizing a function f(a, b) under the constraint a + b = 1 is the same as minimizing the function f(a, 1 − a).)
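As a rough sketch of the Lagrange-multiplier route the hint suggests (writing c_i for the i-th count and n = Σ_i c_i; this is an outline, not a full solution):

```latex
% Log-likelihood (dropping the constant combinatorial term), with the
% constraint \sum_j \pi_j = 1 attached via a multiplier \lambda:
\mathcal{L}(\pi, \lambda) = \sum_{j=1}^{N} c_j \ln \pi_j
    + \lambda \Big( 1 - \sum_{j=1}^{N} \pi_j \Big)
% Stationarity in \pi_i:
\frac{c_i}{\pi_i} - \lambda = 0 \quad \Longrightarrow \quad \pi_i = \frac{c_i}{\lambda}
% Enforcing \sum_j \pi_j = 1 gives \lambda = \sum_j c_j = n, hence
\hat{\pi}_i = \frac{c_i}{n}
```

That is, the ML estimate is simply the empirical frequency of each outcome.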

b) Derive the same result by computing derivatives in the usual way. (Hint: you may want to use a manual of matrix calculus such as that at http://www.ee.ic.ac.uk/hp/staff/dmb/matrix/calculus.html.

Also, it may be easier to work with the precision matrix P = Σ⁻¹.)
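Two standard matrix-calculus identities (both in the linked reference) make the precision-matrix hint concrete; for symmetric positive-definite P:

```latex
\frac{\partial}{\partial P} \ln |P| = P^{-1},
\qquad
\frac{\partial}{\partial P} \operatorname{tr}(P A) = A^{\top}
```

Writing the Gaussian log-likelihood per sample as \frac{1}{2}\ln|P| - \frac{1}{2}\operatorname{tr}\big(P\,(x-\mu)(x-\mu)^{\top}\big) (up to constants) lets both identities apply directly when setting the derivative to zero.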
6. (computer) This week we will continue trying to classify our cheetah example. Once again we use the decomposition into 8×8 image blocks, compute the DCT of each block, and zig-zag scan. However, we are going to assume that the class-conditional densities are multivariate Gaussians of 64 dimensions.
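A minimal numpy sketch of the feature pipeline this problem describes — 8×8 block, 2-D DCT, zig-zag scan into a 64-dimensional vector, then ML estimates of a multivariate Gaussian. Function names are illustrative, not from any course-provided code, and the DCT here is the orthonormal DCT-II with JPEG zig-zag ordering (an assumption about the intended conventions):

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix: row k is the k-th cosine basis vector."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.cos(np.pi * k * (2 * i + 1) / (2 * n))
    C[0, :] *= np.sqrt(1.0 / n)
    C[1:, :] *= np.sqrt(2.0 / n)
    return C

def block_dct2(block):
    """2-D DCT of one square block, computed as C @ block @ C.T."""
    C = dct_matrix(block.shape[0])
    return C @ block @ C.T

def zigzag_order(n=8):
    """(row, col) indices of an n x n block in JPEG zig-zag scan order."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def gaussian_ml(X):
    """ML estimates (mean, covariance) for a multivariate Gaussian.

    X is an (m, d) array of m feature vectors; note the ML covariance
    divides by m, not m - 1."""
    mu = X.mean(axis=0)
    Z = X - mu
    Sigma = (Z.T @ Z) / X.shape[0]
    return mu, Sigma

# Example: turn one 8x8 block into a 64-dimensional zig-zag feature vector.
block = np.arange(64, dtype=float).reshape(8, 8)
F = block_dct2(block)
feature = np.array([F[r, c] for r, c in zigzag_order(8)])
```

In practice the training blocks for each class would be stacked into an (m, 64) matrix and passed to `gaussian_ml` to obtain the class-conditional mean and covariance.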


