1. Low Rank Approximation and Regression in Input Sparsity Time
   (Combinatorial Algorithms for Numerical Linear Algebra)

    Ken Clarkson
    David Woodruff

    IBM Research - Almaden

  2. Elevator Version

  3. Results

  4. Sketching `A`

  5. Embedding via Sketches

  6. Sketches and Regression

  7. Sketching Matrices

  8. Our sketching matrix
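
    The sketching matrix in this work is a sparse embedding (CountSketch) matrix: each column has exactly one nonzero entry, a random sign placed in a uniformly random row. A minimal sketch of its construction (NumPy/SciPy; the helper name `count_sketch` and the sizes are illustrative choices, not from the slides):

```python
import numpy as np
from scipy.sparse import csr_matrix

def count_sketch(t, n, rng):
    """t x n sparse embedding: column j holds a random sign in row h(j)."""
    rows = rng.integers(0, t, size=n)        # h(j): random target row per column
    signs = rng.choice([-1.0, 1.0], size=n)  # sigma(j): random sign per column
    return csr_matrix((signs, (rows, np.arange(n))), shape=(t, n))

rng = np.random.default_rng(0)
n, d, t = 10_000, 20, 500
A = rng.standard_normal((n, d))
S = count_sketch(t, n, rng)
SA = S @ A   # each row of A is added (with a sign) into one row of SA
```

    Since each row of `A` contributes to exactly one row of `S A`, the product costs $O(\mathrm{nnz}(A))$ time, which is the source of the "input sparsity time" in the title.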

  9. Properties of Our Sketching Matrix

  10. Preserving Norms

  11. Preserving norms of `y`

  12. The Virtues of Obliviousness, and Leverage Scores

  13. Leverage Scores as Bounds on `y_i`

  14. High and Low Leverage Scores

  15. Error for `y^H`

  16. Error for `y^L`

  17. Mixed error

  18. Mixed error, enough already

  19. Big, Medium, and Little Leverage Scores

  20. Within-Group Error, Analysis Ingredients

  21. (Even More Matrix Concentration Goodness)

    Let $\ell$ be an integer parameter. For $i\in [\ell]$, let $H_i\in\mathbb{R}^{d\times d}$ be independent, symmetric, zero-mean random matrices. Define \begin{align}\rho_i^2 &\equiv \left\|\mathbb{E}[H_i H_i]\right\|_2 \textrm{ and }\\ M &\equiv \max_{i\in [\ell]} \left\|H_i\right\|_2.\end{align} Then for any $\tau > 0$, \[ \log \Pr\left[\left\|\sum_{i\in [\ell]} H_i\right\|_2 > \tau\right] \le \log 2d - \frac{\tau^2/2}{\sum_{i \in [\ell]}\rho_i^2 + M\tau/3}. \]
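
    To make the bound concrete, here is a small numerical check (all parameter values are illustrative choices, not from the slides). The $H_i$ are independent random sign-diagonal matrices $\mathrm{diag}(\varepsilon)$ with $\varepsilon$ uniform in $\{-1,+1\}^d$, so $M = 1$ and $\rho_i^2 = \|\mathbb{E}[H_i H_i]\|_2 = \|I\|_2 = 1$:

```python
import numpy as np

rng = np.random.default_rng(0)
d, ell, trials = 16, 100, 20_000
tau = 40.0
M, rho2 = 1.0, 1.0   # ||H_i||_2 = 1 and E[H_i H_i] = I for sign-diagonal H_i

hits = 0
for _ in range(trials):
    # sum of ell sign-diagonal matrices = d independent +/-1 random walks;
    # the spectral norm of the (diagonal) sum is the largest |walk endpoint|
    walks = rng.choice([-1.0, 1.0], size=(ell, d)).sum(axis=0)
    hits += np.abs(walks).max() > tau

log_bound = np.log(2 * d) - (tau**2 / 2) / (ell * rho2 + M * tau / 3)
print(f"empirical tail probability: {hits / trials:.5f}")
print(f"matrix Bernstein bound:     {np.exp(log_bound):.5f}")
```

    The empirical tail should land well below the bound; the slack reflects the $2d$ factor and the Bernstein denominator, which are worst-case over all choices of the $H_i$.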
  22. Sketching for Low Rank Approximation

  23. Concluding Remarks