In probability theory and information theory, the mutual information (sometimes known by the archaic term transinformation) of two random variables is a quantity that measures their mutual dependence. The most common unit of measurement of mutual information is the bit, used when logarithms are taken to base 2.
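As a point of reference, the standard definition for discrete random variables (with base-2 logarithms, so the result is in bits) makes the dependence measure concrete:

```latex
% Mutual information of discrete random variables X and Y,
% with base-2 logarithms so the result is measured in bits.
I(X;Y) = \sum_{x \in \mathcal{X}} \sum_{y \in \mathcal{Y}}
         p(x,y) \log_2 \frac{p(x,y)}{p(x)\,p(y)}
```

The sum is zero exactly when X and Y are independent, since then p(x,y) = p(x)p(y) and each log term vanishes.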
Big-O Algorithm Complexity Cheat Sheet
Notation for asymptotic growth
- Θ (big theta): tight upper and lower bound; reads as "equal"
- O (big oh): upper bound, tightness unknown; reads as "less than or equal"
- o (small oh): upper bound, not tight; reads as "less than"
- Ω (big omega): lower bound, tightness unknown; reads as "greater than or equal"
- ω (small omega): lower bound, not tight; reads as "greater than"
Big O is an upper bound, while Omega is a lower bound. Theta requires both Big O and Omega, which is why it is called a tight bound (it must be both the upper and the lower bound). For example, an algorithm taking Omega(n log n) takes at least n log n time but has no stated upper limit. An algorithm taking Theta(n log n) is far preferable, since it takes at least n log n time (Omega(n log n)) and no more than n log n time (O(n log n)).
So f(n) = Θ(g(n)) means f (the running time of the algorithm) grows at the same rate as g when n (the input size) gets large; the growth rate of f(n) is asymptotically proportional to g(n).
Likewise, f(n) = O(g(n)) means the growth rate of f is no faster than that of g(n). Big-O is the most useful in practice because it represents the worst-case behavior.
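As a rough illustration, the sketch below counts basic operations for two made-up functions whose running times grow as Θ(n) and Θ(n²); the functions and counts are illustrative assumptions, not taken from any source:

```python
# Sketch: counting basic operations to illustrate asymptotic growth.
# Both functions and their operation counts are illustrative assumptions.

def linear_scan(items):
    """Theta(n): touches each element exactly once."""
    ops = 0
    for _ in items:
        ops += 1
    return ops

def all_pairs(items):
    """Theta(n^2): touches every ordered pair of elements."""
    ops = 0
    for _ in items:
        for _ in items:
            ops += 1
    return ops

if __name__ == "__main__":
    for n in (10, 100, 1000):
        data = list(range(n))
        # Growing n by 10x grows linear_scan's count by 10x (O(n))
        # but grows all_pairs' count by 100x (O(n^2)).
        print(n, linear_scan(data), all_pairs(data))
```

Running it shows why the distinction matters: at n = 1000 the quadratic function already does a thousand times more work than the linear one.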
Association Analysis (e.g. Market-Basket Analysis)
Rule evaluation metrics
- Support (s): fraction of transactions that contain both X and Y
- Confidence (c): measures how often the items in Y occur in transactions containing X
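A minimal sketch of both metrics for a rule X → Y, using made-up transactions (the item names and helper functions are assumptions for illustration): support is σ(X ∪ Y) / N, and confidence is σ(X ∪ Y) / σ(X).

```python
# Sketch: support and confidence for a rule X -> Y.
# The transaction data below is made-up illustrative data.

transactions = [
    {"milk", "bread"},
    {"milk", "bread", "butter"},
    {"bread", "butter"},
    {"milk", "butter"},
]

def support(X, Y):
    """Fraction of all transactions containing both X and Y."""
    both = X | Y
    return sum(1 for t in transactions if both <= t) / len(transactions)

def confidence(X, Y):
    """Among transactions containing X, the fraction that also contain Y."""
    with_x = [t for t in transactions if X <= t]
    return sum(1 for t in with_x if Y <= t) / len(with_x)

print(support({"milk"}, {"bread"}))     # 0.5   (2 of 4 transactions)
print(confidence({"milk"}, {"bread"}))  # ~0.67 (2 of 3 milk transactions)
```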
The norm of a square matrix A is a non-negative real number, denoted ‖A‖, which measures the magnitude of the matrix. There are several different ways of defining a matrix norm:
- The 1-norm (= the maximum of the absolute column sums)
- The Euclidean norm, also called the Frobenius norm (= the square root of the sum of the squares of all the elements)
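Assuming NumPy is available, both definitions can be checked directly against a small example matrix (numpy.linalg.norm supports both orders):

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [3.0,  4.0]])

# 1-norm: maximum absolute column sum -> max(|1|+|3|, |-2|+|4|) = 6
print(np.linalg.norm(A, 1))

# Euclidean (Frobenius) norm: sqrt of the sum of squared entries
# -> sqrt(1 + 4 + 9 + 16) = sqrt(30) ~ 5.477
print(np.linalg.norm(A, 'fro'))
```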