class QuadraticMinimizer extends SerializableLogging
Proximal operators and ADMM based primal-dual QP solver.
Reference: http://www.stanford.edu/~boyd/papers/admm/quadprog/quadprog.html
It solves problems with the following structure:
1/2 x'Hx + f'x + g(x) s.t. Aeq x = beq
g(x) represents the following constraints, which cover the ALS-based matrix factorization use-cases:
1. x >= 0 2. lb <= x <= ub 3. L1(x) 4. L2(x) 5. generic regularization on x
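The positivity-constrained case (1) can be sketched as follows; a minimal sketch assuming Breeze's `breeze.optimize.proximal` package is on the classpath and that `ProjectPos` is the proximal operator encoding x >= 0:

```scala
import breeze.linalg.{DenseMatrix, DenseVector}
import breeze.optimize.proximal.{ProjectPos, QuadraticMinimizer}

object NonNegativeQp {
  def main(args: Array[String]): Unit = {
    // Minimize 1/2 x'Hx + f'x subject to x >= 0 (the ALS-style
    // nonnegative least squares use-case). H is the dense gram matrix.
    val H = DenseMatrix((2.0, 0.5), (0.5, 1.0))
    val f = DenseVector(-1.0, -1.0)

    // ProjectPos encodes g(x) = indicator of the nonnegative orthant.
    val qm = new QuadraticMinimizer(2, ProjectPos())
    val x = qm.minimize(H, f)
    println(x) // every entry should be >= 0
  }
}
```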
Linear Supertypes
- QuadraticMinimizer
- SerializableLogging
- Serializable
- AnyRef
- Any
Instance Constructors
- new QuadraticMinimizer(nGram: Int, proximal: Proximal = null, Aeq: DenseMatrix[Double] = null, beq: DenseVector[Double] = null, maxIters: Int = -1, abstol: Double = 1e-6, reltol: Double = 1e-4, alpha: Double = 1.0)
- nGram
rank of dense gram matrix
- proximal
proximal operator to be used
- Aeq
constraint matrix (lhs) for the equality constraints Aeq x = beq
- beq
rhs constants for the equality constraints
- abstol
ADMM absolute tolerance
- reltol
ADMM relative tolerance
- alpha
over-relaxation parameter, default 1.0; values of 1.5 - 1.8 can improve convergence
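Putting the constructor parameters together, a simplex-style setup (equality constraint sum(x) = 1 combined with a positivity proximal) might look like the following sketch; that equality constraints and a proximal operator compose this way is an assumption based on the signature:

```scala
import breeze.linalg.{DenseMatrix, DenseVector}
import breeze.optimize.proximal.{ProjectPos, QuadraticMinimizer}

val n = 3
// One equality row: sum(x) = 1, i.e. Aeq = [1 1 1], beq = [1].
val Aeq = DenseMatrix.ones[Double](1, n)
val beq = DenseVector(1.0)

// Combined with ProjectPos (x >= 0), this constrains x to the
// probability simplex.
val qm = new QuadraticMinimizer(n, ProjectPos(), Aeq, beq)
```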
Type Members
- type BDM = DenseMatrix[Double]
- type BDV = DenseVector[Double]
- case class State extends Product with Serializable
Value Members
- final def !=(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- final def ##: Int
- Definition Classes
- AnyRef → Any
- final def ==(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- val admmIters: Int
- final def asInstanceOf[T0]: T0
- Definition Classes
- Any
- def clone(): AnyRef
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.CloneNotSupportedException]) @native() @IntrinsicCandidate()
- final def eq(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- def equals(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef → Any
- val full: Int
- final def getClass(): Class[_ <: AnyRef]
- Definition Classes
- AnyRef → Any
- Annotations
- @native() @IntrinsicCandidate()
- def getProximal: Proximal
- def hashCode(): Int
- Definition Classes
- AnyRef → Any
- Annotations
- @native() @IntrinsicCandidate()
- def initialize: State
Public API to get an initial State for solver hot start, such that subsequent calls can reuse state memory.
- returns
the state for the optimizer
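The hot-start pattern described above can be sketched as follows (assuming Breeze on the classpath; `ProjectPos` is used as an example proximal operator):

```scala
import breeze.linalg.{DenseMatrix, DenseVector}
import breeze.optimize.proximal.{ProjectPos, QuadraticMinimizer}

val H = DenseMatrix((2.0, 0.5), (0.5, 1.0))
val qm = new QuadraticMinimizer(2, ProjectPos())

// Allocate the solver workspace once...
val state = qm.initialize

// ...and reuse it across solves of the same structure,
// avoiding per-call allocation.
val x1 = qm.minimize(H, DenseVector(-1.0, 0.0), state)
val x2 = qm.minimize(H, DenseVector(0.0, -1.0), state)
```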
- final def isInstanceOf[T0]: Boolean
- Definition Classes
- Any
- val linearEquality: Int
- def logger: LazyLogger
- Attributes
- protected
- Definition Classes
- SerializableLogging
- def minimize(q: DenseVector[Double]): DenseVector[Double]
- def minimize(H: DenseMatrix[Double], q: DenseVector[Double]): DenseVector[Double]
- def minimize(upper: Array[Double], q: DenseVector[Double], initialState: State): DenseVector[Double]
minimize API for cases where the upper triangular gram matrix is provided by the user as a primitive array. If an initialState is not provided, it is constructed by default through initialize.
- upper
upper triangular gram matrix of size rank * (rank + 1)/2
- q
linear term for quadratic optimization
- initialState
provide an optional initialState for memory optimization
- returns
converged solution
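For the packed variant, the upper triangle is laid out as a flat array of rank * (rank + 1)/2 entries. The column-major order shown here ((0,0), (0,1), (1,1)) matches how Spark ALS builds its normal equations, though the exact layout is an assumption worth verifying against the implementation:

```scala
import breeze.linalg.DenseVector
import breeze.optimize.proximal.{ProjectPos, QuadraticMinimizer}

// Packed upper triangle of H = [[2.0, 0.5], [0.5, 1.0]]:
// entries (0,0), (0,1), (1,1) -> 2 * (2 + 1) / 2 = 3 values.
val upper = Array(2.0, 0.5, 1.0)
val q = DenseVector(-1.0, -1.0)

val qm = new QuadraticMinimizer(2, ProjectPos())
val x = qm.minimize(upper, q, qm.initialize)
```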
- def minimize(H: DenseMatrix[Double], q: DenseVector[Double], initialState: State): DenseVector[Double]
minimize API for cases where the gram matrix is provided by the user. If an initialState is not provided, it is constructed by default through initialize.
- H
symmetric gram matrix of size rank x rank
- q
linear term for quadratic optimization
- initialState
provide an optional initialState for memory optimization
- returns
converged solution
- def minimize(q: DenseVector[Double], initialState: State): DenseVector[Double]
minimize API for cases where the gram matrix is updated through the updateGram API. If an initialState is not provided, it is constructed by default through initialize.
- q
linear term for quadratic optimization
- initialState
provide an optional initialState for memory optimization
- returns
converged solution
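The two-step workflow (seed the gram matrix once via updateGram, then solve for several linear terms) can be sketched as:

```scala
import breeze.linalg.{DenseMatrix, DenseVector}
import breeze.optimize.proximal.{ProjectPos, QuadraticMinimizer}

val H = DenseMatrix((2.0, 0.5), (0.5, 1.0))
val qm = new QuadraticMinimizer(2, ProjectPos())

// Seed the gram matrix once; only the linear term changes afterwards.
qm.updateGram(H)
val state = qm.initialize

val x1 = qm.minimize(DenseVector(-1.0, 0.0), state)
val x2 = qm.minimize(DenseVector(0.0, -1.0), state)
```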
- def minimizeAndReturnState(q: DenseVector[Double]): State
- def minimizeAndReturnState(H: DenseMatrix[Double], q: DenseVector[Double]): State
- def minimizeAndReturnState(upper: Array[Double], q: DenseVector[Double], initialState: State): State
minimizeAndReturnState API that takes the upper triangular entries of the gram matrix, specified as a primitive array for performance reasons, and the linear term for quadratic minimization.
- upper
upper triangular gram matrix specified as primitive array
- q
linear term
- initialState
provide an initialState using the initialize API for memory optimization
- returns
converged state from QuadraticMinimizer
- def minimizeAndReturnState(H: DenseMatrix[Double], q: DenseVector[Double], initialState: State): State
minimizeAndReturnState API that takes a symmetric full gram matrix and the linear term for quadratic minimization.
- H
gram matrix, symmetric of size rank x rank
- q
linear term
- initialState
provide an initialState using the initialize API for memory optimization
- returns
converged state from QuadraticMinimizer
- def minimizeAndReturnState(q: DenseVector[Double], initialState: State): State
minimizeAndReturnState API gives advanced control to users who would like to use QuadraticMinimizer in 2 steps: first update the gram matrix using the updateGram API, then solve by providing a user-defined initialState. rho is automatically calculated by QuadraticMinimizer from the problem structure.
- q
linear term for the quadratic optimization
- initialState
provide an initialState using the initialize API
- returns
converged state from QuadraticMinimizer
- def minimizeAndReturnState(q: DenseVector[Double], rho: Double, initialState: State, resetState: Boolean = true): State
minimizeAndReturnState API gives advanced control to users who would like to use QuadraticMinimizer in 2 steps: first update the gram matrix using the updateGram API, then solve by providing a user-defined initialState. It also exposes rho control to users who would like to experiment with the rho parameter of the ADMM algorithm. Use a user-defined rho only if you understand the proximal algorithm well.
- q
linear term for the quadratic optimization
- rho
rho parameter for ADMM algorithm
- initialState
provide an initialState using the initialize API
- resetState
use true if you want to hot start based on the provided state
- returns
converged state from ADMM algorithm
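For users experimenting with ADMM's rho, the advanced entry point might be exercised as in this sketch (the `x` field on the returned `State` is an assumption, since the case class fields are not documented here):

```scala
import breeze.linalg.{DenseMatrix, DenseVector}
import breeze.optimize.proximal.{ProjectPos, QuadraticMinimizer}

val H = DenseMatrix((2.0, 0.5), (0.5, 1.0))
val qm = new QuadraticMinimizer(2, ProjectPos())
qm.updateGram(H)

// Override rho only if you understand its effect on ADMM convergence.
val result = qm.minimizeAndReturnState(
  DenseVector(-1.0, -1.0), rho = 1.0, qm.initialize)
val x = result.x // converged iterate (field name assumed)
```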
- val n: Int
- final def ne(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- final def notify(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native() @IntrinsicCandidate()
- final def notifyAll(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native() @IntrinsicCandidate()
- final def synchronized[T0](arg0: => T0): T0
- Definition Classes
- AnyRef
- def toString(): String
- Definition Classes
- AnyRef → Any
- val transAeq: DenseMatrix[Double]
- def updateGram(upper: Array[Double]): Unit
updateGram API allows the user to seed QuadraticMinimizer with the upper triangular gram matrix (a 50% memory optimization), specified as a primitive array. It is exposed for advanced users like Spark ALS, where ALS constructs the normal equations as primitive arrays.
- upper
upper triangular gram matrix specified as a primitive array
- def updateGram(H: DenseMatrix[Double]): Unit
updateGram allows the user to seed QuadraticMinimizer with a symmetric gram matrix, most useful for cases where the gram matrix does not change but the linear term changes over multiple solves. It should be called iteratively with the normal equations constructed by the user.
- H
full gram matrix of size rank x rank
- val upperSize: Int
- final def wait(arg0: Long, arg1: Int): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
- final def wait(arg0: Long): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException]) @native()
- final def wait(): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])