Classification Problem
$y(x)=w^Tx+b$. This line is the decision boundary. We define the margin as the perpendicular distance between this line and the closest data point, and we want to find the line that maximizes the margin.
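To make the margin concrete, recall the standard point-to-hyperplane distance (a step not spelled out in the notes): for a correctly classified point $x_i$ with label $y_i\in\{-1,+1\}$,

$$\text{dist}(x_i)=\dfrac{|w^Tx_i+b|}{\|w\|}=\dfrac{y_i(w^Tx_i+b)}{\|w\|},$$

so requiring $y_i(w^Tx_i+b)\geq c$ for every $i$ guarantees a margin of at least $c/\|w\|$ on each side, i.e. $2c/\|w\|$ in total, which is exactly the objective below.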
$$\max_{w,b} \dfrac{2c}{\|w\|} \quad \text{s.t.} \quad y_i(w^Tx_i+b)\geq c, \;\; \forall i$$

Since $c$ only rescales $w$ and $b$, this problem is equivalent to the following one.
$$\max_{w,b} \dfrac{1}{\|w\|} \quad \text{s.t.} \quad y_i(w^Tx_i+b) \geq 1, \;\; \forall i$$

It is equivalent to a constrained convex quadratic problem.
$$\min_{w,b} \|w\|^2 \quad \text{s.t.} \quad y_i(w^Tx_i+b) \geq 1, \;\; \forall i$$

$$L(w,b,\alpha)=\|w\|^2-\sum_i\alpha_i(y_i(w^Tx_i+b)-1)$$

This primal problem can be converted into a dual problem.
$$\min_{w,b} \dfrac{1}{2}w^Tw \quad \text{s.t.} \quad 1-y_i(w^Tx_i+b) \leq 0, \;\; \forall i$$

$$L(w,b,\alpha)=\dfrac{1}{2}w^Tw+\sum_i\alpha_i(1-y_i(w^Tx_i+b))$$

Taking the derivatives and setting them to zero,
$$\dfrac{\partial L}{\partial w}=w-\sum_i\alpha_iy_ix_i=0, \qquad \dfrac{\partial L}{\partial b}=\sum_i \alpha_iy_i=0$$

Substituting these equations back into the Lagrangian, the problem becomes simpler:
$$L(w,b,\alpha)=\sum_i \alpha_i-\dfrac{1}{2}\sum_{i,j}\alpha_i\alpha_jy_iy_j(x_i^Tx_j) \quad \text{s.t.} \quad \alpha_i \geq 0, \;\; \sum_i \alpha_i y_i =0$$

This dual, maximized over $\alpha$, is a constrained quadratic program.
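A minimal sketch of solving this dual QP numerically with the cvxopt QP solver; the toy data, variable names, and the tolerance for picking support vectors are illustrative assumptions, not part of the notes.

# Sketch: hard-margin dual as a QP, min_a (1/2) a^T P a - 1^T a
#         s.t. -a <= 0 and y^T a = 0, with P_ij = y_i y_j x_i^T x_j.
# Assumes linearly separable toy data (otherwise the hard-margin QP is infeasible).
import numpy as np
from cvxopt import matrix, solvers

X = np.array([[1.0, 1.0], [2.0, 2.5], [0.5, 2.0], [4.0, 4.0], [5.0, 4.5], [4.5, 6.0]])
y = np.array([-1.0, -1.0, -1.0, 1.0, 1.0, 1.0])
n = len(y)

P = matrix(np.outer(y, y) * (X @ X.T))   # quadratic term: y_i y_j x_i^T x_j
q = matrix(-np.ones(n))                  # linear term: -sum_i alpha_i
G = matrix(-np.eye(n))                   # alpha_i >= 0  <=>  -alpha_i <= 0
h = matrix(np.zeros(n))
A = matrix(y.reshape(1, -1))             # equality constraint: sum_i alpha_i y_i = 0
b = matrix(0.0)

solvers.options['show_progress'] = False
alpha = np.ravel(np.array(solvers.qp(P, q, G, h, A, b)['x']))

w = (alpha * y) @ X                      # w = sum_i alpha_i y_i x_i
sv = alpha > 1e-6                        # support vectors have alpha_i > 0
b0 = np.mean(y[sv] - X[sv] @ w)          # from y_i (w^T x_i + b) = 1 on the SVs
print(w, b0)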
KKT condition (1)
The KKT conditions include the complementary slackness condition $\alpha_i g_i(w)=0$:

$$\alpha_i(1-y_i(w^Tx_i+b))=0$$

So either $\alpha_i$ is zero or the other factor is zero. The training points with nonzero $\alpha_i$ must lie exactly on the margin boundary $y_i(w^Tx_i+b)=1$; these points are called support vectors.
KKT condition (2)
$\dfrac{\partial L}{\partial w}=0$, so $w=\sum_i \alpha_iy_ix_i$.
For a new test point $z$, we need to compute

$$w^Tz+b=\sum_{i \in \text{support vectors}} \alpha_iy_i(x_i^Tz)+b$$

$b$ can be derived from $1-y_i(w^Tx_i+b)=0$ for any $i$ with $\alpha_i>0$.
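A sketch of this prediction rule, continuing the hypothetical toy QP example above (it reuses the `alpha`, `X`, `y`, and `b0` computed there); `z_new` and the tolerance are illustrative.

# Classify a new point z using only the support vectors, without forming w explicitly:
# w^T z + b = sum_{alpha_i > 0} alpha_i y_i (x_i^T z) + b
def decision(z, alpha, X, y, b0, tol=1e-6):
    sv = alpha > tol                             # only support vectors contribute
    return np.sum(alpha[sv] * y[sv] * (X[sv] @ z)) + b0

z_new = np.array([3.0, 3.0])                     # an arbitrary test point
print(np.sign(decision(z_new, alpha, X, y, b0)))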
Problem 1
The data may not be linearly separable. In this case we introduce slack variables $\xi_i$ and use the following formulation.
$$(w^Tx+b)y \geq 1-\xi$$

$$\min_{w,b,\xi} \|w\|^2+C\sum_i \xi_i \quad \text{s.t.} \quad y_i(w^Tx_i+b)\geq1-\xi_i, \;\; \xi_i\geq0, \;\; \forall i$$

$$\min_{w,b,\xi} \dfrac{1}{2}w^Tw+C\sum_i\xi_i \quad \text{s.t.} \quad 1-y_i(w^Tx_i+b)-\xi_i \leq0, \;\; \xi_i \geq0, \;\; \forall i$$

$$L(w,b,\xi,\alpha,\beta)=\dfrac{1}{2}w^Tw+\sum_i \left[ C\xi_i+\alpha_i(1-y_i(w^Tx_i+b)-\xi_i)-\beta_i\xi_i \right]$$

Eliminating $w$, $b$, and $\xi$ as before gives

$$L(w,b,\xi,\alpha,\beta)=\sum_i\alpha_i-\dfrac{1}{2}\sum_{i,j}\alpha_i\alpha_jy_iy_j(x_i^Tx_j)$$

$$\max_\alpha \sum_i \alpha_i - \dfrac{1}{2} \sum_{i,j} \alpha_i \alpha_j y_i y_j (x_i^Tx_j) \quad \text{s.t.} \quad C-\alpha_i-\beta_i=0, \;\; \alpha_i \geq 0, \;\; \beta_i \geq 0, \;\; i=1,\dots,m, \;\; \sum_i \alpha_iy_i=0$$

This is also a constrained quadratic program.
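The only change relative to the hard-margin QP sketch above is that $C-\alpha_i-\beta_i=0$ with $\beta_i\geq 0$ turns into the box constraint $0\leq\alpha_i\leq C$. A minimal continuation of that sketch (reusing its hypothetical `P`, `q`, `A`, `b`, `n` and the cvxopt imports; the value of `C` is an arbitrary assumption):

# Soft-margin dual: same QP as above, but with 0 <= alpha_i <= C
C_soft = 100.0
G_box = matrix(np.vstack([-np.eye(n), np.eye(n)]))            # -alpha <= 0 and alpha <= C
h_box = matrix(np.hstack([np.zeros(n), C_soft * np.ones(n)]))
alpha_soft = np.ravel(np.array(solvers.qp(P, q, G_box, h_box, A, b)['x']))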
Soft margin SVM with primal form
$$\min_{w,b,\xi} \|w\|^2+C \sum_i \xi_i \quad \text{s.t.} \quad y_i(w^Tx_i+b) \geq 1-\xi_i, \;\; \xi_i \geq 0, \;\; \forall i$$

$$\min_{w,b} \|w\|^2+C\sum_i \max\{0,1-y_i(w^Tx_i+b)\}$$
This is an unconstrained, strongly convex problem, but it is not necessarily differentiable. Because the hinge loss is not differentiable over the whole domain, we use the subgradient method instead of the ordinary gradient method.

This subgradient method is used to find the optimal $w$ (and $b$); $b$ can also be recovered from $1-y_i(w^Tx_i+b)=0$ for a point on the margin.
$$J(w,b) = \dfrac{1}{2}\|w\|^2+C\sum_i \max\{0,1-y_i(w^Tx_i+b)\}$$

(The $\tfrac{1}{2}$ scaling is used here so that the subgradient of the first term is simply $w$, matching the code below; it only rescales $C$.)

$$\nabla^{sub}_w J(w,b)=w-C\sum_i \begin{cases} 0, & 1-y_i(w^Tx_i+b) \leq 0 \\ y_ix_i, & 1-y_i(w^Tx_i+b)>0 \end{cases}$$

$$\nabla^{sub}_b J(w,b)=-C\sum_i\begin{cases} 0, & 1-y_i(w^Tx_i+b) \leq 0 \\ y_i, & 1-y_i(w^Tx_i+b)>0 \end{cases}$$

$$w^{new}=w^{old}-\eta \nabla^{sub}_{w}J(w,b), \qquad b^{new}=b^{old}-\eta \nabla^{sub}_{b}J(w,b)$$
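A hypothetical helper (not in the original code) for monitoring this objective during the subgradient iterations; it uses the DataFrame `df` constructed in the Code From Scratch section below and the same illustrative $C=100$.

import numpy as np

def hinge_objective(w, b, C=100):
    # J(w, b) = 0.5 * ||w||^2 + C * sum_i max(0, 1 - y_i (w^T x_i + b))
    margins = df['y'].values * (df[['x1', 'x2']].values @ w + b)
    return 0.5 * (w @ w) + C * np.sum(np.maximum(0.0, 1.0 - margins))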
Subgradient method
$g$ is a subgradient of $f:X\rightarrow \mathbb{R}$ at $x \in X$ if, for any $y \in X$: $f(y) \geq f(x)+\langle g,y-x\rangle$.
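As a concrete check (not from the original notes): for the hinge function $f(r)=\max\{0,\,1-r\}$ at the kink $r=1$ we have $f(1)=0$, and any $g\in[-1,0]$ satisfies

$$f(r') \geq f(1)+g\,(r'-1) \quad \text{for all } r',$$

so every such $g$ is a valid subgradient there. The case split above simply picks the subgradient $0$ whenever $1-y_i(w^Tx_i+b)\leq 0$.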
[More about subgradient]
Regression Problem
$$f(x)=x^T\beta+\beta_0$$

$$H(\beta,\beta_0)=\sum^N_{i=1}V(y_i-f(x_i))+\dfrac{\lambda}{2}\|\beta\|^2$$

In the regression problem, we want to allow errors up to size $\epsilon$: relative to the regression line, any point within the band from $-\epsilon$ to $+\epsilon$ is regarded as correct.
$$V_\epsilon(r)=\begin{cases} 0, & |r|<\epsilon \\ |r|-\epsilon, & \text{otherwise} \end{cases} \qquad V_H(r)=\begin{cases} r^2/2, & |r| \leq c \\ c|r|-c^2/2, & |r|>c \end{cases}$$

($V_\epsilon$ is the $\epsilon$-insensitive loss and $V_H$ is the Huber loss.)

"SVMs solve binary classification problems by formulating them as convex optimization problems (Vapnik 1998). The optimization problem entails finding the maximum margin separating hyperplane, while correctly classifying as many training points as possible. SVMs represent this optimal hyperplane with support vectors." - Reference
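A small sketch of these two losses in numpy, just to make the case splits concrete; the default `eps` and `c` values are arbitrary illustrative choices.

import numpy as np

def V_eps(r, eps=0.5):
    # epsilon-insensitive loss: zero inside the band, linear outside
    return np.where(np.abs(r) < eps, 0.0, np.abs(r) - eps)

def V_huber(r, c=0.5):
    # Huber loss: quadratic near zero, linear in the tails
    return np.where(np.abs(r) <= c, r**2 / 2, c * np.abs(r) - c**2 / 2)

r = np.linspace(-2, 2, 5)
print(V_eps(r), V_huber(r))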
The problem above can be solved through another optimization problem. From the support-vector point of view, it takes the following form (writing $w$ in place of $\beta$).
$$\min_{w}\dfrac{1}{2}\|w\|^2+C\sum^N_{i=1}(\xi_i^*+\xi_i)$$

$$\text{s.t.} \quad y_i-w^Tx_i\leq\varepsilon+\xi_i^*, \quad w^Tx_i-y_i\leq\varepsilon+\xi_i, \quad \xi_i,\xi_i^*\geq0$$

$$L(w,\xi,\xi^*,\lambda,\lambda^*,\alpha,\alpha^*)=\dfrac{1}{2}\|w\|^2+C\sum_i(\xi_i+\xi_i^*)+\sum_i\alpha_i^*(y_i-w^Tx_i-\varepsilon-\xi_i^*)+\sum_i \alpha_i(-y_i+w^Tx_i-\varepsilon-\xi_i)-\sum_i(\lambda_i\xi_i+\lambda_i^*\xi_i^*)$$

Setting the derivatives to zero,
$$\dfrac{\partial L}{\partial w}=w-\sum_i(\alpha_i^*-\alpha_i)x_i=0$$

$$\dfrac{\partial L}{\partial \xi_i^*}=C-\lambda_i^*-\alpha_i^*=0, \qquad \dfrac{\partial L}{\partial \xi_i}=C-\lambda_i-\alpha_i=0$$

together with primal feasibility

$$\xi_i^* \geq 0, \quad \xi_i \geq 0, \quad y_i-w^Tx_i-\varepsilon-\xi_i^*\leq 0, \quad -y_i+w^Tx_i-\varepsilon-\xi_i\leq0$$

Using the KKT conditions,
$$\alpha_i(-y_i+w^Tx_i-\varepsilon-\xi_i)=0, \quad \alpha_i^*(y_i-w^Tx_i-\varepsilon-\xi_i^*)=0$$

$$\lambda_i\xi_i=0, \quad \lambda_i^*\xi_i^*=0, \quad \alpha_i,\alpha_i^*\geq0$$

The solution is summarized as follows.
$$\hat{\beta}=\sum^N_{i=1}(\hat{\alpha}_i^*-\hat{\alpha}_i)x_i, \qquad \hat{f}(x)=\sum^N_{i=1}(\hat{\alpha}_i^*-\hat{\alpha}_i)\langle x,x_i\rangle+\beta_0$$

$\hat{\alpha}_i^*,\hat{\alpha}_i$ are the solutions of the following quadratic program, obtained by substituting the stationarity conditions back into the Lagrangian.
$$\min_{\alpha_i,\alpha_i^*} \; \varepsilon \sum^N_{i=1}(\alpha^*_i+\alpha_i)-\sum^N_{i=1}y_i(\alpha_i^*-\alpha_i)+\dfrac{1}{2}\sum^N_{i,i'=1}(\alpha_i^*-\alpha_i)(\alpha^*_{i'}-\alpha_{i'})\langle x_i,x_{i'} \rangle$$

$$\text{s.t.} \quad 0 \leq \alpha_i,\alpha_i^*\leq1/\lambda, \quad \sum^N_{i=1}(\alpha_i^*-\alpha_i)=0, \quad \alpha_i\alpha_i^*=0$$

By replacing $\langle x_i,x_{i'}\rangle$ with another inner-product (kernel) function, we can generalize this method to richer feature spaces. $\varepsilon$ determines how large an error we tolerate.
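A minimal illustration of that kernel substitution (not part of the original notes) using scikit-learn's SVR, which solves this $\varepsilon$-insensitive dual internally; the data and hyperparameter values are arbitrary assumptions.

import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X_reg = np.sort(rng.uniform(0, 5, size=(80, 1)), axis=0)
y_reg = np.sin(X_reg).ravel() + 0.1 * rng.standard_normal(80)

# <x_i, x_i'> replaced by an RBF kernel; epsilon is the width of the error band
svr = SVR(kernel='rbf', C=10.0, epsilon=0.1)
svr.fit(X_reg, y_reg)
print('number of support vectors:', len(svr.support_))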
Code From Scratch
# Import library and data
import pandas as pd
import numpy as np
import matplotlib as mpl; mpl.rcParams['axes.unicode_minus'] = False
import matplotlib.pyplot as plt; plt.rcParams['font.family'] = 'AppleGothic'
import time
from sklearn import datasets

X, y = datasets.make_blobs(n_samples=100, centers=2, n_features=2, center_box=(0, 10))
y = np.where(y == 0, -1, 1)

plt.plot(X[:, 0][y == -1], X[:, 1][y == -1], 'g^')   # class -1 (labels are now -1/+1)
plt.plot(X[:, 0][y == 1], X[:, 1][y == 1], 'bs')     # class +1
plt.show()

df = pd.DataFrame(np.column_stack([X, y]), columns=['x1', 'x2', 'y'])
# Batch subgradient of J(w, b) = 0.5*||w||^2 + C*sum(hinge) at (w_old, b_old)
def evaluate(w_old, b_old, C):
    # indices of points that violate the margin: y_i (w^T x_i + b) < 1
    idx = np.where(df['y'].values * (df[['x1', 'x2']].values @ w_old + b_old) < 1)[0]
    df_idx = df.iloc[idx]
    yixi = np.expand_dims(df_idx['y'].values, 1) * df_idx[['x1', 'x2']].values
    yi = df_idx['y'].values
    w_subgrad = w_old - C * sum(yixi)
    b_subgrad = -C * sum(yi)
    return w_subgrad, b_subgrad
def batch_subgrad(learning_rate):
    w_old = np.array([0, 0]); b_old = 0   # initialization
    w_subgrad, b_subgrad = evaluate(w_old, b_old, C=100)
    diff = 1; i = 0
    while diff > 10e-6:
        w_new = w_old - learning_rate * w_subgrad
        b_new = b_old - learning_rate * b_subgrad
        w_subgrad, b_subgrad = evaluate(w_new, b_new, C=100)
        diff = sum(np.abs(w_new - w_old))
        w_old, b_old = w_new, b_new
        i += 1
        if i >= 20000: break
    print(f'Total iteration: {i}')
    return w_new, b_new
def stoch_subgrad(w_old, b_old, C, learning_rate):
    epoch = 0; diff = 1
    while diff > 10e-6:
        w_prev = np.array(w_old, dtype=float)   # weights at the start of the epoch
        for x1, x2, y_i in df.values:
            x = np.array([x1, x2])
            if y_i * (x @ w_old + b_old) < 1:   # margin violated: hinge term is active
                w_subgrad = w_old - C * (y_i * x)
                b_subgrad = -C * y_i
            else:
                w_subgrad = w_old
                b_subgrad = 0
            w_new = w_old - learning_rate * w_subgrad
            b_new = b_old - learning_rate * b_subgrad
            w_old, b_old = w_new, b_new
        diff = sum(np.abs(w_old - w_prev))      # update diff so the loop can terminate
        epoch += 1
        if epoch >= 200: break
    print(f'Epochs: {epoch}')
    return w_new, b_new
batch_start = time.time()
w1, b1 = batch_subgrad(0.001)
slope1, intercept1 = -w1[0] / w1[1], -b1 / w1[1]
batch_end = time.time()

stoch_start = time.time()
w2, b2 = stoch_subgrad(np.array([0, 0]), 0, 100, 0.001)
slope2, intercept2 = -w2[0] / w2[1], -b2 / w2[1]
stoch_end = time.time()

print('Batch subgradient time: ', batch_end - batch_start)
print('Stochastic subgradient time: ', stoch_end - stoch_start)
fig, ax = plt.subplots(figsize=(8, 6))
ax.scatter('x1', 'x2', data=df[df['y'] == 1], color='orange')
ax.scatter('x1', 'x2', data=df[df['y'] == -1], color='gray')
ax.set_xlabel('x1')
ax.set_ylabel('x2')
ax.set_title('Soft Margin SVM')
ax.axline((0, intercept1), slope=slope1, color='black', linewidth=0.8)
ax.axline((0, intercept2), slope=slope2, color='black', linestyle='dashed', linewidth=0.8)
plt.show()