May, 2020
This post presents a cluster analysis that uses a Gaussian mixture algorithm to estimate two components of the travel-speed distribution. These two components can be interpreted as a slow-moving component (the congested regime) and a high-speed component (the free-flow regime).
# loading the library needed
using Turing, DataFrames, CSV
using Statistics   # mean and std, used for standardizing the data below
using MCMCChains, Plots, StatsPlots
gr(leg = false, bg = :lightyellow)
The traffic speed data used in this analysis were collected from a freeway by microwave detectors at 5-minute resolution. Let's load the data and review the dataframe.
# recent versions of CSV.jl require an explicit sink type
d_0 = CSV.read("../df.csv", DataFrame)
first(d_0, 6)
Here we plot histograms of travel speed and occupancy to see whether a multimodal shape can be observed visually.
p1 = histogram(d_0[!, :speed], title = "Speed distribution", xlabel = "Speed (mph)")
p2 = histogram(d_0[!, :occupancy], title = "Occupancy distribution", xlabel = "Occupancy (%)")
p = plot(p1, p2, ylabel = "Data density", size = (950,500))
Looking at the figure above, we can see that travel speed has multimodal characteristics. Thus, a Gaussian mixture is more appropriate for approximating this distribution closely than a unimodal distribution model. Of course, we could also compare the unimodal and multimodal fits statistically to verify this claim.
The Gaussian mixture model combines two or more Gaussian densities, each weighted by its share of the overall distribution. In this post, a two-component Gaussian mixture is considered. In practical applications, it is a good idea to fit several numbers of components and select the model that best describes the data at hand; several metrics can be used for this, such as information criteria or Bayes factors.
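As a minimal sketch of such a comparison (not part of the original analysis; the data and mixture parameters below are synthetic, illustrative assumptions), we can contrast a unimodal maximum-likelihood fit with a two-component mixture on bimodal "speed-like" data using BIC:

```julia
using Distributions, Random

Random.seed!(1)
# synthetic data: 30% congested around 25 mph, 70% free-flow around 62 mph
data = vcat(rand(Normal(25, 5), 300), rand(Normal(62, 4), 700))

# unimodal fit (maximum likelihood): 2 free parameters
uni = fit_mle(Normal, data)
ll_uni = loglikelihood(uni, data)

# two-component mixture with the (assumed known) generating parameters:
# 5 free parameters (2 means, 2 stds, 1 weight)
mix = MixtureModel([Normal(25, 5), Normal(62, 4)], [0.3, 0.7])
ll_mix = sum(logpdf.(mix, data))

# BIC = k * log(n) - 2 * loglik; lower is better
n = length(data)
bic(k, ll) = k * log(n) - 2ll
println((bic_unimodal = bic(2, ll_uni), bic_mixture = bic(5, ll_mix)))
```

On clearly bimodal data like this, the mixture's BIC should be substantially lower despite its extra parameters.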
\begin{equation} y_{i} \sim \sum\limits_{k=1}^{2} \pi_{k}\,\mathcal{N}(y_{i}\mid\mu_{k},\sigma_{k}^{2}) \label{eq1} \tag{1} \\ \pi_{1} \sim \mathrm{Beta}(1, 1) \\ \pi_{2} = 1 - \pi_{1} \\ \mu_{k} \sim \mathcal{N}(0, 1) \\ \sigma_{k} \sim \mathrm{HalfNormal}(1) \end{equation}

Equation 1 can be written as follows in Turing for parameter estimation:
@model MarginalizedGMM(x, K) = begin
    N = length(x)
    # vector of component standard deviations σ[1], …, σ[K] (half-normal prior)
    σ ~ filldist(truncated(Normal(0, 1), 0, Inf), K)
    # vector of component means μ[1], …, μ[K]
    μ ~ filldist(Normal(0, 1), K)
    # mixture weights: Beta(1, 1) prior on π1, with π2 = 1 - π1
    π1 ~ Beta(1, 1)
    π2 = 1 - π1
    # likelihood: each observation is drawn from the marginalized mixture
    for i in 1:N
        x[i] ~ MixtureModel(Normal, [(μ[k], σ[k]) for k in 1:K], [π1, π2])
    end
end;
# standardize speed and occupancy to zero mean and unit standard deviation
dat1 = d_0[!, :speed]
dat1 .-= mean(dat1)
dat1 ./= std(dat1);

dat0 = d_0[!, :occupancy]
dat0 .-= mean(dat0)
dat0 ./= std(dat0);
gmm = MarginalizedGMM(dat1, 2)
chn = sample(gmm, NUTS(100, 0.95), 200);
plot(chn)
describe(chn)
p = get_params(chn);
x_plot = -4:0.1:1.5
post_pdfs_1 = [pdf(Normal(p.μ[1][i], p.σ[1][i]), x) * p.π1[i] for x = x_plot, i = 1:50]
post_pdfs_2 = [pdf(Normal(p.μ[2][i], p.σ[2][i]), x) * (1-p.π1[i]) for x = x_plot, i = 1:50];
histogram(dat1, xlabel = "Speed (std)", ylabel = "Data density", normed=true, legend = false)
plot!(x_plot, post_pdfs_1, lw = 3, alpha = 0.5, color = :gray, legend = false)
plot!(x_plot, post_pdfs_2, lw = 3, alpha = 0.5, color = :gray, legend = false)
plot!(x_plot, post_pdfs_1 + post_pdfs_2, lw = 3, alpha = 0.5, color = :green, legend = false)
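Because the model was fitted on standardized speeds, the estimated component means and standard deviations need to be mapped back to mph before interpretation. A minimal sketch, using illustrative posterior summaries (the numbers below are assumptions, not the actual estimates from this analysis):

```julia
# illustrative values (assumptions): posterior component means/stds on the
# standardized scale, plus hypothetical sample statistics of the raw speeds
μ_std = [-1.8, 0.4]                 # congested and free-flow components
σ_std = [0.6, 0.5]
speed_mean, speed_sd = 55.0, 12.0   # hypothetical sample mean/std in mph

# undo the standardization: y_std = (y - mean) / sd  ⇒  y = y_std * sd + mean
μ_mph = μ_std .* speed_sd .+ speed_mean
σ_mph = σ_std .* speed_sd

println("component means (mph): ", μ_mph)
println("component stds  (mph): ", σ_mph)
```

With these illustrative numbers, the congested regime would sit around 33 mph and the free-flow regime around 60 mph.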
You will notice that I chose a Beta distribution as the prior for the mixture weight. I found the model ran faster with this prior than with a Dirichlet distribution. I plan to expand this model by incorporating the occupancy variable.
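For reference, the Dirichlet-weight variant mentioned above would look roughly like this (a sketch, not the model used in this post); with K = 2 and unit concentration it is equivalent to the Beta(1, 1) parameterization:

```julia
using Turing

# alternative parameterization (sketch): a Dirichlet prior over all K weights,
# which generalizes directly to mixtures with more than two components
@model DirichletGMM(x, K) = begin
    σ ~ filldist(truncated(Normal(0, 1), 0, Inf), K)
    μ ~ filldist(Normal(0, 1), K)
    w ~ Dirichlet(K, 1.0)           # flat prior over the K-simplex
    for i in 1:length(x)
        x[i] ~ MixtureModel(Normal, [(μ[k], σ[k]) for k in 1:K], w)
    end
end;
```

The `Dirichlet(K, 1.0)` prior puts equal mass on all weight vectors summing to one, so for K = 2 its first coordinate has the same Beta(1, 1) marginal used above.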