# Liu Sida's Homepage

Machine Learning & Human Learning

# Nuance in Monty Hall Paradox

Marilyn vos Savant made a mistake. She knew the game show Let’s Make a Deal so well that she assumed its rules also applied to the question she was asked.

On the websi...
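For the standard version of the puzzle (the host knows where the car is and always opens a goat door), a quick simulation makes the 2/3-vs-1/3 split concrete. This is my own sketch, not code from the original post:

```python
import random

def monty_hall_trial(switch):
    """One round under the standard rules: the host always opens a goat door."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that is neither the contestant's pick nor the car.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Switch to the one remaining unopened door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

random.seed(0)
trials = 100_000
switch_wins = sum(monty_hall_trial(switch=True) for _ in range(trials)) / trials
stay_wins = sum(monty_hall_trial(switch=False) for _ in range(trials)) / trials
print(switch_wins, stay_wins)  # switching wins about 2/3 of the time, staying about 1/3
```

Under different rules (for example, a host who opens a random door), the answer changes, which is exactly the nuance at stake here.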

# What is Mathematics According to Keith Devlin

In his book “Introduction to Mathematical Thinking”, Keith Devlin writes,

“Virtually nothing (with just two further advances, bo...

# Dynamic NN Allowing Additional Evidence?

We have the traditional Neural Network (NN), with a static structure like this:

signal -> input -> hidden layer -> prediction =?= truth

-> means forward propagation; =?= means ...
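The static pipeline above can be sketched in numpy roughly like this (the 3-4-2 layer sizes and the sigmoid activation are arbitrary choices for illustration):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
# A fixed 3-4-2 architecture: signal -> input -> hidden layer -> prediction.
W1, b1 = rng.standard_normal((4, 3)), np.zeros((4, 1))
W2, b2 = rng.standard_normal((2, 4)), np.zeros((2, 1))

signal = rng.standard_normal((3, 1))     # input
hidden = sigmoid(W1 @ signal + b1)       # hidden layer
prediction = sigmoid(W2 @ hidden + b2)   # prediction, to be compared with truth
print(prediction.shape)  # (2, 1)
```

The structure is static in the sense that the weight matrices fix the layer sizes and connectivity before any data flows through.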

# Why does the person with highest IQ not become the most successful one?

I heard about the “Study of Mathematically Precocious Youth After 35 Years” years ago, but after studying machine learning, especially the generalization problem, I think I have glimpsed some possib...

# Use Tensorflow to Compute Gradient

In most Tensorflow tutorials, we use minimize(loss) to automatically update the parameters of the model.

In fact, minimize() combines two steps: computing gradients, and applying ...
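The same two-step pattern can be sketched in plain numpy, fitting y = 3x with a single weight (the toy data, loss, and learning rate are my own choices for illustration):

```python
import numpy as np

# Toy data: y = 3x. Fit w with squared loss, doing by hand the two
# steps that minimize() bundles: (1) compute gradients, (2) apply them.
x = np.array([1.0, 2.0, 3.0])
y = 3.0 * x
w = 0.0
lr = 0.05

for _ in range(200):
    grad = np.mean(2 * (w * x - y) * x)   # step 1: gradient of the mean squared loss
    w -= lr * grad                        # step 2: apply the gradient (descent step)

print(round(w, 3))  # close to 3.0
```

Splitting the two steps apart is what lets you inspect, clip, or otherwise modify gradients before applying them.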

# Scree of PCA (Principal Component Analysis)

I learned the concept of PCA today, and found this method of dimensionality reduction quite concise.

If we apply PCA to a 40-d dataset to reduce it to 2-d, it simply chooses the 2 most ...
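That selection step can be sketched with numpy’s eigendecomposition of the covariance matrix (the dataset here is synthetic random data, just to show the shapes):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 40))       # a synthetic 40-d dataset

Xc = X - X.mean(axis=0)                  # center each feature
cov = np.cov(Xc, rowvar=False)           # 40x40 covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)   # eigh sorts eigenvalues ascending

order = np.argsort(eigvals)[::-1]        # largest variance first
top2 = eigvecs[:, order[:2]]             # the 2 most significant components
X_2d = Xc @ top2                         # project 40-d -> 2-d
print(X_2d.shape)  # (100, 2)
```

A scree plot is simply the sorted eigenvalues drawn in order, which shows how much variance each successive component captures.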

# Learning Rate is Too Large

What if I see a training-accuracy scalar graph like this:

The mini-batch training accuracy curve drifts downward slightly over time after reaching a relatively high point. That m...
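The effect of an oversized learning rate shows up even on the simplest quadratic loss. This toy sketch (my own, with arbitrarily chosen step sizes) contrasts a rate that converges with one that overshoots and diverges:

```python
def descend(lr, steps=50, w0=1.0):
    """Gradient descent on loss(w) = w**2; the gradient is 2*w."""
    w = w0
    for _ in range(steps):
        w -= lr * 2 * w
    return w

print(abs(descend(lr=0.1)))   # small lr: |w| shrinks toward the minimum at 0
print(abs(descend(lr=1.1)))   # too-large lr: each step overshoots and |w| grows
```

With lr=0.1 each step multiplies w by 0.8; with lr=1.1 it multiplies w by -1.2, so the iterate oscillates with growing magnitude instead of settling.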

# Manipulating Tensorboard

Tensorboard is a very useful tool for visualizing Tensorflow logs. It is now an independent project on GitHub; here’s the link.

In the past, if we were doing small projects, we usuall...

# Implement a Deep Neural Network using Python and Numpy

I have just finished Andrew Ng’s new Coursera Deep Learning courses (1-3). They are part of his new project, DeepLearning.ai.

In those courses, there is a series of interviews with Heroes...

# The Intuitive Meaning of Cross Entropy

The cross_entropy formula is as follows:
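For reference, the standard definition (assuming p is the true distribution and q the predicted one):

```latex
H(p, q) = -\sum_{x} p(x) \log q(x)
```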

# Studying Tensorflow’s LSTM RNN Example

RNN is a wonderful technique; it may already have revealed to us the meaning of being “alive”. I have tried to learn RNN several times, including in my earlier note, so let’s jump straight into reading the code.

# Studying Tensorflow’s Embeddings Example

Udacity offers a deep learning course based on Tensorflow, taught by Google engineers. Today I studied Embeddings, which was a bit hard to understand, so I am writing this note to look back on later.

Udacity course video: this course is already rated Advanced on Udacity. I suspect anything beyond it will have even fewer video learning materials. :~(

# Recurrent Neural Network(RNN) Implementation

I had heard about RNNs for a long time and had learned the concept several times, but until yesterday I couldn’t implement any useful code to solve my own problems.

So I checked some tutorials. The...
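A vanilla RNN step is small enough to sketch directly in numpy. This is my own minimal illustration (the sizes are arbitrary), not the tutorial’s code:

```python
import numpy as np

rng = np.random.default_rng(0)
input_size, hidden_size = 4, 8

# Parameters of a single vanilla RNN cell, shared across all time steps.
Wx = rng.standard_normal((hidden_size, input_size)) * 0.1
Wh = rng.standard_normal((hidden_size, hidden_size)) * 0.1
b = np.zeros(hidden_size)

def rnn_step(x, h):
    """One time step: the new hidden state mixes the input with the previous state."""
    return np.tanh(Wx @ x + Wh @ h + b)

# Unroll over a sequence of 5 inputs, carrying the hidden state forward.
h = np.zeros(hidden_size)
for x in rng.standard_normal((5, input_size)):
    h = rnn_step(x, h)
print(h.shape)  # (8,)
```

The recurrence, reusing the same weights at every step while the hidden state carries history forward, is the whole trick; everything else (LSTM gates, stacking) is refinement.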

# From Tensorflow to Keras

I know three high-level APIs for deep learning: Tensorflow.contrib.learn (SKFlow), TFLearn, and Keras. All of them are great tools, but I like Keras best, perhaps because of its easy coding style.

...

Yesterday, I saw tf.contrib.layers.sparse_column_with_hash_bucket in a tutorial. That’s a very useful function, I thought. I had never come across such a function in Keras or TFLearn.

Basically, the fun...
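The idea behind it, hashing a string feature into a fixed number of buckets, can be sketched in plain Python (this shows the general trick, not Tensorflow’s actual implementation; the hash function choice is mine):

```python
import zlib

def hash_bucket(value, hash_bucket_size=1000):
    """Map a string feature to a stable bucket id in [0, hash_bucket_size)."""
    # zlib.crc32 is deterministic across runs, unlike Python's salted hash().
    return zlib.crc32(value.encode("utf-8")) % hash_bucket_size

ids = [hash_bucket(city) for city in ["tokyo", "paris", "tokyo"]]
print(ids)  # the same string always maps to the same bucket id
```

This lets you feed categorical features with an unknown or huge vocabulary into a model of fixed size, at the cost of occasional hash collisions.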

# Thank you, GitHub and Jekyll

Thank you, GitHub and Jekyll.

Now I have my homepage again.

I’d like to share my thoughts, ideas, and source code here. I hope they help.

By the way, I am so happy to see Jekyll’s ...