Tag: machine learning

Hidden Markov Model in LabVIEW

Consider this scenario: a guy (let’s call him John) has three dice: Dice 1, 2 and 3. The dice have different shapes. Dice 1 has the numbers [1, 2, 3, 4, 5, 6] on it, Dice 2 has [1, 2, 3, 4] and Dice 3 has [1, 2, 3, 4, 5, 6, 7, 8], as seen in the following figure.

[Image: the three dice]

http://www.niubua.com/?p=1733

John throws one die each time, and the probability that he picks the next die depends on his previous selection. For example, he is more LIKELY to pick Dice 1 if he picked Dice 2 last time, and UNLIKELY to pick Dice 1 if he picked Dice 3 last time. We cannot see which die he selected, but we can see the number shown on it. Now, after observing a sequence of throws, what do you think the next number will be?

This may sound like a very difficult question, but researchers in linguistics deal with this kind of problem all the time. You can HEAR the sound of each word, and based on the HIDDEN rules connecting the words (i.e. syntax and meaning) you try to predict what the next word could be. Mathematical models were built to represent this type of question. In our example, each state is determined by its previous state(s), and we call this a Markov Model, or Markov Chain. A simple case is when a state depends only on the single state before it: a Markov chain of order 1. Also, which die (state) was selected is not known; instead, only the consequence of the state (the number shown) can be observed. This is called a Hidden Markov Model.

There are three problems in HMM that need to be addressed:

1) Evaluation: given the state-transition probabilities and the observation probabilities of each hidden state (i.e. for a given HMM), calculate the probability of an observed sequence.
2) Decoding: given the HMM and the observed sequence, find the most likely sequence of hidden states behind it.
3) Learning: given the observed sequence, estimate the HMM itself.

As we can see, the third problem is the most difficult one.
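Since LabVIEW block diagrams cannot be shown inline here, here is a minimal text-language sketch of the dice HMM and the forward algorithm for problem 1, written in Python/NumPy. All the probabilities below are made-up assumptions for illustration, not the actual model:

```python
import numpy as np

# Hypothetical HMM for the dice example: 3 hidden states (which die is picked),
# 8 possible observations (the numbers 1..8, stored 0-indexed).
A = np.array([[0.2, 0.6, 0.2],    # transition probabilities between dice
              [0.7, 0.2, 0.1],    # e.g. Dice 2 -> Dice 1 is likely (0.7)
              [0.1, 0.3, 0.6]])   # e.g. Dice 3 -> Dice 1 is unlikely (0.1)
B = np.vstack([                   # emission probabilities of each die
    np.r_[np.full(6, 1/6), 0, 0],         # Dice 1 shows 1..6
    np.r_[np.full(4, 1/4), np.zeros(4)],  # Dice 2 shows 1..4
    np.full(8, 1/8),                      # Dice 3 shows 1..8
])
pi = np.full(3, 1/3)              # initial state distribution

def forward(obs):
    """Evaluation problem: P(observed number sequence | HMM)."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

print(forward([0, 3, 7, 2]))  # probability of observing the numbers 1, 4, 8, 3
```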

The Hidden Markov Model (HMM) is a powerful tool for analyzing time-series signals. There is a good tutorial explaining the concept and the implementation of HMM, and there are implementations in languages such as C, C++, C#, Python, MATLAB and Java. Unfortunately, I failed to find one implemented in LabVIEW. This may be a reinvention of the wheel, but instead of calling DLLs in LabVIEW, I built one purely in LabVIEW with no additional add-ons needed.

Multiple references were used to implement this LabVIEW HMM toolkit: [1], [2], [3], [4]. The test demos of the forward algorithm, backward algorithm and Viterbi algorithm follow the code referenced in [2].
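For problem 2, a matching Viterbi sketch, reusing the assumed A, B and pi from the snippet above:

```python
def viterbi(obs):
    """Decoding problem: most likely hidden state sequence behind obs."""
    delta = pi * B[:, obs[0]]        # probability of the best path so far
    backptr = []
    for o in obs[1:]:
        trans = delta[:, None] * A   # trans[i, j]: best path ending in i, then i -> j
        backptr.append(trans.argmax(axis=0))
        delta = trans.max(axis=0) * B[:, o]
    path = [int(delta.argmax())]     # backtrack from the best final state
    for bp in reversed(backptr):
        path.append(int(bp[path[-1]]))
    return path[::-1]

print(viterbi([0, 3, 7, 2]))  # index of the die most likely used at each throw
```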

The following demo analyzed the hidden states of a chapter of text; you can find the detailed description in [1]. The figure below shows the observed sequence for the HMM. There are about 50,000 characters (including spaces) in this text. All punctuation was removed, and only the spaces and letters were kept as the observation symbols. Thus there are 27 symbols, Symbol 0 to 26, of which Symbol 0 = space, Symbol 1 = a/A, Symbol 2 = b/B and so on.

[Image: a sample of the observed text]
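A character-to-symbol mapping along these lines (a hypothetical sketch, not the actual LabVIEW VI) produces the observation sequence:

```python
def text_to_symbols(text):
    """Map text to symbols 0..26: 0 = space, 1 = a/A, ..., 26 = z/Z."""
    symbols = []
    for ch in text.lower():
        if ch == ' ':
            symbols.append(0)
        elif 'a' <= ch <= 'z':
            symbols.append(ord(ch) - ord('a') + 1)
        # everything else (punctuation, digits) is dropped
    return symbols

print(text_to_symbols("Hidden Markov")[:7])  # [8, 9, 4, 4, 5, 14, 0]
```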

With no prior knowledge of this text, or even of English, we initialize an HMM that has two hidden states. The probabilities of moving from one state to the other are unknown at this point. The 27 symbols are the observed outputs of the hidden states. In the figure below, the probability of each letter in State 1 is plotted as dots, and the probability of each letter in State 2 is plotted as a line.

[Image: initial emission probabilities (B) of the two hidden states]

Running the forward-backward algorithm on the HMM, we obtained two states: the letters A, E, I, O and U are more likely to appear in State 1, while the rest of the letters are more likely to appear in State 2. So with no specified rules or prior knowledge, we managed to divide the letters into vowels and consonants. 🙂

[Image: final emission probabilities (B) after training]
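For readers who want the training step in text form, here is a compact Baum-Welch (forward-backward re-estimation) sketch in Python/NumPy, with the per-step scaling a 50,000-character sequence needs to avoid numerical underflow. The experiment follows [1]; the code itself is my own sketch, not the LabVIEW implementation:

```python
import numpy as np

def baum_welch(obs, n_states, n_symbols, n_iter=100, seed=0):
    obs = np.asarray(obs)
    rng = np.random.default_rng(seed)
    # Near-uniform random start (exactly uniform is a fixed point of the updates)
    A = rng.uniform(0.9, 1.1, (n_states, n_states)); A /= A.sum(1, keepdims=True)
    B = rng.uniform(0.9, 1.1, (n_states, n_symbols)); B /= B.sum(1, keepdims=True)
    pi = np.full(n_states, 1.0 / n_states)
    T = len(obs)
    for _ in range(n_iter):
        # Forward pass, rescaling each step so alpha never underflows
        alpha = np.zeros((T, n_states)); c = np.zeros(T)
        alpha[0] = pi * B[:, obs[0]]; c[0] = 1.0 / alpha[0].sum(); alpha[0] *= c[0]
        for t in range(1, T):
            alpha[t] = (alpha[t-1] @ A) * B[:, obs[t]]
            c[t] = 1.0 / alpha[t].sum(); alpha[t] *= c[t]
        # Backward pass, reusing the same scale factors
        beta = np.zeros((T, n_states)); beta[-1] = c[-1]
        for t in range(T - 2, -1, -1):
            beta[t] = (A @ (B[:, obs[t+1]] * beta[t+1])) * c[t]
        # Expected state occupancies and transitions
        gamma = alpha * beta / c[:, None]
        xi = alpha[:-1, :, None] * A[None] * (B[:, obs[1:]].T * beta[1:])[:, None, :]
        # Re-estimate the model from the expected counts
        A = xi.sum(0) / gamma[:-1].sum(0)[:, None]
        B = np.zeros_like(B)
        np.add.at(B.T, obs, gamma)   # accumulate expected emission counts per symbol
        B /= gamma.sum(0)[:, None]
        pi = gamma[0]
    return A, B, pi
```

Calling baum_welch(text_to_symbols(text), 2, 27) on a long English text should drive the two rows of B apart roughly along the vowel/consonant split shown above.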

[1] https://www.cs.sjsu.edu/~stamp/RUA/HMM.pdf
[2] http://www.comp.leeds.ac.uk/roger/HiddenMarkovModels/html_dev/main.html
[3] http://www.52nlp.cn/hmm-learn-best-practices-one-introduction (Chinese)
[4] http://www.kanungo.com/software/software.html#umdhmm



Genetic Algorithm in LabVIEW to solve the shortest path routing problem

I came across the Genetic Algorithm (GA) the other day while working on a project. It is typically adopted to solve shortest path routing problems or to design and optimize the structure of proteins. It is a very smart algorithm inspired by biological systems.

I will try to describe the idea briefly: in some problems there are many possible solutions, and we look for the best one. Finding this best solution is like arranging the genes of a chromosome in the most optimized order. One way to find the best(-ish) combination is to compare ALL possible combinations, which is impractical in many cases. So instead of listing all solutions and comparing them, a sub-group of solutions (the population) is created, and we pick two of them as the parents. The better a solution is, the higher the chance it is selected. The two chromosomes then cross over (exchange genes) to “breed” a new population. There is a chance of mutation in the new population as well. The new population is usually more “advanced” than the parent population (though individuals are not necessarily better than their parents). Then new parents are picked again to breed a new population, and so on.

Typically the genes in the chromosome are encoded in binary, but integer numbers and other values can be used as well. Please see this tutorial for more encoding methods.

To demonstrate the implementation of GA in LabVIEW, I downloaded the coordinates of 31 cities in China and tried to find the shortest route through them. Here is the .gif demo (the labels for the X and Y axes should be “Latitude” and “Longitude”). Please note that this may not be the BEST solution, but for the number of iterations we ran, it is good enough.

[Image: shortest route for 31 cities in China (animated GIF demo)]
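As a text-language illustration of the breed-and-mutate loop described above (a hypothetical sketch with random stand-in coordinates, not the downloadable LabVIEW code):

```python
import numpy as np

rng = np.random.default_rng(0)
cities = rng.uniform(0, 100, (31, 2))   # stand-in for the 31 city coordinates

def route_length(route):
    """Total length of the closed tour visiting the cities in this order."""
    loop = cities[np.r_[route, route[0]]]
    return np.linalg.norm(np.diff(loop, axis=0), axis=1).sum()

def crossover(p1, p2):
    """Order crossover: keep a slice of parent 1, fill the rest in parent 2's order."""
    a, b = sorted(rng.choice(len(p1), 2, replace=False))
    rest = [c for c in p2 if c not in p1[a:b]]
    return np.array(rest[:a] + list(p1[a:b]) + rest[a:])

def mutate(route, rate=0.1):
    """Occasionally swap two cities in the route."""
    if rng.random() < rate:
        i, j = rng.choice(len(route), 2, replace=False)
        route[i], route[j] = route[j], route[i]
    return route

pop = [rng.permutation(31) for _ in range(100)]   # initial population
for generation in range(500):
    fitness = np.array([1.0 / route_length(r) for r in pop])  # shorter = fitter
    prob = fitness / fitness.sum()
    new_pop = []
    for _ in range(len(pop)):
        i, j = rng.choice(len(pop), 2, p=prob)    # fitter parents picked more often
        new_pop.append(mutate(crossover(pop[i], pop[j])))
    pop = new_pop

best = min(pop, key=route_length)
print(route_length(best))
```

Note that the chromosome here is a permutation of city indices rather than a binary string, one of the non-binary encodings mentioned above.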

You can download this code here: Genetic Algorithm. Please rename the file to .zip and unzip it.

Let me know your score 😉



A quick update for the Flappy Bird project

It seems my last post was found by @labview and brought a lot of visitors here. This project is still ongoing, but I have been kind of busy this month. I will try to finish it ASAP 🙂

Here are some updates since the last post.

An Arduino-controlled stylus can now be triggered by clicking the mouse.

[Image: the mouse-triggered stylus setup]

One tip: the stylus needs to be earthed (via the red wire) to take effect.

And the Flappy Bird game is now (poorly) simulated in LabVIEW. I simulated FB (not Facebook!) in LabVIEW so that I could test the Q-learning algorithm in a purely software environment. It can now pass 5 gaps 🙂 The algorithm needs to be tweaked a bit, and then I will integrate the whole thing.

[Image: the simulated Flappy Bird game in LabVIEW]


Implement Q-Learning algorithm in LabVIEW

So this is the story: Flappy Bird was so popular that my friend suggested we develop a LabVIEW kit with a motor to play it. Two days later, we found that Sarvagya Vaish had managed to score 1000 by applying the Q-learning algorithm. A couple of days later, a studio used an Arduino to play the game. Hmm…I will finish my work anyway.

That’s where I learned about Q-learning, one of the reinforcement learning algorithms. A brief tutorial helped me get a better understanding of it. If a goal is achieved through multiple steps, this algorithm grades each step by assigning a reward to it. Each step, or action, is not graded right away, but one step later. In this way the “right” action can be determined by the reward it received.

The update equation can be described as

Q'(s, a) = (1 - alpha) * Q(s, a) + alpha * (R(s, a) + gamma * max_a' Q(s', a'))

where Q (the accumulated experience) is a table indexed by s (state) and a (action), s' is the next state and a' is the next action; alpha is the learning rate (step size) and gamma is the discount factor for future rewards. I tried to google a Q-learning example in LabVIEW but failed, so I created this VI myself and hope it can be useful to someone.

[Image: the Q-learning VI block diagram]

This is a single-loop VI, and a shift register stores the values of Q. The reset button initializes Q and could be replaced by a “First Call?” node. Users should build their own “Reward” VI according to their application. In this VI the next action is the one with the highest Q value, but it could also be a random action (or chosen by other methods).
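For readers without LabVIEW, here is a minimal sketch of the same single loop in Python. The state space, environment and reward below are made-up placeholders for the parts the user supplies:

```python
import numpy as np

n_states, n_actions = 10, 2          # assumed sizes for illustration
alpha, gamma = 0.5, 0.9              # learning rate and discount factor

Q = np.zeros((n_states, n_actions))  # plays the role of the shift register

def reward(s, a):
    """Stand-in for the user-supplied 'Reward' VI: pay off at the last state."""
    return 1.0 if s == n_states - 1 else 0.0

def step(s, a):
    """Stand-in environment: action 0 moves forward, action 1 moves back."""
    return min(s + 1, n_states - 1) if a == 0 else max(s - 1, 0)

s = 0
for _ in range(1000):                # the single while loop
    a = int(Q[s].argmax())           # greedy: the action with the highest Q
    s_next = step(s, a)
    # the update equation from above
    Q[s, a] = (1 - alpha) * Q[s, a] + alpha * (reward(s, a) + gamma * Q[s_next].max())
    s = 0 if s_next == n_states - 1 else s_next   # restart after reaching the goal

print(Q)
```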

Please see tutorial links 1 and 2 for more information about the Q-learning algorithm.


Multiclass classification in LabVIEW using SVM and one-vs-all method

As I mentioned in the last post, I am now studying machine learning in my new position. Today I came across the problem of using SVM for multiclass classification. The toolkit (link) downloaded from NI only supports two-class classification with SVM, not multiclass (it is still quite a useful tool). So I took the SVM VIs and built a multiclass version using the one-vs-all method.

There is a good tutorial on one-vs-all (one-vs-rest) classification by Andrew Ng (link). Basically, in each iteration we pick one class as Class A and merge the remaining classes into Class B. Only the test data that fall in Class A’s region are assigned to that known class. Here is the code:

[Image: SVM one-vs-all block diagram]

The original labelled training data are classified as Class 0, 1, 2, …, N. In the i-th iteration, only the data from Class i are relabelled as Class 1 and the rest are relabelled as Class 0. Test data that fall in the Class 1 region are then classified as Class i. Any unsorted data are left in Class -1. When I tested the performance of this one-vs-all classifier, the result seemed fine 🙂
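The same scheme can be sketched in a few lines of Python, with scikit-learn's binary SVC standing in for the toolkit's two-class SVM VIs (the data here are synthetic):

```python
import numpy as np
from sklearn.svm import SVC

def train_one_vs_all(X, y):
    """Train one binary SVM per class: Class i relabelled 1, the rest 0."""
    return {i: SVC(kernel='rbf').fit(X, (y == i).astype(int))
            for i in np.unique(y)}

def predict_one_vs_all(models, X):
    """Assign Class i where the i-th binary SVM predicts 1; else leave -1."""
    pred = np.full(len(X), -1)
    for i, clf in models.items():
        pred[clf.predict(X) == 1] = i   # if several SVMs claim a point, the last wins
    return pred

# Tiny synthetic test: three well-separated clusters
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.5, (20, 2)) for c in (0, 3, 6)])
y = np.repeat([0, 1, 2], 20)
models = train_one_vs_all(X, y)
print(predict_one_vs_all(models, X))
```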

[Image: one-vs-all classifier front panel]

The code is not optimized, so execution may take a while.
