At first view I thought it was an animation. It’s cute!
By the way, this post is a test for a video update.
So we have a journal club now. In the ‘slab’ (short for neurophotonics lab), several of us met to share some interesting and/or relevant papers. For the first talk, Jack from CS shared some good ones. I’m not sure if the other guys felt the same way, but I found it interesting and philosophical.
Here are some points/questions brought up at the meeting that I want to share with you:
All computers descend from the Turing machine, and none of them goes beyond it. The Turing machine is the prototype of all present-day computers. If you can wait long enough, it can complete any task run by any computer.
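To make the “prototype of all computers” point concrete, here is a minimal sketch of a Turing machine in Python. The machine, its rule table, and the little “flip every bit” program are my own illustration, not something from the talk:

```python
# A minimal one-tape Turing machine. The rule format and the example
# "flip every bit" program are illustrative assumptions.

def run_turing_machine(tape, rules, state="start", blank="_"):
    """Run the machine until it reaches the 'halt' state.

    rules: {(state, symbol): (new_state, new_symbol, move)} with move in {-1, 0, +1}.
    """
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    while state != "halt":
        symbol = cells.get(head, blank)
        state, cells[head], move = rules[(state, symbol)]
        head += move
    # read back the non-blank tape contents, left to right
    return "".join(cells[i] for i in sorted(cells) if cells[i] != blank)

# Program: walk right, flipping 0 <-> 1, and halt at the first blank cell.
flip_rules = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}

print(run_turing_machine("1011", flip_rules))  # prints 0100
```

Everything a real computer does can, in principle, be reduced to a (much larger) rule table like `flip_rules` — that is the sense in which nothing “goes beyond” it.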
A computer cannot simulate something more powerful than itself. Computers are developed from the Turing machine, and the basis of the Turing machine is the RULE, so it cannot break its own rules. Just like: if there is a god, we will never know about him, because we are created to be “not able to know about god”.
The Turing test: how to judge whether a computer can think. Machines today cannot think, but how could we tell if one did? Turing gave a simple and elegant experiment. We put a computer and a human behind a curtain, and let a person talk/type to one of them without knowing which one he is talking to. If he cannot tell which is which, then we declare that the computer can think. It’s not a perfect experiment, and you can always argue with it. But no one has proposed a better one yet.
Can a computer generate *truly* random numbers? Personally, I think this is what stops a computer from thinking like a person. All the *random* numbers we get from a computer today are not truly random; they come from some table or algorithm. Actually, my colleague argued that we humans cannot come up with a truly random number either: every decision we make is based on our experience and our present state. I don’t think that’s true. Unless we believe there is a Golden Rule that determines everything that happens next, in which case we admit the existence of god.
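Here is what “based on some table or algorithm” looks like in practice: a linear congruential generator, one of the oldest pseudo-random schemes. The constants are the classic Numerical Recipes ones; the point is that the whole sequence is completely determined by the seed, so nothing about it is truly random:

```python
# A linear congruential generator (LCG). Given the same seed, it produces
# the same "random" sequence every time -- pseudo-random, not truly random.

def lcg(seed, n, a=1664525, c=1013904223, m=2**32):
    """Return n pseudo-random integers from a linear congruential generator."""
    out = []
    x = seed
    for _ in range(n):
        x = (a * x + c) % m  # the entire "randomness" is this one formula
        out.append(x)
    return out

run1 = lcg(seed=42, n=5)
run2 = lcg(seed=42, n=5)
print(run1 == run2)  # prints True -- same seed, identical sequence
```

Hardware random sources (thermal noise, radioactive decay) exist precisely because no algorithm alone can escape this determinism.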
A googol is effectively infinite. A googol is the number written as 1 followed by 100 zeros. That’s much greater than it seems. The truth is, if we divide everything on Earth into nano-sized particles, divide the time since the birth of the Earth into femtoseconds, and multiply the two counts, we still cannot reach this number. So if we can enumerate the universe and still not reach a number, we may as well consider it infinite.
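A quick back-of-the-envelope check of that claim, using round figures I’m assuming here (Earth’s volume ≈ 1.08 × 10²¹ m³, Earth’s age ≈ 4.5 billion years):

```python
# Sanity check: cubic nanometres of Earth x femtoseconds of Earth's age
# is only around 10^80, still twenty orders of magnitude short of a googol.
# The input figures are rough, assumed round numbers.

googol = 10 ** 100

earth_volume_m3 = 1.08e21
nm3_per_m3 = (10 ** 9) ** 3                 # 1e27 cubic nanometres per m^3
nano_chunks = earth_volume_m3 * nm3_per_m3  # ~1e48 chunks

earth_age_s = 4.5e9 * 365.25 * 24 * 3600    # ~1.4e17 seconds
femtoseconds = earth_age_s * 1e15           # ~1.4e32 femtoseconds

product = nano_chunks * femtoseconds        # ~1.5e80
print(product < googol)  # prints True
```

Even this generous enumeration falls short by a factor of about 10²⁰, which is the sense in which a googol is “effectively infinite” for physical counting.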
There were some other interesting points as well, but these are the ones off the top of my head.
Currently I’m studying the LabVIEW FPGA module. It is quite a different task from my previous project. Before this, I was using an M series DAQ card with DAQmx. I wasn’t even sure why I have to use an FPGA for this project.
So now I *figure* the reasons might be:
The latency on an FPGA is known; there is no OS on the hardware, which means a more robust system; there are more I/O ports for data acquisition; if we make proper use of parallel algorithms, we might achieve a faster system; and FPGA sounds cool.
As I said, the way of programming an FPGA is different. It looks like you are using global variables shared between your target VI (the code on the FPGA) and host VI (the code on the computer). And you use sequence structures instead of dataflow to force the execution order. The error cluster is not recommended, since it costs *memory*. Some of my *good* programming habits have to be abandoned when I program the target VI.
So, what I have mastered so far is only creating a project that generates digital output on lines or ports. I can put the ‘calculation’ either on the target or on the host. I think the questions to think through when using an FPGA are: Does it need a UI? Does it cost a lot of memory? Is there any calculation so complex that it can only be implemented on the computer? If all the answers are NO, we can put it on the target; otherwise we have to use the host PC to do the calculation.
My next move is to try generating a clock together with the signal. The rising edge of the clock should fall within the signal pulses. This will be very useful when we drive the SLM in the future.
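Since LabVIEW code is graphical, here is the timing relationship I’m after sketched in Python instead. The sample counts and pulse widths are made up for illustration; the point is only the constraint that every clock rising edge lands while the signal is high:

```python
# Sketch of the target timing: a signal pulse train plus a clock whose
# rising edges fall inside the signal pulses. All widths/delays are
# illustrative assumptions, not real SLM timing parameters.

def make_waveforms(n_pulses, pulse_width=8, gap=8, clock_delay=2, clock_high=2):
    """Build signal and clock traces as lists of 0/1 samples."""
    period = pulse_width + gap
    signal, clock = [], []
    for _ in range(n_pulses):
        signal += [1] * pulse_width + [0] * gap
        clock += ([0] * clock_delay + [1] * clock_high
                  + [0] * (period - clock_delay - clock_high))
    return signal, clock

def rising_edges(trace):
    """Return the sample indices where the trace goes 0 -> 1."""
    return [i for i in range(1, len(trace)) if trace[i - 1] == 0 and trace[i] == 1]

signal, clock = make_waveforms(n_pulses=3)
# check: every clock rising edge occurs while the signal is high
print(all(signal[i] == 1 for i in rising_edges(clock)))  # prints True
```

On the FPGA target, the same idea would presumably be two parallel loops (or one loop with counters) driving two digital lines, with the clock delayed by a fixed number of ticks relative to the pulse edge.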
It’s tricky to write a new post. I couldn’t find a login toolbar at the top, and I finally found the ‘Post at wordpress.com’ link at the bottom. What if I don’t want to post, and simply want to manage my settings? Still the same link. It got me.
Anyway, this is a plot I drew to describe the learning curve of LabVIEW. It’s so easy to get hands-on with, and yet when you go deeper, you always find the screen is too small. That happens to most people I know. My suggestion is: LEARN SOME STYLE. *The LabVIEW Style Book* is the book I recommend most.
Btw, the way of publishing in WordPress is tricky as well. It’s neither at the top nor the bottom as I expected; it’s on the right.
To begin with, I would say the title of this blog is simply to commemorate my first LabVIEW blog, hosted at vihome.com.cn, which was lost for no reason. I may not stick with this title once I come up with a smarter one.
I am a PhD student studying in the UK, originally from CN. I chose to blog here because:
0, It’s cool, the template, the font, etc;
1, It’s stable, or so I assume. I just don’t want to have to dig my posts out of Google’s cache again;
2, It’s worldwide, so I can communicate with more (?) people about our common hobby. And if I occasionally say something bad, I won’t be chased by some unknown people.
3, It’s flexible.
4, It’s the future. I think I will use it sooner or later, so why not claim the domain ‘foolooo’ at the earliest time? 🙂
Thank you all.