These days my supervisor and I have been working on some amazing (and silly) things. Briefly: using LabVIEW to synchronize a movie, a piece of music and a Jacob’s Ladder. The goal is to put on an interesting show for school kids. And I would like to share a piece of the LabVIEW code (because it’s not done yet) that loads the music and plays it.
So what you need is a data acquisition card (analog output rate greater than 44 kHz, at least 2 AO channels if you want stereo), a speaker or an earphone, LabVIEW and your music. In LabVIEW we read the music file and send the waveform to 2 analog output channels. If you cut open your earphone cable and connect the three lines (L, R and ground) to the corresponding AO ports, you will hear the sound.
What’s the point of doing that rather than plugging the earphone straight into your laptop? Because WE CAN.
Note: 1. In my test, reading a block of music won’t slow down your DAQ loop, since the AO task only runs at 44.1 kHz. I used a Producer/Consumer loop just to make it scalable.
2. To prevent sections of the music from repeating, I selected ‘Do not allow regeneration‘ mode.
3. Enable ‘auto start’ for the ‘Analog Write‘ VI.
4. Future work can be done on stopping the AO task automatically once the music is over. To do this, we can simply bundle the ‘end of file?‘ output of ‘Sound File Read.vi‘ into the queue element, and use that flag to stop the consumer loop.
5. Tell me if you really have done this. 🙂
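Since LabVIEW code is graphical, here is a minimal text-based sketch of the same Producer/Consumer pattern in Python, including the queued end-of-file sentinel suggested in note 4. All names (`CHUNK`, the fake waveform, the `out` list standing in for the Analog Write VI) are my own illustration, not from the original VI.

```python
import queue
import threading

CHUNK = 4410  # samples per block; hypothetical value (0.1 s at 44.1 kHz)

def producer(samples, q):
    """Read the 'music file' in blocks and enqueue them, then signal EOF."""
    for i in range(0, len(samples), CHUNK):
        q.put(samples[i:i + CHUNK])
    q.put(None)  # sentinel: plays the role of the bundled 'end of file?' flag

def consumer(q, out):
    """Dequeue blocks and 'write' them to the analog output until EOF."""
    while True:
        block = q.get()
        if block is None:      # EOF sentinel stops the consumer loop
            break
        out.extend(block)      # stands in for the DAQmx Analog Write call

q = queue.Queue()
samples = list(range(100_000))  # fake mono waveform
out = []
t = threading.Thread(target=producer, args=(samples, q))
t.start()
consumer(q, out)
t.join()
assert out == samples           # every block arrived, in order, exactly once
```

The queue decouples file reading from the (rate-limited) output task, which is why reading blocks never stalls the DAQ loop.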
Entering November, I’ll begin the final year of my PhD study. There are still (and always will be) so many things to do: experiments, measurements, data, figures and writing. Well, this is the real world. Sorry for not updating the blog for quite a while; I was back in China, spending time with friends and family.
Someone asked the question in the title on a forum (link), and many answers were given. This question is not hard and may not bother you at all. But the conclusion that a loop was necessary is incorrect.
Above is an answer given by bincker (sorry for the watermark). The idea was to initialize a number and use a shift register as a counter, adding one each time the condition is fulfilled. Three conditions are listed in this case.
Or we can do it this way (my version):
I don’t know what happens inside the “greater than” node, but the conclusion is that we do not really need a loop. There might still be a traversal internally, but the whole calculation is much simpler. And the bottom line is: thanks to the polymorphism of LabVIEW nodes, we can wire an array and a scalar to some nodes and do the calculation in one step.
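For readers without LabVIEW in front of them, the same idea in Python terms (the data and threshold are made-up examples): compare the whole array against a scalar at once, then sum the resulting Booleans, with no explicit counter or shift register.

```python
data = [3, 8, 1, 9, 4, 7, 2]   # example array
threshold = 5                  # example scalar

# Elementwise "greater than": array vs. scalar, like the polymorphic node
flags = [x > threshold for x in data]

# Boolean-to-int conversion plus an "Add Array Elements" style sum
count = sum(flags)

print(count)  # 3 elements (8, 9, 7) exceed the threshold
```

In LabVIEW the comparison node does the elementwise broadcast for you; here the comprehension plays that role.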
Thanks to sx056, who brought up this question, and bincker, who gave me a hint.
This post lists the things I want to learn about LabVIEW in the future. Let’s see if I can master them and cross them off my to-do list.
– ***LabVIEW templates developed by JKI
– ***LabVIEW IMAQ module. How to make full use of a normal USB webcam? How to control multiple USB webcams? How to do pattern recognition? Can we set an ROI on a webcam? Can we change the frame rate of a webcam?
– **LabVIEW OOP. OOP might be the future of LabVIEW programming and I don’t want to miss that.
– *LabVIEW database operation.
– *LabVIEW report generation.
– **LabVIEW VISA. I would like to write drivers for commercial USB, GPIB and FireWire devices.
– To be continued…
It’s not cool to keep updating the blog with comics, but there hasn’t been much progress on LabVIEW these days. I integrated (or rather, glued together) the spatial light modulator and the camera and will synchronize them later. Some algorithm might be added between the devices to let the camera control the SLM, or the other way round. I’ve been thinking about an application for this closed-loop system; fingers crossed.
And here is the 3rd comic of <<Days of PhD>>. Please note the ‘alt-tab’ shortcut key in this comic. I found this one less funny after I finished it, but it is what it is. Hope you’ll like it. 🙂
So this is the 2nd picture I drew for <<Days of PhD>>. Hope you’ll like it. The ID is the one I use for Chinese micro blog. @科学玩家
I ran into this problem while doing my project. There are more than 66 parallel pinouts to control on the Spatial Light Modulator (SLM) chip. I generated the commands on the computer and then sent them to the FPGA via a DMA FIFO. It worked fine when there were only 64 lines: I could transfer a 1D U64 array to the target and then split each U64 into 64 Booleans.
See also the example ‘DMA buffered acquisition’ in LabVIEW 2009.
When there are 66 bits, at first I simply created 2 FIFOs, one of type U64 and the other U8, and found it very hard to synchronize them. Something (I can’t remember exactly where it came from) reminded me of using a case structure to decimate the 1D array. So I interleaved the data into a 1D U32 array (because there are 32 bits on Bus A and Bus B) and decimated it on the FPGA side. By doing this I *wasted* 32×3−66 = 30 bits per command, which is tolerable and flexible.
We set up the order of processing by creating a type-defined enum (Enum ‘Bus B’ in the figure) and telling the state machine which state comes next. Note that we need a For Loop inside each iteration, because we want to dequeue the buffer several times to assemble the commands for all the lines.
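To make the packing scheme concrete, here is a sketch of the 66-bits-into-3×U32 round trip in Python. The LSB-first bit order and the function names are my own assumptions; the original interleave order on the FPGA may differ.

```python
LINES = 66          # digital lines to drive
WORD = 32           # FIFO element width (U32)
WORDS_PER_CMD = 3   # ceil(66 / 32); 32*3 - 66 = 30 bits wasted per command

def pack(bits):
    """Host side: pack 66 Booleans into three U32 words (LSB first)."""
    assert len(bits) == LINES
    words = []
    for w in range(WORDS_PER_CMD):
        value = 0
        for b, bit in enumerate(bits[w * WORD:(w + 1) * WORD]):
            value |= int(bit) << b
        words.append(value)
    return words

def unpack(words):
    """FPGA side: split each U32 back into Booleans, keep the first 66."""
    bits = []
    for value in words:
        bits.extend(bool((value >> b) & 1) for b in range(WORD))
    return bits[:LINES]

cmd = [(i % 3 == 0) for i in range(LINES)]   # an arbitrary test pattern
assert unpack(pack(cmd)) == cmd              # lossless round trip
```

The 30 unused bits per command are the price of keeping every command aligned to whole FIFO elements, which is what makes the two sides easy to keep in sync.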
First of all, there are two pieces of news from the lab:
I (finally) managed to drive the Spatial Light Modulator (SLM) directly with LabVIEW FPGA. The coding was not really a challenge, since I had already implemented it with the DAQmx module and just transferred the digital output bits to the FPGA. The wiring and debugging were the pains in the ass; once those were done, the work was done. So now we can send whatever images we like to the SLM chip via the computer without interruption, and the delay is minor.
An SLM chip was damaged. I haven’t figured out the reason yet. Since the chip was taken out and left exposed for weeks, static may have damaged it. We had also been sending +5V TTL to the pinouts of the chip instead of the recommended +3.3V, which might have caused problems. And there is a chance we simply powered the device on and off too frequently. The SLM is a liquid-crystal one; you can think of it as a 2D array of tiny mirrors. We need to keep its DC balance by sending positive and negative images for equal periods. Thus one plan is to create a state machine for the SLM: when we are not using it, leave it in an idle state, in which the FPGA sends ‘black image’ commands to the SLM continuously.
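The idle-state idea can be sketched as a tiny state machine. This is a Python illustration of the intent only (the state names, the `"BLACK"` placeholder and the ±1 polarity encoding are mine, not the real FPGA code): every frame is sent once with each polarity, so the net DC across the liquid crystal is zero whether we are idle or displaying.

```python
from enum import Enum, auto

class SlmState(Enum):
    IDLE = auto()      # no user image: stream black frames to hold DC balance
    DISPLAY = auto()   # stream the requested image

def frames(state, image, n_pairs):
    """Return (frame, polarity) pairs; each frame goes out once with +1
    and once with -1 polarity, so the average drive voltage is zero."""
    out = []
    for _ in range(n_pairs):
        frame = image if state is SlmState.DISPLAY else "BLACK"
        out.append((frame, +1))   # positive image
        out.append((frame, -1))   # negative image, same duration
    return out

stream = frames(SlmState.IDLE, "user_image", 3)
assert all(f == "BLACK" for f, _ in stream)      # idle never shows the image
assert sum(p for _, p in stream) == 0            # DC balanced
```

The same polarity pairing applies in the DISPLAY state, which is what protects the chip while it is actually in use.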
Following are 2 big pieces of news I heard recently on the internet.
1. Artificial life forms evolve basic intelligence. Wow, so we can be the gods of the computer world now? I believe there is a way we can simulate human memory, but intelligence and evolution? I would like to dig in.
2. Claimed Proof That P != NP. I don’t think I’m smart and patient enough to read through the 100-page paper. Actually, this paper has not been peer-reviewed yet, so feel free to challenge it.
But I’m shocked by the conclusion anyway. (Sorry, “!=” means “not equal to” here.) That conclusion itself is not a surprise, but still, I don’t know if it is true.
This idea came to my mind some days ago. The thinking was straightforward: I’m building cameras for my PhD project and I love open source (although I really have no idea about it). I googled the term and found a blog about it. Also, a news article released last September tells us that some people have already developed this stuff.
Well, I’m glad to see it. At least it proves that this is not a stupid idea, and someone (at Stanford!) takes it seriously. But the ‘Frankencamera’ is not as cool as I expected. A demo was given showing it can do an auto-chop-and-paste thing, or “Photoshop on the camera”. I didn’t find that very impressive. I’m not saying it’s a dull camera, but I don’t want to programme a camera just to do stuff I can do offline.
(Sorry if the order is messy; I’m trying to organize my thoughts.) So in my opinion, the camera I want to develop is a kind of study camera. It’s not advanced and it’s not expensive. The specification might be:
CMOS sensor, 1024*768 pixels, 8~10 bits, C-mount, USB/firewire interface, size less than 10*5*5 cm, with the price less than 100 pounds.
The potential users are teachers, students and hobbyists who want to try out their ideas (algorithms) before they buy expensive instruments. The flexible part is that you can program it to change the exposure time, frame rate, regions of interest, pixel binning, gain, or even decide which regions to look at on the fly (which is implemented in my project :)). Think of the way you look for your girlfriend in a crowd: you keep your eyes focused on the target, and everyone else is a blur. You can use the camera the same way to save bandwidth and storage space.
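The ROI idea is easy to show in code. A hypothetical sketch (frame represented as a list of rows; the coordinates and sizes are arbitrary): reading out only the region of interest means transferring a small fraction of the full frame.

```python
def roi(frame, x, y, w, h):
    """Return only the region of interest from a full frame (list of rows)."""
    return [row[x:x + w] for row in frame[y:y + h]]

# Fake 1024x768 sensor frame, matching the spec above
full = [[c for c in range(1024)] for _ in range(768)]

crop = roi(full, x=100, y=200, w=64, h=48)
assert len(crop) == 48 and len(crop[0]) == 64

# Bandwidth saving: 64*48 / (1024*768) ~ 0.4 % of the full-frame data
```

On a real sensor the ROI would be applied at readout time, so the saving shows up in frame rate and bus bandwidth, not just in memory.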
That’s about it for this post. I’ll carry on talking about the open-source cameras I found already exist and the feedback I got from my friends and my supervisor.
I would like to thank Otis and Todd for their kind help and comments. Glad to hear from the US :).