I came up with this problem while working on my project. There are more than 66 parallel pinouts on the Spatial Light Modulator (SLM) chip to be controlled. I generated the commands on the computer and then sent them to the FPGA via a DMA FIFO. It worked fine when there were only 64 lines: I could transfer a 1D U64 array to the target and then split each U64 element into 64 Booleans.
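The 64-line case can be sketched in Python like this; it mimics what the number-to-Boolean-array conversion does on the FPGA side. The function name and bit order (LSB first) are my own illustrative choices, not from the actual project.

```python
# Hedged sketch: splitting each U64 command word into 64 Booleans,
# one per output line. LSB-first ordering is an assumption here.

def u64_to_bools(word: int) -> list:
    """Return the 64 bits of `word` as Booleans, bit 0 (LSB) first."""
    return [bool((word >> i) & 1) for i in range(64)]

bits = u64_to_bools(0b1011)
# bits[0], bits[1], bits[3] are True; the other 61 entries are False.
```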
See also the example ‘DMA buffered acquisition’ in LabVIEW 2009.
When there are 66 bits, my first attempt was to create two FIFOs, one of type U64 and the other U8. I found it very hard to synchronize the two FIFOs. Then a memory (I can’t remember exactly where it came from) reminded me of using a case structure to decimate a 1D array. So I interleaved the data into a 1D U32 array (because there are 32 bits each on Bus A and Bus B) and decimated it on the FPGA side. By doing this I *waste* 32×3−66 = 30 bits per command, which is tolerable and keeps the scheme flexible.
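The host-side packing step can be sketched as follows. I'm assuming each 66-bit command becomes three consecutive U32 elements (Bus A word, Bus B word, then a leftover word carrying the last 2 lines); the function names and bit ordering are illustrative only.

```python
# Hedged sketch of packing 66 line states into three U32 words per command,
# then flattening all commands into the 1D array sent over the DMA FIFO.

def pack_command(bits_66: list) -> list:
    """Pack 66 Booleans into three U32 words, LSB-first within each word."""
    assert len(bits_66) == 66
    words = []
    for start in (0, 32, 64):          # Bus A, Bus B, remaining 2 lines
        chunk = bits_66[start:start + 32]
        word = 0
        for i, b in enumerate(chunk):
            word |= int(b) << i
        words.append(word)
    return words

# Flat 1D U32 array for the DMA FIFO: commands interleaved back to back.
commands = [[True] * 66, [False] * 66]
fifo_data = [w for cmd in commands for w in pack_command(cmd)]
# 32*3 - 66 = 30 bits of each third word go unused, as noted above.
```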
We set up the order of processing by creating a type-defined enum (Enum ‘Bus B’ in the figure) and telling the state machine which state should come next. Note that we need a For Loop in each iteration, because we have to dequeue the buffer several times to collect the commands for all the lines.
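The FPGA-side decimation can be sketched as a small state machine. I'm guessing at the state names (mirroring the type-defined enum) and at how the third word is consumed; treat this as an analogy of the LabVIEW diagram, not a transcription of it.

```python
# Hedged sketch of the consumer state machine: dequeue one U32 per state,
# route it to Bus A, Bus B, or the leftover lines, then start over.
from enum import Enum

class State(Enum):          # stand-in for the type-defined enum in the VI
    BUS_A = 0
    BUS_B = 1
    EXTRA = 2

def consume(fifo_words: list) -> list:
    """Decimate the interleaved stream: every 3 words form one command."""
    state, cmd, out = State.BUS_A, {}, []
    for word in fifo_words:
        if state is State.BUS_A:
            cmd["bus_a"], state = word, State.BUS_B
        elif state is State.BUS_B:
            cmd["bus_b"], state = word, State.EXTRA
        else:                           # EXTRA: last 2 lines + 30 wasted bits
            cmd["extra"] = word & 0b11
            out.append(cmd)
            cmd, state = {}, State.BUS_A
    return out
```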
First of all, two pieces of news from the lab:
I (finally) managed to drive the Spatial Light Modulator (SLM) directly with LabVIEW FPGA. The coding was not really a challenge, since I had already implemented it with the DAQmx module; I just moved the digital output bits over to the FPGA. The wiring and debugging were the pains in the ass. With that done, the work is finished. So now we can send whatever images we like to the SLM chip from the computer without interruption, and the delay is minor.
An SLM chip was damaged. I haven’t figured out the reason yet. The chip was taken out and left exposed for weeks, so static may have damaged it. We had also been sending +5 V TTL to the pinouts of the chip instead of the recommended +3.3 V, which might cause problems. And there is a chance we simply powered the device on and off too frequently. The SLM is a liquid crystal one; you can think of it as a 2D array of tiny mirrors. We need to keep its DC balance by sending positive and negative images for equal periods. So one plan is to create a state machine for the SLM: when we are not using it, leave it in an idle state, in which the FPGA sends ‘black image’ commands to the SLM continuously.
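The planned idle state could look something like this in miniature: alternate a ‘positive’ and a ‘negative’ black frame for equal durations so the net DC across the liquid crystal stays zero. The frame values and names below are entirely made up for illustration.

```python
# Hedged sketch of the DC-balanced idle state: stream +black, -black,
# +black, ... for equal time per polarity. Pixel values are hypothetical.

BLACK_POS = 0x0000   # hypothetical 'positive black' frame value
BLACK_NEG = 0xFFFF   # hypothetical inverted frame for DC balance

def idle_frames(n_pairs: int) -> list:
    """Return the frame sequence the idle state would stream: +, -, +, -, ..."""
    frames = []
    for _ in range(n_pairs):
        frames += [BLACK_POS, BLACK_NEG]   # equal time per polarity
    return frames
```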
Following are two huge pieces of news I heard recently on the internet.
1. Artificial life forms evolve basic intelligence. Wow, so we can be the god of the computer world now? I believe there is a way to simulate human memory, but intelligence and evolution? I would like to dig in.
2. Claimed Proof That P != NP. I don’t think I’m smart and patient enough to read through the 100-page paper. Actually, this paper has not been peer-reviewed yet, so feel free to challenge it.
But I’m shocked by the conclusion anyway. Sorry, but “!=” means “not equal to” here. The conclusion itself is not a surprise; still, I don’t know if it is true.
After a few days of using LabVIEW FPGA, here are some thoughts:
It IS much more convenient to program in LabVIEW. When I ran into problems, I spent my time checking the logic rather than the syntax (which is fine) or the logic cycle (which is a nightmare);
Compiling the bitstream file takes longer and longer. At first it took about 6 minutes, long enough for a pee; as the code has grown, it now takes about 20 minutes, time enough for lunch. I can’t imagine how long it will take once I do more complex work;
The generation of clocks and signals works :)
We can use a Target-Scoped FIFO to build a Producer-Consumer loop in an FPGA VI;
The size of an array has to be fixed beforehand in an FPGA VI;
Loop rates are limited by the longest path. If the processing takes longer than the configured loop timer, the loop runs at the longer period.
Attached is my code. In the producer loop, we generate a train of pulses and push them into a FIFO; in the consumer loop, we pop the data and output it on Connector1/Port0.
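The producer-consumer pattern in that VI can be modeled in Python with a bounded queue and two threads standing in for the two loops and the Target-Scoped FIFO. The pulse pattern, FIFO depth, and names are illustrative, not taken from the actual VI.

```python
# Hedged analogy of the FPGA producer-consumer loops: a bounded queue
# plays the role of the target-scoped FIFO between the two loops.
import queue
import threading

fifo = queue.Queue(maxsize=16)   # stand-in for the target-scoped FIFO
N_PULSES = 8

def producer():
    """Producer loop: generate a train of pulses and push them into the FIFO."""
    for i in range(N_PULSES):
        fifo.put(i % 2 == 0)     # alternating high/low, i.e. a pulse train

def consumer(port0: list):
    """Consumer loop: pop each Boolean and 'output' it on the port."""
    for _ in range(N_PULSES):
        port0.append(fifo.get())

port0 = []
t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer, args=(port0,))
t1.start(); t2.start()
t1.join(); t2.join()
# port0 now holds the pulse train [True, False, True, False, ...]
```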
I’m somehow preventing myself from updating the posts too frequently. It’s always harder to write the 10th post (or a post after one month). I hope I can stick with this.
Currently I’m studying the LabVIEW FPGA module. It is quite a different task from my previous project; before this, I was using an M Series DAQ card with DAQmx. I wasn’t even sure why I had to use an FPGA for this project.
So now I *figure out* the reasons might be:
The latency on an FPGA is deterministic; there is no OS on the hardware, which means a more robust system; there are more I/O ports for data acquisition; if we make proper use of parallel algorithms, we might get a faster system; and FPGA sounds cool.
As I said, the way of programming an FPGA is different. It looks like you are using global variables throughout your target VI (code on the FPGA) and host VI (code on the computer). And you use sequence structures instead of data flow to force the execution order. The error cluster is not recommended, since it costs *memory*. Some of my *good* programming habits have to be abandoned when I program the target VI.
So far, all I have mastered is creating a project that generates digital output on lines or ports. I can put the ‘calculation’ either on the target or on the host. I think the questions to think through when using an FPGA are: Does it need a UI? Does it cost lots of memory? Is there any calculation so complex that it can only be implemented on the computer? If all the answers are NO, we can put it on the target; otherwise we have to use the host PC to do the calculation.
My next move is to generate a clock together with the signal. The rising edge of the clock should fall within the signal pulses. This will be very useful when we drive the SLM in the future.
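The timing I'm after can be sketched as two tick-by-tick waveforms: the data line held steady for a whole bit period, and a clock whose rising edge lands in the middle of each data pulse so the receiver latches stable data. The period length and edge position below are illustrative choices, not measured requirements.

```python
# Hedged sketch of the clock/signal alignment: data valid for the full
# period, clock rising edge placed mid-pulse. Tick counts are made up.

PERIOD = 4  # ticks per bit, chosen only for this example

def clocked_stream(bits: list) -> tuple:
    """Return (signal, clock) sample trains, one entry per tick."""
    signal, clock = [], []
    for b in bits:
        signal += [int(b)] * PERIOD   # data held for the full period
        clock  += [0, 1, 1, 0]        # rising edge at tick 1: mid-pulse
    return signal, clock

sig, clk = clocked_stream([True, False, True])
# Every clock rising edge occurs while the data line is stable.
```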