CasperTutorial00


An Introduction to Field-Programmable Gate Arrays for Digital Signal Processing


1 What’s an FPGA?

A Field-Programmable Gate Array (FPGA) is a chip for performing digital logic operations. At its most basic level, an FPGA is a bunch of simple logic elements and a huge network of wires, and connections between logic elements and wires are activated by an array of bits that is loaded onto the chip. This makes an FPGA something of a chameleon; a user can rewire the chip to do many different things, depending on the application.

The only kind of chip that competes with the FPGA for flexibility is the Central Processing Unit (CPU) that sits at the heart of every modern computer. CPUs are flexible for a completely different reason than FPGAs. CPUs are not configurable chips. Instead, they are hard-wired with a number of registers for holding numbers, and a number of supported operations for how to modify or combine the numbers in those registers. What makes a CPU flexible is its capability to execute these operations in arbitrary order, according to the instructions contained in a program in memory that was written by a user. Long ago, CPUs had only a single core and so every instruction in a program was executed sequentially, one at a time. Nowadays, CPUs have multiple cores, and several instructions can be executed in parallel. This development notwithstanding, CPUs are basically serial processors, with some parallel capability allowed by multicore technology.

In contrast, FPGAs are massively parallel. Every logic element in an FPGA is like a parallel processor. The rub, of course, is that these “processors” do very simple things, and once you configure a logic element to perform an operation, that is the only operation it can perform until the FPGA is reprogrammed. The parallelism of an FPGA makes it incredibly powerful for processing high-speed streaming data, but it also makes FPGAs hard to program. The difference in programming paradigms between CPUs and FPGAs is probably the biggest roadblock to the widespread adoption of FPGA co-processors in computers.

2 Clocks and Timing

FPGAs are spatially parallel processors. Employing spatially separated logic elements to process data in coordination with one another can greatly improve performance, but this comes at a cost. The cost is that we must look behind the curtain of how digital processors pass information around.

In digital electronics, wires carry a high voltage to signal 1 and a low voltage to signal 0. But what happens in between? Electronic signals, though they travel fast, do take time to arrive at their destination. It can also take a while for a wire carrying a 0 to ramp up its voltage high enough to signal a 1. When reading the state of a wire, a receiver needs to know when it is in a stable state. Likewise, a transmitter needs to know when it can change the state of the wire to send the next piece of information.

To coordinate the timing of transmission and reception, almost all digital electronics rely on a special signal called a clock to synchronize decision-making. A clock is a single-wire signal that toggles between high and low states at a fixed rate. Typically, every logic element in a synchronized system will make a decision about its next state on the rising edge of a clock (that is, when it goes from 0 to 1). Shortly after the rising edge of a clock, logic elements have the opportunity to look at their next inputs and decide what their next output will be. When the next rising clock edge occurs, a final decision is made, and all logic elements jump to their next states.

The foundational element of synchronization is called a register (a register is also called a flip-flop, or flop for short). The simplest register (called a D flip-flop) has two inputs: data (D) and clock (clk). Its single output (Q) is assigned the state of D, read on the rising edge of clk.
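As a concrete (if highly simplified) illustration, here is a behavioral sketch of a D flip-flop in Python; the class and example values are purely illustrative and not part of any FPGA tool flow:

class DFlipFlop:
    """Minimal model of a D flip-flop: Q captures D only on a rising clock edge."""

    def __init__(self):
        self.q = 0          # current output
        self._prev_clk = 0  # last clock level seen, used to detect rising edges

    def tick(self, clk, d):
        """Present a clock level and a D input; return the (possibly updated) Q."""
        if self._prev_clk == 0 and clk == 1:  # rising edge: capture D
            self.q = d
        self._prev_clk = clk
        return self.q

# D changes between clock edges, but Q only updates on rising edges.
ff = DFlipFlop()
for clk, d in [(0, 1), (1, 1), (0, 0), (1, 0), (0, 1), (1, 1)]:
    print(f"clk={clk} d={d} -> q={ff.tick(clk, d)}")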

Logic elements do not always need to be clocked. For example, simple logic gates like AND, OR, NOT, XOR, and NAND can continuously read their inputs and create the appropriate outputs after a nominal delay. It is possible for signals to propagate several logic levels in the time between rising clock edges, but at some stage the output of one of these logic elements will have to be registered. Any further logic operations will have to be applied to the output of the register on the next clock.
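To make this concrete, here is a small behavioral sketch in Python (the gate functions are hypothetical placeholders, not real FPGA primitives) showing several gate levels settling combinationally between clock edges, with only the final result captured by a downstream register:

def not_gate(a):
    return a ^ 1

def and_gate(a, b):
    return a & b

def or_gate(a, b):
    return a | b

# Values currently sitting on the Q outputs of upstream registers.
q_a, q_b, q_c = 1, 0, 1

# Three logic levels evaluated between clock edges: each level adds a small
# propagation delay, but no clock is needed until the result is registered.
next_d = and_gate(or_gate(q_a, q_b), not_gate(q_c))

# Rising clock edge: the downstream register captures the settled result.
downstream_q = next_d
print(downstream_q)   # 0, since NOT(1) = 0 forces the AND low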

2.1 Timing Diagrams: Setup and Hold

Registers cannot make decisions instantaneously. Flip-flops typically have a setup time during which the D input to the register must be stable before the rising edge of a clock, and a hold time after the rising edge of a clock during which the D input must remain fixed. Furthermore, there is a clock-to-Q time, which is the time it takes the value to propagate from D to Q following the rising edge of a clock. The details of these mechanisms are not important for this discussion, but understanding that signals have a limited amount of time to propagate from one register to another in order to satisfy these timing demands is essential to designing FPGA circuits.

A typical FPGA signal might begin as a 0/1 on the D input of a flip-flop when a rising clock edge occurs. A nanosecond later, that state will appear on the Q output of the flip-flop, whereupon it will travel down a wire (incurring a couple more nanoseconds of delay), through a few logic circuits (a few more nanoseconds), to arrive at the D input of another flip-flop in time to satisfy the setup time required by that register before the next rising clock edge.
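In other words, the clock-to-Q delay, routing delay, logic delay, and setup time along a register-to-register path must all fit within one clock period. Here is a minimal sketch of that bookkeeping in Python, with made-up delay numbers in the same spirit as the path just described:

# Hypothetical delay budget for one register-to-register path (nanoseconds).
clock_period_ns = 5.0   # a 200 MHz clock: 1 / 200e6 s = 5 ns
clk_to_q_ns     = 1.0   # launching flip-flop's clock-to-Q delay
routing_ns      = 2.0   # wire delay along the path
logic_ns        = 1.0   # delay through the combinational logic
setup_ns        = 0.5   # setup time of the capturing flip-flop

path_delay_ns = clk_to_q_ns + routing_ns + logic_ns + setup_ns
slack_ns = clock_period_ns - path_delay_ns

print(f"path delay = {path_delay_ns} ns, slack = {slack_ns} ns")
if slack_ns < 0:
    print("Timing not met: this path is too slow for the requested clock.")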

For visualizing how signals are sent and recaptured through synchronous digital circuits, timing diagrams are a useful tool.

2.2 Pipelining: Latency vs. Throughput vs. Resource Utilization

Suppose you have just designed a circuit that takes two signals from the outside world arriving on FPGA pins, ORs them together, and one clock later, ANDs the result with a signal arriving on a third pin, then outputs the result to a fourth pin. Suppose that you want a clock period of 5 ns so that the AND operation involving the third pin happens at just the right time. After telling the compiler to target a 200 MHz clock rate, you start compiling your design, only to find that it returns with an error: your design has not met timing. What does this mean, and how can you fix it?

When a design fails to meet timing, this means that there is a signal path between two registers whose total delay through layers of logic and routing down wires exceeds the 5 ns clock period you were shooting for. Though the compiler may finish compiling your design, you will not be able to run it at 200 MHz. If you do, the behavior of your design will be indeterminate: it will output junk.

To solve this problem, you have two options. If your application can tolerate a slower design, you can simply lower your clock frequency. Unfortunately, this is not commonly an option. The usual option is to add registers to your design to break up the long signal paths (a technique called pipelining). In our example, registers could be added at the inputs of the AND and OR. However, when pipelining, one must be careful to keep signals time-aligned. For example, if we place registers at both inputs of the OR, we must also place one between the third pin and the AND. Failure to do so would mean that the “OR” side of the AND block would arrive one clock later than expected; the circuit would then be checking for a signal on pin 3 that is true 10 ns after pin 1 or pin 2 were true, instead of 5 ns.
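To see why the balancing register matters, here is a small behavioral sketch of the pipelined OR/AND example (plain Python with an illustrative function name, not anything generated by an FPGA toolchain). With the matching register on the pin-3 path, the circuit still detects pin 3 going high one clock after pin 1; without it, the same stimulus is missed because pin 3 is examined one clock too late.

def run(pin1, pin2, pin3, balanced):
    """Cycle-by-cycle model of: register(pin1) OR register(pin2), registered,
    then ANDed with pin 3.  If balanced is True, pin 3 also passes through a
    matching register so both AND inputs refer to the intended clock cycles."""
    in1 = in2 = in3 = or_reg = 0      # register contents, all start at 0
    outputs = []
    for a, b, c in zip(pin1, pin2, pin3):
        # Combinational logic reads the current register outputs.
        c_side = in3 if balanced else c
        outputs.append(or_reg & c_side)
        # Rising clock edge: every register captures its input at once.
        or_reg = in1 | in2
        in1, in2, in3 = a, b, c
    return outputs

pin1 = [1, 0, 0, 0, 0]
pin2 = [0, 0, 0, 0, 0]
pin3 = [0, 1, 0, 0, 0]                 # pin 3 goes high one clock after pin 1
print(run(pin1, pin2, pin3, balanced=True))    # [0, 0, 1, 0, 0]  pattern detected
print(run(pin1, pin2, pin3, balanced=False))   # [0, 0, 0, 0, 0]  pattern missed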

Pipelining a design can help you reach higher clock rates, but it comes at a cost. Registers are ubiquitous on FPGAs, but they do eventually run out. Pipelining a large design raises its resource utilization, which can result in a design demanding more physical resources than are available on the chip (and an associated compiler error). Pipelining also raises the latency of a design: the number of clocks it takes data to flow from input to output. For applications that require fast response to a stimulus, too much latency can pose a problem. However, many applications only require throughput, which is the total sustained rate at which data flows through a design. Throughput is determined by clock rate, and so can be improved by pipelining a design.
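As a back-of-the-envelope illustration (the stage count and clock rate below are assumed round numbers, not from any particular design), latency and throughput can be estimated separately for a pipelined design:

clock_rate_hz   = 200e6   # clock rate achieved after pipelining (200 MHz)
pipeline_stages = 4       # registers between input and output

# Latency: how long one sample takes to travel from input to output.
latency_ns = pipeline_stages * 1e9 / clock_rate_hz

# Throughput: how much data flows through per second, one sample per clock.
throughput_msps = clock_rate_hz / 1e6

print(f"latency    = {latency_ns:.0f} ns")              # 20 ns
print(f"throughput = {throughput_msps:.0f} Msamples/s") # 200 Msamples/s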

When designing for an FPGA, a programmer must balance resource utilization and throughput by carefully pipelining a design. This involves adding latency to long signal paths, but avoiding unnecessary pipelining that consumes FPGA resources. A bizarre paradox of FPGA design is that consuming too many physical resources forces a design to be spread out across the entire chip. The wide physical spacing between circuit components incurs routing delays as signals are forced to travel farther. Higher routing delays lower the clock rate at which a design can be compiled, and hence, lower throughput. Thus, it is actually possible for excessive pipelining to decrease throughput.

3 Bits, Bytes, and Beyond

3.1 A Bit is a Wire

3.2 Value vs. Representation

3.3 Unsigned vs. Signed Representations

3.4 The Binary Point

4 What Else?

4.1 Slices

4.2 Block RAMs

4.3 Multipliers/DSPs

4.4 CPUs