Verilog Hex to Seven Segment Display
Each hexadecimal digit is represented by four binary bits. Example: 1011 0010 1111₂ = (1011)(0010)(1111)₂ = B2F₁₆. To convert between octal and hexadecimal, convert to binary first, then regroup the bits: three bits per group starting from the LSB for octal, four bits per group for hexadecimal. For example, B2F₁₆ = 101 100 101 111₂ = 5457₈. In Verilog, decimal and binary literals are just two ways of writing the same value, so a plain assignment is all that is needed; conversion to and from real numbers is done with the system functions $realtobits and $bitstoreal.
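As a small illustration of that last point, the two assignments below describe the same 8-bit value; the module and net names exist only for this example:

module literal_demo (a, b);
    output [7:0] a, b;
    assign a = 8'd178;          // decimal literal
    assign b = 8'b1011_0010;    // binary literal, same value (8'hB2)
endmodule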
We will now move on to a slightly more complex example, this time a hex to seven segment encoder. A seven segment display shows a digit using 7 LED segments.
The hexadecimal to 7 segment encoder has a 4 bit input and 7 outputs. Depending on the input number, some of the 7 segments are lit. The seven segments are labelled a, b, c, d, e, f, g. A high on one of these segments makes it light up. For example, to display 1 we need to light segments b and c.
The 7 segment display also has a decimal point dp.
The figure below shows which segments form each digit. Let's write this example making use of the Verilog case statement; a sketch follows. Note that out had to be declared as a register, since it is driven from an always block:
reg [6:0] out;
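A minimal sketch of such an encoder, assuming active-high segments ordered {a, b, c, d, e, f, g}; the module and port names are illustrative, not taken from the original code:

module hex_to_7seg (hex, out);
    input  [3:0] hex;       // hexadecimal digit to display
    output [6:0] out;       // segment drive, {a,b,c,d,e,f,g}, active high
    reg    [6:0] out;       // out is a reg because it is assigned in an always block

    always @(hex)
        case (hex)
            4'h0: out = 7'b1111110;
            4'h1: out = 7'b0110000;
            4'h2: out = 7'b1101101;
            4'h3: out = 7'b1111001;
            4'h4: out = 7'b0110011;
            4'h5: out = 7'b1011011;
            4'h6: out = 7'b1011111;
            4'h7: out = 7'b1110000;
            4'h8: out = 7'b1111111;
            4'h9: out = 7'b1111011;
            4'hA: out = 7'b1110111;
            4'hB: out = 7'b0011111;
            4'hC: out = 7'b1001110;
            4'hD: out = 7'b0111101;
            4'hE: out = 7'b1001111;
            4'hF: out = 7'b1000111;
        endcase
endmodule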
A begin-end block inside the always block was not required because it contains only one statement. We now suggest that you write a test bench for this code and verify that it works. If you have difficulty, you can check your work against the following test bench.
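A minimal test bench sketch for the encoder above; it simply steps through all sixteen input values (the names follow the sketch, not the original code):

module hex_to_7seg_tb;
    reg  [3:0] hex;
    wire [6:0] out;
    integer    i;

    hex_to_7seg dut (.hex(hex), .out(out));   // device under test

    initial begin
        for (i = 0; i < 16; i = i + 1) begin
            hex = i;                           // apply each hex digit in turn
            #10 $display("hex=%h segments(abcdefg)=%b", hex, out);
        end
        $finish;
    end
endmodule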
Exercise
1. Change the above hex to seven segment Verilog code so that it uses negative logic: a segment is on when it gets logic 0 and off when it gets logic 1.
The question needs some explanation:
Suppose I have an 8-bit value, say 8'b00000001 (decimal 1).
Suppose I have the module as follows:
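A skeleton of the kind of decoder module the question assumes; the names seven_seg, value and display are illustrative:

module seven_seg (
    input  wire [3:0] value,     // one hex digit
    output reg  [6:0] display    // one bit per segment
);
    // body: a case statement mapping value to a segment pattern,
    // much like the encoder sketched earlier
endmodule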
To output on HEX0 and HEX1, I can do something like this:
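One possible wiring, again with illustrative names, assuming the skeleton above and an 8-bit input called value:

module display_byte (
    input  wire [7:0] value,
    output wire [6:0] HEX0,      // low digit
    output wire [6:0] HEX1       // high digit
);
    seven_seg low_digit  (.value(value[3:0]), .display(HEX0));  // low nibble
    seven_seg high_digit (.value(value[7:4]), .display(HEX1));  // high nibble
endmodule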
and this will display 01 on HEX1, HEX0.
The only problem is that above 9 the values are shown as hex letters. I want it so that if I pass in binary 10 (8'b00001010), then HEX1 HEX0 should show 1 0, not 0 A (as it would in hex).
How can I convert it like this?
3 Answers
The problem you are having is quite a common one: how to convert a binary number to something called Binary Coded Decimal (BCD). In BCD each digit is stored in 4 bits, but those 4 bits are only used to represent the numbers 0-9 (hence the 'decimal' part of the name). This is an ideal format for outputting to 7-segment displays, screens, in fact anything that needs a decimal number to be displayed.
The simplest way of converting from binary to BCD is an algorithm called 'Shift-Add-3' or 'Double-Dabble' (two names for the same thing). Essentially you shift the binary number into the BCD digits one bit at a time, and before each shift you add 3 to any 4-bit BCD digit that holds 5 or more. This is basically a cheap way to make any digit that would reach 10 or more overflow into the next digit without much hardware.
Here is an example, stolen from this Wikipedia Page:
Double Dabble conversion of 243:

Hundreds  Tens   Ones   Shift in    Operation
0000      0000   0000   11110011    Initialization
0000      0000   0001   11100110    Shift
0000      0000   0011   11001100    Shift
0000      0000   0111   10011000    Shift
0000      0000   1010   10011000    Add 3 to ONES, since it was 7
0000      0001   0101   00110000    Shift
0000      0001   1000   00110000    Add 3 to ONES, since it was 5
0000      0011   0000   01100000    Shift
0000      0110   0000   11000000    Shift
0000      1001   0000   11000000    Add 3 to TENS, since it was 6
0001      0010   0001   10000000    Shift
0010      0100   0011   00000000    Shift
   2         4      3               BCD result
Building this process in Verilog is relatively straightforward. I'll leave it as an exercise for you.
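If you want a starting point, here is a minimal combinational sketch of the shift-add-3 loop for an 8-bit input; the module and signal names are my own, not something from the answer:

module bin8_to_bcd (
    input  wire [7:0] bin,
    output reg  [3:0] hundreds,
    output reg  [3:0] tens,
    output reg  [3:0] ones
);
    integer i;
    reg [19:0] shift;                // {hundreds, tens, ones, bin}

    always @* begin
        shift = {12'd0, bin};        // start with the binary value in the low bits
        for (i = 0; i < 8; i = i + 1) begin
            // add 3 to any BCD digit that is 5 or more, then shift left one bit
            if (shift[11:8]  >= 5) shift[11:8]  = shift[11:8]  + 4'd3;
            if (shift[15:12] >= 5) shift[15:12] = shift[15:12] + 4'd3;
            if (shift[19:16] >= 5) shift[19:16] = shift[19:16] + 4'd3;
            shift = shift << 1;
        end
        hundreds = shift[19:16];
        tens     = shift[15:12];
        ones     = shift[11:8];
    end
endmodule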
Tom Carpenter
The problem with Double-Dabble is that it was designed for software, so the solution is inherently serial. FPGAs and ASICs are parallel in nature: we need to use their strengths to our advantage.
While Double-Dabble works, the example given here uses 11 clock cycles after initialization to arrive at a 3-digit result from an 8-bit input. For an 8-digit result from a 27-bit input Double-Dabble requires 35 clock cycles. A reachable goal for an N-digit result is N-1 clock cycles. How? By doing long division in parallel.
For an eight-digit result from a 27-bit input, start by doing 9 subtractions in parallel: input1 minus 9000_0000, 8000_0000, 7000_0000, ..., 2000_0000, 1000_0000 and an implicit 0. Take the remainder from the largest factor whose result is not negative (test the top bit of each result), record that factor as the top digit, and use the remainder as the input to the next stage, which is again 9 subtractions in parallel: input2 minus 900_0000, 800_0000, 700_0000, ..., 200_0000, 100_0000 and an implicit 0. Repeat this procedure down to input7 minus 90, 80, 70, ..., 20 and 10. The remainder from the last stage is the ones digit.
Using this method requires seven clock cycles after initialization for an eight-digit result. Depending on the FPGA, this might be implemented with nine DSPs and 63 constants. Without DSPs, the width of the subtractions drops by three or four bits at every stage, so it does not take as much fabric as you might expect, and it still only requires seven clock cycles to complete.
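As a rough illustration of one such stage (the tens/ones split of a remainder below 100), here is a combinational sketch; a pipelined converter would chain one registered stage like this per digit, and all names and widths here are my own:

module tens_stage (
    input  wire [7:0] in,         // remainder in the range 0..99
    output reg  [3:0] digit,      // tens digit
    output reg  [7:0] remainder   // passed on to the ones digit (0..9)
);
    integer k;
    reg [8:0] diff;               // one extra bit so the sign can be tested

    always @* begin
        digit     = 4'd0;
        remainder = in;
        // try in - 90, in - 80, ..., in - 10; keep the largest factor
        // whose result is not negative (top bit clear)
        for (k = 9; k >= 1; k = k - 1) begin
            diff = {1'b0, in} - k * 10;
            if (!diff[8] && digit == 0) begin
                digit     = k;
                remainder = diff[7:0];
            end
        end
    end
endmodule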
A ROM table could work, if you like using up FPGA gates. Say you have a typical FPGA dev board with four 7-segment displays and you want to show unsigned values. That's 10,000 possible values at 4 x 7 = 28 bits each; round up to 16K entries of 32 bits and you have a 64 Kbyte ROM. (The 7-seg beside me also has a decimal-point LED, so 8 bits per character.)
Something like a 'character generator' from the stone age of ASCII terminals.
You can also give up one of the 7-segment digits to a minus sign and cover the range 9999 down to -999.
Filling the ROM table can be done in C or Python etc., using one of the methods above and converting the output to the FPGA ROM file format.
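A minimal sketch of how such a ROM might be used, assuming a file seg_rom.hex holding one 32-bit word (the four 7-bit segment patterns, zero-padded) for each value from 0 to 9999; the file name, format and module names are assumptions:

module rom_7seg_display (
    input  wire        clk,
    input  wire [13:0] value,    // 0..9999
    output reg  [27:0] segs      // {thousands, hundreds, tens, ones} segment patterns
);
    reg [31:0] rom [0:9999];
    reg [31:0] word;

    initial $readmemh("seg_rom.hex", rom);   // contents generated offline, as described above

    always @(posedge clk) begin
        word = rom[value];        // one clocked table lookup
        segs <= word[27:0];       // low 28 bits hold the four digit patterns
    end
endmodule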
At FPGA speeds this is slightly(!!) more than a bit faster than needed and thus a waste of gates, but if you have space why not?
Definitely fast enough for persistence-of-vision games.
Typically, though, if you already have a small embedded microprocessor, use one of the sequential routines above and let the micro handle the conversion as one of its tasks.