We learned many lessons as the project progressed from the initial plan and aggressive
schedule to the final working chess machine. The following are some of the important
lessons that we each learned:
Don't expect people to understand everything you think is "obvious."
Don't underestimate your own stupidity.
Don't overestimate the amount of free time you will have at the end of the term.
Don't feel bad about prodding people and making them work. "It is better to
be feared than loved," said Machiavelli.
Keep the schedule flexible and updated; once you lose it, you lose track of where
you are and you don't know what is left to do.
Information is vital to keeping everyone up to date, and not stuck in some no-work limbo.
Have set times to work. Make people work in pairs, so that they expect other people to show up.
I assumed that people in the group would work at the level I expected and would yelp
early if they had problems.
Ask for help when you are stumped and help others when they are stumped. We are
suffering through it together.
Reward yourself and your fellow lab rats with a good grain-based beverage. Yummy.
As we started, Jon assigned people to tasks and found that some people needed help or
desperately hated what they were doing. This ineffective skill matching took a little
while to sort out, but I think we're now heading in the right direction.
Basically, I think I was a poor choice to wire up the board. It seems to me that anybody
else could have done it better and faster. But of course, all this is in hindsight.
The danger of larger groups has been a significant lesson.
With five people, it is nearly impossible to find a time when everyone can meet, and even
when there is time it is difficult to ensure that the whole group is on the same page.
Five is too many people to be working in the lab at one time, at one bench. While each of
us working on a separate task at the bench, concurrently, is an exercise in tight spaces,
all of us actively debugging is an exercise in futility. Compared to previous lab projects
that I have done with one or maybe two others, each of us is working on less of the total
project; consequently it is easier to drop the ball, since there is a greater feeling of
insignificance. The open-ended nature of the project and the self-defined timetables
make it very easy for the amount of work accomplished to slip. The end of the course has
been very frustrating. It has become increasingly apparent where our schedule slipped and
what might have prevented it. I wish that I had a clearer image of how the various pieces
of the project were fitting together and what was still to come throughout the course. If
I had seen a clearer path of dependencies, I would not have allowed myself to neglect
working on eval to do general debugging and creating other support programs. I would have
gotten it to a useable state and then worried about moving the rest of the project along.
I have learned that you can't expect people to be at the
same level in the project as you are. It may be your most important class, while
somebody else in the group may have it way low on the priority scale. That's
the way life is, and you have to figure out how to balance that relationship so that
everyone is working enough.
No matter how long you think it will take you to write code, you are always wrong.
Talk to people in your group - check in with them, ask them what they are doing. Learn
as much about the project as you can.
If you are having problems communicating with someone in the group, sit down and talk
it out till you understand each other.
If someone in the group isn't listening to you and really making you angry, let
them know, because they probably don't realize they are overlooking what you have to say.
Make sure that everyone is being included in the group and that everyone has the
important information. It really helps to have more than one person know what is going on
in any given aspect of the project.
If there's one thing that I learned in this project more
than anything else, it is how to allot time to different aspects of a project and plan in
advance for problems that could arise. This was most evident in the HCI for the
project, since it was one of the most time-intensive parts of the overall project. When
we initially planned out a schedule, none of us would have thought that we would spend
over a month trying to get the LCD panel working, and then abandon it with very little
time remaining for a completely new option. None of us would have thought that shared
memory would have been such a problem to get working, and none of us would have thought
that the 68HC11 and FPGA wouldn't want to work together.
We started the project with deadlines that were very ambitious, and continuously found
ourselves pushing those deadlines back week after week. It wasn't that we as a group
were "slacking" in our work on the project; it was just taking us two or
three times as long to complete a particular task, if we completed it at all (cough,
cough, LCD, cough, cough).
I also learned that I need to depend more heavily on the other members in my group to
compensate for areas where my skills are not very strong, mainly in the area of writing C
code for the 68HC11.
I've been learning about wiring and designing systems that physically exist:
there is a large gap between theory and reality. In other words, simulation != real thing.
Don't underestimate the magnitude of hardware implementation issues.
I understand the architecture of both the HC11 processor and the Altera FPGA. Running
the cycle-count simulations of the HC11 and viewing the profiling information shows how
some of the processor's limitations change the profiling data percentages. Limited
registers mean lots of memory accesses. Also, since the HC11 is an 8-bit processor, many of
the 32-bit ints in the original code were ported to more memory-efficient 8- or 16-bit chunks.
The value of #ifdef, #else, #endif, and #define in C, and of `ifdef, `define,
`else, and `elsif in Verilog, is extensive.
A good logic analyzer is better than a makeshift one. Thanks for helping us upgrade, Mom!
It helped us get things to work.
The HC11 is slow. Whoop. Hiware is a decent tool, but we really can't expect
something like it to be at the same level as VC++ or the mainstream applications we are
used to. I'd expect most embedded tools to be at a similar level of development and polish.
Dissecting portions of the chess algorithm has taught me
some of the finer points of chess rules, conventions, and etiquette. I am familiar with
the capabilities of the FPGA and its ability to mimic complex custom hardware, though
the one lesson I had expected to get early on from it still eludes me: how big is
it? Although I now feel that it is a decent-sized FPGA, I would prefer a larger one so
that I could stop removing functionality from the Verilog in an attempt to squeeze another
adder onto the chip. I still lack an intuitive understanding of how much space typical C
code consumes. The 68HC11 and memory are much the same way: I have an ever-increasing
comfort with their pin-outs and the logic behind their interconnections.
During the latter half of the course, things remembered gave way to new details and
nuances of the tools. Unexpected problems with combinatorial loops in Synplify
showed me the respective uses of the = and <= operators. The differences between
simulatable and synthesizable Verilog became painfully obvious as Synplify showed how poorly
it dealt with some Verilog structures, such as for loops, which it claimed to support. The
various layout views in Synplify and MaxPlus II were useful for seeing whether the
Verilog code was properly implementing the design and for verifying that signals were properly
wired and available at the expected time. The co-simulator was particularly useful for
testing system integration and the functionality of various modules in a larger scope than
simple test cases sent to the module. The co-simulator made it easier to test the overall
functionality of the eval module. Initial testing had taken place solely in Verilog, by
running the module and sending it a board state, which was then compared to output from
the original code. Testing the eval code with the co-simulator quickly exposed a bug in the
module (it only initialized once per instantiation) that had not appeared in
stand-alone testing. That bug would have been significantly harder to track down had the
system been running on the FPGA.
We had a problem with the FPGA corrupting the lines it shared with the HC11. For an
unknown reason, input lines to the FPGA were pulled low. To solve this problem, single
input lines, such as R/W and chip enable, were moved from IO pins to dedicated input pins.
The address lines, which are wide inputs, could not all be moved to dedicated input pins on
the FPGA. Instead, we changed these lines in the FPGA from inputs to inouts. This
guaranteed these lines were tri-stated so that both the FPGA and HC11 could download.
Related to this, we also moved our clock input to the dedicated clock pin.
Another interesting bug we ran into was that an SRAM failure
message occasionally showed up when we attempted to download to the FPGA. To work
around this, we wired up a switch to the clock input, so we can download to the Altera
first, then turn on the clock so the HC11 will download.
The logic analyzer made it possible for us to get the memory to work; without it we
were just shooting in the dark.
We also firmly established the reason for commenting code: once Jon disappeared, it
was tough to change the code appropriately.
Also, the timing issues we ran into throughout this project make me very happy I took
There were three main technical things that I learned while
working on the project:
- I expanded my knowledge of Visual Basic programming by learning how to interface with
the serial port of the PC using the MSComm Control.
- I learned never to use the MEG12864 LCD panel, because the KS0708 display controller
chip is poorly documented and doesn't perform the functions that the documentation
claims are feasible.
- I learned how slow a 68HC11 actually runs compared to the PCs of today. Most of the
errors in communication that the software HCI was having when communicating with the
68HC11 related to the 68HC11 not being ready to receive the next instruction when it was
sent. A lot of delay loops needed to be added to the Visual Basic code to slow it down so
that it could properly communicate with the 68HC11.
Assumptions. We assumed we could build our parts individually and then combine things easily on a
theoretical interface that "should" be feasible. Build and test the interfaces. We
ended up throwing out work that we couldn't use, because an interface we assumed would
work became a nightmare, and because of physical project constraints (we can't fit
250% of an FPGA on an FPGA? Or we can't get a 4 MHz unit to work at 8 MHz?)
We assumed that we could get the hardware units to be small and to work faster than the
HC11. Not gonna happen. Designing an efficient FPGA configuration is in itself an art, and
entire years could be dedicated to optimizing one. This cost us more speed and more time.
Incremental progress. Very important. I tried to get so many complicated parts together
so that we could gradually integrate. It turns out that in many cases the integration takes
the most time, even with nice interfaces and clear protocols. Test the integration parts
first so that they are known to be feasible, and then build the parts that depend on them.
This project has really made clear to me the fine but
distinct line between hardware and software and their cooperative relationship: there
isn't one. Hardware to software is a spectrum that runs from the soldering iron and
wire-wrap gun to Visual Basic coding at a computer.
The co-simulator was very useful, but there were several aspects of its
structure and interface that were cumbersome. The errors that "ncupdate" would
return when it was attempting to compile and link Verilog code were cryptic at best.
Luckily, it did often at least provide line numbers, but even this little clue was
occasionally absent from the error. The fixed width of the wires that it supported writing
and reading was a problem. I would have preferred the co-simulator to support wires of
any width, as is the custom in Verilog. This would have made it easier to read values off
of wires that were less than as well as greater than 32 bits. It would have alleviated
problems with some negative values stored in various-sized registers in
the Verilog that, when transferred to the C portion of the project, became large random
positive numbers because the upper bits were filled with zeros, causing the sign bit to
change from 1 to 0.
The co-simulator is slow with larger, complex functionality.
Because of the lack of units (and the switch from shared memory to memory-mapped IO) I was
unable to test the unified top-level hardware unit "seamlessly" like I wanted. I
ended up treating the co-sim as a verification tool, running the simulation alongside the
software version (for eval and attack) to verify results. I tested gen by using it; except
for a few interfacing issues it was very solid.
One issue was the inability to write "wide" wires to the co-simulation. We had
to break up wires in the Verilog in order to write more than 32 bits, which was very much
a pain when the original version had 60-bit datapaths.
Simon went over multi-threading and the examples for using the co-simulator with
multi-threading. This confused me. Our implementation didn't explicitly need
multiple threads (we ended up using only one HC11).
Examples of using memory or possibly templates for using it would have helped.
Make doing something interesting part of Demo 0. Actually, make Demo 0 something such
that each member of the group has to do something, as opposed to just a few.
First lesson: don't be afraid of the tools; they make your
life easier! I've been playing with every tool we've got.
I became fairly familiar with the Altera hardware architecture, and am starting to look
into the tools' editors for optimization approaches. I was able to come up with some very
neat optimization methods when working at the gate level this summer; maybe we can
do the same tricks here.
I never knew that there are tools to compile Verilog to a native binary.
With respect to the CAD tools used in this project, the most important lesson
is to rigorously check the configuration of each software package being used.
There were several different places where specific details, such as the connection being
used or the make and model of the hardware, needed to be properly configured in each
application. If one of these settings is missing, the computer will often proceed happily
and often appear to have successfully completed the task. This is most cumbersome when
downloading to hardware, since there are so many places where an error could give that
type of result, and my instincts always make me first consider the board and wiring, then
the computer. Verilog is not nearly as new an experience as working with Synplify and
MaxPlus. Consequently, Verilog's teaching is not so much "lessons learned"
as it is things remembered. It has forced me to remember many little details about how it
interprets data, handles concurrent timing, and how to go about debugging. Then there are
the details which have cost time in the past and continued to this year, such as
representing high impedance as a z, not an x, and always ensuring that the data structures
are large enough (bit width) to store the desired information.
I learned that the Hi-Wave program can be very particular about when it
wants to download and what it wants to do. Sometimes it will load correctly, and sometimes
you have to try about 3 or 4 times before it does anything.
Copyright © 1999, Scooby Doo Gang.