Tuesday, May 29, 2012

RTL Viewer Update

State of the Viewer

Things are progressing reasonably well with wxDebuggy. It now does a half-decent job of drawing some Verilog modules and the wiring between them -- and all this while limiting the number of crossovers!

As mentioned before, I wasn't too happy with the wire-crossing reduction results when using a straight implementation of the Sugiyama et al. algorithm. The current revision of the RTL Viewer improves the crossover reduction using two techniques:

  1. The layer reordering stage of the Sugiyama et al. algorithm was tweaked using ideas from (SFvHM09). With this tweak, the layout algorithm now knows that modules have ports and that these ports are in a fixed order.

  2. The orthogonal wire routing algorithm uses 'Greedy Assign' to place the vertical line segments of each wire on a unique track between the layers. This idea comes from (EGB04); a sketch of the flavour of it follows this list.
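To give a taste of the track-assignment step, here's a minimal sketch in Python. It shows only the overlap-avoidance part -- giving each wire's vertical jog a track such that overlapping jogs never share one -- and leaves out the crossing-aware ordering described in (EGB04). The function name and data layout are invented for illustration; this is not wxDebuggy's actual code.

    def greedy_assign(segments):
        """Greedily assign vertical wire segments to tracks.

        `segments` is a list of (ymin, ymax) extents, one per wire's
        vertical jog in the channel between two layers. Returns a list
        of track indices (0 = leftmost) such that segments whose
        y-extents overlap never share a track.
        """
        order = sorted(range(len(segments)), key=lambda i: segments[i][0])
        track_top = []                      # largest ymax currently on each track
        assignment = [None] * len(segments)

        for i in order:
            ymin, ymax = segments[i]
            # Reuse the first track whose last segment ends before this one starts.
            for t, top in enumerate(track_top):
                if top < ymin:
                    track_top[t] = ymax
                    assignment[i] = t
                    break
            else:
                # No existing track is free here: open a new one.
                track_top.append(ymax)
                assignment[i] = len(track_top) - 1

        return assignment

    # Three jogs; the first two overlap, so they land on different tracks.
    print(greedy_assign([(0, 4), (2, 6), (5, 8)]))   # -> [0, 1, 0]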

Stuff to Fix for 'Dishwater' Tag

  • Y co-ordinate assignment of the modules should be improved.
  • Long dummy edges should be kept straight.
  • Clock/reset-like signals that go to multiple modules in multiple layers need to be handled better.
  • Feedback wires are not drawn all that well.

Misc worries

  • RTL parser is very slow. The files I test on have basic RTL and wiring, and there are only about 12 of them, but it takes my desktop around 3 seconds to parse them and build the necessary data structures. (A profiling sketch follows this list.)
  • Greedy assign may not be enough for more involved circuits - I may need to add the 'Sifting' bit too.
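On the parser speed: before optimising anything, it's worth letting the profiler say where the time actually goes. A minimal sketch with cProfile, where `parse_files` and `rtl_file_list` are hypothetical stand-ins for the parser's real entry point:

    import cProfile
    import pstats

    # Profile one full parse-and-build run and dump the stats to a file.
    cProfile.run("parse_files(rtl_file_list)", "parse.prof")

    # Show the 20 functions with the largest cumulative time.
    pstats.Stats("parse.prof").sort_stats("cumulative").print_stats(20)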

References

Tuesday, February 28, 2012

Experiences Using Jenkins for ASIC Development

I've come to appreciate that laziness is a superpower. When you notice that some routine task has become a chore, it's probably time to get the computer to do it instead.

Imagine the scene. You're developing a chip, so you're writing loads of RTL. You got bored tracking code versions, so you use a source code management (SCM) tool. Maybe SOS. And since you got fed up checking *all* your sims each time your design changed (cos, y'know, sometimes you break things), you looked into self-checking simulations. This is all good - the computer does the boring stuff and you do the interesting stuff, like figuring out how you should implement features.

But something is niggling at you. A whisper tells you that your computer could be doing more.

Why is it left to you to launch these simulation suites every time something changes? You forgot to launch them for a while because you were knee-deep in some implementation, and when you got around to launching them again, sims lay broken all around your office, whimpering and red. A code fix for one thing had broken other things. You wanted to know sooner. Why didn't your computer tell you that things were broken?

You now want to make sure the simulation suites are launched each time your design changes, but you're too lazy to do this yourself. Fortunately, software engineers are constructive-laziness trailblazers and have something useful for us. In this case it goes by the name of "Continuous Integration". Continuous Integration (CI) means polling your repo and automatically running all your tests whenever any files are updated - usually with nice graphs, too.

In my place of employment, our group had hand-rolled an alpha-ish version of such a tool (with no graphs) until we discovered that CI was a thing and that open-source CI tools existed. We chose Jenkins for reasons that are lost in the mists of time. Now we don't have to maintain our own CI tool - core competencies and all that.

Jenkins is a software butler that runs errands, or 'jobs', for you. These jobs have roughly three stages: a trigger stage, a build stage, and an artifact storage stage.

Jobs can be triggered by changes in your source code repo, or even periodically like a cron job. Jenkins has plugins that can talk to most source code management tools like SVN or CVS but not, sadly, SOS.

'Builds' are computer program compilations or, in our case, test suite runs. In fact, a build can be any task that can be called from a shell script.

In the Artifact Storage stage, you can instruct Jenkins to squirrel away interesting artifacts from a build, like test results or executables.

Once you get Jenkins to do your dirty work automatically, you get nice graphs of how things are getting along, like build times or test result trends. Jenkins will also show you which files changed to trigger each build, so you can quickly see which files are the culprits if sims start to fail.

***

At work we build mixed-signal chips, and we use SOS to manage everything about our designs: schematics, layout, RTL, synthesis scripts - the works. We run both digital (RTL-only) and analog (SPICE/RTL co-simulation) simulations at the toplevel. The vast majority of the toplevel simulations are self-checking. But each time our RTL changed, we'd have to manually relaunch all of this stuff. Booorrrring! So we decided to try out a bit of Continuous Integration using Jenkins.

The first thing was to get Jenkins to poll SOS, our source code management tool. And here we hit our first problem - there are no SOS plugins for Jenkins anywhere on the web. None of us can Java, and our CAD department wouldn't commit to writing one for us, so it wasn't a good start.

But we could use the File System SCM plugin instead of a proper SCM plugin. The idea is that Jenkins is set up with its own SOS workarea for the project, and Jenkins is then used as a glorified cron job that runs an 'update' command on this workarea at ten-minute intervals. In effect, an "SOS Update" job is triggered 6 times an hour, and its build stage is a shell script that runs the SOS update command. All other jobs can now use the File System SCM plugin to check against this SOS workarea to determine whether they need to be run again. It means we've a bit of unnecessary file replication, but the SCM uses links so it's not too bad.
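The build step of that update job is tiny. Here's a sketch of it in Python, assuming a periodic trigger (Jenkins' cron-style schedule, something like */10 * * * *) kicks it off. The `soscmd update` invocation and the workarea path are assumptions for illustration, not the exact commands we use:

    import subprocess
    import sys

    WORKAREA = "/projects/jenkins/chip_top"   # hypothetical Jenkins-owned workarea

    # Refresh the workarea. A non-zero exit code marks the Jenkins build
    # as failed, so a broken update shows up on the dashboard.
    status = subprocess.call(["soscmd", "update"], cwd=WORKAREA)
    sys.exit(status)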

Next up was getting our RTL simulations running. Another Jenkins job was created, using the File System SCM plugin to poll the Jenkins-specific workarea for updates. Once triggered by a change, a build script launched all the RTL sims out on the compute farm and waited for the results to come in. The only changes made to the sim suite launch script were to ensure it could be run from any directory and that it produced the sim results in JUnit XML style. There are no artifacts as such from these sim suite runs, but Jenkins will read the JUnit XML files (once made aware of their existence) and remember the results in its database. The fact that our sims are self-checking is essential here.
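The JUnit XML bit sounds fancier than it is: Jenkins just needs a results file in roughly the shape below. A minimal sketch of a writer, where the result tuples and file name are invented for illustration:

    from xml.sax.saxutils import quoteattr

    def write_junit_xml(results, path="sim_results.xml"):
        """Write sim results as a JUnit-style XML file for Jenkins.

        `results` is a list of (name, passed, message) tuples -- whatever
        your launch script already knows about each sim.
        """
        failures = sum(1 for _, passed, _ in results if not passed)
        with open(path, "w") as f:
            f.write('<?xml version="1.0" encoding="UTF-8"?>\n')
            f.write('<testsuite name="rtl_sims" tests="%d" failures="%d">\n'
                    % (len(results), failures))
            for name, passed, message in results:
                f.write('  <testcase name=%s>\n' % quoteattr(name))
                if not passed:
                    f.write('    <failure message=%s/>\n' % quoteattr(message))
                f.write('  </testcase>\n')
            f.write('</testsuite>\n')

    write_junit_xml([("uart_tx_basic", True, ""),
                     ("spi_loopback", False, "data mismatch at 12ns")])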

Co-simulations were set up in the same way - another Jenkins job to poll the SOS workarea and launch the co-sim suite, and Jenkins pointed to the JUnit results summary file.

We were filled with verification joy at this point. We'd a bunch of sims that were launched whenever any RTL or netlists changed. Automatically! These sims ran in their own workarea, so they ran on exactly what was checked into the SCM - no more, no less - so no more forgetting to check in files. And we had traffic lights telling us the health of our design, and some nice trend graphs.

But the whispers of automation were not quiet for long...

Sometimes we'd forget to netlist, and our sims ran against out-of-date netlists. Sometimes we'd forget to update our synthesis scripts, and our physical design people would be sad. It's a lot of stuff to remember to do, and the details are rarely documented accurately, if at all. Again, we turned to Jenkins for assistance.

Synthesis was the next task we automated. Setting up a Jenkins job to poll the SOS workarea and run synthesis was not a problem, and that might have been enough. But there's really no point in running things off automatically if the results are not going to be examined in some way. What metrics could we check for a synthesis run? What about RTL errors, area, and critical path slack for all clock domains? Cool. Scripts were written to extract these metrics from the log files and to create a results XML file that flagged out-of-bounds values. Synthesis is now automatic and somewhat self-checking!
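Those extraction scripts are nothing more exotic than regexes over log files. A sketch of the idea in Python -- the log line formats and the limits below are invented, so match them to whatever your synthesis tool actually reports:

    import re

    MAX_AREA = 250000.0   # um^2; hypothetical area budget
    MIN_SLACK = 0.0       # ns; any violated path is an error

    def check_synthesis_log(path):
        """Return (metric, in_bounds, detail) tuples for one synthesis log."""
        text = open(path).read()
        checks = []

        # e.g. "Total cell area: 231045.2"
        m = re.search(r"Total cell area:\s*([\d.]+)", text)
        area = float(m.group(1)) if m else None
        checks.append(("area", area is not None and area <= MAX_AREA,
                       "area = %s um^2" % area))

        # e.g. a timing block per clock domain ending in "slack -0.12"
        for clock, slack in re.findall(r"Clock (\w+).*?slack\s+(-?[\d.]+)",
                                       text, re.DOTALL):
            checks.append(("slack_" + clock, float(slack) >= MIN_SLACK,
                           "slack = %s ns" % slack))

        return checks

Each tuple can then be dumped with the same JUnit-style writer shown earlier, so a busted area budget shows up in Jenkins just like a failing sim.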

We were on a roll, and the netlist problem would be next to fall. But there was an immediate snag: netlisting had traditionally been a GUI-based, click-this-then-that manual affair for us. One email to our CAD support group later and we had the solution - it *is* possible to netlist from the command line. This was Good News, as anything we can run from the command line, we can get Jenkins to do! Since all the other jobs fanned out from the SOS workarea update job, we modified that job to include a netlisting step. Now we could be sure that all our simulations ran on only the freshest of netlists.

Automation of all these tasks is kinda a huge thing. We get more time to actually build the product rather than babysit a bunch of tasks. We get quick feedback on breakages. We've implicitly documented our processes for netlisting and for checking synthesis results. If area suddenly bumps up, we just go to 'recent changes' to see which files were involved. And we get a Jenkins dashboard showing us the up-to-the-10-minute health of our design, where we can quickly see what's OK and what needs attention.

We're very happy with this. Now we're waiting to hear the whispers of automation again. Analog model checking, maybe?

Epilogue

I was tempted for a second to finish this blog post with something like this:

After taking some time to set up Jenkins and making everything self-checking (including synthesis), we're getting designs out quicker and we're seeing fewer bugs in silicon.

But I was unhappy with it because it sounded like, well, bullshit. It does *feel* like we're producing better quality stuff faster, but without hard numbers it's all subjective. And although we do track these numbers (weeks spent, item counts in issues lists), it's the comparisons that I don't understand. How do you compare time-to-tapeout numbers for different projects that have different levels of complexity and that start from different places? How do you compare silicon bug rates for the same? And why do I have a funny feeling that software folks know?