Friday, September 29, 2023

A Teeny Weeny SPICE Circuit Simulator

Ionsamhlóir Ciorcaid An-bheag (Irish for "a very small circuit simulator").

What and why?

I wrote a small SPICE circuit simulator to get over my fears of `RELTOL`, `ABSTOL` and time-step-too-small errors.

I'm at version `v0.8.0`, which has quite a nice set of basic features. It can read SPICE decks with circuit descriptions. It can execute some commands if they are listed in a `.control` block in the SPICE deck. It can do two types of analysis: DC operating point and transient. Device-wise, it can model resistors, capacitors and diodes. Sources supported are voltage and current sources (DC or sinewave).

It's written in Rust, cos that's what I like to use instead of C when I can. The source code is on GitHub here: tiny-spice-rs. See the README for details of how to simulate a circuit.

Subcircuits!

One of the things I'm most happy about is that it supports subcircuits! And the subcircuits can be parameterised! And parameters can be very simple one-identifier expressions!
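To give a flavour of what the parameter handling has to do, here's a wee Python sketch of the idea (not the actual tiny-spice-rs code): an override given on the instance line beats the subcircuit's declared default, and the chosen value is passed down a level at a time.

# Hypothetical sketch of parameter resolution during subcircuit expansion.
# Instance overrides win over .subckt defaults; values flow down the hierarchy.
SUBCKT_DEFAULTS = {
    "system":  {"cval": "10uF"},
    "rc_load": {"cvalo": "1nF"},
}

def resolve_param(subckt, name, overrides):
    """Return the value of parameter `name` for one instance of `subckt`."""
    if name in overrides:                     # value given on the X line wins
        return overrides[name]
    return SUBCKT_DEFAULTS[subckt][name]      # otherwise the .subckt default

# Xsystem1 ... system cval=1uF  ->  its rc_load sees cvalo = 1uF
cval = resolve_param("system", "cval", {"cval": "1uF"})
print(resolve_param("rc_load", "cvalo", {"cvalo": cval}))    # 1uF

# Xsystem2 ... system           ->  the default cval=10uF flows down instead
cval = resolve_param("system", "cval", {})
print(resolve_param("rc_load", "cvalo", {"cvalo": cval}))    # 10uF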

My working example is 3 copies of a fullwave rectifier system with parameterised loads. The SPICE for this circuit is shown below, as is a cartoon of the circuit.


Full-Wave Rectifier with parameterised subcircuits

* 3 instances of a diode bridge + RC load
* cap load in each instance parameterised and overridden from
*   the toplevel

V1 vstack1 gnd     SIN(0 5 1e3) ; input voltage
V2 vstack2 vstack1 SIN(0 2 2e3)
V3 vstack2 IN_p    SIN(0 1 3e3) ; flip to differentiate between "multi_"

* full-wave rectifier
.subckt bridge bp bn ba bb

  D1 bp ba
  D2 bb bp
  D3 bn ba
  D4 bb bn

  * Small caps across the diodes to prevent time-step-too-small
  CD1 bp ba 12pF
  CD2 bb bp 12pF
  CD3 bn ba 12pF
  CD4 bb bn 12pF

.ends

.subckt system sinp sinn soutp soutn cval=10uF
  Xbridge sinp sinn midnode soutn bridge
  Rd midnode soutp 1
  Xload soutp soutn rc_load cvalo={cval}
.ends

* Load
.subckt rc_load in1 in2 cvalo=1nF
* Split R so we have internal nodes
  Rl1 in1 la 200
  Rl2 la lb 300
  Rl3 lb lc 400
  Rl4 lc in2 100
  Cload in1 in2 {cvalo}
.ends

Xsystem1 IN_p gnd vp1 vn1 system cval=1uF
Xsystem2 IN_p gnd vp2 vn2 system ; DEFAULT cval=10uF
Xsystem3 IN_p gnd vp3 vn3 system cval=100uF

.control
*  option reltol = 0.001
*  option abstol = 1e-12

  tran 100ns 5ms
  option ; ngspice only shows new values after analysis

  plot v(IN_p) v(vp1,vn1) v(vp2,vn2) v(vp3,vn3); (ngspice)
.endc
ALT-TEXT: Circuit diagram showing 3 instances of a subcircuit. The supply to all three is a stack of sinewave sources at different frequencies and amplitudes. Each instance is itself built from subcircuits: a diode bridge rectifier, a series resistor and an RC load with a parameterised capacitor value. The capacitor value is passed down to the capacitor in the RC load subcircuit all the way from the toplevel instantiations.

These waveforms are the proof that it works.

ALT-TEXT: Waveforms from a transient simulation of the above 3-bridge circuit. The input 3-tone sinewave is shown, as are the voltages across the 3 RC load blocks. The different parameterised values for the three blocks result in different smoothing curves.

Where next?

Next, maybe something with reciprocity; that seems interesting. I think that reciprocity can be used in noise simulations to work out the contributors to noise at a certain node.

A simple waveform viewer would be nice, but I've no intention of writing one of those. Even though there's basic `.control` support, I don't do anything with `print` or `plot` commands.

Monday, June 3, 2013

VLSI CAD - Logic to Layout on Coursera

I recently finished this course on Coursera. It was excellent. A small review follows (I'm an electronics engineer by trade, so, y'know...)

Topics Covered

The VLSI-CAD: Logic to Layout course held the promise of enlightenment about the things that go on within a logic synthesis tool. (If you program but don't Verilog|VHDL - think of synthesis as a compiler, but a compiler that has to ultimately draw things). Although we got through a lot in the 8 weeks, it was obviously not exhaustive. Topics covered included:

  • computational boolean algebra - getting the computer to minimise logic expressions
  • tech mapping - how to take a logic expression and map it to actual gates for a library
  • placement of logic cells - surprising algorithm
  • routing of nets - this was the best part
  • timing - how to tell if a gate network will meet your expected clock rate, and how to enumerate the bad paths if not. Included how to account for the delays in the wires between gates too. This was, surprisingly for me, the second most interesting part.

Course keywords: Recursion, Heuristics, Shannon Cofactors.
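If, like me, you hadn't met Shannon cofactors before: any Boolean function can be split about one of its variables as f = x·f(x=1) + x'·f(x=0), and the two restricted functions are the cofactors of f with respect to x. A tiny Python illustration of my own (not course material):

# Shannon expansion: f = x*f(x=1) + x'*f(x=0).
def f(a, b, c):
    return (a and b) or c

def cofactor(func, value):
    """Fix the first argument of func to `value`."""
    return lambda b, c: func(value, b, c)

f_a1 = cofactor(f, True)     # positive cofactor, f with a=1
f_a0 = cofactor(f, False)    # negative cofactor, f with a=0

# The expansion agrees with f on every input combination.
for a in (False, True):
    for b in (False, True):
        for c in (False, True):
            expanded = (a and f_a1(b, c)) or ((not a) and f_a0(b, c))
            assert expanded == f(a, b, c)
print("Shannon expansion agrees with f on all 8 input combinations")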

Materials

The course materials consist of a bunch of video lectures that average about 20 minutes. Dr Rutenbar would [scribble notes|fill in blanks] on the slides as he spoke, which kept things moving. PDFs of the annotated slides were available for download. The lectures were as addictive as a box set of House or $your_favourite_tv_show. And a lot of the time I thought to myself, "how did I make it to ~$years as a digital designer without knowing this stuff? Why am I only finding out about Shannon Cofactors now?".

As well as the lectures, there are a bunch of boolean logic software tools used in the course. The idea is that you'd prepare a script, upload it to the Coursera servers, and after a bit it would show the results on the webpage. These tools are well described with an example or two in an accompanying PDF (which I found hard to navigate to at times), and a click-this-then-that video tutorial. Although the tools themselves were useful and interesting, I've unkind things to say about the web interface to these tools later.

Exams

A multiple-choice test at the end of each week, followed by a final multiple-choice exam, made up the grading for the course. Most of the questions were show-us-you-can-do-this, but more interestingly they'd throw in long-form questions. These longer questions would first explain how a certain technique you'd already encountered could be used to solve a different problem, and then ask you to do just that. These questions looked overwhelming at first glance, but I enjoyed those the most - I didn't find them a 'grind' like some of the other show-us-you-can-do-this type questions.

Some of the questions would encourage you to use some of the online tools already introduced. Nice touch.

Another nice thing about the weekly test was that you could see your result nearly immediately after you clicked submit. And better, when you reviewed your answers, the course's creators had 'anticipated' the wrong answers, tagging those answers with possible reasons why you may have arrived at them incorrectly.

A few teething problems with the weekly problem sets meant keeping an eye on the forums for clarifications and regrading notices. There was a bit of heartache on the forums about this, but I didn't mind too much. Except for one question on maze-routing where, after seeing the solution, I felt the lectures had been ambiguous. Some folk on the forum agreed.

Online Tools

Two types of tools were made available to students on the Coursera cloud: the boolean tools KBDD, Espresso and miniSAT; and two layout visualisation tools - placement and routing visualisers.

To use the boolean tools, you'd submit a script file via the web interface, and after a few moments you could read the output of the tool. Submissions to these tools were rate-limited to one go per minute. This was unfortunate, as KBDD is the most-used tool in the course and is the only one not available on the internet for download! Combined with the fact that the submit button redirected you to a page which was not the results page, this made using KBDD a bit of a chore.

On the other hand, the visualisation tools were mostly awesome. Especially the routing visualiser - through the power of HTML5, just drag your routing output onto the page, and it'll draw your routing on two layers. I giggled like a child! Here's a snapshot of wot i made (I'm so proud!)...

Programming Assignments

Shooting for the "Mastery" badge meant writing a computer program every two weeks. One of these programs was a lengthy KBDD script, and the other 3 could be written in the programming language of your choice. The general idea is that input files would be provided, you'd run them through your program, and upload the output to the Coursera cloud for marking.

These programming tasks were interesting and on the whole well explained. The placement and the router programs were the most interesting to me, probably because I overly-enjoyed seeing the visualisations of my programs' output.

Help!

I struggled with the placement program though. I struggled so much that I missed the first deadline. I managed to complete it before the end of the course, so I only got 50% of the marks for it. The algorithm to use for the assignment (recursive, naturally) was well described in the programming assignment doc, but my main problem was how to manage the data structures as I stepped down the recursion levels.

This brings up another problem with the course - I couldn't get much help with my program. Under the honour code, you are not meant to share your code or otherwise post it on the internet, reducing the chances of cheating if the course is offered again. I couldn't post code to get help on the forums, or examine a reference program at the end of the course to see how things should be done. And for these kinds of 'big' problems, the forums aren't really that useful, because understanding the issue and writing a response would take up too much of another busy student's time!

The forums, other than for 'deep' programming questions, were useful and the course staff kept an eye on things.

Overall

If you're interested in this stuff, and you like to dabble in programs, I'd recommend this course in a heartbeat. I had a brilliant time following it. I wish it were longer. Hopefully, they'll offer a follow-on course that mentions flip-flops! Or some simple Verilog parsing!

Tuesday, May 29, 2012

RTL Viewer Update

State of the Viewer

Things are progressing reasonably well with wxDebuggy: it now does a half-decent job of drawing some Verilog modules and the wiring between them -- and all this while limiting the number of crossovers!

As mentioned before, I wasn't too happy with the wire-crossing reduction results when using a straight version of the Sugiyama et al algorithm. The current revision of the RTL Viewer improves the crossover reduction using two techniques:

  1. The layer reordering stage of the Sugiyama et al. algorithm was tweaked using ideas found in (SFvHM09). With this tweak, the layout algorithm now knows that modules have ports and that these ports are in a fixed order.

  2. The orthogonal wire routing stage uses 'Greedy Assign' to place the vertical line segments of each wire on a unique track between the layers. This idea comes from (EGB04); a rough sketch follows below.
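It goes something like this (a simplified Python sketch, not the actual wxDebuggy code): walk the vertical segments of the wires and drop each one onto the first track whose existing occupants don't overlap its y-span, opening a new track when none fits.

# Simplified greedy track assignment for the vertical jogs between two layers.
def greedy_assign(spans):
    """spans: list of (ymin, ymax) tuples. Returns a track index per span."""
    tracks = []        # tracks[i] is a list of (ymin, ymax) already on track i
    assignment = []
    for ymin, ymax in spans:
        for i, occupied in enumerate(tracks):
            if all(ymax <= lo or ymin >= hi for lo, hi in occupied):
                occupied.append((ymin, ymax))
                assignment.append(i)
                break
        else:                                  # no free track: open a new one
            tracks.append([(ymin, ymax)])
            assignment.append(len(tracks) - 1)
    return assignment

print(greedy_assign([(0, 3), (2, 5), (4, 6), (0, 1)]))    # [0, 1, 0, 1]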

Stuff to Fix for 'Dishwater' Tag

  • Y co-ordinate assignment of the modules should be improved.
  • Long dummy edges should be kept straight.
  • Clock/reset-like signals that go to multiple modules in multiple layers need to be handled better.
  • Feedback wires are not drawn all that well.

Misc worries

  • RTL parser is very slow. The files I test on have basic RTL and wiring, and there are only about 12 of them, but it takes around 3 seconds for my desktop to parse them and build the necessary data structures.
  • Greedy assign may not be enough for more involved circuits - I may need to add the 'Sifting' bit too.

References

Tuesday, February 28, 2012

Experiences Using Jenkins for ASIC Development

I've come to appreciate that laziness is a superpower. When you notice that some routine task has become a chore, it's probably time to get the computer to do those things instead.

Imagine the scene. You're developing a chip so you're writing loads of RTL. You got bored tracking code versions, so you use a source code management (SCM) tool. Maybe SOS. And since you got fed up checking *all* your sims each time your design changes (cos, y'know, sometimes you break things) you looked into self-checking simulations. This is all good - computer does boring stuff and you do interesting stuff like figuring out how you should implement features.

But something is niggling at you. A whisper tells you that your computer could be doing more.

Why is it that it's left to you to launch these simulation suites every time something changes? You've forgotten to launch them for a while because you were knee-deep in some implementation. When you got around to launching them again, sims lay broken all around your office, whimpering and red. A code fix for one thing broke other things. You wanted to know sooner. Why didn't your computer tell you that things were broken?

You now want to make sure simulation suites are launched each time your design changes, but you're too lazy to do this yourself. Fortunately software engineers are constructive-laziness trailblazers, and have something useful for us. In this case it goes by the name of "Continuous Integration". Continuous Integration (CI) means polling your repo and running all your tests whenever any files are updated - automatically and usually with nice graphs.

In my place of employment, our group had hand-rolled an alpha-ish version of such a software tool, with no graphs, until we discovered that CI was a thing and that open-source CI tools existed. We chose Jenkins for reasons that are lost in the mists of time. Now we don't have to maintain our own CI tool - core competencies and all that.

Jenkins is a software butler that runs errands, or 'jobs', for you. These jobs have roughly three stages: a trigger stage, a build stage and an artifacts stage.

Jobs can be triggered by changes in your source code repo, or even periodically like a cron job. Jenkins has plugins that can talk to most source code management tools like SVN or CVS but not, sadly, SOS.

'Builds' are computer program compilations or, in our case, test suite runs. In fact, builds can be any task that can be called from a shell script.

In the Artifact Storage stage, you can instruct Jenkins to squirrel away interesting artifacts from a build, like test results or executables.

Once you start to get Jenkins to automatically do your dirty work, you get nice graphs of how things are getting along, like build times or test result trends. Jenkins will also show you which files have changed to trigger the build so you can quickly see what files are the culprits if sims start to fail.

***

At work we build mixed-signal chips, and we use SOS to manage everything about our designs: schematics, layout, RTL, synthesis scripts - the works. We run both digital (RTL-only) and analog (spice/RTL co-simulation) simulations at the toplevel. The vast majority of toplevel simulations are self-checking. But each time our RTL changes, we'd have to manually relaunch all of this stuff. Booorrrring! So we decided to try out a bit of Continuous Integration using Jenkins.

The first thing was to get Jenkins to poll SOS, the source code management tool. This was our first problem - there are no SOS plugins for Jenkins in existence on the web. None of us can Java, and our CAD department wouldn't commit to writing one for us, so it wasn't a good start.

But we could use the File System SCM plugin instead of a proper SCM plugin. The idea is that Jenkins is set up with its own SOS workarea for the project, then Jenkins is used as a glorified cron job to run an 'update' command on this workarea at ten-minute intervals. In effect, an "SOS Update" job triggered 6 times an hour; the build stage is a shell script that runs the SOS update command. For all other jobs, we can now use the File System SCM plugin to check against this SOS workarea to determine if those jobs need to be run again. It means that we've a bit of unnecessary file replication, but the SCM uses links so it's not too bad.

Next up was to get our RTL simulations running. Another Jenkins job was created to use the File System SCM plugin to poll the Jenkins-specific workarea to look for updates. Once triggered by a change, a build script launched all the RTL sims out on the compute farm and waited for the results to come in. The only changes made to the sim suite launch script were to ensure it could be run from any directory and that it produced the sim results in JUnit XML style. There are no artifacts as such from these sim suite runs, but Jenkins will read the JUnit XML files (once made aware of their existence) and remember the results in its database. The fact that our sims are self-checking is essential here.
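If you're wondering what that involves, a minimal JUnit-style results file is easy enough to knock together. A rough sketch (the test names and results dict here are made up for illustration):

# Sketch: write sim results as JUnit XML so Jenkins can track pass/fail trends.
from xml.sax.saxutils import escape

# None means the test passed; a string is the failure message.
results = {"tb_spi_basic": None, "tb_spi_error_inject": "CRC mismatch at 12.3us"}

with open("sim_results.xml", "w") as f:
    f.write('<?xml version="1.0" encoding="UTF-8"?>\n')
    f.write('<testsuite name="rtl_sims" tests="%d">\n' % len(results))
    for name, failure in results.items():
        f.write('  <testcase classname="rtl_sims" name="%s">\n' % escape(name))
        if failure is not None:
            f.write('    <failure message="%s"/>\n' % escape(failure, {'"': "&quot;"}))
        f.write('  </testcase>\n')
    f.write('</testsuite>\n')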

Co-simulations were set up in the same way - another Jenkins job to poll the SOS workarea and launch the co-sim suite, and Jenkins pointed to the JUnit results summary file.

We were filled with verification joy at this point. We'd a bunch of sims that were launched when any RTL or netlists changed. Automatically! These sims were run in their own workarea so they ran on exactly what was checked into the SCM, no more, no less, so no more forgetting to check in files. And we had traffic lights telling us the health of our design and some nice trend graphs.

But the whispers of automation were not quiet for long...

Sometimes we'd forget to netlist and our sims ran against out-of-date netlists. Sometimes we'd forget to update our synthesis scripts and our physical-design people would be sad. It's a lot of stuff to remember to do and the details are rarely documented accurately, if at all. Again, we turned to Jenkins for assistance.

Synthesis was the next task we automated. Setting a Jenkins job up to poll the SOS workarea and run synthesis was not a problem, and that might have been enough. But there is really no point in running things automatically if the results are not going to be examined in some way. What metrics could we check for a synthesis run? What about RTL errors, area and critical path slack for all clock domains? Cool. Scripts were written to extract these metrics from the log files and to create a results XML file that flagged out-of-bounds values in these metrics. Synthesis is now automatic and somewhat self-checking!
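The extraction scripts are nothing fancy; they're roughly this shape (the log format, metric names and limits below are invented for illustration, since the real report formats are tool-specific):

# Sketch: pull metrics out of a synthesis log and flag out-of-bounds values.
import re

LIMITS = {"area_um2": 250000.0, "min_slack_ns": 0.0}    # invented limits

def extract_metrics(log_text):
    area = float(re.search(r"^AREA:\s*([\d.]+)", log_text, re.M).group(1))
    slacks = [float(s) for s in
              re.findall(r"^SLACK\s+\S+:\s*(-?[\d.]+)", log_text, re.M)]
    return {"area_um2": area, "min_slack_ns": min(slacks)}

def check(metrics):
    failures = []
    if metrics["area_um2"] > LIMITS["area_um2"]:
        failures.append("area %.0f exceeds limit" % metrics["area_um2"])
    if metrics["min_slack_ns"] < LIMITS["min_slack_ns"]:
        failures.append("worst slack %.3f ns is negative" % metrics["min_slack_ns"])
    return failures    # one <failure> entry per item in the results XML

log = "AREA: 180432.7\nSLACK clk_main: 0.412\nSLACK clk_spi: -0.030\n"
print(check(extract_metrics(log)))    # ['worst slack -0.030 ns is negative']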

We were on a roll and the netlist problem would be next to fall. But there was an immediate problem as netlisting was traditionally a GUI-based click-this-then-that manual affair for us. One email to our CAD support group later and we had the solution - it *is* possible to netlist from the command line. This was Good News as anything we can run from the command line, we can get Jenkins to do! As all the other jobs fanned out from the SOS workarea update job, we modified it to include a netlisting step. Now we could be sure that all our simulations ran from only the freshest of netlists.

Automation of all these tasks is kinda a huge thing. We get more time to actually build the product rather than babysit a bunch of tasks. We get quick feedback on breakages. We've implicitly documented our processes for netlisting and checking synthesis results. If area suddenly bumps up, we just go to 'recent changes' to see which files were involved. We get a Jenkins dashboard showing us the up-to-the-10-minute health of our design where we can quickly see what's ok and what needs attention.

We're very happy with this. Now we're waiting to hear the whispers of automation again. Analog model-checking, maybe?

Epilogue

I was tempted for a second to finish this blog post with this, roughly:

After taking some time to set up Jenkins and making everything self-checking (including synthesis), we're getting designs out quicker and we're seeing fewer bugs in silicon.

But I was unhappy with it because it sounded like, well, bullshit. It does *feel* like we're producing better quality stuff faster, but without hard numbers it's all subjective. Although we do track these numbers (weeks and item count in issues lists) it's the comparisons that I don't understand. How do you compare time-to-tapeout numbers for different projects that have different levels of complexity and that start from different places? And how do you compare silicon bug rates for the same? And why do I have a funny feeling that software folks know?

Wednesday, January 19, 2011

DCC Firmware for Arduino

Firmware

So now that I had assembled the hardware, it was firmware time. I wanted to send an address:direction:speed string (e.g. "A001:F:S3") over the serial connection to the Arduino, and have the Arduino build the corresponding DCC packet and drive the H-Bridge accordingly.
The Arduino firmware I wrote to implement the DCC spec is interesting in two respects: it uses timer interrupts and it writes to the microcontroller ports directly. But I'm getting ahead of myself a little...

DCC Specification

Before going any further, we'd probably need to have a look at the DCC spec. DCC sends 1's and 0's as square waves of different lengths. A short square wave (58us * 2) represents a 1, and a longer one (>95us * 2) is a 0.
These 1's and 0's are then collected into packets and transmitted on to the rails. Each packet contains (at least):
  1. A preamble of eleven 1's
  2. An address octet. This is the address of the train you want to control on the layout.
  3. A command octet. This is 1 bit for direction and 7 bits for speed.
  4. An error checking octet. This is the address octet XORed with the command octet
Each of these sections is separated by a "0" and the packet ends with a "1" bit.
If a train picks up a control packet that is not addressed to it, the command is ignored - the train keeps doing what it was last instructed to do, all the while still taking power from the rails. When nothing has to change, packets are still broadcast on the rails so that the trains keep getting power; in this case either the previous commands are repeated or idle packets are sent.
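As a worked example of that structure (treating the command octet as the simple 1-bit-direction, 7-bit-speed layout described above, which glosses over the finer points of the full spec):

# Build the bit string for one control packet: address 1, forward, speed 3.
address = 1                      # address octet
command = (1 << 7) | 3           # direction bit (forward) + 7-bit speed
check   = address ^ command      # error-check octet: address XOR command

packet_bits = (
    "1" * 11 +                              # preamble: eleven 1s
    "0" + format(address, "08b") +          # address octet
    "0" + format(command, "08b") +          # command octet
    "0" + format(check, "08b") +            # error-check octet
    "1"                                     # end-of-packet bit
)
print(packet_bits)    # 11111111111 0 00000001 0 10000011 0 10000010 1 (without the spaces)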

Driving the H-Bridge

First, I had to figure out a way of driving the H-Bridge signals. Driving both legs of the H-Bridge incorrectly won't short out the power supply, but it will give ugly transitions on the rails, and DCC decoders may not be able to decode the packet. The H-Bridge control signals should be driven differentially - both must change at the same time. This ruled out using digitalWrite() to set pin states, for two reasons: it can only change one pin at a time, and it's too slow.
So I needed to directly manipulate a microcontroller digital port. I chose pins 11 and 12, which are both in PORTB. By directly manipulating PORTB with a macro, I could now change the pins at the same instant in time.
#include <avr/io.h>
#define DRIVE_1() PORTB = B00010000
#define DRIVE_0() PORTB = B00001000

When to use these macros was the next problem.

Timing

As the DCC spec specifies quite a tight timing requirement on the 1 and 0 waveforms, I decided I should use the timer on the Arduino's microcontroller. Using the timer, I could place the transitions on the outputs accurately. So I set up the timer so that the interrupt would trigger every 58us. To simplify things, I defined the time of a 0 bit to be twice that of the 1 bit, ie 116us between transitions. For example, if I wanted to send a 1, I would drive LO HI, and I'd drive LO LO HI HI to transmit a 0. The timer setup routine is shown below.
void configure_for_dcc_timing() {
/* DCC timing requires that the data toggles every 58us
  for a '1'. So, we set up timer2 to fire an interrupt every
  58us, and we'll change the output in the interrupt service
  routine.

  Prescaler: set to divide-by-8 (B'010)
  Compare target: 58us / ( 1 / ( 16MHz/8) ) = 116
  */

  // Set prescaler to div-by-8
  bitClear(TCCR2B, CS22);
  bitSet(TCCR2B, CS21);
  bitClear(TCCR2B, CS20);
  
  // Set counter target
  OCR2A = timer2_target;
   
  // Enable Timer2 interrupt
  bitSet(TIMSK2, OCIE2A); 
}
The interrupt service routine (ISR) for the timer is shown below. For accurate timing when using a count target for a timer, I have to reset the timer counter straight away. Straight after, I figure out which level I need to drive and drive it. The point is, there's a fixed number of processor cycles needed from when the ISR fires until I drive the pins. After this, I can be a little more relaxed about anything else I need to do during the ISR, like updating the pattern count or loading a new frame (explained later).
#include <avr/interrupt.h>

...

ISR( TIMER2_COMPA_vect ){
  TCNT2 = 0; // Reset Timer2 counter to divide...

  boolean bit_ = bitRead(dcc_bit_pattern_buffered[c_buf>>3], c_buf & 7 );

  if( bit_ ) {
    DRIVE_1();
  } else {
    DRIVE_0();
  }  
  
  /* Now update our position */
  if(c_buf == dcc_bit_count_target_buffered){
    c_buf = 0;
    load_new_frame();
  } else {
    c_buf++;
  }
};

Building Control Packets

There are two steps to getting a packet's UI data (UI: unit interval, one 58us LO or HI level on the rails) ready for transmission. First, the UI pattern must be constructed using the latest address, speed and direction data that the firmware has received from the serial link. Then, when the driver interrupt is ready for it, the packet is copied to a buffer area so that output data is never updated midway through the transmission of a packet. The picture on the right gives the general idea.
To keep things simple for the interrupt routine, I built a list of highs and lows that must be transmitted for a given packet. Now, each time the ISR fires it just outputs the next level in the list. For example, if I wanted to drive a packet of 1001, I'd actually be driving 12 UIs (LO HI, LO LO HI HI, LO LO HI HI, LO HI) on the pins. So I set up an array of bytes called dcc_bit_pattern to hold this HI LO HI ... sequence. It was sized so that it would hold the worst case packet length, transmitting all 0's.
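Working out that worst case from the packet structure described earlier (the buffer size the firmware actually uses may differ slightly):

# Worst-case packet length in UIs: every payload bit is a 0 (4 UIs each).
preamble  = 11 * 2                # eleven 1 bits, 2 UIs each
payload   = 3 * (4 + 8 * 4)       # three octets, each preceded by a 0 separator
end_bit   = 2                     # the trailing 1 bit
total_uis = preamble + payload + end_bit
print("%d UIs -> %d bytes" % (total_uis, (total_uis + 7) // 8))    # 132 UIs -> 17 bytes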
So after receiving a new direction instruction, I'd determine the frame data and write it to this packet buffer in UI format. All the while, I'd keep a count of the number of UIs in the packet, and when I'd finished building the packet, squirrel this final UI count away for use later. To build a packet from the address, speed and direction data, I call build_packet(), which in turn calls a general-purpose frame builder function called _build_frame(), shown next:
void _build_frame( byte byte1, byte byte2, byte byte3) {
   
  // Build up the bit pattern for the DCC frame 
  c_bit = 0;
  preamble_pattern();

  bit_pattern(LOW);
  byte_pattern(byte1); /* Address */

  bit_pattern(LOW);
  byte_pattern(byte2); /* Speed and direction */

  bit_pattern(LOW);
  byte_pattern(byte3); /* Checksum */

  bit_pattern(HIGH);  
  
  dcc_bit_count_target = c_bit;
  };
The byte_pattern() function takes a byte and converts it to a string of UIs. For example, given an address of 10, this is b0000_1010 in binary, and the byte_pattern() function would add the UIs {LO LO HI HI, LO LO HI HI, LO LO HI HI, LO LO HI HI, LO HI, LO LO HI HI, LO HI, LO LO HI HI} to the current packet being constructed.
The function byte_pattern() uses bit_pattern(), which really does all the donkey work, doing the actual logic-to-UI conversion. Starting at the position held in variable c_bit, bit_pattern() will lay down LO HI or LO LO HI HI for each bit and will increment the UI counter c_bit as it goes.
void bit_pattern(byte mybit){
    bitClear(dcc_bit_pattern[c_bit>>3], c_bit & 7 );
    c_bit++;
    
    if( mybit == 0 ) {
       bitClear(dcc_bit_pattern[c_bit>>3], c_bit & 7 );
       c_bit++;   
    }
    
    bitSet(dcc_bit_pattern[c_bit>>3], c_bit & 7 );
    c_bit++;
    
    if( mybit == 0 ) {
       bitSet(dcc_bit_pattern[c_bit>>3], c_bit & 7 );
       c_bit++;   
    }
    
}
The position of a given UI in the packet's byte array dcc_bit_pattern is decoded from the UI counter. The three LSBs, c_bit[2:0], are the position within the byte and the remaining MSBs are the byte address - for example, UI number 13 lives in byte 13>>3 = 1, at bit position 13&7 = 5. This explains the bitClear(dcc_bit_pattern[c_bit>>3], c_bit & 7 ) stuff that's going on both here and in the ISR.
When the packet is built and the driver interrupt is ready for it, the packet is copied to a buffer area so that a packet is never updated midway through being transmitted. The function load_new_frame() takes care of copying the new UI data and updating the buffered UI target count.

Reading Control Strings via Serial I/O

To read a control string from the serial port, I've used the Serial module and a finite state machine (FSM). The FSM detects a string in the form: "A" digit digit digit ":" "F" or "B" ":" "S" digit. If there's a handier way to do this, I'm all ears. The FSM diagram for this is shown below, with the red transitions being the main loop, and the dashed transitions being followed when there's an error. I snuck a few testmodes in there too: one so I could drive the rails constantly long enough to put a multimeter on them; and another to tweak the timer target count.
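For what it's worth, the same grammar can be written down as a regex. Here's a host-side Python check of the command string, just to make the format concrete (this is not part of the Arduino firmware):

# Parse "A" ddd ":" F|B ":" "S" d, e.g. "A001:F:S3".
import re

CMD_RE = re.compile(r"^A(\d{3}):([FB]):S(\d)$")

def parse_cmd(s):
    m = CMD_RE.match(s)
    if m is None:
        return None
    return int(m.group(1)), m.group(2), int(m.group(3))

print(parse_cmd("A001:F:S3"))    # (1, 'F', 3)
print(parse_cmd("A001:X:S3"))    # None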
Having the firmware controlled by strings passed through the serial port opens up some interesting capabilities. For instance, I didn't know the address of the train initially, so I wrote a small Python script to cycle through all the addresses and wait a while to see if the train responded (it turned out to be '1'):
#! /usr/bin/env python
""" Try to find the address of dad's train... """
from time import sleep
import serial
link = serial.Serial('/dev/ttyUSB0', baudrate=9600, timeout=2)

def search_address():
 for address in range(127):
  print "Address %03d" % (address)
  link.write("A%03d:F:S3" % address )
  sleep(10)
 
if __name__ == '__main__':
 search_address()
I also wrote one to move the train back and forth along the track:
#! /usr/bin/env python
from time import sleep
import serial

link = serial.Serial('/dev/ttyUSB0', baudrate=9600, timeout=2)
print "Link:", link
for i in xrange(10):
    link.write("A001:F:S5")
    sleep(10)
    link.write("A001:B:S6")
    sleep(14)

The Grand Opening

So after all this, you might be interested in what my dad thought of the whole endeavour. I took it back home and showed him, and he was like "Meh, that's nice I suppose. I'm more interested in the wireless control that's about these days...". Fair play, no point in using old tech, I suppose!


Saturday, January 15, 2011

Controlling Model Trains with an Arduino

Hear My Train a Coming

I was back home a few months ago, and I was in the auld fella's shed. He was giving me the grand tour of the model railway setup he was building (OO gauge, I believe). Dad's kinda more into the scenery, building buildings, and wiring the tracks rather than playing with the trains. But what interested me was the operation of the trains - he could have a couple of trains on the tracks and control them separately, going at different speeds and directions. But there's only two wires! What kind of magic was this?
Turns out it was Digital Command Control, or DCC.

The Golden Age of Steam

Back in olden times, the motors onboard model trains got their power (either AC or DC) from the tracks that the train ran on. This was cool if you had only the one train: you could control its speed by varying the voltage on the tracks, and, if you had a DC setup, its direction by flipping the polarity. But if you wanted to run two or more trains at the same time on the same tracks, they'd go at the same speed in the same direction. Not too realistic. Or fun, I can imagine.
That's unless you split up the track layout into separate zones electrically. So a train on zone 1 say, would go at a different speed from a train on zone 2. This setup worked but was very flakey in a number of dimensions. It was especially troublesome at the boundaries between these sections, usually at the points. Points, if you don't know, are those things on a railway which direct a train onto one branch of a track or the other. In model railway land, with the tracks being electrically conductive and all, the points are essentially DPDT switches which can end up shorting the zones if things are not properly controlled. I'm a bit fuzzy on the details here to be honest, so I'll continue...

DCC

Anyways, DCC is the solution to all this. It's quite cool. Instead of DC or a sinewave on the rails, you drive a digital control packet at roughly ±15V. The motor on the train takes its power from this DCC signal (rectifies it, I think), and a chip onboard each train decodes the control packet to set the direction and speed of the train. Since each DCC train can be programmed with an address, each train on a layout can be individually addressed and controlled, all without tricky zone wiring! Brill! A train that's not being addressed can still rectify the signal on the rails to power its motor, and it just keeps doing what it was last told to do.

I had a spare Arduino

This was very interesting to me. Digital control, eh? I had a spare Arduino - I'd brought my RGB LED project to show the nephew/nieces. Digital Control. A spare Arduino. A plan was forming. Could I possibly program my Arduino to digitally control my dad's trains?

Power

The first problem was electrical. The Arduino pumps out 5V, and the trains would require a swing of ideally ±15V and quite a bit of current. So I was thinking MOSFET H-Bridge switching a hefty power supply and controlled by the Arduino's outputs. But I had no MOSFETs to hand. Luckily, my dad had a few L293D's lying about (he's cool like that). So with a bit of stripboard and a chopped up DIL socket I had a quick and dirty power driver circuit ready to go. A dusty wall wart rated for 12V DC (giving me ±6V) sourced from the bottom drawer in my dad's shed would supply the necessary power. The general idea of the circuit is shown below:

I used two of the four H-Bridge legs in the L293D to steer the 12V across the tracks. By controlling inputs 1A and 2A carefully, I could put +12V on one rail and 0V on the other, and vice versa, giving a swing of ±6V. This is not exactly to spec, but seemed to work for two trains at least.

The Grand Plan

Now that I was happy with the physics, it was time to get metaphysical. The basic DCC spec defines a packet made up of the train address, its direction and its speed. So I thought it would be nice if I could send an address:direction:speed triplet from a computer GUI to the Arduino via the USB/serial port. My firmware on the Arduino would then convert this command triplet string into voltage waveforms on its output pins, that would drive the power H-Bridge made from the L293D to, in turn, control the train.

So that's what I did. I didn't get it completed at home though, so the auld fella tacked a few sections of track onto a length of 2x1 and let me borrow a train.
(Warning! as pointed out by Sergei in the comments, if you build this circuit on a breadboard and use it for long periods of time, the chip will heat up and melt your breadboard! So please build it on stripboard and connect pins 4,5,12 & 13 to as much copper as you can to act as a heatsink.)

Firmware

So when I got back to base, I started on the firmware. The firmware to implement the basic DCC spec is interesting enough to make a post of its own. So that's what I'll do.

Tuesday, June 15, 2010

SystemVerilog is a Big Mistake

I think we dropped the ball with SystemVerilog.
* It's based on old tech (but at least it has garbage collection). Why is it not more Python-like, y'know, easier?
* It's a mishmash of languages
* It's getting 'unattainable'. For example, if you want to plug away at it on your own, there's no free simulator that you can practice with.

Toward a Fully Featured Programming Language


The Verilog standard should've only been updated to make it more useful from a HARDWARE DESCRIPTION point of view. SystemVerilog is an effort to grow Verilog towards a more traditional OOP programming language - and that's what's back to front. We should've taken Python (yield) (or even Go - after all it's built around concurrency and it compiles PDQ (not TCL, please)) and grown it to include a Verilog DUT.
SV adds useful stuff like hashes and foreach loops that make it a lot more expressive - stuff that's empirically proven to increase productivity by 100.09%. But why not just start from a real programming language in that case? It's not like OOP testbenches do connectivity and timing like traditional RTL - SV testbenches expect you to call .run() on all your class instantiations and pass around handles to interfaces for connectivity. And since we're back to forking a load of .run() methods, why not start from a 'real' programming language, and allow it to twiddle the inputs of RTL descriptions of hardware?
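To make that concrete, here's the flavour of what I mean: a toy Python sketch where generator 'processes' poke at a dictionary standing in for a DUT. It's not a real simulator interface, just the shape of the thing.

# Toy cooperative 'testbench': generators yield to hand control back to a scheduler.
def clock_gen(dut, cycles):
    for _ in range(cycles):
        dut["clk"] = 1
        yield
        dut["clk"] = 0
        yield

def stimulus(dut, cycles):
    for i in range(cycles * 2):
        if dut["clk"]:
            dut["data"] = i          # drive only while the clock is high
        yield

def run(processes):
    # Round-robin scheduler: advance every live process once per timestep.
    while processes:
        processes = [p for p in processes
                     if next(p, StopIteration) is not StopIteration]

dut = {"clk": 0, "data": 0}
run([clock_gen(dut, 4), stimulus(dut, 4)])
print(dut)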

Adding Broken Things


Since SV is a huge amalgamation of things by an amalgamation of vested interests, things were added to the SV standard that should not have been.

program Block Fail


Also, what's with the program blocks? That's a fail right there. And we still have problems with time-0 initialisation: there are still possible race conditions at the start of a sim if you want a monitor module to have reasonable defaults and then change them at the start of an initial block.

final Blocks


I don't get these. They're supposed to let you do things at the end of the simulation. But like most Verilog procedural blocks, you've no visibility on the order in which they'll execute. So say you want to open a file at the end of a simulation and have all your testbench monitors write their status to it. Yay, so put a final block in each of your monitor blocks to write to the file... uh, hold on, how do you know that file has been opened? How do you keep the order consistent? Ah, I know, call a .summary() function/method for each of your monitors. But now to call these functions you need to know what monitors you have, so monitors have to register themselves somewhere because SV has no introspection. So now you've a single final block calling a bunch of .summary() functions, and if you've only one final block, what's the point? You may as well just have a function that you call at the end of your 'main()' initial procedure.

Open Verification? Hmmm...


SV testbench-building methodologies seem to be settling around the UVM - a nice 'open' standard that's being put together by the Accellera consortium. Yeah, you can download the code for free and have a peek at it, and maybe send some patches back to fix things that trouble you, but it ain't open, baby. If you have to pay loads of cash for a simulator to run this, I'm not sure that you can claim that it's open.
This is another good reason for going the {Real_Programming_Language, Verilog} route. With just a Verilog-2001 open source simulator, open source programming language and some tasty interfacing, you'd be able to run fancy testbenches on pre-existing RTL from the comfort of your own home. No expensive licenses needed. And more than that, you wouldn't have to limit the maximum concurrent jobs on the compute farm to 10 when doing regressions because co-workers write pleading e-mails to you not to hog the licenses...

Assertions, Coverage & Constrained Randomisation


I admit that I haven't used assertions, coverpoints or constrained randomisation in anger. And I suspect that this weakens my argument somewhat. But this could be done in a Python module instead of, y'know, bolting together several existing languages? I've a feeling I underestimate the amount of work needed to get all this stuff working. Yip, I admit it - this portion of my argument is weak.

Companies


Companies. Why would they do {Real_Programming_Language, Verilog} when they could build SystemVerilog to steer us away from the open-source Verilog simulators that were somewhat catching up, and make us all move to something where we need to study feature-vs-price matrices to see which portions of the bright new thing we can afford to run? Companies, I suppose I can have nothing against them, after all I do work for one! They have to make a buck, I suppose.

So...


It's interesting to think about what a "Real Programming Language + Verilog 2001" SystemVerilog would look like. What Real Programming Language would we use? Would it actually improve productivity?