Luke Maurits' bloghttp://www.luke.maurits.id.au/blog/Luke Maurits' blog of assorted geekery.Luke MauritsMinimising ATTiny ADC power consumptionhttp://www.luke.maurits.id.au/blog/post/minimising-attiny-adc-power-consumption.html<p>This post is a quick note on making full use of the power saving settings for the analog-to-digital converter in ATTiny chips. This is based on my experience tinkering with the ATTiny84 but I suspect that it applies pretty broadly across the family, including to the more popular ATTiny85.</p>
<p>The ADC is one of the most power-hungry peripherals on these chips, and if you are trying to achieve an ultra-low-power sleep state for a battery- or solar-powered application it's important to shut it down when it's not in use. Somewhat confusingly, there are two options provided to facilitate this. You can disable the ADC by clearing the <code>ADEN</code> bit in the <code>ADCSRA</code> register (which, to my understanding, stops the ADC clock running but leaves the ADC powered up), but you can also actually power down the ADC by setting the <code>PRADC</code> bit in the <code>PRR</code> power reduction register (<a href="http://www.nongnu.org/avr-libc/user-manual/group__avr__power.html">AVR libc provides convenience functions</a> for doing this, <code>power_adc_disable()</code> and <code>power_adc_enable()</code>). Disabling the ADC seems to provide a much greater power saving than powering it down (i.e. the ADC consumes very little power when the clock is not running), but if you are really gunning for nanoamp consumption you should do both to get maximum savings.</p>
<p>I'm writing this mainly to emphasise that when doing both of these things you need to pay attention to the <em>order</em> in which you do them for things to work as expected. The datasheet is almost explicit about this, saying "The ADC must be disabled before shut down" (i.e. write to <code>ADCSRA</code> before you write to <code>PRR</code>). What it <em>doesn't</em> say is that, when bringing the ADC back up, you need to do the reverse and restore power before enabling it (i.e. write to <code>PRR</code> before you write to <code>ADCSRA</code>). Arguably this is obvious, but it's easy to overlook, especially if you do what I did: write your "power up" code by copying and pasting your "power down" code and reversing the bit setting/clearing logic without reversing the order of the lines. As far as I can tell, if you try to enable a powered-down ADC, your write to <code>ADCSRA</code> will simply be ignored. When you then power it up, <code>ADEN</code> remains cleared, and any reads from <code>ADCH</code> or <code>ADCL</code> will simply return the value of the last successful conversion, making it look like your ADC is "stuck". So be careful!</p>2017-04-17T22:02:27+12:00Luke MauritsUsing the alarms on the MCP7940Nhttp://www.luke.maurits.id.au/blog/post/using-the-alarms-on-the-MCP7940N.html<p>Lately I've been tinkering with an ATTiny-based microcontroller project, exploring the possibilities for ultra-low-power operation after being inspired by <a href="http://heliosoph.mit-links.info/solar-arduino/">this wonderful series of blog posts by Heliosoph</a>. Part of the setup is using a real time clock chip to periodically wake the microcontroller from a very deep sleep state. The darling RTCs of the hacker/maker world are unquestionably the DS13XX chips from Maxim, most commonly the DS1307 - these are the chips you'll find for sale at places like Sparkfun, Adafruit, Tayda, etc. 
However, I prefer to use the <a href="https://www.microchip.com/wwwproducts/en/MCP7940N">MCP7940N from Microchip</a>.</p>
<p>The MCP7940 is considerably cheaper than any of the DS13XX chips, at least from my suppliers, which is my main reason for using it. However, it also has excellent power specs (works down to 1.8V and draws 1.2 microamps at 3.3V, compared to the DS1307 which requires 4.5V and draws 200 microamps - admittedly the DS1337 is much more closely matched) and a lot of nice additional features, like a digital trimmer which lets you correct for the tolerance of the quartz crystal, a multifunction output pin which can generate alarm interrupts or squarewaves at various frequencies, and logging timestamps to battery-backed SRAM, recording when the chip switches between its main power supply and the battery backup. Paradoxically, I think it is simultaneously the cheapest <em>and</em> the most featureful DIP-packaged RTC chip on the market. Of course, it has some drawbacks, too, which is presumably the reason it is not widely used. The DS13XX chips have internal capacitors for their oscillator circuit, whereas these need to be external components on the MCP7940. Microchip also recommend (in a <a href="http://ww1.microchip.com/downloads/en/DeviceDoc/20002337B.pdf">guide for those migrating from the DS1307 to the MCP7940</a>) connecting the backup battery to the RTC via a diode, resistor and capacitor network, whereas the DS13XX chips permit a direct connection. The need for these external components offsets the cheaper chip price to some extent, but I still think the MCP7940 has the lower total BOM cost.</p>
<p>However, the biggest drawback is that the programming interface for the MCP7940's alarms is frankly a bit of a pig to work with, which is the motivation for this post. One user on Microchip's forum <a href="https://www.microchip.com/forums/m881867.aspx">declared the alarms to be "unusable"</a>. The specific complaint was that it's very hard to set an alarm to fire each day at a specific time, say 0730, the way you'd expect a traditional alarm clock for sleeping humans to work (which, I acknowledge, is not really the kind of use case an RTC chip is designed for). To set an alarm on the MCP7940, you specify a value for seconds, minutes, hours, day of the week, date and month, and then select one of the following six matching conditions:</p>
<ul>
<li>seconds match</li>
<li>minutes match</li>
<li>hours match</li>
<li>day of week match</li>
<li>date match</li>
<li>seconds, minutes, hours, day of week, date and month match</li>
</ul>
<p>Conspicuously absent is a "minutes and hours match" mode (which the DS1337 <em>does</em> provide, or rather it provides a "seconds, minutes and hours match" mode). This means it's easy to trigger alarms at 0630, 0730, 0830, etc. (by having the minutes match 30) and easy to trigger alarms at 0700, 0701, 0702, etc. (by having the hours match 07), but apparently impossible to trigger an alarm at 0730. The only option is to use the last mode, where everything has to match. This lets you trigger your alarm at 0730 on Sunday the 9th of April 2017, but the drawback is that to reenable that alarm for 0730 on Monday you'll have to increment the day of week and date settings for the alarm - and then check to see if you've rolled over to a new month (requiring you to write "30 days has September..." code), and if you have, to check whether you've rolled over to a new year. Don't forget about leap years! This is rather too much mucking about for any sane person's taste, just to achieve so simple a task as triggering an alarm once per day at the same time.</p>
<p>It turns out that, with a more careful reading of the datasheet, there is an easier way than this, but it's a little odd. The MCP7940 has an <code>ALMPOL</code> bit in the <code>ALM0WKDAY</code> register (at address <code>0x0D</code>) which specifies the polarity of the alarm output pin (<code>MFP</code>). When only one of the two alarms is used, this functions sensibly. If <code>ALMPOL</code> is set, then the alarm output is active high (i.e. the output is low while waiting for the alarm to trigger, and goes high once triggered), and if <code>ALMPOL</code> is cleared, then the alarm output is active low. However, if <em>both</em> alarms are used at once, things get interesting. If <code>ALMPOL</code> is set, then the output on the MFP pin is low by default and goes high when <em>either</em> of the alarms fires. You get two independent alarms with this configuration. However, if <code>ALMPOL</code> is cleared, then the output on MFP is high by default and goes low only when <em>both</em> of the alarms fire. So if you set alarm 0 to fire whenever the hour matches 07, and alarm 1 to fire whenever the minutes match 30, then you essentially end up with a single active low alarm which fires at 0730. It's one alarm for the price of two, but it's better than worrying about all that date incrementing nonsense.</p>
<p>An alternative solution raised in the forum thread is to implement some of the alarm matching in software. You can set an alarm to trigger whenever the minutes match 30, say, and in the interrupt service routine which is triggered by this alarm you can read the current hour out of the RTC and check for yourself whether it's 0730 or half past some other hour. If it's 0730, do what you have to do, and if it's not, put the microcontroller back to sleep. Unlike the "native" solution using both alarms described above, this approach lets you use both alarms independently, and also lets you retain the choice between an active high and active low alarm. You can also generalise this quite far: set an alarm to fire whenever the seconds match zero and your ISR will be called once per minute, then you can check any other condition in software. In this fashion you can have as many alarms as you like, and they can be as unusual as you like, such as triggering at 0730 on the second Tuesday of the month but only if the date is divisible by three. The sky is the limit!</p>
<p>That's not the end of the story, though, as both this solution <em>and</em> the native solution suffer from a common additional problem, which in my opinion is the <em>real</em> pitfall of the MCP7940's alarm system. This is that the alarms trigger <em>repeatedly</em> as long as the matching condition holds true. If you set an alarm to trigger when the seconds match 00 and in response your microcontroller wakes up, does some work for 10 milliseconds and then goes back to sleep, it will be immediately awoken again because the seconds <em>still</em> match 00. This will in fact repeat <em>one hundred</em> times until the seconds value finally ticks over from 00 to 01 and only <em>then</em> will the microcontroller finally stay asleep. The DS1337 does not suffer from this problem at all, since it tests for a match only once per second, when the time and date registers are updated. The problem is even worse if you are matching on a particular value of the minutes. Instead of your ISR firing constantly for a whole second, it will fire constantly for an entire minute! Of course, your ISR can simply disable the alarm the first time it is called, in which case it will only fire the once, but then your alarm won't fire at all when your condition matches again one minute or one hour later. </p>
<p>So, what do we do if we want to use the MCP7940 to, say, trigger an alarm <em>exactly once</em> each hour when the minutes match 00? The solution is to use both of the alarms in a "leapfrog" configuration:</p>
<ul>
<li>Configure alarm 0 to fire when the minutes match 00 and enable the alarm.</li>
<li>Configure alarm 1 to fire when the minutes match 01, but leave this alarm disabled for now.</li>
<li>Put the microcontroller to sleep, where it will remain until alarm 0 fires at the start of a new hour.</li>
<li>When the microcontroller awakes due to alarm 0, perform your hourly task, then do the following <em>in this order</em>:<ul>
<li>Disable alarm 0</li>
<li>Clear the <code>ALM0IF</code> interrupt flag</li>
<li>Enable alarm 1</li>
<li>Go back to sleep</li>
</ul>
</li>
<li>The microcontroller will then stay asleep for the remainder of the first minute of the hour.</li>
<li>When the microcontroller awakes due to alarm 1, do the following <em>in this order</em>:<ul>
<li>Disable alarm 1</li>
<li>Clear the <code>ALM1IF</code> interrupt flag</li>
<li>Enable alarm 0</li>
<li>Go back to sleep</li>
</ul>
</li>
<li>The microcontroller will then stay asleep until the start of the next hour, whereupon it will be awoken due to alarm 0 and we rinse and repeat.</li>
</ul>
<p>Triggering an alarm exactly once each minute when the seconds match 00 can, of course, be done in the same way. It's very important that you disable an alarm <em>before</em> clearing its interrupt flag. If you clear the interrupt flag with the alarm still enabled, the flag will immediately be set again and your ISR will unexpectedly fire a second time. This approach is, once again, one alarm for the price of two, but because it is well behaved you can easily implement a software alarm on top of it and have an arbitrary number of alarms which occur every day at particular times specified by an hour and minute. Note that if your repeating task takes longer than two seconds (if it happens every minute) or two minutes (if it happens every hour), then you may miss alarm 1, and then alarm 0 will never be reenabled and your controller will fall into an endless slumber! In this case adjust the match value for alarm 1 accordingly.</p>2017-04-08T12:40:21+12:00Luke MauritsShortwave DXinghttp://www.luke.maurits.id.au/blog/post/shortwave-dxing.html<p>Recently I decided I'd like to try my hand at shortwave <a href="https://en.wikipedia.org/wiki/DXing">DXing</a> - receiving and, hopefully, identifying <a href="https://en.wikipedia.org/wiki/Shortwave_radio">shortwave radio</a> broadcasts from distant lands which have <a href="https://en.wikipedia.org/wiki/Radio_propagation">propagated</a> very long distances by bouncing off the ionosphere. I have fond memories of listening to our shortwave radio as a child; I don't think I had any interest or even awareness of the geopolitical aspects of shortwave listening, I just really enjoyed all the weird and wonderful science-fictiony sounds that you could find in between stations.</p>
<p>To dip my toe in without risking a lot of wasted money if I quickly got bored of the hobby, I purchased the <a href="http://swling.com/db/2011/01/tecsun-r-9012/">Tecsun R9012</a> off eBay. Tecsun seems to be the preferred brand of the "ultralight DXing" community (DXing using small, portable and affordable radios instead of big, heavy, professional grade desktop receivers), and the R9012 seems to be their current "top end of the bottom end" (Tecsun produce a baffling array of radios - I found <a href="http://swling.com/db/wp-content/uploads/2011/01/Tecsun-Gallery-20090321A.pdf">this guide</a> useful for getting my bearings). If I really get into this, my planned upgrade route is to first get the <a href="http://radio-timetraveller.blogspot.com/2010/06/review-of-tecsun-pl-380-dsp-receiver.html">Tecsun PL-380</a> and then, if I'm still not sated, the <a href="">PL-880</a>, which as far as I can tell is their current top of the line.</p>
<p>Radio propagation is best at night, and I took the R9012 out on my deck both evenings this past weekend. Everything I've found so far has been quite faint and with quite a bit of static. Apparently this may be, at least in part, due to interference from household electronic appliances (I live in a block of four units, so I'm in close proximity to quite a lot of these). Fortunately, I happen to live right next door to an almost 200m high dormant volcanic cone with easy pedestrian access. Between the elevation and the fact that I'll be some distance from anybody's home I imagine the DXing prospects up there at night might be quite impressive, and I'm keen to try this out when I get a chance.</p>
<p>Despite the sub-par reception on my deck, I have already identified with confidence <a href="http://tf.nist.gov/stations/wwvh.htm">WWVH</a>, a US NIST time signal broadcast from Hawaii (just over 7,000km away!), and I suspect but can't be completely sure that I've also picked up <a href="http://www.english.rfi.fr/">Radio France Internationale</a> and the <a href="http://english.irib.ir/">Voice of the Islamic Republic of Iran</a>. I'm looking forward to seeing what else I can find as I get used to the radio and get more experience identifying stations. There's a lot of interesting SW content in Asia which I figure should be an easy catch for me, including a <a href="http://www.rachi.go.jp/en/shisei/radio/">Japanese government broadcast</a> aimed at abducted Japanese citizens held in North Korea, and <a href="http://www.hfunderground.com/wiki/Firedrake">China's "Firedrake" jamming signal</a>, which drowns out <a href="https://en.wikipedia.org/wiki/Radio_Free_Asia">Radio Free Asia</a> and similar stations with an hour-long loop of folk music. And, of course, I'd just love to catch a <a href="https://en.wikipedia.org/wiki/Numbers_station">numbers station</a>.</p>2016-01-18T07:50:33+12:00Luke MauritsFarewell to PrettyTablehttp://www.luke.maurits.id.au/blog/post/farewell-to-prettytable.html<p>Almost seven years ago (as hard as that is for me to believe!), I <a href="/blog/post/prettytable-0.1-released.html">announced the release of PrettyTable</a>, a Python library I wrote almost on a whim to make it easy to produce nice-looking ASCII tables of the kind I first saw in the command line shell for PostgreSQL. It was <a href="/blog/post/the-unexpected-success-of-prettytable.html">much better received</a> than I had ever anticipated; people blogged about it, it spawned a mailing list, it was <a href="https://packages.debian.org/search?keywords=prettytable">packaged for Debian</a> and other people wrote things which depended on it. 
It remains to this day the most successful piece of software I've ever written.</p>
<p>At first I was, of course, elated at this turnout, and worked diligently to add features people wanted and fix bugs. But, truth be told, it got old, and it got boring. It's not a terribly exciting concept to begin with, and it didn't take long before 90% of the work became related to fiddly details surrounding obscure edge cases. For the past several years, I've been a pretty awful maintainer of PrettyTable. I did occasionally make changes in the trunk branch of the svn repository when people poked me, but I never actually made a new release on PyPI or anything like that, so most users have never seen these bug fixes or new features.</p>
<p>I read <a href="https://en.wikipedia.org/wiki/The_Cathedral_and_the_Bazaar">The Cathedral and the Bazaar</a>, on physical paper, something like <em>thirteen</em> years ago. I don't think I've reread it since, but nevertheless, ESR's "lesson 9" has always remained clear in my mind: "when you lose interest in a program, your last duty to it is to hand it off to a competent successor". I tried to do this with PrettyTable, although not terribly hard, and I always felt bad about letting it stay in the state it did for as long as it had.</p>
<p>Recently, one of those people who wrote something which depends upon PrettyTable got in touch with me. Or rather, <a href="http://blog.flaper87.org/">Flavio Percoco</a> got in touch with me as a representative of the <a href="https://www.openstack.org/">OpenStack</a> project, which is what an <em>actual</em> successful free software project looks like. It has its own <a href="https://www.openstack.org/summit/">conference</a>! It also uses PrettyTable. With the official PrettyTable repository now frozen thanks to the inevitable <a href="http://google-opensource.blogspot.com/2015/03/farewell-to-google-code.html">death of Google Code</a>, there was some concern about the ability for OpenStack to get needed changes pushed upstream in the future. It turns out that OpenStack has previously adopted other projects that it relies upon, and there was interest in doing this for PrettyTable. After a brief exchange of emails, I was quickly convinced that PrettyTable had an infinitely brighter future under the OpenStack umbrella than I was ever likely to provide it myself, so a handover is now in progress. Once PrettyTable has an official new homepage, I will update this post to link to it.</p>
<p>Even though it didn't really work out in the end, I don't regret the time I spent on PrettyTable. Over the years, on the order of ten people sent me emails for no other reason than to say that they had enjoyed using PrettyTable and to thank me for writing it. One person even sent me a US$5 PayPal donation! Every one of those emails made my day. And I've learned an important, if retrospectively obvious, lesson: it's important that your projects really do scratch your own itches, and it's best if that itch is a strong one. I've actually made very little use of PrettyTable myself, which probably went a long way toward me being able to develop the indifference that I ended up showing the project.</p>2016-01-15T05:32:40+12:00Luke MauritsStill alivehttp://www.luke.maurits.id.au/blog/post/still-alive.html<p>I'm hoping to write here more in 2016. We'll see what happens.</p>2016-01-12T09:21:43+12:00Luke MauritsOn BIOS calls for Z80 systemshttp://www.luke.maurits.id.au/blog/post/on-bios-calls-for-z80-systems.html<p>After a lot of quite successful and quite entertaining early playing around with <a href="/projects/computers/lm512/">my homebrew Z80 system</a>, I recently decided to knuckle down and rewrite the ROM code "properly". By this I mean structuring it so that all direct interaction with the hardware happens through a small set of clearly defined and general purpose functions, and all other code works by using these functions. The functions will also be available to user programs, so they basically define a low-level programming interface for the machine. I call such functions "BIOS functions"; I think that, strictly speaking, BIOS is a term specific to the IBM PC, but it has become genericised and I don't know of a better term, so BIOS functions it is. This post will consider the smartest way to call these functions.</p>
<p>Despite the special role they play, BIOS functions are still just plain old functions, and it's perfectly cromulent to call them using the CALL instruction. All you need to do is set up an include file which uses your assembler's equ directive to define easy to remember names for the relevant entry addresses, and you can CALL away as needed. Each BIOS call will require three bytes of machine code and execute in 17 clock cycles. This is the simplest thing that could possibly work, but it's arguably too simple.</p>
<p>The main downside to this approach is that if you update your BIOS code and any function ends up being just one byte shorter or longer than it was before, the entry address for all subsequent functions gets shifted. This means software assembled for the old BIOS version will not work on the new version. For a long-term, constantly changing, never-really-finished hobbyist project, this is a pretty massive inconvenience. It perhaps would not have been such a big problem for a home computer manufacturer in the 1980s, where the BIOS most likely would have been aggressively debugged by engineers for months and then released in millions of machines which would never receive firmware updates. There'd still be a downside, though, in that it would be very difficult to write a backwards-compatible BIOS for your next machine.</p>
<p>The way around all of this inflexibility is to insert a layer of indirection by using a so-called "jump table" of JP instructions. To call a particular function, you might, say, CALL 0042h, and at address 0042 is the instruction JP 1234h. Address 1234 is where the actual implementation of the function is. If a future revision of the BIOS changes the entry address, all you need to do is change the jump instruction at 0042 to point to the new address, and existing software will continue to work without being reassembled because 0042 is still a valid entry point. The price for this convenience is that the jump table consumes three bytes of ROM space for every function, and after the initial 17 clock cycles to CALL into the table, a further 10 are used on the JP instruction, making the function call almost 60% slower than the naive approach. This is the design approach that <a href="https://en.wikipedia.org/wiki/CP/M">CP/M</a> used: the CP/M BIOS begins with a table of 16 jump instructions, which point to implementations of the 16 BIOS functions.</p>
<p>It's possible to get smarter yet, making use of the Z80's RST instruction. RST is a bit of an oddity; there are, in fact, 8 RST instructions: RST 00h, RST 08h, RST 10h, RST 18h, RST 20h, RST 28h, RST 30h and RST 38h. In terms of what they cause the processor to do, each one is absolutely identical to a CALL instruction with an argument of 0000h, 0008h, 0010h, 0018h, etc. The difference is that while a CALL requires three bytes of machine code and 17 clock cycles to complete, an RST instruction requires just one byte of machine code and completes in 11 cycles - just one cycle slower than a JP, which doesn't touch the stack at all. RST is basically a "small and fast" CALL to one of eight special memory locations. The reason this odd instruction exists comes from the Z80's compatibility with the Intel 8080. The 8080 had only a single interrupt mode, which involved reading and executing a single instruction off the data bus from the interrupting peripheral (similar to the Z80's interrupt mode 0). The RST instruction is naturally very useful for this, although the Z80's interrupt mode 2 is more powerful.</p>
<p>The <a href="https://en.wikipedia.org/wiki/MSX">MSX</a> BIOS takes very careful and full advantage of the RST instruction. It features a jump table which begins at address 0000h. Rather than putting the 3-byte jump instructions immediately one after the other, ending up with entry addresses of 0000h, 0003h, 0006h, 0009h, etc., one byte of padding is placed between each jump. This means that the entry addresses are 0000h, 0004h, 0008h, 000Ch, etc. Notice that the 1st, 3rd, 5th, 7th, etc. addresses now line up with the available RST addresses. This means that eight of the BIOS functions can be called using the appropriate RST, making those functions cheaper to call than the others. Compared to the CP/M approach, at first glance it seems that MSX uses more ROM space because the jump table is larger by one byte for every function due to the padding. However, RSTs are not possible beyond address 0038h, and indeed once this point has been passed the MSX BIOS stops including padding bytes so that JP instructions are spaced three bytes apart. This means the total cost for the approach is just an extra 16 bytes of ROM space. Furthermore, thanks to using this approach, eight BIOS functions can now be called with one instead of three bytes of machine code, saving you two bytes per call. If each of the RST-compatible functions is used exactly once in the ROM (and of course each one is, otherwise it wouldn't exist!) then you break even on space. It's almost certain that some functions are called more than once, so in reality the RST savings will easily save you more than the 16 padding bytes they cost. This approach is thus both faster <em>and</em> more compact than the CP/M approach: a rare engineering free lunch. The fact that the MSX BIOS uses this approach shows that it was carefully designed. I would be very interested to know if the functions chosen for RST-compatible addresses are exactly the most frequently called ones elsewhere in the ROM, so as to maximise the space saving. 
This is the approach I will take for my own homebrew Z80 system, as I only have 8kb of ROM to work with.</p>
<p>Another question which arises is how to pass arguments to these BIOS function calls. There's less room to be clever here. The choices are to either assign them to particular registers, or to push them in a particular order onto the stack. The stack approach has the advantages that you can pass as many arguments as you like (or rather, as your memory will allow, but this is not a practical limitation), and also that you can call BIOS functions from inside BIOS functions without any problem (this is why e.g. C compilers for the Z80 pass function arguments via the stack). However, this approach is not cheap: each PUSH of arguments takes 11 clock cycles, and each corresponding POP inside the function itself takes 10. At 21 clock cycles <em>per argument</em>, the 6 clock cycles per function call that are saved by using RST instead of CALL are quickly lost again. Passing via the registers is a lot faster: 7 clock cycles to LD an 8-bit value into a register (10 for a 16-bit value) before making the BIOS call, and no cost at all once you're inside. Unsurprisingly, both CP/M and MSX use this approach, and so will I.</p>2015-08-02T10:31:20+12:00Luke MauritsOn choosing the Z80 over the 6502http://www.luke.maurits.id.au/blog/post/on-choosing-the-z80-over-the-6502.html<p>As previously mentioned, in passing, I've been working on a homebrew 8-bit computer for the past few months, and doing a miserable job of documenting it. I'm hoping to get to work on correcting that in the near future, starting with this entry which will discuss a design decision I made without an awful lot of consideration to begin with, but have retroactively justified a few times over.</p>
<p>When it comes to building your own 8-bit computer, there are essentially two choices for the processor. You can use the <a href="https://en.wikipedia.org/wiki/Zilog_Z80">Z80</a> or the <a href="https://en.wikipedia.org/wiki/MOS_Technology_6502">6502</a>. These are both classic chips which dominated the homecomputing market in the late 70s and early 80s. Perhaps surprisingly, both of them are still manufactured today (with higher clock speeds and lower power consumption than the original versions), and are available quite cheaply in hobbyist friendly packages like 40 pin DIP or 44 pin PLCC. I'm using the Z80, and a very fair question to ask is "Why?".</p>
<p>In some ways, the 6502 would have been a more natural choice for me, to the extent that this kind of project is often motivated by nostalgia. I used a lot of 6502-powered devices growing up - mostly the Commodore 64 but to a lesser degree the BBC Micro and the Atari 2600. I think the first and only Z80 product I used was the Gameboy, but both these chips were used in all kinds of things in those days so it's very possible I had Z80s in my house controlling the VCR or the microwave or something. Building a computer using the same processor as the machine which gave me my love of computing in the first place would have felt amazing, and I'm slightly surprised I didn't charge into using the 6502 on this basis alone.</p>
<p>I chose the Z80 because when I first started researching the project I somehow got the impression that it was a much more popular choice for homebrew computing, and that by choosing it I'd find a lot more documentation on the web, a lot more people to ask questions to, etc., etc. Being at the point where I'm almost finished the project, I now think I got this pretty much perfectly backward. There is a vibrant 6502 community online, with <a href="http://www.6502.org">6502.org</a> forming an obvious hub, complete with active forums full of amazing projects. There are a lot of "superstars" active in the 6502 enthusiast world, including ex-Commodore engineer <a href="http://c128.com/">Bill Herd</a> and the self-taught <a href="https://en.wikipedia.org/wiki/Jeri_Ellsworth">Jeri Ellsworth</a>, who implemented an entire C64 in an FPGA. I've found the online Z80 scene to be largely dead in comparison, especially if you are not interested specifically and exclusively in TI's range of Z80-powered calculators. <a href="http://z80.info/">z80.info</a> is actually, to be frank, fairly crummy. The most useful resources I've found are the old book "Build your own Z80 computer" by Steve Ciarcia (legitimately freely available as PDF online) for the hardware side and <a href="http://sgate.emt.bme.hu/patai/publications/z80guide/app1.html">this website</a> describing the Z80 instruction set for the software side. The <a href="http://n8vem-sbc.pbworks.com/">N8VEM</a> community is fairly active, but to be honest I find it fairly impenetrable: lots of different designs by different people, all with hard to remember acronyms and no clear picture on what's compatible with what. All of the information is stored in a terrible CMS which feels like I'm navigating somebody else's hard drive...</p>
<p>I've considered switching to the 6502 on more than one occasion throughout the project, motivated by a variety of things: the initial realisation that I'd get a better community, finishing Brian Bagnall's book "<a href="http://www.amazon.com/Commodore-Company-Edge-Brian-Bagnall/dp/0973864966">Commodore: A company on the edge</a>" and being overcome with warm fuzzies for all things Commodore, and really wanting to use the super-nifty <a href="https://en.wikipedia.org/wiki/MOS_Technology_6522">6522 VIA</a> chip but not understanding at the time what the hell to do with the "Phi 2" pin if interfacing it to a non-6502 processor (I know now that it basically acts as a chip enable pin, so you can just connect it to the Z80's IORQ). Each time I considered it, I tried to weigh up the choice in a sober and technically minded way, and each time I stuck with the Z80. I really do think it has a lot to recommend it, especially for first-time computer homebrewers. I've discussed some of these points below.</p>
<p>Emphatically, this is <em>not</em> supposed to be a "6502 sux0rs, here's why the z80 beats the snot out of it!!1" kind of article (even though the web is full of the reverse argument). I have tremendous respect for the 6502, for the team who designed it, and for the great machines that it powered. I can see the beauty in its minimalist architecture. Building your own 6502 computer can be just as educational, rewarding and empowering as building your own Z80 system; if you're doing either of these things, you are doing a good thing. I do feel like the Z80 is a little bit of an underdog in the homebrewing world, and this article attempts to challenge this state of affairs a little bit with what I think are legitimate advantages the Z80 holds over the 6502. I will say that I am quite new to the Z80, and actually have no hands-on experience building 6502 systems, though I've done a lot of reading. There may be factual inaccuracies in what follows. If you find any, please <a href="http://www.luke.maurits.id.au/email.html">email me</a> and I'll make corrections as appropriate.</p>
<h2>Native 16-bit operations</h2>
<p>The Z80 has native support for 16-bit addition and subtraction - two 8-bit registers are combined to form a single 16-bit one. On the 6502, you need to do this manually, adding the two lower order bytes, checking the carry flag, and adding the two higher order bytes. The 6502 will still probably do it faster than the Z80 at any given clock speed, because it typically uses fewer cycles per instruction, but the machine code will be longer because it takes multiple instructions. I can imagine this getting old fast. You can of course write the code once as a function and call it whenever you need to do 16-bit arithmetic, but this reduces some of the speed advantage.</p>
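To make the difference concrete, here's a rough sketch in Python (modelling the register logic, not showing real assembly) of what each CPU has to do:

```python
def add16_6502_style(a, b):
    """Add two 16-bit values the way a 6502 program must:
    low bytes first with ADC, then high bytes plus the carry."""
    lo = (a & 0xFF) + (b & 0xFF)        # add the low-order bytes
    carry = lo >> 8                     # carry flag from the low-byte add
    hi = (a >> 8) + (b >> 8) + carry    # add the high-order bytes plus carry
    return ((hi << 8) | (lo & 0xFF)) & 0xFFFF

def add16_z80_style(a, b):
    """The Z80 equivalent: a single ADD HL,BC, wrapping at 16 bits."""
    return (a + b) & 0xFFFF

# Both give the same answer; the 6502 just needs more instructions to get there.
assert add16_6502_style(0x12FF, 0x0001) == add16_z80_style(0x12FF, 0x0001) == 0x1300
```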
<h2>Powerful data moving instructions</h2>
<p>The Z80 instruction set has a couple of really nifty, powerful instructions for moving data around in memory, and between memory and IO (they're LDIR, LDDR, INIR, INDR, OTIR and OTDR). You can set up a pointer to a memory location, a byte counter and a port number, and then use a single instruction to tell the CPU to move that number of consecutive bytes between memory and the port, and it'll just loop away. On the 6502, you'd need to do this manually - including the 16-bit address arithmetic. This is more boring to write, more error prone, and takes up a lot more code space. Admittedly the 6502 instruction set is smaller, simpler and neater for not having these kinds of instructions, and I can see the appeal in that, as well as the potential satisfaction that comes from crafting your whole program yourself from the simplest possible conceptual units. But for newcomers to assembly (and, I imagine, old hands who are over the thrill of bare metal and just want to get stuff done) it's really nice to be able to write a single instruction and know that your data will get shuffled around as quickly as the people who designed the CPU could make it happen.</p>
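Here's a little Python model (my own sketch, not real Z80 code) of what LDIR is doing with the HL, DE and BC registers under the hood:

```python
def ldir(memory, hl, de, bc):
    """Model of the Z80's LDIR: copy bc bytes from memory[hl] onward
    to memory[de] onward, incrementing both pointers as it goes."""
    while bc:
        memory[de] = memory[hl]   # move one byte
        hl = (hl + 1) & 0xFFFF    # both pointers advance...
        de = (de + 1) & 0xFFFF
        bc -= 1                   # ...until the byte counter hits zero

# Set up the "registers", issue one "instruction", and the block is moved.
ram = bytearray(16)
ram[0:4] = b"Z80!"
ldir(ram, 0, 8, 4)
assert bytes(ram[8:12]) == b"Z80!"
```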
<h2>A full-fledged 16-bit stack pointer</h2>
<p><b>NOTE</b>: the original version of this blog post contained a mistake, where I claimed that the 6502's stack was restricted to the zero page (0x0000 - 0x00FF), and argued that this was especially problematic since the zero page is also often used as a bank of registers, due to faster access time. I claimed that having to share this page for these two purposes forced one to choose between having lots of fast data storage or a deep recursion capability but not both. Two readers, Ola and Chris, emailed me to let me know that the stack is actually restricted to page 1 (0x0100 - 0x01FF), so this trade-off doesn't really exist. The stack situation on the 6502 is thus not as bad as I first thought, but is still more limited than the Z80. Thanks for the corrections!</p>
<p>The 6502's stack pointer is an 8-bit value, meaning it can address only 256 bytes of RAM, and it is constrained to lie within page 1 of the 6502's address space (addresses 0x0100 through 0x01FF). This means you can never make nested function calls more than 128 levels deep (as each call pushes a 2 byte return address onto the stack). Perhaps not much of a problem in practice for many people, but it's a fairly ugly restriction, and definitely problematic for recursively defined functions.</p>
<p>This restriction is also a bit of a pain if you want to try multitasking. You can't, say, have one process running in the bottom 32 kB of memory and one in the top 32 kB of memory, because the process in the top 32 kB can't have its own separate stack space. You'd need to either implement a bank switching scheme so you can physically point page 1 at different parts of your physical RAM address space (even if the number of processes you want to run multiplied by their size is less than 64 kB, so that you don't really <em>need</em> bank switching), or else physically copy the entire stack page of each process back and forth between some other part of memory whenever you switch processes, both of which strike me as a pain.</p>
<p>If you wanted to implement a stack-based programming language, like <a href="https://en.wikipedia.org/wiki/Forth_%28programming_language%29">Forth</a> on the 6502, I have to imagine that you need to simulate your own stack using general purpose memory rather than relying on the system stack, due to its small size. This must slow things down a bit. </p>
<p>The Z80 has a 16-bit stack pointer which can point anywhere in the address space at all, so you don't have any of these limitations and you can use your whole 64 kB address space to its maximum potential.</p>
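A toy Python model makes the 6502's limit concrete (the function name is mine, invented purely for illustration):

```python
def push_return_addresses(n_calls):
    """Simulate n_calls nested JSRs with the 6502's 8-bit stack pointer.
    Returns how many distinct page-1 bytes ever get used."""
    sp = 0xFF                         # SP starts at the top of the stack page
    touched = set()
    for _ in range(n_calls):
        for _ in range(2):            # each call pushes a 2-byte return address
            touched.add(0x0100 + sp)  # the stack always lives in page 1
            sp = (sp - 1) & 0xFF      # the 8-bit SP silently wraps at zero
    return len(touched)

# 128 nested calls fill the page exactly; any deeper and the pointer wraps
# around and starts clobbering earlier return addresses.
assert push_return_addresses(128) == 256
assert push_return_addresses(200) == 256
```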
<h2>Separate address spaces for memory and peripherals</h2>
<p>The Z80 makes a fundamental distinction between reading or writing memory (RAM or ROM, though obviously not writing ROM) and reading or writing peripheral chips like UARTs, real-time clocks, etc. These actions are done with separate instructions: the various forms of LD (load) for memory, and IN and OUT for peripherals. When reading or writing memory, the MREQ pin is brought low, along with the RD or WR pin as necessary, and when reading or writing peripherals, the IORQ pin is brought low instead. Values in memory are addressed using a 16-bit address bus, giving you 64 kB of memory space, and peripherals are addressed using the lower 8 bits of the address bus, giving you 256 bytes of IO space (this is actually a bit of a simplification, but it's the official story on IO and I'll stick with it here). Importantly, these 64 kB and 256 B address spaces are totally separate, non-overlapping entities.</p>
<p>The 6502, in contrast, uses "memory mapped IO". This means that the CPU itself makes no distinction between memory and peripheral chips. You use the same kinds of instructions, the loads (LDA, LDX, LDY) and stores (STA, STX, STY), to read or write each. There's once again a 16-bit address bus, leading to 64 kB of memory space. It's up to the computer designer to come up with an address decoding circuit which routes the appropriate signals to memory or peripheral chips accordingly. Having no distinction between memory and peripherals does make the 6502's instruction set smaller and more consistent - some would say elegant - but this comes at a pretty high cost.</p>
<p>All of your peripherals on a 6502 system have to be put somewhere in the same 64 kB address space as your actual memory, and this causes quite a lot of flow-on nastiness. First up, it obviously means that the more peripherals you add, the less actual memory you can use for your programs. It's true that your peripherals probably won't need a whole lot of address space: the fact that useful Z80 computers exist proves you can get away with under 256 bytes of such space without problems. However, if you want to dedicate <em>just</em> a 256 byte page to your peripherals, you need an address decoding scheme capable of decoding access to exactly that page, i.e. you have to compare all 8 of the higher order bits of the address bus to some fixed value. And then, of course, you have to do additional decoding to split that 256 byte range up for your individual peripherals. This leads to an increased chip count compared to the Z80, where you just have to split up the 256 byte space already separated out for you by the CPU's non-memory-mapped architecture. You could do the page comparison with a single extra chip, the 74688 8-bit comparator, but every extra chip increases cost and, more importantly (since 7400-series chips are usually pretty cheap), eats up more of your limited board space.</p>
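In Python terms, the comparator scheme amounts to something like this (the choice of 0xFE as the IO page is an arbitrary assumption of mine, not a standard location):

```python
IO_PAGE = 0xFE   # assumed high byte the comparator is wired to match

def decode(addr):
    """Model of comparator-based address decoding on a 6502 system:
    one 256-byte page goes to IO, everything else goes to memory."""
    if (addr >> 8) == IO_PAGE:        # all 8 high-order bits must match
        return ("IO", addr & 0xFF)    # low byte selects among peripherals
    return ("MEMORY", addr)

assert decode(0xFE03) == ("IO", 0x03)
assert decode(0x1234) == ("MEMORY", 0x1234)
```

On the Z80 the CPU's IORQ pin effectively does this comparison for you, so none of this decoding hardware is needed just to carve out the IO page.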
<p>With this comparator-based approach, your IO page is fixed at one particular point in memory. In simple systems this is no big deal. You probably have a bunch of ROM at the top of your address space (since the 6502 looks at the top of the space when starting up) and RAM underneath that. Your programs are constrained to live in RAM anyway, so you can just stick the IO page in between these two sections and you don't really lose much space. But what if you have a more complicated design, where you can switch the ROM in and out of the address space, so you can get a full 64 kB of RAM when you want it? Well, unless you switch your IO page out as well (in which case your program obviously can't do any IO, limiting its usefulness), then you're stuck with 63.75 kB of RAM in total, with 256 bytes of IO stuck somewhere in the middle of it. This limits your available contiguous space and forces you to split your program up into two chunks. If you want the option of switching to a mode with a lot of contiguous RAM <em>and</em> IO, you need to make your IO page relocatable by putting an 8-bit latch in front of your 8-bit comparator - yet another chip. This will let you move the IO page to the top or bottom of the address space, giving you a contiguous 63.75 kB.</p>
<p>All of this is quite horrible, and all the homebrewed 6502 machines I've seen online do nothing of the sort. They all have very simple address decoding schemes using a small number of chips, and as such they all have the same shortcomings: inflexible memory layouts with a lot of wasted space. These computers often have several kilobytes of memory which are either "holes" in the address space which don't do anything, or are repetitions of small chunks of memory space connected to peripherals, which are "mirrored" several times because higher order address lines aren't being decoded. A lot of simple Z80 systems have holes and mirroring too, of course, mine included, but the important difference is that this "wasted" space is not wasted RAM, which could have been used for code or data, but rather wasted IO space which there's no use for anyway once all your peripheral chips have the space they need.</p>
<p>This has been a fairly long section, but in brief: the Z80's use of separate address spaces for memory and peripherals means that you can use a small number of chips and a fairly simple design and end up with minimal wasted memory space, and with just a little bit more work you can get a system where you can easily give yourself a full 64 kB of RAM when you need it. The 6502's memory mapped IO means that simple, low chip-count solutions are probably pretty wasteful and inflexible.</p>
<h2>Easy Linux development</h2>
<p>I've been using the GNU project's <a href="http://www.nongnu.org/z80asm/">Z80 assembler</a>, z80asm, and disassembler, z80dasm, to generate machine code for my project thus far. z80asm is in the apt repositories used by Debian and Ubuntu, so after an <tt>apt-get install z80asm</tt> I'm good to go. Surprisingly, there are no packages with "6502" in the name or description in those repositories. 6502.org has <a href="http://6502.org/tools/asm/">a list of assemblers</a>, and 3 of them claim to work on Linux. None of them are in the Debian or Ubuntu repositories - which is not to say they won't work, but there's more effort involved in getting up and running. All 3 projects have websites which have not been updated in a very long time. <b>UPDATE</b>: S. Bryant kindly emailed to let me know that in fact there <em>is</em> a 6502 assembler in the Debian repositories, you just need to know what it's called! It's <a href="http://www.floodgap.com/retrotech/xa/">xa65</a>. I was pleasantly surprised to find that it is maintained by Cameron Kaiser, who I already knew of from his great work in preserving <a href="https://en.wikipedia.org/wiki/Gopher_%28protocol%29">Gopher</a>. xa65 seems to be under active development, which is good to see.</p>
<p>I'm really looking forward to eventually using C to write programs for my Z80 machine, mostly because getting it to work will require me to completely and totally understand exactly what is involved in producing a C runtime. The <a href="http://sdcc.sourceforge.net/">Small Device C Compiler</a> supports the Z80 (along with some other 8-bit architectures), and is also available via apt. The most prominent C compiler for the 6502 seems to be <a href="http://www.cc65.org/">cc65</a>, which was abandoned by its author about a year ago. An old version is being maintained on GitHub, but that's it.</p>
<p>If you're a Linux user and you want to cross-develop software for an 8-bit CPU, it seems to me like you can get up and running in a flash for the Z80, but not the 6502.</p>
<h2>Easy to find</h2>
<p>I only know of two places online where you can buy the 6502 - the mega-distributor Mouser and the smaller supplier Jameco, both based in the US. Mouser will ship to you if you're outside of the US, but you need to either pay a fairly hefty shipping fee, or spend a huge amount on parts to become eligible for free shipping. Either option will massively inflate the base cost of building an 8-bit computer. It's also worth noting that Mouser's page for the 6502 claims "This product may require additional documentation to export from the United States". I don't know exactly what is involved in this, and under what circumstances it applies, but it doesn't sound like fun. Jameco's shipping seems a little more sensible - they'll ship a 6502 to New Zealand for about US$15. That's still about 3 times what the chip itself costs, but it's not prohibitive. However, Jameco are a much smaller supplier than Mouser, and they recently stopped selling the modern 16-bit version of the 6502, so it's possible the 8-bit version will eventually disappear too.</p>
<p>On the other hand, the Z80 seems to be a lot easier to find anywhere in the world. In the US, it's carried by both Mouser and Digikey. The Farnell/Element 14 group seem to stock it in most other major countries - I can buy it locally here in New Zealand, which is nice.</p>
<h2>Cheap</h2>
<p>The 6502 found its way into many more home computers and game machines in the 70s and 80s than the Z80, and it's no secret that one of the major reasons is that the 6502 was <em>cheap</em> - especially for Commodore, who bought MOS Technology early on and could therefore include the 6502 in their machines at cost, while everyone else had to pay a markup. Even with the markup, the 6502 was the cheapest 8-bit microprocessor on the market by a wide margin, so cheap that when it was originally announced, many thought it was a joke. Steve Wozniak has emphasised that the 6502's price played a prominent role in his decision to use it for the early Apple machines.</p>
<p>Price is perhaps not as important a factor in this decision today, as both the 6502 and Z80 are reasonably cheap, but in a cost-sensitive application the tables have actually turned. The modern version of the 6502, the W65C02, costs US$6.95 at Mouser, whereas a modern Z80, the Z84C, costs US$4.14 at Mouser or US$4.31 at Digikey. These are unit prices. If you bought 100 Z80s from Mouser they'd cost you US$3.06 each, compared to US$5.85 each for the W65C02 - almost half the price. Most hobbyists aren't going to buy 100 of either chip, but if you wanted to use one of these processors in a hobbyist kit or something, the bulk discount matters. Combine the lower cost with the easier international sourcing and the Z80 is the clear winner for anything you want to make it easy for anybody in the world to get their hands on.</p>
<h2>Summary</h2>
<p>That's about it. The Z80 is easier to find outside the US than the 6502, it's cheaper, and you can start hacking for it on Linux in no time flat. Both the Z80 and 6502 give you 64 kB of memory space, but the Z80 makes it easier to use all of this space however you like, and to exercise the full capabilities of your system. Every byte in the space is the same as any other, and the stack works identically everywhere, so there are no restrictions on how you choose to use the space. The space is never interrupted by memory-mapped peripherals, so there's no shortage of contiguous blocks of RAM. And the instruction set makes it very easy and very natural to move these contiguous blocks around with clear, concise code. All of this makes it very easy to try things like multitasking. There are a few other things I've not gone into detail on, like the Z80's shadow registers or native vectored interrupt handling, but I think I've covered the main points above.</p>
<p>In the interests of fairness, I should state that there is one place the 6502 is always going to kick the Z80's ass, and that's speed. Z80 instructions typically take more clock cycles than their 6502 counterparts. This actually wasn't much of a problem back during the heyday of these chips: the Z80's design allowed it to be clocked faster than the 6502 so, for example, the Sinclair ZX machines' Z80 CPUs ran at 3.25 MHz while the C64's 6510 (a 6502 derivative) ran at about 1 MHz. The faster Z80 clock offset the lower cycle efficiency, leaving the machines on roughly equal footing - in fact, <a href="http://www.alfonsomartone.itb.it/aunlzr.html">this article</a> argues (with much vitriol) that the ZX Spectrum is faster than the C64. However, much like in the case of cost, the tables have turned since the 1980s. The modern W65C02 can be clocked at 14 MHz. 20 MHz versions of the Z84C exist, but are hard to find in stock, whereas the 10 MHz versions are abundant and so that's probably a more realistic speed to consider the maximum. So today's 6502s are more cycle efficient <em>and</em> clocked faster. If speed is really important to you, you should probably go with the 6502.</p>2014-07-07T10:21:11+12:0Luke MauritsCertificate Patrol: Security vs complexityhttp://www.luke.maurits.id.au/blog/post/certificate-patrol-security-vs-complexity.html<p>Today I uninstalled the Firefox add-on <a href="https://addons.mozilla.org/en-us/firefox/addon/certificate-patrol/">Certificate Patrol</a>. Certificate Patrol implements a technique called "certificate pinning", which is designed to help detect <a href="https://en.wikipedia.org/wiki/Man-in-the-middle_attack">Man in the Middle attacks</a> against HTTPS. The idea is simple: when you connect to a site over HTTPS, you store a copy of the fingerprint of the presented SSL certificate. The next time you connect, the fingerprint of the offered certificate is compared to the copy you have cached.
If the certificate has changed, without apparent good reason (e.g. the old certificate is expiring soon) then you raise the alarm bells. It's a neat idea, that feels like common sense in retrospect.</p>
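The core idea fits in a few lines of Python (an illustrative sketch of pinning in general, not Certificate Patrol's actual implementation):

```python
import hashlib

pins = {}  # host -> last-seen certificate fingerprint

def check_pin(host, cert_der):
    """Compare a certificate's fingerprint against the cached one."""
    fingerprint = hashlib.sha256(cert_der).hexdigest()
    previous = pins.get(host)
    pins[host] = fingerprint
    if previous is None:
        return "first-seen"                           # nothing cached yet
    return "ok" if previous == fingerprint else "changed"

assert check_pin("foo.com", b"cert-A") == "first-seen"
assert check_pin("foo.com", b"cert-A") == "ok"
assert check_pin("foo.com", b"cert-B") == "changed"   # the alarm case
```

The trouble, as the rest of this post argues, is that for real sites the "changed" branch fires constantly for entirely innocent reasons.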
<p>Why am I uninstalling it? Because security measures with high false alarm rates are useless: you end up ignoring them. Certificate pinning has a very high false alarm rate. It is based on the assumption, which feels like common sense at first glance, that SSL certificates are relatively stable things: the certificate you get from foo.com today should be the same one you get tomorrow, and next month, and probably next year. This is false for any major website. The widespread use of content delivery networks means that connections to foo.com go to different actual servers all the time. And those different servers seem to almost always have different SSL certificates. Often they are signed by different Certificate Authorities. Sometimes they are valid for different URLs (e.g. some use wildcards and some don't; some seem to be valid for the domain of the CDN company and not the domain being visited, and I don't even know how that works). Any heuristic you might come up with for separating MITM attacks from "ordinary" variation is doomed to failure, because the degree of day-to-day variation is immense.</p>
<p>There was <a href="https://www.schneier.com/blog/archives/2014/05/is_antivirus_de_1.html">a discussion on Bruce Schneier's blog</a> recently about whether or not antivirus technology is "dead", in which a commenter said the following:</p>
<blockquote>
<p>There are plenty of other ways to own a computer, some more sophisticated than others, but with DRM, anti-cheat measures, BitTorrent, NSA-mandated backdoors, and so forth, <b>it's harder and harder to say whether a certain sequence of activities is malicious or operating as normal. It's getting too hard to tell the difference</b>.</p>
</blockquote>
<p>Emphasis mine. The practice of denying activity by default and only permitting known safe activity has been long and widely recognised as the ideal approach to security, and is <i>certainly</i> far better than its logical inverse of allowing everything by default and trying to exhaustively enumerate all the known bad things (ideas 1 and 2 on Marcus Ranum's <a href="http://www.ranum.com/security/computer_security/editorials/dumb/">Six Dumbest Ideas in Computer Security</a>). Complexity is the natural enemy of this approach. The modern web is <i>drowning</i> in needless complexity, and the consequences of this are inescapable.</p>2014-05-21T10:01:20+12:0Luke MauritsComputational mottainaihttp://www.luke.maurits.id.au/blog/post/computational-mottainai.html<p>For the past few months I have been slowly working on designing and building my own 8-bit computer, something I should have been blogging about and soon will. As part of this project, and especially fueled by recent conversations with <a href="http://markovlife.com/">a friend</a>, I've been thinking a lot about the state of modern computing, and what other states computing could possibly be in, and what state I personally would like it to be in. I've taken some inspiration from Stanislav Datskovskiy and his <a href="http://www.loper-os.org/?p=284">Seven Laws of Sane Personal Computing</a> - prescribing the obviously desirable properties of computers in advance and then thinking about what kind of machines could be built within those constraints.</p>
<p>I was doing some idle research today to try and think of a sufficiently pretentious name for my homebrew computer project which captured some of the vision I've been developing, and I came across the Japanese term <a href="https://en.wikipedia.org/wiki/Mottainai">mottainai</a>. Mottainai conveys "a sense of regret concerning waste". This word has apparently been popularised in the West recently by a <a href="https://en.wikipedia.org/wiki/Wangari_Maathai">Kenyan environmentalist</a>, in the obvious sense of urging people not to waste non-renewable resources. But there are "fuller" meanings of the term, which Wikipedia sketches as such:</p>
<blockquote>
<p>A more elaborate meaning conveys a sense of value and worthiness and may be translated as "do not destroy (or lay waste to) that which is worthy."</p>
<p>Buddhists traditionally used the term mottainai to indicate regret at the waste or misuse of something sacred or highly respected, such as religious objects or teaching</p>
<p>its full sense conveys a feeling of awe and appreciation for the gifts of nature</p>
</blockquote>
<p>In this sense mottainai is less "don't <i>waste</i> that!", and more "don't waste <i>that</i>!". This gets nicely at one part of the way I feel about computers. The rest of this post may sound a bit silly at times, but I'm being sincere.</p>
<p>I'm an utterly secular, <a href="https://en.wikipedia.org/wiki/Materialism">materialist</a> (not materialistic) guy and so nothing is truly sacred or holy or magical to me, but I do have something bordering on a religious awe for computers and I often feel like there is something deeply wrong about the way they have become such undervalued, disposable commodities. I see old computers discarded on suburban curbs (presumably replaced by a new one because it "got slow", which is physically absurd) and feel a tremendous sense of waste. In some sense, this is stupid: people wouldn't dump computers on the curb if they could make any significant amount of money selling them, and the fact that they can't make any significant amount of money selling them makes it clear that there isn't any kind of scarcity of computers. We can make these things very cheaply, and everyone owns several, so they aren't precious and rare gems. But in most Western countries, food is cheap and abundant and almost everybody has enough of it. In fact, many of us have <i>too much</i> of it. Despite this, if you saw perfectly good, edible food sitting on the curb, you'd feel some sense of wrongness, because it's <i>perfectly good food</i> for crying out loud. This is how I feel about discarded computers (most of which are still perfectly good, regardless of what terrible software has convinced their former owners of). That discarded food <i>could be</i> eaten by somebody, and that discarded computer <i>could be</i> computing something - in fact, it could be computing <i>anything</i> and that's kind of the <i>whole point</i>. This sense of wasted potential has actually moved me to "rescue" curb computers, even if I end up rarely or never using them myself, because the thought of them being crushed up and buried as landfill just seems so incontrovertibly <i>wrong</i>.</p>
<p>Our personal computers are <a href="https://en.wikipedia.org/wiki/Universal_Turing_machine">universal Turing machines</a>: single machines which, once built, can perform any information processing task under the sun, without any change to their internal construction at all. They can add numbers, or sort lists, or spell check documents, manipulate sound and images, simulate the motion of the planets, or the tides, or any other physical process you care to name, and with the appropriate hardware connected they can control factories and fly planes. The <i>same machine</i> can do <i>all of these things</i>, and in ten years time when someone thinks of some new kind of information processing that was undreamed of at the time the machine was made, the machine will be able to do <i>that</i> too, without the hardware needing to be touched at all. You'll not encounter anything closer to magic in your life. The fact that such remarkable, limitless machines are <i>possible in principle</i>, and that we humans can make them small and fast and cheap is not some fundamentally necessary fact of the universe. It could be otherwise! The fact that it <i>isn't</i> otherwise is really something like a blessing - a gift of nature, and we should treat it as such. Inside every computer, even a ten year old one which is now "useless", is a tiny silicon miracle, something so utterly unlikely that most people would probably insist that nothing like it ever <i>could</i> exist, if only they weren't surrounded by examples to the contrary. It's a crying shame to throw out a perfectly working machine that contains infinite potential inside of it without a hint of regret, just because you can now easily buy a new machine which has a little bit <i>more</i> infinite potential inside of it.</p>
<p>Mottainai!</p>2014-01-26T06:34:17+12:0Luke MauritsPower.comhttp://www.luke.maurits.id.au/blog/post/power.com.html<p>Bruce Schneier really isn't pulling any punches these days. <a href="https://www.schneier.com/blog/archives/2013/04/what_ive_been_t.html">His next book</a> sounds like it's going to be a doozy.</p>2013-04-02T05:39:20+12:0Luke Maurits