copyleft hardware planet

December 19, 2014

Richard Hughes, ColorHug

OpenHardware Random Number Generator

Before I spend another night reading datasheets: would anyone be interested in an OpenHardware random number generator in a full-size SD card format? The idea being that you insert the RNG into the SD slot of your laptop, leave it there, and the kernel module just slurps trusted entropy when required.

Why SD? It’s a port that a lot of laptops already have empty, and on server-class hardware you can just install a PCIe add-on card. I think I can build such a thing for less than $50, but at the moment I’m just waiting for parts for a prototype, so that’s really just a finger-in-the-air estimate. Are there enough free software people who care about entropy-related stuff?
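
For context on the consumer side: on Linux, the kernel tracks how much entropy it has pooled, and a userspace daemon such as rngd normally bridges a hardware RNG into that pool. A minimal sketch (the /dev/hwrng path is an assumption; it only appears once a hardware RNG driver is bound, which is exactly what the proposed kernel module would provide):

```shell
# How much entropy does the kernel currently have pooled?
cat /proc/sys/kernel/random/entropy_avail

# With a hardware RNG driver bound, rngd would feed it into the pool:
#   rngd -r /dev/hwrng
```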

by hughsie at December 19, 2014 05:06 PM

December 18, 2014

Bunnie Studios

Maker Pro: Soylent Supply Chain

A few editors have approached me about writing a book on manufacturing, but that’s a bit like asking an architect to take a photo of a building that’s still on the drawing board. The story is still unfolding; I feel as if I’m still fumbling in the dark trying to find my glasses. So, when Maker Media approached me to write a chapter for their upcoming “Maker Pro” book, I thought perhaps this was a good opportunity to make a small and manageable contribution.

The Maker Pro book is a compendium of vignettes written by 17 Makers, and you can pre-order it at Amazon now.

Maker Media was kind enough to accommodate my request to license my contribution using CC BY-SA-3.0. As a result, I can share my chapter with you here. I titled it the “Soylent Supply Chain” and it’s about the importance of people and relationships when making physical goods.

Soylent Supply Chain

The convenience of modern retail and ecommerce belies the complexity of supply chains. With a few swipes on a tablet, consumers can purchase almost any household item and have it delivered the next day, without facing another human. Slick marketing videos of robots picking and packing components and CNCs milling components with robotic precision create the impression that everything behind the retail front is also just as easy as a few search queries, or a few well-worded emails. This notion is reinforced for engineers who primarily work in the domain of code; system engineers can download and build their universe from source–the FreeBSD system even implements a command known as ‘make buildworld’, which does exactly that.

The fiction of a highly automated world moving and manipulating atoms into products is pervasive. When introducing hardware startups to supply chains in practice, almost all of them remark on how much manual labor goes into supply chains. Only the very highest volume products and select portions of the supply chain are well-automated, a reality which causes many to ask me, “Can’t we do something to relieve all these laborers from such menial duty?” As menial as these duties may seem, in reality, the simplest tasks for humans are incredibly challenging for a robot. Any child can dig into a mixed box of toys and pick out a red 2×1 Lego brick, but to date, no robot exists that can perform this task as quickly or as flexibly as a human. For example, the KIVA Systems mobile-robotic fulfillment system for warehouse automation still requires humans to pick items out of self-moving shelves, and FANUC pick/pack/pal robots can deal with arbitrarily oriented goods, but only when they are homogeneous and laid out flat. The challenge of reaching into a box of random parts and producing the correct one, while being programmed via a simple voice command, is a topic of cutting-edge research.

bunnie working with a factory team. Photo credit: Andrew Huang.

The inverse of the situation is also true. A new hardware product that can be readily produced through fully automated mechanisms is, by definition, less novel than something which relies on processes not already in the canon of fully automated production processes. A laser-printed sheet will always seem more pedestrian than a piece of offset-printed, debossed, and metal-film transferred card stock. The mechanical engineering details of hardware are particularly refractory when it comes to automation; even tasks as simple as specifying colors still rely on the use of printed Pantone registries, not to mention specifying subtleties such as textures, surface finishes, and the hand-feel of buttons and knobs. Of course, any product’s production can be highly automated, but it requires a huge investment and thus must ship in volumes of millions per month to amortize the R&D cost of creating the automated assembly line.

Thus, supply chains are often made less of machines, and more of people. Because humans are an essential part of a supply chain, hardware makers looking to do something new and interesting oftentimes find that the biggest roadblock to their success isn’t money, machines, or material: it’s finding the right partners and people to implement their vision. Despite the advent of the Internet and robots, the supply chain experience is much farther away from Amazon or Target than most people would assume; it’s much closer to an open-air bazaar with thousands of vendors and no fixed prices, and in such situations getting the best price or quality for an item means building strong personal relationships with a network of vendors. When I first started out in hardware, I was ill-equipped to operate in the open-market paradigm. I grew up in a sheltered part of Midwest America, and I had always shopped at stores that had labeled prices. I was unfamiliar with bargaining. So, going to the electronics markets in Shenzhen was not only a learning experience for me technically; it also taught me a lot about negotiation and dealing with culturally different vendors. While it’s true that a lot of the goods in the market are rubbish, it’s much better to fail and learn on negotiations over a bag of LEDs for a hobby project than to fail and learn on negotiations over contracts for manufacturing a core product.

One of bunnie’s projects is Novena, an open source laptop. Photo credit: Crowd Supply.

This point is often lost on hardware startups. Very often I’m asked if it’s really necessary to go to Asia–why not just operate out of the US? Aren’t emails and conference calls good enough, or, worst case, “can we hire an agent who manages everything for us?” I guess this is possible, but would you hire an agent to shop for dinner or buy clothes for you? The acquisition of material goods from markets is more than a matter of picking items from the shelf and putting them in a basket, even in developed countries with orderly markets and consumer protection laws. Judgment is required at all stages — when buying milk, perhaps you would sort through the bottles to pick the one with the greatest shelf life, whereas an agent would simply grab the first bottle in sight. When buying clothes, you’ll check for fit, loose strings, and also observe other styles, trends, and discounted merchandise available on the shelf to optimize the value of your purchase. An agent operating on specific instructions will at best get you exactly what you want, but you’ll miss out on better deals simply because you don’t know about them. At the end of the day, the freshness of milk or the fashion and fit of your clothes are minor details, but when producing at scale even the smallest detail is multiplied thousands, if not millions, of times over.

More significant than the loss of operational intelligence is the loss of a personal relationship with your supply chain when you surrender management to an agent or manage via emails and conference calls alone. To some extent, working with a factory is like being a houseguest. If you clean up after yourself, offer to help with the dishes, and fix things that are broken, you’ll always be welcome and receive better service the next time you stay. If you can get beyond the superficial rituals of politeness and create a deep and mutually beneficial relationship with your factory, the value to your business goes beyond money–intangibles such as punctuality, quality, and service are priceless.

I like to tell hardware startups that if the only value you can bring to a factory is money, you’re basically worthless to them–and even if you’re flush with cash from a round of financing, the factory knows as well as you do that your cash pool is finite. I’ve had folks in startups complain to me that in their previous experience at say, Apple, they would get a certain level of service, so how come we can’t get the same? The difference is that Apple has a hundred billion dollars in cash, and can pay for five-star service; their bank balance and solid sales revenue is all the top-tier contract manufacturers need to see in order to engage.

Circuit Stickers, adhesive-backed electronic components, is another of bunnie’s projects. Photo credit: Andrew “bunnie” Huang.

On the other hand, hardware startups have to hitchhike and couch-surf their way to success. As a result, it’s strongly recommended to find ways other than money to bring value to your partners, even if it’s as simple as a pleasant demeanor and an earnest smile. The same is true in any service industry, such as dining. If you can afford to eat at a three-star Michelin restaurant, you’ll always have fairy godmother service, but you’ll also have a $1,000 tab at the end of the meal. The local greasy spoon may only set you back ten bucks, but in order to get good service it helps to treat the wait staff respectfully, perhaps come at off-peak hours, and leave a good tip. Over time, the wait staff will come to recognize you and give you priority service.

At the end of the day, a supply chain is made out of people, and people aren’t always rational and sometimes make mistakes. However, people can also be inspired and taught, and will work tirelessly to achieve the goals and dreams they earnestly believe in: happiness is more than money, and happiness is something that everyone wants. For management, it’s important to sell your product to the factory, to get them to believe in your vision. For engineers, it’s important to value their effort and respect their skills; I’ve solved more difficult problems through camaraderie over beers than through PowerPoint in conference rooms. For rank-and-file workers, we try our best to design the product to minimize tedious steps, and we spend a substantial amount of effort making the tools we provide for production and testing fun and engaging. Where we can’t do this, we add visual and audio cues that allow the worker to safely zone out while long and boring processes run. The secret to running an efficient hardware supply chain on a budget isn’t just knowing the cost of everything and issuing punctual and precise commands, but also understanding the people behind it and effectively reading their personalities, rewarding them with the incentives they actually desire, and guiding them to improve when they make mistakes. Your supply chain isn’t just a vendor; they are an extension of your own company.

Overall, I’ve found that 99% of the people I encounter in my supply chain are fundamentally good at heart, and have an earnest desire to do the right thing; most problems are not a result of malice, but rather incompetence, miscommunication, or cultural misalignment. Very significantly, people often live up to the expectations you place on them. If you expect them to be bad actors, even if they don’t start out that way, they have no incentive to be good if they are already paying the price of being bad — might as well commit the crime if you know you’ve been automatically judged as guilty with no recourse for innocence. Likewise, if you expect people to be good, oftentimes they will rise up and perform better simply because they don’t want to disappoint you, or more importantly, themselves. There is the 1% who are truly bad actors, and by nature they try to position themselves at the most inconvenient roadblocks to your progress, but it’s important to remember that not everyone is out to get you. If you can gather a syndicate of friends large enough, even the bad actors can only do so much to harm you, because bad actors still rely upon the help of others to achieve their ends. When things go wrong, your first instinct should not be “they’re screwing me, how do I screw them more,” but rather “how can we work together to improve the situation?”

In the end, building hardware is a fundamentally social exercise. Generally, most interesting and unique processes aren’t automated, and as such, you have to work with other people to develop bespoke processes and products. Furthermore, physical things are inevitably owned or operated upon by other people, and understanding how to motivate and compel them will make a difference in not only your bottom line, but also in your schedule, quality, and service level. Until we can all have Tony Stark’s JARVIS robot to intelligently and automatically handle hardware fabrication, any person contemplating manufacturing hardware at scale needs to understand not only circuits and mechanics, but also how to inspire and effectively command a network of suppliers and laborers.

After all, “it’s people — supply chains are made out of people!”

by bunnie at December 18, 2014 11:02 AM

Name that Ware December 2014

The Ware for December 2014 is shown below.

Thanks again to dmo and QB for letting me photograph this ware.

Happy holidays!

by bunnie at December 18, 2014 08:22 AM

Winner, Name that Ware November 2014

The Ware for November 2014 is a linear actuator for ultra-high vacuum environments, with a pass-through. You can actually download a spec for the ware at 真空機器・部品.com. Thanks again to dmo and QB for letting me snag a couple wares and use them for the competition.

Albert got the correct first guess about it being a linear actuator for UHV environments (but missed the pass-through part). I really like Arnuschky’s detailed explanation, and he also identified the pass-through feature, so I’ll declare him the winner. Congrats, thanks for playing!

by bunnie at December 18, 2014 08:21 AM

December 17, 2014

Richard Hughes, ColorHug

Actually shipping AppStream metadata in the repodata

For the last couple of releases Fedora has been shipping the appstream metadata in a package. First it was the gnome-software package, but this wasn’t an awesome dep for KDE applications like Apper and was a pain to keep updated. We then moved the data to an appstream-data package, but that was just as much of a hack, albeit slightly more palatable for KDE. What I’ve wanted for a long time is to actually ship the metadata as metadata, i.e. next to the other files like primary.xml.gz on the mirrors.

I’ve just pushed the final patches to libhif, PackageKit and appstream-glib, which means that if you ship metadata of type appstream and appstream-icons in repomd.xml then they get downloaded automatically and decompressed into the right place so that gnome-software and Apper can use the data magically.
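
For reference, the entries that end up in repomd.xml look roughly like this (a sketch from my side: the checksum, timestamp and size elements are omitted, and the exact href values depend on how the files were added):

```xml
<data type="appstream">
  <location href="repodata/appstream.xml.gz"/>
</data>
<data type="appstream-icons">
  <location href="repodata/appstream-icons.tar.gz"/>
</data>
```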

I had not worked on this much before, as appstream-builder (which actually produces the two AppStream files) wasn’t suitable for the Fedora builders for two reasons:

  • Even just processing the changed packages, it took a lot of CPU, memory, and thus time.
  • Downloading screenshots from random websites all over the internet wasn’t something that a build server could do.

So, createrepo_c and modifyrepo_c to the rescue. This is what I’m currently doing for the Utopia repo.

createrepo_c --no-database x86_64/
createrepo_c --no-database SRPMS/
# modifyrepo_c takes the metadata file and the target repodata/ directory:
modifyrepo_c					\
	--no-compress				\
	/tmp/asb-md/appstream.xml.gz		\
	x86_64/repodata/
modifyrepo_c					\
	--no-compress				\
	/tmp/asb-md/appstream-icons.tar.gz	\
	x86_64/repodata/

If you actually do want to create the metadata on the build server, this is what I use for Utopia:

appstream-builder			\
	--api-version=0.8		\
	--origin=utopia			\
	--cache-dir=/tmp/asb-cache	\
	--enable-hidpi			\
	--max-threads=4			\
	--min-icon-size=48		\
	--output-dir=/tmp/asb-md	\
	--packages-dir=x86_64/		\
	--temp-dir=/tmp/asb-icons

For Fedora, I’m going to suggest pulling the data files in during compose. It’s not ideal, as it still needs a separate server to build them on (currently sitting in the corner of my office), but it gets us a step closer to what we want. Comments, as always, welcome.

by hughsie at December 17, 2014 08:50 PM

December 12, 2014

Free Electrons

DMAEngine Documentation: Work (finally) in Progress

While developing a DMA controller driver for the Allwinner A31 SoCs (which eventually got merged in the 3.17 kernel), I realised how under-documented the DMAEngine kernel subsystem was, especially for a newcomer like me.

After discussing this with a few other kernel developers in the same situation, I finally started working on such documentation during the summer, and ended up submitting it at the end of July. As you might expect, it triggered a lot of questions, comments and discussions that greatly enhanced the documentation itself, but also pointed out some inconsistencies in the API, obscure areas, and possible enhancements.

This also triggered an effort to clean up these areas, and hopefully a lot more will follow, eventually allowing the framework as a whole to be cleaned up.

And the good news is that this documentation has been merged by the DMAEngine maintainer and is visible in linux-next. Feel free to read it, and enhance it!

by Maxime Ripard at December 12, 2014 07:41 AM

December 11, 2014

Video Circuits

F & S Themerson

Here is a great early visual music film by Franciszka and Stefan Themerson from 1944/45, The Eye and the Ear.

by Chris at December 11, 2014 04:24 PM

Free Electrons

Linux 3.18 released, Free Electrons contributions

Linus Torvalds has recently released the 3.18 version of the Linux kernel. As usual, the merge window received excellent coverage: part 1, part 2 and part 3.

As of 3.18-rc6, statistics about the 3.18 kernel contributions rank Free Electrons as the 14th contributing company for this release in number of patches, right after MEV Limited and before Qualcomm.

A quick summary of our contributions:

  • Improvements to the support of Atmel ARM processors: addition of a memory driver for the RAM controller (Alexandre Belloni), improvements to the irqchip driver to support the new SAMA5D4 processor (Alexandre Belloni), updates to the defconfigs (Alexandre Belloni), new clock driver for the SAMA5D4 processor (Alexandre Belloni), preparation work for multi-platform (Boris Brezillon), numerous fixes to clock drivers (Boris Brezillon), NAND driver improvements (Boris Brezillon), new reset and poweroff drivers and moved all the corresponding logic to a Device Tree based description (Maxime Ripard), refactoring of the clocksource driver and move to the proper drivers/clocksource directory (Maxime Ripard).
  • Improvements to the support of Marvell EBU ARM processors: XOR driver improvements (Ezequiel Garcia), pin-muxing description in Device Tree for more platforms (Ezequiel Garcia), support for the RTC on Armada 375 (Grégory Clement), support for the Spread Spectrum Generator on Armada 370 (Grégory Clement), improvements to the support of the Armada 370 RD platform (Thomas Petazzoni), extensions to the cpufreq-dt driver to support platforms with independent clocks for each CPU, and various fixes.
  • Improvements to the support of Marvell Berlin ARM processors: add support for the Ethernet controller by re-using the existing pxa168_eth driver (Antoine Ténart).
  • Improvements to the support of Allwinner ARM processors: addition of the support for a phase property to the Common Clock Framework, and usage in the context of the MMC clock on Allwinner processors (Maxime Ripard).
  • Various small UBI improvements (Ezequiel Garcia).
  • A number of trivial fixes: removal of IRQF_DISABLED, typo fixes, etc. (Michael Opdenacker).

The detailed list of the patches we have contributed:

by Thomas Petazzoni at December 11, 2014 02:30 PM

November 28, 2014

Video Circuits

Gieskes 3TrinsRGB1

Gieskes has come out with a lovely little closed-architecture video synthesizer with a small break-out breadboard which opens the whole thing up to more interesting exploitation. Beautiful stuff.

by Chris at November 28, 2014 04:47 AM

November 27, 2014

Bunnie Studios

Name that Ware, November 2014

The Ware for November 2014 is shown below.

(No, it’s not my turkey baster. But happy Thanksgiving!)

Thanks to dmo & QB for allowing me to photograph this ware.

by bunnie at November 27, 2014 05:46 PM

Winner, Name that Ware October 2014

The Ware from October 2014 is the active element of an HP 4900A inkjet printhead. It’s a pretty neat example of a piece of silicon being used to manipulate liquids on a micro-scale to create macro-scale results.

The winner is Adrian for getting the first near-correct guess, although I really enjoyed Marcan’s detailed thoughts about the ware. Congrats, email me for your prize.

by bunnie at November 27, 2014 05:46 PM

Altus Metrum

keithp's rocket blog: Black Friday 2014

Altus Metrum's 2014 Black Friday Event


Altus Metrum announces two special offers for "Black Friday" 2014.

We are pleased to announce that both TeleMetrum and TeleMega will be back in stock and available for shipment before the end of November. To celebrate this, any purchase of a TeleMetrum, TeleMega, or EasyMega board will include, free of charge, one each of our 160, 400, and 850 mAh Polymer Lithium Ion batteries and a free micro USB cable!

To celebrate NAR's addition of our 1.9 gram recording altimeter, MicroPeak, to the list of devices approved for use in contests and records, and help everyone get ready for NARAM 2015's altitude events, purchase 4 MicroPeak boards and we'll throw in a MicroPeak USB adapter for free!

These deals will be available from 00:00 Friday, 28 November 2014 through 23:59 Monday, 1 December 2014. Only direct sales through our web store are included; no other discounts apply.

Find more information on all Altus Metrum products on our web site.

Thank you for your continued support of Altus Metrum in 2014. We continue to work on more cool new products, and look forward to meeting many of you on various flight lines in 2015!

November 27, 2014 07:47 AM

November 25, 2014


#oggstreamer – Repair Series 1 – channel not working / high gain

Some weeks ago I had to exchange an OggStreamer because one channel was not working – the user, Mike, recorded the following video to demonstrate the problem:

If you watch the video closely, you can see that the left channel is always amplifying the audio with a very high gain – only as he turns the gain up does the right channel appear. It is actually the right channel that is working correctly; the left one has a fault that gives it a very high gain.

Looking at the schematics reveals that a bad connection to the potentiometer (which is in the feedback loop) could cause such a high gain in the inverting amplifier:


So a loose connector is my first guess. My second guess would be a faulty potentiometer.
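
To spell out the failure mode (the resistor names are my own, not taken from the schematic): an inverting op-amp stage has its closed-loop gain set by the feedback resistance, here the potentiometer, so if that connection opens, the feedback resistance becomes effectively infinite and the gain heads toward the op-amp's open-loop value A_OL:

```latex
A_v = -\frac{R_f}{R_{in}},
\qquad
R_f \to \infty \;\Longrightarrow\; |A_v| \to A_{\mathrm{OL}}
```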

Let’s take the device apart:


Gently push the whole assembly (top and PCB) out of the extruded aluminium case:


A quick look around, and touching the cables, reveals the problem:


Because I had a spare contact lying around, I soldered on a new one; otherwise I could have recycled the original:


Time to put everything back together and check if it works:



Both channels work now :) Success!

by oggstreamer at November 25, 2014 12:26 PM

November 24, 2014

LZX Industries

Visual Cortex

Your key to a new creative dimension! Consolidating several previous LZX Industries modules, Visual Cortex is an integrated core module with all the essential functions required for modular video synthesis in the EuroRack format.  Visual Cortex is capable of a wide variety of processes, including external video processing, 2D shape/animation generation, complex colour mixing, automated transition control, luma and chroma keying, and much more.

Visual Cortex Technical Manual (pdf)
Visual Cortex Basic Patches (pdf) 

- Input Decoder prepares external video sources for processing inside your modular video synthesis system.  The video input is YPbPr (Component) for full color video processing.  The Y input may be used with NTSC/PAL sources if only monochromatic processing is desired.

- Output Encoder transforms any type of voltage — video, audio, CV, logic — into video signals suitable for display or recording.  Outputs include NTSC/PAL, S-Video and YPbPr (Component) video signals.

- Sync Generator produces master synchronizing pulses for your entire modular video synthesis system. It can lock to external sources for synchronized processing.  NTSC/480i and PAL/576i timing formats are supported.

- Animation & Key Generator is a multi-function low frequency control voltage generator designed specifically for video transitions, with an internal keyer as well.  It is capable of triggered, manual and automated transitions.

- Colourizer & Compositor is a multi-function analogue RGB processor designed to combine two sets of RGB signals into a single composite output.  Blending modes include crossfading, addition/subtraction, and multiplication. Special features include color inversion, solarization, and a triple band linear colourizer.

by Liz Larsen at November 24, 2014 05:55 PM

November 17, 2014


Torex XC6206 - CMOS LDO : weekend die-shot

Torex XC6206 is a popular and really tiny CMOS LDO, especially when you compare it to older bipolar ones, which were an order of magnitude larger. A 250 mA LDO in SOT-23 might be hard to believe at first.

The datasheet mentions "laser trimming", but we see the voltage set via mask and 2 fuses for fine tuning. It is possible, though, that they have common values set in the mask (like this 3.3V one), and rare voltages laser trimmed.

Die size 500x356 µm, 500nm technology.

Etching off metals:

November 17, 2014 04:49 AM

November 10, 2014


DIP 10 MHz quartz oscillator based on Seiko NPC HA5022A3 : weekend die-shot

Seiko NPC HA5022A3 contains internal load capacitors, an oscillator with amplitude limiting (for reduced power consumption) and an optional frequency divider.
Die size 976x770 µm.

The quartz crystal is mounted on springs - in order to reduce the impact of vibration on oscillation stability and to make damage less likely:

There is an oscillator IC soldered on the ceramic PCB, as well as a 0.01uF power supply decoupling cap. It seems we need to go deeper:

November 10, 2014 02:32 AM

November 03, 2014

Video Circuits

Just Jam Barbican

Got some work in this thing, coming up soon, along with lots of other nice artists and musicians.

by Chris at November 03, 2014 11:20 AM

Peter Chamberlain

Peter C has uploaded some amazing work from earlier in his career.

by Chris at November 03, 2014 11:16 AM

Analogue Video Workshop

So the Video Workshop went really well. We are planning to do more, but I also have a bunch of other related projects going on at the moment, so if you want to host one, send me an email and I am sure we could work something out. I currently have a DIY sync-gen on the workbench and have been generating a lot of footage; there’s nothing like meeting a bunch of people and getting them enthusiastic about video art to scare you into finishing some of your own work. Thanks to all who were involved in putting the workshop on, including Encounters, Seeing Sound, Arnolfini and Bath Spa University.

Oh yeah, and one of the attendees wrote us a very nice review, which I think is a really kind thing, and something I should do more often when I go to talks.

Here are two shots of Alex's video synthesis lesson.

by Chris at November 03, 2014 11:01 AM

Free Electrons

Yocto Project and OpenEmbedded training materials published

As we announced in our latest newsletter, we recently launched a new Yocto Project and OpenEmbedded development training course.

The first public session will take place in Toulouse, France on November 18-20 and we still have a few seats available. We can also deliver on-site sessions at the location of your choice, see our Training cost and registration page for more details.

However, what brings us here today is that we are happy to announce the release of all the training materials of this new course: like all Free Electrons training materials, they are available under the Creative Commons Attribution Share-Alike license.

Fully committed to its knowledge sharing principles, Free Electrons has chosen to publish those materials even before the first session has taken place.

The materials available are:

We of course welcome reviews, feedback and comments about these materials, in order to improve them where needed. Send us your comments!

by Thomas Petazzoni at November 03, 2014 10:55 AM

Video Circuits

Sismo VGA Box

Here is what looks like a simple VGA breakout box for Eurorack-standard synths – a pretty fun way to get into video synthesis. You could put something similar together in an afternoon and use any audio source as an input.

by Chris at November 03, 2014 10:31 AM

And sometimes you find something you really shouldn’t have missed:

"Secret Cinema was an email list that provided announcements of avant-garde & artists' film and video screenings in London. Secret Cinema ran from 2001-2011, and this blog covers events from 2006 onwards. Subscribers received information about more events than are listed on this website."

by Chris at November 03, 2014 10:15 AM

November 02, 2014

Kristian Paul

PiAware statistics

Got a cheap Raspberry Pi recently. Since the httpd on it was not eating a lot of CPU cycles, I decided to run something else that would also make use of my rtl-sdr. It’s called PiAware [1]. So far I’ve got interesting ADS-B statistics from my place [2]. Not bad.

As a side note, using rtl_power as a passive radar could be another interesting option, just for the stats ;-).


November 02, 2014 05:00 AM

October 31, 2014


BFR93 - BJT RF transistor : weekend die-shot

BFR93 is a popular NPN BJT RF transistor.
Die size 265x264 µm. The transistor itself occupies only a small part of the die - it is impractical to cut a smaller die; this one is already almost a silicon cube:

October 31, 2014 10:06 PM

Free Electrons

Call for participation for the FOSDEM Embedded developer room

FOSDEM is by far the largest and most vibrant open-source event in Europe. With 5000+ participants, 400+ talks in just two days, a completely free entrance with no registration required, and many topics covered, it has become over the years a major meeting event for open-source developers.

The 2015 edition will take place on January 31 and February 1st in Brussels. Like most years, a specific track dedicated to embedded systems is on the schedule, called the “Embedded Developer Room”. A call for participation has been published, and proposals are expected by December 1st.

It is worth mentioning that the scope of the FOSDEM Embedded Developer Room goes well beyond Embedded Linux: it covers all types of embedded systems, including micro-controller based development, fun hacking or do-it-yourself projects, and much more. Looking at last year’s schedule of the Embedded Devroom is a good way of getting a feeling for the topics that are covered.

Also, FOSDEM has many other tracks that can be interesting to embedded Linux developers: last year there was a track about Tracing and debugging, a track about Memory and Storage, a track about Hardware, a developer room about Graphics, etc.

So, save the date, and join FOSDEM 2015 in Brussels!

by Thomas Petazzoni at October 31, 2014 09:55 AM

October 30, 2014

Richard Hughes, ColorHug

appdata-tools is dead

PSA: If you’re using appdata-validate, please switch to appstream-util validate from the appstream-glib project. If you’re also using the M4 macro, just replace APPDATA_XML with APPSTREAM_XML. I’ll ship both the old binary and the old m4 file in appstream-glib for a little bit, but I’ll probably remove them again the next time we bump ABI. That is all. :)
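
For the configure.ac side, the change is mechanical (a sketch, assuming the macro is invoked without arguments):

```diff
-APPDATA_XML
+APPSTREAM_XML
```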

by hughsie at October 30, 2014 02:53 PM

October 28, 2014

Free Electrons

Videos of XDC2014 and Kernel Recipes 2014

Recently, two interesting conferences took place in France: the XDC 2014 developer conference (in Bordeaux, October 8th-10th) and the Kernel Recipes conference (in Paris, September 25th-26th).

Both conferences have now published videos and slides of the different talks:

  • for the XDC 2014 conference, they are available in the program page
  • for the Kernel Recipes conference, they are available from the schedule page

It also means that the video of the talk given by Free Electrons engineer Maxime Ripard about the support for Allwinner processors in the kernel is now available: video, slides.

by Thomas Petazzoni at October 28, 2014 08:54 PM

October 27, 2014

Free Electrons

Call for participation for the Embedded Linux Conference 2015

San Jose, California: The Embedded Linux Conference Europe is barely over, and it’s already time to think about the Embedded Linux Conference 2015, which will take place on March 23-25 in San Jose, California.

The call for participation has been published recently, and interested speakers are invited to submit their proposals before January 9th, 2015. Notifications of whether your talk is accepted will be sent on January 16th, and the final schedule is planned to be published on January 23rd.

At Free Electrons, we would really like to encourage developers who are working on interesting embedded Linux related projects to submit a talk about what they are doing: a talk about a specific open-source project, feedback from the experience of building an embedded Linux based product, etc. The scope of topics covered by the Embedded Linux Conference is fairly broad, so do not hesitate to submit a proposal. Giving a talk at this conference is really a great way of getting feedback about what you’re doing, raising awareness about a particular project or issue, and generally meeting other developers interested in similar topics.

It is worth mentioning that for those people whose talk is accepted, the entrance ticket is free. For hobbyists working on their own on open-source projects, the Linux Foundation also has the possibility of funding travel to the conference.

by Thomas Petazzoni at October 27, 2014 12:34 PM


10 MHz quartz SMD oscillator based on Seiko NPC SM5009 : weekend die-shot

Seiko NPC SM5009 contains internal load capacitors, an oscillator with amplitude limiting (for reduced power consumption) and an optional frequency divider.

Die size 1194x897 µm, 800nm technology.

October 27, 2014 04:59 AM

October 23, 2014

Free Electrons

Free Electrons team back from ELCE and Linux Plumbers

As we announced in an earlier blog post, the entire Free Electrons engineering team was at the Embedded Linux Conference Europe and Linux Plumbers Conference last week in Düsseldorf.

Free Electrons engineering team at the Embedded Linux Conference Europe 2014

Free Electrons engineering team at the Embedded Linux Conference Europe 2014. From left to right, Grégory Clement, Alexandre Belloni, Maxime Ripard, Antoine Ténart, Thomas Petazzoni, Boris Brezillon and Michael Opdenacker.

In addition to attending many talks and meeting developers of the embedded Linux community, thereby keeping up to date with the most recent developments in this domain, we also gave a number of talks, for which the slides are now available:

Boris Brezillon giving his DRM/KMS talk

Boris Brezillon giving his DRM/KMS talk

Maxime Ripard giving his Allwinner kernel talk

Maxime Ripard giving his Allwinner kernel talk

Thomas Petazzoni giving his Buildroot talk

Thomas Petazzoni giving his Buildroot talk

At the social event, from left to right: Grégory Clement (Free Electrons), Kevin Hilman (Linaro), Boris Brezillon (Free Electrons), Maxime Ripard (Free Electrons)

At the social event, from left to right: Grégory Clement (Free Electrons), Kevin Hilman (Linaro), Boris Brezillon (Free Electrons), Maxime Ripard (Free Electrons)

All the slides of the conference are also available on the event site of the Linux Foundation, and all talks have been video-recorded by the Linux Foundation so hopefully videos should become available in the near future.

by Thomas Petazzoni at October 23, 2014 08:39 PM

October 21, 2014

Free Electrons

Free Electrons registered as Yocto Project Participant.

Earlier this month, Free Electrons applied for and was elected a Yocto Project Participant by the Yocto Project Advisory Board. This badge is awarded to people and companies actively participating in the Yocto Project and promoting it.

We have mainly contributed to the meta-fsl-arm and meta-fsl-arm-extra layers but we also have some contributions in OpenEmbedded Core and in the meta-ti layer.

Free Electrons offers a Yocto Project and OpenEmbedded training course that we can deliver at your location, or that you can attend by joining one of our public sessions. Our engineers are also available to provide consulting and development services around the Yocto Project, to help you use this tool for your embedded Linux projects. Do not hesitate to contact us!

by Alexandre Belloni at October 21, 2014 08:32 AM

October 20, 2014


OnSemi MMBT2222A - npn BJT transistor : weekend die-shot

Die size 343x343 µm. Compared to the NXP BC847B, the die area is 1.5x larger (0.118 vs 0.076 mm²), but the maximum continuous collector current is 6 times higher (600 mA vs 100 mA, SOT-23 in both cases). This huge increase in current per die area is achieved by shunting the thin (high-resistance) base layer with metal. The high resistance of the base layer is the limiting factor for maximum collector current in the BC847B.

October 20, 2014 12:42 PM

NXP 2N7002 N-channel MOSFET : weekend die-shot

Die size 377x377 µm.

The hexagonal cells of the TrenchMOS transistor are 4 µm in size.

October 20, 2014 06:01 AM

October 19, 2014


Espressif ESP8266 WiFi-serial interface : weekend die-shot

Since August 2014, the internet has literally been blown up by WiFi-serial modules based on the new ESP8266 chip, which are currently being sold for less than $4. The Chinese company Espressif managed to cram an entire WiFi, TCP/IP and HTTP stack into on-chip memory, without external DRAM. The analog front-end requires minimal external components; all filters are internal. All this allowed them to offer an extremely aggressive price. The chip carries the marking ESP8089, which is their more advanced 40nm product. Apparently, the two only differ in bonding and ROM content.

Die size 2050x2169 µm, half of which is occupied by the transceiver and PA, 25% by on-chip memory (a rough size estimate is ~300 KiB), and the rest by the Xtensa LX106 CPU core and other digital logic.

The Chinese engineers did an outstanding job here of finally making WiFi IoT devices cost-effective. Let's hope Espressif will eventually open up more internal chip information for amateurs and end users.

October 19, 2014 08:32 PM

October 18, 2014

Bunnie Studios

Name that Ware October 2014

The Ware for October 2014 is shown below.

Very busy with getting Novena ready for shipping, and Chibitronics is ramping into full holiday season production. And then this darn thing breaks! Well, at least I got pictures to share.

Have fun!

by bunnie at October 18, 2014 11:32 AM

Winner, Name that Ware September 2014

The Ware from September 2014 was a Totalphase Beagle USB 480. Gratz to Nick Ames for having the first correct guess, email me for your prize! Unfortunately, none of the claims on the FPGA identification were convincing enough for me to accept them without having to do a lot of legwork of my own to verify.

by bunnie at October 18, 2014 11:31 AM

October 15, 2014

Richard Hughes, ColorHug

GNOME Software and Fonts

A few people have asked me now “How do I make my font show up in GNOME Software” and until today my answer has been something along the lines of “mrrr, it’s complicated“.

What we used to do is treat each font file in a package as an application, and then try to merge them together using some metrics found in the font and 444 semi-automatically generated AppData files from a manually updated .csv file. This wasn’t ideal, as fonts were being renamed, added and removed, which quickly made the .csv file obsolete. The summaries and descriptions were not translated and were hard to modify. We used the pre-0.6 format AppData files, as the MetaInfo specification did not exist when this stuff was hacked up just in time for Fedora 20.

I’ve spent the better part of today making this a lot more sane, but in the process I’m going to need a bit of help from packagers in Fedora, and maybe even helpful upstreams. These are the notes of what I’ve got so far:

Font components are supersets of font faces, so we’d include fonts together that make a cohesive set; for instance, “SourceCode” would consist of “SourceCodePro“, “SourceSansPro-Regular” and “SourceSansPro-ExtraLight“. This is so the user can press one button and get a set of fonts, rather than having to install something new when they’re in an application designing something. Font components need a one-line summary for GNOME Software, and optionally a long description. The icon and screenshots are automatically generated.

So, what do you need to do if you maintain a package with a single font, or where all the fonts are shipped in the same (sub)package? Simply ship a file like this as /usr/share/appdata/Liberation.metainfo.xml:

<?xml version="1.0" encoding="UTF-8"?>
<!-- Copyright 2014 Your Name <you@domain> -->
<component type="font">
  <summary>Open source versions of several commercial fonts</summary>
  <description>
    <p>
      The Liberation Fonts are intended to be replacements for Times New Roman,
      Arial, and Courier New.
    </p>
  </description>
  <url type="homepage"></url>
</component>

There can be up to 3 paragraphs of description, and the summary has to be just one line. Try to avoid too much technical content here; this is designed to be shown to end-users who probably don’t know what TTF means or what MSCoreFonts are.

It’s a little more tricky when there are multiple source tarballs for a font component, or when the font is split up into subpackages by a packager. In this case, each subpackage needs to ship something like this as /usr/share/appdata/LiberationSerif.metainfo.xml:

<?xml version="1.0" encoding="UTF-8"?>
<!-- Copyright 2014 Your Name <you@domain> -->
<component type="font">
  <extends>Liberation</extends>
</component>

This won’t end up in the final metadata (or be visible) in the software center, but it will tell the metadata extractor that LiberationSerif should be merged into the Liberation component. All the automatically generated screenshots will be moved to the right place too.

Moving the metadata to font packages makes the process much more transparent, letting packagers write their own descriptions and actually influence how things show up in the software center. I’m happy to push some of my existing content from the .csv file upstream.

These MetaInfo files are not supposed to replace the existing fontconfig files, nor do I think they should be merged into one file or format. If your package just contains one font used internally, or where there is only partial coverage of the alphabet, I don’t think we want to show this in GNOME Software, and thus it doesn’t need any new MetaInfo files.

by hughsie at October 15, 2014 01:48 PM

October 12, 2014

Free Electrons

Linux 3.17 released, Free Electrons 14th contributing company

Linux 3.17 was released a few days ago. One can read the coverage of the 3.17 merge window by LWN (part 1 and part 2) to get some details about the new features brought by this kernel release.

As usual, Free Electrons has contributed a significant number of patches to this kernel release, although, with 147 patches merged, our contribution was smaller than for the 3.16 release, for which we contributed 388 patches. Still, Free Electrons is the 14th contributing company by number of patches.

Our contributions remain mainly focused on support for various families of ARM processors:

  • For the Atmel processors
    • Switched to the generic PWM framework instead of custom PWM drivers. This allowed the removal of three obsolete drivers (a backlight driver, a LED driver and a misc driver). This work was done by Alexandre Belloni.
    • Continued the migration to the common clock framework, adding clock information to a large number of Atmel boards. Done by Alexandre Belloni.
    • Migration of the interrupt controller driver from arch/arm/mach-at91 to drivers/irqchip. Done by Boris Brezillon.
  • For the Marvell EBU processors (Armada 370, 375, 38x, XP)
    • Addition of the mvpp2 network driver, which is used on the Armada 375 SoC. This work was done by Marcin Wojtas from Semihalf, with a lot of review, help and debugging done by Ezequiel Garcia.
    • Addition of cpuidle support for Armada 370 and Armada 38x. This work was done by Grégory Clement and Thomas Petazzoni.
    • Preparation work to enable cpufreq on Armada XP was merged. However the feature cannot be enabled yet due to missing features in the cpufreq-cpu0 driver. Done by Thomas Petazzoni.
  • For Marvell Berlin processors
    • SMP support has been added. Done by Antoine Ténart.
    • Description of the I2C controller has been added to the Device Tree. Done by Antoine Ténart.
    • Support for AHCI has been added. Also done by Antoine Ténart.
  • For Allwinner processors
    • New DMA controller driver for the DMA engine of the Allwinner A31 SoC. Done by Maxime Ripard.
    • A number of fixes and improvements to the pin-muxing driver for Allwinner platforms. Done by Maxime Ripard.
    • Support for the Merrii A31 Hummingbird board has been added. Done by Maxime Ripard.
  • Other changes
    • Addition of a helper function to convert an ONFI timing mode into the corresponding NAND timings. Done by Boris Brezillon.
    • Addition of a driver for the Foxlink FL500WVR00-A0T panel. Done by Boris Brezillon.

The detailed list of our contributions:

by Thomas Petazzoni at October 12, 2014 03:50 PM

October 08, 2014

Free Electrons

Xenomai 2.6.4 released, with Atmel SAMA5D3 support

At the end of September, the Xenomai project announced the release of version 2.6.4. For the record, Xenomai is a hard real-time extension to the Linux kernel.

Amongst a number of bug fixes and improvements, this new release brings an interesting new feature to which Free Electrons contributed: support for the Atmel SAMA5D3. This means that Xenomai can now be used on platforms such as the SAMA5D3 Xplained and any other SAMA5D3-based platform. This work was done by Xenomai ARM maintainer Gilles Chanteperdrix, thanks to the testing and insights of Free Electrons engineer Maxime Ripard.

The main change needed was supporting the AIC5 interrupt controller used in the SAMA5D3, which is different from the interrupt controller used on earlier AT91 processors. This change should also provide compatibility with the recently released SAMA5D4, though we haven’t tested this yet, and Xenomai only provides its patch up to kernel 3.14, while SAMA5D4 support was only recently added to the mainline kernel.

This 2.6.4 Xenomai release also brings support for the 3.14 kernel version, through the corresponding I-Pipe patch.

There is also some other interesting Xenomai news: in early October, the project released the first release candidate of Xenomai 3, the next-generation Xenomai architecture. They also have a brand new and more modern website.

by Thomas Petazzoni at October 08, 2014 07:45 PM

Video Circuits

Why Not, Jim Sosnin (1980)

A recent upload by Jim Sosnin, drawn to my attention by the ever-vigilant Jeffrey:

"Video synthesis demo from 1980, realised using EMS Spectron video synth plus some homebrew gear. The audio was created in 1978 using 3 Transaudio synths linked together. This digital transfer via an old U-matic (3/4-inch format) VCR, repaired for the occasion, to retrieve original stereo audio (my more recent VHS copy had mono audio only)."

by Chris at October 08, 2014 06:44 AM

October 06, 2014


Analog Devices AD558 - MIL-Spec 8-bit I²L DAC : weekend die-shot

Analog Devices AD558 is an 8-bit I²L DAC in a ceramic package (MIL spec).

It is still an open question how this chip got into the ex-USSR/Russia: the anonymous reader left no comments on that (this smells like the cold war...). It is no secret that Russia had no extensive civilian IC assortment in manufacturing, hence all military ICs had to be designed and manufactured from scratch (i.e. all R&D, prototypes and masks had to be paid for by the government). Under such conditions, providing the full variety of domestic ICs is economically impossible, at least without government spending comparable to the whole world's expenditure on IC R&D. So the "temporary", "case-by-case" permit to use imported (both legitimately and not-so-legitimately) western ICs in military equipment "until domestic products are ready" is still here after 24 years, despite numerous attempts to end this practice.

Die size 2713x2141 µm, 6 µm manufacturing technology; the trimming laser left ~8 µm diameter spots.

Oh, these rounded resistors are just beautiful... Autorouters in 2014, do you see this?
Note how the amount of laser trimming on the R-ladder differs from bit to bit.

PS. Could anyone share the position of western engineers on plastic vs. ceramic packages for military/space-grade ICs? It appears modern plastic packages offer more benefits (like better G-shock/vibration reliability, and obviously cost) without sacrificing anything (temperature range and moisture are less of a concern now, and radiation was never a concern for the package).

October 06, 2014 09:52 PM

October 05, 2014

Andrew Zonenberg, Silicon Exposed

Electronic Privacy: A Realist's Perspective

    Note: I originally wrote this in a Facebook note in March 2012, long before any of the recent leaks. I figured it'd be of interest to a wider audience so I'm re-posting it here.
    There's been a lot of hullabaloo lately about Google's new privacy policy etc so I decided to write up a little article describing my personal opinions on the subject.
    Note that I'm describing defensive policies which may be a bit more cynical than most people's, and not considering relevant laws or privacy policies at all. The assumption being made here is that if it's possible, and someone wants it to happen enough, they will make it happen regardless of whether it's legal.

    RULE 1: If it's on someone else's server, and not encrypted, it's public information.
    Rationale: Given the ridiculous number of data breaches we've had lately it's safe to say that any sufficiently motivated and funded person / agency could break into just about any company storing data they're interested in. On top of this, in many countries government agencies have a history of sending companies subpoenas asking for data they're interested in, which is typically forked over with little or no question.
    This goes for anything from your Facebook profile to medical/financial records to email.
    RULE 1.1: Privacy settings/policies keep honest people honest.
    Rationale: Hackers and government agencies, especially foreign ones, don't have to play by the rules. Services have bugs. Always assume that your privacy settings are wide open and set them tighter only as an additional (small) layer of defense.
    RULE 2: If it's encrypted, but you don't control the key completely, it's public information.
    Rationale: Encryption is only as good as your key management. If somebody else has the key they're a potential point of failure. Want to bet $COMPANY's key management isn't as good as yours? Also, if $COMPANY can be forced/tricked/hacked into turning over the key without your knowledge, the data is as good as public anyway.
    RULE 3: If someone can talk to it, they can root it.
    Rationale: It's pretty much impossible to say "there are no undiscovered bugs in this code" so it's safest to assume the worst... there is a bug in your operating system / installed software and anyone with enough time or money can find or buy an 0day. Want to bet there are NO security-related bugs in the code your box is running? Me neither. If your system isn't airgapped assume it could have been pwned.
    RULE 4: If it goes over an RF link and isn't end-to-end encrypted, it's public information.
    Rationale: This includes wifi (even with most grades of WEP/WPA encryption), cellular links, and everything else of that nature. Sure, the carrier may be encrypting your SMS/voice calls with some proprietary scheme of uncertain security, but they have the key so Rule 2 applies.
    RULE 5: If you have your phone with you, your whereabouts and anything you say is public information.
    Rationale: This can be derived from Rule 3. Your phone is just a computer and third parties can communicate with it. Since it includes a microphone and GPS, assume the device has been rooted and they're logging to $BADGUY on a 24/7 basis.
    RULE 6: All available data about someone/something can and will be correlated.
    Rationale: If two points of data can be identified as related, someone will figure out a way to combine them. Examples include search history (public according to Rule 1), identical usernames/emails/passwords used on different services, and public records. If someone knows that JoeSchmoe1234 said $FOO on a gaming site and someone else called JoeSchmoe1234 said $BAR on a hacking site, it's a pretty safe bet both comments came from the same person, who's interested in gaming and hacking.
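Rule 6 is easy to mechanize. As a toy illustration (all records below are fabricated), correlating two unrelated account dumps is just a join on the shared username:

```python
# Toy illustration of Rule 6: joining records from two unrelated
# services on a shared username. All records here are fabricated.

from collections import defaultdict

gaming_site = [
    {"user": "JoeSchmoe1234", "post": "$FOO"},
    {"user": "alice42", "post": "new speedrun personal best"},
]
hacking_site = [
    {"user": "JoeSchmoe1234", "post": "$BAR"},
    {"user": "bob_h", "post": "writeup: fun with 0days"},
]

def correlate(*dumps):
    """Group posts by username across all dumps; usernames appearing
    in more than one record are correlation hits."""
    profiles = defaultdict(list)
    for dump in dumps:
        for record in dump:
            profiles[record["user"]].append(record["post"])
    return {u: posts for u, posts in profiles.items() if len(posts) > 1}

assert correlate(gaming_site, hacking_site) == \
    {"JoeSchmoe1234": ["$FOO", "$BAR"]}
```

Real correlation engines join on far fuzzier keys (writing style, login times, IP ranges), but the principle is the same join.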

by Andrew Zonenberg at October 05, 2014 12:24 AM

Why Apple's iPhone encryption won't stop NSA (or any other intelligence agency)

Recent news headlines have made a big deal of Apple encrypting more of the storage on their handsets, and claiming to not have a key. Depending on who you ask this is either a huge win for privacy, or a massive blow to intelligence collection and law enforcement capabilities. I'm going to try avoiding expressing any opinions of government policy here and focus on the technical details of what is and is not possible - and why disk encryption isn't as much of a major game-changer as people seem to think.

Matthew Green at Johns Hopkins wrote a very nice article on the subject recently, but there are a few points I feel it's worth going into more detail on.

The general case here is that of two people, Alice and Bob, communicating with iPhones while a third party, Eve, attempts to discover something about their communications.

First off, the changes in iOS 8 encrypt data at rest on disk only. Voice calls, SMS, and Internet packets still cross the carrier's network in cleartext, and carriers are legally required (by CALEA in the United States, and similar laws in other countries) to provide a means for law enforcement or intelligence to access this data.

In addition, if Eve can get within radio range of Alice or Bob, she can record the conversation off the air. Although the radio links are normally encrypted, many of these cryptosystems are weak and can be defeated in a reasonable amount of time by cryptanalysis. Numerous methods are available for executing man-in-the-middle attacks between handsets and cell towers, which can further enhance Eve's interception capabilities.

Second, if Eve is able to communicate with Alice or Bob's phone directly (via Wi-Fi, SMS, MITM of the radio link, MITM further upstream on the Internet, physical access to the USB port, or using spearphishing techniques to convince them to view a suitably crafted e-mail or website) she may be able to use an 0day exploit to gain code execution on the handset and bypass any/all encryption by reading the cleartext out of RAM while the handset is unlocked. Although this does require that Eve have a staff of skilled hackers to find an 0day, or deep pockets to buy one, when dealing with a nation/state level adversary this is hardly unrealistic.

Although this does not provide Eve with the ability to exfiltrate the device encryption key (UID) directly, that is unnecessary if the cleartext can be read anyway. This is a case of a general trend we've been seeing for a while: encryption is no longer the weakest link, so attackers figure out ways to get around it rather than smash through it.

Third, in many cases the contents of SMS/voice are not even required. If the police wish to geolocate the phone of a kidnapping victim (or a suspect) then triangulation via cell towers and the phone's GPS, using the existing e911 infrastructure, may be sufficient. If intelligence is attempting to perform contact tracing from a known target to other entities who might be of interest, then the "who called who when" metadata is of much more value than the contents of the calls.

There is only one situation where disk encryption is potentially useful: if Alice or Bob's phone falls into Eve's hands while locked and she wishes to extract information from it. In this narrow case, disk encryption does make it substantially more difficult, or even impossible, for Eve to recover the cleartext of the encrypted data.

Unfortunately for Alice and Bob, a well-equipped attacker has several options here (which may vary depending on exactly how Apple's implementation works; many of the details are not public).

If the Secure Enclave code is able to read the UID key, then it may be possible to exfiltrate the key using software-based methods. This could potentially be done by finding a vulnerability in the Secure Enclave (as was previously done with the TrustZone kernel on Qualcomm Android devices to unlock the bootloader). In addition, if Eve works for an intelligence agency, she could potentially send an NSL to Apple demanding that they write firmware, or sign an agency-provided image, to dump the UID off a handset.

In the extreme case, it might even be possible for Eve to compromise Apple's network and exfiltrate the certificate used for signing Secure Enclave images. (There is precedent for this sort of attack - the authors of Stuxnet appear to have stolen a driver-signing certificate from Realtek.)

If Apple did their job properly, however, the UID is completely inaccessible to software and is locked up in some kind of on-die hardware security module (HSM). This means that even if Eve is able to execute arbitrary code on the device while it is locked, she must bruteforce the passcode on the device itself - a very slow and time-consuming process.

In this case, an attacker may still be able to execute an invasive physical attack. By depackaging the SoC, etching or polishing down to the polysilicon layer, and looking at the surface of the die with an electron microscope the fuse bits can be located and read directly off the surface of the silicon.

E-fuse bits on polysilicon layer of an IC (National Semiconductor DMPAL16R). Left side and bottom right fuses are blown, upper right is conducting. (Note that this is a ~800nm process, easily readable with an optical microscope. The Apple A7 is made on a 28nm process and would require an electron microscope to read.) Photo by John McMaster, CC-BY
Since the key is physically burned into the IC, once power is removed from the phone there's no practical way for any kind of self-destruct to erase it. Although this would require a reasonably well-equipped attacker, I'm pretty confident based on my previous experience that I could do it myself, with equipment available to me at school, if I had a couple of phones to destructively analyze and a few tens of thousands of dollars to spend on lab time. This is pocket change for an intelligence agency.

Once the UID is extracted, and the encrypted disk contents dumped from the flash chips, an offline bruteforce using GPUs, FPGAs, or ASICs could be used to recover the key in a fairly short time. Some very rough numbers I ran recently suggest that a 6-character upper/lowercase alphanumeric password hashed with a single round of SHA-1 could be bruteforced in around 25 milliseconds (1.2 trillion guesses per second) by a 2-rack, 2500-chip FPGA cluster costing less than $250,000. Luckily, the iPhone uses an iterated key-derivation function, which is substantially slower.

The key derivation function used on the iPhone takes approximately 50 milliseconds on the iPhone's CPU, which comes out to about 70 million clock cycles. Performance studies of AES on a Cortex-A8 show about 25 cycles per byte for encryption plus 236 cycles for the key schedule. The key schedule setup only has to be done once so if the key is 32 bytes then we have 800 cycles per iteration, or about 87,500 iterations.

It's hard to give exact performance numbers for AES bruteforcing on an FPGA without building a cracker, but if pipelined to one guess per clock cycle at 400 MHz (reasonable for a modern 28nm FPGA) an attacker could easily get around 4500 guesses per second per hash pipeline. Assuming at least two pipelines per FPGA, the proposed FPGA cluster would give 22.5 million guesses per second - sufficient to break a 6-character case-sensitive alphanumeric password in around half an hour. If we limit ourselves to lowercase letters and numbers only, it would only take 45 seconds instead of the five and a half years Apple claims bruteforcing on the phone would take. Even 8-character alphanumeric case-sensitive passwords could be within reach (about eight weeks on average, or faster if the password contains predictable patterns like dictionary words).
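The whole estimate chain above can be reproduced as a quick back-of-envelope script. Every constant below is one of the article's assumptions rather than a measurement; the 1.4 GHz CPU clock is inferred from 50 ms corresponding to ~70 million cycles.

```python
# Back-of-envelope reproduction of the bruteforce estimates above.
# Every constant is an assumption from the text, not a measurement.

cpu_hz = 1.4e9               # inferred iPhone CPU clock (50 ms = 70M cycles)
kdf_seconds = 0.050          # ~50 ms per key derivation on the phone
cycles_per_iter = 25 * 32    # AES at ~25 cycles/byte over a 32-byte key

kdf_cycles = kdf_seconds * cpu_hz           # ~70 million cycles
iterations = kdf_cycles / cycles_per_iter   # ~87,500 KDF iterations

fpga_hz = 400e6                             # FPGA clock, one iteration/cycle
guesses_per_pipeline = fpga_hz / iterations # ~4,500 guesses/s
cluster_rate = 2500 * 2 * guesses_per_pipeline  # 2500 FPGAs, 2 pipelines each

alnum6 = 62 ** 6             # 6-char case-sensitive alphanumeric keyspace
lower6 = 36 ** 6             # 6-char lowercase + digits keyspace
alnum8 = 62 ** 8             # 8-char case-sensitive alphanumeric keyspace

print("KDF iterations:  ~%d" % iterations)
print("cluster rate:    ~%.1fM guesses/s" % (cluster_rate / 1e6))
print("62^6 worst case: ~%.0f min" % (alnum6 / cluster_rate / 60))
print("36^6 average:    ~%.0f s" % (lower6 / 2 / cluster_rate))
print("62^8 average:    ~%.1f weeks" % (alnum8 / 2 / cluster_rate / 86400 / 7))
```

The script lands on the figures quoted in the text: roughly 87,500 iterations, ~22.5 million guesses per second for the cluster, around 40 minutes worst case for the 62^6 keyspace, under a minute on average for 36^6, and about eight weeks on average for 62^8.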

by Andrew Zonenberg at October 05, 2014 12:24 AM

October 03, 2014

Andrew Zonenberg, Silicon Exposed

Threat modeling for FPGA software backdoors

I've been interested in the security of compilers and related toolchains ever since I first read about Ken Thompson's compiler backdoor many years ago. In a nutshell, this famous backdoor does two things:

  • Whenever the backdoored C compiler compiles the "login" command, it adds a second code path that accepts a hard-coded default password in addition to the user's actual password
  • Whenever the backdoored C compiler compiles the unmodified source code of itself, it adds the backdoor to the resulting binary.
The end result is a compiler that looks fine at the source level, silently backdoors a critical system file at compilation time, and reproduces itself.
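The two trigger conditions can be sketched in a few lines. This is a toy model in Python, not Thompson's actual C code; "compilation" is simulated with string handling, and the trojan payload and trigger strings are invented for illustration.

```python
# Toy sketch of Thompson's self-reproducing compiler backdoor.
# The "compiler" is simulated with string handling; only the two
# trigger conditions from the description above are modeled.

LOGIN_TROJAN = 'if password == "magic123": grant_access()  # injected'

def trojan_source():
    # The backdoor's own logic as source text, so the second trigger
    # can re-plant it when the compiler compiles itself.
    return "# ...backdoor insertion logic, reproduced verbatim..."

def backdoored_compile(source):
    output = "compiled(" + source + ")"
    if "check_password" in source:          # looks like the login program
        output += "\n" + LOGIN_TROJAN       # add a second accepted password
    if "def backdoored_compile" in source:  # looks like the compiler itself
        output += "\n" + trojan_source()    # reproduce the backdoor
    return output

# Clean-looking login source still yields a backdoored binary:
assert LOGIN_TROJAN in backdoored_compile("def login(): check_password(user)")

# Compiling the compiler's clean source re-plants the backdoor:
assert trojan_source() in backdoored_compile("def backdoored_compile(source): ...")
```

The key property is the second branch: once the binary is backdoored, the backdoor survives even if every line of source code is audited and found clean.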

Recently, there has also been a lot of concern over backdoors in integrated circuits (either added at the source code level by a malicious employee, or at the netlist/layout level by a third-party fab). DARPA even has a program dedicated to figuring out ways of eliminating or detecting such backdoors. A 2010 paper stemming from the CSAW Embedded Systems Challenge presents a detailed taxonomy of such hardware Trojans.

As far as I can tell, the majority of research into hardware Trojans has been focused on detecting them, assuming the adversary has managed to backdoor the design in some way that provides him with a tactical or strategic advantage. I have had difficulty finding detailed threat modeling research quantifying the capability of the adversary under particular scenarios.

When we turn our attention to FPGAs, things quickly become even more interesting. There are several major differences between FPGAs and ASICs from a security perspective which may grant the adversary greater or lesser capability than with an ASIC.

Attacks at the IC fab

The function of an ASIC is largely defined at fab time (except for RAM-based firmware) while FPGAs are extremely flexible. When trying to backdoor FPGA silicon the adversary has no idea what product(s) the chip will eventually make it into. They don't even know which pins on the die will be used as inputs and which as outputs.

I suspect that this places substantial bounds on the capability of an attacker "Malfab" targeting FPGA silicon at the fab (or pre-fab RTL/layout) level since the actual RTL being targeted does not even exist yet. To start, we consider a generic FPGA without any hard IP blocks:
  • Malfab does not know which flipflops/SRAM/LUTs will eventually contain sensitive data, nor what format this data may take.
  • Malfab does not know which I/O pins may be connected to an external communications interface useful for command-and-control.
As a result, his only option is to create an extremely generic backdoor. At this level, the only thing that makes sense is to connect all I/O pins (perhaps through scan logic) to a central malware logic block which provides the ability to read (and possibly modify) all state in the device. This most likely would require two major subsystems:
  • A detector, which searches I/Os for a magic sync sequence
  • A connection from that detector to the FPGA's internal configuration access port (ICAP), used for partial reconfiguration and readback.
The design of this protocol would be very challenging since the adversary does not know anything about the external interfaces the pin may be connected to. The FPGA could be in a PLC or similar device whose only external contact is RS-232 serial on a single input pin. Perhaps it is in a network router/switch using RGMII (4-bit parallel with double data rate signalling).
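To make the detector half of such a Trojan concrete, here is a minimal sketch of a bit-serial sync-word correlator, the kind of structure the malware block would need on every pin. The 64-bit magic value and all names are invented for illustration:

```python
# Hypothetical sketch of a generic Trojan "detector": a shift register
# that watches one pin's sampled bit stream for a magic sync word.

MAGIC = 0xDEADBEEFCAFEF00D   # invented 64-bit sync word
MAGIC_BITS = 64

def make_detector():
    """Return a function that is fed one sampled bit at a time and
    reports True once the last 64 bits observed equal the magic word."""
    state = {"shift": 0}
    mask = (1 << MAGIC_BITS) - 1
    def clock(bit):
        state["shift"] = ((state["shift"] << 1) | (bit & 1)) & mask
        return state["shift"] == MAGIC
    return clock

# Feed the detector some unrelated traffic followed by the magic pattern.
detector = make_detector()
fired_at = None
stream = [0, 1, 1, 0] + [(MAGIC >> (63 - i)) & 1 for i in range(64)]
for i, bit in enumerate(stream):
    if detector(bit):
        fired_at = i
assert fired_at == len(stream) - 1   # fires on the final magic bit
```

Note what the sketch quietly assumes away: a real attacker does not know the bit rate, line encoding, framing, or even whether the pin carries serial data at all, which is exactly why a sufficiently generic version of this detector is hard to build.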

I am not aware of any published work on the feasibility of such a backdoor; however, I am skeptical that a sufficiently generic Trojan could be made simple enough to evade even casual reverse engineering of the I/O circuitry, and fast enough not to seriously cripple performance of the device.


Unfortunately for our defender Alice, modern FPGAs often contain hard IP blocks such as SERDES and RAM controllers. These present a far more attractive target to Malfab as their functionality is largely known in advance.

It is not hard to imagine, for example, a malicious patch to the RAM controller block which searches each byte group for a magic sync sequence written to consecutive addresses, then executes commands from the next few bytes. As long as Malfab is able to cause the target's system to write data of his choice to consecutive RAM addresses (perhaps by sending it as the payload of an Ethernet frame, which is then buffered in RAM) he can execute arbitrary commands on the backdoor engine. If one of these commands is "write data from SLICE_X37Y42.A5FF to RAM address 0xdeadbeef", and Malfab can predict the location of a transmit buffer of some sort, he now has the ability to exfiltrate arbitrary state from Alice's hardware.
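The trigger mechanism described above can be sketched in a few lines. Everything here (the magic byte sequence, the command format, the class names) is invented for illustration, not taken from any real controller:

```python
# Hedged sketch of the hypothesized RAM-controller Trojan: it watches the
# write stream for a magic byte sequence at consecutive addresses, then
# records where the attacker's command bytes would begin.

MAGIC = bytes([0x5A, 0xC3, 0x99, 0x17])   # invented trigger sequence

class TrojanRamController:
    def __init__(self):
        self.ram = {}
        self.window = []          # (addr, byte) of the most recent writes
        self.triggered_cmds = []  # addresses where command bytes start

    def write(self, addr, byte):
        self.ram[addr] = byte
        self.window.append((addr, byte))
        self.window = self.window[-len(MAGIC):]
        # Trigger: the magic bytes written to consecutive addresses.
        if (len(self.window) == len(MAGIC)
                and bytes(b for _, b in self.window) == MAGIC
                and all(self.window[i + 1][0] == self.window[i][0] + 1
                        for i in range(len(MAGIC) - 1))):
            self.triggered_cmds.append(addr + 1)  # command bytes follow

# An attacker-controlled payload (e.g. an Ethernet frame buffered to RAM)
# lands the magic at consecutive addresses and fires the trigger.
ctrl = TrojanRamController()
for addr, b in enumerate(MAGIC, start=0x1000):
    ctrl.write(addr, b)
assert ctrl.triggered_cmds == [0x1004]
```

The sketch shows why the hard-IP case is so much more attractive than the generic-pin case: the attacker knows in advance that the block will see addressed byte writes, so a simple consecutive-address match suffices as a covert channel.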

I thus conjecture that the only feasible way to backdoor a modern FPGA at fab time is through hard IP. If we ensure that the JTAG interface (the one hard IP block whose use cannot be avoided) is not connected to attacker-controlled interfaces, use off-die SERDES, and use softcore RAM controllers on non-standard pins, it is unlikely that Malfab will be able to meaningfully affect the security of the resulting circuit.

Attacks on the toolchain

We now turn our attention to a second adversary, Maldev - the malicious development tool. Maldev works for the FPGA vendor, has compromised the source repository for their toolchain, has MITMed the download of the toolchain installer, or has penetrated Alice's network and patched the software on her computer.

Since FPGAs are inherently closed systems (more so than ASICs, in which multiple competing toolchains exist), Alice has no choice but to use the FPGA vendor's binary-blob toolchain. Although it is possible in theory for a dedicated team with sufficient time and budget to reverse engineer the FPGA and/or toolchain and create a trusted open-source development suite, I discount the possibility for the scope of this section since a fully trusted toolchain is presumably free of interesting backdoors ;)

Maldev has many capabilities that Malfab lacks, since he can execute arbitrary code on Alice's computer. Assuming that Alice is (justifiably) paranoid about the provenance of her FPGA software and runs it on a dedicated machine in a DMZ (so that it cannot infect the remainder of her network), this is equivalent to having full access to her RTL and netlist at all stages of design.

If Alice gives her development workstation Internet access, Maldev now has the ability to upload her RTL source and/or netlist, modify it at will on his computer, and then push patches back. This is trivially equivalent to a full defeat of the entire system.

Things become more interesting when we cut off command-and-control access. This is a realistic scenario if Alice is a military/defense user doing development on a classified network with no Internet connection.

The simplest attack is for Maldev to store a list of source file hashes and patches in the compromised binary. While this is very limited (custom-developed code cannot be attacked at all), many design teams are likely to use a small set of stock communications IP such as the Xilinx Tri-Mode Ethernet MAC, so patching these may be sufficient to provide him with an attack vector on the target system. Looking for AXI interconnect IP provides Maldev with a topology map of the target SoC.

Another option is graph-based analytics on the netlist at various stages of synthesis. For example, by looking for a 32-bit register initialized to 0x67452301, which is in a strongly connected component with three other registers initialized to 0xefcdab89, 0x98badcfe, and 0x10325476, Maldev can say with a high probability that he has found an implementation of MD5 and located the state registers. By looking for a 128-bit comparator between these values and another 128-bit value, a hash match check has been found (and a backdoor may be inserted). Similar techniques may be used to look for other cryptography.
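The constant-matching half of that analysis is simple to implement. Below is an illustrative sketch; the dict-of-init-values "netlist" is a toy stand-in for a real post-synthesis netlist, and the register names are invented:

```python
# Flag a probable MD5 core by its well-known state-register
# initialization constants, as described in the text.

MD5_INIT = {0x67452301, 0xEFCDAB89, 0x98BADCFE, 0x10325476}

def find_md5_candidates(registers):
    """registers: dict of name -> 32-bit init value. Returns the names of
    registers whose init values together cover all four MD5 constants."""
    hits = {name: val for name, val in registers.items() if val in MD5_INIT}
    if set(hits.values()) == MD5_INIT:
        return sorted(hits)
    return []

netlist = {
    "ctrl_fsm_state": 0x00000001,
    "hash_a": 0x67452301, "hash_b": 0xEFCDAB89,
    "hash_c": 0x98BADCFE, "hash_d": 0x10325476,
}
assert find_md5_candidates(netlist) == ["hash_a", "hash_b", "hash_c", "hash_d"]
```

A real tool would additionally require the strongly-connected-component check described above, since the four constants alone could appear coincidentally (e.g. in a test vector ROM) without forming an actual MD5 datapath.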


If FPGA development is done using silicon purchased before the start of the project, on an air-gapped machine, and without using any pre-made IP, then some bounds can clearly be placed on the adversary's capability.

I have not seen any formal threat modeling studies on this subject, although I haven't spent a ton of time looking for them due to research obligations. If anyone is aware of published work in this field I'm extremely interested.

by Andrew Zonenberg at October 03, 2014 10:28 PM

Bunnie Studios

Novena Update

It’s been four months since we finished Novena’s crowd funding campaign, and we’ve made a lot of progress since then. A team of people has been hard at work to make Novena a reality.

It takes many hands to build a product of this complexity, and we couldn’t do it without our dedicated and hard-working team at AQS. Above is a photo from the conference room where we did the T1 plastics review in Dongguan, China.

In this update, we’ll be discussing progress on the Casing, Electronics, Accessories, Firmware and the Community.

Case construction update
We’re very excited that the Novena cases we’re carrying around are now made of entirely production-process hardware — no more prototypes. A total of 10 injection molding tools, many of them family molds, have been opened so far; for comparison, a product like NeTV or chumby had perhaps 3-4 tools.

For those not familiar with injection molding, it’s a process whereby plastic is molded into a net shape from hot, high pressure liquid plastic forced into a cavity made out of hardened steel. The steel tool is a masterpiece of engineering in itself – it’s a water-cooled block weighing in at about a ton, capable of handling pressures found at the bottom of the Mariana Trench, and the internal surfaces are machined to tolerances better than the width of a human hair. And on top of that, it contains a clockwork of moving pieces, with dozens of ejector pins, sliders, lifters and parting surfaces coming apart and back together again smoothly over thousands of cycles. It’s amazing that these tools can be crafted in a couple of months, yet here we are.

With so much complexity involved, it’s no surprise that the tools require several iterations of refinement to get absolutely perfect. In tooling jargon, the iterations are referred to as T0, T1, T2…etc. You’re doing pretty good if you can go to full production at T2; we’re currently at T1 stage. The T1 plastics are 99% there, with a few issues relating to flow and knit lines, as well as a couple of spots where the plastic is warping during cooling or binding to the tool during ejection and causing some deformation. This manifests itself in a couple spots where the seams aren’t as tight as we’d like them to be in the case.

Most people have only seen the products of finished tooling, so I thought I’d share what a pretty typical T0 shot looks like, particularly for a large and complex tool like the Novena case base part. Test shots like this are typically done in colors that highlight defects, and/or with resin that’s available as scrap — hence the gray color. The final units will be black.

There’s a lot going on with this piece of plastic, so below is a visual guide to some of the artifacts.

In the green boxes are a set of “sink marks”. These happen when the opposite side of the plastic has a particularly thin or thick feature. These areas will cool faster or slower than the bulk of the plastic, causing these regions to pucker slightly, leaving what looks like a bit of a shadow. It’s particularly noticeable on mirror-finish parts. In this case, the sink marks are due to the plastic underneath the nut bosses of the Peek array being much thinner than the surrounding plastic. The fix to this problem was to slightly thicken that region, reducing the overall internal clearance of the case by 0.8mm. Fortunately, I had designed a little extra clearance margin into the case, so this was possible.

The red arrow points to a “knit line”. This is a region where plastic flow meets within the tool. Plastic, as it is injected into the cavity, will tend to flow from one or more gates, and where the molten plastic meets itself, it will leave a hairline scar. It’s often located at points of symmetry between the gates where the plastic is injected (on this tool, there are four gates located underneath the spot where the rubber feet go — gates are considered cosmetically unattractive and thus they are strategically placed to hide their location).

The white feathery artifacts, as indicated by the orange arrow, are flow marks. In this case, it seems plastic was cooling a bit too quickly within the tool, causing these streaks. This problem can often be fixed by adjusting the injection pressure, cycle length, and temperature. This tweaking is done using test shots on the molding machine, with one parameter at a time tweaked, shot after shot, until its optimum point is found. This process can sometimes take hundreds of shots, creating a small hill of scrap plastic as a by-product.

Most of these gross defects were fixed by T1, and the plastic now looks much closer to production-grade (and the color is now black). Below is the T1 shot in initial testing after transferring live hardware into the plastics.

There’s still a few issues around fit and finish. The rear lip is binding to the tool slightly during ejection, which is causing a little bit of deformation. Also, the panel we added at the last minute to accommodate oversized expansion boards isn’t mating as tightly as we’d like it to. But, despite all of these issues, the case feels much more solid than the prototypes, and the gas piston mechanism is finally consistent and really smooth.

Front bezel update
The front bezel of Novena’s case (not to be confused with the aluminum LCD bezel) has gone through a couple of changes since the campaign. When we closed funding, it had two outward-facing USB ports and one switch. Now, it has two switches and one outward-facing USB port and one inward-facing USB port.

One switch is for power — it goes directly to the power board and thus can be used to turn the system on and off even when the main board is fully powered down.

The other switch is wired to a user key press, and the intent is to facilitate Bluetooth association for keyboards that are being stupid. It seems some keyboards can take up to half a minute to cycle through something (presumably, it’s trying to be secure) before they connect. There are hacks you can do to bypass that, but they require you to run a script on the host, and the idea is that by pressing this button, users can trigger a convenience script to get past the utter folly of Bluetooth. This switch also doubles as a wake-up button for when the system is in suspend.

As for the USB ports, there are still four ports total in the design, but the configuration is now as follows:

  • Two higher-current capable ports on the right
  • One standard-current capable port on the front
  • One standard-current capable port facing toward the Peek Array
In other words, we face one USB port toward the inside of the machine; since half the fun of Novena is modding the hardware, we figure making a USB port available on the inside is at least as useful as making it available on the outside.

For those who don’t do hardware mods, it’s also a fine place to plug in small dongles that you generally keep permanently attached, such as a radio transceiver for your keyboard. It’s a little inconvenient to initially plug in the dongle, but keeping the radio transceiver dongle facing the inside helps protect it from damage when you throw your laptop into your travel bag.

We toyed with several iterations of speaker selection for Novena. One of the core ideas behind the design was to make speaker choice something every user would be encouraged to make on their own. One driving reason for this is that some people really listen to music on their laptop when they travel, but others simply rely upon the speaker for notification tones and would prefer to use headphones for media playback.

Physics dictates that high-quality sound requires a certain amount of space and mass, and so users who have a more relaxed fidelity requirement should be able to reclaim the space and weight that nicer speakers would require.

Kurt Mottweiler, the designer of the Heirloom model, had selected a nice but very compact off-the-shelf speaker, the PUI ASE06008MR-LW150-R, for the Heirloom. We evaluated it in the context of the standard Novena model and found that it fit well into the Peek Array and had acceptable fidelity, particularly for its size. And so, we adopted this as the standard offering for audio. However, it will be provided with a mounting kit that allows for easy removal, so users who need to reclaim the space the speakers take, or who want to go the other way and put in larger speakers, can do so with ease.

PVT2 Mainboard
The Novena mainboard went through a minor revision prior to mass production. The 21-point change list can be viewed here; the majority of the changes focused on replacing or updating components that were at risk of EOL. The two most significant changes from a design standpoint were the addition of an internal FPC header to connect to the front bezel cluster, and a dedicated hardware RTC module.

The internal FPC header was added to improve the routing of signals from the mainboard to the front bezel cluster. We had to run two USB ports, plus a smattering of GPIOs and power to the front bezel and the original scheme required multiple cables to execute the connection. The updated design condenses all of this into a single FPC, thereby simplifying the design and improving reliability.

A dedicated hardware RTC module was added because we couldn’t get the RTC built into the i.MX6 to perform well. It seems that the CPU simply had a higher leakage on the RTC than reported in the datasheet, and thus the lifetime of the RTC when the system was turned off was measured in, at best, minutes. We made the call that there was too much risk in continuing to develop with the on-board RTC and opted to include an external, dedicated RTC module that we knew would work. In order to increase compatibility with other i.MX6 platforms, we picked the same module used by the Solid-Run Hummingboard, the NXP PCF8523T/1.

The GPBB got a face-lift and a couple of small mods to make it more hacker-friendly.

I think everything looks a little bit nicer in matte black, so where it doesn’t compromise production integrity we opted to use a matte black soldermask with gold finish.

Beyond the obvious cosmetic change, the GPBB also features an adjustable I/O voltage for the digital outputs. The design change is still going through testing, but the concept is to allow, by default, a 5V/3.3V setting selectable in software. However, the lower voltage can also be adjusted to 2.5V or 1.8V by changing a single resistor (R12), which I also labelled “I/O VOLTAGE SET” and made a 1206 part so soldering novices can make the change themselves.
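As a back-of-the-envelope sketch of how a single resistor can set a rail, consider the standard adjustable-regulator feedback equation Vout = Vref × (1 + R_top/R_bot). The reference voltage and resistor values below are invented for illustration; they are not the actual GPBB design values:

```python
# Illustrative only: standard adjustable-regulator divider math,
# assuming a generic 0.8 V feedback reference (not the real GPBB part).

VREF = 0.8      # volts, typical for many adjustable regulators
R_TOP = 10_000  # fixed upper divider resistor, hypothetical

def vout(r_top, r_bot):
    return VREF * (1 + r_top / r_bot)

# Picking R_bot values that land on two of the rails mentioned above:
assert abs(vout(R_TOP, 8000) - 1.8) < 1e-9   # R_bot = 8.0k  -> 1.8 V
assert abs(vout(R_TOP, 3200) - 3.3) < 1e-9   # R_bot = 3.2k  -> 3.3 V
```

This is why a single 1206 resistor swap is enough to move the rail: only the divider ratio matters, so one leg can stay fixed on the board.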

In our experience, we’re finding an ever-increasing gulf between the voltage standards used by hobbyists and what we’re actually finding inside equipment we need to reverse engineer; thus, to accommodate both applications, a flexible voltage output selection mechanism was added to the GPBB.

Desktop Passthrough
The desktop case originally included just the Novena mainboard and the front panel breakout. It turns out this makes power management awkward, as the overall power management system for the case was designed with the assumption that there is a helper microcontroller managing a master cut-off switch.

Complexity is the devil, and it’s been hard enough to get the software going for even a single configuration. So, on net, we found it would be cheaper to introduce a new piece of hardware than to deal with multiple code configurations.

Therefore, desktop systems are now getting a power pass-through board as part of the offering. It’s a simple PCBA that contains just the STM32 controller and power switch of the full Senoko board. This allows us to use a consistent gross power management architecture across both the desktop and the laptop systems.

Of course, this is swatting a fly with a sledgehammer, but this sledgehammer costs as much as the flyswatter, and it’s inconvenient to carry both a fly swatter and a sledgehammer around. And so yes, we’re using a 32-bit ARM CPU to read the state of a pushbutton and flip a GPIO, and yes, this is all done using a full multi-threaded real-time operating system (ChibiOS) running underneath it. It feels a little silly, which is why we broke out some of the unused GPIOs, so there’s a chance some clever user might find an application for all that untapped power.

The battery pack for Novena is and will continue to be a wildcard in the stack. It’s our first time building a system with such a high-capacity battery, and working through all the shipping regulations to get these delivered to your front door will be a challenge.

Some countries are particularly difficult in terms of their regulations around the importation of lithium batteries. In the worst case, we’ll send your laptop with no battery inside, and we will ship separately, at our cost, an off-the-shelf battery pack from a vendor that specializes in RC battery packs (e.g. Hobby King). You will have the same battery we featured in the crowd funding campaign, but you’ll need to plug it in yourself. We consider this to be a safe fall-back solution, since Hobby King ships thousands of battery packs a day all around the world.

However, this did not stop us from developing a custom battery pack. As it’s very difficult to maintain a standing stock of battery packs (they need to be periodically conditioned), we’re including this custom battery pack only for backers of the campaign, provided their country of residence allows its import (and we won’t know for sure until we try). We did get UN38.3 certification for the custom battery pack, which in theory allows it to be shipped by air freight, but regulations around this are in flux. It seems countries and carriers keep on inventing new rules, particularly with all the paranoia about the potential use of lithium batteries as incendiary devices, and we don’t have the resources to keep up with the zeitgeist.

For those who live in countries that allow the importation of our custom pack, the new pack features a 5000mAh rated capacity (about 2x the capacity of the pack we featured in the crowd campaign, which had 3000mAh printed on the outside but actually delivered about 2500mAh in practice). In real-life testing, the custom pack is getting about 6-7 hours of runtime with minimal power management enabled. Also, since I got to specify the battery, I know this one has the correct protection circuitry built into it, and I know the provenance of its cells, so I have a little more confidence in its long-term performance and stability.
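As a rough sanity check on those runtime numbers (the pack voltage isn’t stated in the post, so the 11.1 V nominal 3-cell figure below is purely an illustrative assumption):

```python
# Illustrative arithmetic only: capacity and runtime come from the text,
# but the 11.1 V nominal pack voltage is an assumed 3-cell value.

capacity_mah = 5000
pack_voltage = 11.1                               # volts, assumed
energy_wh = capacity_mah / 1000 * pack_voltage    # 55.5 Wh

for runtime_h in (6, 7):
    avg_power_w = energy_wh / runtime_h
    print(f"{runtime_h} h runtime implies ~{avg_power_w:.1f} W average draw")
```

Under that assumption, a 6-7 hour runtime works out to roughly 8-9 W of average system draw, which is a plausible figure for a fanless ARM laptop with the display on.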

Of course, it’s a whole different matter convincing the lawmakers, customs authorities, and regulatory authorities of those facts…but fear not, even if they won’t accept this custom limited-edition battery, you will still get the original off-the-shelf pack promised in the campaign.

Hard Drive
In the campaign, we referenced providing 240GiB Intel 530 (or equivalent) and 480GiB Intel 720 drives for the laptop and Heirloom models, respectively. We left the spec slightly ambiguous because the SSD market moves quickly, and the best drive last February, when we drew up the spec, would likely differ from the best drive we could get in October, when we actually do the purchasing.

After doing some research, it’s our belief that the best equivalent drives today are the 240GiB Samsung 840 EVO (for the laptop model) and the 512GiB Samsung 850 Pro (for the Heirloom). We’ve been personally using the 840 EVO in our units for several months now, and they have performed admirably. An important metric for us is how well the drives hold up under unexpected power outages — this happens fairly often, for example, when you’re doing development work on the power management subsystem. Some hard drives, such as the SanDisk Extreme II, fail quite reliably (how’s that for an oxymoron) after a few unexpected power-down cycles. We’ve also had bad luck with OCZ and Crucial drives in the past.

Intel drives have generally been pretty good, except that Intel stopped doing their own controllers for the 520 and 530 series and instead started using SandForce controllers, which in my opinion removes any potential advantage they could have from being both the maker of the memory chips and the maker of the controller. The details of how flash memory performs, degrades, and yields are extremely process-specific, and at least in my fantasy world a company that produces flash + controller combinations should have an advantage over companies that have to mix-and-match multiple flash types with a semi-generic controller. Furthermore, while the Intel 720 does use their home-grown controller, it’s a power hog (over 5W active power) and requires a 12V rail, and is thus not suitable for use in laptops.

The 840 EVO series comes with a reasonable 3-year warranty, and it’s held up well in one site’s write endurance test. After using mine for several months, I’ve had no complaints about it, and I think it’s a solid everyday-use drive for firmware development. We also have a web server that hosts most of the media content for this and a couple other blogs, wikis, and bug tracking tools, and it’s a Novena running off an 840 EVO.

For the premium Heirloom users, we’re very excited to build in the 850 PRO series. This drive comes with a serious warranty that matches the “heirloom” name — 10 years. The technology behind their ability to make such a strong reliability claim is even more remarkable. The drive uses a technology that Samsung has branded “V-NAND”, which I consider to be the first bona-fide production-grade 3D transistor technology. Intel claims they make 3D transistors, but that’s just marketing hype — yes, the gate region has a raised surface topology, but you still only get a single layer of devices. From a design standpoint you’re still working with a 2D graph of devices. It’s like calling Braille a revolutionary 3D printing technology. They should have stuck with what I consider to be the “original” (and more descriptive/less misleading) name, FinFET, because by calling these 3D transistors I don’t know what they’re going to call actual 3D arrays of transistors, if they ever get around to making them.

Chipworks did an excellent initial analysis of Samsung’s V-NAND technology, and you can see from the SEM image they published that V-NAND isn’t about stacking just a couple of transistors; Samsung is shipping a full-on 38-layer sandwich:

This isn’t some lame Intel-style bra-padding exercise. This is full-on process technology bad-assery at its finest. This is Neo decoding the Matrix. This is Mal shooting first. It’s a stack of almost 40 individual, active transistors in a single spot. It’s a game changer, and it’s not vaporware. Heirloom backers will get a laptop with over 4 trillion of these transistors packed inside, and it will be awesome.

Sorry, I get excited about these kinds of things.

From the software side, we’re working on finalizing the kernel, bootloader, and distro selection, as well as deciding what you’ll see when you first power on Novena.

Marek Vasut is working on getting Novena supported in mainline U-Boot, which involves a surprising number of patches. Few ARM boards support as much RAM as Novena, so some support patches were needed first. Full support is in progress, including USB and video.

We intend to ship with a mainline kernel, but interestingly, Jon Nettleton has a 3.14 long-term-support kernel that is a hybrid of Freescale’s chip-specific patches combined with many backported upstream patches. This kernel has better support for thermal events and power management, so users may be interested in using it over the upstream one.

While we prefer to go with an upstream kernel, and to get our changes pushed into mainline, other users might find this kernel’s interesting blend of community and vendor code satisfies their needs better.

The kernel that we’ll use has most of the important parts upstreamed, including the audio chip, which should be part of the 3.17 kernel. We’re still carrying a few local patches for various reasons, ranging from specialized hacks to experimental features, features that are not yet ready to push upstream, or features that rely on other changes that are not yet upstream.

For example, the display system on a laptop is very different from what is usually found on an ARM device, and we have local patches to fix this up. In most ARM devices, the screen is fixed during boot and it isn’t possible to hot-swap displays at runtime. Novena supports two different displays at once, and allows you to plug in an HDMI monitor without needing to reboot.

Speaking of displays, the community has been hard at work on an accelerated 2D Xorg DDX driver. 2D acceleration is important because most of the time users are interacting with the desktop, and 2D hardware uses significantly less power than 3D hardware. On a desktop machine, the 3D chip is used to composite the desktop. On Novena, which doesn’t have a fan and has a small overall active power footprint, saving power is very important. By taking advantage of the 2D-only hardware, we save power while getting a smoother experience. A few bugs remain in the 2D driver, but it should be ready by the time we ship.

There is a 3D driver in progress as well. It’s able to run Quake 3 on the framebuffer, but still has to be integrated into an OpenGL ES driver before it works under X.

We’ve also been working on getting a root filesystem set up. This includes deciding which packages are installed, and customizing the list of software repositories. We want to add a repository for our kernel and bootloader, as well as for various packages which haven’t made it upstream, such as an imx6 version of irqbalance. This will allow us to provide you with updated kernels as we add more support.

Finally, the question remains of what you’ll see when you first power it up. In Linux, it’s not at all common to have a first-boot setup screen where you create your user, set the time, and configure the network. That’s common in Windows and OS X, which come preinstalled, but under Linux that’s generally taken care of by the installer. As we mull the topic, we’re torn between creating a good desktop-style experience vs. making a practical embedded developer’s experience. A desktop-style experience would ship a blank slate and prompt the user to create an account via a locally attached keyboard and monitor; however, embedded developers may never plug a monitor into their device, and instead prefer to connect via console or ssh, thereby requiring a default username, password and hostname. Either way, we want to create just a single firmware image common across all platforms, and so special-casing releases for a particular target is the least desired solution. If you have an opinion, please share it in our user forum.

We’re pleased to see that even before shipping, we have a few alpha developers who continue to be very active. In addition to Jon Nettleton (gfx), Russell King (also gfx), and Marek Vasut (u-boot), we have a couple of other alpha users’ efforts we’d like to highlight in this update.

MyriadRF continues to move forward with their SDR solution for Novena. About three weeks ago they sent us pre-production boards, and they are looking good. We’ve placed a binding order for their boards, and things look on track to get them into our shop by November, in time for integration with the first desktop units we’ll be shipping. MyriadRF is working on a fun demo for their hardware, but I’ll save that story for them to tell :)

The CrypTech group has also been developing applications with the help of Novena. The CrypTech project is developing a BSD / CC BY-SA 3.0 licensed reference design and prototype examples of a Hardware Security Module. Their hope is to create a widely reviewed, designed-for-crypto device that anyone can compose for their application and easily build with their own trusted supply chain. They are using Novena to prototype elements of their design.

The expansion board highlighted above is a prototype noise source based on avalanche noise from the transistor that can be seen in the middle of the board. CrypTech uses that noise to generate entropy in the FPGA. The entropy is then combined with entropy generated by ring oscillators in the FPGA and mixed using e.g. SHA-512 to generate seeds. The seeds are then used to initialize the ChaCha stream cipher, ultimately resulting in a stream of cryptographically sound random values. The result is a high-performance, state-of-the-art random number generator coprocessor. This of course represents just a first draft; since the implementation is done in an FPGA, the CrypTech team will continue to evolve their methodology and experiment with alternative methods to generate a robust stream of random numbers.
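The conditioning pipeline described above can be sketched in software terms. In this illustrative sketch, `os.urandom` stands in for the two hardware entropy sources, and the way the SHA-512 digest is split into a key and nonce is an invented framing, not CrypTech’s actual design:

```python
# Rough sketch of the described pipeline: two raw entropy sources are
# mixed with SHA-512 to produce a seed that would key a ChaCha instance.

import hashlib
import os

def sample_avalanche(n):          # stand-in for the avalanche noise source
    return os.urandom(n)

def sample_ring_oscillators(n):   # stand-in for the FPGA ring oscillators
    return os.urandom(n)

def make_seed():
    """Mix both raw sources through SHA-512; in the real design the
    64-byte digest would seed a ChaCha stream cipher in the FPGA."""
    h = hashlib.sha512()
    h.update(sample_avalanche(64))
    h.update(sample_ring_oscillators(64))
    return h.digest()

seed = make_seed()
assert len(seed) == 64
# One plausible (illustrative) keying split for ChaCha20:
key, nonce = seed[:32], seed[32:44]
assert len(key) == 32 and len(nonce) == 12
```

The design rationale the sketch reflects: even if one raw source is weak or biased, hashing both together means the seed is at least as unpredictable as the stronger source.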

Thanks to the CrypTech team for sharing a sneak-peek of their baby!

Looking Forward

From our current progress, it seems we’re still largely on track to release an initial shipment of bare boards to early backers in late November, and have an initial shipment of desktop units ready to go by late December. We’ll be shipping the units in tranches, so some backers will receive units before others.

Our shipping algorithm is roughly a combination of how early someone backed the campaign, modified by which region of the world you’re in. As every country has different customs issues, we will probably ship just one or two items to each unique country first to uncover any customs or regulatory problems, before attempting to ship in bulk. This means backers outside the United States (where Crowd Supply’s fulfillment center is located) will be receiving their units a bit later than those within the US.
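That heuristic is easy to express in code. This is a hypothetical sketch of the described ordering, with invented function names and a made-up backer list:

```python
# Hypothetical sketch of the shipping-order heuristic described above:
# order primarily by backing date, but only let the first unit or two per
# country go out in the "probe" wave; the rest wait for the bulk wave.

def shipping_waves(backers, probes_per_country=1):
    """backers: list of (backer_id, backed_order, country), where
    backed_order is 0 for the earliest backer. Returns (probe, bulk)."""
    by_date = sorted(backers, key=lambda b: b[1])
    shipped = {}
    probe, bulk = [], []
    for backer in by_date:
        country = backer[2]
        if shipped.get(country, 0) < probes_per_country:
            shipped[country] = shipped.get(country, 0) + 1
            probe.append(backer)   # early unit to scout customs issues
        else:
            bulk.append(backer)    # ships once the probe clears
    return probe, bulk

backers = [("a", 0, "US"), ("b", 1, "DE"), ("c", 2, "US"),
           ("d", 3, "US"), ("e", 4, "DE")]
probe, bulk = shipping_waves(backers)
assert [b[0] for b in probe] == ["a", "b"]
assert [b[0] for b in bulk] == ["c", "d", "e"]
```

The per-country cap is what makes the non-US delay fall out naturally: a country’s bulk wave can only start once its probe shipment has cleared customs.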

And as a final note, if there’s one thing we’ve learned in the hardware business, it’s that you can’t count your chickens before they’ve hatched. Good progress to date doesn’t mean we’ve got an easy path to finished units. We still have a lot of hills to climb and rivers to cross, but at least for now we seem to be on track.

Thanks again to all of our Novena backers, we’re looking forward to getting hardware into your hands soon!

-bunnie & xobs

by bunnie at October 03, 2014 06:15 PM

    Free Electrons

    Atmel SAMA5D4 support in the mainline Linux kernel

    Atmel announced its new ARM Cortex-A5-based SoC, the SAMA5D4, on October 1. Compared to the SAMA5D3, Atmel’s previous Cortex-A5 SoC, this new version brings an L2 cache, NEON, a slightly different clock tree, a hardware video decoder, and TrustZone support.

    Free Electrons engineers have been working for several months with Atmel engineers to prepare and submit support for this new SoC to the mainline Linux kernel. We actually submitted the patches on September 11th, almost a month before the official release of the new chip! This means that most of the support for this new SoC will already be part of the upcoming 3.18 kernel release. Meanwhile, it is already possible to test it by using the linux-next repository.

    There are, however, a few missing pieces needed to support all aspects of the chip:

    • A few patches are needed to get proper NAND flash controller support.
    • The DMA controller is brand new in this SAMA5D4 SoC, and the DMA controller driver has not yet been merged, even though the patches have been posted a long time ago, and are currently in their sixth iteration.
    • Display support, through a DRM/KMS driver, is also being reviewed. The driver, written by Free Electrons engineer Boris Brezillon, was initially designed for the sam9x5 and sama5d3, but will be compatible with the sama5d4 as well. The patch series is currently in its seventh iteration.

    The last big missing part is support for non-secure mode: for the moment, the system always runs in secure mode. Running the kernel in non-secure mode will require some more work but an initial version will probably be pushed during the next development cycle.

    Besides this work on SAMA5D4 support ahead of its public release, Free Electrons is also doing a lot of maintenance work on all the Atmel ARM platforms in the Linux kernel: migration to the Device Tree, to the clock framework, to several other new subsystems, etc. See the summary of our kernel contributions to 3.16, 3.15 and 3.14.

    Through this work, the Free Electrons engineering team has very deep knowledge of the Linux support for Atmel ARM processors. Do not hesitate to contact us if you need help bringing up the bootloader or kernel on your custom Atmel ARM platform! It is also worth mentioning that Free Electrons is part of the Atmel partner ecosystem.

    by Alexandre Belloni at October 03, 2014 11:31 AM

    September 29, 2014

    Village Telco

    SECN 2.0 Final Released

    It’s been a while coming, but we’re happy to announce the general release of the SECN 2.0 firmware.  This firmware is available for the Mesh Potato 2.0 and a range of TP-Link and Ubiquiti devices.  We posted details with the RC1 release of the software, but here is a comprehensive list of features:

    • OpenWrt Attitude Adjustment:  SECN 2.0 is based on the final release of OpenWrt Attitude Adjustment.  We will continue to tie SECN releases as closely as possible to OpenWrt releases in order to maximise device compatibility.
    • Batman-adv:  The SECN firmware now runs the 2013.4 release of batman-adv which includes essential features such as Bridge Loop Avoidance.
    • WAN Support:  SECN 2.0 now offers WAN features that allow the device to configure an upstream connection via WiFi, USB Modem or Mesh.
    • Configurable Ethernet:  Ethernet ports can be individually configured for WAN or LAN function.
    • Timezone setting
    • WiFi Country Code setting
    • Web page for Firmware Upgrade

    The SECN 2.0 firmware is now available for download.  Please check all downloaded files against their MD5 sums prior to flashing your device.  If you have any questions about upgrading your firmware, please don’t hesitate to ask in the development community.
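    Checking the MD5 sum is a one-liner with md5sum(1) on the command line; for completeness, here is the same check as a small script (the file name and published sum are placeholders):

```python
import hashlib

def md5sum(path, chunk_size=1 << 16):
    """Compute the MD5 hex digest of a firmware image, reading in
    chunks so large files do not need to fit in memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against the sum published next to the download, e.g.:
# if md5sum("secn-2.0-mp2.bin") != published_sum: do not flash!
```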

    Also available very soon will be an SECN 2.0 firmware for the MP1 which will allow full compatibility among first generation Mesh Potatoes and all current generation devices including the MP2 Basic, MP2 Phone, and TP-Link/Ubiquiti devices.

    This final release of the 2.0 SECN firmware wouldn’t have been possible without countless hours of tweaking, testing and innovation by Terry Gillett.  Thanks too to Keith Williamson and Elektra for invaluable support.

    Upcoming Firmware

    SECN 2.1
    Firmware for the MP2 Phone is currently in alpha development.  The 2.1 release of the SECN firmware will be the first release to fully support the MP2 Phone.
    SECN 2.x
    Successive point releases of the 2.0 firmware will include support for:
    • a softphone directory web page which will allow for local registration and management of SIP-enabled devices to a master Mesh Potato allowing for small-scale local directory management and services for VoIP
    • local instant messaging support via XMPP through the integration of the Prosody jabber server
    • integration of a Twitter Bootstrap-based UI which will make for a faster and more intuitive configuration interface.
    SECN 3.0
    The 3.0 release of the SECN firmware will be coordinated with the Barrier Breaker release of OpenWrt.  It will also include the most recent updates to the Batman-adv mesh protocol.

    by steve at September 29, 2014 02:30 PM

    Richard Hughes, ColorHug

    Shipping larger application icons in Fedora 22

    In GNOME 3.14 we show any valid application in the software center with an application icon of 32×32 or larger. Currently, a 32×32 icon has to be padded with 16 pixels of whitespace on all four edges, and also has to be scaled 2× to match other UI elements on HiDPI screens. This looks very fuzzy and out of place, and lowers the quality of an otherwise beautiful installation experience.

    For GNOME 3.16 (Fedora 22) we are planning to increase the minimum icon size to 48×48, with recommended installed sizes of 16×16, 24×24, 32×32, 48×48 and 256×256 (or SVG in some cases). Modern desktop applications typically ship multiple sizes of icons in known locations, and it’s very much the minority of applications that only ship one small icon.
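    A quick way to see which sizes an application already installs is to scan the size directories of the freedesktop hicolor theme layout. This is a hypothetical helper, not part of the AppStream tooling; adjust the root path for your distribution:

```python
import os

def installed_icon_sizes(icon_name, root="/usr/share/icons/hicolor"):
    """Return the size directories (e.g. '48x48', '256x256',
    'scalable') under the hicolor theme layout that contain the
    named application icon."""
    sizes = []
    if not os.path.isdir(root):
        return sizes
    for entry in sorted(os.listdir(root)):
        for ext in (".png", ".svg"):
            icon = os.path.join(root, entry, "apps", icon_name + ext)
            if os.path.exists(icon):
                sizes.append(entry)
                break
    return sizes
```

    An application that only returns something like ['32x32'] from a scan of this kind is exactly the case that will fall below the new minimum.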

    Soon I’m going to start nagging upstream maintainers to install larger icons than 32×32. If you’re re-doing the icon, please generate a 256×256 or 64×64 icon with alpha channel, as the latter will probably be the minimum size for F23 and beyond.

    At the end of November I’ll change the minimum icon size in the AppStream generator used for Fedora so that applications not fixed will be dropped from the metadata. You can of course install the applications manually on the command line, but they won’t be visible in the software center until they are installed.

    If you’re unclear on what needs to be done in order to be listed in the AppStream metadata, refer to the guidelines or send me email.

    by hughsie at September 29, 2014 11:59 AM

    September 28, 2014

    Bunnie Studios

    Name that Ware, September 2014

    The Ware for September 2014 is shown below.

    This month’s ware has a little bit of a story behind it, so I’ll give you this much to set up the story: it’s a USB protocol analyzer of some sort. The question is, what make and model?

    Now for the story.

    Name that Ware is typically about things that cross my desk and get opened for one reason or another — sometimes simply curiosity, sometimes more than that. This is a case where it was more than curiosity.

    Turns out this analyzer broke at an inopportune moment. Xobs was working on a high-priority reverse engineering task that required some USB analysis. Unfortunately, when we plugged in the analyzer, it just reported a series of connect/disconnect events but no data. We initially suspected a driver issue, but after connecting the analyzer to a previously known good configuration, we suspected a hardware failure.

    So, it was time to take the unit apart and figure out how to repair it. Of course, this is a closed-source device (still eagerly anticipating my OpenVizsla) so there are no schematics available. No worries; you’ll often hear me make the claim that it’s impossible to close hardware because you can just read a circuit board and figure out what’s going on. This particular ware was certainly amenable to that, as the construction is a four-layer board with a relatively simple assortment of chips on one side only.

    The USB analysis front-end consists of three major types of chip, outlined below.

    The chips in the red boxes are a pair of LMH6559 1.75GHz bandwidth amplifiers. Fortunately the top marking, “B05A”, was resolvable with a Google search plus a few educated guesses as to the function of the chips. The chip in the yellow box is a Fairchild USB1T11A, a full-speed USB transceiver. And the chip in the green box is a Microchip (formerly SMSC) USB3300, a high-speed USB to ULPI transceiver. A casual read of the four-layer PCB indicates that the USB signal is passed through from the B-type port to the A-type port, with the LMH6559s acting as buffers to reduce loading, plus a resistor network of some type to isolate the USB1T11A. We figured that the most likely cause of the issue was electrical overstress on the LMH6559s, since they lie naked to the USB port, and quite possibly we had shorted the data wires to a high voltage at some point, thereby damaging the buffers. We did a couple of quick tests, however, and became semi-convinced that these were actually working just fine.

    Most likely the issue wasn’t the USB1T11A; it’s well-isolated. So the next candidate was the USB3300. Fortunately these were in stock at Element14 in Singapore and just a few bucks each, so we could order at 4:30PM and have it delivered the next morning to our office for a very nominal delivery fee.

    After replacing this chip, I was pleased to find that the unit came back alive again. I have to say, I’ve found the hot-air rework skills I learned at Dangerous Prototypes’ hacker camp to be incredibly useful; this level of rework is now child’s play for me. I’m not quite sure how we damaged the USB3300 chip in the first place, but one possibility is that someone tried plugging something into the mini-DIN connector on the analyzer that wasn’t meant to be plugged into the device.

    And so, despite this being a closed-source device, it ended up being repairable, although it would have been much more convenient and required a lot less guesswork to fix it had schematics been made available.

    Significantly, the maker of this box was acutely aware of the fact that hardware is difficult to close, and attempted to secure their IP by scrubbing the original markings off of the FPGA. An inspection under the microscope shows that the surface texture of the top part of the chip does not match the edges, a clear sign of reprocessing.

    For what it’s worth, this is the sort of thing you develop an eye for when looking for fake chips, as often times they are remarked, but in this case the remarking was done as a security measure. The removal of the original manufacturer’s markings isn’t a huge impediment, though; if I cared enough, there are several ways I could try to guess what the chip was. Given the general age of the box, it’s probably either a Spartan 3 or a Cyclone II of some type. Based on these guesses, I could map out the power pin arrangement and run a cross-check against the datasheets of these parts, and see if there’s a match. Come to think of it, if someone actually does this for Name that Ware based on just these photos, I’ll declare them the winner over the person who only guesses the make and model of the analyzer. Or, I could just probe the SPI ROM connected to the FPGA and observe the bitstream, and probably figure out which architecture, part and density it was from there.

    But wait, there’s more to the story!

    It turns out the project was pretty urgent, and we didn’t want to wait until the next day for the spare parts to arrive. Fortunately, my Tek MDO4000B has the really useful ability to crunch through analog waveforms and apply various PHY-level rules to figure out what’s going on. So, on any given analog waveform, you can tell the scope to try rules for things like I2C, SPI, UART, USB, etc. and if there’s a match it will pull out the packets and give you a symbolic analysis of the waveform. Very handy for reverse engineering! Or, in this case, we hooked up D+ to channel 1 and D- to channel 2, and set the “bus” trace to USB full-speed, and voila — protocol analysis in a pinch.

    Above is a screenshot of what the analysis looks like. The top quarter of the UI is the entire capture buffer. The middle of the window is a 200x zoom of the top waveform, showing the analog representation of the D+ and D- lines as the cyan and yellow traces. And just below the yellow trace, you will see the “B1” trace, which is the scope’s interpretation of the analog data as a USB packet, starting with a SYNC. Using this tool, we were able to scroll left and right through the entire capture buffer and read out transactions and data. While this isn’t a practical way to capture huge swathes of data, it was more than enough for us to figure out at what point the system was having trouble, and we could move on with our work.

    While the Tek scope’s analysis abilities made fixing our USB analyzer a moot point, I figured I’d at least get a “Name that Ware” post out of it.

    by bunnie at September 28, 2014 07:17 PM

    Winner, Name that Ware August 2014

    The ware for August 2014 was a Dell PowerEdge PE1650 Raid controller. Thanks again to Oren Hazi for contributing the ware! Also, the winner is Bryce C, for being the first to correctly identify make and model. Congrats, email me for your prize!

    by bunnie at September 28, 2014 07:17 PM

    Village Telco

    Introducing Wildernets

    The following is a guest post from Keith Williamson.

    Wildernets is an alternative firmware for the MP02 that aims to widen the MP02’s customer base by making initial configuration much easier and adding new features such as Instant Messaging support. So even if you are comfortable operating a SECN 2.X network (as most on this forum are), you may find some of the Wildernets features of interest.

    Wildernets is based on the latest version of SECN 2.1 but simplifies both initial and ongoing configuration, making it possible for a user with few technical skills to get the network up and running quickly. In addition to SIP and POTS telephony, Wildernets supports Instant Messaging and serving local Web content. Wildernets firmware is complementary to SECN 2.X firmware; it targets a slightly different user base than the traditional VillageTelco user, though there is certainly a lot of overlap.

    Deployment assumptions for SECN firmware have generally included an entrepreneur with the technical “chops” to roll out the network, and a user base that may have never had access to even basic telephony services. As we added support for softphone client software on smartphones, tablets, and laptops, the SECN network started to become useful for limited emergency communications and for small groups of users who need communications services outside the range of traditional PSTN and cellular coverage. These users are likely to own smartphones, tablets, and laptops that become much less useful in those environments. With a Wildernets network, these devices become very useful again for calling or instant messaging other people on the network and browsing local Web content. Generally, these users already know the basics of downloading, installing, and using Internet applications on their devices, but likely don’t know how to set up networks with IP addresses, netmasks, gateways and application services such as telephony and IM servers. Wildernets’ goal is to remove that impediment.

    For more information check out the Wildernets project page.



    by Keith Williamson at September 28, 2014 03:55 PM

    September 25, 2014

    Richard Hughes, ColorHug

    AppStream Progress in September

    Last time I blogged about AppStream I announced that over 25% of applications in Fedora 21 were shipping the AppData files we needed. I’m pleased to say in the last two months we’ve gone up to 45% of applications in Fedora 22. This is thanks to a lot of work from Ryan and his friends, writing descriptions, taking screenshots and then including them in the fedora-appstream staging repo.

    So fedora-appstream doesn’t sound very upstream or awesome. This week I’ve sent another 39 emails, and opened another 42 bugs (requiring 17 new bugzilla/trac/random-forum accounts to be opened). Every single file in the fedora-appstream staging repo has been sent upstream in one form or another, and I’ve been adding an XML comment to each one for a rough audit log of what happened where.

    Some have already been accepted upstream and we’re waiting for a new tarball release; when that happens we’ll delete the file from fedora-appstream. Some upstreams are really dead, and have no upstream maintainer, so they’ll probably languish in fedora-appstream until for some reason the package FTBFS and gets removed from the distribution. If the package gets removed, the AppData file will also be deleted from fedora-appstream.

    Also, in the process I’ve found lots of applications which are shipping AppData files upstream, but for one reason or another are not being installed in the binary rpm file. If I had to tell you that you were talking nonsense in an email this week, I apologize. For my sins I’ve updated over a dozen packages to the latest versions so the AppData file is included, and fixed quite a few more.

    Fedora 22 is on track to be the first release that mandates AppData files for applications. If upstream doesn’t ship one, we can either add it in the Fedora package, or in fedora-appstream.

    by hughsie at September 25, 2014 01:15 PM

    September 24, 2014

    September 23, 2014

    Altus Metrum

    keithp's rocket blog: easymega-118k

    Neil Anderson Flies EasyMega to 118k' At BALLS 23

    Altus Metrum would like to congratulate Neil Anderson and Steve Cutonilli on the success of their two-stage rocket, “A Money Pit”, which flew on Saturday the 20th of September on an N5800 booster followed by an N1560 sustainer.

    “A Money Pit” used two Altus Metrum EasyMega flight computers in the sustainer, each one configured to light the sustainer motor and deploy the drogue and main parachutes.

    Safely Staged After a 7 Second Coast

    After the booster burned out, the rocket coasted for 7 seconds to 250m/s, at which point EasyMega was programmed to light the sustainer. As a back-up, a timer was set to light the sustainer 8 seconds after the booster burn-out. In both cases, the sustainer ignition would have been inhibited if the rocket had tilted more than 20° from vertical. During the coast, the rocket flew from 736m to 3151m, with speed going from 422m/s down to 250m/s.
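    The ignition decision described above can be sketched as follows. This is an illustrative model, not the actual EasyMega firmware, and the parameter names are invented:

```python
def sustainer_ignition_ok(tilt_deg, speed_ms, coast_time_s,
                          light_speed_ms=250.0, backup_timer_s=8.0,
                          max_tilt_deg=20.0):
    """True when the sustainer may be lit: either the programmed
    ignition point (speed has decayed to the target) or the backup
    timer has elapsed, and only if the rocket is still within 20
    degrees of vertical."""
    if tilt_deg > max_tilt_deg:
        return False  # inhibit ignition regardless of timing
    primary = speed_ms <= light_speed_ms
    backup = coast_time_s >= backup_timer_s
    return primary or backup
```

    The key safety property is that the tilt check gates both paths, so neither the programmed condition nor the backup timer can light the motor on a rocket headed off-vertical.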

    This long coast, made safe by EasyMega's quaternion-based tilt sensor, allowed this flight to reach a spectacular altitude.

    Apogee Determined by Accelerometer

    Above 100k', the MS5607 barometric sensor is out of range. However, as you can see from the graph, the barometric sensor continued to return useful data. EasyMega doesn't expect that to work, and automatically switched to accelerometer-only apogee determination mode.

    Because off-vertical flight will under-estimate the time to apogee when using only an accelerometer, the EasyMega boards were programmed to wait for 10 seconds after apogee before deploying the drogue parachute. That turned out to be just about right; the graph shows the barometric data leveling off right as the apogee charges fired.
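    A toy version of accelerometer-only apogee detection (not the EasyMega implementation) integrates gravity-corrected axial acceleration until the vertical-speed estimate crosses zero, then adds the configured delay:

```python
def drogue_fire_time(times, accels, v0, delay_s=10.0):
    """times: sample timestamps (s); accels: gravity-corrected
    axial acceleration samples (m/s^2); v0: vertical speed (m/s)
    at times[0]. Returns the time to fire the drogue charge, or
    None if apogee is never reached in the data."""
    v = v0
    for i in range(1, len(times)):
        v += accels[i] * (times[i] - times[i - 1])
        if v <= 0.0:          # estimated apogee
            return times[i] + delay_s
    return None
```

    Off-vertical flight makes this estimate fire early, which is exactly why a conservative post-apogee delay is added before deployment.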

    Fast Descent in Thin Air

    Even with the drogue safely fired at apogee, the descent rate rose to over 200m/s in the rarefied air of the upper atmosphere. With increasing air density, the airframe slowed to 30m/s when the main parachute charge fired at 2000m. The larger main chute slowed the descent further to about 16m/s for landing.

    September 23, 2014 04:33 AM

    September 22, 2014

    Free Electrons

    2014 Q3 newsletter

    This article was published in our quarterly newsletter.

    Free Electrons is happy to share some news about the latest training and contribution activities of the company.

    Kernel contributions

    Since our last newsletter, our engineering team continued to make significant contributions to the Linux kernel, especially in the area of supporting ARM processors and platforms:

    • 218 patches from Free Electrons were merged into Linux 3.15, making Free Electrons the 12th contributing company for this release by number of patches. See our blog post.
    • 388 patches from Free Electrons were merged into Linux 3.16, making Free Electrons the 7th contributing company for this release, by number of patches. See our blog post.
    • For the upcoming 3.17 release, we already have 146 patches merged, and we have a lot more work being done for future kernel releases.

    The major areas of our contributions were:

    • The addition of a ubiblk driver, which allows traditional block filesystems to be used on top of UBI devices, and therefore on NAND flash storage. Only read-only support is available, but it already makes it possible to use the super-efficient SquashFS filesystem on top of NAND flash in a safe way.
    • Another major addition is support for the new Marvell Armada 375 and Armada 38x processors. In just two releases (3.15 and 3.16) we pushed almost the entire support for these new processors. The network driver for the Armada 375 is one missing piece, coming in 3.17.
    • Our maintenance work on the Atmel AT91 and SAMA5 processors has continued, with more conversion to the Device Tree, the common clock framework, and other modern kernel mechanisms. We have also developed the DRM/KMS (graphics) driver for the SAMA5D3 SoC, which has already been posted and should hopefully be merged soon.
    • Our work to support the Marvell Berlin processor has started to be merged in 3.16. This processor is used in various TVs, set-top boxes or devices like the Google Chromecast. Basic support was merged including Device Trees, clock drivers, pin-muxing driver, GPIO and SDHCI support. AHCI support will be in 3.17, and USB and network support should be in 3.18.
    • Additional work was done on support for Allwinner ARM SoCs, especially the A31 processor: SPI and I2C support, drivers for the P2WI bus and the PRCM controller, and support for USB.

    We now have broad experience in writing kernel drivers and getting code merged into the mainline tree. Do not hesitate to contact us if you need help to develop Linux kernel drivers, or to support a new board or processor.

    Buildroot contributions

    Our involvement in the Buildroot project, a popular embedded Linux build system, continues. We merged 159 patches into the 2014.05 release of the project (out of 1293 patches in total), and 129 patches into the 2014.08 release (out of 1353 in total). Moreover, our engineer Thomas Petazzoni regularly serves as interim maintainer of the project when the official maintainer, Peter Korsgaard, is not available. Some of the major features we contributed: major improvements to Python 3 support, the addition of EFI bootloaders, and the addition of support for the musl C library.

    Regular embedded Linux projects

    Of course, we also conducted embedded Linux development and boot time optimization projects for various embedded system makers, with less visible impact on community projects. However, we will try to share generic technical experience from such projects through future blog posts.

    New training course: Yocto Project and OpenEmbedded

    A large number of embedded Linux projects use embedded Linux build systems to integrate the various software components of the system into a working root filesystem image. Among the solutions available to achieve this, the Yocto Project and OpenEmbedded are very popular.

    We have therefore launched a new 3-day Yocto Project and OpenEmbedded training course to help engineers and companies who are using, or are interested in using, these solutions for their embedded Linux projects. Starting from the basics of understanding the core principles of Yocto, the training course goes into the details of writing package recipes, integrating support for a board into Yocto, creating custom images, and more.

    The detailed agenda of the training course is available. You can order this training course at your location, or participate in the first public session, organized on November 18-20 in France.

    Embedded Linux training course updated

    The embedded Linux ecosystem is evolving very quickly, and therefore we are continuously updating our training courses to match the latest developments. As part of this effort, we have recently conducted a major update of our Embedded Linux course: the hardware platform used for the practical labs has been changed to the popular and very interesting Atmel SAMA5D3 Xplained, and many practical labs have been improved to provide a more useful learning experience. See our blog post for more details.

    Mailing list for training participants

    We have launched a new service for the participants in our training sessions: a mailing list dedicated to them, through which they can ask additional questions after the course, share their experience, and get in touch with other training participants and Free Electrons engineers. Of course, all Free Electrons engineers are on the mailing list and participate in the discussions. Another useful service offered by our training courses!

    See more details.

    Conferences: ELC, ELCE, Kernel Recipes

    The Free Electrons engineering team will participate in the Embedded Linux Conference Europe and Linux Plumbers next month in Düsseldorf, Germany. Several Free Electrons engineers will also be giving talks during ELCE.

    In addition, Thomas will participate in the Buildroot Developers Day, taking place right before the Embedded Linux Conference Europe in Düsseldorf.

    See also our blog post about ELCE for more details.

    Maxime Ripard and Michael Opdenacker will participate in the Kernel Recipes 2014 conference, on September 25-26 in Paris. Maxime will be giving his Allwinner kernel talk at this conference. See our blog post for more details.

    Last but not least, we have recently published the videos of a number of talks from the previous Embedded Linux Conference, held earlier this year in San Jose. A lot of interesting material about embedded Linux! Check out our blog post for more details.

    Upcoming training sessions

    We have a number of public training sessions dates, with seats available:

    Sessions and dates

    by Michael Opdenacker at September 22, 2014 01:04 PM

    Free Electrons at Kernel Recipes 2014

    The Kernel Recipes conference is holding its third edition next week in Paris, on September 25th and 26th. With speakers like Greg Kroah-Hartman, Hans Peter Anvin, Martin Peres, Hans Verkuil, Jean Delvare and many others, it is going to be a very interesting kernel-oriented conference.

    Free Electrons will participate in this conference: our engineer Maxime Ripard will give a talk about Supporting a new ARM platform: the Allwinner example, and will be attending the event on both days.

    Also, Free Electrons’ CEO Michael Opdenacker will be attending the conference as well.

    A good opportunity to meet Free Electrons folks, and discuss business or career opportunities! We are always interested in getting to know more engineers with embedded Linux or Linux kernel knowledge to join our engineering team, so do not hesitate to meet us during the conference, or contact us ahead of time to plan a discussion. If you don’t have a seat yet, unfortunately the conference is fully booked, but meeting in the area is possible too.

    by Thomas Petazzoni at September 22, 2014 07:31 AM

    September 19, 2014

    Video Circuits

    Video Workshop

    Here are pics of the analogue video workshop packs each attendee will be getting tomorrow!

    by Chris at September 19, 2014 05:14 PM

    Free Electrons

    Videos from Embedded Linux Conference 2014

    As the summer is coming to an end, we finally managed to publish the videos we recorded during the last Embedded Linux Conference, held earlier this year in San Jose, California.

    This year, the Linux Foundation was only recording the audio of the talks, and we were able to record video for only a few of them. Sorry to the speakers who won’t be able to see their footage; we were not able to attend (and record) all of the talks this year. Still, we include below the links to all the talks, their slides and audio recordings, in order to cover all of this year’s schedule.

    Our videos

    Alan Ott
    Signal 11 Software
    USB and the Real World
    Audio Recording
    Video (49 minutes):
    full HD (365M), 800×450 (224M)

    Alexandre Belloni
    Free Electrons
    Using Yocto for Modules Manufacturers
    Audio Recording
    Video (56 minutes):
    full HD (421M), 800×450 (224M)

    David Anders, Matt Ranostay
    CircuitCo, Intel
    Hardware Debugging Tools, Sigrok: Using Logic to Debug Logic
    Audio Recording
    Video (42 minutes):
    full HD (314M), 800×450 (223M)

    David Anders, Matt Porter, Matt Ranostay, Karim Yaghmour
    CircuitCo, Linaro, Intel, Opersys
    Debugging – Panel Discussion
    Audio Recording
    Video (43 minutes):
    full HD (322M), 800×450 (228M)

    Gregory Clement
    Free Electrons
    SMP Bring Up On ARM SOCs
    Audio Recording
    Video (48 minutes):
    full HD (359M), 800×450 (253M)

    Linus Walleij
    Fear and Loathing in the Media Transfer Protocol
    Audio Recording
    Video (55 minutes):
    full HD (414M), 800×450 (224M)

    Martti Piirainen
    Productizing Telephony and Audio in a GNU/Linux (Sailfish OS) Smartphone
    Audio Recording
    Video (46 minutes):
    full HD (343M), 800×450 (204M)

    Matt Porter
    Debugging – Linux Kernel Testing
    Audio Recording
    Video (47 minutes):
    full HD (357M), 800×450 (254M)

    Matt Porter
    Kernel USB Gadget Configfs Interface
    Audio Recording
    Video (42 minutes):
    full HD (317M), 800×450 (224M)

    Maxime Ripard
    Free Electrons
    Supporting a New ARM Platform: The Allwinner SoCs Example
    Audio Recording
    Video (48 minutes):
    full HD (364M), 800×450 (203M)

    Michael E Anderson
    The PTR Group, Inc.
    Extending Linux using Arduinos
    Audio Recording
    Video (57 minutes):
    full HD (430M), 800×450 (230M)

    Michael Opdenacker
    Free Electrons
    Update on Boot Time Reduction Techniques with Figures
    Audio Recording
    Video (45 minutes):
    full HD (340M), 800×450 (198M)

    Thomas Petazzoni
    Free Electrons
    Buildroot: What’s New?
    Audio Recording
    Video (52 minutes):
    full HD (392M), 800×450 (278M)

    Thomas Petazzoni
    Free Electrons
    Two Years of ARM SoC Support mainlining: Lessons Learned
    Audio Recording
    Video (52 minutes):
    full HD (388M), 800×450 (221M)

    Tomasz Figa
    Samsung R&D Institute
    Trees need care: A Solution to Device Tree Validation Problem
    Audio Recording
    Video (50 minutes):
    full HD (377M), 800×450 (234M)

    Tristan Lelong
    Adeneo Embedded
    Linux Quickboot
    Audio Recording
    Video (54 minutes):
    full HD (406M), 800×450 (288M)

    Other talks

    Adrian Perez de Castro
    Improving Performance Of A WebKit Port MIPS Platform
    Audio Recording

    Adrien Verge
    Ecole Polytechnique Montreal
    Hardware-Assisted Software Tracing
    Audio Recording

    Behan Webster
    Converse in Code Inc.
    LLVMLinux: Embracing the Dragon
    Audio Recording

    Belen Barros Pena
    Intel’s Open Source Technology Center
    Building Tools From the Outside In: Bringing User-Centered Design to Embedded Linux
    Audio Recording

    Bradley M. Kuhn
    Software Freedom Conservancy
    Collaborative GPL Enforcement Through Non-Profit Entities
    Audio Recording

    Joe Kontur
    CE Workgroup (BoFs)
    Audio Recording

    Chase Maupin
    Texas Instruments
    Using Agile Development Practices For Kernel Development
    Audio Recording

    Chris Simmonds
    A Timeline For Embedded Linux
    Audio Recording

    David Anders, Tim Bird, Matt Porter, Benjamin Zores, Karim Yaghmour
    CircuitCo, Sony Mobile, Linaro, Alcatel-Lucent, OperSys
    Keynote Panel: IoT and the Role of Embedded Linux and Android
    Audio Recording

    David Greaves
    Mer Project
    The #qt/#wayland/#systemd/#btrfs-phone … the Jolla phone
    Audio Recording

    Denys Dmytriyenko
    Texas Instruments
    Qt5 & Yocto – adding SDK and easy app migration from Qt4
    Audio Recording

    Gabriel Huau
    Adeneo Embedded
    Hardware Accelerated Video Streaming with V4L2
    Audio Recording

    Geert Uytterhoeven
    Glider bvba
    Engaging Device Trees
    Audio Recording

    Hans Verkuil
    Cisco Systems Norway
    An Introduction to the Video4Linux Framework
    Audio Recording

    Hisao Munakata, Tsugikazu Shibata
    Renesas Electronics, NEC
    LTSI Project Update for 3.10 Kernel and Future Plan
    Audio Recording

    Insop Song
    Can A Board Bringing Up Be Less Painful, if with Yocto and Linux?
    Audio Recording

    Iyad Qumei
    LG Electronics
    webOS, An Openembedded Use Case
    Audio Recording

    Jeff Osier-Mixon
    Intel Corporation
    Yocto Project / OpenEmbedded BoF
    Audio Recording

    Josh Cartwright
    Qualcomm Innovation Center
    System Power Management Interface (SPMI)
    Audio Recording

    Khem Raj
    Juniper Networks
    (Tutorial) Some GCC Optimizations for Embedded Software
    Audio Recording

    Laurent Pinchart
    Renesas Linux Kernel Team
    Mastering the DMA and IOMMU APIs
    Audio Recording

    John ‘Warthog9’ Hawley, Nitin Kamble
    Making a Splash: Digital Signage Powered by MinnowBoard and the Yocto Project
    Audio Recording

    Mark Brown
    What’s going on with SPI
    Audio Recording

    Mark Skarpness
    Keynote: Scaling Android at the Speed of Mobility
    Audio Recording

    Marta Rybczynska
    Porting Linux to a New Architecture
    Audio Recording

    Michael Christofferson
    User Space Drivers in Linux: Pros, Cons, and Implementation Issues
    Audio Recording

    Michael E Anderson
    The PTR Group, Inc.
    How to Build a Linux-Based Robot
    Audio Recording

    Minchan Kim
    LG Electronics
    Volatile Ranges
    Audio Recording

    Tim Bird
    Sony Mobile
    (BoFs) QCOM SoC Mainlining
    Audio Recording

    Patrick Titiano
    Use-Case Power Management Optimization: Identifying & Tracking Key Power Indicators
    Audio Recording

    Philip Balister
    Open-Source Tools for Software-Defined Radio on Multicore ARM+DSP
    Audio Recording

    Ricardo Salveti de Araujo
    Ubuntu Touch low level stack
    Ubuntu Touch Internals
    Audio Recording

    Thomas Petazzoni
    Free Electrons
    Device Tree for Dummies
    Audio Recording

    Tim Bird
    Sony Mobile
    Keynote: The Paradox of embedded and Open Source
    Audio Recording

    Tom Zanussi
    Intel’s Open Source Technology Center
    MicroYocto and the ‘Internet of Tiny’
    Audio Recording

    Victor Rodriguez
    Introducing Embedded Linux to Universities
    Audio Recording

    Vitaly Wool
    Softprise Consulting OU
    Linux for Microcontrollers: Spreading the Disease
    Audio Recording

    Wolfgang Mauerer
    Understanding the Embedded Linux Ecosystem with Codeface
    Audio Recording

    Yoshitake Kobayashi
    Using Real-Time Patch with LTSI Kernel
    Audio Recording

    by Maxime Ripard at September 19, 2014 12:51 PM

    September 13, 2014

    Altus Metrum

    keithp's rocket blog: AltOS 1.5

    AltOS 1.5 — EasyMega support, features and bug fixes

    Bdale and I are pleased to announce the release of AltOS version 1.5.

    AltOS is the core of the software for all of the Altus Metrum products. It consists of firmware for our cc1111, STM32L151, LPC11U14 and ATtiny85 based electronics and Java-based ground station software.

    This is a major release of AltOS, including support for our new EasyMega board and a host of new features and bug fixes.

    AltOS Firmware — EasyMega added, new features and fixes

    Our new flight computer, EasyMega, is a TeleMega without any radios:

    • 9 DoF IMU (3 axis accelerometer, 3 axis gyroscope, 3 axis compass).

    • Orientation tracking using the gyroscopes (and quaternions, which are lots of fun!)

    • Four fully-programmable pyro channels, in addition to the usual apogee and main channels.
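    The orientation tracking reduces, for purposes like tilt-based pyro lockout, to extracting a tilt angle from the current attitude quaternion. A minimal sketch in Python (illustrative only, not the actual AltOS code; it assumes a unit quaternion (w, x, y, z) rotating the body frame into the world frame):

```python
import math

def tilt_from_quaternion(w, x, y, z):
    """Angle (degrees) between the rocket's axis and vertical.

    Rotating the body-frame "up" vector (0, 0, 1) by the quaternion gives
    a world-frame z component of 1 - 2*(x*x + y*y), which is the cosine
    of the tilt angle.  Illustrative sketch only.
    """
    cos_tilt = 1.0 - 2.0 * (x * x + y * y)
    cos_tilt = max(-1.0, min(1.0, cos_tilt))  # clamp rounding error
    return math.degrees(math.acos(cos_tilt))
```

    An identity quaternion gives 0° of tilt; a 90° rotation about a horizontal axis gives 90°.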

    AltOS Changes

    We've made a few improvements in the firmware:

    • The APRS secondary station identifier (SSID) is now configurable by the user. By default, it is set to the last digit of the serial number.

    • Continuity of the four programmable pyro channels on EasyMega and TeleMega is now indicated via the beeper. Four tones are sent out after the continuity indication for the apogee and main channels with high tones indicating continuity and low tones indicating an open circuit.

    • Configurable telemetry data rates. You can now select among 38400 (the previous value, and still the default), 9600 or 2400 bps. To take advantage of this, you'll need to reflash your TeleDongle or TeleBT.
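    The APRS SSID default mentioned above is simply the last decimal digit of the serial number; as a one-line sketch (hypothetical function name, not AltOS source):

```python
def default_aprs_ssid(serial_number: int) -> int:
    """Default APRS SSID: the last decimal digit of the serial number."""
    return serial_number % 10
```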

    AltOS Bug Fixes

    We also fixed a few bugs in the firmware:

    • TeleGPS had separate flight logs, one for each time the unit was turned on. Turning the unit on to test stuff and turning it back off would consume one of the flight log 'slots' on the board; once all of the slots were full, no further logging would take place. Now, TeleGPS appends new data to an existing single log.

    • Increase the maximum computed altitude from 32767m to 2147483647m. Back when TeleMetrum v1.0 was designed, we never dreamed we'd be flying to 100k' or more. Now that's surprisingly common, and so we've increased the size of the altitude data values to fit modern rocketry needs.

    • Continuously evaluate pyro firing condition during delay period. The previous firmware would evaluate the pyro firing requirements, and once met, would delay by the indicated amount and then fire the channel. If the conditions had changed state, the channel would still fire. Now, the conditions are continuously evaluated during the delay period and if they change state, the event is suppressed.

    • Allow negative values in the pyro configuration. Now you can select a negative speed to indicate a descent rate or a negative acceleration value to indicate acceleration towards the ground.
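    The continuous-evaluation fix above amounts to re-checking the firing condition on every tick of the delay instead of only once at the start. A minimal sketch of that logic (illustrative Python, not the actual AltOS firmware; `condition_met` stands in for the configured pyro condition, sampled once per tick):

```python
def run_delayed_pyro(condition_met, delay_ticks):
    """Fire only if the pyro condition stays true for the whole delay.

    The condition is re-evaluated on every tick of the delay period; any
    change of state suppresses the event instead of firing it late.
    """
    for _ in range(delay_ticks):
        if not condition_met():
            return False  # condition changed state: suppress the event
    return True  # condition held for the full delay: fire the channel
```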

    AltosUI and TeleGPS — EasyMega support, OS integration and more

    The AltosUI and TeleGPS applications have a few changes for this release:

    • EasyMega support. That was a simple matter of adapting the existing TeleMega support.

    • Added icons for our file types, and hooked up the file manager so that AltosUI, TeleGPS and/or MicroPeak are used to view any of our data files.

    • Configuration support for APRS SSIDs, and telemetry data rates.

    September 13, 2014 06:47 PM

    September 10, 2014


    More lenses tested: Evetar N123B05425W vs. Sunex DSL945D

    We just tested two samples of Evetar N123B05425W lens that is very similar to Sunex DSL945D described in the previous post.

    Lens Specifications

    Specification                   Sunex DSL945D   Evetar N123B05425W
    Focal length                    5.5mm           5.4mm
    F#                              1/2.5           1/2.5
    IR cutoff filter                yes             yes
    Lens mount                      M12             M12
    Image format                    1/2.3           1/2.3
    Recommended sensor resolution   10Mpix          10Mpix

    Lens comparison

    Both lenses are specified to work with 10 megapixel sensors, so it is possible to compare “apples to apples”. This performance comparison is based only on our testing procedure and does not involve any additional data from the lens manufacturers; the lens samples were randomly selected from the purchased devices. Different applications require different features (or combinations of features) of a lens, and each of these lenses has its advantages over the other.

    The Sunex lens has very low longitudinal chromatic aberration (~5 μm), as indicated in the “Astigmatism” (bottom left) graphs; it is well corrected, so both the red and blue curves stay on the same side of the green one. The Evetar lens has a very small difference between red and green, but blue is more than 15 μm off. My guess is that the factory tried to make a lens that can work in “day/night” applications and optimized the design for the visible and infrared spectrum simultaneously. Sacrificing infrared at the design stage (it has no value in high-resolution visible-light applications anyway) could improve the performance of this already very good product.

    Petzval field curvature of the DSL945D is slightly better than that of the N123B05425W; astigmatism (the difference between the sagittal and the tangential focal shift for the same color) is almost the same, with a maximum of 18 μm at ~2 mm from the image center.

    Center resolution (MTF50 is shown) of the DSL945D is higher for each color, but only in the center: it drops much more quickly toward the periphery than the resolution of the N123B05425W does. According to our measurements, only the sagittal (radial) resolution of the blue component of the Evetar lens drops below 100 lp/mm, which gives that lens higher full-field weighted resolution values (top left plot in each figure).

    Lens testing data

    The graphs below and the testing procedure are described in the previous post. Solid lines correspond to the tangential components and dashed lines to the sagittal components of the radial aberration model; point marks show the measured parameter variations in different parts of the lenses at the same distance from the center.

    Sunex DSL945D


    Fig.1 Sunex DSL945D sample #1020 test results. Spreadsheet link.

    Evetar N123B05425W


    Fig.2 Evetar N123B05425W sample #9071 test results. Spreadsheet link.


    Fig.3 Evetar N123B05425W sample #9072 test results. Spreadsheet link.


    by andrey at September 10, 2014 08:25 PM

    September 09, 2014

    Michele's GNSS blog

    At ION GNSS+ 2014

    To those who happen to be in Tampa these days: I will also be around.

    Feel free to come and chat!

    by Michele Bavaro at September 09, 2014 11:04 AM

    September 07, 2014

    Video Circuits

    Tomislav Mikulic

    "Tomislav Mikulic is a Croatian computer graphics pioneer who exhibited at the Tendencies 5 in Zagreb (former Yugoslavia) in 1973 at the age of 20. He had composed the First Yugoslav Computer Animation Film, which had its premiere on 13th May 1976 in Zagreb."

    by Chris at September 07, 2014 09:01 AM

    September 06, 2014


    74HC4094 - 8-bit shift register : weekend die-shot

    74HC4094 is an 8-bit serial-in/parallel-out shift register.

    September 06, 2014 09:43 PM

    September 05, 2014

    LZX Industries

    Visual Cortex Release Pending

    Visual Cortex prototyping and development are now complete and we are about to enter manufacturing. We’ll be able to provide an accurate release date very soon. As you can see, we’ve added a lot of features in the final stages of development! Please feel free to write us with any questions.

    by Liz Larsen at September 05, 2014 02:00 PM

    September 02, 2014


    NibbleKiosk: controlling chromium through sound

    **updated for version 0.0.2**

    The idea of NibbleKiosk is to turn old monitors into interactive displays using simple hardware, such as a Raspberry Pi with a microphone. The sounds received by the microphone are turned into URLs and sent to the Chromium browser. The software comes with three programs:

    • one to create the sound files based on the URLs to be used by the client
    • one to create a database of URLs
    • the main program which does the signal processing and controlling of Chromium

    You first need to create a database of URLs:

    nibbledb -u -d test.db

    which outputs:

    test.db: key 1B95FB47 set to

    You can then create a sound file to use to trigger the URL:

    nibblewav 1B95FB47

    This will output a wav file with the same hex code in lowercase to your /tmp directory

    aplay /tmp/1b95fb47.wav

    and you should hear what it sounds like.

    You can now start the main program on the receiver. You should first start Chromium listening on port 9222:

    chromium-browser --remote-debugging-port=9222&

    You are now ready to start the main program with the database you created earlier:

    nibblekiosk -d test.db

    This should now listen continually for the right sounds to trigger URLs on Chromium. You can build your own clients with the wav files you generate.

    There are a number of variables involved in getting a functioning system. A key one is the signal magnitude required to trigger the system; you can use the -m flag to experiment with this. On a Raspberry Pi I have set this as low as -m 2, e.g.

    nibblekiosk -d test.db -m 2
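    The -m value gates on how loud an incoming audio frame is before the decoder is run. A sketch of that kind of magnitude gate (illustrative Python; the actual metric nibblekiosk uses may differ):

```python
import math

def magnitude(samples):
    """RMS magnitude of one audio frame (a list of PCM sample values)."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def triggered(samples, threshold):
    """True when the frame is loud enough to attempt decoding a URL key."""
    return magnitude(samples) >= threshold
```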

    I have had good performance from the microphone on an old USB webcam; or, if you want something small for the Pi, Konobo makes a very small USB microphone.

    If you are feeling brave and want to try it, I have made some packages for Ubuntu (14.04) and Raspbian:

    The only dependencies are OpenAL and Berkeley DB.

    by john at September 02, 2014 08:15 PM

    Bunnie Studios

    Name that Ware, August 2014

    The Ware for August 2014 is below.

    Sorry this month’s ware is a little bit late; I’ve been offline for the past couple of weeks. Thanks to Oren Hazi for contributing this ware!

    by bunnie at September 02, 2014 05:12 PM

    Winner, Name that Ware July 2014

    The Ware for July 2014 is a GSM signal booster, bought over the counter from a white-label dealer in China. There were many thoughtful, detailed and correct responses, making it very hard to choose a winner. Lacking a better algorithm than first-closest response, wrm is the winner. Congrats, email me for your prize!

    by bunnie at September 02, 2014 05:10 PM

    Free Electrons

    Embedded Linux Development with Yocto Project


    We were kindly provided a copy of Embedded Linux Development with Yocto Project, written by Otavio Salvador and Daiane Angolini. It is available at Packt Publishing, either in an electronic format (DRM free) or printed.

    This book will help you start with your embedded system development and integration using the Yocto Project or OpenEmbedded.

    The first chapter sheds some light on the meaning of commonly misused names: Yocto Project, Poky, OpenEmbedded, BitBake. Then, it doesn’t waste time and explains how to install and use Poky to build and then run an image. The entire book is full of examples that can easily be tested, providing useful hands-on experience, using Yocto Project 1.6 (Poky 11).

    The following chapters cover:

    • Hob: a user-friendly interface; note, however, that it will soon be deprecated and replaced by Toaster.
    • BitBake and Metadata: how to use BitBake, how to write recipes for packages or images, how to extend existing recipes, how to write new classes, how to create a layer, where to find existing layers and use them.
    • The build directory layout: what the generated files are, and what their use is.
    • Packaging: how to generate different package formats, how to handle a package feed and the package versions.
    • The various SDKs that can be generated and their integration in Eclipse.
    • Debugging the metadata: what the common issues are, how to find what is going wrong, and solving these issues.
    • Debugging the applications on the target: how to generate an image with debugging tools installed.
    • Available tools to help achieve copyleft compliance: in particular, how to cope with the GPL requirements.

    Finally, there is a chapter dedicated to explaining how to generate and run an image on the Wandboard, an i.MX6 based community board.

    The book is easy to read, with plenty of examples and useful tips. It requires some knowledge of generic embedded Linux system development (see our training), as only the Yocto Project specifics are covered. I would recommend it both for beginners wanting to learn about the Yocto Project and for developers wanting to improve their current knowledge and their recipes, and to understand the BitBake internals.

    Speaking of the Yocto Project, it is worth noting that Free Electrons is now offering a Yocto Project and OpenEmbedded training course (detailed agenda). If you’re interested, join one of the upcoming public training sessions, or order a session at your location!

    by Alexandre Belloni at September 02, 2014 09:20 AM

    August 30, 2014

    Video Circuits

    Richard Paul Lohse

    Picked up an exhibition catalogue of a Richard Paul Lohse show from 1970. There were some pretty interesting diagrams of the systems he used to construct his images. Similar concerns to early computer art/constructivist type stuff. Different image generation/process control systems are interesting me at the moment. from multi plane cameras, to the scanimate to digital software but somthing about doing things hands on like Lohse is still interesting.

    by Chris at August 30, 2014 09:14 AM

    August 29, 2014

    Richard Hughes, ColorHug

    Putting PackageKit metadata on the Fedora LiveCD

    While working on the preview of GNOME Software for Fedora 20, one problem became very apparent: when you launched the “Software” application for the first time, it went and downloaded metadata and then built the libsolv cache. This could take a few minutes of looking at a spinner, and was a really bad first experience. We tried really hard to mitigate this, in that when we ask PackageKit for data we say we don’t mind the cache being old, but on a LiveCD or on first install there wasn’t any metadata at all.

    So, what are we doing for F21? We can’t run packagekitd when constructing the live image as it’s a D-Bus daemon and will be looking at the system root, not the live-cd root. Enter packagekit-direct. This is an admin-only tool (no man page) installed in /usr/libexec that is designed to be run when you want to use the PackageKit backend without getting D-Bus involved.

    For Fedora 21 we’ll be running something like DESTDIR=$INSTALL_ROOT /usr/libexec/packagekit-direct refresh in fedora-live-workstation.ks. This means that when the Live image is booted we’ve got both the distro metadata to use, and the libsolv files already built. Launching gnome-software then takes 440ms until it’s usable.

    by hughsie at August 29, 2014 07:04 PM

    Free Electrons

    Free Electrons at the Embedded Linux Conference Europe

    The Embedded Linux Conference Europe will take place on October 13-15 in Düsseldorf, Germany. As usual, a large part of the Free Electrons engineering team will participate in the conference, with no fewer than 7 engineers: Alexandre Belloni, Boris Brezillon, Grégory Clement, Michael Opdenacker, Thomas Petazzoni, Maxime Ripard and Antoine Ténart.

    Several of our talk proposals have been accepted, so we’ll be presenting about the following topics:

    In addition to this participation to the Embedded Linux Conference Europe:

    • Many of us will also participate in the Linux Plumbers conference, on October 15-17. It’s another great opportunity to talk about topics around real-time, power management, storage, multimedia, and more.
    • Thomas Petazzoni will participate to the next Buildroot Developers Meeting.

    As usual, we’re looking forward to this event! Do not hesitate to get in touch with us if you’re interested in meeting us during these events for specific discussions.

    by Thomas Petazzoni at August 29, 2014 09:47 AM

    Altus Metrum

    bdale's rocket blog: EasyMega v1.0

    Keith and I are pleased to announce the immediate availability of EasyMega v1.0!

    EasyMega is effectively a TeleMega without the GPS receiver and radio telemetry system. TeleMega and EasyMega both have 6 pyro channels and enough sensors to lock out pyro events based on conditions like tilt-angle from vertical, making both boards ideal solutions for complex projects with air start or multi-stage engine ignition requirements. Choose TeleMega for a complete in-airframe solution including radio telemetry and GPS, or EasyMega if you already have a tracking solution you like and just need intelligent control of multiple pyro events.

    EasyMega is 2.25 x 1.25 inches (57.15 x 31.75 mm), which means it can be easily mounted in a 38 mm air frame coupler. The list price for EasyMega is $300, but as an introductory special, you can purchase one now through Labor Day for only $250! This special is only good for in-person purchases at Airfest and orders placed directly through Bdale's web store.

    Altus Metrum products are available directly from Bdale's web store, and from these distributors:

    All Altus Metrum products are completely open hardware and open source. The hardware design details and all source code are openly available for download, and advanced users are invited to join our developer community and help to enhance and extend the system. You can learn more about Altus Metrum products at

    August 29, 2014 03:12 AM

    August 28, 2014

    Free Electrons

    Embedded Linux training update: Atmel Xplained, and more!

    We are happy to announce that we have published a significant update of our Embedded Linux training course. As with all our training materials, this update is freely available to everyone, under a Creative Commons (CC-BY-SA) license.

    This update brings the following major improvements to the training session:

    • The hardware platform used for all the practical labs is the Atmel SAMA5D3 Xplained platform, a popular board that features the ARMv7-compatible Atmel SAMA5D3 processor and expansion headers compatible with Arduino shields. The fact that the platform is very well supported by the mainline Linux kernel, together with the easy access to a wide range of Arduino shields, makes it a very useful prototyping platform for many projects. Of course, as usual, participants in our public training sessions keep their board after the end of the course! Note that we continue to support the IGEPv2 board from ISEE for customers who prefer this option.
    • The practical labs that consist of Cross-compiling third party libraries and applications and Working with Buildroot now use a USB audio device connected to the Xplained board on the hardware side, and various audio libraries/applications on the software side. This replaces our previous labs, which used DirectFB as an example of a graphical library in a system emulated under QEMU. We believe that practical labs on real hardware are much more interesting and exciting.
    • Many updates were made to various software components used in the training session: the toolchain components were all updated and we now use a hard float toolchain, more recent U-Boot and Linux kernel versions are used, etc.

    The training materials are available as pre-compiled PDFs (slides, labs, agenda), and their source code is also available in our Git repository.

    If you are interested in this training session, see the dates of our public training sessions, or order one to be held at your location. Do not hesitate to contact us at for further details!

    It is worth mentioning that for the purpose of the development of this training session, we did a few contributions to open-source projects:

    Thanks a lot to our engineers Maxime Ripard and Alexandre Belloni, who worked on this major update of our training session.

    by Thomas Petazzoni at August 28, 2014 05:52 AM

    August 27, 2014

    Andrew Zonenberg, Silicon Exposed

    Updates and pending projects

    It's been a while since I've written anything here so here's a bit of a brain-dump on upcoming stuff that will find its way here eventually.

    Thesis stuff

    This has been eating the bulk of my time lately. I just submitted a paper to ACM Computing Surveys and am working on a conference paper for EDSC that's due in two weeks or so. With any luck the thesis itself will be finished by May and I can graduate.

    Lab improvements

    I'm in the process of fixing up my lab to solve a bunch of the annoying things that have been bugging me. Most/all of these will be expanded into a full post once it's closer to completion.
    • Racking the FPGA cluster
      The "raised floor" FPGA cluster was a nice idea but the 2D structure doesn't scale. I've filled almost all of it and I really need the desk space for other things.

      I ordered a 3U Eurocard subrack from Digikey and once it arrives will be making laser-cut plastic shims to load all of my small boards into it. The first card made for the subrack is already inbound: a 3U x 4HP 10-port USB hub to replace several of the 4-port hubs I'm using now. It will be hosted by my Beaglebone Black, which will function as a front-end node bridging the USB-UART and USB-JTAG ports out to Ethernet.

      The AC701 board is huge (well over 3U on the shortest dimension) so I may end up moving it into one of the two empty 1U Sun "pizza box" server cases I have lying around. If this happens the Atlys boards may accompany it since they won't fit comfortably in 3U either.
    • Ethernet - JTAG card
      FTDI-based JTAG is simple and easy but the chips are pricey and to run in a networked environment you need a host PC/server. I'm in the early stages of designing an XC6SLX45 based board with a gigabit Ethernet port, IPv6 TCP offload engine, and 16 buffered, level-shifted JTAG ports. It will speak the libjtaghal jtagd protocol directly, without needing a CPU or operating system, for ultra-low latency and near zero jitter.
    • Logo
      I've gone long enough without having a nice logo to put on my boards, enclosures, etc. At some point I should come up with one...

    Test equipment

    I've gradually grown fed up with current test equipment. Why would I want to fiddle with knobs and squint at a tiny 320x240 LCD when I could view the signal on my 7040x1080 quad-screen setup or, better yet, the triple 4K displays I'm going to buy when prices come down a bit? Why waste bench space on dials and buttons when I could just minimize or close the control application when it's not in use? As someone who spends most of his time sitting in front of a computer I'd much prefer a "glass cockpit" lab with few physical buttons.

    I'm now planning to make a suite of test equipment based on the Unix philosophy: do one thing and do it well. Each board will be a 3U Eurocard with a power input on the back and Ethernet + probe/signal connections on the front. They will implement the low-level signal capture/generation, buffering, and trigger logic but then leave all of the analysis and configuration UI to a PC-based application, connected over 1- or 10-gigabit Ethernet depending on the tool. Projects are listed in the approximate order that I plan to build them.
    • 4-channel TDR for testing cat5e cable installs
      This design will be based on the same general concept as a SAR ADC, with the sampling matrix transposed. Instead of gradually refining one sample before proceeding to the next, the entire waveform will be sampled once, then gradually refined over time.

      Each channel of the TDR will consist of a high-speed 100-ohm differential output from a Spartan-6 FPGA to generate a pulse with very fast rise time, AC coupled into one pair of a standard RJ45 jack which will plug into the cable under test.

      On the input stage, the differential signals will be subtracted by an opamp, then the single-ended differential voltage compared against a reference voltage produced by a DAC using a LMH7324SQ or similar ultra-fast comparator. The comparator will have LVDS outputs driving a differential input on the Spartan-6, which can sample DDR LVDS at up to 1 GHz. This will produce a single horizontal slice across a plot of impedance mismatch/reflection intensity vs time/distance.

      By sending multiple pulses in sequence with successively increasing reference voltages from the DAC, it should be possible to reconstruct an entire TDR trace to at least 8 bits of precision for a fraction of the cost of even a single 1 GSa/s ADC.

      Given the 5ns/m nominal propagation delay of cat5 cable (10ns/m after round-trip delay), the theoretical spatial resolution limit is 10cm, although I expect noise and sampling issues to reduce usable positioning accuracy to 20-50cm, and the TDR will need to be calibrated with a known length of cable from the same lot if exact propagation delays are needed to compute the precise location of a fault.
    • 10-channel DC power supply

      Offshoot of the PDU. Ten-channel buck converter stepping 24 VDC down to an adjustable output voltage, operating frequency around 1.5 MHz. Digital feedback loop with support for soft-start, state machine based current limiting and overcurrent shutdown, etc.

      More details TBD once I have time to flesh out the concept a bit.
    • Gigabit Ethernet protocol analyzer
      Spartan-6 connected to three 1000BASE-T PHYs. Packets coming in port A are sent out port B unchanged, and vice versa. All traffic going either way is buffered in some kind of RAM, then encapsulated inside a TCP stream and sent out port C to an analysis computer which can record stats, write a pcap, etc.

      The capture will be raw layer-1 and include the preamble, FCS, metadata describing link state changes and autonegotiation status, and cycle-accurate timestamps. Error injection may be implemented eventually if needed.

    • 128-channel logic analyzer
      This will be based on RED TIN, my existing FPGA-based ILA, but with more features and an external 4GB DDR3 SODIMM for buffering packet data. A 64-bit data bus at 1066 MT/s should be more than capable of pushing 32 channels at 1 GHz, 64 at 500 MHz, or 128 at 250 MHz. The input standards planned to be supported are LVCMOS from 1.5 to 3.3V, LVDS, SSTL, and possibly 5V LVTTL if the input buffer has sufficient range. I haven't looked into CML yet but may add this as well.

      The FPGA board will connect to the host PC via a 10gbit Ethernet link using SFP+ direct attach cabling. Dumping 4GB (32 Gb) of data over 10gbe should take somewhere around 4 seconds after protocol overhead, or less if the capture depth is set to less than the maximum.

      The FPGA board will connect via matched-impedance 100-ohm parallel cables (perhaps something like DigiKey 670-2626-ND) to eight active probe cards. Each probe card will have a MICTOR or similar connector to the DUT providing numerous grounds, optional SSTL Vref, 16 digital inputs, and two clock/strobe inputs with optional complement inputs for differential operation. An internal DAC will allow generation of a threshold voltage for single-ended LVCMOS inputs.

      The probe card input stage will consist of the following for each channel:
      • Unity-gain buffer to reduce capacitive load on the DUT
      • Low-speed precision analog mux to select external Vref (for SSTL) or internal Vref (for LVCMOS). This threshold voltage may be shared across several/all channels in the probe card, TBD.
      • High-speed LVDS-output comparator to compare single-ended inputs against the muxed Vref.
      • 2:1 LVDS mux for selecting single-ended or differential inputs. Input A is the LVDS output from the comparator, input B is the buffered differential input from this and the adjacent channel. To reduce bit-to-bit skew all channels will have this mux even though it's redundant for odd-numbered channels.
      The end result will be 16 LVDS data bits and 2 LVDS clock bits, fed over 18 differential pairs to the FPGA board. The remaining lines in the ribbon will be used for shielding grounds, analog power, and an I2C bus to control the DAC and drive an I/O expander for controlling the mux selectors.
    LA input stage for two single-ended or one differential channel
    • 4-channel DSO
      This will use the same FPGA + DDR3 + 10gbe back end as the LA, but with the digital input stage replaced by an AFE and two of TI's 1.5 GSa/s dual ADCs with interleaving support.

      This will give me either two channels at 3 GSa/s with a target bandwidth of 500 MHz, or four channels at 1.5 GSa/s with a target bandwidth of 250 MHz. The resulting raw data rate will be 3 GSa/s * 8 bits * 2 channels or 48 Gbps, and should comfortably fit within the capacity of a 64-bit DDR3 1066 interface.

      I have no more details at this point as my mixed-signal-fu is not yet to the point that I can design a suitable AFE. This will be the last project on the list to be done due to both the cost of components and the difficulty.
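    The bandwidth budgets quoted above for the LA and DSO back ends are easy to sanity-check. The following sketch just re-derives the numbers from the text; it is a back-of-the-envelope check, not part of the original design notes:

```python
# Sanity-check the bandwidth figures quoted for the LA and DSO back ends.

GBIT = 1e9

# DDR3 buffer: 64-bit bus at 1066 MT/s
ddr3_gbps = 64 * 1066e6 / GBIT            # ~68.2 Gbps raw

# Logic analyzer capture rates (channels x sample rate x 1 bit/sample)
la_32ch = 32 * 1e9 / GBIT                 # 32 Gbps
la_64ch = 64 * 500e6 / GBIT               # 32 Gbps
la_128ch = 128 * 250e6 / GBIT             # 32 Gbps

# DSO: two interleaved channels at 3 GSa/s, 8 bits per sample
dso_gbps = 2 * 3e9 * 8 / GBIT             # 48 Gbps

# Dumping a full 4 GB (32 Gb) buffer over 10GbE, ignoring protocol overhead
dump_seconds = (4 * 8) / 10               # 3.2 s, so "around 4 s" with overhead

# All capture rates fit comfortably within the raw DDR3 bandwidth
assert la_32ch <= ddr3_gbps and dso_gbps <= ddr3_gbps
print(ddr3_gbps, dso_gbps, dump_seconds)
```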

    by Andrew Zonenberg ( at August 27, 2014 12:46 AM

    August 24, 2014


    Atmel AT90USB162 : weekend die-shot

    Atmel AT90USB162 is an 8-bit microcontroller with hardware USB, 16KiB flash and 512B of SRAM/EEPROM.

    August 24, 2014 09:41 PM

    August 22, 2014


    A bit of advertising

    This year the book “GPS, GLONASS, Galileo, and BeiDou for Mobile Devices: From Instant to Precise Positioning” by Dr. Ivan G. Petrovski was published. It contains a link to my article. More details about the book are available through the link


    August 22, 2014 10:13 AM

    GLONASS: step towards CDMA

    This summer an almost unnoticed event happened. In June the GLONASS-M (№755) satellite with L3-band equipment was launched, and since the beginning of August it has been included in the GLONASS constellation. This means that at this moment there are two satellites capable of transmitting CDMA signals in the L3 band... These events became a reason to experiment with receiving signals in the L3 band from two satellites simultaneously. Another reason was the possibility of using the USRP B200 SDR receiver for these experiments. So a time when both satellites were visible was chosen and a recording was made. The pilot component of the signal was chosen for processing. During the experiments it was discovered that GLONASS-M transmits only the pilot component, while GLONASS-K transmits both the pilot and data components of the signal. Results of the signal processing are in the figures below.




    August 22, 2014 10:10 AM

    August 17, 2014


    TI CC1100 (formerly Chipcon) : weekend die-shot

    TI CC1100 is a radio transceiver for the 300-348 MHz, 400-464 MHz and 800-928 MHz ranges.

    Apparently the initials of 30 people involved in the design of this chip are mentioned in the lower right corner. Although this chip was designed after TI's acquisition of Chipcon (which happened in January 2006), it is still marked as Chipcon.

    August 17, 2014 11:04 PM

    August 14, 2014

    Video Circuits

    Bristol Video Workshop

    So Alex and I will be teaching a beginners' workshop in analogue video techniques as part of the Encounters Festival at the Arnolfini Gallery on Saturday the 20th of September. The whole reason and drive behind the workshop is McLaren 2014, a celebration of Norman McLaren's work and life. Joseph, who is the force behind the Seeing Sound festival, asked if I would put together a workshop exploring some analogue video techniques. The Arnolfini is one of my favourite venues down south; I recently caught a screening of Jordan Belson's work on film which absolutely blew me away, and they seem to have a regular programme of interesting audio-visual and electronic performance stuff. Alex and I have prepared a simple starter into the world of electronic video with some basic experiments to try and a little background history.

    Chris J King and Alexander Peverett
    10:00 – 13:00
    Mclaren for event page
    Media Artists Chris J King and Alexander Peverett will present a workshop on hands on video techniques influenced by McLaren. The themes in McLaren’s work of drawn sound and visual music were expanded by later artists using electronic video and video synthesis. The workshop will include an introduction to both the historical and technical aspects of electronic video work as well as the construction of a simple circuit and experimentation with video feedback. Be prepared for vivid colours, frenetic sounds and dancing shapes! The cost includes all the parts to make your circuit to take away and play with as well as mirrors to manipulate video feedback and a small publication containing all the information covered in the workshop.

    by Chris ( at August 14, 2014 05:37 AM

    August 13, 2014

    Bunnie Studios

    Dangerous Prototypes’ Hacker Camp SZ, 2nd Edition

    My buddies at Dangerous Prototypes are hosting another Shenzhen hacker camp at the end of September. If you missed the last hacker camp or are just curious about Shenzhen, check it out — the slots are filling up fast!

    Come to the world’s electronics capital and experience Shenzhen like a local hacker. Tour the famous Huaqiangbei electronics markets with people who live in the neighborhood, figure out what to eat and how to get around, and of course – learn how to reball BGA chips from a soldering master with noth’n but hand tools.

  • Optional: Tuesday 23 – early arrival dinner at Japanese Secret Location
  • Optional: Wednesday 24 – tour of Dongmen market & sign street, copy mall
  • Thursday 25 – talks: how to survive Shenzhen, Huaqiangbei tour
  • Friday 26 – talks: how to use Shenzhen to the fullest, BGA reballing day 1
  • Saturday 27 – BGA reballing day 2, hacker BBQ
    That’s just an overview. See the full Hacker Camp Shenzhen schedule here. You can expect nightly dinners and parties all week. If you want to come really early, we’re hacking Phuket from the 15th to the 19th.

    by bunnie at August 13, 2014 12:56 PM

    August 10, 2014

    Andrew Zonenberg, Silicon Exposed

    Microchip PIC32MZ process vs PIC32MX

    Those of you keeping an eye on the MIPS microcontroller world have probably heard of Microchip's PIC32 series parts: MIPS32 CPU cores licensed from MIPS Technologies (bought by Imagination Technologies recently) paired with peripherals designed in-house by Microchip.
    Although they're sold under the PIC brand name they have very little in common with the 8/16 bit PIC MCUs. They're fully pipelined processors with quite a bit of horsepower.

    The PIC32MX family was the first to be introduced, back in 2009 or so. They're built around a MIPS M4K core running at up to 80 MHz and max out at 128 KB of SRAM and 512 KB of NOR flash, plus a fairly standard set of peripherals.

    PIC32MX microcontroller

    Somewhat disappointingly, the PIC32MX MMU is fixed-mapping and there is no external bus interface. Although there is support for user/kernel privilege separation, all userspace code shares one address space. Another minor annoyance is that all PIC32MX parts run from a fixed 1.8V on-die LDO which normally cannot be disabled or bypassed to run from an external supply (the 300 series is an exception).

    The PIC32MZ series is just coming out now. They're so new, in fact, that they show as "future product" on Microchip's website and you can only buy them on dev boards, although I'm told they'll be reaching distributors around Q3-Q4 of this year. They fix a lot of the complaints I have with the PIC32MX and add a hefty dose of speed: a 200 MHz max CPU clock and an on-die L1 cache.

    PIC32MZ microcontroller

    On-chip memory in the PIC32MZ is increased to up to 512 KB of SRAM and a whopping 2 MB of flash in the largest part. The new CPU core has a fully programmable MMU and support for an external bus interface capable of addressing up to 16MB of off-chip address space.

    I'm a hacker at heart, not just a developer, so I knew the minute I got one of these things I'd have to tear it down and see what made it tick. I looked around for a bit, found a $25 processor module on Digikey, and picked it up.

    The board was pretty spartan, which was fine by me as I only wanted the chip.

    PIC32MZ processor module
    Less than an hour after the package had arrived, I had the chip desoldered and simmering away in a beaker of sulfuric acid. I had done a PIC32MX340F512H a few days previously to provide comparison shots.

    Without further ado, here's the top metal shots:

    These photos aren't to scale, the MZ is huge (about 31.9 mm2). By comparison the MX is around 20.

    From an initial impression, we can see that although both run at the same core voltage (1.8V) the MZ is definitely a new, significantly smaller fab process. While the top layer of the MX is fine-pitch signal routing, the top layer of the MZ is (except in a few blocks which appear to contain analog circuitry) completely filled with power distribution routing.

    Top layer closeups of MZ (left), MX (right), same scale

    Thick power distribution wiring on the top layer is a hallmark of deep-submicron processes, 130 nm and below. Most 180 nm or larger devices have at least some signal routing on the top layer.

    Looking at the mask revision markings gives a good hint as to the layer count and stack-up.

    Mask rev markings on MZ (left), MX (right), same scale
    The MZ appears to be one thick aluminum layer and five thin copper layers for a total of six, while the MX is four layers and probably all aluminum.

    Enough with the top layer... time to get down! Both samples were etched with HF until all metal and poly was removed.

    The first area of interest was the flash.

    NOR flash on MZ (left), MX (right), different scales
    Both arrays appear to be the same standard NOR structure, although the MZ's array is quite a bit denser: the bit cell pitch is 643 x 270 nm (0.173 μm²/bit) while the MX's is 1015 x 676 nm (0.686 μm²/bit). The 3.96x density increase suggests a roughly 2x process shrink.

    The white cylinders littering the MX die are via plugs, most likely tungsten, left over after the HF etch. The MZ appears to use a copper damascene process without via plugs, although since no cross section was performed details of layer thicknesses etc are unavailable.

    The next target was the SRAM.

    6T SRAM on MZ (left), MX (right), different scales
    Here we start to see significant differences. The MX uses a fairly textbook 6T "doughnut + H" SRAM structure while the MZ uses a more modern lithography-optimized pattern made of all straight lines with no angles, which is easier to etch. This kind of bit cell is common in leading-edge processes but this is the first time I've seen it in a commodity MCU.

    Cell pitch for the MZ is 1345 x 747 nm (1.00 μm²/bit) while the MX is 1895 x 2550 nm (4.83 μm²/bit). This is a 4.83x increase in density.
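    The density figures for both the flash and SRAM arrays follow directly from the measured pitches; as a quick sanity-check (values taken from the text above, not part of the original analysis):

```python
# Reproduce the bit-cell density figures from the measured pitches (nm).
import math

def cell_area_um2(x_nm, y_nm):
    """Bit cell area in square microns from x/y pitch in nanometers."""
    return (x_nm / 1000) * (y_nm / 1000)

flash_mz = cell_area_um2(643, 270)    # ~0.174 um^2/bit
flash_mx = cell_area_um2(1015, 676)   # ~0.686 um^2/bit
sram_mz  = cell_area_um2(1345, 747)   # ~1.00 um^2/bit
sram_mx  = cell_area_um2(1895, 2550)  # ~4.83 um^2/bit

flash_ratio = flash_mx / flash_mz     # ~3.95x denser
sram_ratio  = sram_mx / sram_mz       # ~4.81x denser

# A ~4x area shrink corresponds to a ~2x linear shrink,
# consistent with a 250 nm -> 130 nm process move.
linear_shrink = math.sqrt(flash_ratio)
print(flash_ratio, sram_ratio, linear_shrink)
```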

    The last area of interest was the standard cell array for the CPU.

    Closeup of standard cells on MZ (left), MX (right), different scales
    Channel length was measured at 125-130 nm for the MZ and 250-260 nm for the MX.

    Both devices also had a significant number of dummy cells in the gate array, suggesting that the designs were routing-constrained.

    Dummy cells in MZ
    Dummy cells in MX

    In conclusion, the PIC32MZ is a significantly more powerful 130 nm upgrade to the slower 250 nm PIC32MX family. If Microchip fixes most of the silicon bugs before they launch I'll definitely pick up a few and build some stuff with them.

    I wasn't able to positively identify the fab for either device; however, the fill patterns and power distribution structure on the MZ are very similar to those of the TI AM1707, which is fabricated by TSMC, so that's my first guess.

    For more info and die pics check out the SiliconPr0n pages for the two chips:

    by Andrew Zonenberg ( at August 10, 2014 08:30 AM

    August 05, 2014

    Bunnie Studios

    Introducing lowRISC

    There’s a new, open-to-the-RTL CPU project called lowRISC.

    lowRISC is producing fully open hardware systems. From the processor core to the development board, our goal is to create a completely open computing eco-system.

    Our open-source SoC (System-on-a-Chip) designs will be based on the 64-bit RISC-V instruction set architecture. Volume silicon manufacture is planned as is a low-cost development board.

    lowRISC is a not-for-profit organisation working closely with the University of Cambridge and the open-source community.

    This is a positive development for the open source hardware community and I’m excited and honored to be included on their technical advisory board. Can’t wait to play with it!

    by bunnie at August 05, 2014 02:37 PM

    Video Circuits

    South Kiosk Summer Screen #2 - Oscillate Wildly

    South Kiosk have turned their gallery into a screening space for a series of one-off events taking place over the latter summer months.

    For the second installment of their Summer Screen programme, South Kiosk will work with a number of artists, musicians and technologists to construct an immersive installation of flickering CRT surfaces. The various works will offer up a series of experiments in visual mutation through analogue processes, and the degradation of video signal and the VHS tape format. These different approaches offer perspective on a particular branch of filmmaking, and sonic experimentation.

    Featuring work by: James Alec Hardy, Phil Baljeu, Will Cenci, Greg Zifcak, Dan Sandin!

    by Chris ( at August 05, 2014 02:45 AM

    August 03, 2014


    Fairchild NC7SZ57 - universal 2-input gate : weekend die-shot

    Fairchild NC7SZ57 (and 58) are universal 2-input Schmitt-trigger gates, which let us implement various 2-input logic functions by wiring the pins in different ways.

    Die size is 416x362 µm, the smallest among the microchips we've seen.

    Compared to the 1-gate NAND2 TI SN74AHC1G00, the die area here is 1/3 smaller because the area below the pads is not wasted but used for IO transistors and wiring. It is unclear, though, how they achieved decent yields (as structures there might get damaged during wire bonding) - we can only tell that the insulation below the last metal is much thicker than usual.

    Drop us a message if you have experience or knowledge on getting high-yield logic under pads - this is something we would be interested to have in our own product.

    August 03, 2014 11:17 PM

    July 30, 2014

    Bunnie Studios

    Name that Ware July 2014

    The Ware for July 2014 is shown below.

    Sorry that posts and updates have been infrequent the past few months — been really busy!

    by bunnie at July 30, 2014 09:07 AM

    Winner, Name that Ware June 2014

    The Ware for June 2014 is a Lantronix SLC RS232 I/O server. I’ll declare Jacob Creedon as the winner for being very close with his first response and providing some in-depth analysis to back up his guesses! Congrats, email me for your prize.

    by bunnie at July 30, 2014 09:07 AM

    July 29, 2014


    NXP PCA9570 - 4-bit IO expander : weekend die-shot

    NXP PCA9570 is an I²C 4-bit IO expander, although there are 4 unused pads on the die: probably the 8-bit version uses the same die. 800nm technology.

    Die size 589x600 µm.

    After a (terrible) metal etch we see the IO transistors right under the pads:

    July 29, 2014 04:39 AM

    July 27, 2014


    KILAR KV1084 5A linear regulator : weekend die-shot

    Remember the good old days when you could feed a CPU from a single linear regulator? The KILAR KV1084 comes from that time.

    Compared to the LM2940L or LM1117 there are more bonding pads per signal and obviously larger output transistors. The chip was soldered to a copper heat spreader to help dissipate 10W+.

    Die size 3075x3026 µm.

    July 27, 2014 11:35 AM

    July 26, 2014


    Lens testing at Elphel

    We have been measuring lens performance since we got involved in the optical issues of the camera design. There are several blog posts about it, starting with "Elphel Eyesis camera optics and lens focus adjustment". Since then we have improved our methods of measuring the Point Spread Function (PSF) of the lenses over the full field of view, using a target pattern modified from the standard checkerboard type to have better spatial frequency coverage. Now we use a large (3m x 7m) pattern for lens testing, sensor front end (SFE) alignment, camera distortion calibration and aberration measurement/correction for Eyesis series cameras.

    Fig.1 PSF measured over the sensor FOV

    Fig.1 PSF measured over the sensor FOV – composite image of the individual 32×32 pixel kernels

    So far lens testing was performed for just two purposes – to select the best quality lenses (we use approximately half of the lenses we receive) and to precisely adjust the sensor position and tilt to achieve the best resolution over the full field of view. This was sufficient for our purposes, but as we are now involved in custom lens design it became more important to process the raw PSF data and convert it to lens parameters that we can compare against the simulated ones obtained during the lens design process. Such technology will also help us to fine-tune the new lens design requirements and optimization goals.

    The starting point was the set of PSF arrays calculated using images of the pattern acquired while scanning over a range of lens-to-sensor distances in small increments, as illustrated in the animated GIF image of Fig.1. The sensor surface was not aligned to be perpendicular to the optical axis of the lens before the measurement – each lens and even each sensor chip has slight variations of tilt, and this is dealt with during processing of the data (and during the final alignment of the sensor in production, of course). The PSF measurement based on the repetitive pattern gives sub-pixel resolution (1.1μm in our case with a 2.2μm Bayer mosaic pixel period – 4:1 up-sampled for red and blue in each direction), but there is a limit on the PSF width that the particular setup can handle. Too far out of focus and the pattern cannot be reliably detected. That causes some artifacts on the animations made of the raw data; these PSF samples are filtered out during further processing. In the end we are interested in lens performance when it is almost in perfect focus, so scanning too far away does not provide much practical value anyway.

    Acquiring PSF arrays

    Fig. 2 Pattern Grid

    Fig. 2 Pattern grid image

    Each acquired image of the calibration pattern is split into color channels (Fig.2 shows the raw pattern image – if you open the full version and zoom in you can see that there is a 2×2 pixel periodic structure) and each channel is processed separately; colors are combined back in the images only for illustrative purposes. From the full image a set of 40 samples (per color) is processed, each corresponding to 256×256 pixels of the original image.

    Fig. 3 shows these sample areas with windowing functions applied (this reduces artifacts when converting the data to the frequency domain). Each area is up-sampled to 512×512 pixels. For the red and blue channels only one in 4×4=16 pixels is real, for green – two of 16. Such reconstruction is possible because multiple periods of the pattern are acquired (more description is available in the earlier blog post). The size of the samples is determined by a balance between the sub-pixel resolution (the larger the area – the better) and the resolution of the PSF measurements over the FOV. It is also difficult to process large areas in the case of higher lens distortions, because the calculated "ideal" grid used for deconvolution has to be curved to precisely match the acquired image – errors would widen the calculated PSF.

    Fig. 3 Pattern image split into 40 regions for PSF sampling

    Fig. 3 Pattern image split into 40 regions for PSF sampling

    The model pattern is built by first correlating each pattern grid node (twisted corner of the checkerboard pattern) over a smaller area that still provides sub-pixel resolution, and then calculating the second degree polynomial transformation of the orthogonal grid that matches these grid nodes. The calculated transformation is applied to the ideal pattern, and the result is used in deconvolution with the measured data, producing the PSF kernels as 32×32 pixel (or 35μm x 35μm) arrays. These arrays are stored as 32-bit multi-page TIFF images arranged similarly to the animated GIF of Fig.1, making it easier to handle them manually. The full PSF data can be used to generate MTF graphs (and it is used during camera aberration correction), but for the purpose of the described lens testing each PSF sample is converted to just 3 numbers describing the ellipse approximating the PSF full width half maximum (FWHM). These 3 numbers are reduced to just two when the lens center is known – sagittal (along the radius) and tangential (perpendicular to the radius) projections. The lens center is determined either by finding the lens radial distortion center using our camera calibration software, or it can be found as a pair of variable parameters during the overall fitting process.

    Data we collected in earlier procedure

    In our previous lens testing/adjustment procedures we adjusted the tilt of the sensor (it is driven by 3 motors providing both focal distance and image plane tilt control) by balancing the vertical-to-horizontal PSF FWHM difference in both X and Y directions and then finding the focal distance providing the best "averaged" resolution. As we need good resolution over the full FOV, not just in the center, we are interested in maximizing the worst resolution over the FOV. As a compromise we currently take a higher (fourth) power of the individual PSF component widths (horizontal and vertical) over all FOV samples, average the results and extract the fourth root. Then we mix the results for the individual colors with 0.7:1.0:0.4 weights and consider this a single quality parameter for the lens (among samples of the same lens model). There are different aberration types that widen the PSF of the lens-sensor combination, but they all result in degradation of the resulting image "sharpness". For example, the lens lateral chromatic aberration combined with the spectral bandwidth of the sensor color filter array reduces the lateral resolution of the peripheral areas compared to the monochromatic performance presented on the MTF graphs.
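    As a rough illustration, the composite quality metric described above might be sketched as follows. The exact normalization of the color weights in the actual software is not stated, so treat those details as assumptions:

```python
# Sketch of the composite lens-quality metric: a fourth-power mean of
# PSF FWHM over all FOV samples and both directions, then a weighted
# mix of the three colors (0.7:1.0:0.4). Normalization is an assumption.

def fourth_power_mean(widths):
    """Average the fourth powers and take the fourth root.

    This emphasizes the worst (widest) PSF samples, matching the goal
    of maximizing the worst resolution over the FOV."""
    return (sum(w ** 4 for w in widths) / len(widths)) ** 0.25

def lens_quality(red, green, blue, weights=(0.7, 1.0, 0.4)):
    per_color = [fourth_power_mean(c) for c in (red, green, blue)]
    wr, wg, wb = weights
    return (wr * per_color[0] + wg * per_color[1] + wb * per_color[2]) / (wr + wg + wb)

# Hypothetical FWHM values (um) for a few FOV sample points
red   = [2.0, 2.2, 3.1, 2.8]
green = [1.8, 1.9, 2.5, 2.4]
blue  = [2.5, 2.9, 3.8, 3.3]
print(lens_quality(red, green, blue))
```

    Note how a single bad sample dominates: `fourth_power_mean([1.0, 3.0])` is about 2.53, well above the plain average of 2.0.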

    The automatic tilt correction procedure worked well in most cases, but it depended on the characteristics of the particular lens type and sometimes failed even for known lenses because of individual variations between lens samples. Luckily this was not a production problem, as it happened only for lenses that differed significantly from the average, and those failed the quality test anyway.

    Measuring more lens parameters

    To improve the robustness of the automatic lens tilt/distance adjustment for different lenses, and for comparing lenses – actual ones, not just theoretical Zemax or OSLO simulation plots – we needed more processing of the raw PSF data. While building cameras and evaluating different lenses we noticed that it is not so easy to find real lens data. Very few of the small format lens manufacturers post calculated (usually Zemax) graphs for their products online; some others provide them by request, but I've never seen measured performance data for such lenses so far. So far we have measured a small number of lenses – just to make sure the software works (the results are posted below) – and we plan to test more of the lenses we have and post the results, hoping they can be useful for others too.

    The data we planned to extract from the raw PSF measurements includes the Petzval curvature of the image surface including astigmatism (the difference between the sagittal and tangential surfaces) and resolution (also sagittal and tangential) as a function of the image radius for each of the 3 color components, measured at different distances from the lens (to illustrate the optimal sensor position). Resolution is measured as spot size (FWHM); on the final plots it is expressed as MTF50 in lp/mm – the relation MTF50 ≈ 2·ln2 / (π·PSF_FWHM) is valid for Gaussian spots, so for real ones it is only an approximation. Reported results are not purely lens properties as they depend on the spectral characteristics of the sensor, but on the other hand, most lens users attach them to some color sensor with the same or similar spectral characteristics of the RGB micro-filter array as we used for this testing.
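    For a Gaussian spot the FWHM-to-MTF50 conversion is a one-liner; a small sketch, with the FWHM given in microns:

```python
# MTF50 (lp/mm) from a Gaussian PSF FWHM: MTF50 = 2*ln2 / (pi * FWHM).
import math

def mtf50_lp_per_mm(fwhm_um):
    """Approximate MTF50 in line pairs per mm for a Gaussian spot."""
    fwhm_mm = fwhm_um / 1000.0
    return 2 * math.log(2) / (math.pi * fwhm_mm)

# e.g. a 3.5 um spot corresponds to roughly 126 lp/mm
print(mtf50_lp_per_mm(3.5))
```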

    Consolidating PSF measurements

    We planned to express the PSF size dependence (individually for 2 directions and 3 color channels) on the distance from the sensor as some function determined by several parameters, allow these parameters to vary with the radius (distance from the lens axis to the image point) and then use the Levenberg-Marquardt algorithm (LMA) to find the values of the parameters. A reasonable model for such a function would be a hyperbola:

    (1) f(z) = sqrt((a·(z-z0))² + r0²)

    where z0 stands for the "best" focal distance for that sample point/component, a defines the asymptotes (it is related to the lens numerical aperture) and r0 defines the minimal spot size. To match the shift and asymmetry of the measured curves two extra terms were added:

    (2) f(z) = sqrt((a·(z-z0))² + (r0-s)²) + s + t·a·(z-z0)

    The new parameter s adjusts the asymptote crossing point above zero and t "tilts" the function together with the asymptotes. To make the parameters less dependent on each other the whole function was shifted in both directions so that varying the tilt t does not change the position and value of the minimum:

    (3) f(z) = sqrt((a·(z-z0-zcorr))² + (r0-s)²) + s + t·a·(z-z0-zcorr) - fcorr

    where (solved by equating the first derivative to zero, df/dz = 0):

    (4) zcorr = (r0-s)·t / (a·sqrt(1-t²))

    (5) fcorr = sqrt((a·zcorr)² + (r0-s)²) - t·a·zcorr - (r0-s)

    Finally, I used the logarithms of a, r0 and s, and the arctangent of t, as the fitting variables to avoid obtaining invalid parameter values from LMA steps started far from the optimum, and so to increase the overall stability of the fitting process.

    There are five parameters describing each sample location/direction/color spot-size function of the axial distance of the image plane. Assuming a radial model (parameters should depend only on the distance from the lens axis) and using a polynomial dependence of each parameter on the radius, that resulted in some 10-20 parameters per direction/color channel. Here is the source code link to the function that calculates the values and partial derivatives for the LMA implementation.
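    A minimal Python sketch of the model function (3) with the shifts (4) and (5) (the real implementation, including the partial derivatives needed by the LMA, is in the linked source):

```python
import math

def spot_size(z, a, z0, r0, s, t):
    """Asymmetric hyperbolic model (3) of PSF FWHM vs. image plane position.

    The shifts zcorr (4) and fcorr (5) keep the minimum at z = z0 with
    value r0 regardless of the tilt parameter t."""
    zcorr = (r0 - s) * t / (a * math.sqrt(1 - t * t))                 # (4)
    fcorr = math.hypot(a * zcorr, r0 - s) - t * a * zcorr - (r0 - s)  # (5)
    u = z - z0 - zcorr
    return math.hypot(a * u, r0 - s) + s + t * a * u - fcorr          # (3)

# The minimum stays at z0 with value r0 for any tilt:
print(round(spot_size(0.0, a=2.0, z0=0.0, r0=1.0, s=0.3, t=0.4), 6))  # -> 1.0
```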

    Applying radial model to the measured data

    Fig.4 PSF sample points naming

    Fig.4 PSF sample points naming

    Fig.5 Fitting individual spot size functions to radial aberration model Spreadsheet link

    Fig.5 Fitting individual spot size functions to radial aberration model. Spreadsheet link

    When I implemented the LMA and tried to find the best match for the measured lens data (I was simultaneously adjusting the image plane tilt too), the residual difference was still large. The top two plots in Fig.5 show sagittal and tangential measured and modeled data for eight locations along the center horizontal section of the image plane. Fig.4 explains the sample naming; the linked spreadsheet contains full data for all sample locations and color/direction components. Solid lines show measured data, dashed lines the approximation by the radial model described above.

    The residual fitting errors (especially for some lens samples) were significantly larger than when each sample location was fitted with individual parameters (the two bottom graphs in Fig.5). Even the best image plane tilt determined separately for the sagittal and tangential components produced different results – on one lens the angle between the two planes reached 0.4°. The radial model graphs (especially for Y2X6 and Y2X7) show that the sagittal and tangential components are "pulling" the result in opposite directions. It became obvious that actual lenses cannot be fully characterized in terms of just the radial model used for simulation of the designed lenses; deviations from the symmetrical radial model have to be accounted for too.

    Adjustment of the model parameters to accommodate per-location variations

    I modified the initial fitting program to allow individual (per sample location) adjustment of the parameter values, adding the cost of a correction's variation from zero and/or from the correction values of the same parameter at the neighboring sites. The sum of the squares of the corrections (with balanced weights) was added to the sum of the squares of the differences between the measured PSF sizes and the modeled ones. This procedure requires that small parameter variations result in small changes of the function values, which was achieved by the modification of the modeling function formula as described above.
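    A schematic illustration of such a regularized cost function (names and weights here are illustrative, not taken from the actual implementation):

```python
# Regularized fitting cost: squared residuals between measured and
# modeled PSF sizes, plus the squared per-location parameter
# corrections, each term with its own weight. A neighbor-difference
# term could be added the same way. Weights are illustrative.

def cost(measured, modeled, corrections, w_fit=1.0, w_reg=0.1):
    fit_term = sum((m - f) ** 2 for m, f in zip(measured, modeled))
    reg_term = sum(c ** 2 for c in corrections)
    return w_fit * fit_term + w_reg * reg_term

print(cost([1.0, 2.0], [1.1, 1.9], [0.05, -0.02]))
```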

    Lenses tested

    The new program was tested with 7 lens samples – 5 of them were used to evaluate individual variations of the same lens model, and the other two were different lenses. Each result image includes four graphs:

    • Top-left graph shows the weighted average resolution for each individual color and the combination of all three. The weighted average here processes the fourth power of the spot size at each of the 40 locations in both (sagittal and tangential) directions, so the largest (worst) values have the highest influence on the result. This graph uses the individually fitted spot size functions.
    • Bottom-left graph shows the Petzval curvature for each of the 6 components (2 directions of 3 colors). Dashed lines show sagittal and solid lines tangential data for the radial model parameters; data point marks show the individually adjusted parameters, so the same radius but a different direction from the lens center results in different values.
    • Top-right graph shows the resolution variation over radius for the plane at the "best" (providing the highest composite resolution) distance from the lens, with lines showing radial model data and marks – individual samples.
    • Bottom-right graph shows a family of the resolution functions for the -10μm (closest to the lens), -5μm, 0μm, +5μm and +10μm positions of the image plane.
    Linked spreadsheet files contain more graphs and source data for each lens.

    Evetar N125B04518W

    Evetar N125B04518W is our "workhorse" lens used in Eyesis cameras: a 1/2.5″ format lens, focal length 4.5mm, F#=1.8. It is a popular product, and many distributors sell this lens under their own brand names. One of the reasons we are looking for a custom lens design is that while this lens has "W" in the model name suffix, meaning "white" (as opposed to "IR" for infrared), it is designed as a "one size fits all" product and the only difference is the addition of an IR cutoff filter at the lens output. This causes two problems for our application – reduced performance in the blue channel (and high longitudinal chromatic aberration for this color) and extra spherical aberration caused by the plane-parallel plate of the IR cutoff filter substrate. To mitigate the second problem we use non-standard, very thin (just 0.3mm) filters.

    Below are the test results for 5 randomly selected samples from a batch of lenses with different performance.

    Fig.6 Evetar N125B04518W sample #0294 test results. Spreadsheet link.

    Fig.7 Evetar N125B04518W sample #0274 test results. Spreadsheet link.

    Fig.8 Evetar N125B04518W sample #0286 test results. Spreadsheet link.

    Fig.9 Evetar N125B04518W sample #0301 test results. Spreadsheet link.

    Fig.10 Evetar N125B04518W sample #0312 test results. Spreadsheet link.

    Evetar N125B04530W

    High resolution 1/2.5″ f=4.5mm, F#=3.0 lens
    Fig.11 Evetar N125B04530W sample #9101 test results. Spreadsheet link.

    Sunex DSL945D

    Sunex DSL945D is a compact 1/2.3″ format f=5.5mm F#=2.5 lens. The datasheet says it is “designed for cameras using 10MP pixel imagers”. The sample we tested has very high center resolution, excellent image plane flatness and low chromatic aberrations. Unfortunately, off-center resolution degrades rather quickly with radius.

    Fig.12 Sunex DSL945D sample #1020 test results. Spreadsheet link.

    Sunex DSL355A-650-F2.8

    Sunex DSL355A – 1/2.5″ format f=4.2mm F#=2.8 hybrid lens.

    Fig.13 Sunex DSL355A sample #9063 test results. Spreadsheet link.

    Software used

    This project used the Elphel plugin for the popular open source image processing program ImageJ, with new classes implementing the processing described here. The results were saved as text data tables and imported into the free software LibreOffice Calc spreadsheet program to create the visualization graphs. Finally, the free software GIMP program was used to combine the graphs and create the animation of Fig.1.

    by andrey at July 26, 2014 10:37 PM

    July 25, 2014


    OPA627, genuine one this time : weekend die-shot

    Last time we decapped two fake OPA627s from ebay: one was a remarked AD744 part, the other an unidentified remarked BB part.

    Recently a reader sent us one more OPA627 from ebay. This chip appeared to be genuine.

    Die size 2940x2005 µm.

    July 25, 2014 05:02 PM

    July 24, 2014


    Optimization Intermediate Results


        Running OSLO’s optimization has shown that having a single operand defined is probably not enough. During the optimization run the program computes the derivative matrix for the operands and solves the least squares normal equations. The iterations are repeated with various values of the damping factor in order to determine its optimal value. So, extra operands were added to split the initial error function: each new operand’s value is a contribution to the spot size (blurring), calculated for each color, aberration and certain image heights. See Fig.1 for formulas.
    Fig.1 Extra Operands

    FieldCurvature(), LateralColor(), LongSpherical() and Coma() functions are defined in a ccl script found here – they use OSLO’s built-in functions to get the data.
    FY – fractional (in OSLO) pupil coordinate: 0 in the center, 1.0 at the edge (at the aperture stop)
    FBY – fractional (in OSLO) image height (at the image plane)
    NA – numeric aperture

    Field Curvature (1)

        3 reference wavelengths, 7 image plane points (including the center) and sagittal & tangential components make up 42 operands total, affecting field curve shapes and astigmatism. To get the contribution to the spot size one needs to multiply the value by the Numerical Aperture (NA). NA is taken as constant over the full field.

    Lateral Color (2)

        The pixels are sensitive to 3 bands – 510-560, 420-480 and 585-655 nm. The contribution to the spot size is calculated for each band and 6 image plane points – there is neither a central point nor a tangential component – 18 operands total.

    Longitudinal Spherical (3)

        The spot size contribution is calculated for the 3 reference wavelengths and 7 points at the aperture stop (including center). The tangential and sagittal components are equal, thus there are 42 operands.

    Coma (4)

    It doesn’t have a huge impact on the optimization, but it was still added for some control. The operands are calculated for 3 wavelengths and 6 image plane points, adding 18 extra operands.
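The operand counts above can be tallied as a quick sanity check (plain arithmetic, not OSLO script code):

```python
# Operand counts from the four error function contributions described above
field_curvature = 3 * 7 * 2  # 3 wavelengths x 7 image points x (sagittal, tangential)
lateral_color = 3 * 6        # 3 bands x 6 off-center image points, sagittal only
long_spherical = 3 * 7 * 2   # 3 wavelengths x 7 aperture points x 2 equal components
coma = 3 * 6                 # 3 wavelengths x 6 image points

total = field_curvature + lateral_color + long_spherical + coma
print(total)  # 42 + 18 + 42 + 18 = 120 operands besides the composite one
```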


    See Fig.2-5. All of the curvatures and thicknesses were set as variables, except for the field flattener and the sensor’s cover glass. OSLO’s default optimization method was used – Damped Least Squares (DLS).
    Parameter | Comments
    Field Curvature | decreased from 20μm to 5μm over the field
    Astigmatism | max(T-S) decreased from ~15μm to ~2.5μm
    Chromatic Focal Shift | almost no changes
    Lateral Color | almost no changes
    Longitudinal Spherical | improved in the middle, worse at the edge
    Resolution | slightly improved
    Varying the glasses was also tried, but it did not lead to anything good – it tends to make the front surface extremely thin.


    This might be the best(?) that can be achieved with the current curvatures-thicknesses (and glasses) configuration. Spherical aberration seems to contribute the most at the current f/1.8. What would be the next step?
    1. It’s always possible to go down to f/2.0-f/2.5, but we would like to keep the aperture as wide as possible.
    2. Add extra elements(s)?
      • Where? Make changes closer to the surfaces that affect spherical aberration the most?
    3. Add extra achromatic doublet(s)?
      • Where? Make changes closer to the surfaces that affect spherical aberration the most?
    4. Introduce aspheric surface(s)?
      • Plastic or glass? Some guidelines suggest placing glass close to the aperture stop and plastic away from it. At the same time, “a surface close to the aperture stop tends to affect or benefit spherical aberration, while surfaces located further from the stop can help minimize some or all of the off-axis aberrations such as coma and astigmatism”:
        • Glass
          • Where? Make changes to the surfaces that affect spherical aberration the most?
          • One of the surfaces of the achromatic doublet?
        • Plastic
          • Where? Place a plano-aspheric element (flat front, aspheric back) at locations where rays are (almost) parallel? The thermal expansion might not affect the performance very much.
          • Plano-aspheric element in the front of the lens?
          • Aspheric surface on the achromatic doublet?
          • As thin as possible? How thin can it be?
          • Make the element after the doublet plano-aspheric?
    Other questions:
    1. Are there glass-plastic (glass-polymer? hybrid?) aspheric achromatic doublets available?
    2. Is it possible to glue a thin plastic aspherics on a glass element (like a contact lens)?


    Fig.2 Before

    Fig.3 After

    Fig.4 Before. MTF(green)

    Fig.5 After. MTF(green)

    by Oleg Dzhimiev at July 24, 2014 12:52 AM

    July 11, 2014


    Milandr 1986VE21 : weekend die-shot

    Milandr 1986VE21 is a microcontroller for 3-phase electricity meters – a rare example of a purely civilian Russian microchip that was not funded by any government agency.

    July 11, 2014 12:05 AM

    July 08, 2014

    Richard Hughes, ColorHug

    Important AppData milestone

    Today we reached an important milestone. Over 25% of applications in Fedora now ship AppData files. The actual numbers look like this:

    • Applications with descriptions: 262/1037 (25.3%)
    • Applications with keywords: 112/1037 (10.8%)
    • Applications with screenshots: 235/1037 (22.7%)
    • Applications in GNOME with AppData: 91/134 (67.9%)
    • Applications in KDE with AppData: 5/67 (7.5%)
    • Applications in XFCE with AppData: 2/20 (10.0%)
    • Application addons with MetaInfo: 30

    We’ve gone up a couple of percentage points in the last few weeks, mostly with the help of Ryan Lerch, who’s actually been writing AppData files and taking screenshots for upstream projects. He’s been concentrating on the developer tools for the last week or so, as this is one of the key groups of people we’re targeting for Fedora 21.

    One of the things that AppData files allow us to do is be smarter about suggesting “Picks” on the overview page. For 3.10 and 3.12 we had a fairly short static list that we chose from at random. For 3.14 we’ve got a new algorithm that tries to find software similar to the apps you already have installed, and also suggests those. So if I have Anjuta and Devhelp installed, it might suggest D-Feet or Glade.

    by hughsie at July 08, 2014 10:42 AM

    July 05, 2014

    July 02, 2014

    Richard Hughes, ColorHug

    Blurry Screenshots in GNOME Software?

    Are you a pixel perfect kind of maintainer? Frustrated by slight blurriness in screenshots when using GNOME Software?

    If you have one screenshot, capture a PNG of size 752×423. If you have more than one screenshot use a size of 624×351.

    If you use any other 16:9 aspect ratio resolution, we’ll scale your screenshot when we display it. If you use some crazy non-16:9 aspect ratio, we’ll add padding and possibly scale it as well, which is going to look pretty bad. That said, any screenshot is better than no screenshot, so please don’t start removing <screenshot> tags.
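The scale-or-pad behaviour described above can be sketched as follows; the 16:9 target size is from this post, while the function itself is an illustration, not the actual GNOME Software code:

```python
from fractions import Fraction

TARGET = (752, 423)  # recommended size for a single screenshot (it is exactly 16:9)

def fit_screenshot(width, height, target=TARGET):
    """Return (scaled, padded) flags for a screenshot of the given size."""
    is_16_9 = Fraction(width, height) == Fraction(16, 9)
    scaled = (width, height) != target  # anything but the exact size gets scaled
    padded = not is_16_9                # non-16:9 images get padding as well
    return scaled, padded

print(fit_screenshot(752, 423))    # (False, False): pixel perfect
print(fit_screenshot(1920, 1080))  # (True, False): 16:9, scaled only
print(fit_screenshot(1024, 768))   # (True, True): 4:3, padded and scaled
```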

    by hughsie at July 02, 2014 08:28 PM


    Defining Error Function for Optical Design optimization (in OSLO)


    The Error Function calculates the 4th root of the average of the 4th powers of the spot sizes over several angles of the field of view.


    Fig.1 Pixel’s quantum efficiency

    Fig.2 Example of pixel’s sensitivity range

    The function takes into account:
    • Pixels’ sensitivity to a band rather than a single wavelength (Fig.1). It negatively affects the sagittal component of the Point Spread Function (PSF).

    • One of the goals is uniform angular resolution, so the corresponding coefficients are applied to the sagittal component. The angular resolution increases with the field angle, and degrades as the amount of negative distortion grows with the field angle


    Fig.3 Formulas 1-5

    • If the PSF shape is approximated with a Gaussian function (Fig.2) (in OSLO the actual PSF shape data can be extracted, but the approximation is sufficient here), then the sagittal PSF for a range of wavelengths will be a Gaussian function as well, with its Full Width at Half Maximum (FWHM) calculated using (5) (Fig.3). FWHM is the spot size.

    • With the frequency known at which the Modulation Transfer Function (MTF) reaches the 1/2 level, the FWHM for a single wavelength is calculated with (1)-(4) (Fig.3)

    • The final Error Function is shown in (6) (Fig.4). Its value is set as a user-defined operand for minimization (note: the value does not tend to zero).

    • The 4th power is used so that the worst parameters are improved first

    Fig.4 Error Function
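For a Gaussian PSF, the link between the MTF half-level frequency and the FWHM reduces to a single constant; below is a small sketch of that relation (my own rendering of the standard Gaussian identities, not the exact formulas (1)-(4) of Fig.3):

```python
import math

def fwhm_from_mtf50(f50):
    """FWHM of a Gaussian PSF whose MTF falls to 0.5 at spatial frequency f50.

    A Gaussian PSF exp(-x**2 / (2*s**2)) has MTF(f) = exp(-2*pi**2*s**2*f**2);
    solving MTF(f50) = 0.5 together with FWHM = 2*sqrt(2*ln 2)*s gives
    FWHM = 2*ln(2) / (pi * f50).
    """
    return 2.0 * math.log(2.0) / (math.pi * f50)

# An MTF50 of 100 lp/mm corresponds to a spot of about 4.4 um FWHM:
print(round(fwhm_from_mtf50(100) * 1000, 2))  # 4.41 (um, for f50 in lp/mm)
```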


    • Distortion has not been added yet to the script that sets optimization operands
    • Half of the FoV is manually picked at the moment and is 38°
    • Field angles are picked to split the circular area of the image plane into the rings (circle in the center) of equal area
    • N=6
    i αi, rad cos(αi)
    1 0.0000 1.0000
    2 0.2513 0.9686
    3 0.3554 0.9375
    4 0.4353 0.9067
    5 0.5027 0.8763
    6(N) 0.5620 0.8462
    Pixel's filter color λpeak,nm range,nm
    green 530 510-560
    red 600 585-655
    blue 450 420-480
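The αi values in the table above follow an equal-area rule: since the enclosed image area grows as the square of the radius, sample radii scaling as the square root of the index split the field into equal-area zones. A short sketch (treating the angle as proportional to the image radius, which is a simplifying assumption) reproduces the column:

```python
import math

def equal_area_angles(n, alpha_max):
    """Field angles sampling a circular image field in equal-area steps.

    Enclosed area grows as radius squared, so sample radii scale as
    sqrt(i/(n-1)); the angle is treated as proportional to the radius.
    """
    return [alpha_max * math.sqrt(i / (n - 1)) for i in range(n)]

angles = equal_area_angles(6, 0.5620)
print([round(a, 4) for a in angles])  # matches the alpha_i column of the table
```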


    1. The script to set user-defined custom operands before running optimization in OSLO: set_elphel_operands.ccl

    by Oleg Dzhimiev at July 02, 2014 02:05 AM

    June 30, 2014


    Open Hardware Lens for Eyesis4π camera



    Elphel has embarked on a new project, somewhat different from our main field of designing digital cameras, but closely related to camera applications and aimed at further improving the image quality of the Eyesis4π camera. Eyesis4π is a high resolution full-sphere panoramic and stereophotogrammetric camera. It is a tiled multi-sensor system with a single sensor format of 1/2.5″. The specific requirement of such a system is uniform angular resolution, since there is no center in a panoramic image.

    Current lens

    Fig.1. Eyesis4π modules layout

    Lens selection for the camera was dictated by the small form factor among other parameters, and after testing a dozen different lenses we selected the N125B04518IR by Evetar for the Eyesis4π panoramic camera. It is an M12 mount (also called board) lens, EFL=4.5mm, F/1.8, with the same 1/2.5″ format as the camera’s sensor. This lens type is perfected by volume production and wide use in security and machine vision applications, which contributed to its high performance at a relatively low price. At the same time the price-quality balance for board lenses has mostly shifted towards the lower price, and while these lenses provide good quality in the center of the image, the resolution in the corners is lower and aberrations are worse. Each lens of the same model is slightly different from another: its overall resolution, resolution in the corners, and aberrations vary, so we have developed a more or less universal method to measure the optical parameters of the sensor-lens module that allows us to select the best lenses from a received batch. This helped us formulate quantitative parameters to compare lens performance for our application. We have also researched other options. For example, there are compact lenses for smaller formats (used in smartphones), but most, if not all of them are designed to be integrated with the device. On the consumer camera side, better lenses are mostly designed for formats of at least 3/4″. The C-mount lenses we use with other Elphel camera models are too large for the Eyesis4π panoramic camera sensor-lens module layout.

    Lens with high resolution over the Full Field of View

    In panoramic applications and the other multi-sensor tiled cameras we are designing, the center can be set anywhere, and none of the board lenses (or other lenses) we have tested could provide the desired uniform angular resolution. Thus there is a strong interest in having a lens designed in response to panoramic application requirements. Our first approach was to order a custom design from lens manufacturers, but it proved rather difficult to specify the lens parameters based on the standard specifications list we were offered to fill out. The following table describes basic parameters for the initial lens design:
    Parameter Description
    Mount S-mount (M12x0.5)
    Size compact (fit in the barrel of the current lens)
    Format 1/2.5"
    Field of View V: 51°, H: 65°, D: 77°
    F# f/1.8
    EFL 4-4.5 mm (maybe 4.8)
    Distortion barrel type
    Field Curvature undercorrected (a field flattener will be used)
    Aberrations as low as possible
    The designed lens will be subjected to tests similar to the ones we use in actual camera calibration, before it is manufactured. This way we can simulate the virtual optical design and make corrections based on its performance, to ensure that the designed lens satisfies our requirements before we even have a prototype. To be able to do that, we realized that we need to be involved in the lens design process much more than just providing the manufacturer our list of specifications. Not having an optical engineer on board (although Andrey had majored in Optics at the Moscow Institute for Physics and Technology, he worked only with laser components and has no actual experience of lens design), we decided to get professional help from Optics For Hire with the initial lens design, while getting familiar with optical design software (OSLO 6.6) – trying to create an error (merit) function that formalizes our requirements. In short, the goal is to minimize the RMS of the squared spot sizes (averaging the 4th power) over the full field of view, taking into account the pixels’ spectral range. Right now we are trying to implement custom operands for minimization using the OSLO software.

    Feedback is welcome


    Fig.2 Online demo snapshot

    As always with Elphel developments, the lens design will be published under the CERN Open Hardware License v1.2 and available on github – some early files are already there. We would like to invite feedback from people who are experienced in optical design to help us find new solutions and avoid mistakes. To make it easier to participate in our efforts we are working on an online demonstration page that helps to visualize optical designs created in Zemax and OSLO. Once the lens design is finished it will be measured using the Elphel set-up and software, and the measurement results will also be published. Other developers can use this project to create derivative designs optimized for other applications, and lens manufacturers can produce this lens as is, according to the freedoms of CERN OHL.


    1. Eyesis4π
    2. Lens measurement and correction technique
    3. Optical Design Viewer: online, github
    4. Optics For Hire company – Optical Design Consultants for Custom Lens Design
    5. Initial optical design files

    by Oleg Dzhimiev at June 30, 2014 09:19 PM

    June 27, 2014

    Bunnie Studios

    Name that Ware June 2014

    The Ware for June 2014 is shown below.

    This is a reader-submitted ware, but the submitter requested to remain anonymous. Thanks, though, you know who you are!

    by bunnie at June 27, 2014 11:42 AM

    Winner, Name that Ware May 2014

    The Ware for May 2014 was a “screamer tag” from Checkpoint Systems. The board bears the silkscreen markings “SC-TG001 Ver05″, and was made by the Kojin Company for use in Japan. Presumably, this tag is part of an anti-theft system that activates an alarm when connectivity between a pair of contacts, visible in the photo, is broken; or if a signal is received (or lost) via RF.

    A lot of good and very close guesses this time, but I’ll have to hand the prize to Hugo for naming the function quite explicitly. Congrats, email me for your prize!

    by bunnie at June 27, 2014 11:42 AM

    June 25, 2014

    Altus Metrum

    keithp's rocket blog: AltOS 1.4.1

    AltOS 1.4.1 — Fix ups for 1.4

    Bdale and I are pleased to announce the release of AltOS version 1.4.1.

    AltOS is the core of the software for all of the Altus Metrum products. It consists of firmware for our cc1111, STM32L151, LPC11U14 and ATtiny85 based electronics and Java-based ground station software.

    This is a minor release of AltOS, incorporating fixes for a small handful of build and install issues. No new features have been added, and the only firmware change was to make sure that updated TeleMetrum v2.0 firmware is included in this release.

    AltOS — TeleMetrum v2.0 firmware included

    AltOS version 1.4 shipped without updated firmware for TeleMetrum v2.0. There are a couple of useful new features and bug fixes in that version, so if you have a TeleMetrum v2.0 board with older firmware, you should download this release and update it.

    AltosUI and TeleGPS — Signed Windows Drivers, faster maps downloading

    We finally figured out how to get our Windows drivers signed, making it easier for Windows 7 and 8 users to install our software and use our devices.

    Also for Windows users, we've fixed the Java version detection so that if you have Java 8 already installed, AltOS and TeleGPS won't try to download Java 7 and install that. We also fixed the Java download path so that if you have no Java installed, we'll download a working version of Java 6 instead of using an invalid Java 7 download URL.

    Finally, for everyone, we fixed maps downloading to use the authorized Google API key method for getting map tiles. This makes map downloading faster and more reliable.

    Thanks for flying with Altus Metrum!

    June 25, 2014 05:35 AM

    June 21, 2014


    DDR3 Memory Interface on Xilinx Zynq SOC – Free Software Compatible

    An external memory controller is an important part of many FPGA-centered designs, and this is true for Elphel cameras too. When I was working on the board design for NC393 I tried to verify the interface pinout using the code output from the MIG (Memory Interface Generator) module. I was planning to use the MIG code as a reference design and customize it for application in the camera, adding more functionality to our previous designs. The memory interface is a rather intimate part of the design where the FPGA approach can shine in all its glory – advance knowledge of the types of needed memory transactions (in contrast with general CPU system memory) helps to increase performance by planning bank and address sequences, crafting the memory mapping to utilize close to 100% of the bus bandwidth.

    Fig. 1. DDR3 memory controller block diagram, source code at

    Why new DDR3 controller when Xilinx provides MIG?

    That was my original plan, but the MIG code uses 6 undocumented modules (PHASER_*, PHY_CONTROL) and four more (ISERDESE2, OSERDESE2, IN_FIFO and OUT_FIFO) that are only partially documented, and the source code of the simulation modules is not available to Xilinx users. This means that MIG as it is currently provided by Xilinx does not satisfy our requirements. It would prevent our customers from simulating Elphel code with Free Software tools, and it also would not allow us to develop efficient code ourselves. Developing HDL code and troubleshooting complex cases through simulation is a rather challenging task already; guessing what is going on inside the “black boxes” without the possibility to at least add some debug output there would be a nightmare. Why does the signal differ from what I expected – is it one of my stupid assumptions that is wrong in this case? Did I understand the documentation incorrectly? Or is there just a bug in that secret no-source-code module? I browsed the Internet support forums and found that yes, there are in fact cases where users have questions about the simulation of the encrypted modules, but I could not find clear answers to them. And it is understandable – it is usually difficult to help with a design made by somebody else, especially when that encrypted black box is connected to customer code that differs from what the black box developers had in mind themselves.

    Does that mean that Zynq SOC is completely useless for Elphel projects?

    An efficient connection to dedicated (not shared with the CPU) high performance memory is a strict requirement for Elphel products, and Xilinx FPGAs were always very instrumental in achieving this goal. Through more than a decade of developing cameras based on Xilinx programmable logic, our cameras used SDR, then DDR and later DDR2 memory devices. After discovering that while advancing silicon technology Xilinx made a step back in the quality of documentation and simulation support, I analyzed the set of still usable modules and features of this new device to see if they alone are sufficient for our requirements. The most important are the serializer, deserializer and programmable delay elements (in both input and output directions) on each I/O pin connected to the memory device, and Xilinx Zynq does provide them. The OSERDESE2 and ISERDESE2 (serializer and deserializer modules in Xilinx Zynq) can not be simulated with Free Software tools directly as they depend on encrypted code, but their functionality (without the undocumented MEMORY_DDR3 mode) matches that of Xilinx Virtex 6 devices. So with simple wrapper modules that switch between the *SERDESE2 for synthesis with Xilinx tools and the *SERDESE1 for simulation with the Icarus Verilog simulator, that problem was solved. The input/output delay modules have their HDL source available and did not cause any simulation problems, so the minimal requirements were met and the project goals seemed achievable.

    DDR3 memory interface requirements

    Looking at the Xilinx MIG implementation I compared it with our requirements, and I got the impression it tried to be the single universal solution for every possible application. I do not agree with such an approach, which contradicts the very essence of FPGA solutions – the possibility to generate “hardware” that best suits the custom application. Some universal high-level hard modules enhance the bare FPGA fabric – such elements as RAM blocks, DSP, CPU. These units, being specialized, lost some of their flexibility (compared to arbitrary HDL code) but became adopted by the industry and users, as they offer high performance while maintaining reasonable universality – the same modules can be reused in numerous applications developed by users. The lack of any possibility to modify hard modules beyond the provided configurable options comes as an understandable price for performance – these limitations are imposed by the nature of the technology, not by the bad (or good – trying to keep inexperienced developers away from the dangers of unrestricted FPGA design) will of the vendors. Below is a table that compares the requirements (and acceptable limitations) of our DDR3 memory interface with the Xilinx MIG solution.

    Feature comparison table

    Feature | MIG | eddr3 | Notes
    Usable banks | HP, HR | HP only | HR I/O do not support output delays and limit DCI
    Data width | any | 16 bits | Data width can be manually modified
    Multi-rank support | yes | no | Not required for most applications
    FBG484 single bank | no | yes | MIG does not allow 256Mx16 memory to use one bank in the FBG484 package
    Access type | any | block oriented | Overlapping between accesses may be disregarded
    R/W activity | on-the-fly | pre-calculated | Bank mapping and access sequences pre-calculated in advance
    Initialization, leveling | hardware | software | Infrequent procedures implemented in software
    Undocumented features | yes | no | Difficult to debug the code
    Encrypted modules | yes | no | Impossible to simulate with Free Software tools, difficult to debug
    License | proprietary | GNU GPLv3.0+ | Proprietary license complicates distribution of derivative code

    Usable I/O banks

    Accepting HR or “high (voltage) range” banks for memory interfacing led MIG to sacrifice the ODELAYE2 blocks that are available in HP (“high performance”) banks only. We did not have this limitation, as the DDR3 chip was already connected to an HP bank. I believe it is true for other designs too – it makes sense to follow the bank specialization and use memory with HP banks, reserving HR for other applications (like I/O) where the higher voltage range is actually needed.

    Block accesses only

    Another consideration is that with the abundance of 32Kb block memory resources in the FPGA and the parallel processing nature of programmable logic, small memory accesses are not likely; many applications do not need to bother with reduced burst sizes, data byte masking or even back-to-back reads and writes. In our applications we use 1/4 of the BRAM size transfers in most cases (the 1/4 comes from having a 4-page buffer at each channel to implement simple 2-level prioritizing between multiple channels). Block access does not have to be limited to memory pages – it can be any large predefined sequence of data transfers.

    Hardware vs software implementation of infrequent actions

    A MIG feature that I think leads to unneeded complication: everything is done in “hardware”, even write leveling and temperature compensation from the on-chip temperature sensor. I was once impressed by the circuit diagram of the Apple ][ computer, and learned a lesson that you do not need to waste special hardware resources on what can easily be done in software without a significant sacrifice of performance – especially in the case of a SOC like Zynq where a high-performance dual-core processor is available. Algorithms that need to run once at start-up and very infrequently during operation (temperature correction) can easily be implemented in software. The memory controller implemented in the PL is initialized when the system is fully loaded, so initialization and training can be performed when the full software is available; it is not like the system memory that has to be operational from the early boot stage.

    Computation of the access sequences in advance

    When dealing with multi-channel block access (blocks do not need to be the same size and shape) in the camera, it is acceptable to have extra latency comparable to the block read/write time. That allowed me to simplify the design (and make it more flexible at the same time) by splitting generation and execution of the block access sequences into two separate processes. The physical interface sequencer reads the commands, memory addresses and control signals (as well as channel buffer read/write enables) from the block memory; the sequence data is prepared in advance from 2 sources: custom PL circuitry that calculates the next block access sequence, and data loaded directly by the software over the AXI channel (refresh, calibrate ZQ, write leveling and other delay measurement/adjustment sequences).

    No multi-rank

    Another simplification – I did not plan to use multi-rank systems; supplementing the FPGA with just one (or several, but just to increase data width/bandwidth, not depth/capacity) high performance memory chip is the most common configuration. The internal data paths of the programmable logic have so much higher bandwidth than the connection to an external memory that when several memory chips are used, they are usually connected to achieve the highest possible bandwidth. Of course, these considerations are usually, but not always, valid. And FPGAs are very good for creating custom solutions for particular cases, not just "one size fits all".

    DDR3 Interface Implementation

    Fig. 1 shows a simplified block diagram of the eddr3 project module. It uses just one bank (HP34) for interfacing 512M x 16 DDR3 memory, with a pinout following Xilinx recommendations for MIG. There are two identical byte lanes, each having 8 bidirectional data signals running in DDR mode (DQ[0]..DQ[7] and DQ[8]..DQ[15] – only two bits per lane are shown on the diagram) and one bidirectional differential DQS. There is also a data mask (DM) signal in each byte lane – it is similar to DQ without the input signal, and while it is supported at the physical level of the interface, it is not currently used at a higher level of the controller. There is also a differential driver for the memory clock input (CLK,~CLK) and the address/command signals that are output only and run in SDR mode at the clock rate.

    I/O ports

    Data bit I/O buffers (IOBUF_DCIEN modules) are directly connected to the I/O pads; they produce read data outputs feeding the IDELAYE2 modules, have data inputs for the write data coming from the ODELAYE2 modules, and have tristate control and DCI enable inputs. There is only one output delay unit per bit, so tristate control has to come directly from the OSERDESE2 module, but that is OK as it is still possible to meet the memory requirements when controlling tristate at clock half-period granularity, even when switching between read and write commands. In the block-oriented memory access in the camera it is even easier, as there are no back-to-back read to write accesses. DCIEN control is even less timing critical – basically it is just a power reduction feature, so turning it off later and turning it on earlier than needed is acceptable. This signal is controlled with clock period granularity, same as the address/command signals.

    Delay elements

    ODELAYE2 and IDELAYE2 provide 5-bit (31-tap) programmable delays with 78 ps/tap resolution for 200MHz calibration and 52 ps/tap for 300MHz. The device I have on the prototype board has speed grade 1, so I was limited to 200MHz only (the 300MHz option is only available for speed grade 2 or higher devices). From the tools output I noticed that these primitives have a *_FINEDELAY option, and while it is not documented in the Libraries Guide it is in fact available in the unisims library, so I decided to take a risk and try it; the tools happily accepted such code. According to the code, the FINEDELAY option provides an additional stage with five levels of delay with an uncalibrated 10 ps step and just static multiplexer control through the 3 inputs. It would be great if Xilinx added 3 more taps to use all 3 bits of the fine delay value – the delay range of this stage would then cover the full distance between the outputs of the main (31-tap) delay. It is OK if the combined 8-bit (5+3) delay does not provide monotonic results; that can be handled by the software in most cases. With the current hardware the maximal delay of the fine stage only reaches the middle between the main stage taps (4*10 ps ~= 78 ps/2), so it adds just one extra bit of resolution, but even that one bit is very helpful in interfacing DDR3 memory. The actual hardware measurements confirmed that the fine delay stage functions as expected and that there are only 5 steps there. The fine delay stage does not have memory registers to support load/set operations as the main stage does, so I added them with additional HDL code. The fine delay mode applies to all IDELAYE2 and ODELAYE2 blocks shown on the diagram; each 8-bit delay value is individually loaded by software through the MAXIGP0 channel, and an additional write sets all the delays simultaneously.
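The main-plus-fine tap arithmetic above can be pictured with a small helper; the tap pitches (78 ps main at 200MHz calibration, ~10 ps uncalibrated fine, only 5 usable fine levels) are from the text, while the helper itself is an illustrative sketch, not the actual driver code:

```python
MAIN_TAP_PS = 78.0  # per-tap delay of the 5-bit (31-tap) main stage at 200MHz
FINE_TAP_PS = 10.0  # uncalibrated step of the FINEDELAY stage
MAX_MAIN = 31       # 5-bit main stage
MAX_FINE = 4        # only 5 fine levels (0..4) exist in current hardware

def delay_to_taps(target_ps):
    """Split a target delay into (main, fine) tap counts, nearest not above."""
    main = min(int(target_ps // MAIN_TAP_PS), MAX_MAIN)
    fine = min(int((target_ps - main * MAIN_TAP_PS) // FINE_TAP_PS), MAX_FINE)
    return main, fine

def taps_to_ps(main, fine):
    return main * MAIN_TAP_PS + fine * FINE_TAP_PS

main, fine = delay_to_taps(200.0)
print(main, fine, taps_to_ps(main, fine))  # 2 4 196.0
# The fine stage tops out at 40 ps, about half of one 78 ps main tap,
# i.e. roughly one extra bit of resolution, as measured above.
```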

    Source-synchronous clocks

    The received DQS signal in each byte lane goes through an input delay and then drives a BUFR primitive that in turn provides the input clock to all the data bit ISERDESE2 modules in the same byte lane. I tried to use BUFIO for that purpose, but the tools did not agree with me.

    Serializers and deserializers, clocks

    The two other clocks driving ISERDESE2 and OSERDESE2 (they have to be the same for the input and output paths) are generated by the MMCME2_ADV module. One of them runs at the full memory clock rate, the other at half that frequency. The same MMCME2_ADV module generates another half-frequency clock that, through the global buffer BUFG, drives the rest of the controller; registers are inserted in the data paths crossing clock domains to compensate for possible phase variations between BUFG and BUFR. An additional output drives the memory clock input pair; MMCME2_ADV dynamically phase shifts all the other outputs but this one, effectively adding one extra degree of freedom for meeting the write leveling requirements (zero phase shift between clock and DQS outputs). This clock control is implemented in the phy_top.v module.

    I/O delay calibration

    PLLE2_BASE is used to generate the 200MHz clock used by an instance of the IDELAYCTRL primitive to calibrate the input/output delays.

    PHY control sequencer

    The control signals – memory addresses/bank addresses, commands, and read/write enable signals to the channel data buffers – are generated by the sequencer module running at half the memory clock, so the width of data read/written to the data buffers is 64 bits for a 16-bit DDR3 memory bus. Sequencer data is encoded as 32-bit words and is provided by the multiplexed output from the read port of one of two parallel memory blocks. One of these blocks is written by software, the other is calculated in the fabric. The primary application is to read/write block data to/from multiple concurrent channels (for the NC393 camera we plan to use 16 such channels), and with each channel buffer accommodating 4 blocks it is acceptable to have significant latency in the data channels. I decided to calculate the control data separately from accessing the memory, rather than doing it on-the-fly. That simplifies the logic, adds flexibility to optimize sequences, and with software-programmable memory it simplifies evaluation of different accesses without reconfiguring the FPGA fabric. In the current implementation only one non-NOP command can be issued in the sequencer's 2-clock time slot, but which clock to use – first or second – is controlled by a program word bit individually for each slot. Another bit adds a NOP cycle after the current command; this is used for the bulk of the read/write commands in consecutive bursts of 8 accesses. When the sequencer command is a NOP, the address fields are re-used to specify the duration of the pause and the end-of-sequence flag.
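    The exact bit layout of the 32-bit sequencer words is not given in this post, so the following Python sketch uses invented field positions purely to illustrate the per-slot control bits and NOP pause encoding described above.

    ```python
    # Illustrative only: the real eddr3 sequencer word format is not
    # documented here, so all field positions below are assumptions.
    def cmd_word(cmd, addr, bank, second_half=False, nop_after=False):
        """Pack a non-NOP command with its address/bank plus the two
        per-slot control bits described in the text: which of the two
        clocks in the slot to use, and whether to append a NOP cycle."""
        return (cmd & 0xF) | ((bank & 0x7) << 4) | ((addr & 0x7FFF) << 7) \
            | (int(second_half) << 22) | (int(nop_after) << 23)

    def nop_word(pause, last=False):
        """In NOP slots the address field is reused as a pause duration,
        together with an end-of-sequence flag."""
        return ((pause & 0x7FFF) << 7) | (int(last) << 24)
    ```

    Software would write a list of such words into the sequencer memory bank; the fabric-calculated bank would produce equivalent words in hardware.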

    CPU interface, AXI port

    The initial implementation goal was just to test the memory interface, so it has only two (instead of 16) memory access channels – program read and program write data – and only one of the two sequencer memory banks (also programmed by the software); the only asynchronously running channel is the memory refresh channel. All communications are performed over the AXI PS Master GP0 channel, with memory-mapped addresses for the controller configuration, delay and MMCM phase setup, and access to the sequencer and data memory. All internal clocks are derived from a single (currently 50MHz) FCLKCLK[0] clock coming from the PS7 module (the PS-PL bridge); EMIO pins are used for debugging only.

    EDDR3 Performance Evaluation

    The current implementation uses internal Vref, and the Zynq datasheet specifies a maximal clock rate of 400MHz (800 Mb/s) in that mode, so I started evaluation at that frequency. The memory chip connected to the Zynq is a Micron MT41K256M16HA-107:E (the same as the other two used for the system memory), capable of running at 933MHz, so the plan is to increase the operational frequency later; a 400 MHz clock (1600MB/s for x16 memory) is sufficient just to start porting our earlier camera functionality to the Zynq-based NC393. The initial setting for all output and I/O ports is SLEW="SLOW", so inter-symbol interference should reveal itself at lower frequencies during evaluation. The power supply voltage for the HP34 port and the memory device is set to 1.5V; the hardware allows reducing it to 1.35V, so later we plan to evaluate 1.35V performance as well. Performance measurements are implemented as a Python script (it does not look very Pythonic – most of the text was just edited from the Verilog test fixture used for simulation) running on the target system; the results were imported into the LibreOffice Calc spreadsheet program to create eye diagram plots. The Python script directly accesses the memory-mapped AXI PS Master GP0 port to read/write data; no custom kernel space drivers were needed for this project. Both the simulation test fixture and the Python script programmed delay values and controller modes and created sequence data for memory initialization, refresh, write leveling, fixed pattern reading, block write and block read operations. For eye pattern generation one of the delay values was scanned over the available range, a randomly generated 512 byte block of data was written and then read back. The read data was then compared to the written data, and each of the 4096 bits in a block was assigned to a group depending on the previous, current and next bit written to the same DQ signal.
    These groups are shown on the following plots, marked in the legend as binary strings: "001" means that the previously written bit was "0", the current one is also "0" and the next one will be "1". The read data was then averaged in each block for each of the 8 groups, first for each DQ individually and then over all 16 DQ signals. The delays were scanned over 32 values of the main delays and 5 values of the fine delays for each; the relative weight of the fine delays was calculated from the measured data and used in the final plots.
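    The grouping step described above can be sketched in Python roughly like this (a simplified single-DQ version; the real script also averages across the 16 DQ lines and repeats the measurement per delay setting):

    ```python
    from collections import defaultdict

    def group_averages(written, read):
        """Assign each interior bit to the group named by the written
        (previous, current, next) bits and average the read-back values.
        `written` and `read` are equal-length 0/1 sequences for one DQ."""
        sums, counts = defaultdict(float), defaultdict(int)
        for i in range(1, len(written) - 1):
            key = f"{written[i-1]}{written[i]}{written[i+1]}"
            sums[key] += read[i]
            counts[key] += 1
        return {k: sums[k] / counts[k] for k in sums}

    # with an error-free read-back, every group averages its middle bit
    avgs = group_averages([0, 0, 1, 0, 1, 1, 0, 1], [0, 0, 1, 0, 1, 1, 0, 1])
    ```

    When read errors occur near a transition, a group's average drifts away from its middle-bit value, which is exactly what the eye diagram curves plot against delay.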

    Fig. 2. DQ input delay common for all bits, DQS input delay variable

    DQ and DQS input delay selection by reading fixed pattern from memory

    First I selected initial values for the DQ and DQS input delays by reading fixed pattern data from the memory – that mode eliminates any dependence on write operation errors, but does not allow testing with random data, as each bit toggles simultaneously between zero and one. This is a special mode of DDR3 memory devices activated by control bits in the MR3 mode register; reading this pattern does not require activation or any other commands before issuing the READ command.

    Scanning DQS input delay with fixed DQ input delay using randomly generated data

    DQ delays can be scanned over the full period, but the DQS input delay has certain timing dependencies on the pair of output clocks. Fig. 2 illustrates this – the first transition, centered at ~150 ps, is caused by the relative input delays of DQ and DQS. The data strobe latches mostly the previous bit at delays around 0, correctly latches the current bit for delays from 400 to 1150 ps, and then switches to the next bit. At around the same delay of 1300 ps the iclk-to-oclk timing in ISERDESE2 is not satisfied, causing errors not related to DQ-to-DQS timing. The wide transition at 150 ps is caused by a mismatch between individual bit delays; when those individual bits are aligned (Fig. 4) the transition is narrower.
    Fig. 3. Alignment of individual DQ input delays using 90-degree shifted DQS delay


    Aligning individual DQ input delay values

    To align the individual DQ input delays (Fig. 3) I programmed DQS 90 degrees off the eye center of Fig. 2, and found for each bit the delay value that produces a result closest to 50%. The scan covers both the main (32 steps) and fine (5 steps) delays; there are no special requirements on the relative weights of the two, and no need for the combined 8-bit delay to be monotonic. This eye pattern does not have an abnormality similar to the one for the DQS input delay – the resulting plot depends only on the DQ-to-DQS delay, with no additional timing requirements. The transition ranges are wide because the plot averages the results from all individual bits; the alignment process uses the individual bit data.
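    The per-bit selection rule (pick the delay whose averaged read value is closest to 50%) can be sketched like this; `scan_results` stands in for the measured averages and is a hypothetical structure, not the actual script's:

    ```python
    # scan_results: hypothetical mapping from (main_tap, fine_tap) to the
    # averaged read value (0..1) measured for one DQ bit with DQS held
    # 90 degrees off the eye center; pick the point nearest 50%.
    def align_bit(scan_results):
        return min(scan_results, key=lambda d: abs(scan_results[d] - 0.5))

    scan = {(10, 0): 0.05, (11, 2): 0.48, (12, 4): 0.93}
    best = align_bit(scan)
    ```

    Because only the distance to 50% matters, the non-monotonic main+fine combination is harmless here, as the text notes.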
    Fig.4. DQ input delays aligned, DQS input delay variable


    Scanning over DQS input delay with DQ input delays aligned

    After finishing the individual data bit (DQ) input delay alignment I measured the eye pattern for the DQS input delay again. This time the eye opened more, as one of the sources of errors was greatly diminished. Valid data now extends from 100 ps to 1050 ps, and the DQS delay can be set to 575 ps, in the center between the two transitions. At the same time there is more than a 90-degree phase margin from the value where the iclk-to-oclk delay causes errors. Fig. 4 also shows that (at ~1150 ps) there is very little difference between the 010 and 110 patterns, and the same for the 001 and 101 pair. That means the inter-symbol interference is low and the bandwidth of the read data path is high, so the data rate can likely be increased significantly.

    Evaluation of memory WRITE operations

    When data is written to memory, the DDR3 device expects a certain (90-degree shifted) timing relation between the DQS output and the DQ signals. Similar to the read operation, there are additional restrictions on the DQS timing itself. The read DQS timing restrictions were imposed by the ISERDESE2 modules; in the case of write, the DQS timing requirements come from the memory device – DQS should be nominally aligned to the clock at the input pads of the memory device. There is a special mode supported by DDR3 memory devices to facilitate this process – "write leveling" – the only mode in which the memory uses DQS as an input (as in WRITE modes) and drives DQ as outputs (as in READ mode), with the least significant bit in each byte lane signaling the level of the clock signal at the DQS rising edge. By varying the DQS phase and reading this data it is possible to find the proper delay of the DQS output; additionally, the relative memory clock phase is controlled by the programmable delay in the MMCME2_ADV module.
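    A hedged sketch of the write-leveling search described above: scan the DQS output delay and, at each setting, read the per-lane feedback bit; the wanted delay sits at the 0-to-1 transition. `sample_level` is a stand-in for the actual hardware access, not real code from the project.

    ```python
    # sample_level(delay) stands in for programming the DQS output delay and
    # reading back the write-leveling feedback bit (the clock level sampled
    # at the DQS rising edge); this is a sketch, not hardware access code.
    def find_leveling_delay(delays, sample_level):
        prev = None
        for d in delays:
            level = sample_level(d)
            if prev == 0 and level == 1:
                return d  # first delay where DQS samples the clock high
            prev = level
        return None

    # toy model of a lane whose clock goes high at 775 ps
    found = find_leveling_delay(range(0, 1600, 25), lambda d: int(d >= 775))
    ```

    If no transition is found over the whole delay range, the remaining degree of freedom is the memory clock phase shift in MMCME2_ADV mentioned above.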
    Fig. 5. DQ output delay common for all bits, DQS output delay variable


    Scanning over DQS output delay with the individual DQ output delays programmed to the same value

    With the DQ and DQS input delays determined earlier and set to the middle of their respective ranges, it is possible to use random data written to memory for evaluating the eye patterns in WRITE mode. Fig. 5 shows the result of scanning the DQS output delay over the full available range while all the DQ output delays were set to the same value of 1400 ps. The optimal DQS output delay value determined by write leveling was 775 ps. The plot shows only one abnormality, at ~2300 ps, caused by a gross violation of the write leveling timing, but this delay is far from the area of interest, and the results show that it is safe to program the DQS delay 90 degrees off from the final value for the purpose of aligning the DQ delays to each other.
    Fig. 6. Alignment of individual DQ output delays using 90-degree shifted DQS output delay


    Aligning individual DQ output delay values

    The output delays of the individual DQ signals are adjusted similarly to how it was done for the input delays. The DQS output delay was programmed with a 90-degree offset from the required value (1400 ps instead of 775 ps), and each data bit output delay was set to the value that yields a result as close to 50% as possible. This condition is achieved around 1450 ps, as shown in Fig. 6. The 50% level at low delays (<150 ps) on the plot comes from the fact that the bit "history" is followed only 1 bit before the current one, and the range of Fig. 6 is not centered around the current bit – it covers two bits before the current one, one bit before it, and the current bit. As two bits before the current one are not considered, the result is the average of approximately equal probabilities of ones and zeros.
    Fig.7. DQ output delays aligned, DQS output delay variable


    Scanning over DQS output delays with the individual data bits aligned

    With the individual bit output delays aligned, it is possible to re-scan the eye pattern over variable DQS output delays; the results are shown in Fig. 7. Comparing it with Fig. 5 you may see that the improvement is very small: the width of the first transition is virtually the same, and at the second transition (around 1500 ps) the individual curves, while being "sharper", do not match each other (010 does not match 110, and 001 does not match 101). This means there is significant inter-symbol interference (the previous bit value influences the next one). There is no split between the individual curves around the first transition (~200 ps), but that is just because the history is not followed that far and the result averages both variants, increasing the width of the individual curve transitions compared to the 1500 ps area. But we used SLEW="SLOW" for all memory interface outputs in this setup. This is quite adequate at the 400MHz (800Mb/s) clock rate and reduces power consumption, but it will not work when we increase the clock rate in the future. Then SLEW="FAST" will be the only option.

    Software Tools Used

    This project used various software tools for development.
    • Icarus Verilog provided the simulation engine. I used the latest version from the GitHub repository and had to make minor changes to make it work with the project
    • GTKWave for viewing simulation results
    • Xilinx Vivado and Xilinx ISE WebPack Edition for synthesis, place and route and other implementation tasks. In my opinion Xilinx ISE still provides a better explanation of what it does during synthesis than the newer Vivado – for example, why it removed some of the register bits. So I was debugging code with ISE first, then running the Vivado tools for the final bitstream generation
    • Micron Technology DDR3 SDRAM Verilog Model
    • Eclipse IDE (4.3 Kepler) as the development environment to integrate all the other tools
    • Python programming language and PyDev – Python development plugin for Eclipse
    • VDT plugin for Eclipse (documentation) including a modified version of VEditor. This plugin (currently working for Verilog, tested on GNU/Linux and Mac) implements support for the Tool Specification Language (TSL) and enables easy integration of 3rd party tools with support for custom message parsing. I’ll write a separate blog post about this tool; the current eddr3 project is the first to test the VDT plugin in real action.

    Fig. 8. VDT plugin screenshot with eddr3 project opened


    The eddr3 project demonstrated performance that makes it suitable for the Elphel NC393 camera system, successfully implementing a DDR3 memory interface to a 512MB (256Mx16) device (Micron MT41K256M16HA-107:E) in a single HP34 bank of a Xilinx XC7Z030-1FBG484C. The initial data rate equals the maximum recommended by Xilinx for this hardware setup (using internal Vref), providing 1600MB/s of data bandwidth; the design uses SLEW="SLOW" on all control and data outputs. Evaluation of the performance suggests that it is possible to increase the data rate, probably to above 3GB/s for the same configuration. The design was simulated using exclusively Free Software tools, without any use of encrypted or undocumented features.

    by andrey at June 21, 2014 12:36 AM

    June 18, 2014

    Altus Metrum

    keithp&#x27;s rocket blog: TeleGPS-Battery-Life

    TeleGPS Battery Life

    I charged up one of the "160mAh" batteries that we sell. (The ones we've got now are labeled 200mAh; the 160mAh rating is something like a minimum that we expect to be able to ever get at that size.)

    I connected the battery to a TeleGPS board, hooked up a telemetry monitoring setup on my laptop and set the device in the window of my office. This let me watch the battery voltage through the day without interrupting my other work. Of course, because the telemetry was logged to a file, I've now got a complete plot of the voltage data:

    It looks like a pretty typical lithium polymer discharge graph; slightly faster drop from the 4.1V full charge voltage down to about 3.9V, then a gradual drop to 3.65 at which point it starts to dive as the battery is nearly discharged.

    Because we run the electronics at 3.3V, and the LDO has a dropout of about 100mV, it's best if the battery stays above 3.4V. That occurred at around 21500 seconds of run time, or almost exactly six hours.

    We also have an "850mAh" battery in the shop; I'd expect that to last a bit more than four times as long, or about a day. Maybe I'll get bored enough at some point to hook one up and verify that guess.
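    A quick sanity check of the arithmetic above:

    ```python
    # sanity-checking the battery life estimates above
    run_seconds = 21500                  # time to reach the 3.4V cutoff
    hours = run_seconds / 3600           # ~5.97, "almost exactly six hours"
    scaled_hours = hours * 850 / 200     # ~25.4 h for the "850mAh" pack,
                                         # "a bit more than four times as long"
    ```

    The scaling uses the 200mAh label on the tested cell rather than the 160mAh minimum rating, matching the "four times as long, or about a day" guess.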

    June 18, 2014 02:23 AM

    June 17, 2014

    Richard Hughes, ColorHug

    DNF v.s. Yum

    A lot has been said on fedora-devel in the last few weeks about DNF and Yum. I thought it might be useful to contribute my own views, considering I’ve spent the last half-decade consuming the internal Yum API and the last couple of years helping to design the replacement with about half a dozen members of the packaging team here at Red Hat. I’m also a person who unsuccessfully tried to replace Yum completely with Zif in Fedora a few years ago, so I know quite a bit about packaging systems and metadata parsing.

    From my point of view, the hawkey depsolving library that DNF is designed upon is well designed, optimised and itself built on a successful low-level SAT library that SUSE has been using for years on production level workloads. The downloading and metadata parsing component used by DNF, librepo, is also well designed and complements the hawkey API nicely.

    Rather than use the DNF framework directly, PackageKit uses librepo and hawkey to share 80% of the mechanism between PK and DNF. From what I’ve seen of the DNF codebase it’s nice, with unit tests and lots of the older compatibility cruft removed; the only reason it’s not used in PK is that the daemon is written in C and we didn’t want to marshal everything via Python for latency reasons.

    So, from my point of view, DNF is a new command line tool built on 3 new libraries. Its history may be as a fork of yum, but it resembles more a 2014 rebuilt American hot-rod with all new motor-sport parts apart from the 1965 modified and strengthened chassis. Renaming DNF to Yum2 would send entirely the wrong message; it’s a new project with a new team and new goals.

    by hughsie at June 17, 2014 03:12 PM

    Video Circuits

    Étienne-Jules Marey & Georges Demeny

    Marey & Demeny, both photographers and inventors in France working at the same time as Muybridge (and perhaps even more pioneering!), established a programme of research which was to lead to the creation of the ‘Station Physiologique’, where they used a variety of methods to visually record and study various kinds of movement. This was all going on at the dawn of film, and many of their inventions were precursors or direct ancestors of the standard film camera and projector. They also recorded some images of sound, data and movement as light, which is what I am interested in (see this post on early sound visualization/photoacoustics)

    by Chris ( at June 17, 2014 12:59 PM

    June 16, 2014

    Richard Hughes, ColorHug

    datarootdir v.s. datadir

    Public Service Announcement: Debian helpfully defines datadir to be /usr/share/games for some packages, which means that the AppData and MetaInfo files get installed into /usr/share/games/appdata which isn’t picked up by the metadata parsers.

    It’s probably safer to install the AppData files into $datarootdir/appdata as this will work even if a distro has redefined datadir to be something slightly odd. I’ve changed the examples on the AppData page, but if you maintain a game on Debian with AppData then this might affect you when Debian starts extracting AppStream metadata in the next few weeks. Anyone affected will be getting email in the next few days, although it looks to affect only very few people.

    by hughsie at June 16, 2014 03:52 PM

    Altus Metrum

    keithp&#x27;s rocket blog: Altos1.4

    AltOS 1.4 — TeleGPS support, features and bug fixes

    Bdale and I are pleased to announce the release of AltOS version 1.4.

    AltOS is the core of the software for all of the Altus Metrum products. It consists of firmware for our cc1111, STM32L151, LPC11U14 and ATtiny85 based electronics and Java-based ground station software.

    This is a major release of AltOS, including support for our new TeleGPS board and a host of new features and bug fixes.

    AltOS Firmware — TeleGPS added, new features and fixes

    Our new tracker, TeleGPS, works quite differently than a flight computer

    • Starts tracking and logging at power-on

    • Disables RF and logging only when connected to USB

    • Doesn't log position when it isn't moving for a long time.

    TeleGPS transmits our digital telemetry protocol, APRS and radio direction finding beacons.

    For TeleMega, we've made the firing time for the additional pyro channels (A-D) configurable, in case the default (50ms) isn't long enough.

    AltOS Beeping Changes

    The three-beep startup tones have been replaced with a report of the current battery voltage. This is nice on all of the boards, but particularly useful with EasyMini, which doesn't have the benefit of telemetry reporting its state.

    We also changed the other state tones to "Farnsworth" spacing. This makes them all faster, and easier to distinguish from the numeric reports of voltage and altitude.

    Finally, we've added the ability to change the frequency of the beeper tones. This is nice when you have two Altus Metrum flight computers in the same ebay and want to be able to tell the beeps apart.

    AltOS Bug Fixes

    Fixed a bug which prevented you from using TeleMega's extra pyro channel 'Flight State After' configuration value.

    AltOS 1.3.2 on TeleMetrum v2.0 and TeleMega would reset the flight number to 2 after erasing flights; that's been fixed.

    AltosUI — New Maps, igniter tab and a few fixes

    With TeleGPS tracks now potentially ranging over a much wider area than a typical rocket flight, the Maps interface has been updated to include zooming and multiple map styles. It also now uses less memory, which should make it work on a wider range of systems.

    For TeleMega, we've added an 'Igniter' tab to the flight monitor interface so you can check voltages on the extra pyro channels before pushing the button.

    We're hoping that the new Maps interface will load and run on machines with limited memory for Java applications; please let us know if this changes anything for you.

    TeleGPS — All new application just for TeleGPS

    While TeleGPS shares the same telemetry and data logging capabilities as all of the Altus Metrum flight computers, its use as a tracker is expected to be both broader and simpler than the rocketry-specific systems. We've built a custom TeleGPS application that incorporates the mapping and data visualization aspects of AltosUI, but eliminates all of the rocketry-specific flight state tracking.

    June 16, 2014 02:48 AM

    June 11, 2014

    Richard Hughes, ColorHug

    Application Addons in GNOME Software

    Ever since we rolled out the GNOME Software Center, people have wanted to extend it to do other things. One thing that was very important to the Eclipse developers was a way of adding addons to the main application, which seems a sensible request. We wanted to make this generic enough so that it could be used in gedit and similar modular GNOME and KDE applications. We’ve deliberately not targeted Chrome or Firefox, as these applications will do a much better job compared to the package-centric operation of GNOME Software.

    So. Do you maintain a plugin or extension that should be shown as an addon to an existing desktop application in the software center? If the answer is “no” you can probably stop reading, but otherwise, please create a file something like this:

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Copyright 2014 Your Name Here <> -->
    <component type="addon">
    <id>gedit-code-assistance</id>
    <name>Code Assistance</name>
    <summary>Code assistance for C, C++ and Objective-C</summary>
    <url type="homepage"></url>
    </component>

    This wants to be installed into /usr/share/appdata/gedit-code-assistance.metainfo.xml — this isn’t just another file format, this is the main component schema used internally by AppStream. Some notes when creating the file:

    • You can use anything as the <id> but it needs to be unique and sensible and also match the .metainfo.xml filename prefix
    • You can use appstream-util validate gedit-code-assistance.metainfo.xml if you install appstream-glib from git.
    • Don’t put the application name you’re extending in the <name> or <summary> tags — so you’d use “Code Assistance” rather than “GEdit Code Assistance”
    • You can omit the <url> if it’s the same as the upstream project
    • You don’t need to create the metainfo.xml if the plugin is typically shipped in the same package as the application you’re extending
    • Please use <_name> and <_summary> if you’re using intltool to translate either your desktop file or the existing appdata file and remember to add the file to if you use one

    Please grab me on IRC if you have any questions or concerns, or leave a comment here. Kalev is currently working on the GNOME Software UI side, and I only finished the metadata extractor for Fedora today, so don’t expect the feature to be visible until GNOME 3.14 and Fedora 21.

    by hughsie at June 11, 2014 04:36 PM

    June 09, 2014

    Altus Metrum

    bdale&#x27;s rocket blog: TeleGPS v1.0

    Keith and I are pleased to announce the immediate availability of TeleGPS v1.0!

    TeleGPS is our response to the many requests we've received for an easy-to-use tracking-only board that just provides GPS position information over radio. Combining the same uBlox Max 7Q GPS receiver used in TeleMega and TeleMetrum v2.0 with a 16mW transmitter yields a board that is 1.5 x 1.0 inches (38.1 x 25.4 mm).

    As usual for our products, TeleGPS is designed for use under FCC Part 97 (ham radio) rules or equivalent authorization. In addition to the GPS receiver and UHF radio transmitter, TeleGPS includes on-board flash data storage and a micro USB connector for configuration, post-flight data download, and to power a LiPo battery charger.

    TeleGPS works with our existing ground station products and/or any radio equipped with APRS support, and also emits audible radio direction finding beeps. While TeleGPS can be used with our existing AltosUI and AltosDroid ground station software, Keith is working on a simpler, dedicated application optimized for use with TeleGPS.

    Altus Metrum products are available directly from Bdale's web store, and from these distributors:

    All Altus Metrum products are completely open hardware and open source. The hardware design details and all source code are openly available for download, and advanced users are invited to join our developer community and help to enhance and extend the system. You can learn more about Altus Metrum products at

    Thank you all for your continuing support of Altus Metrum, and we hope to see you on a flight line somewhere soon!

    June 09, 2014 12:53 AM

    June 06, 2014

    Video Circuits

    Live Performance

    So Dale and I played live on Tuesday; here are some video and audio shot by Anne.
    Dale plays his cassette tape images as scores that are also instruments; I am generating audio from my modular and video from DIY circuits and a fed-back video mixer. I'm playing solo in London on Saturday here.

    by Chris ( at June 06, 2014 04:35 AM

    Film Preservation

    Forgot to post these photos from some amazing training I did. If anyone wants to let me look after their video art, computer art or abstract animation I would be more than happy to help.

    by Chris ( at June 06, 2014 04:08 AM

    May 30, 2014

    Video Circuits

    ANALOG DREAMSCAPE: Video & Computer Art in Chicago 1973-1985

    Video & Computer Art in Chicago 1973-1985

    Friday, June 13th @ 7pm

    University of Illinois at Chicago

    Institute for the Humanities

    701 South Morgan, Lower Level - Stevenson Hall

    Chicago, IL 60607

    In partnership with the Institute for the Humanities at UIC, South Side Projections presents ANALOG DREAMSCAPE, a screening and discussion with Daniel J. Sandin and new media historian Jon Cates. Sandin is a trailblazing video artist and director emeritus of the Electronic Visualization Laboratory (co-founded with Tom DeFanti), an interdisciplinary program at the crossroads of art and computer science. Among his many technological accomplishments is the Sandin Image Processor, an analog video synthesizer made in 1973 with the revolutionary ability to radically manipulate images in real time. An early advocate for the DIY, open source ethos, Sandin made the blueprints of the Image Processor available to the public so that others could hack his original design. The result was a treasure trove of abstract, psychedelic short films that remain utterly hypnotic three decades later. Similar to contemporary glitch aesthetics, the artwork made with the Image Processor conjures up the unconscious of a circuit board, creating a chromatic blur of geometric shapes and patterns. EVL colleague Larry Cuba used the technology to create the 3-D computer models used in the Death Star briefing room sequence of Star Wars: A New Hope. Sandin’s additional credits include the first data glove, a device used to control computers via finger movement, and the CAVE™, an immersive virtual reality environment inspired by Plato’s allegory of the cave. He has received numerous grants, and his early video “Spiral PTL” (made in collaboration with DeFanti and Mimi Shevitz) is featured in the inaugural collection of video art at the Museum of Modern Art. This program will feature a retrospective of work created by Sandin and others (from both UIC and SAIC) using the Image Processor and early digital computer systems developed at EVL. For more information visit

    by Chris ( at May 30, 2014 09:14 AM

    NOT ABOUT ART: A sampler of short films by AL RAZUTIS—VISUAL ALCHEMY

    NOT ABOUT ART: A sampler of short films by AL RAZUTIS—VISUAL ALCHEMY

    1200 N Alvarado St. (@ Sunset Blvd.) Los Angeles, CA.

    8 PM
    Thursday, June 12 

    Celebrating avant-garde, Structuralist, formalist, mythopoeic, Situationist and anarchist influences over nearly 50 years of film-making, Al Razutis is a pioneer in film/video hybrids, optical manipulations, radical media performance, holographic and 3-D art practice, and all-around troublemaking. The filmmaker will be in attendance to introduce, comment, and engage with the audience on the film-forms, context, and interpretation of film practice outside of art institutions and outside of commercial and popular notions of film, as experimental and underground cinema. Al Razutis in person!

    by Chris ( at May 30, 2014 09:13 AM

    May 28, 2014

    Richard Hughes, ColorHug

    AppData progress and the email deluge

    In the last few days, I’ve been asking people to create and ship AppData files upstream. I’ve:

    • Sent 245 emails to upstream maintainers
    • Opened 38 launchpad bugs
    • Created 5 bugs
    • Opened 72 sourceforge feature requests
    • Opened 138 github issues
    • Created 8 bugs on Fedora trac
    • Opened ~20 accounts on random issue trackers
    • Used 17 “contact” forms

    In doing this, I’ve visited over 600 upstream websites, helpfully identifying 28 projects that are stated as abandoned by their maintainers (and thus removed from the metadata). I’ve also blacklisted quite a few things that are not actually applications and not suitable for the software center.

    I’ve deliberately not included GNOME in this sweep, as a lot of the core GNOME applications already have AppData and most of the gnomies already know what to do. I also didn’t include XFCE applications, as XFCE has agreed on the mailing list to adopt AppData and is in the process of doing so already. KDE is just working out how to merge the various files created by Matthias, and I’ve not heard anything from LXDE or MATE. So, I only looked at projects not affiliated with any particular desktop.

    So far, the response has been very positive, with at least 10% of the requests being actioned and some projects even doing new releases that I’ve been slowly uploading into Fedora. Another ~10% of requests are acknowledgements from maintainers that they would do this sometime before the next release. I have found a lot of genuinely interesting applications in my travels, and a lot of junk. The junk is mostly unmaintained, and so my policy of not including applications that have not had an upstream release in the last 5 years (unless they have AppData manually added by the distro packager) seems to be valid.

    At least 5 of the replies have been very negative, e.g. “how dare you ask me to do something — do it yourself” and things like “Please do not contact me again – I don’t want any new users”. The vast majority of people have not responded yet — so I’m preparing myself for a deluge over the next few weeks from the people that care.

    My long term aim is to only show applications in Fedora 22 with AppData, so it seemed only fair to contact the various upstream projects about an initiative they’re probably not familiar with. If we don’t get > 50% of applications in Fedora with the extra data we’ll have to reconsider such a strong stance. So far we’ve reached over 20%, which is pretty impressive for a standard I’ve been pushing for such a short amount of time.
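For maintainers wondering what they are actually being asked to ship, a minimal AppData file is just a short XML document installed into /usr/share/appdata/. A rough sketch follows (the application id, text and URLs are hypothetical; check the AppData specification for the full set of tags):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<application>
  <!-- must match the installed desktop file name -->
  <id type="desktop">exampleapp.desktop</id>
  <!-- licence of this metadata file, not of the application -->
  <licence>CC0</licence>
  <description>
    <p>
      A paragraph or two describing what the application actually does,
      written for users rather than for developers.
    </p>
  </description>
  <url type="homepage">http://www.example.com/</url>
  <screenshots>
    <screenshot type="default">http://www.example.com/screenshot.png</screenshot>
  </screenshots>
</application>
```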

    So, if you’ve got an email from me, please read it and reply — thanks.

    by hughsie at May 28, 2014 11:15 AM

    May 25, 2014

    Bunnie Studios

    I Broke My Phone’s Screen, and It Was Awesome

    So this past week has been quite a whirlwind — we wrapped up the Novena campaign and smashed all our stretch goals, concluding with over $700k raised, and I got my hair cut in a bar at midnight by none other than the skilled hands of Lenore of Evil Mad Scientist Laboratories (I blame Jake! :). It was an exhilarating week; xobs and I are really grateful for the outpouring of support and we’re looking forward to working with the community to build an open hardware ecosystem that can grow for years to come.

    On my way back home to Singapore, I stopped by Dongguan to have a visit with my supply chain partners to hammer out production plans for Novena. Unfortunately, as I was getting out of the taxi at the Futian border checkpoint going into China, I dropped my phone on the sidewalk and shattered its screen.

    There is no better place in the world to break your phone’s screen than the border crossing into Shenzhen. Within an hour of dropping the phone, I had a new screen installed by skilled hands in Hua Qiang Bei, for a price of $25.

    Originally, I thought I would replace the screen myself — on my broken phone, I hastily visited iFixit for details on the procedure to replace the screen, and then booked it over to Hua Qiang Bei to purchase the replacement parts and tools I would need. The stall I visited quoted me about US$120 for a new screen, but then the lady grabbed my phone out of my hands and launched a built-in self-test program on the phone by dialing *#0*# into the phone dialer UI.

    She confirmed that there were no bad pixels on my OLED display and that the digitizer was still functional, but just cracked. She then offered to buy my broken OLED+digitizer assembly off of me, but only if they did the work to replace my screen. I said it would be fine as long as I could watch them do the job, to make sure they aren’t swapping out any other parts on me.

    They had no problem with that, of course — so my phone came apart, had the old broken OLED+digitizer assembly separated, adhesive stripped from the phone body, replaced with a proper new film of adhesive, a “new” (presumably refurbished) OLED+digitizer fitted and re-assembled in 20 minutes. The whole service including parts and labor came out to $25. I kept on thinking “man I should take pictures of this” but unfortunately the device I would use to take said pictures was in pieces in front of me. But, I’ll hint that the process involved a hair dryer (used as a heat gun), copious amounts of contact cleaner (used to soften the adhesive on the OLED+digitizer module), and a very long thumbnail (in lieu of a spudger/guitar pick).

    This is the power of recycling and repair — instead of paying $120 for a screen and throwing away what is largely a functional piece of electronics, I just had to pay for the cost of replacing the broken glass itself. I had originally assumed that the glass on the digitizer was inseparable from the OLED, but apparently those clever folks in Hua Qiang Bei have figured out an efficient method to recycle these parts. After all, the bulk of the assembly’s cost is in the OLED display, and the touchscreen sensor electronics (which are also grafted onto the module) are also undamaged by the fall. Why waste perfectly good parts, anyways?

    And so, my phone had a broken screen for all of an hour, and it was fixed for less than the cost of shipping spare parts to Singapore. There is no better place to break your phone than in Shenzhen!

    by bunnie at May 25, 2014 08:07 AM

    Name that Ware May 2014

    The Ware for May 2014 is shown below.

    Thanks to @jamoross from Safecast for contributing this ware!

    by bunnie at May 25, 2014 08:07 AM

    Winner, Name that Ware April 2014

    The Ware for April 2014 is a Propeller II from Parallax. Kudos to David for nailing it; email me for your prize!

    by bunnie at May 25, 2014 08:06 AM

    Video Circuits

    F.C. Judd

    The work of Frederick Charles Judd, previously somewhat neglected by the history books, has over the last few years received renewed interest due to Ian Helliwell's work. Ian's articles, films and exhibitions have collected and disseminated much of Fred's forgotten work and ideas. One of these was his Chromasonics system, which effectively combined CRT-based Lissajous figures with a high-speed colour wheel to allow full-colour display of the electronic images, with movement generated by sound. Fred also wrote a series of articles in Practical Electronics magazine on how to construct such a system, as well as on other audio visualization techniques such as colour organs. Fred is now recognised as an important electronic and tape music composer, with a re-issued collection of works available here.
    I wanted to focus on his visual work, so here are a series of scans from my collection and some links.

    Here are some stills of the images generated by the 
    Chromasonics system 

    They are very reminiscent of work by Ben F. Laposky, although moving rather than static photographs. Fred was also aware of the Oramics system built by Daphne Oram, which also used CRTs. Oram's system, however, used them to turn images of waveforms into electronic signals rather than to visualise the sounds themselves. Ian's film Practical Electronica contains some footage Fred created of the Chromasonics system in action. Below is a full-colour image from the cover of Practical Electronics. Fred's work on audio definitely inspired a wide range of experimenters; I wonder if any visual work by his readers survives. 

    Here are some images of the construction of the Chromasonics system; notice the large colour wheel, which synchronised with the refresh rate of the displayed images so as to selectively colourise different signals, allowing for multi-colour display.

    Here are some stills of Chromasonics and the trailer for Ian's film

    These are some clippings of the displays Fred developed.

    And finally a few pics of Fred at work creating sound!

    Practical Electronica will be screened on Tuesday in London link to the event here 

    by Chris ( at May 25, 2014 05:26 AM

    May 22, 2014

    Video Circuits

    Sketches of my Sister Plus Laurie


    "Experimental video produced at Electron Movers, a video art co-op, in 1975. The video is made on 1/2" EIAJ B&W video. The dancers are delayed by running the video between two video tape decks. The sound was produced on a Buchla audio synthesizer at the National Center for Experiments in Television. The video processing equipment was built by George Brown and Alan Powell."

    by Chris ( at May 22, 2014 09:30 AM


    KR580VM80A - getting ready for reverse engineering : weekend die-shot

    We decided to take a closer look at the most popular Soviet processor, the KR580VM80A (first shot), so that a group of enthusiasts (Russian only) would be able to recover the schematic from its layout.

    A bit more dirt, but less overetch:

    Dark field - metal is clearly visible:

    Polarized light - metal and vias are visible:

    After metallization etch. Any ideas why the polysilicon is gone?

    May 22, 2014 03:19 AM

    May 20, 2014

    Michele's GNSS blog

    Galileo RTK with NV08C-CSM hw 4.1

    Being European I am often subject to skepticism about Galileo and compelled to justify its delays and usefulness. Explaining why Galileo is better and needed is beyond the scope of my blog but IMHO there is one key selling point that not many people stress.

    Galileo was designed from the ground up in close collaboration with the USA: GPS and Galileo share L1 (1575.42 MHz) and L5/E5a (1176.45 MHz). In the future, a dual frequency GPS+Galileo L1/L5 receiver will deliver products with an incredible ratio between (performance+availability)/silicon.
    According to scheduled launches there could be 12+ satellites supporting open L1+L5 by the end of this year already.
    In the meantime, NVS touches base first in the mass-market receiver domain by delivering consistent carrier-phase measurements for GPS+Glonass+Galileo.

    I have recently run zero-baseline double-differences in static, perfect visibility conditions using a high-end survey antenna:
    Figure 1: Galileo double differences in static zero-baseline (NV08C-CSM hw4.1)
    Using E11 as reference, the carrier phase noise is well contained within 0.01 cycles (2 mm).
    With RTKLIB I ran a Galileo-only static IAR and the result is as expected:

    Figure 2: Static IAR with Galileo only (4 IOVs)
    The combined GPS+Galileo static IAR looks like this:

    Figure 3: Static IAR with GPS+Galileo
    Note the 12 satellites above the 10° elevation mask used in the computation of carrier ambiguities :)

    Understandably, Skytraq is working on GPS+Beidou carrier phase and I may publish some results on that too although visibility of Beidou MEOs is not great from here.

    In the meantime, for people who wonder where the uBlox NEO6T stands in terms of GPS carrier phase noise, in conditions similar to the above, here is my result:
    Figure 4: GPS double differences in static zero-baseline (uBlox NEO6T)
    Which shows similar noise levels to NV08C-CSM.

    by (Michele Bavaro) at May 20, 2014 10:25 PM

    May 17, 2014


    Toshiba TCD1201D - linear CCD : weekend die-shot

    Toshiba TCD1201D is a monochrome 2048-pixel linear CCD. You can also notice a few extra calibration pixels shielded with aluminum.

    Die size 34814x802 µm.

    With this die we've reached the limits of JPEG; the full image would be 80k+ pixels wide, so we'll show the beginning and the end of the CCD separately:

    We are grateful to Kony for this chip.

    May 17, 2014 12:45 PM

    Bunnie Studios

    See you at Maker Faire Bay Area!

    Looking forward to seeing everyone at Maker Faire Bay Area, happening May 17 & 18 at the San Mateo Event Center. xobs and I will be giving a short half-hour talk starting at 10:30AM in the Expo hall on Saturday about Novena, on the Electronics stage. Afterwards, xobs will be hanging out with his Novena at the Freescale booth, also in the Expo hall, about halfway down on the left hand side across from the Atmel/Arduino booth. If you’re curious to see it or just want to stop by and say hi, we welcome you!

    Also, the whole chibitronics crew will be in the Expo hall as well, in the second row between Sony, PCH, and Qualcomm (‽‽‽). We’ll be teaching people how to craft circuits onto paper; attendees who can score a first-come, first-serve spot will receive free circuit stickers and also get a chance to be instructed by the wonderful and dynamic creative genius behind chibitronics, Jie Qi.

    by bunnie at May 17, 2014 04:51 AM

    May 12, 2014

    Michele's GNSS blog

    GNSS carrier phase, RTLSDR, and fractional PLLs (the necessary evil)

    A mandatory principle when processing GNSS, in order to obtain high-accuracy carrier phase, is a well-defined frequency plan. This entails knowing precisely how the Local Oscillator (LO) frequency is generated.
    With RTL-SDR it is not a trivial task given that both R820T and RTL2832U use fractional Phase Locked Loops (PLLs) in order to derive respectively the high-side mixing frequency and the Digital Down Conversion (DDC) carrier.
    I guess most people use RTL-SDR with a 50ppm crystal so the kind of inaccuracies I am going to describe are buried under the crystal inaccuracy ..within reason.

    Let us start from the common call

    > rtl_sdr -f 1575420000

    This means "set to 1575.42 MHz" but what is hidden is:
    1) R820T, set to 1575.42e6 + your IF
    2) RTL2832U, downconvert the R820T IF to baseband
    .. there are approximations everywhere.

    Now, the R820T has a 16-bit fractional PLL register, meaning that it can only tune in steps of 28.8 MHz / 2^16 = 439.453125 Hz (exactly).
    The RTL2832U, instead, has a 22-bit fractional PLL register, meaning that it can recover IFs in steps of 28.8 MHz / 2^22 ≈ 6.8665 Hz.
    Of course, neither 1575.42e6 nor 3.57e6 is an exact multiple of either step, so one always ends up with a mismatch between the frequency one thinks one has set and the frequency one actually gets. Most of the time this is fine. For GNSS it is not, since the carrier is accumulated over long intervals and even a few tenths of a Hz will make it diverge from the truth.
    So I went down the route of characterising the necessary evil of fractional PLLs.

    The first test I did was to set the tuner to 1575421875, which leads to a -1875 Hz center frequency but is nicely represented in 16 bits using a 28.8 MHz reference (remember the R820T). In fact, 54 + 0.7021484375 = 54 + [1011001111000000]/2^16. ..ok well actually it fits on 10 :)

    Here I found a small bug in the driver and replaced the following messy (IMHO) code:

    /* sdm calculator */
    while (vco_fra > 1) {
        if (vco_fra > (2 * pll_ref_khz / n_sdm)) {
            sdm = sdm + 32768 / (n_sdm / 2);
            vco_fra = vco_fra - 2 * pll_ref_khz / n_sdm;
            if (n_sdm >= 0x8000)
                break;
        }
        n_sdm <<= 1;
    }

    with this one-liner:

    mysdm = (((vco_freq<<16)+pll_ref)/(2*pll_ref)) & 0xFFFF;

    Then I modified the IF of the R820T from 3.57 MHz to 3.6 MHz, as it is only 30 kHz away and is nicely represented in 16 bits ..ok well it actually fits in 3 :)
    Modifying the IF also impacted the RTL2832U fractional register of course.
    I still had a significant error (about 115 Hz), which I could measure by comparing the scaled code rate and the carrier rate (which should be proportional by a factor of 1540).
    After a long time wondering what could be happening, I decided to start tweaking the bits of the R820T.
    One in particular, called PLL dithering, seemed suspicious. Disabling it roughly doubled the error to about 220 Hz. Sad.. but I then recalled the resolution of the tuner (439.45 Hz) and guessed that there is a hidden 17th bit which toggles randomly when "dithering" and is instead fixed to 1 when "not dithering". A couple of references which could explain why are here:

    How sneaky! But I could nicely recover that 17th bit with the RTL2832U (which has 22).
    So I have now rock-solid code-carrier assistance ^_^
    Figure 1: Code-carrier mismatch when tracking a satellite with RTL-SDR
    One step closer to integer ambiguity resolution?


    by (Michele Bavaro) at May 12, 2014 09:30 PM

    Bunnie Studios

    Novena in the X-Ray

    Last week, Nadya Peek from MIT’s CBA gave me the opportunity to play with their CT scanner. I had my Novena laptop with me, so we extracted the motherboard and slapped it into the scanner. Here are some snapshots of the ethernet jacks, which are enclosed metal boxes and thus a target for “intervention” (e.g. NSA ANT FIREWALK featuring their nifty TRINITY MCM).

    Plus, it’s just fun to look at X-rays of your gear.

    The X-ray reveals the expected array of ferrite cores implementing the transformers required by gigabit ethernet.

    by bunnie at May 12, 2014 06:43 PM

    May 08, 2014

    Bunnie Studios

    An Oscilloscope Module for Novena

    One of Novena’s most distinctive features is its FPGA co-processor. An FPGA, or Field Programmable Gate Array, is a sea of logic gates and memory elements that can be wired up according to hardware descriptions programmed in languages such as Verilog or VHDL. Verilog can be thought of as a very strictly typed C where every line of the code executes simultaneously. Thus, every bit of logic in Novena’s Spartan 6 LX45 FPGA could theoretically perform a computation every clock cycle — all 43,000 logic cells, 54,000 flip flops, and 58 fixed-point multiply accumulate DSP blocks. This potential for massive parallelism underlies one half of the exciting prospects enabled by an FPGA.

    The other exciting half of an FPGA relates to its expansive I/O capabilities. Every signal pin of an FPGA can be configured to comply with a huge range of physical layer specifications, from vanilla CMOS to high-speed differential standards such as TMDS (used in HDMI) and SSTL (used to talk to DDR memories). Each signal pin is also backed by a high speed SERDES (serializer/deserializer) and sophisticated clock management technologies. Need a dozen high-precision PWM channels for robotics? No problem, an FPGA can easily do that. Need an HDMI interface or two? Also no problem. Need a bespoke 1000 MT/s ADC interface? Simple matter of programming – and all with the same set of signal pins.

    Novena also hangs a 2Gbit DDR3 memory chip directly off the FPGA. The FPGA contains a dedicated memory controller that talks DDR3 at a rate of 800MT/s over a 16-bit bus, yielding a theoretical peak memory bandwidth of 12.8 Gbits/s. This fast, deep memory is useful for caching and buffering data locally.

    Thus, the FPGA can be thought of as the ultimate hardware hacking primitive. In order to unlock the full potential of the FPGA, we decided to bring most of the spare I/Os on the chip to a high speed expansion header. The high speed header is a bit less convenient than Arduino shield connectors if all you need to do is flash an LED, but as a trade-off the header is rated for signal speeds of over a gigabit per second per pin.

    However, the GPBB (General Purpose Breakout Board) featured as one of the Novena crowdfunding campaign stretch goals resolves this inconvenience by converting the high speed signal format into a much lower performance but more convenient 0.1” pin header format, suitable for most robotics and home automation projects.

    Enter the Oscilloscope
    A problem that xobs and I frequently encounter is the need for a highly programmable, travel-friendly oscilloscope. There’s a number of USB scope solutions that don’t quite cut it in terms of analog performance and UX, and there are no self-contained solutions we know of today that allow us to craft stimulus-response loops of the type needed for fuzzing, glitching, power analysis, or other similar hardware hacking techniques.

    Fortunately, Novena is an ideal platform for implementing a bespoke oscilloscope solution – which we’ve gone ahead and done. Here’s a video demonstrating the basic functionality of our oscilloscope solution running on Novena (720p version in VP8 or H.264):

    Novena was plugged into the large-screen TV via HDMI to make filming the video a little bit easier.

    In a nutshell, the oscilloscope offers two 8-bit channels at 1GSPS or one 8-bit channel at 2GSPS with an analog bandwidth of up to 900MHz. As a side bonus we also wired in a set of 10 digital channels that can be used as a simple logic analyzer. Here’s some high resolution photos of the oscilloscope expansion board:

    Here’s the schematics.

    This combination of the oscilloscope expansion board plus Novena is a major step toward the realization of our dream of a programmable, travel-friendly oscilloscope. The design is still a couple revisions away from being production ready, but even in its current state it’s a useful hacking tool.

    At this point, I’m going to geek out and talk about the tech behind the implementation of the oscilloscope board.

    Oscilloscope Architecture
    Below is a block diagram of the oscilloscope’s digital architecture.

    The FPGA is configured to talk to an ADC08D1020 dual 1GSPS ADC, designed originally by National Semiconductor but now sold by TI. The interface to the ADC is a pair of 8-bit differential DDR busses, operating at up to 500MHz, which is demultiplexed 1:8 into a 64-bit internal datapath. Upon receipt of a trigger condition, the FPGA stores real-time sample data from the ADC into local DDR3 memory, and later on the CPU can stream data out of the DDR3 memory via the Linux Generic Netlink API. Because the DDR3 memory’s peak bandwidth is only 1.6GSPS, deep buffer capture of 256 Msamples is only available for net sample rates below 1GSPS; higher sample rates are limited to the internal memory capacity of the FPGA, still a very usable 200 ksamples depth. The design is written in Verilog and consumes about 15% of the FPGA, leaving plenty of space for implementing other goodies like digital filters and other signal processing.

    The ADC is clocked by an Analog Devices AD9520 PLL, which derives its time base from a TCXO. This PLL + TCXO combination gives us better jitter performance than the on-chip PLL of the FPGA, and also gives us more flexibility on picking sampling rates.

    The power system uses a hybrid of boost, buck, and inverting switching regulators to bring voltages to the minimum-dropout required for point-of-use LDOs to provide clean power to sensitive analog subsystems. This hybrid approach makes the power system much more complex, but helps keep the power budget manageable.

    Perhaps the most unique aspect of our oscilloscope design is the partitioning of the analog signal chain. Getting a signal from the point of measurement to the ADC is a major engineering challenge. Remarkably, the same passive probe I held in the '90s is still a standard workhorse for scopes like my Tektronix TDS5104B almost a quarter century later. This design longevity is extremely rare in the world of electronics. With a bandwidth of several hundred MHz but an impedance measured in mega-ohms and a load capacitance measured in picofarads, it makes one wonder why we even bother with 50-ohm cables when we have stuff like oscilloscope probes. There’s a lot of science behind this, and as a result well-designed passive probes, such as the Tektronix P6139B, cost hundreds of dollars.

    Unfortunately, high quality scope probes are made out of unicorn hair and unobtanium as far as I’m concerned, so when thinking about our design, I had to take a clean-sheet look at the problem. I decided to look at an active probe solution, whilst throwing away any notion of backward compatibility with existing scope probes.

    I started the system design by first considering the wires (you can tell I’m a student of Tom Knight – one of his signature phrases is “it’s the wires, stupid!”). I concluded the cheapest high-bandwidth commodity cable that is also rated for a high insertion count is probably the SATA cable. It consists of two differential pairs and it has to support signal bandwidths measured in GHz, yet it costs just a couple of bucks. On the downside, any practical probing solution needs to present an impedance of almost a million times greater than that required by SATA, to avoid loading down the circuitry under test. This means we have to cram a high performance amplifier into a PCB that fits in the palm of your hand. Thankfully, Moore’s Law took care of that in the intervening decades from when passive oscilloscope probes were first invented out of necessity.

    The LMH6518 is a single-chip solution for oscilloscope front-ends that is almost perfect for this scenario. It’s a 900 MHz, digitally controlled variable gain amplifier (VGA) with the added feature of an auxiliary output that’s well-suited for functioning as a trigger channel; conveniently, a SATA cable has two differential pairs, so we allocate one for measurement and one for trigger. We also strap a conventional 8-pin ribbon cable to the SATA cable for passing power and I2C.

    The same LMH6518 VGA can be combined with a variety of front-end amplifiers to create a range of application-specific probes. We use a 1GHz FET op-amp (the ADA4817) to do the impedance transformation required of a “standard” digital oscilloscope. We use a relatively low impedance but “true differential” amplifier to measure voltages developed across a series sense resistor for power signature analysis. And we have a very high-impedance, high CMRR instrumentation amplifier front end for capturing signals developed across small loops and stubs of wire, useful for detecting parasitic electromagnetic emissions from circuits and cables.

    Above: digital probe

    Above: power signature analysis probe

    Above: sidechannel emissions probe

    However, the design isn’t quite perfect. The LMH6518 burns a lot of power – a little over a watt; and the pre-amp plus power regulators add about another watt overall to the probe’s power footprint. Two watts isn’t that bad on an absolute scale, but two watts in the palm of your hand is searing hot; the amplifier chip gets to almost 80C. So, I designed a set of custom aluminum heatsinks for the probes to help spread and dissipate the heat.

    When I handed the aluminum-cased probes to xobs, I warned him that the heat sinks were either going to solve the heat issue, or they were going to turn the probes into a ball of flaming hot metal. Unfortunately, the heatsink gets to about 60C in still air, which is an ergonomic challenge – the threshold for pain is typically around 45-50C, so it’s very uncomfortable to hold the aluminum cases directly. It’s alright to hold the probes by the plastic connectors on the back, but this requires special training and users will instinctively want to hold a probe by its body. So, I’ll probably have to do some thermal optimization of the design and either add a heat pipe to a large heatsink off the probe body, or use a small fan to force air over the probes. It turns out just a tiny bit of airflow is all that’s needed to keep the probes cool, but with passive convection alone they are simply too hot to handle. This won’t, of course, stop us from using them as-is; we’re okay with having to be a little bit careful to gain access to a very capable device. However, nanny-state laws and potentially litigious customers make it too risky to sell this solution to end consumers right now.

    Firmware Architecture

    xobs defined the API for the oscilloscope. The driver is based upon the Generic Netlink API native to the Linux kernel, and relies upon the libnl-genl libraries for the user-space implementation. Out of the various APIs available in the Linux kernel to couple kernelspace to userspace, Netlink was the best match, as it is stream-oriented and inherently non-blocking. This API has been optimized for high throughput and low latency, since it is also the core of the IP network stacks that push gigabits of bandwidth on servers. It’s also more mature than the nascent Linux IIO subsystem.

    In the case of xobs’ driver, he creates a custom generic netlink protocol which he registers with the name “kosagi-fpga”. Generic netlink sockets support the concept of specific commands, and he currently supports the following:

    /* list of valid commands */
    enum kosagi_fpga_commands {
            KOSAGI_CMD_TRIGGER_SAMPLE,  /* arm a trigger and set up a capture */
            KOSAGI_CMD_READ,            /* stream captured data to userspace  */
            /* ...plus power-management and housekeeping commands... */
    };

    The current implementation provisions two memory-mapped address spaces for the CPU to communicate with the FPGA, split along two different chip select lines. Chip Select 0 (CS0) is used for simple messages and register settings, while Chip Select 1 (CS1) is used for streaming data to and from the FPGA. Therefore, when the CPU wants to set capture buffer sizes, trigger conditions, or initiate a transfer, it communicates using CS0. When it wants to stream data from the FPGA, it will do so via CS1.

    The core of the API is the KOSAGI_CMD_TRIGGER_SAMPLE and KOSAGI_CMD_READ commands. To request a sample from the oscilloscope, the userspace program emits a KOSAGI_CMD_TRIGGER_SAMPLE command to the kosagi-fpga Netlink interface. This will cause the CPU to communicate with the FPGA via the CS0 EIM memory space control registers, setting up the trigger condition and the transfer FIFO from the FPGA.

    The userspace program will then emit a KOSAGI_CMD_READ command to retrieve the data. Upon receiving the read command, the kernel initiates a burst read from CS1 EIM memory space to a kernel buffer using memcpy(), which is forwarded back to the userspace that requested the data using the genlmsg_unicast() Netlink API call. Userspace retrieves the data stream from the kernel by calling the nl_recv() API call.

    This call is currently configured to block until the data is available for the userspace program, but it can also be configured to timeout as well. However, a timeout is generally not necessary as the call will succeed in a fraction of a millisecond due to the high speed and determinism of the transfer interface.

    In addition to handling data transfers, the kernel module implementing this API also handles housekeeping functions, such as configuring the FPGA and controlling power to the analog front end. FPGA configuration is handled automatically upon driver load (via insmod, modprobe, or udev) via the request_firmware() API built into the Linux kernel. The FPGA bitstream is located in the kernel firmware directory, usually /lib/firmware/novena_fpga.bit.

    Power management functions have their own dedicated Netlink commands. Calling these commands causes the respective GPIO for the expansion connector power switch to be toggled. When the expansion connector is power-cycled, the module also resets the FPGA and reloads its firmware, allowing for a complete reset of the expansion subsystem without having to power cycle the CPU.

    Above: a snippet of a trace captured by the scope when probing a full-speed USB data line.

    xobs also wrote a wonderful demo program in Qt for the oscilloscope, and through this we were able to do some preliminary performance characterization. The rise-time performance of the probe is everything I had hoped for, and the very long capture buffer provided by the FPGA’s DDR3 memory enables a new dimension of deep signal analysis. This, backed by Novena’s horsepower, tight integration with Linux, and a hackable architecture, makes for a compelling – and portable – signal analysis solution for field work.

    If the prospect of a hackable oscilloscope excites you as much as it does us, please consider backing our crowdfunding campaign for Novena and spreading the word to your friends; there are only a few days left. Developing complex hardware and software systems isn’t cheap, and your support will enable us to focus on bringing more products like this to market.

    by bunnie at May 08, 2014 05:59 AM

    May 03, 2014

    Bunnie Studios

    Novena’s Hackable Bezel

    When designing Novena, I had to balance budget against hackability. Plastic parts are cheap to produce, but the tools to mold them are very expensive and difficult to modify. Injection mold tooling cost for a conventional clamshell (two-body) laptop runs upwards of $250,000. In contrast, Novena’s single body design has a much lower tooling cost, making it feasible to amortize tooling costs over a smaller volume.

    The decision to use flat sheet aluminum for the LCD bezel was also driven in part to reduce tooling costs. Production processing for aluminum can be done using CNC, virtually eliminating up-front tooling costs. Furthermore, aluminum has great hack value, as it can be cut, drilled, tapped, and bent with entry-level tools. This workability means end users can easily add connectors, buttons, sensors, and indicators to the LCD bezel. Users can even design in a custom LCD panel, since there’s almost no setup cost for machining aluminum.

    One of my first mods to the bezel is a set of 3D-printed retainers, custom designed to work with my preferred keyboard. The retainers screw into a set of tapped M2.5 mounting holes around the periphery of the LCD.

    The idea is that the retainers hold my keyboard against the LCD bezel when transporting the laptop, protecting the LCD from impact damage while making it a little more convenient for travel.

    Such an easily customizable bezel means a limitless combination of keyboards and LCDs can be supported without requiring expensive modifications to injection molding tools.

    The flat design also means it’s easy to laser-cut a bezel using other materials. Here’s an example made out of clear acrylic. The acrylic version looks quite pretty, although as a material acrylic is much softer and less durable than aluminum.

    I also added a notch on the bottom part of the bezel to accommodate breakout boards plugged into the FPGA expansion connector.

    The low up-front cost to modify and customize the bezel enables experimentation and serendipitous hacks. I’m looking forward to seeing what other Novena users do with their bezels!

    by bunnie at May 03, 2014 07:08 AM

    May 02, 2014


    SkyWorks AAT4292 - 7-bit high-side IO expander: weekend die-shot

    SkyWorks AAT4292 is a 7-bit IO expander with 100mA 1.1Ω high-side switches per channel.
    Die size 1193x618 µm.

    After metallization etch:

    May 02, 2014 11:27 PM

    Richard Hughes, ColorHug

    AppData, meet SPDX. SPDX, meet AppData

    A few long months ago I asked everyone shipping a desktop application to also write an AppData file for the software installer. So far over 300 projects have written these files and there are over 500 upstream screenshots that have been taken. The number has been growing steadily, and most active projects now ship a file upstream. So, what do I want you to do now? :)

    The original AppData specification had something like this:

    <?xml version="1.0" encoding="UTF-8"?>
    <application>
      <id type="desktop">gnome-power-statistics.desktop</id>
      <licence>CC0</licence>
    </application>

    This had a couple of problems. First was the spelling of license. I’m from Blighty, and forgot that I was supposed to be coding in en_US. The second was that people frequently got confused between the license of that specific metadata file and the license of the project as a whole. A few months ago we fixed this, and added the requirement of a copyright statement to please the Debian overlords:

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Copyright 2013 Richard Hughes <> -->
    <application>
      <id type="desktop">gnome-power-statistics.desktop</id>
      <metadata_license>CC0-1.0</metadata_license>
      <project_license>GPL-2.0+ and GFDL-1.3</project_license>
    </application>

    The project licenses just have to be valid SPDX strings. You can use “and” and “or”, or even brackets if required, just like in a spec file. The reason for standardising on SPDX is that it’s being used on lots of distros now, and we can also make the licence substrings clickable in gnome-software very easily.

    So, if you’ve already written an AppData file please do three things:

    • Make sure the Copyright comment exists at the top of the file, after the <?xml header
    • Convert license into metadata_license and change its value to an SPDX ID
    • Add project_license, listing all the licenses used in your project
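    For illustration only (the file name, copyright holder, and license expression below are invented, not taken from any real project), an AppData file following all three steps, including an SPDX expression that uses brackets, might look like this:

    ```xml
    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Copyright 2014 Example Author -->
    <application>
      <id type="desktop">example-app.desktop</id>
      <metadata_license>CC0-1.0</metadata_license>
      <project_license>GPL-2.0+ and (MIT or BSD-2-Clause)</project_license>
    </application>
    ```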

    In Fedora 21 I’m currently doing a mapping from License: in the spec file to the SPDX format, although it’s not a 1:1 mapping, which is why I need this data upstream. KDE is already shipping project_license in their AppData files, but I’m not going to steal that thunder. Stay tuned.

    by hughsie at May 02, 2014 12:54 PM

    Video Circuits

    Film Night Documentation

    The film night went very well, and a few of the London-based artists managed to meet for the first time, which was cool. It was very nice to see this kind of work out in the real world, and I am looking to scale up for the next screening, possibly with a live element, so watch out for that.

    Thanks to Ben for putting the event on, and to Alex, James, Kate, Jerry, Lawrence, Andi, Gary, and everyone else who made it down to watch, as well as all the artists who participated. Here are some shots; the first few are from Alex.

    by Chris ( at May 02, 2014 08:56 AM

    April 30, 2014

    Mirko Vogt,

    Protected: There’s no such thing as bad publicity…

    This post is password protected. To view it please enter your password below:

    by mirko at April 30, 2014 02:38 PM

    April 29, 2014

    Bunnie Studios

    Circuit Stickers Manufacturing Retrospective: From Campaign to First Shipment

    Last December, Jie Qi and I launched a crowdfunding campaign to bring circuit stickers under the brand name of “chibitronics” to the world.

    Our original timeline stated we would have orders shipped to Crowd Supply for fulfillment by May 2014. We’re really pleased that we were able to meet our goal, right on time, with the first shipment of over a thousand starter kits leaving the factory last week. 62 cartons of goods have cleared export at Hong Kong airport, and a second round of boxes is due to leave our factory around May 5, meaning we’ve got a really good chance of delivering product to backers by mid-May.

    Above: 62 cartons containing over a thousand chibitronics starter kits waiting for pickup.

    Why On-Time Delivery Is So Important
    A personal challenge of mine was to take our delivery commitment to backers very seriously. I’ve seen too many under-performing crowdfunding campaigns; I’m deeply concerned that crowdfunding for hardware is becoming synonymous with scams and spams. Kickstarter and Indiegogo have been plagued by non-delivery and scams, and their blithe caveat emptor attitude around campaigns is a reflection of an entrenched conflict of interest between consumers and crowdfunding websites: “hey, thanks for the nickel, but what happened to your dollar is your problem”.

    I’m honestly worried that crowdfunding will get such a bad reputation that it won’t be a viable platform for well-intentioned entrepreneurs and innovators in a few years.

    I made the contentious choice to go with Crowd Supply in part because they show more savvy around vetting hardware products, and their service offering to campaigns — such as fulfillment, tier-one customer support, post-campaign pre-order support, and rolling delivery dates based on demand vs. capacity — is a boon for hardware upstarts. Getting fulfillment, customer support and an ongoing e-commerce site as part of the package essentially saves me one headcount, and when your company consists of just two or three people that’s a big deal.

    Crowd Supply doesn’t have the same media footprint or brand power that Kickstarter has, which means it is harder to do a big raise with them, but at the end of the day I feel it’s very important to establish an example of sustainable crowdfunding practices that is better for both the entrepreneur and the consumer. It’s not just about a money grab today: it’s about building a brand and reputation that can be trusted for years to come.

    Bottom line is, if I can’t prove to current and future backers that I can deliver on-time, I stand to lose a valuable platform for launching my future products.

    On-Time Delivery Was not Easy
    We did not deliver chibitronics on time because we had it easy. When drawing up the original campaign timeline, I had a min/max bounds on delivery time spanning from just after Chinese New Year (February) to around April. I added one month beyond the max just to be safe. We ended up using every last bit of padding in the schedule.

    I made a lot of mistakes along the way, and through a combination of hard work, luck, planning, and strong factory relationships, we were able to battle through many hardships. Here’s a few examples of lessons learned.

    A simple request for one is not necessarily a simple request for another. Included with every starter kit is a fantastic book (free to download) written by Jie Qi which serves as a step-by-step, self-instruction guide to designing with circuit stickers. The book is unusual because you’re meant to paste electronic circuits into it. We had to customize several aspects of the printing, from the paper thickness (to get the right light diffusion) to the binding (for a better circuit crafting experience) to the little pocket in the back (to hold swatches of Z-tape and Linqstat material). Most of these requests were relatively easy to accommodate, but one in particular threw the printer for a loop. We needed the metal spiral binding of the book to be non-conductive, so if someone accidentally laid copper tape on the binding it wouldn’t cause a short circuit.

    Below is an example of how a circuit looks in the book — in this case, the DIY pressure sensor tutorial (click on image for a larger version).

    Checking for conductivity of a wire seems like a simple enough request for someone who designs circuits for a living, but for a book printer, it’s extremely weird. No part of traditional book printing or binding requires such knowledge. Because of this, the original response from the printer was “we can’t guarantee anything about the conductivity of the binding wire”, and sure enough, the first sample was non-conductive, but the second was conductive and they could not explain why. This is where face to face meetings are invaluable. Instead of yelling at them over email, we arranged a meeting with the vendor during one of my monthly trips to Shenzhen. We had a productive discussion about their concerns, and at the conclusion of the meeting we ordered them a $5 multimeter in exchange for a guarantee of a non-conductive book spine. In the end, the vendor was simply unwilling to guarantee something for which he had no quality control procedure — an extremely reasonable position — and we just had to educate the vendor on how to use a multimeter.

    To wit, this unusual non-conductivity requirement did extend our lead time by several days and added a few cents to the cost of the book, but overall, I’m willing to accept that compromise.

    Never skip a checkplot. I alluded to this poignant lesson with the following tweet:

    The pad shapes for chibitronics are complex polyline geometries, which aren’t handled so gracefully by Altium. One problem I’ve discovered the hard way is that the soldermask layer occasionally disappears for pads with complex geometry. One version of the file will have a soldermask opening, and in the next save checkpoint, it’s gone. This sort of bug is rare, but it does happen. Normally I do a gerber re-import check with a third-party tool, but since this was a re-order of an existing design that worked before, and I was in a rush, I skipped the check. Result? Thousands of dollars of PCBs scrapped, four weeks gone from the schedule. Ouch.

    Good thing I padded my delivery dates, and good thing I keep a bottle of fine scotch on hand to help bitter reminders of what happens when I get complacent go down a little bit easier.

    If something can fit in a right and a wrong way, the wrong way will happen. I’m paranoid about this problem — I’ve been burned by it many times before. The effects sticker sheet is a prime example of this problem waiting to happen. It is an array of four otherwise identical stickers, except for the LED flashing pattern they output. The LED flashing pattern is controlled by software, and trying to manage four separate firmware files and get them all loaded into the right spot in a tester is a nightmare waiting to happen. So, I designed the stickers to all use exactly the same firmware; their behaviors set by the value of a single external resistor.
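    Selecting behavior by resistor value boils down to a threshold lookup on a measured reading. Purely as an illustrative sketch — this is not the actual chibitronics firmware; the function name, the assumed 10-bit ADC width, and the thresholds are all invented — the idea looks like this:

    ```shell
    # Hypothetical sketch: map an ADC code measured across the
    # behavior-setting resistor to one of the four effect modes.
    effect_from_adc() {
      adc=$1   # assumed 10-bit reading, 0..1023 (thresholds invented)
      if   [ "$adc" -lt 256 ]; then echo "blink"
      elif [ "$adc" -lt 512 ]; then echo "heartbeat"
      elif [ "$adc" -lt 768 ]; then echo "fade"
      else                          echo "twinkle"
      fi
    }

    effect_from_adc 100   # prints "blink"
    ```

    Because every sticker runs identical code, the only per-unit difference is the resistor populated on the board, which is exactly what makes a single mis-rotated panel swap the behaviors rather than break them.
    
    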

    So the logic goes: if all the stickers have the same firmware, it’s impossible to have a “wrong way” to program the stickers. Right?

    Unfortunately, I also designed the master PCB panels so they were perfectly symmetric. You can load the panels into the assembly robot rotated by pi radians and the assembly program runs flawlessly — except that the resistors which set the firmware behavior are populated in reverse order from the silkscreen labels. Despite having fiducial holes and text on the PCBs in both Chinese and English that are uniquely orienting, this problem actually happened. The first samples of the effects stickers were “blinking” where it said “heartbeat”, “fading” where it said “twinkle”, and vice-versa.

    Fortunately, the factory very consistently loaded the boards in backwards, which is the best case for a problem like this. I rushed a firmware patch (which is in itself a risky thing to do) that reversed the interpretation of the resistor values, and had a new set of samples fedexed to me in Singapore for sanity checking. We also built a secondary test jig to add a manual double-check for correct flashing behavior on the line in China. Although, in making that additional test, we were confronted with another common problem –

    Some things just don’t translate well into Chinese. When coming up with instructions to describe the difference between “fading” (a slow blinking pattern) and “twinkling” (a flickering pattern), it turns out that the Chinese translation for “blink” and “twinkle” are similar. Twinkle translates to 闪烁 (“flickering, twinkling”) or 闪耀 (to glint, to glitter, to sparkle), whereas blink translates to 闪闪 (“flickering, sparkling, glittering”) or 闪亮 (“brilliant, shiny, to glisten, to twinkle”). I always dread making up subjective descriptions for test operators in Chinese, which is part of the reason we try to automate as many tests as possible. As one of my Chinese friends once quipped, Mandarin is a wonderful language for poetry and arts, but difficult for precise technical communications.

    Above is an example of the effects stickers in action. How does one come up with a bulletproof, cross-cultural explanation of the difference between fading (on the left) and twinkling (on the right), using only simple terms anyone can understand, i.e. avoiding technical terms such as random, frequency, hertz, or periodic?

    After viewing the video, our factory recommended to use “渐变” (gradual change) for fade and “闪烁” (flickering, twinkling) for twinkle. I’m not yet convinced this is a bulletproof description, but it’s superior to any translation I could come up with.

    Funny enough, it was also a challenge for Jie and me to agree upon what a “twinkle” effect should look like. We had several long conversations on the topic, followed up by demo videos to clarify the desired effect. The implementation was basically tweaking code until it “looked about right” — Jie described our first iteration of the effect as “closer to a lightning storm than twinkling”. Given the difficulty we had describing the effect to each other, it’s no surprise I’m running into challenges accurately describing the effect in Chinese.

    Eliminate single points of failure. When we built test jigs, we built two copies of each, even though throughput requirements demanded just one. Why? Just in case one failed. And guess what, one of them failed, for reasons as of yet unknown. Thank goodness we built two copies, or I’d be in China right now trying to diagnose why our sole test jig isn’t working.

    Sometimes last minute changes are worth it. About six weeks ago, Jie suggested that we should include a stencil with the sensor/microcontroller kits. She reasoned that it can be difficult to lay out the copper tape patterns for complex stickers, such as the microcontroller (featuring seven pads), without a drawing of the contact patterns. I originally resisted the idea — we were just weeks away from finalizing the order, and I didn’t want to delay shipment on account of something we didn’t originally promise. As Jie is discovering, I can be very temperamental, especially when it comes to things that can cause schedule slips (sorry Jie, thanks for bearing with me!). However, her arguments were sound and so I instructed our factory to search for a stencil vendor. Two weeks passed and we couldn’t find anyone willing to take the job, but our factory’s sourcing department wasn’t going to give up so easily. Eventually, they found one vendor who had enough material in stock to tool up a die cutter and turn a couple thousand stencils within two weeks — just barely in time to meet the schedule.

    When I got samples of the sensor/micro kit with the stencils, I gave them a whirl, and Jie was absolutely right about the utility of the stencils. The user experience is vastly improved when you have a template to work from, particularly for the microcontroller sticker with seven closely spaced pads. And so, even though it wasn’t promised as part of the original campaign, all backers who ordered the sensor/micro kit are getting a free stencil to help with laying out their designs.

    Chinese New Year has a big impact on the supply chain. Even though Chinese New Year (CNY) is a 2-week holiday, our initial schedule essentially wrote off the month of February. Reality matched this expectation, but I thought it’d be helpful to share an anecdote on exactly how CNY ended up impacting this project. We had a draft manuscript of our book in January, but I couldn’t get a complete sample until March. It’s not because the printer was off work for a month straight — their holiday, like everyone else’s, was about two weeks long. However, the paper vendor started its holiday about 10 days before the printer, and the binding vendor ended its holiday about 10 days after the printer. So even though each vendor took two weeks off, the net supply chain for printing a custom book was out for holiday for around 24 days — effectively the entire month of February. The staggered observance of CNY is necessary because of the sheer magnitude of human migration that accompanies the holiday.

    Shipping is expensive, and difficult. When I ran the initial numbers on shipping, one thing I realized is we weren’t selling circuit stickers — at least by volume and weight, our principal product is printed paper (the book). So, to optimize logistics cost, I was pushing to ship starter kits (which contain a book) and additional stand-alone book orders by ocean, rather than air.

    We actually had starter kits and books ready to go almost four weeks ago, but we just couldn’t get a reasonable quotation for the cost of shipping them by ocean. We spent almost three weeks haggling and quoting with ocean freight companies, and in the end, their price was basically the same as going by air, but would take three weeks longer and incurred more risk. It turns out that freight cost is a minor component of going by ocean, and you get killed by a multitude of surcharges, from paying the longshoreman to paying all the intermediate warehouses and brokers that handle your goods at the dock. All these fixed costs add up, such that even though we were shipping over 60 cartons of goods, air shipping was still a cost-effective option. To wit, a Maersk 40′ sea container will fit over 1250 cartons each containing 40 starter kits, so we’re still an order of magnitude away from being able to efficiently utilize ocean freight.
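    To put that “order of magnitude” in perspective, here’s a quick back-of-the-envelope check using the carton counts above:

    ```shell
    # ~62 cartons shipped vs. ~1250 cartons per 40' sea container
    cartons_shipped=62
    container_capacity=1250
    echo "$(( cartons_shipped * 100 / container_capacity ))% of a container"
    # prints: 4% of a container
    ```

    At roughly 4% of a container, the fixed port-side surcharges dominate, which is why air freight ended up costing about the same.
    
    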

    We’re not out of the woods yet. However excited I am about this milestone, I have to remind myself not to count my chickens before they hatch. Problems ranging from a routine screw-up by UPS to a tragic aviation accident to a logistics problem at Crowd Supply’s fulfillment depot to a customs problem could stymie an on-time delivery.

    But, at the very least, at this point we can say we’ve done everything reasonably within our power to deliver on-time.

    We are looking forward to hearing our backers’ feedback on chibitronics. If you are curious and want to join in on the fun, the Crowd Supply site is taking orders, and Jie and I will be at Maker Faire Bay Area 2014, in the Expo hall, teaching free workshops on how to learn and play with circuit stickers. We’re looking forward to meeting you!

    by bunnie at April 29, 2014 11:01 AM

    April 28, 2014

    LZX Industries

    Production & Availability Update

    The past 18 months have been full of distractions for us here at LZX Industries, due to big changes in both of our lives. We’ve been doing our best to keep up with production, but we know many of you have been waiting on the core modules required to start your systems (Color Video Encoder & Video Sync Generator) for several months now. We are working to make these available again ASAP, as well as new releases we’ve been prototyping. Thank you so much for your patience. We’re very excited to launch into a new era of LZX in 2014.

    Here is a consolidated list of currently in stock and out of stock items.

    In stock at Analogue Haven:
    8 Stage Video Quantizer & Sequencer
    Audio Frequency Decoder
    Color Time Base Corrector
    Differentiator (assembled and DIY kit)
    Function Generator (assembled and DIY kit)
    Sync Bus Bridge Cable
    Triple Video Fader & Key Generator
    Triple Video Interface
    Triple Video Multimode Filter
    Video Blending Matrix
    Video Divisions
    Video Flip Flops
    Video Logic
    Video Ramps
    Video Sync Distribution Chain Cable
    Voltage Bridge

    Out of stock:
    BitVision (assembled and DIY kit)
    Color Video Encoder
    Colorspace Mapper
    Triple Video Processor
    Video Sync Generator
    Video Waveform Generator
    Voltage Interface I

    Other distributors (Modular Square, Fukusan Kigyo and Equinox Oz) have small amounts of stock or are entirely sold out. We will be restocking everything as soon as possible and concentrating more on documentation and user resources soon.

    by Liz Larsen at April 28, 2014 02:11 PM

    April 27, 2014


    #oggstreamer – UserInterface for optional LEDs

    I just added a small but neat feature I want to include in V1.0 of the OggStreamer firmware – a user interface that lets you control the optional LEDs on the right side of the device.

    How it works: the main application (oggs_app) creates a named pipe in the temporary directory; the name of the file is /tmp/userleds.

    So the following command turns the optional green LED on:

    echo 1 > /tmp/userleds

    You can send any parameter from 0 to 7 to /tmp/userleds. The parameter is interpreted as a binary representation of the LEDs: 1 is the GREEN LED, 2 is the YELLOW LED, and 4 is the RED LED; 0 turns all LEDs OFF and 7 turns all LEDs ON.
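    Since the parameter is a bitmask, LED combinations follow from a bitwise OR. A small example (assuming oggs_app is running and has created the /tmp/userleds pipe):

    ```shell
    # Light the GREEN (1) and RED (4) LEDs together: 1 | 4 = 5
    echo $(( 1 | 4 )) > /tmp/userleds
    ```
    
    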

    This feature unfolds its full potential when you combine it with shell scripts – for example, everyone likes blinking LEDs:

    while [ 1 ]; do
      echo 7 > /tmp/userleds
      sleep 1
      echo 0 > /tmp/userleds
      sleep 1
    done

    But more useful tasks can be done as well, for example monitoring whether an IP address can be pinged:

    while [ 1 ]; do
      ping -c 1 "$1" > /dev/null
      if [ "$?" != "0" ]; then
        # ping did not succeed -> display RED LED
        echo 4 > /tmp/userleds
      else
        # ping did succeed -> display GREEN LED
        echo 1 > /tmp/userleds
      fi
      sleep 10
    done



    by oggstreamer at April 27, 2014 05:59 PM

    April 24, 2014

    Bunnie Studios

    Name that Ware, April 2014

    The Ware for April 2014 is shown below.

    Apologies for the cracked/munged die; that’s how I received it. The die shot isn’t too high resolution, but I have a feeling its gross features are distinct enough that this ware will be guessed quickly.

    by bunnie at April 24, 2014 05:35 PM

    Winner, Name that Ware March 2014

    While there is no solid consensus on the precise function of this ware, there is a very solid body of evidence that March’s ware is part of a missile guidance system, likely from the AIM series of missiles made by Raytheon Missile Systems. Presumably Raytheon re-uses their missile avionics chassis across multiple product lines, hence it’s difficult to say exactly which design it’s from. The US has exported AIM-9 missiles to…a lot of places, including for example, Iraq, Iran, Pakistan, and nearby Asian countries including Singapore, Taiwan, Japan, Malaysia, and Thailand. So these scrap parts could have come from anywhere, not necessarily the US military. Next time I see one of these on the market, though, I think I will pick it up; it’ll make a great conversation piece for the coffee table.

    As for a winner, I think I’ll go with Chip; congrats, email me for your prize. I found the insight into the CAGE code to be a nice tip, I’ll have to use that in the future. I actually come across military-looking hardware surprisingly regularly in the scrap markets, and they do make for a lively name that ware.

    by bunnie at April 24, 2014 05:35 PM

    Peter Zotov, whitequark

    On tests and types

    Much has been said on the virtues of testing and type systems. However, I say that neither of them ultimately matters. Your codebase could be pure PHP, use exclusively goto for control flow, and have no tests, comments, or variable names whatsoever—if you are rightfully sure that it is correct (for any possible input, it produces valid output with limited use of resources), the codebase is perfect.

    The big question, of course, is “how can we be sure that it is correct?” There have been impressive advances in the field of automatic theorem proving, e.g. Coq and CompCert. Unfortunately, we are neither able nor obliged to put a scientist behind every menial programming job; even if we could, undecidability means that we could only get a definite answer for a subset of interesting problems.

    The only available option is to rely on human judgement. Any tools or methods a programmer would employ are only useful as long as they enable a deeper understanding of the program, as they rightfully convince her that the program is indeed correct. If you don’t make an error, don’t test for it.

    Invariably, people do not all think alike. There is no single way of reasoning about programs and their behavior; there can be no single technique that enables writing the nicest possible code. Thinking that the way that suits you most is superior to all the others is just arrogance. We can do better than that.

    I’m not saying that all languages and methods are born equal. They are not. But let’s reason about them in terms of how much easier they make it for a human to analyze the code, for that is the only thing that matters, not the peculiarities of syntax or the big names behind them.

    I’m also not saying that all code must be perfect. It doesn’t matter if a few pixels in a cat picture have the wrong color. But you better be sure they do not get executed.

    April 24, 2014 07:44 AM

    Bunnie Studios

    Design Novena’s Logo and Win a Desktop!

    Novena needs a logo. And we need you to help! Today we’re announcing a competition to design a logo for Novena, and the winner will get a desktop version of Novena and a T-shirt emblazoned with their logo.

    The competition starts today. Submissions should be sent to by the end of May 11th. On May 12th, all submissions will be posted in an update, and on May 15th we’ll pick a winner.

    We’re also adding a $25 tier for backers who would like to receive a T-shirt with our new logo on it. The base color of the T-shirt will be royal blue, like the blue anodization of Novena’s bezel, and the base fit will be the American Apparel Jersey T-shirt (S,M,L,XL,2XL,3XL) or the Bella Girly Jersey V-Neck T-shirt (S,M,L,XL,2XL — ladies, Bella sizes run small, so round up for a comfortable fit). We aim to ship the T-shirts within 2 months of campaign conclusion.

    For the logo, here are the guidelines for design:

  • Single-color design strongly preferred. However, a multi-color master design can work if a single-color variant also looks good.
  • No halftones or grayscale: logo must be screen printable, laser etchable, and chemically etchable.
  • Only submissions in vector format will be considered, but do include a PNG preview.
  • Target size is approximately 30mm-50mm x 10-15mm tall (printable on the lower left bezel or as an etched metal plaque screwed in place).
  • Target color is Pantone 420U (gray), but other color suggestions and schemes are welcome.
  • Ideally looks good backlit, so we can also make stickers that go on the exposed LCD backlight for a nice effect.
  • The design could say “Novena” or “novena”, but we’re open-minded to other names or an icon with no text. Novena was an arbitrary code name we picked based on our naming scheme of Singapore MRT stations.
  • Design should not infringe on any other trademarks.
  • By submitting an entry, the submitter agrees to having their submission publicly posted for review and, if the submitter’s entry is selected as the winner, to automatically give Kosagi globally unlimited, royalty-free and exclusive use of the logo design, with a desktop version of Novena as the sole compensation for the single winning submission. Submitters retain the rights to non-winning submissions.

    If you’ve already backed Novena at the desktop tier or above and you are the chosen winner, we will refund you the campaign value ($1,195) of the desktop pledge level.

    Thanks in advance to everyone who will participate in the Novena logo design competition!

    by bunnie at April 24, 2014 05:17 AM

    April 22, 2014

    Bunnie Studios

    Stretch Goals for Novena Campaign

    First, a heartfelt “thank you” to all those who have backed our crowdfunding campaign to bring Novena-powered open computing devices to the world. xobs and I are very flattered to have reached almost 70% of our goal already.

    One excellent outcome of the campaign is that a lot of people have reached out to us to extend the Novena platform and make it even better, and so we’re offering a diverse range of stretch goals to provide an even better open laptop for users from all walks of life.

    Stretch #1: Partnering with Jon Nettleton for Open 2D/3D Graphics Drivers on Novena: +$50k ($300k total)

    We designed Novena to be the most open platform we could practically build. The hardware blueprints and software source code are available for download. The entire OS is buildable from human-readable source, and requires no binary blobs to boot and run well.

    However, there are elements of the i.MX6 SoC that lie dormant, due to a lack of open source drivers. In particular, the 2D/3D graphics accelerator in the i.MX6 has closed-source drivers. While we don’t force you to use these closed-source drivers, a major impediment to us being “libre” is the lack of open source drivers for these components.

    We’re excited to announce a partnership with Jon Nettleton, an expert on Linux graphics drivers, to enable this crucial piece of the libre puzzle. Here is a short statement from Jon Nettleton himself on the prospect:

    Novena Backers and OSS enthusiasts,

    I am very pleased to announce myself, Jon Nettleton (a.k.a. jnettlet, linux4kix), as a stretch-goal partner for the Novena Project. I will be taking on the task of assuring that the shipping Novena platforms will not require a binary userspace driver for 2D/3D graphics acceleration. Utilizing my experience working on Linux graphics drivers along with my strong community involvement, I will be making sure that contributing developers have everything they need to keep the Etnaviv driver project moving forward.

    To accomplish this we are requesting an additional $10,000 of funding. This additional capital will be used not just to fund my development effort, but also to provide incentives for other contributing developers. It will also afford me the time to coordinate with other hardware vendors interested in supporting an open source graphics driver implementation for the Vivante chipset, and to get them involved. There is no “US“ and “THEM” in this effort. “WE” will bring to fruition a modern graphics accelerated desktop platform for the Novena Project.

    Therefore, if we can raise $50k over our original target of $250k, we will donate the $10k that Jon needs for the effort of providing open 2D/3D graphics drivers for the Novena platform. The remainder of the funds raised will be used to help cover the costs of building the hardware you ordered.

    Significantly, since this is an open source effort, everyone in the i.MX6 community can benefit from the outcome of this funding. Because of this, we’ve added a “Buy Jon a Six Pack ($30)” pledge tier (capped at 417 pledges) so that existing i.MX6 users who want to contribute toward this goal without buying our hardware can participate. For every dollar contributed to this pledge tier, we will give Jon Nettleton at least 80 cents, regardless of our ability to reach the first stretch goal. The other ~20 cents go toward compulsory campaign operation costs and financial operator transaction fees.
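
A quick sanity check on that pledge cap (my arithmetic, not the campaign's):

```python
# My arithmetic, not the campaign's: at $30 per pledge with at least 80%
# passed through, find the smallest pledge cap that covers the $10,000 goal.
price, passthrough, goal = 30, 0.80, 10_000
cap = 0
while cap * price * passthrough < goal:
    cap += 1
print(cap)  # 417
```

417 × $30 × 0.80 = $10,008, just clearing the $10k requested, which is presumably why the tier is capped at exactly 417 pledges.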

    Stretch #2: General-Purpose Breakout Board: +$100k ($350k total)

    We include a FPGA and a nice high-speed connector, but many users just want to toggle a GPIO or take a simple analog reading without having to design and build a PCBA from scratch. If we can raise an additional $50k over the previous stretch goal, we will include a General Purpose Breakout Board (GPBB) with every piece of hardware we ship.

    The GPBB buffers 16 FPGA outputs and 8 FPGA inputs to be compatible with either 3.3V or 5V, gang-selectable via software. It also provides six 10-bit analog inputs (up to 200ksps sample rate) and two 10-bit analog outputs (~100ksps max rate), all broken out to an easy-to-use 40-pin male 0.1″ dual-row header.

    The GPBB is handy for all kinds of control and sensing situations. Because the GPBB is backed by a powerful FPGA, each of the buffered FPGA output lines can be programmed for a wide range of applications. For example, an FPGA output could be configured as a precision PWM channel with hard-real time feedback control for demanding robotics motor driver applications. Or it can be used to interface with bespoke serial protocols, such as those found in modern LED strip lighting.

    For users who don’t want to muck with FPGA code and prefer to grapple a GPIO from the command line, we have user-space Linux drivers for the board, built on a combination of the Linux GPIO API and the Linux I2C API. As a result it’s a snap to script up simple applications in your favorite high-level language.
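
For example, driving one of the GPBB's buffered outputs from Python might look like the sketch below. It uses the Linux sysfs GPIO interface; the GPIO number in the usage note (122) and its mapping to a GPBB pin are hypothetical, so check the Novena documentation for the real assignments.

```python
# Hedged sketch (not shipped Novena code): driving one of the GPBB's buffered
# outputs from Python through the Linux sysfs GPIO interface. The GPIO number
# and its mapping to a GPBB pin are hypothetical.
import os

def gpio_write(gpio, value, sysfs="/sys/class/gpio"):
    """Export the GPIO if needed, configure it as an output, then drive it."""
    pin_dir = os.path.join(sysfs, "gpio%d" % gpio)
    if not os.path.isdir(pin_dir):
        # Ask the kernel to expose the pin (creates /sys/class/gpio/gpioN).
        with open(os.path.join(sysfs, "export"), "w") as f:
            f.write("%d" % gpio)
    with open(os.path.join(pin_dir, "direction"), "w") as f:
        f.write("out")
    with open(os.path.join(pin_dir, "value"), "w") as f:
        f.write("1" if value else "0")
```

Usage would be as simple as `gpio_write(122, 1)` to drive the (hypothetical) pin high.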

    Significantly, the GPBB isn’t vaporware — we developed this board originally for use as a breakout for production testing circuit stickers from our Chibitronics product line. At this very moment, the GPBB design is being used to drive mass production of circuit stickers.

    Stretch #3: ROMulator Breakout Board: +$150k ($400k total)

    We designed Novena to be a versatile hacking tool. Case in point, last December we reported results at 30C3 revealing a secret knock that can allow arbitrary code execution on select SD card controllers. We discovered this in part with the assistance of Novena.

    We used Novena as a ROMulator — a FLASH ROM emulator. For this application, we developed a flexible PCB that’s so thin, it can be soldered in between a TSOP FLASH ROM and the underlying PCB. In this mode, we can use the FPGA built into Novena to snoop the traffic going to and from the FLASH ROM.

    Alternately, the FPGA can be used to emulate a ROM device using its local 256 MiB of DDR3 memory. Since the DDR3 controller implementation is multi-ported, during ROM emulation one can inspect and modify the ROM contents on the fly without disrupting target operation. This has a number of powerful applications, from TOCTOU (time-of-check to time-of-use) attacks to speeding up firmware development on devices that load from NAND.
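
As a toy illustration of the time-of-check/time-of-use angle (my own sketch, not Novena code): the target validates the ROM image, then the host quietly rewrites the emulated ROM before the target uses it.

```python
# Toy model of the TOCTOU trick the ROMulator enables: because the emulated
# ROM's DDR3 backing store is multi-ported, the host can rewrite it between
# the target's integrity check and the moment the target uses the contents.
rom = bytearray(b"GOOD-FIRMWARE")

def target_check():
    """Target validates the ROM image (time of check)."""
    return bytes(rom) == b"GOOD-FIRMWARE"

def host_patch():
    """Host silently edits the live ROM image via the second port."""
    rom[:4] = b"EVIL"

def target_use():
    """Target reads the ROM again to run it (time of use)."""
    return bytes(rom)
```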

    If we can raise an additional $50k over the previous tier, we’ll include a ROMulator Breakout Board (in addition to the General Purpose Breakout Board) with every piece of hardware shipped.

    Stretch #4: MyriadRF Software Defined Radio: +$250k ($500k total) or >200 backers for the desktop/laptop/heirloom version

    Software! Defined! Radio! We’re very excited to offer the possibility of teaming up with MyriadRF, to provide a custom-made SDR solution for Novena. Their open hardware SDR solution operates in all the major radio bands, including LTE, CDMA, TD-CDMA, W-CDMA, WiMAX, 2G and many more.

    The retail price of the MyriadRF is $299, and MyriadRF has graciously pulled strings with their fabrication partner and enabled a low minimum order quantity of 200 units to build this custom version for Novena. If we can clear a total raise of $500k or at least 200 total backers for the desktop/laptop/heirloom version, we’ll include with every desktop/laptop/heirloom version a MyriadRF SDR board. Since the MyriadRF is such a high-ticket item, only desktop and higher tiers are eligible to receive this reward.

    Significantly, the MyriadRF extends beyond the front of the Novena case, so part of the money from this tier is going toward buying the extra tooling to provision a removable panel on the front edge of the case, so that when the SDR module is installed it can comfortably hang out of the case, giving easy access to the U.FL RF connectors.

    If you find these stretch goals exciting and/or useful, please visit our campaign page and join the community helping to bring open hardware to the world, and please help us spread the word!

    by bunnie at April 22, 2014 05:08 PM

    April 21, 2014

    Andrew Zonenberg, Silicon Exposed

    Getting my feet wet with invasive attacks, part 1: Target recon

    This is part 1 of a 2-part series. Part 2, The Attack, is here.

    One of the reasons I've gone a bit dark lately is that running CSCI 6974, RPI's experimental hardware reverse engineering class, has been eating up a lot of my time.

    I wanted to make the final lab for the course a nice climax to the semester and do something that would show off the kinds of things that are possible if you have the right gear, so it had to be impressive and technically challenging. The obvious choice was a FIB circuit edit combined with invasive microprobing.

    After slaving away for quite a while (this was started back in January or so) I've managed to get something ready to show off :) The work described here will be demonstrated in front of my students next week as part of the fourth lab for the class.

    The first step was to pick a target. I was interested in the Xilinx XC2C32A for several reasons and was already using other parts of the chip as a teaching subject for the class. It's a pure-digital CMOS CPLD (no analog sense amps and a fairly regular structure) made on a relatively modern process (180 nm 4-metal UMC), but not so modern as to be insanely hard to work with. It was also quite cheap ($1.25 a pop for the slowest speed grade in the VQG44 package on DigiKey), so I could afford to kill plenty of them during testing.

    The next step was to decap a few, label interesting pins, and draw up a die floorplan. Here's a view of the die at the implant layer after Dash etch; P-type doping shows up as brown. (John did all of the staining work and got great results. Thanks!)

    XC2C32A die floorplan after Dash etch
    The bottom half of the die is support infrastructure with EEPROM banks for storing the configuration bitstream toward the center and JTAG/configuration stuff in a U-shape below and to either side of the memory array. (The EEPROM is mislabeled "flash" in this image because I originally assumed it was 1T NOR flash. Higher magnification imaging later showed this to be wrong; the bit cells are 2T EEPROM.)

    The top half of the die is the actual programmable logic, laid out in a "butterfly" structure. The center spine is the ZIA (global routing, also referred to as the AIM in some datasheets), which takes signals from the 32 macrocell flipflops and 33 GPIO pins and routes them into the function blocks. To either side of the spine are the two FBs, which consist of an 80 x 56 AND array (simplifying a bit... the actual structure is more like 2 blocks x 20 rows x 2 interleaved cells x 56 columns), a 56 x 16 OR array, and 16 macrocells.

    I wanted some interesting data to show my students so there were two obvious choices. First, I could try to defeat the code protection somehow and read bitstreams out of a locked device via JTAG. Second, I could try to read internal device state at run time. The second seemed a bit easier so I decided to run with it (although defeating the lock bits is still on my longer-term TODO.)

    The obvious target for probing internal runtime state is the ZIA, since all GPIO inputs and flipflop states have to go through here. Unfortunately, it's almost completely undocumented! Here's the sum total of what DS090 has to say about it (pages 5-6):
    The Advanced Interconnect Matrix is a highly connected low power rapid switch. The AIM is directed by the software to deliver up to a set of 40 signals to each FB for the creation of logic. Results from all FB macrocells, as well as, all pin inputs circulate back through the AIM for additional connection available to all other FBs as dictated by the design software. The AIM minimizes both propagation delay and power as it makes attachments to the various FBs.
    Thanks for the tidbit, Xilinx, but this really isn't gonna cut it. I need more info!

    The basic ZIA structure was pretty obvious from inspection of the implant layer: 20 identical copies of the same logic. This suggested that each row was responsible for feeding two signals left and two right.

    SEM imaging of the implant layer showed the basic structure to be largely, but not entirely, symmetric about the left-right axis. At the far outside a few cells of the PLA AND array can be seen. Moving toward the center is what appears to be a 3-stage buffer, presumably for driving the row's output into the PLA. The actual routing logic is at center.

    The row appeared entirely symmetric top-to-bottom so I focused my future analysis on the upper half.

    Single row of the ZIA seen at the implant layer after Dash etch. Light gray is P-type doping, medium gray is N-type doping, dark gray is STI trenches.
    Looking at the top metal layer revealed the expected 65 signals.

    Single row of the ZIA seen on metal 4
    The signals were grouped into six groups with 11, 11, 11, 11, 11, and 10 signals in them. This led me to suspect that there was some kind of six-fold structure to the underlying circuitry, a suspicion which was later proven correct.

    Inspection of the configuration EEPROM for the ZIA showed it to be 16 bits wide by 48 rows high.

    ZIA configuration EEPROM (top few rows)
    Since the global configuration area in the middle of the chip was 8 rows high this suggested that each of the 40 remaining EEPROM rows configured the top or bottom half of a ZIA row.

    Of the 16 bits in each row, 8 bits presumably controlled the left-hand output and 8 controlled the right. This didn't make a lot of sense at first: dense binary coding would require only 7 bits for 65 channels and one-hot coding would need 65 bits.

    Reading documentation for related device families sometimes helps to shed some light on how a part was designed, so I took a look at some of the whitepapers for the older 350 nm CoolRunner XPLA3 series. They went into some detail on how full crossbar routing was wasteful of chip area and often not necessary for sufficient routability. You don't need to be able to generate all 40! permutations of a given subset of signals as long as you can route every signal somehow. Instead, the XPLA3's designers connected only a handful of the inputs to each row and varied the input selection from row to row so as to allow almost every possible subset to be selected somehow.

    This suggested a 2-level hierarchy to the ZIA mux. Instead of being a 65:1 mux it was a 65:N hard-wired mux followed by a N:1 programmable mux feeding left and another N:1 feeding right. 6 seemed to be a reasonable guess for N, given the six groups of wires on metal 4.
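
The hypothesis can be sketched in a few lines. The tap patterns below are invented for illustration (the real 65:6 patterns were later read off the M3-M4 via images), but they show how a sparse, varied tap assignment still lets every signal reach the PLA:

```python
# Sketch of the hypothesized two-level ZIA mux: each of the 20 rows taps 6 of
# the 65 bus signals via hard-wired vias; a programmable 6:1 mux per side then
# picks one. Full routability only requires every signal to appear in at least
# one row's taps. The tap pattern below is invented for illustration.
N_SIGNALS, N_ROWS, TAPS_PER_ROW = 65, 20, 6

# Hypothetical staggered tap assignment, varied from row to row.
rows = [[(TAPS_PER_ROW * r + i) % N_SIGNALS for i in range(TAPS_PER_ROW)]
        for r in range(N_ROWS)]

def routable(signal):
    """True if some row's hard-wired taps can deliver this signal."""
    return any(signal in taps for taps in rows)
```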

    ZIA mux structure
    This hypothesis was quickly confirmed by looking at M3 and M3-M4 vias: Each row had six short wires on M3, one under each of the six groups of wires in the bus. Each of these short lines was connected by one via to one of the bus lines on M4. The via pattern varied from row to row as expected.

    ZIA M3-M4 vias

    I extracted the full via pattern by copying a tracing of M4 over the M3 image and using the power vias running down the left side as registration marks. (Pro tip: Using a high accelerating voltage, like 20 kV, in a SEM gives great results on aluminum processes with tungsten via plugs. You get backscatters from vias through the metal layer that you can use for aligning image stacks.) A few of the rows are shown above.

    At this point I felt I understood most of the structure so the next step was full circuit extraction! I had John CMP a die down to each layer and send to me for high-res imaging in the SEM.

    The output buffers were fairly easy. As I expected they were just a 3-stage inverter cascade.

    Output buffer poly/diffusion/contact tracing

    Output buffer M1 tracing
    Output buffer gate-level schematic

    Individual cell schematics
    Nothing interesting was present on any of the upper layers above here, just power distribution.

    The one surprising thing about the output buffer was that the NMOS on the third stage had a substantially wider channel than the PMOS. This is probably something to do with optimizing output rise/fall times.

    Looking at the actual mux logic showed that it was mostly tiles of the same basic pattern (a 6T SRAM cell, a 2-input NOR gate, and a large multi-fingered NMOS pass transistor) except for the far left side.

    Gate-level layout of mux area

    Left side of mux area, gate-level layout
    The same SRAM-feeding-NOR2 structure is seen, but this time the output is a small NMOS or PMOS pass transistor.

    After tracing M1, it became obvious what was going on.

    Left side of mux area, M1

    The upper and lower halves control the outputs to function blocks 1 and 2 respectively. The two SRAM bits allow each output (labeled MUXOUT_FBx) to be pulled high, low, or float. A global reset line of some sort, labeled OGATE, is used to gate all logic in the entire ZIA (and presumably the rest of the chip); when OGATE is high the SRAM bits are ignored and the output is forced high.

    Here's what it looks like in schematic:

    Gate-level schematics of pullup/pulldown logic
    Cell schematics
    In the schematics I drew the NOR2v0x1 cell as its de Morgan dual (AND with inverted inputs) since this seemed to make more sense in the context of the circuit: the output is turned on when the active-low input is low and OGATE is turned off.

    It's interesting to note that while almost all of the config bits in the circuit are active-low, PULLUP is active-high. This is presumably done to allow the all-ones state (a blank EEPROM array) to put the muxes in a well-defined state rather than floating.

    Turning our attention to the rest of the mux array shows a 6:1 one-hot-coded mux made from NMOS pass transistors. This, combined with the 2 bits needed for the pull-high/pull-low module, adds up to the expected 8.  The same basic pattern shown below is tiled three times.
    Basic mux tile, poly/implant
    Basic mux tile, M1
    (Sorry for the misalignment of the contact layer, this was a quick tracing and as long as I was able to make sense of the circuit I didn't bother polishing it up to look pretty!)

    The resulting schematic:

    Schematic of muxes
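
To summarize the extracted logic, here is a hedged software model of one half-row's 8 config bits. The bit ordering within the EEPROM row is my guess; the polarities follow the tracing: the six one-hot mux selects and the pull-down are active-low, while PULLUP is active-high so that a blank (all-ones) EEPROM leaves the output pulled to a defined high level.

```python
# Hedged model of one ZIA half-row's 8 EEPROM config bits (bit ordering is a
# guess; polarities follow the circuit extraction above).
def decode_zia_half_row(bits):
    """Return what the half-row output does for a list of 8 config bits."""
    assert len(bits) == 8
    if bits[7] == 1:                 # active-high PULLUP; blank EEPROM lands here
        return "pull-up"
    if bits[6] == 0:                 # active-low pull-down
        return "pull-down"
    selected = [i for i in range(6) if bits[i] == 0]  # active-low one-hot select
    if len(selected) == 1:
        return "mux input %d" % selected[0]
    return "float"
```

For instance, an all-ones (blank) row decodes to "pull-up", matching the observation that erased devices power up in a well-defined state.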

    M2 was used for some short-distance routing as well as OGATE, power/ground busing, and the SRAM bit lines.

    M2 and M2-M3 vias

    M3 was used for OGATE, power busing, SRAM word lines, the mask-programmed muxes, and the tri-state bus within the final mux.

    M3 and M3-M4 vias

    And finally, M4. I never found out what the leftmost power line went to; it didn't appear to be VCCINT or ground, but it was obviously power distribution. There's no reason for VCCIO to be running down the middle of the array, so maybe VCCAUX? Reversing the global config logic may provide the answer.

    A bit of trial and error poking bits in bitstreams was sufficient to determine the ordering of signals. From right to left we have FB1's GPIO pins, the input-only pin, FB2's GPIO pins, then FB1's flipflops and finally FB2's flipflops.

    Now that I had good intel on the target, it was time to plan the strike!

    Part 2, The Attack, is here.

    by Andrew Zonenberg at April 21, 2014 10:11 PM

    April 19, 2014

    Video Circuits


    Ben from Cinematograph Film Club asked me to do a Video Circuits film night as part of his latest group of screenings, so I have put together a list of artists whose work I will be screening on the night. I might bring down a CRT and some video gear for fun as well :3

    Time 20:00
    Date 27 April
    Location The Duke Of Wellington, London N1

    here is the facebook event link

    and here is a little experiment from my DIY video synth


    James Alec Hardy

    by Chris at April 19, 2014 05:31 AM

    April 18, 2014

    Video Circuits

    Jonathan Gillie

    Here is some interesting video collage work from Jonathan Gillie, using Tachyons+ gear to generate the video effects, then arranged in After Effects.

    by Chris at April 18, 2014 05:59 AM

    April 15, 2014

    Peter Zotov, whitequark

    A guide to extension points in OCaml

    Extension points (also known as “-ppx syntax extensions”) are the new API for syntactic extensions in OCaml. The old API, known as camlp4, is very flexible, but also huge, practically undocumented, lagging behind the newly introduced syntax in the compiler, and just overall confusing to those attempting to use it.

    Extension points are an excellent and very simple replacement introduced by Alain Frisch. In this article, I will explain how to amend OCaml’s syntax using the extension points API.

    Extension points were first released in OCaml 4.02. You will need to switch to 4.02 or a newer compiler, preferably using opam:

    opam switch 4.02.1
    opam install camlp4 ocamlfind oasis

    What is Camlp4?

    At its core, camlp4 (P4 stands for Pre-Processor-Pretty-Printer) is a parsing library which provides extensible grammars. That is, it makes it possible to define a parser and then, later, derive a new parser by adding a few rules to the original one. The OCaml syntax (two OCaml syntaxes, in fact: the original one and a revised one introduced specifically for camlp4) is just a special case.

    When using camlp4 syntax extensions with OCaml, you write your program in a syntax which is not compatible with OCaml’s (neither original nor revised one). Then, the OCaml compiler (when invoked with the -pp switch) passes the original source to the preprocessor as text; when the preprocessor has finished its work, it prints back valid OCaml code.

    There are a lot of problems with this approach:

    • It is confusing to users. Camlp4 preprocessors can define almost any imaginable syntax, so unless one is also familiar with all the preprocessors used, it is not in general possible to understand the source.

    • It is confusing to tools, for much the same reason. For example, Merlin has no plans to support camlp4 in general, and has implemented workarounds for a few selected extensions, e.g. pa_ounit.

    • Writing camlp4 extensions is hard. It requires learning a new (revised) syntax and a complex, scarcely documented API (try module M = Camlp4;; in utop—the signature is 16255 lines long. Yes, sixteen thousand.)

    • It is not well-suited for type-driven code generation, which is probably the most common use case for syntax extensions, because it is hard to make different camlp4 extensions cooperate; type_conv was required to enable this functionality.

    • Last but not least, using camlp4 prevents the OCaml compiler from printing useful suggestions in error messages like File "", line 17: This '(' might be unmatched. Personally, I find that very annoying.

    What is the extension points API?

    The extension points API is much simpler:

    • A syntax extension is now a function that maps an OCaml AST to an OCaml AST. Correspondingly, it is no longer possible to extend syntax in arbitrary ways.

    • To make syntax extensions useful for type-driven code generation (like type_conv), the OCaml syntax is enriched with attributes.

      Attributes can be attached to pretty much any interesting syntactic construct: expressions, types, variant constructors, fields, modules, etc. By default, attributes are ignored by the OCaml compiler.

      Attributes can contain a structure, expression or pattern as their payload, allowing a very wide range of behavior.

      For example, one could implement a syntax extension that would accept type declarations of form type t = A [@id 1] | B [@id 4] of int [@@id_of] and generate a function mapping a value of type t to its integer representation.

    • To make syntax extensions useful for implementing custom syntactic constructs, especially for control flow (like pa_lwt), the OCaml syntax is enriched with extension nodes.

      Extension nodes designate a custom, incompatible variant of an existing syntactic construct. They’re only available for expression constructs: fun, let, if and so on. When the OCaml compiler encounters an extension node, it signals an error.

      Extension nodes have the same payloads as attributes.

      For example, one could implement a syntax extension that would accept let bindings of the form let%lwt (x, y) = f in x + y and translate them to Lwt.bind f (fun (x, y) -> x + y).

    • To make it possible to insert fragments of code written in entirely unrelated syntax into OCaml code, the OCaml syntax is enriched with quoted strings.

      Quoted strings are simply strings delimited with {<delim>| and |<delim>}, where <delim> is a (possibly empty) sequence of lowercase letters. They behave just like regular OCaml strings, except that syntactic extensions may extract the delimiter.

    Using the extension points API

    On a concrete level, a syntax extension is an executable that receives a marshalled OCaml AST and emits a marshalled OCaml AST. The OCaml compiler now also accepts a -ppx option, specifying one or more extensions to preprocess the code with.

    To aid this, the internals of the OCaml compiler are now exported as the standard findlib package compiler-libs. This package, among other things, contains the interface defining the OCaml AST (modules Asttypes and Parsetree) and a set of helpers for writing the syntax extensions (modules Ast_mapper and Ast_helper).

    I won’t describe the API in detail; it’s well-documented and nearly trivial (especially when compared with camlp4). Rather, I will describe all the necessary plumbing one needs around an AST-mapping function to turn it into a conveniently packaged extension.

    It is possible, but extremely inconvenient, to pattern-match and construct the OCaml AST manually. The extension points API makes it much easier:

    • It provides an Ast_mapper.mapper type and Ast_mapper.default_mapper value:
    type mapper = {
      (* ... *)
      expr: mapper -> expression -> expression;
      (* ... *)
      structure: mapper -> structure -> structure;
      structure_item: mapper -> structure_item -> structure_item;
      typ: mapper -> core_type -> core_type;
      type_declaration: mapper -> type_declaration -> type_declaration;
      type_kind: mapper -> type_kind -> type_kind;
      value_binding: mapper -> value_binding -> value_binding;
      (* ... *)
    }
    val default_mapper : mapper

    The default_mapper is a “deep identity” mapper, i.e. it traverses every node of the AST, but changes nothing.

    Together, they provide an easy way to use open recursion, i.e. to handle only the parts of the AST which are interesting to you.

    • It provides a set of helpers in the Ast_helper module which simplify constructing the AST. (Unlike Camlp4, extension points API does not provide code quasiquotation, at least for now.)

      For example, Exp.tuple [Exp.constant (Const_int 1); Exp.constant (Const_int 2)] would construct the AST for (1, 2). While unwieldy, this is much better than elaborating the AST directly.

    • Finally, it provides an Ast_mapper.run_main function, which handles the command line arguments and I/O.

    AST quasiquotation

    It is not very convenient to construct and deconstruct ASTs directly. To avoid this, the ppx_tools library provides AST quasiquotation: it lets you embed AST fragments as literals inside the source code.

    For example, it is possible to construct an expression using [%expr 2 + 2], inject a sub-AST from a variable into an expression with [%expr 2 + [%e number]], and even match over ASTs using match expr with [%expr [%e? lhs] + [%e? rhs]] -> lhs, rhs.

    ppx_tools also provides a rewriter tool that lets you test your syntax extension by feeding it source code fragments, without using the somewhat awkward debugging options that the OCaml compiler provides.

    See the ppx_tools README for further information.


    Let’s assemble it all together to make a simple extension that replaces [%getenv "<var>"] with the compile-time contents of the variable <var>.

    First, let’s take a look at the AST that [%getenv "<var>"] would parse to. To do this, invoke the OCaml compiler as ocamlc -dparsetree

    let _ = [%getenv "USER"]
      structure_item ([1,0+0]..[1,0+24])
        expression ([1,0+8]..[1,0+24])
          Pexp_extension "getenv"
            structure_item ([1,0+17]..[1,0+23])
              expression ([1,0+17]..[1,0+23])
                Pexp_constant Const_string("USER",None)

    As you can see, the grammar category we need is “expression”, so we need to override the expr field of the default_mapper:
    open Ast_mapper
    open Ast_helper
    open Asttypes
    open Parsetree
    open Longident
    let getenv s = try Sys.getenv s with Not_found -> ""
    let getenv_mapper argv =
      (* Our getenv_mapper only overrides the handling of expressions in the default mapper. *)
      { default_mapper with
        expr = fun mapper expr ->
          match expr with
          (* Is this an extension node? *)
          | { pexp_desc =
              (* Should have name "getenv". *)
              Pexp_extension ({ txt = "getenv"; loc }, pstr)} ->
            begin match pstr with
            | (* Should have a single structure item, which is evaluation of a constant string. *)
              PStr [{ pstr_desc =
                      Pstr_eval ({ pexp_loc  = loc;
                                   pexp_desc = Pexp_constant (Const_string (sym, None))}, _)}] ->
              (* Replace with a constant string with the value from the environment. *)
              Exp.constant ~loc (Const_string (getenv sym, None))
            | _ ->
              raise (Location.Error (
                      Location.error ~loc "[%getenv] accepts a string, e.g. [%getenv \"USER\"]"))
            end
          (* Delegate to the default mapper. *)
          | x -> default_mapper.expr mapper x }

    let () = register "getenv" getenv_mapper

    The sample code also demonstrates how to report errors from the extension.

    This syntax extension can be easily compiled e.g. with ocamlbuild -package compiler-libs.common ppx_getenv.native.

    You can verify that this produces the desirable result by asking OCaml to pretty-print the transformed source with ocamlc -dsource -ppx ./ppx_getenv.native, or, if ppx_tools is installed, ocamlfind ppx_tools/rewriter ./ppx_getenv.native

    let _ = "whitequark"


    When your extension is ready, it’s convenient to build and test it with OASIS, use ocamlfind to allow other packages to use it, and distribute via opam.

    The OASIS configuration I suggest is as follows:

    # (header...)
    OCamlVersion: >= 4.02
    FilesAB:      lib/META.ab
    PreInstallCommand:   $ocamlfind install ppx_getenv lib/META
    PreUninstallCommand: $ocamlfind remove ppx_getenv
    Executable ppx_getenv
      Path:           lib
      BuildDepends:   compiler-libs.common
      CompiledObject: best
    Test test_ppx_protobuf
      Command:        ocamlbuild -I lib -package oUnit  \
                                 -cflags '-ppx $ppx_getenv' \
                                 lib_test/test_ppx_getenv.byte --
      TestTools:      ppx_getenv

    Findlib (ocamlfind) also supports ppx syntax extensions in version 1.5.2 or newer. To use it, add a file called lib/META.ab:

    version = "$(pkg_ver)"
    ppx = "ppx_getenv"

    To use the syntax extension in other OCaml projects, simply require the ocamlfind package ppx_getenv, e.g. as ocamlfind ocamlc -package ppx_getenv. This will pass all necessary options to the compiler.

    The OPAM documentation nicely explains how to create a package, with instructions fully suitable for OASIS.

    Note that ideally, a build system should install a ppx extension under lib/ppx_getenv and use ppx = "./ppx_getenv" in the META file. This is to avoid polluting the global executable namespace with package-specific executables, and also avoiding name conflicts. However, OASIS does not make this easy, so in this example the executable is installed under bin.


    The extension points API is ready to be used in applications and is much nicer than camlp4.


    If you are writing an extension, you’ll find this material useful:

    Other than the OCaml sources, I’ve found Alain Frisch’s two articles (1, 2) on the topic extremely helpful. I only mention them at the end because they’re by now quite outdated.

    April 15, 2014 11:53 PM

    Bunnie Studios

    Myriad RF for Novena

    This is so cool. Myriad-RF has created a port of their wideband software defined radio to Novena (read more at their blog). Currently, it’s just CAD files, but if there’s enough interest in SDR on Novena, they may do a production run.

    The board above is based on the Myriad-RF 1. It is a fully configurable RF board that covers all commonly used communication frequencies, including LTE, CDMA, TD-CDMA, W-CDMA, WiMAX, 2G and many more. Their Novena variant plugs right into our existing high-speed expansion slot — through pure coincidence both projects chose the same physical connector format, so they had to move a few traces and add a few components to make their reference design fully interoperable with our Novena design. Their design (and the docs for the transceiver IC) is also fully open source, and in fact they’ve one-upped us because they use an open tool (KiCad) to design their boards.

    I can’t tell you how excited I am to see this. One of our major goals in doing a crowdfunding campaign around Novena is to raise community awareness of the platform and to grow the i.MX6 ecosystem. We can’t do everything we want to do with the platform by ourselves, and we need the help of other talented developers, like those at Myriad-RF, to unlock the full potential of Novena.

    by bunnie at April 15, 2014 07:03 PM

    April 14, 2014

    Sebastien Bourdeauducq,

    EHSM-2014 CFP


    Exceptionally Hard & Soft Meeting
    pushing the frontiers of open source and DIY
    DESY, Hamburg site, June 27-29 2014

    Collaboration between open source and research communities empowers open hardware to explore new ground and hopefully deliver on the “third industrial revolution”. The first edition of the Exceptionally Hard and Soft Meeting featured lectures delivered by international makers, hackers, scientists and engineers on topics such as nuclear fusion, chip design, vacuum equipment machining, and applied quantum physics. Tutorials gave a welcoming hands-on introduction to people of all levels, including kids.

    EHSM is back in summer 2014 for another edition of the most cutting-edge open source conference. This year we are proud to welcome you to an exceptional venue: DESY, Europe’s second-largest particle physics laboratory!

    Previous EHSM lectures may be viewed at:

    Attendance is open to all curious minds.

    EHSM is entirely supported by its attendees and sponsors. To help us make this event happen, please donate and/or order your ticket as soon as possible by visiting our website.
    Prices are:

    • 45E – student/low-income online registration
    • 95E – online registration
    • 110E – door ticket
    • 272E – supporter ticket, with our thanks and your name on the website.
    • 1337E – gold supporter ticket, with our thanks and your company/project logo on the website and the printed programme.

    EHSM is a non-profit event where the majority of the budget covers speakers’ travel and transportation of exhibition equipment.

    Is there a device in your basement that demonstrates violations of Bell’s inequalities? We want to see it in action. Are you starting up a company to build nuclear fusion reactors? Tell us about it. Does your open source hardware or software run some complex, advanced and beautiful scientific instruments? We are eager to learn about it. Do you have stories to tell about your former job manufacturing ultra high vacuum equipment in the Soviet Union? We want to hear about your experiences. Do you have a great design for a difficult open source product that can be useful to millions? Team up with the people who can help implement your ideas.

    Whoever you are, wherever you come from, you are welcome to present technologically awesome work at EHSM. Travel assistance and visa invitation letters provided upon request. All lectures are in English.

    This year, we will try to improve the conference’s documentation by publishing proceedings. When relevant, please send us a paper on your presentation topic. We are OK with previously published work; we simply expect high-quality and up-to-date content.

    To submit your presentation, send a mail to with typically the following information:

    • Your name(s). You can be anonymous if you prefer.
    • Short bio
    • Title of the presentation
    • Abstract
    • How much time you would like
    • Full paper (if applicable)
    • Links to more information (if available)
    • Contact information (e-mail + mobile phone if possible)
    • If you need us to arrange your trip:
    • Where you would be traveling from
    • If you need accommodation in Hamburg

    We will again have an exhibition area where you can show and demonstrate your work – write to the same email address to apply for space. If you are bringing bulky or high-power equipment, make sure to let us know:

    • What surface you would use
    • What assistance you would need for equipment transport between your lab and the conference
    • If you need 3-phase electric power (note that Germany uses 230V/400V 50Hz)
    • What the peak power of your installation would be

    Tutorials on any technology topic are also welcome, and may cater to all levels, including beginners and kids.

    We are counting on you to make this event awesome. Feel free to nominate other speakers that you would like to see at the conference, too – just write us a quick note and we will contact them.

    Conference starts: morning of June 27th, 2014
    Conference ends: evening of June 29th, 2014
    Early registration fee ends: February 1st, 2014
    Please submit lectures, tutorials and exhibits before: May 15th, 2014

    Conference location:
    Notkestrasse 85
    22607 Hamburg, Germany

    - EHSM e.V. <>

    by lekernel at April 14, 2014 07:42 PM

    April 13, 2014

    Peter Zotov, whitequark

    XCompose support in Sublime Text

    Sublime Text is an awesome editor, and XCompose is very convenient for quickly typing weird Unicode characters. However, these two don’t combine: Sublime Text has an annoying bug which prevents the xim input method, which handles XCompose files, from working.

    What to do? If Sublime Text were open-source, I’d make a patch. But it is not. However, I still made a patch.

    If you just want XCompose to work, then add the sublime-imethod-fix PPA to your APT sources, install the libsublime-text-3-xim-xcompose package, and restart Sublime Text. (That’s it!) Or, build from source if you’re not on Ubuntu.

    However, if you’re interested in all the gory (and extremely boring) details, with an occasional animated gif, read on.

    Hunting the bug

    To describe the bug, I will first need to explain its natural environment. In Linux, a desktop graphics stack consists of an X11 server and an application using the Xlib library for drawing the windows and handling user input. When it was conceived, a top-notch UI looked like this:

    The X11 protocol and Xlib library are quite high-level: originally, you were expected to send compact, high-level instructions over the wire (such as “fill a rectangle at (x,y,x’,y’)”) in order to support thin clients over slow networks. However, thin clients and mainframes vanished, and in their place came a craving for beautiful user interfaces; and the X11 protocol, primitive as it is, draws everything as if it came from 1993. (It is also worth noting that X went from X1 to X11 in three years, and has not changed since then.)

    The Compose key and XCompose files are a remnant of that era. Xlib has a notion of an input method; that is, you would feed raw keypresses (i.e. the coordinates of keys on the keyboard) to Xlib and it would return whole characters. This ranged from the extremely simple US input method (mapping keys to characters 1:1), to more complex input methods for European languages (using a dedicated key to produce composite characters like é and ç), to very intricate Chinese and Japanese input methods with complex mappings between Latin input and ideographic output.

    Modern GUI toolkits like GTK and Qt ignore the X11 protocol almost entirely. The only drawing operation in use is “transfer this image and slap it over a rectangular area” (which isn’t even present in the original X11 protocol). Similarly, they pretty much ignore the X input method, favoring more modern scim and uim.

    XCompose is probably the only useful part of the whole X11 stack. Unfortunately, native XCompose support is not present anywhere except the original X input method. Fortunately, both GTK and Qt allow changing their input method to XIM. Unfortunately, Sublime Text somehow ignored the X input method completely even when instructed to use it.

    Sublime Text draws its own UI entirely to make it look nice on all the platforms. As such, on Linux it has three layers of indirection: first its own GUI toolkit, then GTK, which it uses to avoid dealing with the horror of X11, then X11 itself.

    The Xlib interface for communicating with the input method is pretty simple: it’s just the XmbLookupString function. You would feed it the XKeyPressedEvents containing key codes that you receive from the X11 server, and it would give back a string, possibly empty, with the sequence of characters you need to insert in your text area. Also, in order to start communicating, you need to initialize an X input context corresponding to a particular X window. (An X window is what you’d call a window, but also what you’d call a widget—say, a button has its own X11 window.)

    GTK packs the input method communication logic in the gtk_im_context_xim_filter_keypress function it has in its wrapper around the X input method. From there, it’s a pretty deep hole:

    • gtk_im_context_xim_filter_keypress uses a helper gtk_im_context_xim_get_ic to get the X input context, and if no context is returned, it resorts to a trivial US keymap;
    • gtk_im_context_xim_get_ic pulls the X input method handle and associated GTK settings from the ((GtkIMContextXIM *)context_xim)->im_info field;
    • which is initialized by the set_ic_client_window helper;
    • which refuses to initialize it if ((GtkIMContextXIM *)context_xim)->client_window is NULL;
    • which is called (through one more layer of indirection used by GTK to change the input methods on the fly) by Sublime Text itself;
    • which passes NULL as the client_window.

    Now, why does that happen? Sublime Text calls gtk_im_context_set_client_window (the helper that eventually delegates to set_ic_client_window) in a snippet of code which looks roughly like this:

    void sublimetext::gtk2::initialize() {
      // snip
      GtkWindow *window = (GtkWindow *) gtk_window_new (GTK_WINDOW_TOPLEVEL);
      // a bit more initialization
      GtkIMContext *context = gtk_im_multicontext_new ();
      gtk_im_context_set_client_window (GTK_IM_CONTEXT (context),
                                        window->bin.container.widget.window);
      // snip
    }

    What is that window->bin.container.widget.window? It contains the GdkWindow of the GtkWindow; Sublime Text has to fetch it to pass to gtk_im_context_set_client_window, which wants a GdkWindow.

    What is a GdkWindow? It’s a structure used by GTK to wrap X11 windows on Linux, and other native structures on the rest of the platforms. As such, if the GdkWindow and its underlying X11 window have not yet been created, say, because the window was never shown, the field will contain NULL. And since Sublime Text attempts to bind the IM context to the window immediately after creating the latter, this is exactly the bug we observe.

    It is worth noting that while no input method that requires the window to be known works, a simple GTK fallback does: it queries the system for the key configured as the Compose key, but uses internally defined tables of commonly used sequences. This is why launching Sublime Text as GTK_IM_MODULE=whatever-really subl allows you to enter ° with <Multi_key> <o> <o>, but does not let you customize the sequences by changing any of the XCompose files.

    Cooking the meat

    How do we fix this? I started with a simple gdb script:

    # Run as: $ GTK_IM_MODULE=xim gdb -script fix-xcompose-sublime-text-3061.gdb
    file /opt/sublime_text/sublime_text
    set follow-fork-mode child
    set detach-on-fork off
    b *0x5b3267
    run
    inferior 2
    set follow-fork-mode parent
    set detach-on-fork on
    del 1
    set $multicontext = (GtkIMMulticontext*) $r13
    set $window = (GtkWindow*) $rbx
    b gtk_widget_show if widget==$window
    c
    del 2
    call gtk_im_context_set_client_window($multicontext,$window->bin.container.widget.window)
    detach inferiors 1 2

    On a high level, the script does four things:

    1. Sublime Text forks at startup, so the script has to do a little funny dance to attach gdb to the correct process.
    2. Then, it stops at the point in the initialization sequence where my Sublime Text build calls gtk_im_context_set_client_window, and captures the window and multicontext variables, which the compiler happened to leave around in spare registers.
    3. Then, it waits until GTK surely initializes a GdkWindow for the window GtkWindow.
    4. Then, it calls gtk_im_context_set_client_window again, exactly as Sublime Text would, but at the right time.

    The script works. However, it is slow at startup and not very convenient in general. In particular, I would have to rewrite it every time Sublime Text updates. So, I opted for a better approach.

    LD_PRELOAD (see also tutorials: 1, 2) is a convenient feature of the Linux dynamic linker that allows substituting some functions contained in a shared library with different functions contained in another shared library. This is how, for example, fakeroot performs its magic.

    Initially I wanted to intercept gtk_window_new and gtk_im_multicontext_new to get the GtkIMMulticontext and the GtkWindow Sublime Text creates—they’re the first ever created—and then gtk_im_context_filter_keypress, to call gtk_im_context_set_client_window before the first keypress is handled. But somehow these calls were not intercepted by LD_PRELOAD; perhaps due to a weird way Sublime Text calls dlsym? I never figured it out.

    So, eventually I settled on intercepting the initialization of the GTK XIM input method plugin (which is loaded by GTK itself and therefore can be intercepted easily) and replacing its filter_keypress handler with my own. A filter_keypress handler receives a GtkIMContext and a GdkEvent, which contains the pointer to GdkWindow, so that would give me all the information I need.

    That worked.

    Celebrating the game

    Indeed, the goal was achieved in full. It only took me about ten hours, with practically no prior knowledge of libx11 or libgtk internals, no access to Sublime Text source, and no experience in reverse engineering.

    But what was this for? I don’t think I ever needed to type ಠ_ಠ in Sublime Text.

    I think I just like the sense of control over my tools.

    April 13, 2014 10:06 PM

    April 12, 2014

    Andrew Zonenberg, Silicon Exposed

    Getting my feet wet with invasive attacks, part 2: The attack

    This is part 2 of a 2-part series. Part 1, Target Recon, is here.

    Once I knew what all of the wires in the ZIA did, the next step was to plan an attack to read signals out.

    I decapped an XC2C32A with concentrated sulfuric acid and soldered it to my dev board to verify that it was alive and kicking.

    Simple CR-II dev board with integrated FTDI USB-JTAG
    After testing I desoldered the sample and brought it up to campus to introduce it to some 30 keV Ga+ ions.

    I figured that all of the exposed packaging would charge, so I'd need to coat the sample with something. I normally used sputtered Pt but this is almost impossible to remove after deposition so I decided to try evaporated carbon, which can be removed nicely with oxygen plasma among other things.

    I suited up for the cleanroom and met David Frey, their resident SEM/FIB expert, in front of the Zeiss 1540 FIB system. He's a former Zeiss engineer who's very protective of his "baby" and since I had never used a FIB before there was no way he was going to let me touch his, so he did all of the work while I watched. (I don't really blame him... FIB chambers are pretty cramped and it's easy to cause expensive damage by smashing into something or other. Several SEMs I've used have had one detector or another go offline for repair after a more careless user broke something.)

    The first step was to mill a hole through the 900 nm or so of silicon nitride overglass using the ion beam.

    Newly added via, not yet filled
    Once the via was drilled and it appeared we had made contact with the signal trace, it was time to backfill with platinum. The video below is sped up 10x to avoid boring my readers ;)

    Metal deposition in a FIB is basically CVD: a precursor gas is injected into the chamber near the sample and it decomposes under the influence of beam-generated secondary electrons.

    Once the via was filled we put down a large (20 μm square) square pad we could hit with an electrical probe needle.

    Probe pad
    Once everything was done and the chamber was vented I removed the carbon coating with oxygen plasma (the cleanroom's standard photoresist removal process), packaged up my sample, went home, and soldered it back to the board for testing. After powering it up... nothing! The device was as dead as a doornail: I couldn't even get a JTAG IDCODE from it.

    I repeated the experiment a week or two later, this time soldering bare stub wires to the pins so I could test by plugging the chip into a breadboard directly. This failed as well, but watching my benchtop power supply gave me a critical piece of information: while VCCINT was consuming the expected power (essentially zero), VCCIO was leaking by upwards of 20 mA.

    This ruled out beam-induced damage as I had not been hitting any of the I/O circuitry with the ion beam. Assuming that the carbon evaporation process was safe (it's used all the time on fragile samples, so this seemed a reasonably safe assumption for the time being), this left only the plasma clean as the potential failure point.

    I realized what was going on almost instantly: the antenna effect. The bond wire and leadframe connected to each pad in the device was acting as an antenna and coupling some of the 13.56 MHz RF energy from the plasma into the input buffers, blowing out the ESD diodes and input transistors, and leaving me with a dead chip.

    This left me with two possible ways to proceed: removing the coating by chemical means (a strong oxidizer could work), or not coating at all. I decided to try the latter since there were fewer steps to go wrong.

    Somewhat surprisingly, the cleanroom staff had very limited experience working with circuit edits - almost all of their FIB work was process metrology and failure analysis rather than rework, so they usually coated the samples.

    I decided to get trained on RPI's other FIB, the brand-new FEI Versa 3D. It's operated by the materials science staff, who are a bit less of the "helicopter parent" type and were actually willing to give me hands-on training.

    FEI Versa 3D SEM/FIB
    The Versa can do almost everything the older 1540 can do, in some cases better. Its one limitation is that it only has a single-channel gas injection system (platinum) while the 1540 is plumbed for platinum, tungsten, SiO2, and two gas-assisted etches.

    After a training session I was ready to go in for an actual circuit edit.

    FIB control panel
    The Versa is the most modern piece of equipment I've used to date: it doesn't even have the classical joystick for moving the stage around. Almost everything is controlled by the mouse, although a USB-based knob panel for adjusting magnification, focus, and stigmators is still provided for those who prefer to turn something with their fingers.

    Its other nice feature is the quad-image view which lets you simultaneously view an ion beam image, an e-beam image, the IR camera inside the chamber (very helpful for not crashing your sample into a $10,000 objective lens!), and a navigation camera which displays a top-down optical view of your sample.

    The nav-cam has saved me a ton of time. On RPI's older JSM-6335 FESEM, the minimum magnification is fairly high so I find myself spending several minutes moving my sample around the chamber half-blind trying to get it under the beam. With the Versa's nav-cam I'm able to set up things right the first time.

    I brought up both of the beams on the aluminum sample mounting stub, then blanked them to try a new idea: Move around the sample blind, using the nav-cam only, then take single images in freeze-frame mode with one beam or the other. By reducing the total energy delivered to the sample I hoped to minimize charging.

    This strategy was a complete success, I had some (not too severe) charging from the e-beam but almost no visible charging in the I-beam.

    The first sample I ran on the Versa was electrically functional afterwards, but the probe pad I deposited was too thin to make reliable contact with. (It was also an XC2C64A since I had run out of 32s). Although not a complete success, it did show that I had a working process for circuit edits.

    After another batch of XC2C32As arrived, I went up to campus for another run. The signal of interest was FB2_5_FF: the flipflop for function block 2 macrocell 5. I chose this particular signal because it was the leftmost line in the second group from the left and thus easy to recognize without having to count lines in a bus.

    The drilling went flawlessly, although it was a little tricky to tell whether I had gone all the way to the target wire or not in the SE view. Maybe I should start using the backscatter detector for this?

    Via after drilling before backfill
    I filled in the via and made sure to put down a big pile of Pt on the probe pad so as to not repeat my last mistake.

    The final probe pad, SEM image
    Seen optically, the new pad was a shiny white with surface topography and a few package fragments visible through it.

    Probe pad at low mag, optical image
    At higher magnification a few slightly damaged CMP filler dots can be seen above the pad. I like to use filler metal for focusing and stigmating the ion beam at milling currents before I move to the region of interest because it's made of the same material as my target, it's something I can safely destroy, and it's everywhere - it's hard to travel a significant distance on a modern IC without bumping into at least a few pieces of filler metal.

    Probe pad at higher magnification, optical image. Note damaged CMP filler above pad.
    I soldered the CPLD back onto the board and was relieved to find out that it still worked! The next step was to write some dummy code to test it out:

    `timescale 1ns / 1ps
    module test(clk_2048khz, led);

    //Clock input
    (* LOC = "P1" *) (* IOSTANDARD = "LVCMOS33" *)
    input wire clk_2048khz;

    //LED out
    (* LOC = "P38" *) (* IOSTANDARD = "LVCMOS33" *)
    output reg led = 0;

    //Don't care where this is placed
    reg[17:0] count = 0;
    always @(posedge clk_2048khz)
        count <= count + 1;

    //Probe-able signal on FB2_5 FF at 2x the LED blink rate
    (* LOC = "FB2_5" *) reg toggle_pending = 0;
    always @(posedge clk_2048khz) begin
        if(count == 0)
            toggle_pending <= !toggle_pending;
    end

    //Blink the LED
    always @(posedge clk_2048khz) begin
        if(toggle_pending && (count == 0))
            led <= !led;
    end

    endmodule


    This is a 20-bit divider chain (the 18-bit counter plus two toggle flipflops) that blinks an LED at ~2 Hz from a 2048 kHz clock on the board. The second-to-last stage of the chain (so ~4 Hz) is constrained to FB2_5, the signal we're probing.

    After making sure things still worked I attached the board's plastic standoffs to a 4" scrap silicon wafer with Gorilla Glue to give me a nice solid surface I could put on the prober's vacuum chuck.

    Test board on 4" wafer
    Earlier today I went back to the cleanroom. After dealing with a few annoyances (for example, the prober with a wide range of Z axis travel, necessary for this test, was plugged into the electrical test station with curve tracing capability but no oscilloscope card) I landed a probe on the bond pad for VCCIO and one on ground to sanity check things. 3.3V... looks good.

    Moving carefully, I lifted the probe up from the 3.3V bond pad and landed it on my newly added probe pad.

    Landing a probe on my pad. Note speck of dirt and bent tip left by previous user. Maybe he poked himself mounting the probe?
    It took a little bit of tinkering with the test unit to figure out where all of the trigger settings were, but I finally saw a ~1.8V, 4 Hz squarewave. Success!

    Waveform sniffed from my probe pad
    There's still a bit of tweaking needed before I can demo it to my students (among other things, the oscilloscope subsystem on the tester insists on trying to use the 100V input range, so I only have a few bits of ADC precision left to read my 1.8V waveform) but overall the attack was a success.

    by Andrew Zonenberg ( at April 12, 2014 11:54 PM


    Philips PCF8574 - 8-bit I2C port expander : weekend die-shot

    The Philips PCF8574 is an 8-bit I2C port expander, manufactured with 3µm technology.

    April 12, 2014 06:50 PM

    April 08, 2014


    Fake audiophile opamps: OPA627 (AD744?!)

    Walking around eBay I noticed insanely cheap OPA627s. It's a rather old, popular, high-quality opamp, often used in audiophile gear. The manufacturer (Texas Instruments / Burr-Brown) sells them for $16-80 each (depending on package & options), while on eBay they cost $2.70, shipping included.

    Obviously, something fishy was going on. I ordered one, and for comparison, an older one in a metal-can package, apparently desoldered from some equipment. Let's see if there is any difference.

    The plastic one was dissolved in acid; the metal can was easily cut open:


    The remarked "metal can" TI/BB OPA627 chip first. We can see at least 4 laser-trimmed resistors here. Laser-trimmed resistors are needed due to unavoidable manufacturing variation - the parts inside opamps need to be balanced perfectly.

    The "Chinese" $2.70 chip. There is only 1 laser-trimmed resistor, but we also notice the markings AD (Analog Devices?) and B744. Is it really an AD744? If we check the datasheet for the AD744, we'll see that the metallization photo perfectly matches the one in the datasheet.

    What happened here?

    Some manufacturer in China put in the effort to find a cheaper substitute for the OPA627 - it appeared to be the AD744. The AD744 has similar speed (500ns to 0.01%), is of a similar type (*FET), and supports external offset compensation. The AD744 also supports external frequency compensation (for high-speed, high-gain applications), but there is no corresponding pin on the OPA627 - so this feature is unused.

    On the other hand, the AD744 has higher noise (3x) and higher offset voltage (0.5mV vs 0.1mV).

    So they bought AD744s in the form of dies or wafers, packaged them, and marked them as OPA627. It does not seem they earned a lot of money here - it's more of an act of economic sabotage. Good thing they did not use something like the LM358 - in that case it would have been much easier to notice the difference without looking inside...

    The metal-can "OPA627" appeared to be some unidentified BB part remarked as an OPA627.

    Be careful when choosing suppliers - otherwise your design might get "cost-optimized" for you :-)

    PS. Take a look at our previous story about the fake FT232RL.

    April 08, 2014 06:38 AM

    April 07, 2014

    Video Circuits

    Joy to the World by William Laziza (1994)

    Recovered by the XFR STN project. Joy to the World is visual music designed for ambient presentation. It combines optical image processing, Amiga graphics, and recursive video imagery with synthesized sound. What is unique about this piece is that the audio used to create the visuals is also the soundtrack. This work was created at the Micro Museum.

    by Chris ( at April 07, 2014 10:48 AM

    April 06, 2014

    Altus Metrum

    keithp's rocket blog: Java-Sound-on-Linux

    Java Sound on Linux

    I'm often in the position of having my favorite Java program (AltosUI) unable to make any sounds. Here's a history of the various adventures I've had.

    Java and PulseAudio ALSA support

    When we started playing with Java a few years ago, we discovered that if PulseAudio were enabled, Java wouldn't make any sound. Presumably, that was because the ALSA emulation layer offered by PulseAudio wasn't capable of supporting Java.

    The fix for that was to make sure pulseaudio would never run. That's harder than it seems; pulseaudio is like the living dead; rising from the grave every time you kill it. As it's nearly impossible to install any desktop applications without gaining a bogus dependency on pulseaudio, the solution that works best is to make sure dpkg never manages to actually install the program with dpkg-divert:

    # dpkg-divert --rename /usr/bin/pulseaudio

    With this in place, Java was a happy camper for a long time.

    Java and PulseAudio Native support

    More recently, Java has apparently gained some native PulseAudio support in some fashion. Of course, I couldn't actually get it to work, even after running the PulseAudio daemon. But some kind Debian developer decided that sound should be broken by default for all Java applications and selected the PulseAudio back-end in the Java audio configuration file.

    Fixing that involved learning about said Java audio configuration file and then applying a patch to revert the Debian packaging damage.

    $ cat /usr/lib/jvm/java-7-openjdk-amd64/jre/lib/

    You can see the PulseAudio mistakes at the top of that listing, with the corrected native interface settings at the bottom.

    Java and single-open ALSA drivers

    It used to be that ALSA drivers could support multiple applications having the device open at the same time. Those with hardware mixing would use that to merge the streams together; those without hardware mixing might do that in the kernel itself. While the latter is probably not a great plan, it did make ALSA a lot more friendly to users.

    My new laptop is not friendly, and returns EBUSY when you try to open the PCM device more than once.

    After downloading the jdk and alsa library sources, I figured out that Java was trying to open the PCM device multiple times when using the standard Java sound API in the simplest possible way. I thought I was going to have to fix Java, when I figured out that ALSA provides user-space mixing with the 'dmix' plugin. I enabled that on my machine and now all was well.

    $ cat /etc/asound.conf
    pcm.!default {
        type plug
        slave.pcm "dmixer"
    }
    pcm.dmixer {
        type dmix
        ipc_key 1024
        slave {
            pcm "hw:1,0"
            period_time 0
            period_size 1024
            buffer_size 4096
            rate 44100
        }
        bindings {
            0 0
            1 1
        }
    }
    ctl.dmixer {
        type hw
        card 1
    }
    ctl.!default {
        type hw
        card 1
    }

    As you can see, my sound card is not number 0, it's number 1, so if your card is a different number, you'll have to adapt as necessary.

    April 06, 2014 05:30 AM

    April 05, 2014

    Peter Zotov, whitequark

    Page caching with Nginx

    For Amplifr, I needed a simple page caching solution that would work with multiple backend servers and require a minimal amount of hassle. It turns out that Nginx alone (1.5.7 or newer) is enough.

    First, you need to configure your backend. This consists of emitting a correct Cache-Control header and properly responding to conditional GET requests carrying an If-Modified-Since header.

    Amplifr currently emits Cache-Control: public, max-age=1, must-revalidate for cacheable pages. Let’s take a closer look:

    • public means that the page has no elements specific to the particular user, so the cache may serve the cached content to several users.
    • max-age=1 means that the content can be cached for one second. As will be explained later, max-age=0 would be more appropriate, but that directive would prevent the page from being cached.
    • must-revalidate means that after the cached content has expired, the cache must not respond with cached content unless it has forwarded the request further and got 304 Not Modified back.

    This can be implemented in Rails with a before_filter:

    class FooController < ApplicationController
      before_filter :check_cache

      def check_cache
        response.headers['Cache-Control'] = 'public, max-age=1, must-revalidate'
        # `stale?' renders a 304 response, thus halting the filter chain, automatically.
        stale?(last_modified: @current_site.updated_at)
      end
    end

    Now, we need to make Nginx work like a public cache:

    http {
      # ...
      proxy_cache_path /var/cache/nginx/foo levels=1:2 keys_zone=foocache:5m max_size=100m;

      server {
        # ...
        location / {
          proxy_pass              http://foobackend;
          proxy_cache             foocache;
          proxy_cache_key         "$host$request_uri";
          proxy_cache_revalidate  on;
          # Optionally:
          # proxy_cache_use_stale error timeout invalid_header updating
          #                       http_500 http_502 http_503 http_504;
        }
      }
    }

    The key part is the proxy_cache_revalidate setting. Let’s take a look at the entire flow:

    • User agent A performs GET /foo HTTP/1.1 against Nginx.
    • Nginx has a cache miss and performs GET /foo HTTP/1.0 against the backend.
    • Backend generates the page and returns 200 OK.
    • Nginx detects that Cache-Control permits it to cache the response for 1 second, caches it and returns the response to user agent A.
    • (time passes…)
    • User agent B performs GET /foo HTTP/1.1 against Nginx.
    • Nginx has a cache hit (unless the entry was evicted), but the entry has already expired. Instructed by proxy_cache_revalidate, it issues GET /foo HTTP/1.0 against the backend and includes an If-Modified-Since header.
    • Backend checks the timestamp in If-Modified-Since and detects that Nginx’s cache entry is not actually stale, returning 304 Not Modified. It doesn’t spend any time generating content.
    • Nginx sets the expiration time on cache entry to 1 second from now and returns the cached response to the user agent B.
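    The backend half of this handshake can be sketched in a few lines of Python. This is only an illustration of the flow above, not Amplifr's actual code; the function and variable names are made up:

```python
from datetime import datetime, timezone
from email.utils import format_datetime, parsedate_to_datetime

def respond(last_modified, if_modified_since=None):
    """Backend side of the revalidation flow: return (status, headers)."""
    headers = {
        "Cache-Control": "public, max-age=1, must-revalidate",
        "Last-Modified": format_datetime(last_modified, usegmt=True),
    }
    if if_modified_since is not None:
        ims = parsedate_to_datetime(if_modified_since)
        # HTTP dates have one-second resolution, so compare truncated values.
        if last_modified.replace(microsecond=0) <= ims:
            return 304, headers  # Not Modified: the cache refreshes its entry
    return 200, headers          # full response body is regenerated

# First request: no validator, the backend renders the page.
updated_at = datetime(2014, 4, 5, 9, 0, 0, tzinfo=timezone.utc)
status, headers = respond(updated_at)

# Revalidation: Nginx echoes Last-Modified back as If-Modified-Since.
status2, _ = respond(updated_at, headers["Last-Modified"])
```

    The cheap path is the second call: the backend answers 304 from a single timestamp comparison without rendering anything.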

    Some notes on this design:

    1. Technically, performing a conditional GET requires sending an HTTP/1.1 request, but Nginx only talks HTTP/1.0 to the backends by default. This doesn’t seem to be a problem in practice, and you can make Nginx send HTTP/1.1 requests to the backend using proxy_http_version.
    2. Ideally, specifying max-age=0 in Cache-Control would instruct the cache to store and always revalidate the response, but with that value Nginx doesn’t cache the response at all. The HTTP specification permits both behaviors.
    3. You can specify the proxy_cache_use_stale directive so that, if the backend crashes or becomes unresponsive, Nginx will still serve some cached content. If the frontpage is static, this is a good way to ensure it remains accessible at all times.

    April 05, 2014 09:25 AM

    Video Circuits

    Magnetic Tape

    Magnetic tape is interesting to me. On reels or in cassettes, each recording (or potential recording) is like a little curly drawing that pulls the sound through space. Only one position on the tape is read at a time, so the linear nature of the tape lets the signal vary the attached output over time. I messed around with wire recorders a long time ago because I liked the fact that the sound is concentrated into a tiny line-like space, with the heaviness of the mark replaced by the amplitude of the waveforms encoded as magnetic information. Here are some of my DIY wall-mounted ones; winding the pickup heads was a long day.

    I also like Nam June Paik’s 1963 work Random Access a lot. Tape is attached to the wall as a drawing, with the playback head made available as a mobile stylus, so you can retrace his steps and listen to the recordings using the same gestures he used to stick them down. A kind of playable graphical notation.

    A good friend, Dale, has gone way further with visual tape-based work and kindly sent me some photos of slightly insane pieces he is putting together at the moment. He selects tape based on its visual tonality and creates geometric, slightly illusory patterns, building a second information set on top of the one encoded in the recording medium. I don't know if Dale does, but I find these relate to visual music and graphic notation practices too. I'll probably try to convince him at his art show here in London on the 10th of April at six; come if you want to hang out, we will probably drink beer after too.

    Another interesting artist I found using tape in a slightly different way is Terence Hannum; I like the areas of ground left visible. Pretty black!

    There are loads of other examples, I'm sure. I would really like to find a graphic score where the composer has stuck down tape, creating a kind of instrument, notation and recording in one; it must have been done. If only VHS were as easy to read without a moving head; hacked PixelVision cameras might be the only answer!

    by Chris ( at April 05, 2014 08:05 AM

    April 04, 2014

    Moxie Processor

    Sign Extension

    Moxie zero-extends all 8 and 16-bit loads from memory. Until recently, however, the GCC port didn’t understand how loads worked, and would always shift loaded values back and forth to either empty out the upper bits or sign-extend the loaded value. While correct, it was overly bloated. If we’re loading an unsigned char into a register, there’s no need to force the upper bits to clear. The hardware does this for us.

    For instance, this simple C code….

    ..would compile to…

    Thanks to help from hackers on the GCC mailing list, I was finally able to teach the compiler how to treat memory loads correctly. This led to two changes…

    1. The introduction of 8 and 16-bit sign extension instructions (sex.b and sex.s). Sometimes we really do need to sign-extend values, and logical shift left followed by arithmetic shift right is a pretty expensive way to do this on moxie.
    2. The char type is now unsigned by default. If you have zero-extending 8-bit loads then you had better make your char type unsigned, otherwise your compiler output will be littered with sign extension instructions.
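    The zero- versus sign-extension distinction is easy to model in a few lines of Python. This is my own illustration of the semantics, not anything from the GCC port:

```python
def zext8(x):
    """What a Moxie 8-bit load leaves in a register: upper bits cleared."""
    return x & 0xFF

def sext8(x):
    """What the sex.b instruction computes: bit 7 copied into the upper bits."""
    x &= 0xFF
    return (x ^ 0x80) - 0x80

# 0xF0 is 240 as an unsigned char, but -16 as a signed char.
```

    With unsigned char as the default, a plain 8-bit load needs no fix-up at all; only explicitly signed loads pay for a sex.b.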

    Now for the C code above, we get this nice output….

    I believe that this was the last major code quality issue in the GCC port, and the compiler output should be pretty good now.

    I’ve updated the upstream GCC, binutils and gdb (sim) repositories, my QEMU fork in github, as well as the MoxieLite VHDL core in the moxie-cores git repo.

    by green at April 04, 2014 08:40 AM

    April 02, 2014

    Bunnie Studios

    Crowdfunding the Novena Open Laptop

    We’re launching a crowdfunding campaign around our Novena open hardware computing platform. Originally, this started as a hobby project to build a computer just for me and xobs – something that we would use every day, easy to extend and to mod, our very own Swiss Army knife. I’ve posted here a couple of times about our experience building it, and it got a lot of interest. So by popular demand, we’ve prepared a crowdfunding offering and you can finally be a backer.


    Novena is a 1.2GHz, Freescale quad-core ARM architecture computer closely coupled with a Xilinx FPGA. It’s designed for users who want to modify and extend their hardware: all the documentation for the PCBs are open and free to download, and it comes with a variety of features that facilitate rapid prototyping.

    We are offering four variations, and at the conclusion of the Crowd Supply campaign on May 18, all the prices listed below will go up by 10%:

    • “Just the board” ($500): For crafty people who want to build their case and define their own style, we’ll deliver to you the main PCBA, stuffed with 4GiB of RAM, 4GiB microSD card, and an Ath9k-based PCIe wifi card. Boots to a Debian desktop over HDMI.
    • “All-in-One Desktop” ($1195): Plug in your favorite keyboard and mouse, and you’re ready to go; perfect for labs and workbenches. You get the circuit board above, inside a hacker-friendly case with a Full HD (1920×1080) IPS LCD.
    • “Laptop” ($1995): For hackers on the go, we’ll send you the same case and board as above, but with battery controller board, 240 GiB SSD, and a user-installed battery. As everyone has their own keyboard preference, no keyboard is included.
    • “Heirloom Laptop” ($5000): A show stopper of beauty; a sure conversation piece. This will be the same board, battery, and SSD as above, but in a gorgeous, hand-crafted wood and aluminum case made by Kurt Mottweiler in Portland, Oregon. As it’s a clamshell design, it’s also the only offering that comes with a predetermined keyboard.

    All configurations will come with Debian (GNU/Linux) pre-installed, but of course you can build and install whatever distro you prefer!

    Novena Gen-2 Case Design

    Followers of this blog may have seen a post featuring a prototype case design we put together last December. These were hand-built cases made from aluminum and leather and meant to validate the laptop use case. The design was rough and crafted by my clumsy hands – dubbed “gloriously fuggly [sic]” – yet the public response was overwhelmingly positive. It gave us confidence to proceed with a 2nd generation case design that we are now unveiling today.

    The first thing you’ll notice about the design is that the screen opens “the wrong way”. This feature allows the computer to be usable as a wall-hanging unit when the screen is closed. It also solves a major problem I had with the original clamshell prototype – it was a real pain to access the hardware for hacking, as it’s blocked by the keyboard mounting plate.

    Now, with the slide of a latch, the screen automatically pops open thanks to an internal gas spring. This isn’t just an open laptop — it’s a self-opening laptop! The internals are intentionally naked in this mode for easy access; it also makes it clear that this is not a computer for casual home use. Another side benefit of this design is there’s no fan noise – when the screen is up, the motherboard is exposed to open air and a passive heatsink is all you need to keep the CPU cool.

    Another feature of this design is the LCD bezel is made out of a single, simple aluminum sheet. This allows users with access to a minimal machine shop to modify or craft their own bezels – no custom tooling required. Hopefully this makes adding knobs and connectors, or changing the LCD relatively easy. In order to encourage people to experiment, we will ship desktop and laptop devices with not one, but two LCD bezels, so you don’t have to worry about having an unusable machine if you mess up one of the bezels!

    The panel covering the “port farm” on the right hand side of the case is designed to be replaceable. A single screw holds it in place, so if you design your own motherboard or if you want to upgrade in the future, you’re not locked into today’s port layout. We take advantage of this feature between the desktop and the laptop versions, as the DC power jack is in a different location for the two configurations.

    Finally, the inside of the case features a “Peek Array”. It’s an array of M2.5 mounting holes (yes, they are metric) populating the extra unused space inside the case, on the right hand side in the photo above. It’s named after Nadya Peek, a graduate student at MIT’s Center for Bits and Atoms. Nadya is a consummate maker, and is a driving force behind the CBA’s Fab Lab initiative. When I designed this array of mounting bosses, I imagined someone like Nadya making their own circuit boards or whatever they want, and mounting it inside the case using the Peek Array.

    The first thing I used the Peek Array for is the speaker box. I desire loud but good quality sound out of my laptop, so I 3D printed a speakerbox that uses 36mm mini-monitor drivers, and mounted it inside using the Peek Array. I would be totally stoked if a user with real audio design experience was to come up with and share a proper tuned-port design that I could install in my laptop. However, other users with weight, space or power concerns can just as easily design and install a more modest speaker.

    I started the Gen-2 case design in early February, after xobs and I finally decided it was time to launch a crowdfunding campaign. With a bit of elbow grease and the help of a hard working team of engineers and project managers at my contract manufacturing partner, AQS (that’s Celia and Chemmy pictured above, doing an initial PCBA fitting two weeks ago), I was able to bring a working prototype to San Jose and use it to give my keynote at EELive today.

    The Heirloom Design (Limited Quantities)

    One of the great things about open hardware is that it’s easier to set up design collaborations – you can sling designs and prototypes around without the need for NDAs or cumbersome legal agreements. As part of this crowdfunding campaign, I wanted to offer a really outstanding, no-holds-barred laptop case – something you would be proud to have for years, and perhaps even pass on to your children as an heirloom. So we enlisted the help of Kurt Mottweiler to build an “heirloom laptop”. Kurt is a designer-craftsman based in Portland, Oregon who, drawing on his background in luthiery, builds bespoke cameras of outstanding quality from materials such as wood and aluminum. We’re proud to have this offering as part of our campaign.

    For the prototype case, Kurt is featuring rift-sawn white oak and bead-blasted-and-anodized 6061 aluminum. He developed a composite consisting of outer layers of paper-backed wood veneer over a high-density cork core, with intervening layers of 5.5-ounce fiberglass cloth, all bonded with a high-modulus epoxy resin. This composite is then gracefully formed into semi-monocoque curves, giving a final wavy shape that is light and stiff, and that accommodates the need for air cooling.

    The overall architecture of Kurt’s case mimics the industry-standard clamshell notebook design, but with a twist. The keyboard used within the case is wireless, and can be easily removed to reveal the hardware within. This laptop is an outstanding blend of tasteful design, craftsmanship, and open hardware. And since these are truly hand-crafted units, no two will be exactly alike; each unit will have its own grain and a character that reflects Kurt’s judgment for that particular piece of wood.

    How You can Help

    For the crowdfunding campaign to succeed, xobs and I need a couple hundred open source enthusiasts to back the desktop or standard laptop offering.

    And that underlies the biggest challenge for this campaign – how do we offer something so custom and so complex at a price that is comparable to a consumer version, in low volumes? Our minimum funding goal of $250,000 is a tiny fraction of what’s typically required to recover the million-plus dollar investment behind the development and manufacture of a conventional laptop.

    We meet this challenge with a combination of unique design, know-how, and strong relationships with our supply chain. The design is optimized to reduce the amount of expensive tooling required, while still preserving our primary goal of being easy to hack and modify. We’ve spent the last year and a half poring over three revisions of the PCBA, so we have high confidence that this complex design will be functional and producible. We’re not looking to recover that R&D cost in the campaign – that’s a sunk cost, as anyone is free to download the source and benefit from our thoroughly vetted design today. We also optimized certain tricky components, such as the LCD and the internal display port adapter, for reliable sourcing at low volumes. Finally, I spent the last couple of months traveling the world, lining up a supply chain that we feel confident can deliver this design, even in low volume, at a price comparable to other premium laptop products.

    To be clear, this is not a machine for the faint of heart. It’s an open source project, which means part of the joy – and frustration – of the device is that it is continuously improving. This will be perhaps the only laptop that ships with a screwdriver; you’ll be required to install the battery yourself, screw on the LCD bezel of your choice, and you’ll get the speakers as a kit, so you don’t have to use our speaker box design – if you have access to a 3D printer, you can make and fine tune your own speaker box.

    If you’re as excited about having a hackable, open laptop as we are, please back our crowdfunding campaign at Crowd Supply, and follow @novenakosagi for real-time updates.

    by bunnie at April 02, 2014 03:58 PM

    March 31, 2014


    Elphel, inc. on trip to Geneva, Switzerland.

    University of Geneva

    Monday, April 14, 2014, 18:15 at Uni-Mail, room MR070, University of Geneva: Elphel, Inc. is giving a lecture entitled “High Performance Open Hardware for Scientific Applications”. Following the lecture, you will be invited to attend a round-table discussion with people from Elphel and Javier Serrano from CERN.

    Javier studied Physics and Electronics Engineering. He is the head of the Hardware and Timing section in CERN’s Beams Control group, and the founder of the Open Hardware Repository. Javier has co-authored the CERN Open Hardware Licence. He and his colleagues have also recently started contributing improvements to KiCad, a free software tool for the design of printed circuit boards.

    Elphel Inc. is invited by their partner specialized in stereophotogrammetry applications, the Swiss company Foxel SA, from April 14-21 in Geneva, Switzerland. You can enjoy a virtual tour of the Geneva University by clicking on the links below (make sure to use the latest version of Firefox or Chromium to view the demos).

    Foxel’s team would be delighted to have all of Elphel’s clients and followers participate in the lecture. A chat can also be organized in the next few days; please contact us at Foxel SA. If you do not have the opportunity to visit us in Geneva, the lecture will be streamed live and the recording will be available.

    by Alexandre at March 31, 2014 06:04 PM

    Andrew Zonenberg, Silicon Exposed

    Laser IC decapsulation experiments

    Laser decapsulation is commonly used by professional shops to rapidly remove material before finishing with a chemical etch. Upon finding out that one of my friends had purchased a laser cutting system, we decided to see how well it performed at decapping.

    Infrared light is absorbed strongly by most organics as well as some other materials such as glass. Most metals, especially gold, reflect IR strongly and thus should not be significantly etched by it. Silicon is nearly transparent to IR. The hope was that this would make laser ablation highly selective for packaging material over the die, leadframe, and bond wires.

    Unfortunately I don't have any in-process photos. We used a raster scan pattern at fairly low power on a CO2 laser with near-continuous duty cycle.

    The first sample was a Xilinx XC9572XL CPLD in a 44-pin TQFP.

    Laser-etched CPLD with die outline visible
    If you look closely you can see the outline of the die and wire bonds beginning to appear. This probably has something to do with the thermal resistances of gold bonding wires vs silicon and the copper leadframe.

    Two of the other three samples (other CPLDs) turned out pretty similar except the dies weren't visible because we didn't lase quite as long.
    Laser-etched CPLD without die visible
    I popped this one under my Olympus microscope to take a closer look.

    Focal plane on top of package
    Focal plane at bottom of cavity
    Scan lines from the laser's raster-etch pattern were clearly visible. At first glance the laser was quite effective at removing material; however, higher magnification gave reason to believe the process was not as effective as I had hoped.
    Raster lines in molding compound
    Raster lines in molding compound
    Most engineers are not aware that "plastic" IC packages are actually not made of plastic. (The curious reader may find the "epoxy" article a worthwhile read.)

    Typical "plastic" IC molding compounds are actually composite materials: glass spheres of varying sizes used as filler in a black epoxy resin matrix. The epoxy blocks light from reaching the die (which would otherwise interfere with the circuits through induced photocurrents) and bonds the glass together. Unfortunately, the epoxy has a thermal expansion coefficient significantly different from that of the die, so the glass beads are added as a filler to counteract this effect. Glass usually makes up a significant fraction (80 or 90 percent) of the molding compound.

    My hope was that the laser would vaporize the epoxy and glass cleanly without damaging the die or bond wires. It seems that the glass near the edge of the beam fused together, producing a mess which would be difficult or impossible to remove. This effect was even more pronounced in the first sample.

    The edge of the die stood out strongly in this sample even though the die is still quite a bit below the surface. Perhaps the die (or the die-attach paddle under it) is a good thermal conductor and acted to heatsink the glass, causing it to melt rather than vaporize?
    The first sample seen earlier in the article, showing the corner of the die
    A closeup showed a melted, blasted mess of glass. About the only things able to easily remove this are mechanical abrasion or HF, both of which would probably destroy the die.
    Fused glass particles
    Fused glass particles

    I then took a look at the last sample, a PIC18F4553. We had etched this one all the way down to the die just to see what would happen.
    Exposed PIC18F4553 die
    Edge of the die showing bond pads
    Most bond wires were completely gone - it appeared that the glass had gotten so hot that it melted the wires even though they did not absorb the laser energy directly. The large reddish sphere at the center of the frame is what remains of a ball bond that did not completely vanish.

    The surface of the die was also covered by fused glass. No fine structure at all was visible.

    Looking at the overview photo, reddish spots were visible around the edge of the die and package. I decided to take a closer look in hopes of figuring out what was going on there.
    Red glass on the edge of the hole
    I was rather confused at first because there should have only been metal, glass, and plastic in that area - and none of these were red. The red areas had a glassy texture to them, suggesting that they were partly or mostly made of fused molding compound.

    Some reading on stained glass provided the answer - cranberry glass. This is a colloid of gold nanoparticles suspended in glass, giving it color from scattering incoming light.

    The normal process for making cranberry glass is to mix Au2O3 in with the raw materials before smelting them together. At high temperatures the oxide decomposes, leaving gold particles suspended in the glass. It appears that I've unintentionally found a second synthesis which avoids the oxidation step: flash vaporization of solid gold and glass followed by condensation of the vapor on a cold surface.

    by Andrew Zonenberg ( at March 31, 2014 02:37 PM

    March 29, 2014

    Video Circuits

    Art électronique

    As a British person from 2014, I have never wanted to be a young French kid in a polo neck from 1978 until now. This is pretty much my dream audio-visual studio, featuring some lovely shots of the EMS Spectron video synthesizer in action, as well as a whole host of other nice EMS and custom rack gear for sound and video experimentation.
    Thanks to Jeff my good friend from across the seas for digging this video up!

    by Chris ( at March 29, 2014 05:19 AM

    March 28, 2014


    SiTime SiT8008 - MEMS oscillator : weekend die-shot

    SiTime SiT8008 is a programmable MEMS oscillator reaching quartz precision but with higher reliability and lower g-sensitivity. SiTime is also one of the companies that received investments from Rosnano, the Russian high-tech investment fund.

    The photo of the MEMS die puzzled us for quite some time. Is it some sort of integrated SAW/STW resonator?

    The trick is that, to reach maximum Q-factor (up to ~186,000 according to patents), the MEMS resonator must operate in vacuum. So they package the resonator _inside_ the die in a hydrogen atmosphere, then anneal it in vacuum so that the hydrogen escapes through the silicon. What we see here is only a cap with contacts to the "buried" MEMS resonator. We were unable to reach the resonator itself without an X-ray camera or an ion mill.

    MEMS die size - 457x454 µm.

    Thankfully, relevant patents were specified right on the die: US6936491 US7514283 US7075160 US7750758 :)

    The digital die contains an LC PLL and digital logic for one-off frequency programming and temperature compensation.
    Die size - 1409x1572 µm.

    Poly level:

    Standard cells, ~250nm technology.

    March 28, 2014 10:54 PM

    Geoffrey L. Barrows - DIY Drones

    Visually stabilizing a Crazyflie, including in the dark

    I've been working on adding visual stabilization to a Crazyflie nano quadrotor. I had two goals: first, to achieve the same type of hover that we demonstrated several years ago on an eFlite mCX; second, to do so in extremely low light levels, including in the dark, borrowing inspiration from biology. We are finally getting some decent flights.

    Above is a picture of our sensor module on a Crazyflie. The Crazyflie is really quite small: the four motors form a square about 6 cm on a side. The folks at Bitcraze did a fantastic job assembling a virtual machine environment that makes it easy to modify and update the Crazyflie's firmware. Our sensor module comprises four camera boards (using an experimental low-light chip) connected to a main board with an STM32F4 ARM. These cameras basically grab optical flow type information from the horizontal plane and then estimate motion based on global optical flow patterns. These global optical flow patterns are actually inspired by similar ones identified in fly visual systems. The result is a system that allows a pilot to maneuver the Crazyflie using the control sticks, and that will hover in one location when the control sticks are released.
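    As a toy model of what global optical flow patterns buy you (my own sketch, not the authors' algorithm): with cameras looking outward in the horizontal plane, pure yaw rotation shifts the image the same way in every camera, while pure translation shifts opposite cameras in opposite directions, so the two components can be separated:

```python
def decompose(front, right, back, left):
    """Split the horizontal flow of four outward-facing cameras into a
    common (rotation-like) component and a differential (translation-like)
    component along the front/back axis. Sign conventions are arbitrary."""
    yaw = (front + right + back + left) / 4.0  # common mode: rotation
    tx = (front - back) / 2.0                  # differential: sideways drift
    return yaw, tx
```

    A hover controller would then drive the translation-like term toward zero while ignoring (or separately controlling) the rotation-like term.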

    Below is a video showing three flights. The first flight is indoors, with lights on. The second is indoors, with lights off but with some leaking light. The third is in the dark, but with IR LEDs mounted on the Crazyflie to work in the dark.

    There is still some drift, especially in the darker environments. I've identified a noise issue on the sensor module PCB, and already have a new PCB in fab that should clean things up.

    by Geoffrey L. Barrows at March 28, 2014 01:44 PM

    March 26, 2014

    Bunnie Studios

    Name that Ware, March 2014

    The Ware for March 2014 is shown below.

    I came across this at a gray market used parts dealer in Shenzhen. Round, high density circuit boards with big FPGAs and ceramic packages tend to catch my eye, as they reek of military or aerospace applications.

    I have no idea what this ware is from, or what it’s for, so it should be interesting judging the responses — if there is no definitive identification, I’ll go with the most detailed/thoughtful response.

    by bunnie at March 26, 2014 07:11 PM

    Winner, Name that Ware February 2014

    The Ware for February 2014 is an SPAC module from the racks of a 3C Series 16 computer, made by Honeywell (formerly 3C). According to the Ware’s submitter, the computer from which it came was either a DDP-116 or DDP-224 computer, but the exact identity is unknown as it was acquired in the 70′s and handed down for a generation.

    As for a winner, it’s tough to choose — so many thoughtful answers. I’ll go the easy route and declare jd the winner for having the first correct answer. Congrats, and email me for your prize!

    by bunnie at March 26, 2014 07:11 PM

    Richard Hughes, ColorHug

    GNOME Software on Ubuntu (II)

    So I did a bit more hacking on PackageKit, appstream-glib and gnome-software last night. We’ve now got screenshots from Debian (which are not very good) and long application descriptions from the package descriptions (which are also not very good). It works well enough now, although you now need PackageKit from master as well as appstream-glib and gnome-software.




    This is my last day of hacking on the Ubuntu version, but I’m hopeful other people can take what I’ve done and continue to polish the application so it works as well as it does on Fedora. Tasks left to do include:

    • Get aptcc to honour the DOWNLOADED filter flag so we can show applications in the ‘Updates’ pane
    • Get aptcc to respect the APPLICATION filter to speed up getting the installed list by an order of magnitude
    • Get gnome-software (or appstream-glib) to use the system stock icons rather than the shitty ones shipped in the app-install-data package
    • Find out a way to load localized names and descriptions from the app-install-data gettext archive and add support to appstream-glib. You’ll likely need to call dgettext(), bindtextdomain() and bind_textdomain_codeset()
    • Find out how to populate the ‘quality’ stars in gnome-software, which might actually mean adding more data to the app-install desktop files. This is the kind of data we need.
    • Find out why aptcc sometimes includes the package summary in the licence detail position
    • Improve the package-description-to-human-readable-text code to preserve bullet points, converting them to a UTF-8 dot
    • Get the systemd offline-updates code working, which is completely untested
    • Find out why aptcc seems to use a SHA1 hash for the repo name (e.g. pkcon repo-list)
    • Find out why aptcc does not set the data part of the package-id to be prefixed with installed: for installed packages

    If you can help with any of this, please grab me on #PackageKit on freenode.

    by hughsie at March 26, 2014 04:17 PM

    March 25, 2014

    Richard Hughes, ColorHug

    GNOME Software on Ubuntu

    After an afternoon of hacking on appstream-glib, I can show the fruits of my labours:


    This needs gnome-software and appstream-glib from git master (or gnome-apps-3.14 in jhbuild) and you need to manually run PackageKit with the aptcc backend (--enable-aptcc).


    It all kinda works with the data from /usr/share/app-install/*, but the icons are ugly as they are included in all kinds of sizes and formats, and there are no long descriptions except for the two (!) installed applications new enough to ship local AppData files. Also, rendering all those svgz files is muuuuch slower than the pre-processed png files we ship with AppStream. The installed view also seems not to work. Only the C locale is present too, as I’ve not worked out how to get all the translations from an external gettext file in appstream-glib. I’d love to know how the Ubuntu software center gets long descriptions and screenshots. But it kinda works. Thanks.

    by hughsie at March 25, 2014 05:41 PM

    March 24, 2014

    Michele's GNSS blog

    R820T with 28.8 MHz TCXO

    I recently looked around for tools to use as low-cost spectrum scanners, the target frequency range being 400 MHz to 1.7 GHz (incidentally, DVB-T and GPS).
    Of course rtl-sdr is an attractive option, so I dusted off some dongles I had bought six months ago in China and played with them again, coming to the conclusion that I really like it, especially once its main limitation is overcome :)

    The 28.8 MHz crystal is quite poor. I asked Takuji for a TCXO, but he said he had emptied his stock rapidly. Of course a replacement is nowhere to be found at the big distributors (Digikey, Mouser, Farnell, RS, etc.), so I went to an old acquaintance at Golledge and, despite having to order 100 pieces, my request was fulfilled. After all, the dongles look quite good with the new crystal:
    Figure 1: RTL-SDR with 28.8 MHz TCXO (Golledge GTXO-92)

    I measured the frequency deviation with my simple GPS software receiver and I am happy to report that it is within spec, bounded to 2 ppm. By the way, I tried using other GNSS software receivers and will write about my experience in another post soon.
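    To put that 2 ppm bound in perspective, here is the back-of-the-envelope conversion to absolute frequency error (my own arithmetic, not from the post):

```python
def offset_hz(nominal_hz, ppm):
    """Worst-case frequency error of an oscillator with the given tolerance."""
    return nominal_hz * ppm * 1e-6

ref_err = offset_hz(28.8e6, 2)    # 57.6 Hz at the 28.8 MHz reference
l1_err = offset_hz(1575.42e6, 2)  # about 3.15 kHz once multiplied up to GPS L1
```

    At L1 that offset simply adds a few kHz to the receiver's Doppler search space, which is straightforward to accommodate in software.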

    On the frequency plan side, the R820T combined with the RTL2832U is great for GPS. Most people would use it with an active antenna, where the LNA compensates for the losses due to the impedance mismatch (50 against 75 ohm) and the noise figure of the tuner (3.5 dB according to the datasheet).
    The frequency plan with an IF of 3.57 MHz elegantly solves the LO feedthrough and I/Q imbalance problems typical of ZIF tuners. The IF is recovered automatically in the digital domain by the demodulator, so it does not appear in the recorded file. 8-bit I/Q recording at 2.048 Msps is more than sufficient for GPS, and I also tracked Galileo E1B/C with it (despite some obvious power loss due to the narrow filter band). In my tests I used a Dafang Technology DF5225 survey antenna, and the signal time plot shows that 5 bits are actually exercised. I powered the antenna with 3.3 V from a Skytraq Venus8 (Ducat10 with S1216F8) through an all-in-one DC-blocked passive 4-way splitter/combiner (6 dB unavoidable loss) from ETL Systems.
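    As a rough illustration of the "5 bits exercised" check, one can load the unsigned 8-bit samples, re-center them around zero, and see how many bits the peak magnitude spans. The sketch below synthesizes Gaussian noise in place of a real rtl-sdr capture, so the noise level and the bit count it reports are illustrative assumptions, not measurements:

    ```python
    import numpy as np

    # Synthesize an 8-bit I/Q stream: RTL2832U samples are unsigned
    # 8-bit, centered on 127.5 (a real capture would be read from file).
    rng = np.random.default_rng(0)
    samples = rng.normal(127.5, 2.8, 2_048_000).clip(0, 255).astype(np.uint8)

    iq = samples.astype(np.int16) - 128              # re-center around zero
    peak = int(np.abs(iq).max())                     # largest magnitude seen
    bits_used = int(np.ceil(np.log2(peak + 1))) + 1  # magnitude bits + sign bit
    print(f"peak |sample| = {peak}, ~{bits_used} bits exercised")
    ```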

    Figures 2, 3 and 4: Power spectrum, histogram, and time series at L1.

    I posted three GPS files here:

    Since someone asked for it, here are the tracking results of Galileo E19 plotted after the fact with Matlab:

    and Galileo E20:

    More to come later,

    by (Michele Bavaro) at March 24, 2014 10:49 PM

    Richard Hughes, ColorHug

    GNOME Software 3.12.0 Released!

    Today I released gnome-software 3.12.0 — with a number of new features and a huge number of bugfixes:


    I think I’ve found something interesting to install — notice the auto-generated star rating which tells me how integrated the application is with my environment (i.e. is it available in my language) and if the application is being updated upstream. Those thumbnails look inviting:


    We can continue browsing while the application installs — also notice the ‘tick’ — this will allow me to create and modify application folders in gnome-shell so I can put the game wherever I like:


    The updates tab looks a little sad; there’s no update metadata on rawhide for my F20 GNOME 3.12 COPR, but this looks a lot more impressive on F20 or the yet-to-be-released F21. At the moment we’re using the AppData metadata in place of update descriptions there. Yet another reason to ship an AppData file.


    We can now safely remove sources, which means removing the applications and addons that we installed from them. We don’t want applications sitting around on our computer not being updated and causing dependency problems in the future.


    Development in master is now open, and we’ve already merged several large patches. The move to libappstream-glib is a nice speed boost, and other more user-visible features are planned. We also need some documentation; if you’re interested please let us know!

    by hughsie at March 24, 2014 05:31 PM

    March 22, 2014


    TI TL431 adjustable shunt regulator : weekend die-shot

    TI TL431 is an adjustable shunt regulator, often used in linear supplies with an external power transistor.
    Die size 1011x1013 µm.

    March 22, 2014 04:25 PM

    March 21, 2014

    Geoffrey L. Barrows - DIY Drones

    What can bees tell us about seeing and flying at night?

    (Image of Megalopta Genalis by Michael Pfaff, linked from Nautilus article)

    How would you like your drone to use vision to hover, see obstacles, and otherwise navigate, but do so at night in the presence of very little light? Research on nocturnal insects will (in my opinion) give us ideas on how to make this possible.

    A recent article in Nautilus describes the research being performed by Lund University Professor Eric Warrant on Megalopta genalis, a bee that lives in the Central American rainforest and does its foraging after sunset and before sunrise, when light levels are low enough to keep most other insects grounded but just barely adequate for Megalopta to perform all requisite bee navigation tasks. This includes hovering, avoiding collisions with obstacles, visually recognizing its nest, and navigating out and back to its nest by recognizing illumination openings in the branches above. Deep in the rainforest the light levels are much lower than out in the open; Megalopta seems able to perform these tasks when the light levels are as low as two or three photons per ommatidium (compound eye element) per second!

    Professor Warrant and his group theorize that Megalopta's vision system uses "pooling" neurons that combine the photons acquired by groups of ommatidia to obtain the benefit of higher photon rates, a trick similar to how some camera systems extend their ability to operate at low light levels. In fact, I believe even the PX4flow does this to some extent when indoors. The "math" behind this trick is sound, but what is missing is hard neurophysiological evidence of it in Megalopta, which Prof. Warrant and his colleagues are trying to obtain. As the article suggests, this work is sponsored in part by the US Air Force.
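    The "math" behind pooling can be sketched with Poisson statistics: for a Poisson photon stream the SNR is sqrt(mean count), so summing the counts from N ommatidia improves SNR by sqrt(N) at the cost of spatial resolution. A minimal sketch (the pooling group size of 16 is an illustrative assumption, not a figure from the article):

    ```python
    import math

    # For a Poisson process, SNR = mean / sqrt(mean) = sqrt(mean),
    # so pooling N detectors scales SNR by sqrt(N).
    rate = 2.5   # photons per ommatidium per second (article's ballpark)
    group = 16   # ommatidia pooled together (hypothetical)

    snr_single = math.sqrt(rate)
    snr_pooled = math.sqrt(rate * group)
    print(f"SNR single: {snr_single:.2f}, pooled over {group}: {snr_pooled:.2f}")
    # pooling 16 ommatidia gives sqrt(16) = 4x the SNR of one
    ```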

    You have to consider the sheer difference between Megalopta's environment and the daytime environments in which we normally fly. On a sunny day, the PX4flow sensor probably acquires around 1 trillion photons per second. Indoors, that probably drops to about 10 billion photons per second. Now Megalopta has just under 10,000 ommatidia, so at 2 to 3 photons per ommatidium per second it experiences around 30,000 photons per second. That is a difference of up to seven orders of magnitude, which is even more dramatic when you consider that Megalopta's 30k photons are acquired omnidirectionally, and not just over a narrow field of view looking down.
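    The arithmetic above can be checked in a couple of lines, using the rough photon-rate estimates quoted in the text:

    ```python
    import math

    sunny = 1e12           # photons/s, PX4flow on a sunny day (rough estimate)
    indoor = 1e10          # photons/s, PX4flow indoors (rough estimate)
    megalopta = 10_000 * 3 # ~10,000 ommatidia at ~3 photons/s each

    orders = math.log10(sunny / megalopta)
    print(f"sunny vs. Megalopta: about {orders:.1f} orders of magnitude")
    # ~7.5, consistent with the "seven orders of magnitude" figure
    ```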

    by Geoffrey L. Barrows at March 21, 2014 06:47 PM

    March 19, 2014

    Richard Hughes, ColorHug

    AppStream Logs, False Positives and You

    Quite a few people have asked me how the AppStream distro metadata is actually generated for their app. The actual extraction process isn’t trivial, and on Fedora we also do things like supplying missing AppData files for some key apps and replacing some upstream screenshots on others.

    In order to make this more transparent, I’m going to be uploading the logs of each generation run. If you’ve got a few minutes I’d appreciate you finding your application there and checking for any warnings or errors. The directory names are actually Fedora package names, but usually it’s either 1:1 or fairly predictable.

    If you’ve got an application that’s being blacklisted when it shouldn’t be, or a GUI application that’s in Fedora but not in that list, then please send me an email or grab me on IRC. The rules for inclusion are here. Thanks.

    by hughsie at March 19, 2014 10:41 AM

    March 18, 2014

    Richard Hughes, ColorHug

    Announcing Appstream-Glib

    For a few years now Appstream and AppData adoption has been growing. We’ve got client applications like GNOME Software consuming the XML files, and we’ve got several implementations of metadata generators for a few distros now. We’ve